{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2955","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2955\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2955\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2955\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2955","id":1003999469,"node_id":"PR_kwDODunzps4sHuRu","number":2955,"title":"Update legacy Python image for CI tests in Linux","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1632299127000,"updated_at":1632302608000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2955","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2955","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2955.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2955.patch"},"body":"Instead of legacy, use next-generation convenience images, built from the ground up with CI, efficiency, and determinism in mind. Here are some of the highlights:\r\n\r\n- Faster spin-up time - In Docker terminology, these next-gen images will generally have fewer and smaller layers. Using these new images will lead to faster image downloads when a build starts, and a higher likelihood that the image is already cached on the host.\r\n\r\n- Improved reliability and stability - The existing legacy convenience images are rebuilt practically every day with potential changes from upstream that we cannot always test fast enough. This leads to frequent breaking changes, which is not the best environment for stable, deterministic builds. 
Next-gen images will only be rebuilt for security and critical-bugs, leading to more stable and deterministic images.\r\n\r\nMore info: https:\/\/circleci.com\/docs\/2.0\/circleci-images","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2955\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2954","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2954\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2954\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2954\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2954","id":1003904803,"node_id":"PR_kwDODunzps4sHa8O","number":2954,"title":"Run tests in parallel","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["There is a speed up in Windows machines:\r\n- From `13m 52s` to `11m 10s`\r\n\r\nIn Linux machines, some workers crash with error message:\r\n```\r\nOSError: [Errno 12] Cannot allocate memory\r\n```","There is also a speed up in Linux machines:\r\n- From `7m 30s` to `5m 32s`"],"created_at":1632294044000,"updated_at":1632297373000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2954","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2954","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2954.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2954.patch"},"body":"Run CI tests in parallel to speed up the test suite.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2954\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2952","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2952\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2952\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2952\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2952","id":1002704096,"node_id":"PR_kwDODunzps4sDU8S","number":2952,"title":"Fix missing conda deps","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1632237781000,"updated_at":1632285599000,"closed_at":1632238244000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2952","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2952","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2952.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2952.patch"},"body":"`aiohttp` was added as a dependency in #2662 but was missing for the conda build, which causes the 1.12.0 and 1.12.1 to fail.\r\n\r\nFix #2932.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2952\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2951","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2951\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2951\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2951\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2951","id":1001267888,"node_id":"PR_kwDODunzps4r-lGs","number":2951,"title":"Dummy labels no longer on by default in 
`to_tf_dataset`","user":{"login":"Rocketknight1","id":12866554,"node_id":"MDQ6VXNlcjEyODY2NTU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12866554?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Rocketknight1","html_url":"https:\/\/github.com\/Rocketknight1","followers_url":"https:\/\/api.github.com\/users\/Rocketknight1\/followers","following_url":"https:\/\/api.github.com\/users\/Rocketknight1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Rocketknight1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Rocketknight1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Rocketknight1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Rocketknight1\/orgs","repos_url":"https:\/\/api.github.com\/users\/Rocketknight1\/repos","events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq Let me make sure we never need it, and if not then I'll remove it entirely in a follow-up PR.","Thanks ;) it will be less confusing and easier to maintain to not keep unused hacky features"],"created_at":1632162419000,"updated_at":1632232857000,"closed_at":1632219272000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2951","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2951","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2951.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2951.patch"},"body":"After more experimentation, I think I have a way to do things that doesn't depend on adding `dummy_labels` - they were quite a hacky solution anyway!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2951\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2950","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2950\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2950\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2950\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2950","id":1001085353,"node_id":"PR_kwDODunzps4r-AKu","number":2950,"title":"Fix fn kwargs in 
filter","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1632150626000,"updated_at":1632154979000,"closed_at":1632151681000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2950","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2950","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2950.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2950.patch"},"body":"#2836 broke the `fn_kwargs` parameter of `filter`, as mentioned in https:\/\/github.com\/huggingface\/datasets\/issues\/2927\r\n\r\nI fixed that and added a test to make sure it doesn't happen again (for either map or filter)\r\n\r\nFix #2927","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2950\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2949","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2949\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2949\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2949\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2949","id":1001026680,"node_id":"PR_kwDODunzps4r90Pt","number":2949,"title":"Introduce web and wiki config in triviaqa 
dataset","user":{"login":"shirte","id":1706443,"node_id":"MDQ6VXNlcjE3MDY0NDM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1706443?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/shirte","html_url":"https:\/\/github.com\/shirte","followers_url":"https:\/\/api.github.com\/users\/shirte\/followers","following_url":"https:\/\/api.github.com\/users\/shirte\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/shirte\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/shirte\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/shirte\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/shirte\/orgs","repos_url":"https:\/\/api.github.com\/users\/shirte\/repos","events_url":"https:\/\/api.github.com\/users\/shirte\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/shirte\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1632147443000,"updated_at":1632262631000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2949","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2949","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2949.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2949.patch"},"body":"The TriviaQA paper suggests that the two subsets (Wikipedia and Web)\r\nshould be treated differently. There are also different leaderboards\r\nfor the two sets on CodaLab. For that reason, introduce additional\r\nbuilder configs in the trivia_qa dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2949\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2948","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2948\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2948\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2948\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2948","id":1000844077,"node_id":"PR_kwDODunzps4r9PdV","number":2948,"title":"Fix minor URL format in scitldr 
dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1632136292000,"updated_at":1632143908000,"closed_at":1632143908000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2948","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2948","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2948.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2948.patch"},"body":"While investigating issue #2918, I found this minor format issues in the URLs (if runned in a Windows machine).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2948\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2947","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2947\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2947\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2947\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2947","id":1000798338,"node_id":"PR_kwDODunzps4r9GIP","number":2947,"title":"Don't use old, incompatible cache for the new 
`filter`","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1632133139000,"updated_at":1632155109000,"closed_at":1632145382000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2947","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2947","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2947.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2947.patch"},"body":"#2836 changed `Dataset.filter` and the resulting data that are stored in the cache are different and incompatible with the ones of the previous `filter` implementation.\r\n\r\nHowever the caching mechanism wasn't able to differentiate between the old and the new implementation of filter (only the method name was taken into account). 
\r\n\r\nThis is an issue because anyone that update `datasets` and re-runs some code that uses `filter` would see an error, because the cache would try to load an incompatible `filter` result.\r\n\r\nTo fix this I added the notion of versioning for dataset transform in the caching mechanism, and bumped the version of the `filter` implementation to 2.0.0\r\n\r\nThis way the new `filter` outputs are now considered different from the old ones from the caching point of view.\r\n\r\nThis should fix #2943\r\n\r\ncc @anton-l","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2947\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2946","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2946\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2946\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2946\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2946","id":1000754824,"node_id":"PR_kwDODunzps4r89f8","number":2946,"title":"Update meteor score from nltk update","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1632130126000,"updated_at":1632130559000,"closed_at":1632130559000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2946","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2946","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2946.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2946.patch"},"body":"It looks like there were issues in NLTK on the way the METEOR score was computed.\r\nA fix was added in NLTK at https:\/\/github.com\/nltk\/nltk\/pull\/2763, and therefore the scoring function no longer returns the same values.\r\n\r\nI updated the score of the example in the docs","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2946\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2945","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2945\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2945\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2945\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2945","id":1000624883,"node_id":"I_kwDODunzps47pFLz","number":2945,"title":"Protect master branch","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Cool, I think we can do both :)","@lhoestq now the 2 are implemented.\r\n\r\nPlease note that for the the second protection, finally I have chosen to protect the master branch only from **merge commits** (see update comment above), so no need to disable\/re-enable the protection on each release (direct commits, different from merge commits, can be pushed to the remote master branch; and eventually reverted without messing up the repo history)."],"created_at":1632120421000,"updated_at":1632139287000,"closed_at":1632139216000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"After accidental merge commit (91c55355b634d0dc73350a7ddee1a6776dbbdd69) into `datasets` master branch, all commits present in the feature branch were permanently added to `datasets` master branch history, as e.g.:\r\n- 00cc036fea7c7745cfe722360036ed306796a3f2\r\n- 13ae8c98602bbad8197de3b9b425f4c78f582af1\r\n- ...\r\n\r\nI propose to protect our master branch, so that we avoid we can accidentally make this kind of mistakes in the future:\r\n- [x] For Pull Requests using GitHub, allow only squash merging, so that only a single commit per Pull Request is merged into the master branch\r\n - Currently, simple merge commits are already disabled\r\n - I propose to disable rebase merging as well\r\n- ~~Protect the master branch from direct pushes (to avoid accidentally pushing of merge commits)~~\r\n - ~~This protection would reject direct pushes to master branch~~\r\n - ~~If so, for each release (when we need to commit directly to the 
master branch), we should previously disable the protection and re-enable it again after the release~~\r\n- [x] Protect the master branch only from direct pushing of **merge commits**\r\n - GitHub offers the possibility to protect the master branch only from merge commits (which are the ones that introduce all the commits from the feature branch into the master branch).\r\n - No need to disable\/re-enable this protection on each release \r\n\r\nThis purpose of this Issue is to open a discussion about this problem and to agree in a solution.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2945\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2944","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2944\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2944\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2944\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2944","id":1000544370,"node_id":"I_kwDODunzps47oxhy","number":2944,"title":"Add `remove_columns` to `IterableDataset ` ","user":{"login":"cccntu","id":31893406,"node_id":"MDQ6VXNlcjMxODkzNDA2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/31893406?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cccntu","html_url":"https:\/\/github.com\/cccntu","followers_url":"https:\/\/api.github.com\/users\/cccntu\/followers","following_url":"https:\/\/api.github.com\/users\/cccntu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cccntu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cccntu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cccntu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cccntu\/orgs","repos_url":"https:\/\/api.github.com\/users\/cccntu\/repos","events_url":"https:\/\/api.github.com\/users\/cccntu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cccntu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1632110460000,"updated_at":1632110460000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"**Is your feature request related to a problem? Please describe.**\r\nA clear and concise description of what the problem is.\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"c4\", 'realnewslike', streaming =True, split='train')\r\ndataset = dataset.remove_columns('url')\r\n```\r\n```\r\nAttributeError: 'IterableDataset' object has no attribute 'remove_columns'\r\n```\r\n\r\n**Describe the solution you'd like**\r\n\r\nIt would be nice to have `.remove_columns()` to match the `Datasets` api. \r\n\r\n\r\n**Describe alternatives you've considered**\r\n\r\nThis can be done with a single call to `.map()`, \r\n\r\nI can try to help add this. 
\ud83e\udd17","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2944\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2943","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2943\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2943\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2943\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2943","id":1000355115,"node_id":"I_kwDODunzps47oDUr","number":2943,"title":"Backwards compatibility broken for cached datasets that use `.filter()`","user":{"login":"anton-l","id":26864830,"node_id":"MDQ6VXNlcjI2ODY0ODMw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26864830?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/anton-l","html_url":"https:\/\/github.com\/anton-l","followers_url":"https:\/\/api.github.com\/users\/anton-l\/followers","following_url":"https:\/\/api.github.com\/users\/anton-l\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/anton-l\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/anton-l\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/anton-l\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/anton-l\/orgs","repos_url":"https:\/\/api.github.com\/users\/anton-l\/repos","events_url":"https:\/\/api.github.com\/users\/anton-l\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/anton-l\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi ! 
I guess the caching mechanism should have considered the new `filter` to be different from the old one, and don't use cached results from the old `filter`.\r\nTo avoid other users from having this issue we could make the caching differentiate the two, what do you think ?","If it's easy enough to implement, then yes please \ud83d\ude04 But this issue can be low-priority, since I've only encountered it in a couple of `transformers` CI tests.","Well it can cause issue with anyone that updates `datasets` and re-run some code that uses filter, so I'm creating a PR","I just merged a fix, let me know if you're still having this kind of issues :)\r\n\r\nWe'll do a release soon to make this fix available","Definitely works on several manual cases with our dummy datasets, thank you @lhoestq !","Fixed by #2947."],"created_at":1632068197000,"updated_at":1632155143000,"closed_at":1632155142000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nAfter upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with \r\n`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}`\r\n\r\nRelated feature: https:\/\/github.com\/huggingface\/datasets\/pull\/2836\r\n\r\n:question: This is probably a `wontfix` bug, since it can be solved by simply cleaning the related cache dirs, but the workaround could be useful for someone googling the error :) \r\n\r\n## Workaround\r\nRemove the cache for the given dataset, e.g. `rm -rf ~\/.cache\/huggingface\/datasets\/librispeech_asr`.\r\n\r\n## Steps to reproduce the bug\r\n1. Delete `~\/.cache\/huggingface\/datasets\/librispeech_asr` if it exists.\r\n\r\n2. `pip install datasets==1.11.0` and run the following snippet:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nids = [\"1272-141231-0000\"]\r\nds = load_dataset(\"patrickvonplaten\/librispeech_asr_dummy\", \"clean\", split=\"validation\")\r\nds = ds.filter(lambda x: x[\"id\"] in ids)\r\n```\r\n3. 
`pip install datasets==1.12.1` and re-run the code again\r\n\r\n## Expected results\r\nSame result as with the previous `datasets` version.\r\n\r\n## Actual results\r\n```bash\r\nReusing dataset librispeech_asr (.\/.cache\/huggingface\/datasets\/librispeech_asr\/clean\/2.1.0\/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1)\r\nLoading cached processed dataset at .\/.cache\/huggingface\/datasets\/librispeech_asr\/clean\/2.1.0\/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1\/cache-cd1c29844fdbc87a.arrow\r\nTraceback (most recent call last):\r\n File \".\/repos\/transformers\/src\/transformers\/models\/wav2vec2\/try_dataset.py\", line 5, in \r\n ds = ds.filter(lambda x: x[\"id\"] in ids)\r\n File \".\/envs\/transformers\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 185, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \".\/envs\/transformers\/lib\/python3.8\/site-packages\/datasets\/fingerprint.py\", line 398, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \".\/envs\/transformers\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 2169, in filter\r\n indices = self.map(\r\n File \".\/envs\/transformers\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 1686, in map\r\n return self._map_single(\r\n File \".\/envs\/transformers\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 185, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \".\/envs\/transformers\/lib\/python3.8\/site-packages\/datasets\/fingerprint.py\", line 398, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \".\/envs\/transformers\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 1896, in _map_single\r\n return Dataset.from_file(cache_file_name, info=info, split=self.split)\r\n File \".\/envs\/transformers\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 343, in from_file\r\n return cls(\r\n File \".\/envs\/transformers\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 282, in __init__\r\n self.info.features = self.info.features.reorder_fields_as(inferred_features)\r\n File \".\/envs\/transformers\/lib\/python3.8\/site-packages\/datasets\/features.py\", line 1151, in reorder_fields_as\r\n return Features(recursive_reorder(self, other))\r\n File \".\/envs\/transformers\/lib\/python3.8\/site-packages\/datasets\/features.py\", line 1140, in recursive_reorder\r\n raise ValueError(f\"Keys mismatch: between {source} and {target}\" + stack_position)\r\nValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}\r\n\r\nProcess finished with exit code 1\r\n\r\n```\r\n\r\n## Environment info\r\n- `datasets` version: 1.12.1\r\n- Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.17\r\n- Python version: 3.8.10\r\n- PyArrow version: 5.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2943\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2942","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2942\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2942\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2942\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2942","id":1000309765,"node_id":"PR_kwDODunzps4r7tY6","number":2942,"title":"Add SEDE dataset","user":{"login":"Hazoom","id":13545154,"node_id":"MDQ6VXNlcjEzNTQ1MTU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13545154?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Hazoom","html_url":"https:\/\/github.com\/Hazoom","followers_url":"https:\/\/api.github.com\/users\/Hazoom\/followers","following_url":"https:\/\/api.github.com\/users\/Hazoom\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Hazoom\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Hazoom\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Hazoom\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Hazoom\/orgs","repos_url":"https:\/\/api.github.com\/users\/Hazoom\/repos","events_url":"https:\/\/api.github.com\/users\/Hazoom\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Hazoom\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks @albertvillanova for your great suggestions! I just pushed a new commit with the necessary fixes. For some reason, the test `test_metric_common` failed for `meteor` metric, which doesn't have any connection to this PR, so I'm trying to rebase and see if it helps.","Hi @Hazoom,\r\n\r\nYou were right: the non-passing test had nothing to do with this PR.\r\n\r\nUnfortunately, you did a git rebase (instead of a git merge), which is not recommended once you have already opened a pull request because you mess up your pull request history. You can see that your pull request now contains:\r\n- your commits repeated two times\r\n- and commits which are not yours from the master branch\r\n\r\nIf you would like to clean your pull request, please make:\r\n```\r\ngit reset --hard 587b93a\r\ngit fetch upstream master\r\ngit merge upstream\/master\r\ngit push --force origin sede\r\n```","> Hi @Hazoom,\r\n> \r\n> You were right: the non-passing test had nothing to do with this PR.\r\n> \r\n> Unfortunately, you did a git rebase (instead of a git merge), which is not recommended once you have already opened a pull request because you mess up your pull request history. You can see that your pull request now contains:\r\n> \r\n> * your commits repeated two times\r\n> * and commits which are not yours from the master branch\r\n> \r\n> If you would like to clean your pull request, please make:\r\n> \r\n> ```\r\n> git reset --hard 587b93a\r\n> git fetch upstream master\r\n> git merge upstream\/master\r\n> git push --force origin sede\r\n> ```\r\n\r\nThanks @albertvillanova ","> Nice! Just one final request before approving your pull request:\r\n> \r\n> As you have updated the \"QuerySetId\" field data type, the size of the dataset is smaller now. You should regenerate the metadata. 
Please run:\r\n> \r\n> ```\r\n> rm datasets\/sede\/dataset_infos.json\r\n> datasets-cli test datasets\/sede --save_infos --all_configs\r\n> ```\r\n\r\n@albertvillanova Good catch, just fixed it."],"created_at":1632057084000,"updated_at":1632139643000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2942","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2942","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2942.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2942.patch"},"body":"This PR adds the SEDE dataset for the task of realistic Text-to-SQL, following the instructions of how to add a database and a dataset card.\r\n\r\nPlease see our paper for more details: https:\/\/arxiv.org\/abs\/2106.05006","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2942\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2941","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2941\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2941\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2941\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2941","id":1000000711,"node_id":"I_kwDODunzps47mszH","number":2941,"title":"OSCAR unshuffled_original_ko: NonMatchingSplitsSizesError","user":{"login":"ayaka14732","id":68557794,"node_id":"MDQ6VXNlcjY4NTU3Nzk0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/68557794?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ayaka14732","html_url":"https:\/\/github.com\/ayaka14732","followers_url":"https:\/\/api.github.com\/users\/ayaka14732\/followers","following_url":"https:\/\/api.github.com\/users\/ayaka14732\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ayaka14732\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ayaka14732\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ayaka14732\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ayaka14732\/orgs","repos_url":"https:\/\/api.github.com\/users\/ayaka14732\/repos","events_url":"https:\/\/api.github.com\/users\/ayaka14732\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ayaka14732\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I tried `unshuffled_original_da` and it is also not working"],"created_at":1631961553000,"updated_at":1631982333000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\nCannot download OSCAR `unshuffled_original_ko` due to `NonMatchingSplitsSizesError`.\r\n\r\n## Steps to reproduce the bug\r\n\r\n```python\r\n>>> dataset = datasets.load_dataset('oscar', 'unshuffled_original_ko')\r\nNonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=25292102197, num_examples=7345075, 
dataset_name='oscar'), 'recorded': SplitInfo(name='train', num_bytes=25284578514, num_examples=7344907, dataset_name='oscar')}]\r\n```\r\n\r\n## Expected results\r\n\r\nLoading is successful.\r\n\r\n## Actual results\r\n\r\nLoading throws above error.\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.12.1\r\n- Platform: Linux-5.4.0-81-generic-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- PyArrow version: 5.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2941\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2940","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2940\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2940\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2940\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2940","id":999680796,"node_id":"PR_kwDODunzps4r6EUF","number":2940,"title":"add swedish_medical_ner dataset","user":{"login":"bwang482","id":6764450,"node_id":"MDQ6VXNlcjY3NjQ0NTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6764450?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bwang482","html_url":"https:\/\/github.com\/bwang482","followers_url":"https:\/\/api.github.com\/users\/bwang482\/followers","following_url":"https:\/\/api.github.com\/users\/bwang482\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bwang482\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bwang482\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bwang482\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bwang482\/orgs","repos_url":"https:\/\/api.github.com\/users\/bwang482\/repos","events_url":"https:\/\/api.github.com\/users\/bwang482\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bwang482\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631908985000,"updated_at":1632216774000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2940","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2940","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2940.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2940.patch"},"body":"Adding the Swedish Medical NER dataset, listed in \"Biomedical Datasets - BigScience Workshop 2021\"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2940\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2939","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2939\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2939\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2939\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2939","id":999639630,"node_id":"PR_kwDODunzps4r58Gu","number":2939,"title":"MENYO-20k repo has moved, updating URL","user":{"login":"cdleong","id":4109253,"node_id":"MDQ6VXNlcjQxMDkyNTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4109253?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cdleong","html_url":"https:\/\/github.com\/cdleong","followers_url":"https:\/\/api.github.com\/users\/cdleong\/followers","following_url":"https:\/\/api.github.com\/users\/cdleong\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cdleong\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cdleong\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cdleong\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cdleong\/orgs","repos_url":"https:\/\/api.github.com\/users\/cdleong\/repos","events_url":"https:\/\/api.github.com\/users\/cdleong\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cdleong\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631905314000,"updated_at":1632238297000,"closed_at":1632238296000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2939","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2939","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2939.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2939.patch"},"body":"Dataset repo moved to https:\/\/github.com\/uds-lsv\/menyo-20k_MT, now editing URL to match.\r\n\r\nhttps:\/\/github.com\/uds-lsv\/menyo-20k_MT\/blob\/master\/data\/train.tsv is the file we're looking for","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2939\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2938","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2938\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2938\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2938\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2938","id":999552263,"node_id":"PR_kwDODunzps4r5qwa","number":2938,"title":"Take namespace into account in 
caching","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["We might have collisions if a username and a dataset_name are the same. Maybe instead serialize the dataset name by replacing `\/` with some string, eg `__SLASH__`, that will hopefully never appear in a dataset or user name (it's what I did in https:\/\/github.com\/huggingface\/datasets-preview-backend\/blob\/master\/benchmark\/scripts\/serialize.py. That way, all the datasets are one-level deep directories","IIRC we enforce that no repo id or username can contain `___` (exactly 3 underscores) specifically for this reason, so you can use that string (that we use in other projects)\r\n\r\ncc @Pierrci ","> IIRC we enforce that no repo id or username can contain ___ (exactly 3 underscores) specifically for this reason, so you can use that string (that we use in other projects)\r\n\r\nout of curiosity: where is it enforced?","> where is it enforced?\r\n\r\nNowhere yet but we should :) feel free to track in internal tracker and\/or implement, as this will be useful in the future","Thanks for the trick, I'm doing the change :)\r\nWe can use\r\n`~\/.cache\/huggingface\/datasets\/username___dataset_name` for the data\r\n`~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/username___dataset_name` for the python files"],"created_at":1631897853000,"updated_at":1632242634000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2938","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2938","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2938.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2938.patch"},"body":"Loading a dataset \"username\/dataset_name\" hosted by a user on the hub used to cache the dataset only taking into account the dataset name, and ignorign the username. 
Because of this, if a user later loads \"dataset_name\" without specifying the username, it would reload the dataset from the cache instead of failing.\r\n\r\nI changed the dataset cache and module cache mechanism to include the username in the name of the cache directory that is used:\r\n\r\n`~\/.cache\/huggingface\/datasets\/username\/dataset_name` for the data\r\n`~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/username\/dataset_name` for the python files\r\n<\/s>\r\nEDIT: actually using three underscores:\r\n`~\/.cache\/huggingface\/datasets\/username___dataset_name` for the data\r\n`~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/username___dataset_name` for the python files\r\n\r\nThis PR should fix the issue https:\/\/github.com\/huggingface\/datasets\/issues\/2842\r\n\r\ncc @stas00 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2938\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2937","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2937\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2937\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2937\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2937","id":999548277,"node_id":"I_kwDODunzps47k-V1","number":2937,"title":"load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied","user":{"login":"daqieq","id":40532020,"node_id":"MDQ6VXNlcjQwNTMyMDIw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/40532020?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/daqieq","html_url":"https:\/\/github.com\/daqieq","followers_url":"https:\/\/api.github.com\/users\/daqieq\/followers","following_url":"https:\/\/api.github.com\/users\/daqieq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/daqieq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/daqieq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/daqieq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/daqieq\/orgs","repos_url":"https:\/\/api.github.com\/users\/daqieq\/repos","events_url":"https:\/\/api.github.com\/users\/daqieq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/daqieq\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @daqieq, thanks for reporting.\r\n\r\nUnfortunately, I was not able to reproduce this bug:\r\n```ipython\r\nIn [1]: from datasets import load_dataset\r\n ...: ds = load_dataset('wiki_bio')\r\nDownloading: 7.58kB [00:00, 26.3kB\/s]\r\nDownloading: 2.71kB [00:00, ?B\/s]\r\nUsing custom data configuration default\r\nDownloading and preparing dataset wiki_bio\/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to 
C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\\r\n1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9...\r\nDownloading: 334MB [01:17, 4.32MB\/s]\r\nDataset wiki_bio downloaded and prepared to C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9. Subsequent calls will reuse thi\r\ns data.\r\n```\r\n\r\nThis kind of error messages usually happen because:\r\n- Your running Python script hasn't write access to that directory\r\n- You have another program (the File Explorer?) already browsing inside that directory","Thanks @albertvillanova for looking at it! I tried on my personal Windows machine and it downloaded just fine.\r\n\r\nRunning on my work machine and on a colleague's machine it is consistently hitting this error. It's not a write access issue because the `.incomplete` directory is written just fine. It just won't rename and then it deletes the directory in the `finally` step. Also the zip file is written and extracted fine in the downloads directory.\r\n\r\nThat leaves another program that might be interfering, and there are plenty of those in my work machine ... (full antivirus, data loss prevention, etc.). So the question remains, why not extend the `try` block to allow catching the error and circle back to the rename after the unknown program is finished doing its 'stuff'. This is the approach that I read about in the linked repo (see my comments above).\r\n\r\nIf it's not high priority, that's fine. However, if someone were to write an PR that solved this issue in our environment in an `except` clause, would it be reviewed for inclusion in a future release? Just wondering whether I should spend any more time on this issue."],"created_at":1631897530000,"updated_at":1632189875000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nStandard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset('wiki_bio')\r\n```\r\n\r\n## Expected results\r\nIt is expected that the dataset downloads without any errors.\r\n\r\n## Actual results\r\nPermissionError see trace below:\r\n```\r\nUsing custom data configuration default\r\nDownloading and preparing dataset wiki_bio\/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9...\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"C:\\Users\\username\\.conda\\envs\\hf\\lib\\site-packages\\datasets\\load.py\", line 1112, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"C:\\Users\\username\\.conda\\envs\\hf\\lib\\site-packages\\datasets\\builder.py\", line 644, in download_and_prepare\r\n self._save_info()\r\n File \"C:\\Users\\username\\.conda\\envs\\hf\\lib\\contextlib.py\", line 120, in __exit__\r\n next(self.gen)\r\n File \"C:\\Users\\username\\.conda\\envs\\hf\\lib\\site-packages\\datasets\\builder.py\", line 598, in incomplete_dir\r\n os.rename(tmp_dir, dirname)\r\nPermissionError: [WinError 5] Access is denied: 
'C:\\\\Users\\\\username\\\\.cache\\\\huggingface\\\\datasets\\\\wiki_bio\\\\default\\\\1.1.0\\\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9.incomplete' -> 'C:\\\\Users\\\\username\\\\.cache\\\\huggingface\\\\datasets\\\\wiki_bio\\\\default\\\\1.1.0\\\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9'\r\n```\r\nBy commenting out the os.rename() [L604](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/builder.py#L604) and the shutil.rmtree() [L607](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/builder.py#L607) lines, in my virtual environment, I was able to get the load process to complete, rename the directory manually and then rerun the `load_dataset('wiki_bio')` to get what I needed.\r\n\r\nIt seems that os.rename() in the `incomplete_dir` content manager is the culprit. Here's another project [Conan](https:\/\/github.com\/conan-io\/conan\/issues\/6560) with similar issue with os.rename() if it helps debug this issue.\r\n\r\n## Environment info\r\n- `datasets` version: 1.12.1\r\n- Platform: Windows-10-10.0.22449-SP0\r\n- Python version: 3.8.12\r\n- PyArrow version: 5.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2937\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2936","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2936\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2936\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2936\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2936","id":999521647,"node_id":"PR_kwDODunzps4r5knb","number":2936,"title":"Check that array is not Float as nan != 
nan","user":{"login":"Iwontbecreative","id":494951,"node_id":"MDQ6VXNlcjQ5NDk1MQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/494951?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Iwontbecreative","html_url":"https:\/\/github.com\/Iwontbecreative","followers_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/followers","following_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/orgs","repos_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/repos","events_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631895401000,"updated_at":1632217145000,"closed_at":1632217144000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2936","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2936","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2936.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2936.patch"},"body":"The Exception wants to check for issues with StructArrays\/ListArrays but catches FloatArrays with value nan as nan != nan.\r\nPass on FloatArrays as we should not raise an Exception for them.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2936\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2935","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2935\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2935\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2935\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2935","id":999518469,"node_id":"PR_kwDODunzps4r5j8B","number":2935,"title":"Add Jigsaw unintended 
Bias","user":{"login":"Iwontbecreative","id":494951,"node_id":"MDQ6VXNlcjQ5NDk1MQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/494951?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Iwontbecreative","html_url":"https:\/\/github.com\/Iwontbecreative","followers_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/followers","following_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/orgs","repos_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/repos","events_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Note that the tests seem to fail because of a bug in an Exception at the moment, see: https:\/\/github.com\/huggingface\/datasets\/pull\/2936 for the fix","@lhoestq implemented your changes, I think this might be ready for another look."],"created_at":1631895151000,"updated_at":1632269548000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2935","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2935","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2935.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2935.patch"},"body":"Hi,\r\n\r\nHere's a first attempt at this dataset. Would be great if it could be merged relatively quickly as it is needed for Bigscience-related stuff. 
\r\n\r\nThis requires manual download, and I had some trouble generating dummy_data in this setting, so welcoming feedback there.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2935\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2934","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2934\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2934\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2934\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2934","id":999477413,"node_id":"I_kwDODunzps47ktCl","number":2934,"title":"to_tf_dataset keeps a reference to the open data somewhere, causing issues on windows","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I did some investigation and, as it seems, the bug stems from [this line](https:\/\/github.com\/huggingface\/datasets\/blob\/8004d7c3e1d74b29c3e5b0d1660331cd26758363\/src\/datasets\/arrow_dataset.py#L325). The lifecycle of the dataset from the linked line is bound to one of the returned `tf.data.Dataset`. So my (hacky) solution involves wrapping the linked dataset with `weakref.proxy` and adding a custom `__del__` to `tf.python.data.ops.dataset_ops.TensorSliceDataset` (this is the type of a dataset that is returned by `tf.data.Dataset.from_tensor_slices`; this works for TF 2.x, but I'm not sure `tf.python.data.ops.dataset_ops` is a valid path for TF 1.x) that deletes the linked dataset, which is assigned to the dataset object as a property. 
Will open a draft PR soon!","Thanks a lot for investigating !"],"created_at":1631892413000,"updated_at":1632155004000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"To reproduce:\r\n```python\r\nimport datasets as ds\r\nimport weakref\r\nimport gc\r\n\r\nd = ds.load_dataset(\"mnist\", split=\"train\")\r\nref = weakref.ref(d._data.table)\r\ntfd = d.to_tf_dataset(\"image\", batch_size=1, shuffle=False, label_cols=\"label\")\r\ndel tfd, d\r\ngc.collect()\r\nassert ref() is None, \"Error: there is at least one reference left\"\r\n```\r\n\r\nThis causes issues because the table holds a reference to an open arrow file that should be closed. So on windows it's not possible to delete or move the arrow file afterwards.\r\n\r\nMoreover the CI test of the `to_tf_dataset` method isn't able to clean up the temporary arrow files because of this.\r\n\r\ncc @Rocketknight1 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2934\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2933","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2933\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2933\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2933\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2933","id":999392566,"node_id":"PR_kwDODunzps4r5MHs","number":2933,"title":"Replace script_version with revision","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I'm also fine with the removal in 1.15"],"created_at":1631887479000,"updated_at":1632131530000,"closed_at":1632131530000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2933","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2933","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2933.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2933.patch"},"body":"As discussed in https:\/\/github.com\/huggingface\/datasets\/pull\/2718#discussion_r707013278, the parameter name `script_version` is no longer applicable to datasets without loading script (i.e., 
datasets only with raw data files).\r\n\r\nThis PR replaces the parameter name `script_version` with `revision`.\r\n\r\nThis way, we are also aligned with:\r\n- Transformers: `AutoTokenizer.from_pretrained(..., revision=...)`\r\n- Hub: `HfApi.dataset_info(..., revision=...)`, `HfApi.upload_file(..., revision=...)`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2933\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2932","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2932\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2932\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2932\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2932","id":999317750,"node_id":"I_kwDODunzps47kGD2","number":2932,"title":"Conda build fails","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Why 1.9 ?\r\n\r\nhttps:\/\/anaconda.org\/HuggingFace\/datasets currently says 1.11","Alright I added 1.12.0 and 1.12.1 and fixed the conda build #2952 "],"created_at":1631882962000,"updated_at":1632238270000,"closed_at":1632238270000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nCurrent `datasets` version in conda is 1.9 instead of 1.12.\r\n\r\nThe build of the conda package fails.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2932\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2931","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2931\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2931\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2931\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2931","id":998326359,"node_id":"PR_kwDODunzps4r1-JH","number":2931,"title":"Fix bug in 
to_tf_dataset","user":{"login":"Rocketknight1","id":12866554,"node_id":"MDQ6VXNlcjEyODY2NTU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12866554?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Rocketknight1","html_url":"https:\/\/github.com\/Rocketknight1","followers_url":"https:\/\/api.github.com\/users\/Rocketknight1\/followers","following_url":"https:\/\/api.github.com\/users\/Rocketknight1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Rocketknight1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Rocketknight1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Rocketknight1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Rocketknight1\/orgs","repos_url":"https:\/\/api.github.com\/users\/Rocketknight1\/repos","events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I'm going to merge it, but yeah - hopefully the CI runner just cleans that up automatically and few other people run the tests on Windows anyway!"],"created_at":1631804883000,"updated_at":1631811698000,"closed_at":1631811697000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2931","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2931","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2931.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2931.patch"},"body":"Replace `set_format()` to `with_format()` so that we don't alter the original dataset in `to_tf_dataset()`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2931\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2930","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2930\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2930\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2930\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2930","id":998154311,"node_id":"I_kwDODunzps47fqBH","number":2930,"title":"Mutable columns argument breaks 
set_format","user":{"login":"Rocketknight1","id":12866554,"node_id":"MDQ6VXNlcjEyODY2NTU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12866554?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Rocketknight1","html_url":"https:\/\/github.com\/Rocketknight1","followers_url":"https:\/\/api.github.com\/users\/Rocketknight1\/followers","following_url":"https:\/\/api.github.com\/users\/Rocketknight1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Rocketknight1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Rocketknight1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Rocketknight1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Rocketknight1\/orgs","repos_url":"https:\/\/api.github.com\/users\/Rocketknight1\/repos","events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":{"login":"Rocketknight1","id":12866554,"node_id":"MDQ6VXNlcjEyODY2NTU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12866554?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Rocketknight1","html_url":"https:\/\/github.com\/Rocketknight1","followers_url":"https:\/\/api.github.com\/users\/Rocketknight1\/followers","following_url":"https:\/\/api.github.com\/users\/Rocketknight1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Rocketknight1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Rocketknight1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Rocketknight1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Rocketknight1\/orgs","repos_url":"https:\/\/api.github.com\/users\/Rocketknight1\/repos","events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/received_events","type":"User","site_admin":false},"assignees":[{"login":"Rocketknight1","id":12866554,"node_id":"MDQ6VXNlcjEyODY2NTU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12866554?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Rocketknight1","html_url":"https:\/\/github.com\/Rocketknight1","followers_url":"https:\/\/api.github.com\/users\/Rocketknight1\/followers","following_url":"https:\/\/api.github.com\/users\/Rocketknight1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Rocketknight1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Rocketknight1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Rocketknight1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Rocketknight1\/orgs","repos_url":"https:\/\/api.github.com\/users\/Rocketknight1\/repos","events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/received_events","type":"User","site_admin":false},{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"http
s:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Pushed a fix to my branch #2731 "],"created_at":1631795242000,"updated_at":1631800253000,"closed_at":1631800253000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nIf you pass a mutable list to the `columns` argument of `set_format` and then change the list afterwards, the returned columns also change.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"glue\", \"cola\")\r\n\r\ncolumn_list = [\"idx\", \"label\"]\r\ndataset.set_format(\"python\", columns=column_list)\r\ncolumn_list[1] = \"foo\" # Change the list after we call `set_format`\r\ndataset['train'][:4].keys()\r\n```\r\n\r\n## Expected results\r\n```python\r\ndict_keys(['idx', 'label'])\r\n```\r\n\r\n## Actual results\r\n```python\r\ndict_keys(['idx'])\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2930\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2929","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2929\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2929\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2929\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2929","id":997960024,"node_id":"PR_kwDODunzps4r015C","number":2929,"title":"Add regression test for null 
Sequence","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631782713000,"updated_at":1631867039000,"closed_at":1631867039000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2929","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2929","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2929.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2929.patch"},"body":"Relates to #2892 and #2900.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2929\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2928","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2928\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2928\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2928\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2928","id":997941506,"node_id":"PR_kwDODunzps4r0yUb","number":2928,"title":"Update BibTeX 
entry","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631781560000,"updated_at":1631795734000,"closed_at":1631795734000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2928","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2928","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2928.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2928.patch"},"body":"Update BibTeX entry.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2928\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2927","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2927\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2927\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2927\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2927","id":997654680,"node_id":"I_kwDODunzps47dwCY","number":2927,"title":"Datasets 1.12 dataset.filter TypeError: get_indices_from_mask_function() got an unexpected keyword 
argument","user":{"login":"timothyjlaurent","id":2000204,"node_id":"MDQ6VXNlcjIwMDAyMDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2000204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/timothyjlaurent","html_url":"https:\/\/github.com\/timothyjlaurent","followers_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/followers","following_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/orgs","repos_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/repos","events_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting, I'm looking into it :)","Fixed by #2950."],"created_at":1631754842000,"updated_at":1632155002000,"closed_at":1632155001000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the 
bug\r\nUpgrading to 1.12 caused `dataset.filter` call to fail with \r\n\r\n> get_indices_from_mask_function() got an unexpected keyword argument valid_rel_labels\r\n\r\n\r\n## Steps to reproduce the bug\r\n```pythondef \r\n\r\nfilter_good_rows(\r\n ex: Dict,\r\n valid_rel_labels: Set[str],\r\n valid_ner_labels: Set[str],\r\n tokenizer: PreTrainedTokenizerFast,\r\n) -> bool:\r\n \"\"\"Get the good rows\"\"\"\r\n encoding = get_encoding_for_text(text=ex[\"text\"], tokenizer=tokenizer)\r\n ex[\"encoding\"] = encoding\r\n for relation in ex[\"relations\"]:\r\n if not is_valid_relation(relation, valid_rel_labels):\r\n return False\r\n for span in ex[\"spans\"]:\r\n if not is_valid_span(span, valid_ner_labels, encoding):\r\n return False\r\n return True\r\n \r\ndef get_dataset(): \r\n loader_path = str(Path(__file__).parent \/ \"prodigy_dataset_builder.py\")\r\n ds = load_dataset(\r\n loader_path,\r\n name=\"prodigy-dataset\",\r\n data_files=sorted(file_paths),\r\n cache_dir=cache_dir,\r\n )[\"train\"]\r\n\r\n valid_ner_labels = set(vocab.ner_category)\r\n valid_relations = set(vocab.relation_types.keys())\r\n ds = ds.filter(\r\n filter_good_rows,\r\n fn_kwargs=dict(\r\n valid_rel_labels=valid_relations,\r\n valid_ner_labels=valid_ner_labels,\r\n tokenizer=vocab.tokenizer,\r\n ),\r\n keep_in_memory=True,\r\n num_proc=num_proc,\r\n )\r\n\r\n```\r\n\r\n`ds` is a `DatasetDict` produced by a jsonl dataset.\r\nThis runs fine on 1.11 but fails on 1.12\r\n\r\n**Stack Trace**\r\n\r\n\r\n\r\n## Expected results\r\n\r\nI expect 1.12 datasets filter to filter the dataset without raising as it does on 1.11\r\n\r\n## Actual results\r\n```\r\ntf_ner_rel_lib\/dataset.py:695: in load_prodigy_arrow_datasets_from_jsonl\r\n ds = ds.filter(\r\n..\/..\/..\/..\/.pyenv\/versions\/tf_ner_rel_lib\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py:185: in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n..\/..\/..\/..\/.pyenv\/versions\/tf_ner_rel_lib\/lib\/python3.8\/site-packages\/datasets\/fingerprint.py:398: in wrapper\r\n out = func(self, *args, **kwargs)\r\n..\/..\/..\/..\/.pyenv\/versions\/tf_ner_rel_lib\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py:2169: in filter\r\n indices = self.map(\r\n..\/..\/..\/..\/.pyenv\/versions\/tf_ner_rel_lib\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py:1686: in map\r\n return self._map_single(\r\n..\/..\/..\/..\/.pyenv\/versions\/tf_ner_rel_lib\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py:185: in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n..\/..\/..\/..\/.pyenv\/versions\/tf_ner_rel_lib\/lib\/python3.8\/site-packages\/datasets\/fingerprint.py:398: in wrapper\r\n out = func(self, *args, **kwargs)\r\n..\/..\/..\/..\/.pyenv\/versions\/tf_ner_rel_lib\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py:2048: in _map_single\r\n batch = apply_function_on_filtered_inputs(\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\ninputs = {'_input_hash': [2108817714, 1477695082, -1021597032, 2130671338, -1260483858, -1203431639, ...], '_task_hash': [18070...ons', 'relations', 'relations', ...], 'answer': ['accept', 'accept', 'accept', 'accept', 'accept', 'accept', ...], ...}\r\nindices = [0, 1, 2, 3, 4, 5, ...], check_same_num_examples = False, offset = 0\r\n\r\n def apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples=False, offset=0):\r\n \"\"\"Utility to apply the function on a selection of 
columns.\"\"\"\r\n nonlocal update_data\r\n fn_args = [inputs] if input_columns is None else [inputs[col] for col in input_columns]\r\n if offset == 0:\r\n effective_indices = indices\r\n else:\r\n effective_indices = [i + offset for i in indices] if isinstance(indices, list) else indices + offset\r\n processed_inputs = (\r\n> function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)\r\n )\r\nE TypeError: get_indices_from_mask_function() got an unexpected keyword argument 'valid_rel_labels'\r\n\r\n..\/..\/..\/..\/.pyenv\/versions\/tf_ner_rel_lib\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py:1939: TypeError\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.12.1\r\n- Platform: Mac\r\n- Python version: 3.8.9\r\n- PyArrow version: pyarrow==5.0.0\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2927\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2926","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2926\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2926\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2926\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2926","id":997463277,"node_id":"I_kwDODunzps47dBTt","number":2926,"title":"Error when downloading datasets to non-traditional cache directories","user":{"login":"dar-tau","id":45885627,"node_id":"MDQ6VXNlcjQ1ODg1NjI3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/45885627?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dar-tau","html_url":"https:\/\/github.com\/dar-tau","followers_url":"https:\/\/api.github.com\/users\/dar-tau\/followers","following_url":"https:\/\/api.github.com\/users\/dar-tau\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dar-tau\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dar-tau\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dar-tau\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dar-tau\/orgs","repos_url":"https:\/\/api.github.com\/users\/dar-tau\/repos","events_url":"https:\/\/api.github.com\/users\/dar-tau\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dar-tau\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631735986000,"updated_at":1631736135000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nWhen the cache directory is linked (soft link) to a directory on a NetApp device, the download fails. 
\r\n\r\n## Steps to reproduce the bug\r\n```bash\r\nln -s \/path\/to\/netapp\/.cache ~\/.cache\r\n```\r\n\r\n```python\r\nload_dataset(\"imdb\")\r\n```\r\n\r\n## Expected results\r\nSuccessfully loading IMDB dataset\r\n\r\n## Actual results\r\n```\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=33432835, \r\nnum_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0,\r\n dataset_name='imdb')}, {'expected': SplitInfo(name='test', num_bytes=32650697, num_examples=25000, dataset_name='imdb'),\r\n 'recorded': SplitInfo(name='test', num_bytes=659932, num_examples=503, dataset_name='imdb')}, {'expected':\r\n SplitInfo(name='unsupervised', num_bytes=67106814, num_examples=50000, dataset_name='imdb'), 'recorded':\r\n SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}]\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.1.2\r\n- Platform: Ubuntu \r\n- Python version: 3.8\r\n\r\n## Extra notes\r\nStranger yet, trying to debug the phenomenon, I found the range of results to vary a lot without clear direction:\r\n - With `cache_dir=\"\/path\/to\/netapp\/.cache\"` the same thing happens.\r\n - However, when linking `~\/netapp\/` to `\/path\/to\/netapp` *and* setting `cache_dir=\"~\/netapp\/.cache\/huggingface\/datasets\"` - it does work\r\n - On the other hand, when linking `~\/.cache` to `~\/netapp\/.cache` without using `cache_dir`, it does work anymore.\r\n\r\nWhile I could test it only for a NetApp device, it might have to do with any other mounted FS.\r\n\r\nThanks :)\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2926\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2925","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2925\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2925\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2925\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2925","id":997407034,"node_id":"PR_kwDODunzps4rzJ9s","number":2925,"title":"Add tutorial for no-code dataset 
upload","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Cool, love it ! :)\r\n\r\nFeel free to add a paragraph saying how to load the dataset:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"stevhliu\/demo\")\r\n\r\n# or to separate each csv file into several splits\r\ndata_files = {\"train\": \"train.csv\", \"test\": \"test.csv\"}\r\ndataset = load_dataset(\"stevhliu\/demo\", data_files=data_files)\r\nprint(dataset[\"train\"][0])\r\n```","Perfect, feel free to mark this PR ready for review :)\r\n\r\ncc @albertvillanova do you have any comment ? You can check the tutorial here:\r\nhttps:\/\/47389-250213286-gh.circle-artifacts.com\/0\/docs\/_build\/html\/no_code_upload.html\r\n\r\nMaybe we can just add a list of supported file types:\r\n- csv\r\n- json\r\n- json lines\r\n- text\r\n- parquet"],"created_at":1631732082000,"updated_at":1632248034000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2925","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2925","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2925.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2925.patch"},"body":"This PR is for a tutorial for uploading a dataset to the Hub. It relies on the Hub UI elements to upload a dataset, introduces the online tagging tool for creating tags, and the Dataset card template to get a head start on filling it out. 
The addition of this tutorial should make it easier for beginners to upload a dataset without accessing the terminal or knowing Git.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2925\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2924","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2924\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2924\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2924\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2924","id":997378113,"node_id":"I_kwDODunzps47cshB","number":2924,"title":"\"File name too long\" error for file locks","user":{"login":"gar1t","id":184949,"node_id":"MDQ6VXNlcjE4NDk0OQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/184949?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gar1t","html_url":"https:\/\/github.com\/gar1t","followers_url":"https:\/\/api.github.com\/users\/gar1t\/followers","following_url":"https:\/\/api.github.com\/users\/gar1t\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gar1t\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gar1t\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gar1t\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gar1t\/orgs","repos_url":"https:\/\/api.github.com\/users\/gar1t\/repos","events_url":"https:\/\/api.github.com\/users\/gar1t\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gar1t\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi, the filename here is less than 255\r\n```python\r\n>>> len(\"_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock\")\r\n154\r\n```\r\nso not sure why it's considered too long for your filesystem.\r\n(also note that the lock files we use always have smaller filenames than 255)\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/5d1a9f1e3c6c495dc0610b459e39d2eb8893f152\/src\/datasets\/utils\/filelock.py#L135-L135","Yes, you're right! I need to get you more info here. Either there's something going with the name itself that the file system doesn't like (an encoding that blows up the name length??) or perhaps there's something with the path that's causing the entire string to be used as a name. 
I haven't seen this on any system before and the Internet's not forthcoming with any info."],"created_at":1631729810000,"updated_at":1632232993000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\nGetting the following error when calling `load_dataset(\"gar1t\/test\")`:\r\n\r\n```\r\nOSError: [Errno 36] File name too long: '\/.cache\/huggingface\/datasets\/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock'\r\n```\r\n\r\n## Steps to reproduce the bug\r\n\r\nWhere the user cache dir (e.g. `~\/.cache`) is on a file system that limits filenames to 255 chars (e.g. ext4):\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nload_dataset(\"gar1t\/test\")\r\n```\r\n\r\n## Expected results\r\n\r\nExpect the function to return without an error.\r\n\r\n## Actual results\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/lib\/python3.9\/site-packages\/datasets\/load.py\", line 1112, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 644, in download_and_prepare\r\n self._save_info()\r\n File \"\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 765, in _save_info\r\n with FileLock(lock_path):\r\n File \"\/lib\/python3.9\/site-packages\/datasets\/utils\/filelock.py\", line 323, in __enter__\r\n self.acquire()\r\n File \"\/lib\/python3.9\/site-packages\/datasets\/utils\/filelock.py\", line 272, in acquire\r\n self._acquire()\r\n File \"\/lib\/python3.9\/site-packages\/datasets\/utils\/filelock.py\", line 403, in _acquire\r\n fd = os.open(self._lock_file, open_mode)\r\nOSError: [Errno 36] File name too long: '\/.cache\/huggingface\/datasets\/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock'\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.12.1\r\n- Platform: Linux-5.11.0-27-generic-x86_64-with-glibc2.31\r\n- Python version: 3.9.7\r\n- PyArrow version: 5.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2924\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2923","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2923\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2923\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2923\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2923","id":997351590,"node_id":"I_kwDODunzps47cmCm","number":2923,"title":"Loading an autonlp dataset raises in normal mode but not in streaming 
mode","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631727878000,"updated_at":1631727878000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\nThe same dataset (from autonlp) raises an error in normal mode, but does not raise in streaming mode\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"severo\/autonlp-data-sentiment_detection-3c8bcd36\", split=\"train\", streaming=False)\r\n## raises an error\r\n\r\nload_dataset(\"severo\/autonlp-data-sentiment_detection-3c8bcd36\", split=\"train\", streaming=True)\r\n## does not raise an error\r\n```\r\n\r\n## Expected results\r\n\r\nBoth calls should raise the same error\r\n\r\n## Actual results\r\n\r\nCall with streaming=False:\r\n\r\n```\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 5825.42it\/s]\r\nUsing custom data configuration autonlp-data-sentiment_detection-3c8bcd36-fe30267462d1d42b\r\nDownloading and preparing dataset 
json\/autonlp-data-sentiment_detection-3c8bcd36 to \/home\/slesage\/.cache\/huggingface\/datasets\/json\/autonlp-data-sentiment_detection-3c8bcd36-fe30267462d1d42b\/0.0.0\/d75ead8d5cfcbe67495df0f89bd262f0023257fbbbd94a730313295f3d756d50...\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5\/5 [00:00<00:00, 15923.71it\/s]\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5\/5 [00:00<00:00, 3346.88it\/s]\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1112, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 636, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 726, in _download_and_prepare\r\n 
self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 1187, in _prepare_split\r\n writer.write_table(table)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/datasets\/arrow_writer.py\", line 418, in write_table\r\n pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/datasets\/arrow_writer.py\", line 418, in \r\n pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)\r\n File \"pyarrow\/table.pxi\", line 1249, in pyarrow.lib.Table.__getitem__\r\n File \"pyarrow\/table.pxi\", line 1825, in pyarrow.lib.Table.column\r\n File \"pyarrow\/table.pxi\", line 1800, in pyarrow.lib.Table._ensure_integer_index\r\nKeyError: 'Field \"splits\" does not exist in table schema'\r\n```\r\n\r\nCall with `streaming=False`:\r\n\r\n```\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 6000.43it\/s]\r\nUsing custom data configuration 
autonlp-data-sentiment_detection-3c8bcd36-fe30267462d1d42b\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5\/5 [00:00<00:00, 46916.15it\/s]\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5\/5 [00:00<00:00, 148734.18it\/s]\r\n```\r\n\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.12.1.dev0\r\n- Platform: Linux-5.11.0-1017-aws-x86_64-with-glibc2.29\r\n- Python version: 3.8.11\r\n- PyArrow version: 4.0.1\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2923\/timeline","performed_via_github_app":null,"is_pull_request":false} 
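For readers skimming the two logs above, here is a minimal, hypothetical sketch (it is not the `datasets` internals, and the column names are invented) of the failure mode behind the `KeyError`: when the columns found in the JSON files do not match the schema the builder expects, selecting the missing field from the `pyarrow` table raises exactly this kind of error, mirroring the list comprehension shown in the traceback. Streaming mode yields examples lazily without materializing the Arrow table, which is presumably why the mismatch goes unnoticed there.

```python
# Minimal sketch, not the actual datasets code: the "splits" column name and the
# example data are made up for illustration only.
import pyarrow as pa

# A table built from the raw JSON columns...
table = pa.Table.from_pydict({"text": ["hello"], "label": [0]})

# ...checked against a schema that also expects a "splits" column.
expected_names = ["text", "label", "splits"]

try:
    # Same select-by-name pattern as in the arrow_writer traceback above.
    columns = [table[name] for name in expected_names]
except KeyError as err:
    print(err)  # e.g. 'Field "splits" does not exist in table schema'
```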
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2922","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2922\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2922\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2922\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2922","id":997332662,"node_id":"PR_kwDODunzps4ry6-s","number":2922,"title":"Fix conversion of multidim arrays in list to arrow","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631726496000,"updated_at":1631726572000,"closed_at":1631726505000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2922","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2922","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2922.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2922.patch"},"body":"Arrow only supports 1-dim arrays. Previously we were converting all the numpy arrays to python list before instantiating arrow arrays to workaround this limitation.\r\nHowever in #2361 we started to keep numpy arrays in order to keep their dtypes.\r\nIt works when we pass any multi-dim numpy array (the conversion to arrow has been added on our side), but not for lists of multi-dim numpy arrays.\r\n\r\nIn this PR I added two strategies:\r\n- one that takes a list of multi-dim numpy arrays on returns an arrow array in an optimized way (more common case)\r\n- one that takes a list of possibly very nested data (lists, dicts, tuples) containing multi-dim arrays. This one is less optimized since it converts all the multi-dim numpy arrays into lists of 1-d arrays for compatibility with arrow. 
This strategy is simpler than just trying to create the arrow array from a possibly very nested data structure, but in the future we can improve it if needed.\r\n\r\nFix https:\/\/github.com\/huggingface\/datasets\/issues\/2921","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2922\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2921","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2921\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2921\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2921\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2921","id":997325424,"node_id":"I_kwDODunzps47cfpw","number":2921,"title":"Using a list of multi-dim numpy arrays raises an error \"can only convert 1-dimensional array values\"","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631725931000,"updated_at":1631726505000,"closed_at":1631726505000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"This error has been introduced in https:\/\/github.com\/huggingface\/datasets\/pull\/2361\r\n\r\nTo reproduce:\r\n```python\r\nimport numpy as np\r\nfrom datasets import Dataset\r\n\r\nd = Dataset.from_dict({\"a\": [np.zeros((2, 2))]})\r\n```\r\nraises\r\n```python\r\nTraceback (most recent call last):\r\n File \"playground\/ttest.py\", line 5, in \r\n d = Dataset.from_dict({\"a\": [np.zeros((2, 2))]}).with_format(\"torch\")\r\n File \"\/Users\/quentinlhoest\/Desktop\/hf\/nlp\/src\/datasets\/arrow_dataset.py\", line 458, in from_dict\r\n pa_table = InMemoryTable.from_pydict(mapping=mapping)\r\n File \"\/Users\/quentinlhoest\/Desktop\/hf\/nlp\/src\/datasets\/table.py\", line 365, in from_pydict\r\n return cls(pa.Table.from_pydict(*args, **kwargs))\r\n File \"pyarrow\/table.pxi\", line 1639, in pyarrow.lib.Table.from_pydict\r\n File \"pyarrow\/array.pxi\", line 332, in pyarrow.lib.asarray\r\n File \"pyarrow\/array.pxi\", line 223, in pyarrow.lib.array\r\n File \"pyarrow\/array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n File \"\/Users\/quentinlhoest\/Desktop\/hf\/nlp\/src\/datasets\/arrow_writer.py\", line 107, in __arrow_array__\r\n out = pa.array(self.data, type=type)\r\n File \"pyarrow\/array.pxi\", line 306, in 
pyarrow.lib.array\r\n File \"pyarrow\/array.pxi\", line 39, in pyarrow.lib._sequence_to_array\r\n File \"pyarrow\/error.pxi\", line 143, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow\/error.pxi\", line 99, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Can only convert 1-dimensional array values","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2921\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2920","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2920\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2920\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2920\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2920","id":997323014,"node_id":"PR_kwDODunzps4ry4_u","number":2920,"title":"Fix unwanted tqdm bar when accessing examples","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631725751000,"updated_at":1631726304000,"closed_at":1631726304000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2920","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2920","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2920.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2920.patch"},"body":"A change in #2814 added bad progress bars in `map_nested`. 
Now they're disabled by default\r\n\r\nFix #2919 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2920\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2919","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2919\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2919\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2919\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2919","id":997127487,"node_id":"I_kwDODunzps47bvU_","number":2919,"title":"Unwanted progress bars when accessing examples","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.gith
ub.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["doing a patch release now :)"],"created_at":1631714710000,"updated_at":1631726509000,"closed_at":1631726303000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"When accessing examples from a dataset formatted for pytorch, some progress bars appear when accessing examples:\r\n```python\r\nIn [1]: import datasets as ds \r\n\r\nIn [2]: d = ds.Dataset.from_dict({\"a\": [0, 1, 2]}).with_format(\"torch\") \r\n\r\nIn [3]: d[0] \r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 3172.70it\/s]\r\nOut[3]: {'a': tensor(0)}\r\n```\r\n\r\nThis is because the pytorch formatter calls `map_nested` that uses progress bars\r\n\r\ncc @sgugger ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2919\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2918","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2918\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2918\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2918\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2918","id":997063347,"node_id":"I_kwDODunzps47bfqz","number":2918,"title":"`Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming","user":{"login":"SBrandeis","id":33657802,"node_id":"MDQ6VXNlcjMzNjU3ODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33657802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SBrandeis","html_url":"https:\/\/github.com\/SBrandeis","followers_url":"https:\/\/api.github.com\/users\/SBrandeis\/followers","following_url":"https:\/\/api.github.com\/users\/SBrandeis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SBrandeis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SBrandeis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SBrandeis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SBrandeis\/orgs","repos_url":"https:\/\/api.github.com\/users\/SBrandeis\/repos","events_url":"https:\/\/api.github.com\/users\/SBrandeis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SBrandeis\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"},{"id":3287858981,"node_id":"MDU6TGFiZWwzMjg3ODU4OTgx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/streaming","name":"streaming","color":"fef2c0","default":false,"description":""}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @SBrandeis, thanks for reporting! 
^^\r\n\r\nI think this is an issue with `fsspec`: https:\/\/github.com\/intake\/filesystem_spec\/issues\/389\r\n\r\nI will ask them if they are planning to fix it...","Code to reproduce the bug: `ClientPayloadError: 400, message='Can not decode content-encoding: gzip'`\r\n```python\r\nIn [1]: import fsspec\r\n\r\nIn [2]: import json\r\n\r\nIn [3]: with fsspec.open('https:\/\/raw.githubusercontent.com\/allenai\/scitldr\/master\/SciTLDR-Data\/SciTLDR-FullText\/test.jsonl', encoding=\"utf-8\") as f:\r\n ...: for row in f:\r\n ...: data = json.loads(row)\r\n ...:\r\n---------------------------------------------------------------------------\r\nClientPayloadError Traceback (most recent call last)\r\n```","Thanks for investigating @albertvillanova ! \ud83e\udd17 "],"created_at":1631711167000,"updated_at":1632127898000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\nTrying to load the `\"FullText\"` config of the `\"scitldr\"` dataset with `streaming=True` raises an error from `aiohttp`:\r\n```python\r\nClientPayloadError: 400, message='Can not decode content-encoding: gzip'\r\n```\r\n\r\ncc @lhoestq \r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\n\r\niter_dset = iter(\r\n load_dataset(\"scitldr\", name=\"FullText\", split=\"test\", streaming=True)\r\n)\r\n\r\nnext(iter_dset)\r\n```\r\n\r\n## Expected results\r\nReturns the first sample of the dataset\r\n\r\n## Actual results\r\nCalling `__next__` crashes with the following Traceback:\r\n\r\n```python\r\n----> 1 next(dset_iter)\r\n\r\n~\\miniconda3\\envs\\datasets\\lib\\site-packages\\datasets\\iterable_dataset.py in __iter__(self)\r\n 339\r\n 340 def __iter__(self):\r\n--> 341 for key, example in self._iter():\r\n 342 if self.features:\r\n 343 # we encode the example for ClassLabel feature types for example\r\n\r\n~\\miniconda3\\envs\\datasets\\lib\\site-packages\\datasets\\iterable_dataset.py in _iter(self)\r\n 336 else:\r\n 337 ex_iterable = self._ex_iterable\r\n--> 338 yield from ex_iterable\r\n 339\r\n 340 def __iter__(self):\r\n\r\n~\\miniconda3\\envs\\datasets\\lib\\site-packages\\datasets\\iterable_dataset.py in __iter__(self)\r\n 76\r\n 77 def __iter__(self):\r\n---> 78 for key, example in self.generate_examples_fn(**self.kwargs):\r\n 79 yield key, example\r\n 80\r\n\r\n~\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\scitldr\\72d6e2195786c57e1d343066fb2cc4f93ea39c5e381e53e6ae7c44bbfd1f05ef\\scitldr.py in _generate_examples(self, filepath, split)\r\n 162\r\n 163 with open(filepath, encoding=\"utf-8\") as f:\r\n--> 164 for id_, row in enumerate(f):\r\n 165 data = json.loads(row)\r\n 166 if self.config.name == \"AIC\":\r\n\r\n~\\miniconda3\\envs\\datasets\\lib\\site-packages\\fsspec\\implementations\\http.py in read(self, length)\r\n 496 else:\r\n 497 length = min(self.size - self.loc, length)\r\n--> 498 return super().read(length)\r\n 499\r\n 500 async def async_fetch_all(self):\r\n\r\n~\\miniconda3\\envs\\datasets\\lib\\site-packages\\fsspec\\spec.py in read(self, length)\r\n 1481 # don't even bother calling fetch\r\n 1482 return b\"\"\r\n-> 1483 out = self.cache._fetch(self.loc, self.loc + length)\r\n 1484 self.loc += len(out)\r\n 1485 return out\r\n\r\n~\\miniconda3\\envs\\datasets\\lib\\site-packages\\fsspec\\caching.py in _fetch(self, start, end)\r\n 378 elif start < self.start:\r\n 379 if self.end - end > self.blocksize:\r\n--> 380 self.cache = self.fetcher(start, bend)\r\n 381 self.start = 
start\r\n 382 else:\r\n\r\n~\\miniconda3\\envs\\datasets\\lib\\site-packages\\fsspec\\asyn.py in wrapper(*args, **kwargs)\r\n 86 def wrapper(*args, **kwargs):\r\n 87 self = obj or args[0]\r\n---> 88 return sync(self.loop, func, *args, **kwargs)\r\n 89\r\n 90 return wrapper\r\n\r\n~\\miniconda3\\envs\\datasets\\lib\\site-packages\\fsspec\\asyn.py in sync(loop, func, timeout, *args, **kwargs)\r\n 67 raise FSTimeoutError\r\n 68 if isinstance(result[0], BaseException):\r\n---> 69 raise result[0]\r\n 70 return result[0]\r\n 71\r\n\r\n~\\miniconda3\\envs\\datasets\\lib\\site-packages\\fsspec\\asyn.py in _runner(event, coro, result, timeout)\r\n 23 coro = asyncio.wait_for(coro, timeout=timeout)\r\n 24 try:\r\n---> 25 result[0] = await coro\r\n 26 except Exception as ex:\r\n 27 result[0] = ex\r\n\r\n~\\miniconda3\\envs\\datasets\\lib\\site-packages\\fsspec\\implementations\\http.py in async_fetch_range(self, start, end)\r\n 538 if r.status == 206:\r\n 539 # partial content, as expected\r\n--> 540 out = await r.read()\r\n 541 elif \"Content-Length\" in r.headers:\r\n 542 cl = int(r.headers[\"Content-Length\"])\r\n\r\n~\\miniconda3\\envs\\datasets\\lib\\site-packages\\aiohttp\\client_reqrep.py in read(self)\r\n 1030 if self._body is None:\r\n 1031 try:\r\n-> 1032 self._body = await self.content.read()\r\n 1033 for trace in self._traces:\r\n 1034 await trace.send_response_chunk_received(\r\n\r\n~\\miniconda3\\envs\\datasets\\lib\\site-packages\\aiohttp\\streams.py in read(self, n)\r\n 342 async def read(self, n: int = -1) -> bytes:\r\n 343 if self._exception is not None:\r\n--> 344 raise self._exception\r\n 345\r\n 346 # migration problem; with DataQueue you have to catch\r\n\r\nClientPayloadError: 400, message='Can not decode content-encoding: gzip'\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.12.0\r\n- Platform: Windows-10-10.0.19041-SP0\r\n- Python version: 3.8.5\r\n- PyArrow version: 2.0.0\r\n- aiohttp version: 3.7.4.post0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2918\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2917","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2917\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2917\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2917\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2917","id":997041658,"node_id":"I_kwDODunzps47baX6","number":2917,"title":"windows download 
abnormal","user":{"login":"wei1826676931","id":52347799,"node_id":"MDQ6VXNlcjUyMzQ3Nzk5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/52347799?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/wei1826676931","html_url":"https:\/\/github.com\/wei1826676931","followers_url":"https:\/\/api.github.com\/users\/wei1826676931\/followers","following_url":"https:\/\/api.github.com\/users\/wei1826676931\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/wei1826676931\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/wei1826676931\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/wei1826676931\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/wei1826676931\/orgs","repos_url":"https:\/\/api.github.com\/users\/wei1826676931\/repos","events_url":"https:\/\/api.github.com\/users\/wei1826676931\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/wei1826676931\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Is there some kind of proxy that is configured in your browser that gives you access to internet ? If it's the case it could explain why it doesn't work in the code, since the proxy wouldn't be used","It is indeed an agency problem, thank you very, very much","Let me know if you have other questions :)\r\n\r\nClosing this issue now"],"created_at":1631709935000,"updated_at":1631812668000,"closed_at":1631812668000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nThe script clearly exists (accessible from the browser), but the script download fails on windows. Then I tried it again and it can be downloaded normally on linux. 
Why?\r\n\r\n## Steps to reproduce the bug\r\n\r\nPython 3.7 + Windows:\r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/52347799\/133436174-4303f847-55d5-434f-a749-08da3bb9b654.png)\r\n\r\n```python\r\n# Sample code to reproduce the bug\r\n```\r\n\r\n## Expected results\r\nIt can be downloaded normally.\r\n\r\n## Actual results\r\nIt can't.\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.11.0\r\n- Platform: Windows\r\n- Python version: 3.7\r\n- PyArrow version:\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2917\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2916","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2916\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2916\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2916\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2916","id":997003661,"node_id":"PR_kwDODunzps4rx5ua","number":2916,"title":"Add OpenAI's pass@k code evaluation metric","user":{"login":"lvwerra","id":8264887,"node_id":"MDQ6VXNlcjgyNjQ4ODc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8264887?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lvwerra","html_url":"https:\/\/github.com\/lvwerra","followers_url":"https:\/\/api.github.com\/users\/lvwerra\/followers","following_url":"https:\/\/api.github.com\/users\/lvwerra\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lvwerra\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lvwerra\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lvwerra\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lvwerra\/orgs","repos_url":"https:\/\/api.github.com\/users\/lvwerra\/repos","events_url":"https:\/\/api.github.com\/users\/lvwerra\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lvwerra\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> The implementation makes heavy use of multiprocessing which this PR does not touch. Is this conflicting with multiprocessing natively integrated in datasets?\r\n\r\nIt should work normally, but feel free to test it.\r\nThere is some documentation about using metrics in a distributed setup that uses multiprocessing [here](https:\/\/huggingface.co\/docs\/datasets\/loading.html?highlight=rank#distributed-setup).\r\nYou can test it by spawning several processes, each of which loads the metric. Then in each process you add some references\/predictions to the metric. Finally you call compute() in each process, and on process 0 it should return the result over all the references\/predictions.\r\n\r\nLet me know if you have questions or if I can help","Is there a good way to debug the Windows tests? 
I suspect it is an issue with `multiprocessing`, but I can't see the error messages."],"created_at":1631707543000,"updated_at":1631951964000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2916","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2916","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2916.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2916.patch"},"body":"This PR introduces the `code_eval` metric which implements [OpenAI's code evaluation harness](https:\/\/github.com\/openai\/human-eval) introduced in the [Codex paper](https:\/\/arxiv.org\/abs\/2107.03374). It is heavily based on the original implementation and just adapts the interface to follow the `predictions`\/`references` convention.\r\n\r\nThe addition of this metric should enable the evaluation against the code evaluation datasets added in #2897 and #2893.\r\n\r\nA few open questions:\r\n\r\n- The implementation makes heavy use of multiprocessing which this PR does not touch. Is this conflicting with multiprocessing natively integrated in `datasets`?\r\n- This metric executes generated Python code and as such it poses dangers of executing malicious code. OpenAI addresses this issue by 1) commenting the `exec` call in the code so the user has to actively uncomment it and read the warning and 2) suggests using a sandbox environment (gVisor container). Should we add a similar safeguard? E.g. a prompt that needs to be answered when initialising the metric? Or at least a warning message?\r\n- Naming: the implementation sticks to the `predictions`\/`references` naming, however, the references are not reference solutions but unittest to test the solution. While reference solutions are also available they are not used. 
Should the naming be adapted?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2916\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2915","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2915\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2915\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2915\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2915","id":996870071,"node_id":"PR_kwDODunzps4rxfWb","number":2915,"title":"Fix fsspec AbstractFileSystem access","user":{"login":"pierre-godard","id":3969168,"node_id":"MDQ6VXNlcjM5NjkxNjg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3969168?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/pierre-godard","html_url":"https:\/\/github.com\/pierre-godard","followers_url":"https:\/\/api.github.com\/users\/pierre-godard\/followers","following_url":"https:\/\/api.github.com\/users\/pierre-godard\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/pierre-godard\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/pierre-godard\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/pierre-godard\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/pierre-godard\/orgs","repos_url":"https:\/\/api.github.com\/users\/pierre-godard\/repos","events_url":"https:\/\/api.github.com\/users\/pierre-godard\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/pierre-godard\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631698760000,"updated_at":1631705724000,"closed_at":1631705724000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2915","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2915","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2915.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2915.patch"},"body":"This addresses the issue from #2914 by changing the way fsspec's AbstractFileSystem is accessed.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2915\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2914","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2914\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2914\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2914\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2914","id":996770168,"node_id":"I_kwDODunzps47aYF4","number":2914,"title":"Having a dependency defining fsspec entrypoint raises an AttributeError when importing 
datasets","user":{"login":"pierre-godard","id":3969168,"node_id":"MDQ6VXNlcjM5NjkxNjg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3969168?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/pierre-godard","html_url":"https:\/\/github.com\/pierre-godard","followers_url":"https:\/\/api.github.com\/users\/pierre-godard\/followers","following_url":"https:\/\/api.github.com\/users\/pierre-godard\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/pierre-godard\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/pierre-godard\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/pierre-godard\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/pierre-godard\/orgs","repos_url":"https:\/\/api.github.com\/users\/pierre-godard\/repos","events_url":"https:\/\/api.github.com\/users\/pierre-godard\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/pierre-godard\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Closed by #2915."],"created_at":1631692446000,"updated_at":1631724557000,"closed_at":1631724556000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nIn one of my project, I defined a custom fsspec filesystem with an entrypoint.\r\nMy guess is that by doing so, a variable named `spec` is created in the module `fsspec` (created by entering a for loop as there are entrypoints defined, see the loop in question [here](https:\/\/github.com\/intake\/filesystem_spec\/blob\/0589358d8a029ed6b60d031018f52be2eb721291\/fsspec\/__init__.py#L55)).\r\nSo that `fsspec.spec`, that was previously referring to the `spec` submodule, is now referring to that `spec` variable.\r\nThis make the import of datasets failing as it is using that `fsspec.spec`.\r\n\r\n## Steps to reproduce the bug\r\nI could reproduce the bug with a dummy poetry project.\r\n\r\nHere is the pyproject.toml:\r\n```toml\r\n[tool.poetry]\r\nname = \"debug-datasets\"\r\nversion = \"0.1.0\"\r\ndescription = \"\"\r\nauthors = [\"Pierre Godard\"]\r\n\r\n[tool.poetry.dependencies]\r\npython = \"^3.8\"\r\ndatasets = \"^1.11.0\"\r\n\r\n[tool.poetry.dev-dependencies]\r\n\r\n[build-system]\r\nrequires = [\"poetry-core>=1.0.0\"]\r\nbuild-backend = \"poetry.core.masonry.api\"\r\n\r\n[tool.poetry.plugins.\"fsspec.specs\"]\r\n\"file2\" = \"fsspec.implementations.local.LocalFileSystem\"\r\n```\r\n\r\nThe only other file being a `debug_datasets\/__init__.py` empty file.\r\n\r\nThe overall structure of the project is as follows:\r\n```\r\n.\r\n\u251c\u2500\u2500 pyproject.toml\r\n\u2514\u2500\u2500 debug_datasets\r\n \u2514\u2500\u2500 __init__.py\r\n```\r\n\r\nThen, within the project folder run:\r\n\r\n```\r\npoetry install\r\npoetry run python\r\n```\r\n\r\nAnd in the python interpreter, try to import `datasets`:\r\n\r\n```\r\nimport datasets\r\n```\r\n\r\n## Expected results\r\nThe import should run successfully.\r\n\r\n## Actual results\r\n\r\nHere is the trace of the error I get:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File 
\"\/home\/godarpi\/.cache\/pypoetry\/virtualenvs\/debug-datasets-JuFzTKL--py3.8\/lib\/python3.8\/site-packages\/datasets\/__init__.py\", line 33, in \r\n from .arrow_dataset import Dataset, concatenate_datasets\r\n File \"\/home\/godarpi\/.cache\/pypoetry\/virtualenvs\/debug-datasets-JuFzTKL--py3.8\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 48, in \r\n from .filesystems import extract_path_from_uri, is_remote_filesystem\r\n File \"\/home\/godarpi\/.cache\/pypoetry\/virtualenvs\/debug-datasets-JuFzTKL--py3.8\/lib\/python3.8\/site-packages\/datasets\/filesystems\/__init__.py\", line 30, in \r\n def is_remote_filesystem(fs: fsspec.spec.AbstractFileSystem) -> bool:\r\nAttributeError: 'EntryPoint' object has no attribute 'AbstractFileSystem'\r\n```\r\n\r\n## Suggested fix\r\n\r\n`datasets\/filesystems\/__init__.py`, line 30, replace:\r\n```\r\n def is_remote_filesystem(fs: fsspec.spec.AbstractFileSystem) -> bool:\r\n```\r\nby:\r\n```\r\n def is_remote_filesystem(fs: fsspec.AbstractFileSystem) -> bool:\r\n```\r\n\r\nI will come up with a PR soon if this effectively solves the issue.\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.11.0\r\n- Platform: WSL2 (Ubuntu 20.04.1 LTS)\r\n- Python version: 3.8.5\r\n- PyArrow version: 5.0.0\r\n- `fsspec` version: 2021.8.1\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2914\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2913","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2913\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2913\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2913\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2913","id":996436368,"node_id":"I_kwDODunzps47ZGmQ","number":2913,"title":"timit_asr dataset only includes one text phrase","user":{"login":"margotwagner","id":39107794,"node_id":"MDQ6VXNlcjM5MTA3Nzk0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/39107794?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/margotwagner","html_url":"https:\/\/github.com\/margotwagner","followers_url":"https:\/\/api.github.com\/users\/margotwagner\/followers","following_url":"https:\/\/api.github.com\/users\/margotwagner\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/margotwagner\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/margotwagner\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/margotwagner\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/margotwagner\/orgs","repos_url":"https:\/\/api.github.com\/users\/margotwagner\/repos","events_url":"https:\/\/api.github.com\/users\/margotwagner\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/margotwagner\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @margotwagner, \r\nThis bug was fixed in #1995. 
Upgrading the datasets should work (min v1.8.0 ideally)","Hi @margotwagner,\r\n\r\nYes, as @bhavitvyamalik has commented, this bug was fixed in `datasets` version 1.5.0. You need to update it, as your current version is 1.4.1:\r\n> Environment info\r\n> - `datasets` version: 1.4.1"],"created_at":1631653567000,"updated_at":1631693119000,"closed_at":1631693118000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nThe dataset 'timit_asr' only includes one text phrase. It only includes the transcription \"Would such an act of refusal be useful?\" multiple times rather than different phrases.\r\n\r\n## Steps to reproduce the bug\r\nNote: I am following the tutorial https:\/\/huggingface.co\/blog\/fine-tune-wav2vec2-english\r\n\r\n1. Install the dataset and other packages\r\n```python\r\n!pip install datasets>=1.5.0\r\n!pip install transformers==4.4.0\r\n!pip install soundfile\r\n!pip install jiwer\r\n```\r\n2. Load the dataset\r\n```python\r\nfrom datasets import load_dataset, load_metric\r\n\r\ntimit = load_dataset(\"timit_asr\")\r\n```\r\n3. Remove columns that we don't want\r\n```python\r\ntimit = timit.remove_columns([\"phonetic_detail\", \"word_detail\", \"dialect_region\", \"id\", \"sentence_type\", \"speaker_id\"])\r\n```\r\n4. Write a short function to display some random samples of the dataset.\r\n```python\r\nfrom datasets import ClassLabel\r\nimport random\r\nimport pandas as pd\r\nfrom IPython.display import display, HTML\r\n\r\ndef show_random_elements(dataset, num_examples=10):\r\n assert num_examples <= len(dataset), \"Can't pick more elements than there are in the dataset.\"\r\n picks = []\r\n for _ in range(num_examples):\r\n pick = random.randint(0, len(dataset)-1)\r\n while pick in picks:\r\n pick = random.randint(0, len(dataset)-1)\r\n picks.append(pick)\r\n \r\n df = pd.DataFrame(dataset[picks])\r\n display(HTML(df.to_html()))\r\n\r\nshow_random_elements(timit[\"train\"].remove_columns([\"file\"]))\r\n```\r\n\r\n## Expected results\r\n10 random different transcription phrases.\r\n\r\n## Actual results\r\n10 of the same transcription phrase \"Would such an act of refusal be useful?\"\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.4.1\r\n- Platform: macOS-10.15.7-x86_64-i386-64bit\r\n- Python version: 3.8.5\r\n- PyArrow version: not listed\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2913\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2912","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2912\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2912\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2912\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2912","id":996256005,"node_id":"PR_kwDODunzps4rvhgp","number":2912,"title":"Update link to Blog in docs 
footer","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631640194000,"updated_at":1631692763000,"closed_at":1631692763000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2912","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2912","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2912.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2912.patch"},"body":"Update link.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2912\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2911","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2911\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2911\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2911\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2911","id":996202598,"node_id":"PR_kwDODunzps4rvW7Y","number":2911,"title":"Fix exception 
chaining","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631636369000,"updated_at":1631804684000,"closed_at":1631804684000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2911","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2911","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2911.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2911.patch"},"body":"Fix exception chaining to avoid tracebacks with message: `During handling of the above exception, another exception occurred:`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2911\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2910","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2910\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2910\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2910\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2910","id":996149632,"node_id":"PR_kwDODunzps4rvL9N","number":2910,"title":"feat: \ud83c\udfb8 pass additional arguments to get private configs + 
info","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Included in https:\/\/github.com\/huggingface\/datasets\/pull\/2906"],"created_at":1631633059000,"updated_at":1631722749000,"closed_at":1631722746000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2910","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2910","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2910.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2910.patch"},"body":"`use_auth_token` can now be passed to the functions to get the configs\r\nor infos of private datasets on the hub","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2910\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2909","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2909\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2909\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2909\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2909","id":996002180,"node_id":"PR_kwDODunzps4rutdo","number":2909,"title":"fix anli 
splits","user":{"login":"zaidalyafeai","id":15667714,"node_id":"MDQ6VXNlcjE1NjY3NzE0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15667714?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/zaidalyafeai","html_url":"https:\/\/github.com\/zaidalyafeai","followers_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/followers","following_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/orgs","repos_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/repos","events_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631625035000,"updated_at":1631625035000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2909","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2909","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2909.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2909.patch"},"body":"I can't run the tests for dummy data, facing this error \r\n\r\n`ImportError while loading conftest '\/home\/zaid\/tmp\/fix_anli_splits\/datasets\/tests\/conftest.py'.\r\ntests\/conftest.py:10: in \r\n from datasets import config\r\nE ImportError: cannot import name 'config' from 'datasets' (unknown location)`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2909\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2908","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2908\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2908\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2908\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2908","id":995970612,"node_id":"PR_kwDODunzps4rumwW","number":2908,"title":"Update Zenodo metadata with creator names and 
affiliation","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631623177000,"updated_at":1631629765000,"closed_at":1631629765000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2908","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2908","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2908.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2908.patch"},"body":"This PR helps in prefilling author data when automatically generating the DOI after each release.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2908\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2907","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2907\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2907\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2907\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2907","id":995968152,"node_id":"PR_kwDODunzps4rumOy","number":2907,"title":"add story_cloze 
dataset","user":{"login":"zaidalyafeai","id":15667714,"node_id":"MDQ6VXNlcjE1NjY3NzE0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15667714?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/zaidalyafeai","html_url":"https:\/\/github.com\/zaidalyafeai","followers_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/followers","following_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/orgs","repos_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/repos","events_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631623013000,"updated_at":1631623013000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2907","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2907","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2907.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2907.patch"},"body":"@lhoestq I have spent some time but I still I can't succeed in correctly testing the dummy_data.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2907\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2906","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2906\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2906\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2906\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2906","id":995962905,"node_id":"PR_kwDODunzps4rulH-","number":2906,"title":"feat: \ud83c\udfb8 add a function to get a dataset config's split names","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> Should I 
add a section in https:\/\/github.com\/huggingface\/datasets\/blob\/master\/docs\/source\/load_hub.rst? (there is no section for get_dataset_infos)\r\n\r\nYes totally :) This tutorial should indeed mention this, given how fundamental it is"],"created_at":1631622682000,"updated_at":1632155739000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2906","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2906","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2906.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2906.patch"},"body":"Also: pass additional arguments (use_auth_token) to get private configs + info of private datasets on the hub\r\n\r\nQuestions:\r\n\r\n- I'm not sure how the versions work: I changed 1.12.1.dev0 to 1.12.1.dev1, was it correct?<\/strike> no -> reverted\r\n- Should I add a section in https:\/\/github.com\/huggingface\/datasets\/blob\/master\/docs\/source\/load_hub.rst? (there is no section for get_dataset_infos)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2906\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2905","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2905\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2905\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2905\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2905","id":995843964,"node_id":"PR_kwDODunzps4ruL5X","number":2905,"title":"Update BibTeX entry","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631614577000,"updated_at":1631622337000,"closed_at":1631622337000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2905","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2905","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2905.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2905.patch"},"body":"Update BibTeX 
entry.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2905\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2904","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2904\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2904\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2904\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2904","id":995814222,"node_id":"I_kwDODunzps47WutO","number":2904,"title":"FORCE_REDOWNLOAD does not work","user":{"login":"anoopkatti","id":5278299,"node_id":"MDQ6VXNlcjUyNzgyOTk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5278299?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/anoopkatti","html_url":"https:\/\/github.com\/anoopkatti","followers_url":"https:\/\/api.github.com\/users\/anoopkatti\/followers","following_url":"https:\/\/api.github.com\/users\/anoopkatti\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/anoopkatti\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/anoopkatti\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/anoopkatti\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/anoopkatti\/orgs","repos_url":"https:\/\/api.github.com\/users\/anoopkatti\/repos","events_url":"https:\/\/api.github.com\/users\/anoopkatti\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/anoopkatti\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Thanks for reporting. The error seems to happen only if you use compressed files.\r\n\r\nThe second dataset is prepared in another dataset cache directory than the first - which is normal, since the source file is different. 
However, it doesn't uncompress the new data file because it finds the old uncompressed data in the extraction cache directory.\r\n\r\nIf we fix the extraction cache mechanism to uncompress a local file if it changed then it should fix the issue.\r\nCurrently the extraction cache mechanism only takes into account the path of the compressed file, which is an issue."],"created_at":1631612726000,"updated_at":1632129275000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nWith GenerateMode.FORCE_REDOWNLOAD, the documentation says \r\n +------------------------------------+-----------+---------+\r\n | | Downloads | Dataset |\r\n +====================================+===========+=========+\r\n | `REUSE_DATASET_IF_EXISTS` (default)| Reuse | Reuse |\r\n +------------------------------------+-----------+---------+\r\n | `REUSE_CACHE_IF_EXISTS` | Reuse | Fresh |\r\n +------------------------------------+-----------+---------+\r\n | `FORCE_REDOWNLOAD` | Fresh | Fresh |\r\n +------------------------------------+-----------+---------+\r\n\r\nHowever, the old dataset is loaded even when FORCE_REDOWNLOAD is chosen.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n\r\nimport pandas as pd\r\nfrom datasets import load_dataset, GenerateMode\r\npd.DataFrame(range(5), columns=['numbers']).to_csv('\/tmp\/test.tsv.gz', index=False)\r\nee = load_dataset('csv', data_files=['\/tmp\/test.tsv.gz'], delimiter='\\t', split='train', download_mode=GenerateMode.FORCE_REDOWNLOAD)\r\nprint(ee)\r\npd.DataFrame(range(10), columns=['numerals']).to_csv('\/tmp\/test.tsv.gz', index=False)\r\nee = load_dataset('csv', data_files=['\/tmp\/test.tsv.gz'], delimiter='\\t', split='train', download_mode=GenerateMode.FORCE_REDOWNLOAD)\r\nprint(ee)\r\n\r\n```\r\n\r\n## Expected results\r\nDataset({\r\n features: ['numbers'],\r\n num_rows: 5\r\n})\r\nDataset({\r\n features: ['numerals'],\r\n num_rows: 10\r\n})\r\n\r\n## Actual results\r\nDataset({\r\n features: ['numbers'],\r\n num_rows: 5\r\n})\r\nDataset({\r\n features: ['numbers'],\r\n num_rows: 5\r\n})\r\n\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.8.0\r\n- Platform: Linux-4.14.181-108.257.amzn1.x86_64-x86_64-with-glibc2.10\r\n- Python version: 3.7.10\r\n- PyArrow version: 3.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2904\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2903","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2903\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2903\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2903\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2903","id":995715191,"node_id":"PR_kwDODunzps4rtxxV","number":2903,"title":"Fix xpathopen to accept positional 
arguments","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["thanks!"],"created_at":1631606570000,"updated_at":1631609481000,"closed_at":1631608847000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2903","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2903","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2903.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2903.patch"},"body":"Fix `xpathopen()` so that it also accepts positional arguments.\r\n\r\nFix #2901.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2903\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2902","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2902\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2902\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2902\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2902","id":995254216,"node_id":"MDU6SXNzdWU5OTUyNTQyMTY=","number":2902,"title":"Add WIT 
Dataset","user":{"login":"nateraw","id":32437151,"node_id":"MDQ6VXNlcjMyNDM3MTUx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32437151?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nateraw","html_url":"https:\/\/github.com\/nateraw","followers_url":"https:\/\/api.github.com\/users\/nateraw\/followers","following_url":"https:\/\/api.github.com\/users\/nateraw\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nateraw\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nateraw\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nateraw\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nateraw\/orgs","repos_url":"https:\/\/api.github.com\/users\/nateraw\/repos","events_url":"https:\/\/api.github.com\/users\/nateraw\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nateraw\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@hassiahk is working on it #2810 ","WikiMedia is now hosting the pixel values directly which should make it a lot easier!\r\nThe files can be found here:\r\nhttps:\/\/techblog.wikimedia.org\/2021\/09\/09\/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research\/\r\nhttps:\/\/analytics.wikimedia.org\/published\/datasets\/one-off\/caption_competition\/training\/image_pixels\/","> @hassiahk is working on it #2810\r\n\r\nThank you @bhavitvyamalik! Added this issue so we could track progress \ud83d\ude04 . Just linked the PR as well for visibility. 
"],"created_at":1631561929000,"updated_at":1631567400000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** *WIT*\r\n- **Description:** *Wikipedia-based Image Text Dataset*\r\n- **Paper:** *[WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning\r\n](https:\/\/arxiv.org\/abs\/2103.01913)*\r\n- **Data:** *https:\/\/github.com\/google-research-datasets\/wit*\r\n- **Motivation:** (excerpt from their Github README.md)\r\n\r\n> - The largest multimodal dataset (publicly available at the time of this writing) by the number of image-text examples.\r\n> - A massively multilingual dataset (first of its kind) with coverage for over 100+ languages.\r\n> - A collection of diverse set of concepts and real world entities.\r\n> - Brings forth challenging real-world test sets.\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2902\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2901","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2901\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2901\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2901\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2901","id":995232844,"node_id":"MDU6SXNzdWU5OTUyMzI4NDQ=","number":2901,"title":"Incompatibility with pytest","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Sorry, my bad... When implementing `xpathopen`, I just considered the use case in the COUNTER dataset... 
I'm fixing it!"],"created_at":1631560337000,"updated_at":1631608847000,"closed_at":1631608847000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\npytest complains about xpathopen \/ path.open(\"w\")\r\n\r\n## Steps to reproduce the bug\r\n\r\nCreate a test file, `test.py`:\r\n\r\n```python\r\nimport datasets as ds\r\ndef load_dataset():\r\n ds.load_dataset(\"counter\", split=\"train\", streaming=True)\r\n```\r\n\r\nAnd launch it with pytest:\r\n\r\n```bash\r\npython -m pytest test.py\r\n```\r\n\r\n## Expected results\r\n\r\nIt should give something like:\r\n\r\n```\r\ncollected 1 item\r\n\r\ntest.py . [100%]\r\n\r\n======= 1 passed in 3.15s =======\r\n```\r\n\r\n## Actual results\r\n\r\n```\r\n============================================================================================================================= test session starts ==============================================================================================================================\r\nplatform linux -- Python 3.8.11, pytest-6.2.5, py-1.10.0, pluggy-1.0.0\r\nrootdir: \/home\/slesage\/hf\/datasets-preview-backend, configfile: pyproject.toml\r\nplugins: anyio-3.3.1\r\ncollected 1 item\r\n\r\ntests\/queries\/test_rows.py . [100%]Traceback (most recent call last):\r\n File \"\/home\/slesage\/.pyenv\/versions\/3.8.11\/lib\/python3.8\/runpy.py\", line 194, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"\/home\/slesage\/.pyenv\/versions\/3.8.11\/lib\/python3.8\/runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/pytest\/__main__.py\", line 5, in \r\n raise SystemExit(pytest.console_main())\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/_pytest\/config\/__init__.py\", line 185, in console_main\r\n code = main()\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/_pytest\/config\/__init__.py\", line 162, in main\r\n ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main(\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/pluggy\/_hooks.py\", line 265, in __call__\r\n return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/pluggy\/_manager.py\", line 80, in _hookexec\r\n return self._inner_hookexec(hook_name, methods, kwargs, firstresult)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/pluggy\/_callers.py\", line 60, in _multicall\r\n return outcome.get_result()\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/pluggy\/_result.py\", line 60, in get_result\r\n raise ex[1].with_traceback(ex[2])\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/pluggy\/_callers.py\", line 39, in _multicall\r\n res = hook_impl.function(*args)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/_pytest\/main.py\", line 316, in pytest_cmdline_main\r\n return wrap_session(config, _main)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/_pytest\/main.py\", line 304, in wrap_session\r\n config.hook.pytest_sessionfinish(\r\n File 
\"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/pluggy\/_hooks.py\", line 265, in __call__\r\n return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/pluggy\/_manager.py\", line 80, in _hookexec\r\n return self._inner_hookexec(hook_name, methods, kwargs, firstresult)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/pluggy\/_callers.py\", line 55, in _multicall\r\n gen.send(outcome)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/_pytest\/terminal.py\", line 803, in pytest_sessionfinish\r\n outcome.get_result()\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/pluggy\/_result.py\", line 60, in get_result\r\n raise ex[1].with_traceback(ex[2])\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/pluggy\/_callers.py\", line 39, in _multicall\r\n res = hook_impl.function(*args)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/_pytest\/cacheprovider.py\", line 428, in pytest_sessionfinish\r\n config.cache.set(\"cache\/nodeids\", sorted(self.cached_nodeids))\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/_pytest\/cacheprovider.py\", line 188, in set\r\n f = path.open(\"w\")\r\nTypeError: xpathopen() takes 1 positional argument but 2 were given\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.12.0\r\n- Platform: Linux-5.11.0-1017-aws-x86_64-with-glibc2.29\r\n- Python version: 3.8.11\r\n- PyArrow version: 4.0.1\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2901\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2900","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2900\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2900\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2900\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2900","id":994922580,"node_id":"MDExOlB1bGxSZXF1ZXN0NzMyNzczNDkw","number":2900,"title":"Fix null sequence 
encoding","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631541308000,"updated_at":1631542663000,"closed_at":1631542662000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2900","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2900","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2900.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2900.patch"},"body":"The Sequence feature encoding was failing when a `None` sequence was used in a dataset.\r\n\r\nFix https:\/\/github.com\/huggingface\/datasets\/issues\/2892","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2900\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2899","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2899\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2899\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2899\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2899","id":994082432,"node_id":"MDU6SXNzdWU5OTQwODI0MzI=","number":2899,"title":"Dataset","user":{"login":"rcacho172","id":90449239,"node_id":"MDQ6VXNlcjkwNDQ5MjM5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/90449239?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rcacho172","html_url":"https:\/\/github.com\/rcacho172","followers_url":"https:\/\/api.github.com\/users\/rcacho172\/followers","following_url":"https:\/\/api.github.com\/users\/rcacho172\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rcacho172\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rcacho172\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rcacho172\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rcacho172\/orgs","repos_url":"https:\/\/api.github.com\/users\/rcacho172\/repos","events_url":"https:\/\/api.github.com\/users\/rcacho172\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rcacho172\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631432333000,"updated_at":1631463135000,"closed_at":1631463135000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\n- **Name:** *name of the dataset*\n- **Description:** *short description of the dataset (or link to social media or blog post)*\n- **Paper:** *link to the dataset paper if available*\n- **Data:** *link to the Github repository or current dataset location*\n- **Motivation:** *what are some good reasons to have this dataset*\n\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2899\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2898","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2898\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2898\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2898\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2898","id":994032814,"node_id":"MDU6SXNzdWU5OTQwMzI4MTQ=","number":2898,"title":"Hug 
emoji","user":{"login":"Jackg-08","id":90539794,"node_id":"MDQ6VXNlcjkwNTM5Nzk0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/90539794?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Jackg-08","html_url":"https:\/\/github.com\/Jackg-08","followers_url":"https:\/\/api.github.com\/users\/Jackg-08\/followers","following_url":"https:\/\/api.github.com\/users\/Jackg-08\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Jackg-08\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Jackg-08\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Jackg-08\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Jackg-08\/orgs","repos_url":"https:\/\/api.github.com\/users\/Jackg-08\/repos","events_url":"https:\/\/api.github.com\/users\/Jackg-08\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Jackg-08\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631417271000,"updated_at":1631463193000,"closed_at":1631463193000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\n- **Name:** *name of the dataset*\n- **Description:** *short description of the dataset (or link to social media or blog post)*\n- **Paper:** *link to the dataset paper if available*\n- **Data:** *link to the Github repository or current dataset location*\n- **Motivation:** *what are some good reasons to have this dataset*\n\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2898\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2897","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2897\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2897\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2897\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2897","id":993798386,"node_id":"MDExOlB1bGxSZXF1ZXN0NzMxOTA0ODk4","number":2897,"title":"Add OpenAI's HumanEval 
dataset","user":{"login":"lvwerra","id":8264887,"node_id":"MDQ6VXNlcjgyNjQ4ODc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8264887?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lvwerra","html_url":"https:\/\/github.com\/lvwerra","followers_url":"https:\/\/api.github.com\/users\/lvwerra\/followers","following_url":"https:\/\/api.github.com\/users\/lvwerra\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lvwerra\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lvwerra\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lvwerra\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lvwerra\/orgs","repos_url":"https:\/\/api.github.com\/users\/lvwerra\/repos","events_url":"https:\/\/api.github.com\/users\/lvwerra\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lvwerra\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I just fixed the class name, and added `[More Information Needed]` in empty sections in case people want to complete the dataset card :)"],"created_at":1631353067000,"updated_at":1631804531000,"closed_at":1631804531000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2897","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2897","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2897.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2897.patch"},"body":"This PR adds OpenAI's [HumanEval](https:\/\/github.com\/openai\/human-eval) dataset. The dataset consists of 164 handcrafted programming problems with solutions and unittests to verify solution. 
This dataset is useful to evaluate code generation models.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2897\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2896","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2896\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2896\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2896\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2896","id":993613113,"node_id":"MDExOlB1bGxSZXF1ZXN0NzMxNzcwMTE3","number":2896,"title":"add multi-proc in `to_csv`","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631309709000,"updated_at":1631309709000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2896","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2896","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2896.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2896.patch"},"body":"This PR extends the multi-proc method used in #2747 for`to_json` to `to_csv` as well. \r\n\r\nResults on my machine post benchmarking on `ascent_kb` dataset (giving ~45% improvement when compared to num_proc = 1):\r\n```\r\nTime taken on 1 num_proc, 10000 batch_size 674.2055702209473\r\nTime taken on 4 num_proc, 10000 batch_size 425.6553490161896\r\n\r\nTime taken on 1 num_proc, 50000 batch_size 623.5897650718689\r\nTime taken on 4 num_proc, 50000 batch_size 380.0402421951294\r\n\r\nTime taken on 4 num_proc, 100000 batch_size 361.7168130874634\r\n```\r\nThis is a WIP as writing tests is pending for this PR. 
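A sketch of the intended `to_csv` call, mirroring the multi-proc API added to `to_json` in #2747 (the `num_proc` and `batch_size` names are taken from the benchmark above; treat the exact signature as an assumption until the PR is merged):

```python
from datasets import Dataset

# Toy dataset just to show the call; any Dataset works.
ds = Dataset.from_dict({"id": list(range(100_000)), "text": ["row"] * 100_000})

# Write the CSV with 4 worker processes, encoding 10,000 rows per batch.
ds.to_csv("out.csv", num_proc=4, batch_size=10_000)
```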
\r\n\r\nI'm also exploring [this](https:\/\/arrow.apache.org\/docs\/python\/csv.html#incremental-writing) approach for which I'm using `pyarrow-5.0.0`.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2896\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2895","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2895\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2895\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2895\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2895","id":993462274,"node_id":"MDExOlB1bGxSZXF1ZXN0NzMxNjQ0NTY2","number":2895,"title":"Use pyarrow.Table.replace_schema_metadata instead of pyarrow.Table.cast","user":{"login":"arsarabi","id":12345848,"node_id":"MDQ6VXNlcjEyMzQ1ODQ4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12345848?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/arsarabi","html_url":"https:\/\/github.com\/arsarabi","followers_url":"https:\/\/api.github.com\/users\/arsarabi\/followers","following_url":"https:\/\/api.github.com\/users\/arsarabi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/arsarabi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/arsarabi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/arsarabi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/arsarabi\/orgs","repos_url":"https:\/\/api.github.com\/users\/arsarabi\/repos","events_url":"https:\/\/api.github.com\/users\/arsarabi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/arsarabi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631296617000,"updated_at":1632264601000,"closed_at":1632212315000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2895","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2895","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2895.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2895.patch"},"body":"This PR partially addresses #2252.\r\n\r\n``update_metadata_with_features`` uses ``Table.cast`` which slows down ``load_from_disk`` (and possibly other methods that use it) for very large datasets. Since ``update_metadata_with_features`` is only updating the schema metadata, it makes more sense to use ``pyarrow.Table.replace_schema_metadata`` which is much faster. 
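A minimal pyarrow illustration of the difference (illustrative only; the metadata payload shown is a stand-in, not the exact structure `datasets` writes):

```python
import json
import pyarrow as pa

table = pa.table({"text": ["a", "b"]})

# replace_schema_metadata only swaps the schema-level metadata and reuses the
# existing column buffers, whereas cast() would rebuild and validate every column.
metadata = {b"huggingface": json.dumps({"info": {"features": {}}}).encode("utf-8")}
table = table.replace_schema_metadata(metadata)
print(table.schema.metadata)
```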
This PR adds a ``replace_schema_metadata`` method to all table classes, and modifies ``update_metadata_with_features`` to use it instead of ``cast``.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2895\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2894","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2894\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2894\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2894\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2894","id":993375654,"node_id":"MDExOlB1bGxSZXF1ZXN0NzMxNTcxODc5","number":2894,"title":"Fix COUNTER dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631290049000,"updated_at":1631291265000,"closed_at":1631291264000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2894","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2894","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2894.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2894.patch"},"body":"Fix filename generating `FileNotFoundError`.\r\n\r\nRelated to #2866.\r\nCC: @severo.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2894\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2893","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2893\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2893\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2893\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2893","id":993342781,"node_id":"MDExOlB1bGxSZXF1ZXN0NzMxNTQ0NDQz","number":2893,"title":"add mbpp 
dataset","user":{"login":"lvwerra","id":8264887,"node_id":"MDQ6VXNlcjgyNjQ4ODc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8264887?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lvwerra","html_url":"https:\/\/github.com\/lvwerra","followers_url":"https:\/\/api.github.com\/users\/lvwerra\/followers","following_url":"https:\/\/api.github.com\/users\/lvwerra\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lvwerra\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lvwerra\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lvwerra\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lvwerra\/orgs","repos_url":"https:\/\/api.github.com\/users\/lvwerra\/repos","events_url":"https:\/\/api.github.com\/users\/lvwerra\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lvwerra\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I think it's fine to have the original schema"],"created_at":1631287650000,"updated_at":1631784942000,"closed_at":1631784942000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2893","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2893","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2893.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2893.patch"},"body":"This PR adds the mbpp dataset introduced by Google [here](https:\/\/github.com\/google-research\/google-research\/tree\/master\/mbpp) as mentioned in #2816.\r\n\r\nThe dataset contain two versions: a full and a sanitized one. They have a slightly different schema and it is current state the loading preserves the original schema. An open question is whether to harmonize the two schemas when loading the dataset or to preserve the original one. 
Since not all fields are overlapping the schema will not be exactly the same.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2893\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2892","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2892\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2892\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2892\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2892","id":993274572,"node_id":"MDU6SXNzdWU5OTMyNzQ1NzI=","number":2892,"title":"Error when encoding a dataset with None objects with a Sequence feature","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This has been fixed by https:\/\/github.com\/huggingface\/datasets\/pull\/2900\r\nWe're doing a new release 1.12 today to make the fix available :)"],"created_at":1631283103000,"updated_at":1631542693000,"closed_at":1631542662000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"There is an error when encoding a dataset with None objects with a Sequence feature\r\n\r\nTo reproduce:\r\n```python\r\nfrom datasets import Dataset, Features, Value, Sequence\r\ndata = {\"a\": [[0], None]}\r\nfeatures = Features({\"a\": Sequence(Value(\"int32\"))})\r\ndataset = Dataset.from_dict(data, features=features)\r\n```\r\nraises\r\n\r\n```python\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n in \r\n 2 data = {\"a\": [[0], None]}\r\n 3 features = Features({\"a\": Sequence(Value(\"int32\"))})\r\n----> 4 dataset = Dataset.from_dict(data, features=features)\r\n[...]\r\n~\/datasets\/features.py in encode_nested_example(schema, obj)\r\n 888 if isinstance(obj, str): # don't interpret a string as a list\r\n 889 raise ValueError(\"Got a string but expected a list instead: '{}'\".format(obj))\r\n--> 890 return [encode_nested_example(schema.feature, o) for o in obj]\r\n 891 # Object with special encoding:\r\n 892 # ClassLabel will convert from string to int, 
TranslationVariableLanguages does some checks\r\n\r\nTypeError: 'NoneType' object is not iterable\r\n```\r\n\r\nInstead, if should run without error, as if the `features` were not passed","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2892\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2891","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2891\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2891\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2891\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2891","id":993161984,"node_id":"MDExOlB1bGxSZXF1ZXN0NzMxMzkwNjM2","number":2891,"title":"[WIP] Allow dynamic first dimension for ArrayXD","user":{"login":"rpowalski","id":10357417,"node_id":"MDQ6VXNlcjEwMzU3NDE3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10357417?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rpowalski","html_url":"https:\/\/github.com\/rpowalski","followers_url":"https:\/\/api.github.com\/users\/rpowalski\/followers","following_url":"https:\/\/api.github.com\/users\/rpowalski\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rpowalski\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rpowalski\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rpowalski\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rpowalski\/orgs","repos_url":"https:\/\/api.github.com\/users\/rpowalski\/repos","events_url":"https:\/\/api.github.com\/users\/rpowalski\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rpowalski\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631274772000,"updated_at":1632142453000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2891","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2891","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2891.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2891.patch"},"body":"Add support for dynamic first dimension for ArrayXD features. See issue [#887](https:\/\/github.com\/huggingface\/datasets\/issues\/887).\r\nFollowing changes allow for `to_pylist` method of `ArrayExtensionArray` to return a list of numpy arrays where fist dimension can vary.\r\n\r\n@lhoestq Could you suggest how you want to extend test suit. 
For now I added only very limited testing.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2891\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2890","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2890\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2890\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2890\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2890","id":993074102,"node_id":"MDU6SXNzdWU5OTMwNzQxMDI=","number":2890,"title":"0x290B112ED1280537B24Ee6C268a004994a16e6CE","user":{"login":"rcacho172","id":90449239,"node_id":"MDQ6VXNlcjkwNDQ5MjM5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/90449239?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rcacho172","html_url":"https:\/\/github.com\/rcacho172","followers_url":"https:\/\/api.github.com\/users\/rcacho172\/followers","following_url":"https:\/\/api.github.com\/users\/rcacho172\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rcacho172\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rcacho172\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rcacho172\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rcacho172\/orgs","repos_url":"https:\/\/api.github.com\/users\/rcacho172\/repos","events_url":"https:\/\/api.github.com\/users\/rcacho172\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rcacho172\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631267477000,"updated_at":1631274329000,"closed_at":1631274329000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\n- **Name:** *name of the dataset*\n- **Description:** *short description of the dataset (or link to social media or blog post)*\n- **Paper:** *link to the dataset paper if available*\n- **Data:** *link to the Github repository or current dataset location*\n- **Motivation:** *what are some good reasons to have this dataset*\n\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2890\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2889","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2889\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2889\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2889\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2889","id":992968382,"node_id":"MDU6SXNzdWU5OTI5NjgzODI=","number":2889,"title":"Coc","user":{"login":"Bwiggity","id":90444264,"node_id":"MDQ6VXNlcjkwNDQ0MjY0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/90444264?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Bwiggity","html_url":"https:\/\/github.com\/Bwiggity","followers_url":"https:\/\/api.github.com\/users\/Bwiggity\/followers","following_url":"https:\/\/api.github.com\/users\/Bwiggity\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Bwiggity\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Bwiggity\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Bwiggity\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Bwiggity\/orgs","repos_url":"https:\/\/api.github.com\/users\/Bwiggity\/repos","events_url":"https:\/\/api.github.com\/users\/Bwiggity\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Bwiggity\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631259127000,"updated_at":1631274354000,"closed_at":1631274354000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\n- **Name:** *name of the dataset*\n- **Description:** *short description of the dataset (or link to social media or blog post)*\n- **Paper:** *link to the dataset paper if available*\n- **Data:** *link to the Github repository or current dataset location*\n- **Motivation:** *what are some good reasons to have this dataset*\n\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2889\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2888","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2888\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2888\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2888\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2888","id":992676535,"node_id":"MDU6SXNzdWU5OTI2NzY1MzU=","number":2888,"title":"v1.11.1 release 
date","user":{"login":"fcakyon","id":34196005,"node_id":"MDQ6VXNlcjM0MTk2MDA1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/34196005?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/fcakyon","html_url":"https:\/\/github.com\/fcakyon","followers_url":"https:\/\/api.github.com\/users\/fcakyon\/followers","following_url":"https:\/\/api.github.com\/users\/fcakyon\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/fcakyon\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/fcakyon\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/fcakyon\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/fcakyon\/orgs","repos_url":"https:\/\/api.github.com\/users\/fcakyon\/repos","events_url":"https:\/\/api.github.com\/users\/fcakyon\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/fcakyon\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892912,"node_id":"MDU6TGFiZWwxOTM1ODkyOTEy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/question","name":"question","color":"d876e3","default":true,"description":"Further information is requested"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Probably 1.12 on monday :)\r\n","@albertvillanova i think this issue is still valid and should not be closed till `>1.11.0` is published :)"],"created_at":1631224395000,"updated_at":1631477915000,"closed_at":1631463339000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hello, i need to use latest features in one of my packages but there have been no new datasets release since 2 months ago.\r\n\r\nWhen do you plan to publush v1.11.1 release?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2888\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2887","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2887\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2887\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2887\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2887","id":992576305,"node_id":"MDExOlB1bGxSZXF1ZXN0NzMwODg4MTU3","number":2887,"title":"#2837 Use cache folder for 
lockfile","user":{"login":"Dref360","id":8976546,"node_id":"MDQ6VXNlcjg5NzY1NDY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8976546?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Dref360","html_url":"https:\/\/github.com\/Dref360","followers_url":"https:\/\/api.github.com\/users\/Dref360\/followers","following_url":"https:\/\/api.github.com\/users\/Dref360\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Dref360\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Dref360\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Dref360\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Dref360\/orgs","repos_url":"https:\/\/api.github.com\/users\/Dref360\/repos","events_url":"https:\/\/api.github.com\/users\/Dref360\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Dref360\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631217356000,"updated_at":1632231578000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2887","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2887","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2887.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2887.patch"},"body":"Fixes #2837 \r\n\r\nUse a cache folder directory to store the FileLock.\r\n\r\nThe issue was that the lock file was in a readonly folder.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2887\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2886","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2886\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2886\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2886\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2886","id":992534632,"node_id":"MDU6SXNzdWU5OTI1MzQ2MzI=","number":2886,"title":"Hj","user":{"login":"Noorasri","id":90416328,"node_id":"MDQ6VXNlcjkwNDE2MzI4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/90416328?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Noorasri","html_url":"https:\/\/github.com\/Noorasri","followers_url":"https:\/\/api.github.com\/users\/Noorasri\/followers","following_url":"https:\/\/api.github.com\/users\/Noorasri\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Noorasri\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Noorasri\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Noorasri\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Noorasri\/orgs","repos_url":"https:\/\/api.github.com\/users\/Noorasri\/repos","events_url":"https:\/\/api.github.com\/users\/Noorasri\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Noorasri\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631213932000,"updated_at":1631274389000,"closed_at":1631274389000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":null,"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2886\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2885","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2885\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2885\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2885\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2885","id":992160544,"node_id":"MDU6SXNzdWU5OTIxNjA1NDQ=","number":2885,"title":"Adding an Elastic Search index to a 
Dataset","user":{"login":"MotzWanted","id":36195371,"node_id":"MDQ6VXNlcjM2MTk1Mzcx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36195371?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/MotzWanted","html_url":"https:\/\/github.com\/MotzWanted","followers_url":"https:\/\/api.github.com\/users\/MotzWanted\/followers","following_url":"https:\/\/api.github.com\/users\/MotzWanted\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/MotzWanted\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/MotzWanted\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/MotzWanted\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/MotzWanted\/orgs","repos_url":"https:\/\/api.github.com\/users\/MotzWanted\/repos","events_url":"https:\/\/api.github.com\/users\/MotzWanted\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/MotzWanted\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi, is this bug deterministic in your poetry env ? I mean, does it always stop at 90% or is it random ?\r\n\r\nAlso, can you try using another version of Elasticsearch ? Maybe there's an issue with the one of you poetry env"],"created_at":1631190099000,"updated_at":1632128781000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nWhen trying to index documents from the squad dataset, the connection to ElasticSearch seems to break:\r\n\r\nReusing dataset squad (\/Users\/andreasmotz\/.cache\/huggingface\/datasets\/squad\/plain_text\/1.0.0\/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453)\r\n 90%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2589 | 9501\/10570 [00:01<00:00, 6335.61docs\/s]\r\n\r\nNo error is thrown, but the indexing breaks ~90%.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n# Sample code to reproduce the bug\r\nfrom datasets import load_dataset\r\nfrom elasticsearch import Elasticsearch\r\nes = Elasticsearch()\r\nsquad = load_dataset('squad', split='validation')\r\nindex_name = \"corpus\"\r\nes_config = {\r\n \"settings\": {\r\n \"number_of_shards\": 1,\r\n \"analysis\": {\"analyzer\": {\"stop_standard\": {\"type\": \"standard\", \" stopwords\": \"_english_\"}}},\r\n },\r\n \"mappings\": {\r\n \"properties\": {\r\n \"idx\" : {\"type\" : \"keyword\"},\r\n \"title\" : {\"type\" : \"keyword\"},\r\n \"text\": {\r\n \"type\": \"text\",\r\n \"analyzer\": \"standard\",\r\n \"similarity\": \"BM25\"\r\n },\r\n }\r\n },\r\n}\r\nclass IndexBuilder:\r\n \"\"\"\r\n Elastic search indexing of a corpus\r\n \"\"\"\r\n def __init__(\r\n self,\r\n *args,\r\n #corpus : None,\r\n dataset : squad,\r\n index_name = str,\r\n query = str,\r\n config = dict,\r\n **kwargs,\r\n ):\r\n #instantiate HuggingFace dataset\r\n self.dataset = dataset\r\n #instantiate ElasticSearch config\r\n self.config = config\r\n self.es = Elasticsearch()\r\n self.index_name = index_name\r\n self.query = query\r\n 
def elastic_index(self):\r\n print(self.es.info)\r\n self.es.indices.delete(index=self.index_name, ignore=[400, 404])\r\n search_index = self.dataset.add_elasticsearch_index(column='context', host='localhost', port='9200', es_index_name=self.index_name, es_index_config=self.config)\r\n return search_index\r\n def exact_match_method(self, index):\r\n scores, retrieved_examples = index.get_nearest_examples('context', query=self.query, k=1)\r\n return scores, retrieved_examples\r\nif __name__ == \"__main__\":\r\n print(type(squad))\r\n Index = IndexBuilder(dataset=squad, index_name='corpus_index', query='Where was Chopin born?', config=es_config)\r\n search_index = Index.elastic_index()\r\n scores, examples = Index.exact_match_method(search_index)\r\n print(scores, examples)\r\n for name in squad.column_names:\r\n print(type(squad[name]))\r\n```\r\n\r\n## Environment info\r\nWe run the code in Poetry. This might be the issue, since the script runs successfully in our local environment.\r\n\r\nPoetry:\r\n- Python version: 3.8\r\n- PyArrow: 4.0.1\r\n- Elasticsearch: 7.13.4\r\n- datasets: 1.10.2\r\n\r\nLocal:\r\n- Python version: 3.8\r\n- PyArrow: 3.0.0\r\n- Elasticsearch: 7.7.1\r\n- datasets: 1.7.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2885\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2884","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2884\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2884\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2884\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2884","id":992135698,"node_id":"MDExOlB1bGxSZXF1ZXN0NzMwNTA4MTE1","number":2884,"title":"Add IC, SI, ER tasks to SUPERB","user":{"login":"anton-l","id":26864830,"node_id":"MDQ6VXNlcjI2ODY0ODMw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26864830?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/anton-l","html_url":"https:\/\/github.com\/anton-l","followers_url":"https:\/\/api.github.com\/users\/anton-l\/followers","following_url":"https:\/\/api.github.com\/users\/anton-l\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/anton-l\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/anton-l\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/anton-l\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/anton-l\/orgs","repos_url":"https:\/\/api.github.com\/users\/anton-l\/repos","events_url":"https:\/\/api.github.com\/users\/anton-l\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/anton-l\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Sorry for the late PR, uploading 10+GB files to the hub through a VPN was an adventure :sweat_smile: ","Thank you so much for adding these subsets @anton-l! \r\n\r\n> These datasets either require manual downloads or have broken\/unstable links. 
You can get all necessary archives in this repo: https:\/\/huggingface.co\/datasets\/anton-l\/superb_source_data_dumps\/tree\/main\r\nAre we allowed to make these datasets public or would that violate the terms of their use?","@lewtun These ones all have non-permissive licences, so the mirrored data I linked is open only to the HF org for now. But we could try contacting the authors to ask if they'd like to host these with us. \nFor example VoxCeleb1 now has direct links (the ones in the script) that don't require form submission and passwords, but they ban IPs after each download for some reason :(","> @lewtun These ones all have non-permissive licences, so the mirrored data I linked is open only to the HF org for now. But we could try contacting the authors to ask if they'd like to host these with us.\r\n> For example VoxCeleb1 now has direct links (the ones in the script) that don't require form submission and passwords, but they ban IPs after each download for some reason :(\r\n\r\nI think there would be a lot of value added if the authors would be willing to host their data on the HF Hub! As an end-user of `datasets`, I've found I'm more likely to explore a dataset if I'm able to quickly pull the subsets without needing a manual download. Perhaps we can tell them that the Hub offers several advantages like versioning and interactive exploration (with `datasets-viewer`)?"],"created_at":1631188563000,"updated_at":1632129478000,"closed_at":1632128449000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2884","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2884","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2884.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2884.patch"},"body":"This PR adds 3 additional classification tasks to SUPERB\r\n\r\n#### Intent Classification\r\nDataset URL seems to be down at the moment :( See the note below.\r\nS3PRL source: https:\/\/github.com\/s3prl\/s3prl\/blob\/master\/s3prl\/downstream\/fluent_commands\/dataset.py\r\nInstructions: https:\/\/github.com\/s3prl\/s3prl\/tree\/master\/s3prl\/downstream#ic-intent-classification---fluent-speech-commands\r\n\r\n#### Speaker Identification\r\nManual download script:\r\n```\r\nmkdir VoxCeleb1\r\ncd VoxCeleb1\r\n \r\nwget https:\/\/thor.robots.ox.ac.uk\/~vgg\/data\/voxceleb\/vox1a\/vox1_dev_wav_partaa\r\nwget https:\/\/thor.robots.ox.ac.uk\/~vgg\/data\/voxceleb\/vox1a\/vox1_dev_wav_partab\r\nwget https:\/\/thor.robots.ox.ac.uk\/~vgg\/data\/voxceleb\/vox1a\/vox1_dev_wav_partac\r\nwget https:\/\/thor.robots.ox.ac.uk\/~vgg\/data\/voxceleb\/vox1a\/vox1_dev_wav_partad\r\ncat vox1_dev* > vox1_dev_wav.zip\r\nunzip vox1_dev_wav.zip\r\n \r\nwget https:\/\/thor.robots.ox.ac.uk\/~vgg\/data\/voxceleb\/vox1a\/vox1_test_wav.zip\r\nunzip vox1_test_wav.zip\r\n \r\n# download the official SUPERB train-dev-test split\r\nwget https:\/\/raw.githubusercontent.com\/s3prl\/s3prl\/master\/s3prl\/downstream\/voxceleb1\/veri_test_class.txt\r\n```\r\nS3PRL source: https:\/\/github.com\/s3prl\/s3prl\/blob\/master\/s3prl\/downstream\/voxceleb1\/dataset.py\r\nInstructions: https:\/\/github.com\/s3prl\/s3prl\/tree\/master\/s3prl\/downstream#sid-speaker-identification\r\n\r\n#### Intent Classification\r\nManual download requires going through a slow application process, see the note below.\r\nS3PRL source: 
https:\/\/github.com\/s3prl\/s3prl\/blob\/master\/s3prl\/downstream\/emotion\/IEMOCAP_preprocess.py\r\nInstructions: https:\/\/github.com\/s3prl\/s3prl\/tree\/master\/s3prl\/downstream#er-emotion-recognition\r\n\r\n#### :warning: Note\r\nThese datasets either require manual downloads or have broken\/unstable links. You can get all necessary archives in this repo: https:\/\/huggingface.co\/datasets\/anton-l\/superb_source_data_dumps\/tree\/main","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2884\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2883","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2883\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2883\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2883\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2883","id":991969875,"node_id":"MDExOlB1bGxSZXF1ZXN0NzMwMzYzNTQz","number":2883,"title":"Fix data URLs and metadata in DocRED dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631177734000,"updated_at":1631532271000,"closed_at":1631532271000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2883","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2883","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2883.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2883.patch"},"body":"The host of `docred` dataset has updated the `dev` data file. 
This PR:\r\n- Updates the dev URL\r\n- Updates dataset metadata\r\n\r\nThis PR also fixes the URL of the `train_distant` split, which was wrong.\r\n\r\nFix #2882.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2883\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2882","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2882\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2882\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2882\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2882","id":991800141,"node_id":"MDU6SXNzdWU5OTE4MDAxNDE=","number":2882,"title":"`load_dataset('docred')` results in a `NonMatchingChecksumError` ","user":{"login":"tmpr","id":51313597,"node_id":"MDQ6VXNlcjUxMzEzNTk3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/51313597?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tmpr","html_url":"https:\/\/github.com\/tmpr","followers_url":"https:\/\/api.github.com\/users\/tmpr\/followers","following_url":"https:\/\/api.github.com\/users\/tmpr\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tmpr\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tmpr\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tmpr\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tmpr\/orgs","repos_url":"https:\/\/api.github.com\/users\/tmpr\/repos","events_url":"https:\/\/api.github.com\/users\/tmpr\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tmpr\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @tmpr, thanks for reporting.\r\n\r\nTwo weeks ago (23th Aug), the host of the source `docred` dataset updated one of the files (`dev.json`): you can see it [here](https:\/\/drive.google.com\/drive\/folders\/1c5-0YwnoJx8NS6CV2f-NoTHR__BdkNqw).\r\n\r\nTherefore, the checksum needs to be updated.\r\n\r\nNormally, in the meantime, you could avoid the error by passing `ignore_verifications=True` to `load_dataset`. 
However, as the old link points to a non-existing file, the link must be updated too.\r\n\r\nI'm fixing all this.\r\n\r\n"],"created_at":1631166902000,"updated_at":1631532270000,"closed_at":1631532270000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nI get consistent `NonMatchingChecksumError: Checksums didn't match for dataset source files` errors when trying to execute `datasets.load_dataset('docred')`.\r\n\r\n## Steps to reproduce the bug\r\nIt is quasi only this code:\r\n```python\r\nimport datasets\r\ndata = datasets.load_dataset('docred')\r\n```\r\n\r\n## Expected results\r\nThe DocRED dataset should be loaded without any problems.\r\n\r\n## Actual results\r\n```\r\nNonMatchingChecksumError Traceback (most recent call last)\r\n in \r\n----> 1 d = datasets.load_dataset('docred')\r\n\r\n~\/anaconda3\/lib\/python3.8\/site-packages\/datasets\/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)\r\n 845 \r\n 846 # Download and prepare data\r\n--> 847 builder_instance.download_and_prepare(\r\n 848 download_config=download_config,\r\n 849 download_mode=download_mode,\r\n\r\n~\/anaconda3\/lib\/python3.8\/site-packages\/datasets\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)\r\n 613 logger.warning(\"HF google storage unreachable. Downloading and preparing it from source\")\r\n 614 if not downloaded_from_gcs:\r\n--> 615 self._download_and_prepare(\r\n 616 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 617 )\r\n\r\n~\/anaconda3\/lib\/python3.8\/site-packages\/datasets\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 673 # Checksums verification\r\n 674 if verify_infos:\r\n--> 675 verify_checksums(\r\n 676 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), \"dataset source files\"\r\n 677 )\r\n\r\n~\/anaconda3\/lib\/python3.8\/site-packages\/datasets\/utils\/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)\r\n 38 if len(bad_urls) > 0:\r\n 39 error_msg = \"Checksums didn't match\" + for_verification_name + \":\\n\"\r\n---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\n 41 logger.info(\"All the checksums matched successfully\" + for_verification_name)\r\n 42 \r\n\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https:\/\/drive.google.com\/uc?export=download&id=1fDmfUUo5G7gfaoqWWvK81u08m71TK2g7']\r\n```\r\n\r\n## Environment info\r\n- `datasets` version: 1.11.0\r\n- Platform: Linux-5.11.0-7633-generic-x86_64-with-glibc2.10\r\n- Python version: 3.8.5\r\n- PyArrow version: 5.0.0\r\n\r\nThis error also happened on my Windows-partition, after freshly installing python 3.9 and `datasets`.\r\n\r\n## Remarks\r\n\r\n- I have already called `rm -rf \/home\/\/.cache\/huggingface`, i.e., I have tried clearing the cache.\r\n- The problem does not exist for other datasets, i.e., it seems to be DocRED-specific.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2882\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2881","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2881\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2881\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2881\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2881","id":991639142,"node_id":"MDExOlB1bGxSZXF1ZXN0NzMwMDc1OTAy","number":2881,"title":"Add BIOSSES dataset","user":{"login":"bwang482","id":6764450,"node_id":"MDQ6VXNlcjY3NjQ0NTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6764450?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bwang482","html_url":"https:\/\/github.com\/bwang482","followers_url":"https:\/\/api.github.com\/users\/bwang482\/followers","following_url":"https:\/\/api.github.com\/users\/bwang482\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bwang482\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bwang482\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bwang482\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bwang482\/orgs","repos_url":"https:\/\/api.github.com\/users\/bwang482\/repos","events_url":"https:\/\/api.github.com\/users\/bwang482\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bwang482\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631147736000,"updated_at":1631542840000,"closed_at":1631542840000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2881","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2881","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2881.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2881.patch"},"body":"Adding the biomedical semantic sentence similarity dataset, BIOSSES, listed in \"Biomedical Datasets - BigScience Workshop 2021\"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2881\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2880","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2880\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2880\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2880\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2880","id":990877940,"node_id":"MDExOlB1bGxSZXF1ZXN0NzI5NDIzMDMy","number":2880,"title":"Extend support for streaming datasets that use pathlib.Path 
stem\/suffix","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631090563000,"updated_at":1631193209000,"closed_at":1631193209000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2880","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2880","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2880.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2880.patch"},"body":"This PR extends the support in streaming mode for datasets that use `pathlib`, by patching the properties `pathlib.Path.stem` and `pathlib.Path.suffix`.\r\n\r\nRelated to #2876, #2874, #2866.\r\n\r\nCC: @severo","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2880\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2879","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2879\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2879\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2879\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2879","id":990257404,"node_id":"MDU6SXNzdWU5OTAyNTc0MDQ=","number":2879,"title":"In v1.4.1, all TIMIT train transcripts are \"Would such an act of refusal be 
useful?\"","user":{"login":"rcgale","id":2279700,"node_id":"MDQ6VXNlcjIyNzk3MDA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2279700?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rcgale","html_url":"https:\/\/github.com\/rcgale","followers_url":"https:\/\/api.github.com\/users\/rcgale\/followers","following_url":"https:\/\/api.github.com\/users\/rcgale\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rcgale\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rcgale\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rcgale\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rcgale\/orgs","repos_url":"https:\/\/api.github.com\/users\/rcgale\/repos","events_url":"https:\/\/api.github.com\/users\/rcgale\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rcgale\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @rcgale, thanks for reporting.\r\n\r\nPlease note that this bug was fixed on `datasets` version 1.5.0: https:\/\/github.com\/huggingface\/datasets\/commit\/a23c73e526e1c30263834164f16f1fdf76722c8c#diff-f12a7a42d4673bb6c2ca5a40c92c29eb4fe3475908c84fd4ce4fad5dc2514878\r\n\r\nIf you update `datasets` version, that should work.\r\n\r\nOn the other hand, would it be possible for @patrickvonplaten to update the [blog post](https:\/\/huggingface.co\/blog\/fine-tune-wav2vec2-english) with the correct version of `datasets`?","I just proposed a change in the blog post.\r\n\r\nI had assumed there was a data format change that broke a previous version of the code, since presumably @patrickvonplaten tested the tutorial with the version they explicitly referenced. But that fix you linked suggests a problem in the code, which surprised me.\r\n\r\nI still wonder, though, is there a way for downloads to be invalidated server-side? If the client can announce its version during a download request, perhaps the server could reject known incompatibilities? It would save much valuable time if `datasets` raised an informative error on a known problem (\"Error: the requested data set requires `datasets>=1.5.0`.\"). This kind of API versioning is a prudent move anyhow, as there will surely come a time when you'll need to make a breaking change to data.","Also, thank you for a quick and helpful reply!"],"created_at":1631040825000,"updated_at":1631120119000,"closed_at":1631092348000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nUsing version 1.4.1 of `datasets`, TIMIT transcripts are all the same.\r\n\r\n## Steps to reproduce the bug\r\nI was following this tutorial\r\n- https:\/\/huggingface.co\/blog\/fine-tune-wav2vec2-english\r\n\r\nBut here's a distilled repro:\r\n```python\r\n!pip install datasets==1.4.1\r\nfrom datasets import load_dataset\r\ntimit = load_dataset(\"timit_asr\", cache_dir=\".\/temp\")\r\nunique_transcripts = set(timit[\"train\"][\"text\"])\r\nprint(unique_transcripts)\r\nassert len(unique_transcripts) > 1\r\n```\r\n## Expected results\r\nExpected the correct TIMIT data. 
Or an error saying that this version of `datasets` can't produce it.\r\n\r\n## Actual results\r\nEvery train transcript was \"Would such an act of refusal be useful?\" Every test transcript was \"The bungalow was pleasantly situated near the shore.\"\r\n\r\n## Environment info\r\n- `datasets` version: 1.4.1\r\n- Platform: Darwin-18.7.0-x86_64-i386-64bit\r\n- Python version: 3.7.9\r\n- PyTorch version (GPU?): 1.9.0 (False)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Using GPU in script?: tried both\r\n- Using distributed or parallel set-up in script?: no\r\n- \r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2879\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2878","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2878\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2878\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2878\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2878","id":990093316,"node_id":"MDU6SXNzdWU5OTAwOTMzMTY=","number":2878,"title":"NotADirectoryError: [WinError 267] During load_from_disk","user":{"login":"Grassycup","id":1875064,"node_id":"MDQ6VXNlcjE4NzUwNjQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1875064?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Grassycup","html_url":"https:\/\/github.com\/Grassycup","followers_url":"https:\/\/api.github.com\/users\/Grassycup\/followers","following_url":"https:\/\/api.github.com\/users\/Grassycup\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Grassycup\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Grassycup\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Grassycup\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Grassycup\/orgs","repos_url":"https:\/\/api.github.com\/users\/Grassycup\/repos","events_url":"https:\/\/api.github.com\/users\/Grassycup\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Grassycup\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631027705000,"updated_at":1631027705000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nTrying to load saved dataset or dataset directory from Amazon S3 on a Windows machine fails.\r\nPerforming the same operation succeeds on non-windows environment (AWS Sagemaker).\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n# Followed https:\/\/huggingface.co\/docs\/datasets\/filesystems.html#loading-a-processed-dataset-from-s3\r\n\r\nfrom datasets import load_from_disk\r\nfrom datasets.filesystems import S3FileSystem\r\n\r\n\r\ns3_file = \"output of save_to_disk\"\r\n\r\ns3_filesystem = S3FileSystem()\r\n\r\nload_from_disk(s3_file, fs=s3_filesystem)\r\n```\r\n\r\n## Expected results\r\nload_from_disk succeeds without error\r\n\r\n## Actual results\r\nSeems like it 
succeeds in pulling the file into a windows temp directory, as it exists in my system, but fails to process it.\r\n```\r\nException ignored in: \r\nTraceback (most recent call last):\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\weakref.py\", line 566, in __call__\r\n return info.func(*info.args, **(info.kwargs or {}))\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\tempfile.py\", line 817, in _cleanup\r\n cls._rmtree(name)\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\tempfile.py\", line 813, in _rmtree\r\n _shutil.rmtree(name, onerror=onerror)\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\shutil.py\", line 740, in rmtree\r\n return _rmtree_unsafe(path, onerror)\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\shutil.py\", line 613, in _rmtree_unsafe\r\n _rmtree_unsafe(fullname, onerror)\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\shutil.py\", line 613, in _rmtree_unsafe\r\n _rmtree_unsafe(fullname, onerror)\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\shutil.py\", line 613, in _rmtree_unsafe\r\n _rmtree_unsafe(fullname, onerror)\r\n [Previous line repeated 2 more times]\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\shutil.py\", line 618, in _rmtree_unsafe\r\n onerror(os.unlink, fullname, sys.exc_info())\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\tempfile.py\", line 805, in onerror\r\n cls._rmtree(path)\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\tempfile.py\", line 813, in _rmtree\r\n _shutil.rmtree(name, onerror=onerror)\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\shutil.py\", line 740, in rmtree\r\n return _rmtree_unsafe(path, onerror)\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\shutil.py\", line 599, in _rmtree_unsafe\r\n onerror(os.scandir, path, sys.exc_info())\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\shutil.py\", line 596, in _rmtree_unsafe\r\n with os.scandir(path) as scandir_it:\r\nNotADirectoryError: [WinError 267] The directory name is invalid: 'C:\\\\Users\\\\grassycup\\\\AppData\\\\Local\\\\Temp\\\\tmp45f_qbma\\\\tests3bucket\\\\output\\\\test_output\\\\train\\\\dataset.arrow'\r\nException ignored in: \r\nTraceback (most recent call last):\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\weakref.py\", line 566, in __call__\r\n return info.func(*info.args, **(info.kwargs or {}))\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\tempfile.py\", line 817, in _cleanup\r\n cls._rmtree(name)\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\tempfile.py\", line 813, in _rmtree\r\n _shutil.rmtree(name, onerror=onerror)\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\shutil.py\", line 740, in rmtree\r\n return _rmtree_unsafe(path, onerror)\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\shutil.py\", line 613, in _rmtree_unsafe\r\n _rmtree_unsafe(fullname, onerror)\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\shutil.py\", line 613, in _rmtree_unsafe\r\n _rmtree_unsafe(fullname, onerror)\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\shutil.py\", line 613, in _rmtree_unsafe\r\n _rmtree_unsafe(fullname, onerror)\r\n [Previous line repeated 2 more times]\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\shutil.py\", line 618, in 
_rmtree_unsafe\r\n onerror(os.unlink, fullname, sys.exc_info())\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\tempfile.py\", line 805, in onerror\r\n cls._rmtree(path)\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\tempfile.py\", line 813, in _rmtree\r\n _shutil.rmtree(name, onerror=onerror)\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\shutil.py\", line 740, in rmtree\r\n return _rmtree_unsafe(path, onerror)\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\shutil.py\", line 599, in _rmtree_unsafe\r\n onerror(os.scandir, path, sys.exc_info())\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\shutil.py\", line 596, in _rmtree_unsafe\r\n with os.scandir(path) as scandir_it:\r\nNotADirectoryError: [WinError 267] The directory name is invalid:\r\n'C:\\\\Users\\\\grassycup\\\\AppData\\\\Local\\\\Temp\\\\tmp45f_qbma\\\\tests3bucket\\\\output\\\\test_output\\\\train\\\\dataset.arrow'\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.11.0\r\n- Platform: Windows-10-10.0.19042-SP0\r\n- Python version: 3.8.11\r\n- PyArrow version: 3.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2878\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2877","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2877\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2877\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2877\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2877","id":990027249,"node_id":"MDU6SXNzdWU5OTAwMjcyNDk=","number":2877,"title":"Don't keep the dummy data folder or dataset_infos.json when resolving data files","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631023744000,"updated_at":1631023744000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"When there's no dataset script, all the data files of a folder or a repository on the Hub are loaded as data 
files.\r\n\r\nThere are already a few exceptions:\r\n- files starting with \".\" are ignored\r\n- the dataset card \"README.md\" is ignored\r\n- any file named \"config.json\" is ignored (currently it isn't used anywhere, but it could be used in the future to define splits or configs for example, but not 100% sure)\r\n\r\nHowever any data files in a folder named \"dummy\" should be ignored as well as they should only be used to test the dataset.\r\nSame for \"dataset_infos.json\" which should only be used to get the `dataset.info`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2877\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2876","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2876\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2876\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2876\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2876","id":990001079,"node_id":"MDExOlB1bGxSZXF1ZXN0NzI4NjU3MDc2","number":2876,"title":"Extend support for streaming datasets that use pathlib.Path.glob","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I am thinking that ideally we should call `fs.glob()` instead...","Thanks, @lhoestq: the idea of adding the mock filesystem is to avoid network calls and reduce testing time ;) \r\n\r\nI have added `rglob` as well and fixed some bugs."],"created_at":1631022225000,"updated_at":1631267449000,"closed_at":1631267448000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2876","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2876","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2876.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2876.patch"},"body":"This PR extends the support in streaming mode for datasets that use `pathlib`, by patching the method `pathlib.Path.glob`.\r\n\r\nRelated to #2874, #2866.\r\n\r\nCC: @severo","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2876\/timeline","performed_via_github_app":null,"is_pull_request":true} 
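The record above (PR 2876) describes extending streaming support by patching `pathlib.Path.glob`. As a rough illustration only, and not the actual `datasets` implementation, a monkey-patch of `Path.glob` could look like the following self-contained sketch; all helper names here are hypothetical.

```python
import pathlib

# Keep a reference to the original implementation so local paths still work.
_original_glob = pathlib.Path.glob

def _patched_glob(self, pattern):
    # A real streaming patch would detect remote locations (e.g. URLs) and
    # dispatch to an fsspec-style fs.glob() instead; here we only delegate to
    # the original method so the sketch stays self-contained and runnable.
    yield from _original_glob(self, pattern)

pathlib.Path.glob = _patched_glob

if __name__ == "__main__":
    # Usage example: after patching, the method behaves like the original for
    # local paths.
    print(sorted(str(p) for p in pathlib.Path(".").glob("*.py")))
```

The review comment quoted in that record ("ideally we should call `fs.glob()` instead") points at what a full implementation would dispatch to for remote filesystems.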
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2875","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2875\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2875\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2875\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2875","id":989919398,"node_id":"MDU6SXNzdWU5ODk5MTkzOTg=","number":2875,"title":"Add Congolese Swahili speech datasets","user":{"login":"osanseviero","id":7246357,"node_id":"MDQ6VXNlcjcyNDYzNTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7246357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/osanseviero","html_url":"https:\/\/github.com\/osanseviero","followers_url":"https:\/\/api.github.com\/users\/osanseviero\/followers","following_url":"https:\/\/api.github.com\/users\/osanseviero\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/osanseviero\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/osanseviero\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/osanseviero\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/osanseviero\/orgs","repos_url":"https:\/\/api.github.com\/users\/osanseviero\/repos","events_url":"https:\/\/api.github.com\/users\/osanseviero\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/osanseviero\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"},{"id":2725241052,"node_id":"MDU6TGFiZWwyNzI1MjQxMDUy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/speech","name":"speech","color":"d93f0b","default":false,"description":""}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631016830000,"updated_at":1631016830000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** Congolese Swahili speech corpora\r\n- **Data:** https:\/\/gamayun.translatorswb.org\/data\/\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n\r\nAlso related: https:\/\/mobile.twitter.com\/OktemAlp\/status\/1435196393631764482","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2875\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2874","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2874\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2874\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2874\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2874","id":989685328,"node_id":"MDExOlB1bGxSZXF1ZXN0NzI4Mzg2Mjg4","number":2874,"title":"Support streaming datasets that use 
pathlib","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I've tried https:\/\/github.com\/huggingface\/datasets\/issues\/2866 again, and I get the same error.\r\n\r\n```python\r\nimport datasets as ds\r\nds.load_dataset('counter', split=\"train\", streaming=False)\r\n```","@severo Issue #2866 is not fully fixed yet: multiple patches need to be implemented for `pathlib`, as that dataset uses quite a lot of `pathlib` functions... \ud83d\ude05 ","No worry and no stress, I just wanted to check for that case :) I'm very happy that you're working on issues I'm interested in!"],"created_at":1631000149000,"updated_at":1631039122000,"closed_at":1631014875000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2874","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2874","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2874.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2874.patch"},"body":"This PR extends the support in streaming mode for datasets that use `pathlib.Path`.\r\n\r\nRelated to: #2866.\r\nCC: @severo ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2874\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2873","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2873\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2873\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2873\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2873","id":989587695,"node_id":"MDExOlB1bGxSZXF1ZXN0NzI4MzA0MTMw","number":2873,"title":"adding 
swedish_medical_ner","user":{"login":"bwang482","id":6764450,"node_id":"MDQ6VXNlcjY3NjQ0NTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6764450?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bwang482","html_url":"https:\/\/github.com\/bwang482","followers_url":"https:\/\/api.github.com\/users\/bwang482\/followers","following_url":"https:\/\/api.github.com\/users\/bwang482\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bwang482\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bwang482\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bwang482\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bwang482\/orgs","repos_url":"https:\/\/api.github.com\/users\/bwang482\/repos","events_url":"https:\/\/api.github.com\/users\/bwang482\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bwang482\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi, what's the current status of this request? It says Changes requested, but I can't see what changes?","Hi, it looks like this PR includes changes to other files that `swedish_medical_ner`.\r\n\r\nFeel free to remove these changes, or simply create a new PR that only contains the addition of the dataset"],"created_at":1630989893000,"updated_at":1631911657000,"closed_at":1631911657000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2873","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2873","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2873.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2873.patch"},"body":"Adding the Swedish Medical NER dataset, listed in \"Biomedical Datasets - BigScience Workshop 2021\"\r\n\r\nCode refactored ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2873\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2872","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2872\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2872\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2872\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2872","id":989453069,"node_id":"MDExOlB1bGxSZXF1ZXN0NzI4MTkzMjkz","number":2872,"title":"adding 
swedish_medical_ner","user":{"login":"bwang482","id":6764450,"node_id":"MDQ6VXNlcjY3NjQ0NTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6764450?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bwang482","html_url":"https:\/\/github.com\/bwang482","followers_url":"https:\/\/api.github.com\/users\/bwang482\/followers","following_url":"https:\/\/api.github.com\/users\/bwang482\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bwang482\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bwang482\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bwang482\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bwang482\/orgs","repos_url":"https:\/\/api.github.com\/users\/bwang482\/repos","events_url":"https:\/\/api.github.com\/users\/bwang482\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bwang482\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1630965652000,"updated_at":1630989392000,"closed_at":1630989392000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2872","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2872","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2872.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2872.patch"},"body":"Adding the Swedish Medical NER dataset, listed in \"Biomedical Datasets - BigScience Workshop 2021\"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2872\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2871","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2871\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2871\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2871\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2871","id":989436088,"node_id":"MDU6SXNzdWU5ODk0MzYwODg=","number":2871,"title":"datasets.config.PYARROW_VERSION has no attribute 
'major'","user":{"login":"bwang482","id":6764450,"node_id":"MDQ6VXNlcjY3NjQ0NTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6764450?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bwang482","html_url":"https:\/\/github.com\/bwang482","followers_url":"https:\/\/api.github.com\/users\/bwang482\/followers","following_url":"https:\/\/api.github.com\/users\/bwang482\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bwang482\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bwang482\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bwang482\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bwang482\/orgs","repos_url":"https:\/\/api.github.com\/users\/bwang482\/repos","events_url":"https:\/\/api.github.com\/users\/bwang482\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bwang482\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I have changed line 288 to `if int(datasets.config.PYARROW_VERSION.split(\".\")[0]) < 3:` just to get around it.","Hi @bwang482,\r\n\r\nI'm sorry but I'm not able to reproduce your bug.\r\n\r\nPlease note that in our current master branch, we made a commit (d03223d4d64b89e76b48b00602aba5aa2f817f1e) that simultaneously modified:\r\n- test_dataset_common.py: https:\/\/github.com\/huggingface\/datasets\/commit\/d03223d4d64b89e76b48b00602aba5aa2f817f1e#diff-a1bc225bd9a5bade373d1f140e24d09cbbdc97971c2f73bb627daaa803ada002L289 that introduces the usage of `datasets.config.PYARROW_VERSION.major`\r\n- but also changed config.py: https:\/\/github.com\/huggingface\/datasets\/commit\/d03223d4d64b89e76b48b00602aba5aa2f817f1e#diff-e021fcfc41811fb970fab889b8d245e68382bca8208e63eaafc9a396a336f8f2L40, so that `datasets.config.PYARROW_VERSION.major` exists\r\n","Sorted. Thanks!","Reopening this. 
Although the `test_dataset_common.py` script works fine now.\r\n\r\nHas this got something to do with my pull request not passing `ci\/circleci: run_dataset_script_tests_pyarrow` tests?\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/pull\/2873","Hi @bwang482,\r\n\r\nIf you click on `Details` (on the right of your non passing CI test names: `ci\/circleci: run_dataset_script_tests_pyarrow`), you can have more information about the non-passing tests.\r\n\r\nFor example, for [\"ci\/circleci: run_dataset_script_tests_pyarrow_1\" details](https:\/\/circleci.com\/gh\/huggingface\/datasets\/46324?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link), you can see that the only non-passing test has to do with the dataset card (missing information in the `README.md` file): `test_changed_dataset_card`\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests\/test_dataset_cards.py::test_changed_dataset_card[swedish_medical_ner]\r\n= 1 failed, 3214 passed, 2874 skipped, 2 xfailed, 1 xpassed, 15 warnings in 175.59s (0:02:55) =\r\n```\r\n\r\nTherefore, your PR non-passing test has nothing to do with this issue."],"created_at":1630962417000,"updated_at":1631091112000,"closed_at":1631091112000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"In the test_dataset_common.py script, line 288-289\r\n\r\n```\r\nif datasets.config.PYARROW_VERSION.major < 3:\r\n packaged_datasets = [pd for pd in packaged_datasets if pd[\"dataset_name\"] != \"parquet\"]\r\n```\r\n\r\nwhich throws the error below. `datasets.config.PYARROW_VERSION` itself return the string '4.0.1'. I have tested this on both datasets.__version_=='1.11.0' and '1.9.0'. I am using Mac OS.\r\n\r\n```\r\nimport datasets\r\ndatasets.config.PYARROW_VERSION.major\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n\/var\/folders\/1f\/0wqmlgp90qjd5mpj53fnjq440000gn\/T\/ipykernel_73361\/2547517336.py in \r\n 1 import datasets\r\n----> 2 datasets.config.PYARROW_VERSION.major\r\n\r\nAttributeError: 'str' object has no attribute 'major'\r\n```\r\n\r\n## Environment info\r\n- `datasets` version: 1.11.0\r\n- Platform: Darwin-20.6.0-x86_64-i386-64bit\r\n- Python version: 3.7.11\r\n- PyArrow version: 4.0.1\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2871\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2870","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2870\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2870\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2870\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2870","id":988276859,"node_id":"MDExOlB1bGxSZXF1ZXN0NzI3MjI4Njk5","number":2870,"title":"Fix three typos in two files for 
documentation","user":{"login":"leny-mi","id":25124853,"node_id":"MDQ6VXNlcjI1MTI0ODUz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25124853?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/leny-mi","html_url":"https:\/\/github.com\/leny-mi","followers_url":"https:\/\/api.github.com\/users\/leny-mi\/followers","following_url":"https:\/\/api.github.com\/users\/leny-mi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/leny-mi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/leny-mi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/leny-mi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/leny-mi\/orgs","repos_url":"https:\/\/api.github.com\/users\/leny-mi\/repos","events_url":"https:\/\/api.github.com\/users\/leny-mi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/leny-mi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1630756183000,"updated_at":1630916481000,"closed_at":1630916375000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2870","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2870","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2870.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2870.patch"},"body":"Changed \"bacth_size\" to \"batch_size\" (2x)\r\nChanged \"intsructions\" to \"instructions\"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2870\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2869","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2869\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2869\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2869\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2869","id":987676420,"node_id":"MDU6SXNzdWU5ODc2NzY0MjA=","number":2869,"title":"TypeError: 'NoneType' object is not 
callable","user":{"login":"Chenfei-Kang","id":40911446,"node_id":"MDQ6VXNlcjQwOTExNDQ2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/40911446?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Chenfei-Kang","html_url":"https:\/\/github.com\/Chenfei-Kang","followers_url":"https:\/\/api.github.com\/users\/Chenfei-Kang\/followers","following_url":"https:\/\/api.github.com\/users\/Chenfei-Kang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Chenfei-Kang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Chenfei-Kang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Chenfei-Kang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Chenfei-Kang\/orgs","repos_url":"https:\/\/api.github.com\/users\/Chenfei-Kang\/repos","events_url":"https:\/\/api.github.com\/users\/Chenfei-Kang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Chenfei-Kang\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi, @Chenfei-Kang.\r\n\r\nI'm sorry, but I'm not able to reproduce your bug:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"glue\", 'cola')\r\nds\r\n```\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 8551\r\n })\r\n validation: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 1043\r\n })\r\n test: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 1063\r\n })\r\n})\r\n```\r\n\r\nCould you please give more details and environment info (platform, PyArrow version)?","> Hi, @Chenfei-Kang.\r\n> \r\n> I'm sorry, but I'm not able to reproduce your bug:\r\n> \r\n> ```python\r\n> from datasets import load_dataset\r\n> \r\n> ds = load_dataset(\"glue\", 'cola')\r\n> ds\r\n> ```\r\n> \r\n> ```\r\n> DatasetDict({\r\n> train: Dataset({\r\n> features: ['sentence', 'label', 'idx'],\r\n> num_rows: 8551\r\n> })\r\n> validation: Dataset({\r\n> features: ['sentence', 'label', 'idx'],\r\n> num_rows: 1043\r\n> })\r\n> test: Dataset({\r\n> features: ['sentence', 'label', 'idx'],\r\n> num_rows: 1063\r\n> })\r\n> })\r\n> ```\r\n> \r\n> Could you please give more details and environment info (platform, PyArrow version)?\r\n\r\nSorry to reply you so late.\r\nplatform: pycharm 2021 + anaconda with python 3.7\r\nPyArrow version: 5.0.0\r\nhuggingface-hub: 0.0.16\r\ndatasets: 1.9.0\r\n","- For the platform, we need to know the operating system of your machine. Could you please run the command `datasets-cli env` and copy-and-paste its output below?\r\n- In relation with the error, you just gave us the error type and message (`TypeError: 'NoneType' object is not callable`). Could you please copy-paste the complete stack trace, so that we know exactly which part of the code threw the error?","> * For the platform, we need to know the operating system of your machine. Could you please run the command `datasets-cli env` and copy-and-paste its output below?\r\n> * In relation with the error, you just gave us the error type and message (`TypeError: 'NoneType' object is not callable`). 
Could you please copy-paste the complete stack trace, so that we know exactly which part of the code threw the error?\r\n\r\n1. For the platform, here are the output:\r\n - datasets` version: 1.11.0\r\n - Platform: Windows-10-10.0.19041-SP0\r\n - Python version: 3.7.10\r\n - PyArrow version: 5.0.0\r\n2. For the code and error\uff1a\r\n ```python\r\n from datasets import load_dataset, load_metric\r\n dataset = load_dataset(\"glue\", \"cola\")\r\n ```\r\n ```python\r\n Traceback (most recent call last):\r\n ....\r\n ....\r\n File \"my_file.py\", line 2, in \r\n dataset = load_dataset(\"glue\", \"cola\")\r\n File \"My environments\\lib\\site-packages\\datasets\\load.py\", line 830, in load_dataset\r\n **config_kwargs,\r\n File \"My environments\\lib\\site-packages\\datasets\\load.py\", line 710, in load_dataset_builder\r\n **config_kwargs,\r\n TypeError: 'NoneType' object is not callable\r\n ```\r\n Thank you!","For that environment, I am sorry but I can't reproduce the bug: I can load the dataset without any problem.","One naive question: do you have internet access from the machine where you execute the code?","> For that environment, I am sorry but I can't reproduce the bug: I can load the dataset without any problem.\r\n\r\nBut I can download other task dataset such as `dataset = load_dataset('squad')`. I don't know what went wrong. Thank you so much!"],"created_at":1630668459000,"updated_at":1631102998000,"closed_at":1631093095000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\nTypeError: 'NoneType' object is not callable\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset, load_metric\r\ndataset = datasets.load_dataset(\"glue\", 'cola')\r\n```\r\n\r\n## Expected results\r\nA clear and concise description of the expected results.\r\n\r\n## Actual results\r\nSpecify the actual results or traceback.\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.11.0\r\n- Platform:\r\n- Python version: 3.7\r\n- PyArrow version:\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2869\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2868","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2868\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2868\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2868\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2868","id":987139146,"node_id":"MDU6SXNzdWU5ODcxMzkxNDY=","number":2868,"title":"Add Common Objects in 3D 
(CO3D)","user":{"login":"nateraw","id":32437151,"node_id":"MDQ6VXNlcjMyNDM3MTUx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32437151?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nateraw","html_url":"https:\/\/github.com\/nateraw","followers_url":"https:\/\/api.github.com\/users\/nateraw\/followers","following_url":"https:\/\/api.github.com\/users\/nateraw\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nateraw\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nateraw\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nateraw\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nateraw\/orgs","repos_url":"https:\/\/api.github.com\/users\/nateraw\/repos","events_url":"https:\/\/api.github.com\/users\/nateraw\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nateraw\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1630614972000,"updated_at":1630614972000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** *Common Objects in 3D (CO3D)*\r\n- **Description:** *See blog post [here](https:\/\/ai.facebook.com\/blog\/common-objects-in-3d-dataset-for-3d-reconstruction)*\r\n- **Paper:** *[link to paper](https:\/\/arxiv.org\/abs\/2109.00512)*\r\n- **Data:** *[link to data](https:\/\/ai.facebook.com\/datasets\/co3d-downloads\/)*\r\n- **Motivation:** *excerpt from above blog post:*\r\n\r\n> As the first data set of its kind, CO3D will aptly enable reconstruction of real-life 3D objects. Indeed, CO3D already provides training data to enable our NeRFormer to tackle the new-view synthesis (NVS) task. 
Here, photorealistic NVS is a major step on the path to fully immersive AR\/VR effects, where objects can be virtually transported across different environments, which will allow connecting users by sharing or recollecting their experiences.\r\n> \r\n> Besides practical applications in AR\/VR, we hope that the data set will become a standard testbed for the recent proliferation of methods (including NeRFormer, Implicit Differentiable Renderer, NeRF, and others) that reconstruct 3D scenes by means of an implicit shape model.\r\n> \r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2868\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2867","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2867\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2867\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2867\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2867","id":986971224,"node_id":"MDExOlB1bGxSZXF1ZXN0NzI2MTE3NzAw","number":2867,"title":"Add CaSiNo dataset","user":{"login":"kushalchawla","id":8416863,"node_id":"MDQ6VXNlcjg0MTY4NjM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8416863?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/kushalchawla","html_url":"https:\/\/github.com\/kushalchawla","followers_url":"https:\/\/api.github.com\/users\/kushalchawla\/followers","following_url":"https:\/\/api.github.com\/users\/kushalchawla\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/kushalchawla\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/kushalchawla\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/kushalchawla\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/kushalchawla\/orgs","repos_url":"https:\/\/api.github.com\/users\/kushalchawla\/repos","events_url":"https:\/\/api.github.com\/users\/kushalchawla\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/kushalchawla\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq \r\n\r\nJust a request to look at the dataset. Please let me know if any changes are necessary before merging it into the repo. Thank you.","Hey @lhoestq \r\n\r\nThanks for merging it. One question: I still cannot find the dataset on https:\/\/huggingface.co\/datasets. Does it take some time or did I miss something?","Hi ! It takes a few hours or a day for the list of datasets on the website to be updated ;)"],"created_at":1630602383000,"updated_at":1631805174000,"closed_at":1631784224000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2867","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2867","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2867.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2867.patch"},"body":"Hi. I request you to add our dataset to the repository. 
\r\n\r\nThis data was recently published at NAACL 2021: https:\/\/aclanthology.org\/2021.naacl-main.254.pdf","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2867\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2866","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2866\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2866\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2866\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2866","id":986706676,"node_id":"MDU6SXNzdWU5ODY3MDY2NzY=","number":2866,"title":"\"counter\" dataset raises an error in normal mode, but not in streaming mode","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @severo, thanks for reporting.\r\n\r\nJust note that currently not all canonical datasets support streaming mode: this is one case!\r\n\r\nAll datasets that use `pathlib` joins (using `\/`) instead of `os.path.join` (as in this dataset) do not support streaming mode yet.","OK. Do you think it's possible to detect this, and raise an exception (maybe `NotImplementedError`, or a specific `StreamingError`)?","We should definitely support datasets using `pathlib` in streaming mode...\r\n\r\nFor non-supported datasets in streaming mode, we have already a request of raising an error\/warning: see #2654.","Hi @severo, please note that \"counter\" dataset will be streamable (at least until it arrives at the missing file, error already in normal mode) once these PRs are merged:\r\n- #2874\r\n- #2876\r\n- #2880\r\n\r\nI have tested it. 
\ud83d\ude09 ","Now (on master), we get:\r\n\r\n```\r\nimport datasets as ds\r\nds.load_dataset('counter', split=\"train\", streaming=False)\r\n```\r\n\r\n```\r\nUsing custom data configuration default\r\nDownloading and preparing dataset counter\/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to \/home\/slesage\/.cache\/huggingface\/datasets\/counter\/default\/1.0.0\/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...\r\nTraceback (most recent call last):\r\n File \"\/home\/slesage\/hf\/datasets\/src\/datasets\/builder.py\", line 726, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/home\/slesage\/hf\/datasets\/src\/datasets\/builder.py\", line 1124, in _prepare_split\r\n for key, record in utils.tqdm(\r\n File \"\/home\/slesage\/hf\/datasets\/.venv\/lib\/python3.8\/site-packages\/tqdm\/std.py\", line 1185, in __iter__\r\n for obj in iterable:\r\n File \"\/home\/slesage\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/counter\/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9\/counter.py\", line 161, in _generate_examples\r\n with derived_file.open(encoding=\"utf-8\") as f:\r\n File \"\/home\/slesage\/.pyenv\/versions\/3.8.11\/lib\/python3.8\/pathlib.py\", line 1222, in open\r\n return io.open(self, mode, buffering, encoding, errors, newline,\r\n File \"\/home\/slesage\/.pyenv\/versions\/3.8.11\/lib\/python3.8\/pathlib.py\", line 1078, in _opener\r\n return self._accessor.open(self, flags, mode)\r\nFileNotFoundError: [Errno 2] No such file or directory: '\/home\/slesage\/.cache\/huggingface\/datasets\/downloads\/extracted\/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211\/COUNTER\/0032p.xml'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/home\/slesage\/hf\/datasets\/src\/datasets\/load.py\", line 1112, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/home\/slesage\/hf\/datasets\/src\/datasets\/builder.py\", line 636, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/home\/slesage\/hf\/datasets\/src\/datasets\/builder.py\", line 728, in _download_and_prepare\r\n raise OSError(\r\nOSError: Cannot find data file.\r\nOriginal error:\r\n[Errno 2] No such file or directory: '\/home\/slesage\/.cache\/huggingface\/datasets\/downloads\/extracted\/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211\/COUNTER\/0032p.xml'\r\n```\r\n\r\nThe error is now the same with or without streaming. I close the issue, thanks @albertvillanova and @lhoestq!\r\n","Note that we might want to open an issue to fix the \"counter\" dataset by itself, but I let it up to you.","Fixed here: https:\/\/github.com\/huggingface\/datasets\/pull\/2894. 
Thanks @albertvillanova "],"created_at":1630588253000,"updated_at":1631291369000,"closed_at":1631277085000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\n`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.\r\n\r\n## Steps to reproduce the bug\r\n\r\n```python\r\n>>> import datasets as ds\r\n>>> a = ds.load_dataset('counter', split=\"train\", streaming=False)\r\nUsing custom data configuration default\r\nDownloading and preparing dataset counter\/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to \/home\/slesage\/.cache\/huggingface\/datasets\/counter\/default\/1.0.0\/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...\r\nTraceback (most recent call last):\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 726, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 1124, in _prepare_split\r\n for key, record in utils.tqdm(\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/tqdm\/std.py\", line 1185, in __iter__\r\n for obj in iterable:\r\n File \"\/home\/slesage\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/counter\/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9\/counter.py\", line 161, in _generate_examples\r\n with derived_file.open(encoding=\"utf-8\") as f:\r\n File \"\/home\/slesage\/.pyenv\/versions\/3.8.11\/lib\/python3.8\/pathlib.py\", line 1222, in open\r\n return io.open(self, mode, buffering, encoding, errors, newline,\r\n File \"\/home\/slesage\/.pyenv\/versions\/3.8.11\/lib\/python3.8\/pathlib.py\", line 1078, in _opener\r\n return self._accessor.open(self, flags, mode)\r\nFileNotFoundError: [Errno 2] No such file or directory: '\/home\/slesage\/.cache\/huggingface\/datasets\/downloads\/extracted\/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211\/COUNTER\/0032p.xml'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1112, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 636, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 728, in _download_and_prepare\r\n raise OSError(\r\nOSError: Cannot find data file.\r\nOriginal error:\r\n[Errno 2] No such file or directory: '\/home\/slesage\/.cache\/huggingface\/datasets\/downloads\/extracted\/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211\/COUNTER\/0032p.xml'\r\n```\r\n\r\n```python\r\n>>> import datasets as ds\r\n>>> b = ds.load_dataset('counter', split=\"train\", streaming=True)\r\nUsing custom data configuration default\r\n>>> list(b)\r\n[]\r\n```\r\n\r\n## Expected results\r\n\r\nAn exception should be raised in streaming mode\r\n\r\n## Actual results\r\n\r\nNo exception is raised in streaming mode: there is no way to tell if something 
has broken or if the dataset is simply empty.\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.11.1.dev0\r\n- Platform: Linux-5.11.0-1016-aws-x86_64-with-glibc2.29\r\n- Python version: 3.8.11\r\n- PyArrow version: 4.0.1\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2866\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2865","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2865\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2865\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2865\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2865","id":986460698,"node_id":"MDExOlB1bGxSZXF1ZXN0NzI1NjY1ODgx","number":2865,"title":"Add MultiEURLEX dataset","user":{"login":"iliaschalkidis","id":1626984,"node_id":"MDQ6VXNlcjE2MjY5ODQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1626984?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/iliaschalkidis","html_url":"https:\/\/github.com\/iliaschalkidis","followers_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/followers","following_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/orgs","repos_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/repos","events_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq, we have this new cool multilingual dataset coming at EMNLP 2021. It would be really nice if we could have it in Hugging Face asap. Thanks! ","Hi @lhoestq, I adopted most of your suggestions:\r\n\r\n- Dummy data files reduced, including the 2 smallest documents per subset JSONL.\r\n- README was updated with the publication URL and instructions on how to download and use label descriptors. Excessive newlines were deleted.\r\n\r\nI would prefer to keep the label list in a pure format (original ids), to enable people to combine those with more information or possibly in the future explore the dataset, find inconsistencies and fix those to release a new version. ","Thanks for the changes :)\r\n\r\nRegarding the labels:\r\n\r\nIf you use the ClassLabel feature type, the only change is that it will store the ids as integers instead of (currently) string.\r\nThe advantage is that if people want to know what id corresponds to which label name, they can use `classlabel.int2str`. It is also the format that helps automate model training for classification in `transformers`.\r\n\r\nLet me know if that sounds good to you or if you still want to stick with the labels as they are now.","Hey @lhoestq, thanks for providing this information. This sounds great. I updated my code accordingly to use `ClassLabel`. 
Could you please provide a minimal example of how `classlabel.int2str` works in practice in my case, where labels are a sequence?\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('multi_eurlex', 'all_languages')\r\n# Read strs from the labels (list of integers) for the 1st sample of the training split\r\n```\r\n\r\nI would like to include this in the README file.\r\n\r\nCould you also provide some info on how I could define the supervized key to automate model training, as you said?\r\n\r\nThanks!","Thanks for the update :)\r\n\r\nHere is an example of usage:\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('multi_eurlex', 'all_languages', split='train')\r\nclasslabel = dataset.features[\"labels\"].feature\r\nprint(dataset[0][\"labels\"])\r\n# [1, 20, 7, 3, 0]\r\nprint(classlabel.int2str(dataset[0][\"labels\"]))\r\n# ['100160', '100155', '100158', '100147', '100149']\r\n```\r\n\r\nThe ClassLabel is simply used to define the `id2label` dictionary of classification models, to make the ids match between the model and the dataset. There nothing more to do :p \r\n\r\nI think one last thing to do is just update the `dataset_infos.json` file and we'll be good !","Everything is ready! \ud83d\udc4d \r\n"],"created_at":1630575744000,"updated_at":1631274606000,"closed_at":1631274606000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2865","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2865","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2865.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2865.patch"},"body":"**Add new MultiEURLEX Dataset**\r\n\r\nMultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource). Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of EU. 
As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels); this is multi-label classification task (given the text, predict multiple labels).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2865\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2864","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2864\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2864\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2864\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2864","id":986159438,"node_id":"MDExOlB1bGxSZXF1ZXN0NzI1MzkyNjcw","number":2864,"title":"Fix data URL in ToTTo dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/8","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/8","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/8\/labels","id":6968069,"node_id":"MI_kwDODunzps4AalMF","number":8,"title":"1.12","description":"Next minor 
release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":5,"closed_issues":1,"state":"open","created_at":1626881696000,"updated_at":1630565260000,"due_on":1630306800000,"closed_at":null},"comments":[],"created_at":1630560308000,"updated_at":1630565260000,"closed_at":1630565260000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2864","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2864","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2864.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2864.patch"},"body":"Data source host changed their data URL: google-research-datasets\/ToTTo@cebeb43.\r\n\r\nFix #2860.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2864\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2863","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2863\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2863\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2863\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2863","id":986156755,"node_id":"MDExOlB1bGxSZXF1ZXN0NzI1MzkwMTkx","number":2863,"title":"Update dataset 
URL","user":{"login":"mrm8488","id":3653789,"node_id":"MDQ6VXNlcjM2NTM3ODk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3653789?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mrm8488","html_url":"https:\/\/github.com\/mrm8488","followers_url":"https:\/\/api.github.com\/users\/mrm8488\/followers","following_url":"https:\/\/api.github.com\/users\/mrm8488\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mrm8488\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mrm8488\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mrm8488\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mrm8488\/orgs","repos_url":"https:\/\/api.github.com\/users\/mrm8488\/repos","events_url":"https:\/\/api.github.com\/users\/mrm8488\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mrm8488\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Superseded by PR #2864.\r\n\r\n@mrm8488 next time you would like to work on an issue, you can first self-assign it to you (by writing `#self-assign` in a comment on the issue). That way, other people can see you are already working on it and there are not multiple people working on the same issue. \ud83d\ude09 "],"created_at":1630560138000,"updated_at":1630570250000,"closed_at":1630570250000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2863","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2863","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2863.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2863.patch"},"body":null,"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2863\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2862","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2862\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2862\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2862\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2862","id":985763001,"node_id":"MDU6SXNzdWU5ODU3NjMwMDE=","number":2862,"title":"Only retain relevant statistics in certain 
metrics","user":{"login":"ZhaofengWu","id":11954789,"node_id":"MDQ6VXNlcjExOTU0Nzg5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11954789?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ZhaofengWu","html_url":"https:\/\/github.com\/ZhaofengWu","followers_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/followers","following_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/orgs","repos_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/repos","events_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1630534690000,"updated_at":1630534690000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"**Is your feature request related to a problem? Please describe.**\r\nAs I understand, in the `add_batch()` function, the raw predictions and references are kept (in memory?) until `compute()` is called.\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/e248247518140d5b0527ce2843a1a327e2902059\/src\/datasets\/metric.py#L423-L442\r\n\r\nThis takes O(n) memory. However, for many (most?) metrics, this is not necessary. E.g., for accuracy, only the # correct and # total need to be recorded.\r\n\r\n**Describe the solution you'd like**\r\nProbably an inheritance hierarchy where `\"predictions\"` and `\"references\"` are not always the two keys for the final metric computation. 
Each metric should create and maintain its own relevant statistics, again for example, `\"n_correct\"` and `\"n_total\"` for accuracy.\r\n\r\nI believe the metrics in AllenNLP (https:\/\/github.com\/allenai\/allennlp\/tree\/39c40fe38cd2fd36b3465b0b3c031f54ec824160\/allennlp\/training\/metrics) can be used as a good reference.\r\n\r\n**Describe alternatives you've considered**\r\nAt least `Metric.compute()` shouldn't hard-code `\"predictions\"` and `\"references\"` so that custom subclasses may override this behavior.\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/e248247518140d5b0527ce2843a1a327e2902059\/src\/datasets\/metric.py#L399-L400","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2862\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2861","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2861\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2861\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2861\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2861","id":985081871,"node_id":"MDExOlB1bGxSZXF1ZXN0NzI0NDM2OTcw","number":2861,"title":"fix: \ud83d\udc1b be more specific when catching exceptions","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["To give more context: after our discussion, if I understood properly, you are trying to fix a call to `datasets` that takes 15 minutes: https:\/\/github.com\/huggingface\/datasets-preview-backend\/issues\/17 Is this right?\r\n\r\n","Yes, that's it. And to do that I'm trying to use https:\/\/pypi.org\/project\/stopit\/, which will raise a stopit.TimeoutException exception. But currently, if this exception is raised, it's caught and considered as a \"FileNotFoundError\" while it should not be caught. ","And what about passing the `timeout` parameter instead?","It might be a good idea, but I would have to add a timeout argument to several methods, I'm not sure we want that (I want to ensure all my queries in https:\/\/github.com\/huggingface\/datasets-preview-backend\/tree\/master\/src\/datasets_preview_backend\/queries resolve in a given time, be it with an error in case of timeout, or with the successful response). 
The methods are `prepare_module`, `import_main_class`, *builder_cls.*`get_all_exported_dataset_infos`, `load_dataset_builder`, and `load_dataset`","I understand, you are trying to find a fix for your use case. OK.\r\n\r\nJust note that it is also an issue for `datasets` users. Once #2859 fixed in `datasets`, you will no longer have this issue...","Closing, since 1. my problem is more #2859, and I was asking for that change in order to make a hack work on my side, 2. if we want to change how exceptions are handled, we surely want to do it on all the codebase, not only in this particular case."],"created_at":1630498692000,"updated_at":1630576416000,"closed_at":1630576323000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2861","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2861","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2861.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2861.patch"},"body":"The same specific exception is catched in other parts of the same\r\nfunction.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2861\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2860","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2860\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2860\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2860\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2860","id":985013339,"node_id":"MDU6SXNzdWU5ODUwMTMzMzk=","number":2860,"title":"Cannot download TOTTO dataset","user":{"login":"mrm8488","id":3653789,"node_id":"MDQ6VXNlcjM2NTM3ODk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3653789?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mrm8488","html_url":"https:\/\/github.com\/mrm8488","followers_url":"https:\/\/api.github.com\/users\/mrm8488\/followers","following_url":"https:\/\/api.github.com\/users\/mrm8488\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mrm8488\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mrm8488\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mrm8488\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mrm8488\/orgs","repos_url":"https:\/\/api.github.com\/users\/mrm8488\/repos","events_url":"https:\/\/api.github.com\/users\/mrm8488\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mrm8488\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hola @mrm8488, thanks for reporting.\r\n\r\nApparently, the data source host changed their URL one week ago: https:\/\/github.com\/google-research-datasets\/ToTTo\/commit\/cebeb430ec2a97747e704d16a9354f7d9073ff8f\r\n\r\nI'm fixing it."],"created_at":1630494250000,"updated_at":1630565260000,"closed_at":1630565260000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Error: Couldn't find file at https:\/\/storage.googleapis.com\/totto\/totto_data.zip\r\n\r\n`datasets version: 1.11.0`\r\n# How to reproduce:\r\n\r\n```py\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('totto')\r\n```\r\n\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2860\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2859","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2859\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2859\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2859\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2859","id":984324500,"node_id":"MDU6SXNzdWU5ODQzMjQ1MDA=","number":2859,"title":"Loading allenai\/c4 in streaming mode does too many HEAD requests","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"},{"id":3287858981,"node_id":"MDU6TGFiZWwzMjg3ODU4OTgx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/streaming","name":"streaming","color":"fef2c0","default":false,"description":""}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["https:\/\/github.com\/huggingface\/datasets\/blob\/6c766f9115d686182d76b1b937cb27e099c45d68\/src\/datasets\/builder.py#L179-L186"],"created_at":1630444264000,"updated_at":1630484194000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"This does 60,000+ HEAD requests to get all the ETags of all the data files:\r\n```python\r\nfrom datasets import load_dataset\r\nload_dataset(\"allenai\/c4\", streaming=True)\r\n```\r\nIt makes loading the dataset completely impractical.\r\n\r\nThe ETags are used to compute the config id (it must depend on the data files being used).\r\nInstead of using the ETags, we could simply use the commit hash of the dataset repository on the hub, as well and the glob pattern used to resolve the files (here it's `*` by default, to load all the files of the repository)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2859\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2858","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2858\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2858\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2858\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2858","id":984145568,"node_id":"MDExOlB1bGxSZXF1ZXN0NzIzNjEzNzQ0","number":2858,"title":"Fix s3fs version in CI","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1630433143000,"updated_at":1630935215000,"closed_at":1630445391000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2858","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2858","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2858.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2858.patch"},"body":"The latest s3fs version has new constrains on aiobotocore, and therefore on boto3 and botocore\r\n\r\nThis PR changes the constrains to avoid the new conflicts\r\n\r\nIn particular it pins the version of s3fs.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2858\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2857","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2857\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2857\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2857\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2857","id":984093938,"node_id":"MDExOlB1bGxSZXF1ZXN0NzIzNTY5OTE4","number":2857,"title":"Update: Openwebtext - update 
size","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI error in unrelated to this PR and fixed on master"],"created_at":1630429863000,"updated_at":1631007872000,"closed_at":1631007872000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2857","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2857","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2857.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2857.patch"},"body":"Update the size of the Openwebtext dataset\r\n\r\nI also regenerated the dataset_infos.json but the data file checksum didn't change, and the number of examples either (8013769 examples)\r\n\r\nrelated to #2839 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2857\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2856","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2856\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2856\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2856\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2856","id":983876734,"node_id":"MDExOlB1bGxSZXF1ZXN0NzIzMzg2NzIw","number":2856,"title":"fix: \ud83d\udc1b remove URL's query string only if it's 
?dl=1","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1630417207000,"updated_at":1630419732000,"closed_at":1630419732000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2856","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2856","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2856.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2856.patch"},"body":"A lot of URL use the query strings, for example\r\nhttp:\/\/opus.nlpl.eu\/download.php?f=Bianet\/v1\/moses\/en-ku.txt.zip, we\r\nmust not remove it when trying to detect the protocol. We thus remove it\r\nonly in the case of the query string being ?dl=1 which occurs on dropbox\r\nand dl.orangedox.com. 
Also: add unit tests.\r\n\r\nSee https:\/\/github.com\/huggingface\/datasets\/pull\/2843 for the original\r\ndiscussion.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2856\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2855","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2855\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2855\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2855\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2855","id":983858229,"node_id":"MDExOlB1bGxSZXF1ZXN0NzIzMzcxMTIy","number":2855,"title":"Fix windows CI CondaError","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1630416122000,"updated_at":1630416934000,"closed_at":1630416933000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2855","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2855","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2855.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2855.patch"},"body":"From this thread: https:\/\/github.com\/conda\/conda\/issues\/6057\r\n\r\nWe can fix the conda error\r\n```\r\nCondaError: Cannot link a source that does not exist.\r\nC:\\Users\\...\\Anaconda3\\Scripts\\conda.exe\r\n```\r\n\r\nby doing\r\n```bash\r\nconda update conda\r\n```\r\n\r\nbefore doing any install in the windows CI","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2855\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2952","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2952\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2952\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2952\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2952","id":1002704096,"node_id":"PR_kwDODunzps4sDU8S","number":2952,"title":"Fix missing conda deps","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1632237781000,"updated_at":1632285599000,"closed_at":1632238244000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2952","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2952","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2952.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2952.patch"},"body":"`aiohttp` was added as a dependency in #2662 but was missing for the conda build, which causes the 1.12.0 and 1.12.1 to fail.\r\n\r\nFix #2932.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2952\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2951","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2951\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2951\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2951\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2951","id":1001267888,"node_id":"PR_kwDODunzps4r-lGs","number":2951,"title":"Dummy labels no longer on by default in 
`to_tf_dataset`","user":{"login":"Rocketknight1","id":12866554,"node_id":"MDQ6VXNlcjEyODY2NTU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12866554?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Rocketknight1","html_url":"https:\/\/github.com\/Rocketknight1","followers_url":"https:\/\/api.github.com\/users\/Rocketknight1\/followers","following_url":"https:\/\/api.github.com\/users\/Rocketknight1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Rocketknight1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Rocketknight1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Rocketknight1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Rocketknight1\/orgs","repos_url":"https:\/\/api.github.com\/users\/Rocketknight1\/repos","events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq Let me make sure we never need it, and if not then I'll remove it entirely in a follow-up PR.","Thanks ;) it will be less confusing and easier to maintain to not keep unused hacky features"],"created_at":1632162419000,"updated_at":1632232857000,"closed_at":1632219272000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2951","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2951","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2951.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2951.patch"},"body":"After more experimentation, I think I have a way to do things that doesn't depend on adding `dummy_labels` - they were quite a hacky solution anyway!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2951\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2950","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2950\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2950\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2950\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2950","id":1001085353,"node_id":"PR_kwDODunzps4r-AKu","number":2950,"title":"Fix fn kwargs in 
filter","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1632150626000,"updated_at":1632154979000,"closed_at":1632151681000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2950","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2950","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2950.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2950.patch"},"body":"#2836 broke the `fn_kwargs` parameter of `filter`, as mentioned in https:\/\/github.com\/huggingface\/datasets\/issues\/2927\r\n\r\nI fixed that and added a test to make sure it doesn't happen again (for either map or filter)\r\n\r\nFix #2927","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2950\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2949","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2949\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2949\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2949\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2949","id":1001026680,"node_id":"PR_kwDODunzps4r90Pt","number":2949,"title":"Introduce web and wiki config in triviaqa 
dataset","user":{"login":"shirte","id":1706443,"node_id":"MDQ6VXNlcjE3MDY0NDM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1706443?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/shirte","html_url":"https:\/\/github.com\/shirte","followers_url":"https:\/\/api.github.com\/users\/shirte\/followers","following_url":"https:\/\/api.github.com\/users\/shirte\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/shirte\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/shirte\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/shirte\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/shirte\/orgs","repos_url":"https:\/\/api.github.com\/users\/shirte\/repos","events_url":"https:\/\/api.github.com\/users\/shirte\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/shirte\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1632147443000,"updated_at":1632262631000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2949","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2949","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2949.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2949.patch"},"body":"The TriviaQA paper suggests that the two subsets (Wikipedia and Web)\r\nshould be treated differently. There are also different leaderboards\r\nfor the two sets on CodaLab. For that reason, introduce additional\r\nbuilder configs in the trivia_qa dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2949\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2948","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2948\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2948\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2948\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2948","id":1000844077,"node_id":"PR_kwDODunzps4r9PdV","number":2948,"title":"Fix minor URL format in scitldr 
dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1632136292000,"updated_at":1632143908000,"closed_at":1632143908000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2948","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2948","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2948.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2948.patch"},"body":"While investigating issue #2918, I found this minor format issues in the URLs (if runned in a Windows machine).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2948\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2947","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2947\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2947\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2947\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2947","id":1000798338,"node_id":"PR_kwDODunzps4r9GIP","number":2947,"title":"Don't use old, incompatible cache for the new 
`filter`","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1632133139000,"updated_at":1632155109000,"closed_at":1632145382000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2947","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2947","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2947.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2947.patch"},"body":"#2836 changed `Dataset.filter` and the resulting data that are stored in the cache are different and incompatible with the ones of the previous `filter` implementation.\r\n\r\nHowever the caching mechanism wasn't able to differentiate between the old and the new implementation of filter (only the method name was taken into account). 
\r\n\r\nThis is an issue because anyone that update `datasets` and re-runs some code that uses `filter` would see an error, because the cache would try to load an incompatible `filter` result.\r\n\r\nTo fix this I added the notion of versioning for dataset transform in the caching mechanism, and bumped the version of the `filter` implementation to 2.0.0\r\n\r\nThis way the new `filter` outputs are now considered different from the old ones from the caching point of view.\r\n\r\nThis should fix #2943\r\n\r\ncc @anton-l","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2947\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2946","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2946\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2946\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2946\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2946","id":1000754824,"node_id":"PR_kwDODunzps4r89f8","number":2946,"title":"Update meteor score from nltk update","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1632130126000,"updated_at":1632130559000,"closed_at":1632130559000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2946","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2946","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2946.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2946.patch"},"body":"It looks like there were issues in NLTK on the way the METEOR score was computed.\r\nA fix was added in NLTK at https:\/\/github.com\/nltk\/nltk\/pull\/2763, and therefore the scoring function no longer returns the same values.\r\n\r\nI updated the score of the example in the docs","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2946\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2945","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2945\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2945\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2945\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2945","id":1000624883,"node_id":"I_kwDODunzps47pFLz","number":2945,"title":"Protect master branch","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Cool, I think we can do both :)","@lhoestq now the 2 are implemented.\r\n\r\nPlease note that for the the second protection, finally I have chosen to protect the master branch only from **merge commits** (see update comment above), so no need to disable\/re-enable the protection on each release (direct commits, different from merge commits, can be pushed to the remote master branch; and eventually reverted without messing up the repo history)."],"created_at":1632120421000,"updated_at":1632139287000,"closed_at":1632139216000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"After accidental merge commit (91c55355b634d0dc73350a7ddee1a6776dbbdd69) into `datasets` master branch, all commits present in the feature branch were permanently added to `datasets` master branch history, as e.g.:\r\n- 00cc036fea7c7745cfe722360036ed306796a3f2\r\n- 13ae8c98602bbad8197de3b9b425f4c78f582af1\r\n- ...\r\n\r\nI propose to protect our master branch, so that we avoid we can accidentally make this kind of mistakes in the future:\r\n- [x] For Pull Requests using GitHub, allow only squash merging, so that only a single commit per Pull Request is merged into the master branch\r\n - Currently, simple merge commits are already disabled\r\n - I propose to disable rebase merging as well\r\n- ~~Protect the master branch from direct pushes (to avoid accidentally pushing of merge commits)~~\r\n - ~~This protection would reject direct pushes to master branch~~\r\n - ~~If so, for each release (when we need to commit directly to the 
master branch), we should previously disable the protection and re-enable it again after the release~~\r\n- [x] Protect the master branch only from direct pushing of **merge commits**\r\n - GitHub offers the possibility to protect the master branch only from merge commits (which are the ones that introduce all the commits from the feature branch into the master branch).\r\n - No need to disable\/re-enable this protection on each release \r\n\r\nThis purpose of this Issue is to open a discussion about this problem and to agree in a solution.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2945\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2944","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2944\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2944\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2944\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2944","id":1000544370,"node_id":"I_kwDODunzps47oxhy","number":2944,"title":"Add `remove_columns` to `IterableDataset ` ","user":{"login":"cccntu","id":31893406,"node_id":"MDQ6VXNlcjMxODkzNDA2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/31893406?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cccntu","html_url":"https:\/\/github.com\/cccntu","followers_url":"https:\/\/api.github.com\/users\/cccntu\/followers","following_url":"https:\/\/api.github.com\/users\/cccntu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cccntu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cccntu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cccntu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cccntu\/orgs","repos_url":"https:\/\/api.github.com\/users\/cccntu\/repos","events_url":"https:\/\/api.github.com\/users\/cccntu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cccntu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1632110460000,"updated_at":1632110460000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"**Is your feature request related to a problem? Please describe.**\r\nA clear and concise description of what the problem is.\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"c4\", 'realnewslike', streaming =True, split='train')\r\ndataset = dataset.remove_columns('url')\r\n```\r\n```\r\nAttributeError: 'IterableDataset' object has no attribute 'remove_columns'\r\n```\r\n\r\n**Describe the solution you'd like**\r\n\r\nIt would be nice to have `.remove_columns()` to match the `Datasets` api. \r\n\r\n\r\n**Describe alternatives you've considered**\r\n\r\nThis can be done with a single call to `.map()`, \r\n\r\nI can try to help add this. 
\ud83e\udd17","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2944\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2943","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2943\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2943\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2943\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2943","id":1000355115,"node_id":"I_kwDODunzps47oDUr","number":2943,"title":"Backwards compatibility broken for cached datasets that use `.filter()`","user":{"login":"anton-l","id":26864830,"node_id":"MDQ6VXNlcjI2ODY0ODMw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26864830?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/anton-l","html_url":"https:\/\/github.com\/anton-l","followers_url":"https:\/\/api.github.com\/users\/anton-l\/followers","following_url":"https:\/\/api.github.com\/users\/anton-l\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/anton-l\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/anton-l\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/anton-l\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/anton-l\/orgs","repos_url":"https:\/\/api.github.com\/users\/anton-l\/repos","events_url":"https:\/\/api.github.com\/users\/anton-l\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/anton-l\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi ! 
I guess the caching mechanism should have considered the new `filter` to be different from the old one, and don't use cached results from the old `filter`.\r\nTo avoid other users from having this issue we could make the caching differentiate the two, what do you think ?","If it's easy enough to implement, then yes please \ud83d\ude04 But this issue can be low-priority, since I've only encountered it in a couple of `transformers` CI tests.","Well it can cause issue with anyone that updates `datasets` and re-run some code that uses filter, so I'm creating a PR","I just merged a fix, let me know if you're still having this kind of issues :)\r\n\r\nWe'll do a release soon to make this fix available","Definitely works on several manual cases with our dummy datasets, thank you @lhoestq !","Fixed by #2947."],"created_at":1632068197000,"updated_at":1632155143000,"closed_at":1632155142000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nAfter upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with \r\n`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}`\r\n\r\nRelated feature: https:\/\/github.com\/huggingface\/datasets\/pull\/2836\r\n\r\n:question: This is probably a `wontfix` bug, since it can be solved by simply cleaning the related cache dirs, but the workaround could be useful for someone googling the error :) \r\n\r\n## Workaround\r\nRemove the cache for the given dataset, e.g. `rm -rf ~\/.cache\/huggingface\/datasets\/librispeech_asr`.\r\n\r\n## Steps to reproduce the bug\r\n1. Delete `~\/.cache\/huggingface\/datasets\/librispeech_asr` if it exists.\r\n\r\n2. `pip install datasets==1.11.0` and run the following snippet:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nids = [\"1272-141231-0000\"]\r\nds = load_dataset(\"patrickvonplaten\/librispeech_asr_dummy\", \"clean\", split=\"validation\")\r\nds = ds.filter(lambda x: x[\"id\"] in ids)\r\n```\r\n3. 
`pip install datasets==1.12.1` and re-run the code again\r\n\r\n## Expected results\r\nSame result as with the previous `datasets` version.\r\n\r\n## Actual results\r\n```bash\r\nReusing dataset librispeech_asr (.\/.cache\/huggingface\/datasets\/librispeech_asr\/clean\/2.1.0\/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1)\r\nLoading cached processed dataset at .\/.cache\/huggingface\/datasets\/librispeech_asr\/clean\/2.1.0\/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1\/cache-cd1c29844fdbc87a.arrow\r\nTraceback (most recent call last):\r\n File \".\/repos\/transformers\/src\/transformers\/models\/wav2vec2\/try_dataset.py\", line 5, in \r\n ds = ds.filter(lambda x: x[\"id\"] in ids)\r\n File \".\/envs\/transformers\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 185, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \".\/envs\/transformers\/lib\/python3.8\/site-packages\/datasets\/fingerprint.py\", line 398, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \".\/envs\/transformers\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 2169, in filter\r\n indices = self.map(\r\n File \".\/envs\/transformers\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 1686, in map\r\n return self._map_single(\r\n File \".\/envs\/transformers\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 185, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \".\/envs\/transformers\/lib\/python3.8\/site-packages\/datasets\/fingerprint.py\", line 398, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \".\/envs\/transformers\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 1896, in _map_single\r\n return Dataset.from_file(cache_file_name, info=info, split=self.split)\r\n File \".\/envs\/transformers\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 343, in from_file\r\n return cls(\r\n File \".\/envs\/transformers\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 282, in __init__\r\n self.info.features = self.info.features.reorder_fields_as(inferred_features)\r\n File \".\/envs\/transformers\/lib\/python3.8\/site-packages\/datasets\/features.py\", line 1151, in reorder_fields_as\r\n return Features(recursive_reorder(self, other))\r\n File \".\/envs\/transformers\/lib\/python3.8\/site-packages\/datasets\/features.py\", line 1140, in recursive_reorder\r\n raise ValueError(f\"Keys mismatch: between {source} and {target}\" + stack_position)\r\nValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}\r\n\r\nProcess finished with exit code 1\r\n\r\n```\r\n\r\n## Environment info\r\n- `datasets` version: 1.12.1\r\n- Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.17\r\n- Python version: 3.8.10\r\n- PyArrow version: 5.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2943\/timeline","performed_via_github_app":null,"is_pull_request":false} 
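The workaround above deletes the whole dataset cache directory by hand; a possible in-Python variant is to drop the stale cached transform files with `Dataset.cleanup_cache_files()` and re-run the filter so it is recomputed with the current implementation. A sketch, assuming the same dummy dataset as in the reproduction and that the stale filter results live in that dataset's cache directory:

```python
# Sketch of the same workaround from Python: remove stale cached .arrow files
# produced by the old `filter`, then recompute the filter with the new code.
from datasets import load_dataset

ids = ["1272-141231-0000"]
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
removed = ds.cleanup_cache_files()  # deletes cache files in this dataset's directory that are not currently in use
print(f"removed {removed} stale cache file(s)")
ds = ds.filter(lambda x: x["id"] in ids)
```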
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2942","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2942\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2942\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2942\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2942","id":1000309765,"node_id":"PR_kwDODunzps4r7tY6","number":2942,"title":"Add SEDE dataset","user":{"login":"Hazoom","id":13545154,"node_id":"MDQ6VXNlcjEzNTQ1MTU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13545154?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Hazoom","html_url":"https:\/\/github.com\/Hazoom","followers_url":"https:\/\/api.github.com\/users\/Hazoom\/followers","following_url":"https:\/\/api.github.com\/users\/Hazoom\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Hazoom\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Hazoom\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Hazoom\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Hazoom\/orgs","repos_url":"https:\/\/api.github.com\/users\/Hazoom\/repos","events_url":"https:\/\/api.github.com\/users\/Hazoom\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Hazoom\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks @albertvillanova for your great suggestions! I just pushed a new commit with the necessary fixes. For some reason, the test `test_metric_common` failed for `meteor` metric, which doesn't have any connection to this PR, so I'm trying to rebase and see if it helps.","Hi @Hazoom,\r\n\r\nYou were right: the non-passing test had nothing to do with this PR.\r\n\r\nUnfortunately, you did a git rebase (instead of a git merge), which is not recommended once you have already opened a pull request because you mess up your pull request history. You can see that your pull request now contains:\r\n- your commits repeated two times\r\n- and commits which are not yours from the master branch\r\n\r\nIf you would like to clean your pull request, please make:\r\n```\r\ngit reset --hard 587b93a\r\ngit fetch upstream master\r\ngit merge upstream\/master\r\ngit push --force origin sede\r\n```","> Hi @Hazoom,\r\n> \r\n> You were right: the non-passing test had nothing to do with this PR.\r\n> \r\n> Unfortunately, you did a git rebase (instead of a git merge), which is not recommended once you have already opened a pull request because you mess up your pull request history. You can see that your pull request now contains:\r\n> \r\n> * your commits repeated two times\r\n> * and commits which are not yours from the master branch\r\n> \r\n> If you would like to clean your pull request, please make:\r\n> \r\n> ```\r\n> git reset --hard 587b93a\r\n> git fetch upstream master\r\n> git merge upstream\/master\r\n> git push --force origin sede\r\n> ```\r\n\r\nThanks @albertvillanova ","> Nice! Just one final request before approving your pull request:\r\n> \r\n> As you have updated the \"QuerySetId\" field data type, the size of the dataset is smaller now. You should regenerate the metadata. 
Please run:\r\n> \r\n> ```\r\n> rm datasets\/sede\/dataset_infos.json\r\n> datasets-cli test datasets\/sede --save_infos --all_configs\r\n> ```\r\n\r\n@albertvillanova Good catch, just fixed it."],"created_at":1632057084000,"updated_at":1632139643000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2942","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2942","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2942.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2942.patch"},"body":"This PR adds the SEDE dataset for the task of realistic Text-to-SQL, following the instructions of how to add a database and a dataset card.\r\n\r\nPlease see our paper for more details: https:\/\/arxiv.org\/abs\/2106.05006","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2942\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2941","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2941\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2941\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2941\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2941","id":1000000711,"node_id":"I_kwDODunzps47mszH","number":2941,"title":"OSCAR unshuffled_original_ko: NonMatchingSplitsSizesError","user":{"login":"ayaka14732","id":68557794,"node_id":"MDQ6VXNlcjY4NTU3Nzk0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/68557794?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ayaka14732","html_url":"https:\/\/github.com\/ayaka14732","followers_url":"https:\/\/api.github.com\/users\/ayaka14732\/followers","following_url":"https:\/\/api.github.com\/users\/ayaka14732\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ayaka14732\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ayaka14732\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ayaka14732\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ayaka14732\/orgs","repos_url":"https:\/\/api.github.com\/users\/ayaka14732\/repos","events_url":"https:\/\/api.github.com\/users\/ayaka14732\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ayaka14732\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I tried `unshuffled_original_da` and it is also not working"],"created_at":1631961553000,"updated_at":1631982333000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\nCannot download OSCAR `unshuffled_original_ko` due to `NonMatchingSplitsSizesError`.\r\n\r\n## Steps to reproduce the bug\r\n\r\n```python\r\n>>> dataset = datasets.load_dataset('oscar', 'unshuffled_original_ko')\r\nNonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=25292102197, num_examples=7345075, 
dataset_name='oscar'), 'recorded': SplitInfo(name='train', num_bytes=25284578514, num_examples=7344907, dataset_name='oscar')}]\r\n```\r\n\r\n## Expected results\r\n\r\nLoading is successful.\r\n\r\n## Actual results\r\n\r\nLoading throws above error.\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.12.1\r\n- Platform: Linux-5.4.0-81-generic-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- PyArrow version: 5.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2941\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2940","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2940\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2940\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2940\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2940","id":999680796,"node_id":"PR_kwDODunzps4r6EUF","number":2940,"title":"add swedish_medical_ner dataset","user":{"login":"bwang482","id":6764450,"node_id":"MDQ6VXNlcjY3NjQ0NTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6764450?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bwang482","html_url":"https:\/\/github.com\/bwang482","followers_url":"https:\/\/api.github.com\/users\/bwang482\/followers","following_url":"https:\/\/api.github.com\/users\/bwang482\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bwang482\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bwang482\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bwang482\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bwang482\/orgs","repos_url":"https:\/\/api.github.com\/users\/bwang482\/repos","events_url":"https:\/\/api.github.com\/users\/bwang482\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bwang482\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631908985000,"updated_at":1632216774000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2940","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2940","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2940.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2940.patch"},"body":"Adding the Swedish Medical NER dataset, listed in \"Biomedical Datasets - BigScience Workshop 2021\"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2940\/timeline","performed_via_github_app":null,"is_pull_request":true} 
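Until the recorded split sizes for OSCAR are regenerated, a possible stopgap for the `NonMatchingSplitsSizesError` in #2941 is to skip split-size verification at load time. This sketch assumes `datasets` 1.12.x, where `load_dataset` accepts `ignore_verifications`; note that skipping the check can also mask genuinely truncated downloads, so use it with care.

```python
# Possible stopgap sketch: skip checksum/split-size verification when loading.
# Assumes datasets 1.12.x, where load_dataset accepts ignore_verifications.
from datasets import load_dataset

dataset = load_dataset("oscar", "unshuffled_original_ko", ignore_verifications=True)
print(dataset)
```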
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2939","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2939\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2939\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2939\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2939","id":999639630,"node_id":"PR_kwDODunzps4r58Gu","number":2939,"title":"MENYO-20k repo has moved, updating URL","user":{"login":"cdleong","id":4109253,"node_id":"MDQ6VXNlcjQxMDkyNTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4109253?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cdleong","html_url":"https:\/\/github.com\/cdleong","followers_url":"https:\/\/api.github.com\/users\/cdleong\/followers","following_url":"https:\/\/api.github.com\/users\/cdleong\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cdleong\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cdleong\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cdleong\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cdleong\/orgs","repos_url":"https:\/\/api.github.com\/users\/cdleong\/repos","events_url":"https:\/\/api.github.com\/users\/cdleong\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cdleong\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631905314000,"updated_at":1632238297000,"closed_at":1632238296000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2939","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2939","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2939.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2939.patch"},"body":"Dataset repo moved to https:\/\/github.com\/uds-lsv\/menyo-20k_MT, now editing URL to match.\r\n\r\nhttps:\/\/github.com\/uds-lsv\/menyo-20k_MT\/blob\/master\/data\/train.tsv is the file we're looking for","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2939\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2938","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2938\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2938\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2938\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2938","id":999552263,"node_id":"PR_kwDODunzps4r5qwa","number":2938,"title":"Take namespace into account in 
caching","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["We might have collisions if a username and a dataset_name are the same. Maybe instead serialize the dataset name by replacing `\/` with some string, eg `__SLASH__`, that will hopefully never appear in a dataset or user name (it's what I did in https:\/\/github.com\/huggingface\/datasets-preview-backend\/blob\/master\/benchmark\/scripts\/serialize.py. That way, all the datasets are one-level deep directories","IIRC we enforce that no repo id or username can contain `___` (exactly 3 underscores) specifically for this reason, so you can use that string (that we use in other projects)\r\n\r\ncc @Pierrci ","> IIRC we enforce that no repo id or username can contain ___ (exactly 3 underscores) specifically for this reason, so you can use that string (that we use in other projects)\r\n\r\nout of curiosity: where is it enforced?","> where is it enforced?\r\n\r\nNowhere yet but we should :) feel free to track in internal tracker and\/or implement, as this will be useful in the future","Thanks for the trick, I'm doing the change :)\r\nWe can use\r\n`~\/.cache\/huggingface\/datasets\/username___dataset_name` for the data\r\n`~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/username___dataset_name` for the python files"],"created_at":1631897853000,"updated_at":1632242634000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2938","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2938","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2938.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2938.patch"},"body":"Loading a dataset \"username\/dataset_name\" hosted by a user on the hub used to cache the dataset only taking into account the dataset name, and ignorign the username. 
Because of this, if a user later loads \"dataset_name\" without specifying the username, it would reload the dataset from the cache instead of failing.\r\n\r\nI changed the dataset cache and module cache mechanism to include the username in the name of the cache directory that is used:\r\n\r\n`~\/.cache\/huggingface\/datasets\/username\/dataset_name` for the data\r\n`~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/username\/dataset_name` for the python files\r\n<\/s>\r\nEDIT: actually using three underscores:\r\n`~\/.cache\/huggingface\/datasets\/username___dataset_name` for the data\r\n`~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/username___dataset_name` for the python files\r\n\r\nThis PR should fix the issue https:\/\/github.com\/huggingface\/datasets\/issues\/2842\r\n\r\ncc @stas00 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2938\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2937","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2937\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2937\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2937\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2937","id":999548277,"node_id":"I_kwDODunzps47k-V1","number":2937,"title":"load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied","user":{"login":"daqieq","id":40532020,"node_id":"MDQ6VXNlcjQwNTMyMDIw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/40532020?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/daqieq","html_url":"https:\/\/github.com\/daqieq","followers_url":"https:\/\/api.github.com\/users\/daqieq\/followers","following_url":"https:\/\/api.github.com\/users\/daqieq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/daqieq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/daqieq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/daqieq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/daqieq\/orgs","repos_url":"https:\/\/api.github.com\/users\/daqieq\/repos","events_url":"https:\/\/api.github.com\/users\/daqieq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/daqieq\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @daqieq, thanks for reporting.\r\n\r\nUnfortunately, I was not able to reproduce this bug:\r\n```ipython\r\nIn [1]: from datasets import load_dataset\r\n ...: ds = load_dataset('wiki_bio')\r\nDownloading: 7.58kB [00:00, 26.3kB\/s]\r\nDownloading: 2.71kB [00:00, ?B\/s]\r\nUsing custom data configuration default\r\nDownloading and preparing dataset wiki_bio\/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to 
C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\\r\n1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9...\r\nDownloading: 334MB [01:17, 4.32MB\/s]\r\nDataset wiki_bio downloaded and prepared to C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9. Subsequent calls will reuse thi\r\ns data.\r\n```\r\n\r\nThis kind of error messages usually happen because:\r\n- Your running Python script hasn't write access to that directory\r\n- You have another program (the File Explorer?) already browsing inside that directory","Thanks @albertvillanova for looking at it! I tried on my personal Windows machine and it downloaded just fine.\r\n\r\nRunning on my work machine and on a colleague's machine it is consistently hitting this error. It's not a write access issue because the `.incomplete` directory is written just fine. It just won't rename and then it deletes the directory in the `finally` step. Also the zip file is written and extracted fine in the downloads directory.\r\n\r\nThat leaves another program that might be interfering, and there are plenty of those in my work machine ... (full antivirus, data loss prevention, etc.). So the question remains, why not extend the `try` block to allow catching the error and circle back to the rename after the unknown program is finished doing its 'stuff'. This is the approach that I read about in the linked repo (see my comments above).\r\n\r\nIf it's not high priority, that's fine. However, if someone were to write an PR that solved this issue in our environment in an `except` clause, would it be reviewed for inclusion in a future release? Just wondering whether I should spend any more time on this issue."],"created_at":1631897530000,"updated_at":1632189875000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nStandard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset('wiki_bio')\r\n```\r\n\r\n## Expected results\r\nIt is expected that the dataset downloads without any errors.\r\n\r\n## Actual results\r\nPermissionError see trace below:\r\n```\r\nUsing custom data configuration default\r\nDownloading and preparing dataset wiki_bio\/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9...\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"C:\\Users\\username\\.conda\\envs\\hf\\lib\\site-packages\\datasets\\load.py\", line 1112, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"C:\\Users\\username\\.conda\\envs\\hf\\lib\\site-packages\\datasets\\builder.py\", line 644, in download_and_prepare\r\n self._save_info()\r\n File \"C:\\Users\\username\\.conda\\envs\\hf\\lib\\contextlib.py\", line 120, in __exit__\r\n next(self.gen)\r\n File \"C:\\Users\\username\\.conda\\envs\\hf\\lib\\site-packages\\datasets\\builder.py\", line 598, in incomplete_dir\r\n os.rename(tmp_dir, dirname)\r\nPermissionError: [WinError 5] Access is denied: 
'C:\\\\Users\\\\username\\\\.cache\\\\huggingface\\\\datasets\\\\wiki_bio\\\\default\\\\1.1.0\\\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9.incomplete' -> 'C:\\\\Users\\\\username\\\\.cache\\\\huggingface\\\\datasets\\\\wiki_bio\\\\default\\\\1.1.0\\\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9'\r\n```\r\nBy commenting out the os.rename() [L604](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/builder.py#L604) and the shutil.rmtree() [L607](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/builder.py#L607) lines, in my virtual environment, I was able to get the load process to complete, rename the directory manually and then rerun the `load_dataset('wiki_bio')` to get what I needed.\r\n\r\nIt seems that os.rename() in the `incomplete_dir` content manager is the culprit. Here's another project [Conan](https:\/\/github.com\/conan-io\/conan\/issues\/6560) with similar issue with os.rename() if it helps debug this issue.\r\n\r\n## Environment info\r\n- `datasets` version: 1.12.1\r\n- Platform: Windows-10-10.0.22449-SP0\r\n- Python version: 3.8.12\r\n- PyArrow version: 5.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2937\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2936","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2936\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2936\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2936\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2936","id":999521647,"node_id":"PR_kwDODunzps4r5knb","number":2936,"title":"Check that array is not Float as nan != 
nan","user":{"login":"Iwontbecreative","id":494951,"node_id":"MDQ6VXNlcjQ5NDk1MQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/494951?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Iwontbecreative","html_url":"https:\/\/github.com\/Iwontbecreative","followers_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/followers","following_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/orgs","repos_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/repos","events_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631895401000,"updated_at":1632217145000,"closed_at":1632217144000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2936","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2936","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2936.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2936.patch"},"body":"The Exception wants to check for issues with StructArrays\/ListArrays but catches FloatArrays with value nan as nan != nan.\r\nPass on FloatArrays as we should not raise an Exception for them.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2936\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2935","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2935\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2935\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2935\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2935","id":999518469,"node_id":"PR_kwDODunzps4r5j8B","number":2935,"title":"Add Jigsaw unintended 
Bias","user":{"login":"Iwontbecreative","id":494951,"node_id":"MDQ6VXNlcjQ5NDk1MQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/494951?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Iwontbecreative","html_url":"https:\/\/github.com\/Iwontbecreative","followers_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/followers","following_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/orgs","repos_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/repos","events_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Iwontbecreative\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Note that the tests seem to fail because of a bug in an Exception at the moment, see: https:\/\/github.com\/huggingface\/datasets\/pull\/2936 for the fix","@lhoestq implemented your changes, I think this might be ready for another look."],"created_at":1631895151000,"updated_at":1632269548000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2935","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2935","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2935.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2935.patch"},"body":"Hi,\r\n\r\nHere's a first attempt at this dataset. Would be great if it could be merged relatively quickly as it is needed for Bigscience-related stuff. 
\r\n\r\nThis requires manual download, and I had some trouble generating dummy_data in this setting, so welcoming feedback there.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2935\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2934","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2934\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2934\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2934\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2934","id":999477413,"node_id":"I_kwDODunzps47ktCl","number":2934,"title":"to_tf_dataset keeps a reference to the open data somewhere, causing issues on windows","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I did some investigation and, as it seems, the bug stems from [this line](https:\/\/github.com\/huggingface\/datasets\/blob\/8004d7c3e1d74b29c3e5b0d1660331cd26758363\/src\/datasets\/arrow_dataset.py#L325). The lifecycle of the dataset from the linked line is bound to one of the returned `tf.data.Dataset`. So my (hacky) solution involves wrapping the linked dataset with `weakref.proxy` and adding a custom `__del__` to `tf.python.data.ops.dataset_ops.TensorSliceDataset` (this is the type of a dataset that is returned by `tf.data.Dataset.from_tensor_slices`; this works for TF 2.x, but I'm not sure `tf.python.data.ops.dataset_ops` is a valid path for TF 1.x) that deletes the linked dataset, which is assigned to the dataset object as a property. 
Will open a draft PR soon!","Thanks a lot for investigating !"],"created_at":1631892413000,"updated_at":1632155004000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"To reproduce:\r\n```python\r\nimport datasets as ds\r\nimport weakref\r\nimport gc\r\n\r\nd = ds.load_dataset(\"mnist\", split=\"train\")\r\nref = weakref.ref(d._data.table)\r\ntfd = d.to_tf_dataset(\"image\", batch_size=1, shuffle=False, label_cols=\"label\")\r\ndel tfd, d\r\ngc.collect()\r\nassert ref() is None, \"Error: there is at least one reference left\"\r\n```\r\n\r\nThis causes issues because the table holds a reference to an open arrow file that should be closed. So on windows it's not possible to delete or move the arrow file afterwards.\r\n\r\nMoreover the CI test of the `to_tf_dataset` method isn't able to clean up the temporary arrow files because of this.\r\n\r\ncc @Rocketknight1 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2934\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2933","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2933\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2933\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2933\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2933","id":999392566,"node_id":"PR_kwDODunzps4r5MHs","number":2933,"title":"Replace script_version with revision","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I'm also fine with the removal in 1.15"],"created_at":1631887479000,"updated_at":1632131530000,"closed_at":1632131530000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2933","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2933","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2933.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2933.patch"},"body":"As discussed in https:\/\/github.com\/huggingface\/datasets\/pull\/2718#discussion_r707013278, the parameter name `script_version` is no longer applicable to datasets without loading script (i.e., 
datasets only with raw data files).\r\n\r\nThis PR replaces the parameter name `script_version` with `revision`.\r\n\r\nThis way, we are also aligned with:\r\n- Transformers: `AutoTokenizer.from_pretrained(..., revision=...)`\r\n- Hub: `HfApi.dataset_info(..., revision=...)`, `HfApi.upload_file(..., revision=...)`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2933\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2932","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2932\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2932\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2932\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2932","id":999317750,"node_id":"I_kwDODunzps47kGD2","number":2932,"title":"Conda build fails","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Why 1.9 ?\r\n\r\nhttps:\/\/anaconda.org\/HuggingFace\/datasets currently says 1.11","Alright I added 1.12.0 and 1.12.1 and fixed the conda build #2952 "],"created_at":1631882962000,"updated_at":1632238270000,"closed_at":1632238270000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nCurrent `datasets` version in conda is 1.9 instead of 1.12.\r\n\r\nThe build of the conda package fails.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2932\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2931","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2931\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2931\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2931\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2931","id":998326359,"node_id":"PR_kwDODunzps4r1-JH","number":2931,"title":"Fix bug in 
to_tf_dataset","user":{"login":"Rocketknight1","id":12866554,"node_id":"MDQ6VXNlcjEyODY2NTU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12866554?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Rocketknight1","html_url":"https:\/\/github.com\/Rocketknight1","followers_url":"https:\/\/api.github.com\/users\/Rocketknight1\/followers","following_url":"https:\/\/api.github.com\/users\/Rocketknight1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Rocketknight1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Rocketknight1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Rocketknight1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Rocketknight1\/orgs","repos_url":"https:\/\/api.github.com\/users\/Rocketknight1\/repos","events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I'm going to merge it, but yeah - hopefully the CI runner just cleans that up automatically and few other people run the tests on Windows anyway!"],"created_at":1631804883000,"updated_at":1631811698000,"closed_at":1631811697000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2931","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2931","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2931.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2931.patch"},"body":"Replace `set_format()` to `with_format()` so that we don't alter the original dataset in `to_tf_dataset()`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2931\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2930","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2930\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2930\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2930\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2930","id":998154311,"node_id":"I_kwDODunzps47fqBH","number":2930,"title":"Mutable columns argument breaks 
set_format","user":{"login":"Rocketknight1","id":12866554,"node_id":"MDQ6VXNlcjEyODY2NTU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12866554?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Rocketknight1","html_url":"https:\/\/github.com\/Rocketknight1","followers_url":"https:\/\/api.github.com\/users\/Rocketknight1\/followers","following_url":"https:\/\/api.github.com\/users\/Rocketknight1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Rocketknight1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Rocketknight1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Rocketknight1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Rocketknight1\/orgs","repos_url":"https:\/\/api.github.com\/users\/Rocketknight1\/repos","events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":{"login":"Rocketknight1","id":12866554,"node_id":"MDQ6VXNlcjEyODY2NTU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12866554?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Rocketknight1","html_url":"https:\/\/github.com\/Rocketknight1","followers_url":"https:\/\/api.github.com\/users\/Rocketknight1\/followers","following_url":"https:\/\/api.github.com\/users\/Rocketknight1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Rocketknight1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Rocketknight1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Rocketknight1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Rocketknight1\/orgs","repos_url":"https:\/\/api.github.com\/users\/Rocketknight1\/repos","events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/received_events","type":"User","site_admin":false},"assignees":[{"login":"Rocketknight1","id":12866554,"node_id":"MDQ6VXNlcjEyODY2NTU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12866554?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Rocketknight1","html_url":"https:\/\/github.com\/Rocketknight1","followers_url":"https:\/\/api.github.com\/users\/Rocketknight1\/followers","following_url":"https:\/\/api.github.com\/users\/Rocketknight1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Rocketknight1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Rocketknight1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Rocketknight1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Rocketknight1\/orgs","repos_url":"https:\/\/api.github.com\/users\/Rocketknight1\/repos","events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/received_events","type":"User","site_admin":false},{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"http
s:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Pushed a fix to my branch #2731 "],"created_at":1631795242000,"updated_at":1631800253000,"closed_at":1631800253000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nIf you pass a mutable list to the `columns` argument of `set_format` and then change the list afterwards, the returned columns also change.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"glue\", \"cola\")\r\n\r\ncolumn_list = [\"idx\", \"label\"]\r\ndataset.set_format(\"python\", columns=column_list)\r\ncolumn_list[1] = \"foo\" # Change the list after we call `set_format`\r\ndataset['train'][:4].keys()\r\n```\r\n\r\n## Expected results\r\n```python\r\ndict_keys(['idx', 'label'])\r\n```\r\n\r\n## Actual results\r\n```python\r\ndict_keys(['idx'])\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2930\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2929","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2929\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2929\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2929\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2929","id":997960024,"node_id":"PR_kwDODunzps4r015C","number":2929,"title":"Add regression test for null 
Sequence","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631782713000,"updated_at":1631867039000,"closed_at":1631867039000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2929","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2929","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2929.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2929.patch"},"body":"Relates to #2892 and #2900.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2929\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2928","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2928\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2928\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2928\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2928","id":997941506,"node_id":"PR_kwDODunzps4r0yUb","number":2928,"title":"Update BibTeX 
entry","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631781560000,"updated_at":1631795734000,"closed_at":1631795734000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2928","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2928","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2928.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2928.patch"},"body":"Update BibTeX entry.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2928\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2927","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2927\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2927\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2927\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2927","id":997654680,"node_id":"I_kwDODunzps47dwCY","number":2927,"title":"Datasets 1.12 dataset.filter TypeError: get_indices_from_mask_function() got an unexpected keyword 
argument","user":{"login":"timothyjlaurent","id":2000204,"node_id":"MDQ6VXNlcjIwMDAyMDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2000204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/timothyjlaurent","html_url":"https:\/\/github.com\/timothyjlaurent","followers_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/followers","following_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/orgs","repos_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/repos","events_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting, I'm looking into it :)","Fixed by #2950."],"created_at":1631754842000,"updated_at":1632155002000,"closed_at":1632155001000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the 
bug\r\nUpgrading to 1.12 caused `dataset.filter` call to fail with \r\n\r\n> get_indices_from_mask_function() got an unexpected keyword argument valid_rel_labels\r\n\r\n\r\n## Steps to reproduce the bug\r\n```pythondef \r\n\r\nfilter_good_rows(\r\n ex: Dict,\r\n valid_rel_labels: Set[str],\r\n valid_ner_labels: Set[str],\r\n tokenizer: PreTrainedTokenizerFast,\r\n) -> bool:\r\n \"\"\"Get the good rows\"\"\"\r\n encoding = get_encoding_for_text(text=ex[\"text\"], tokenizer=tokenizer)\r\n ex[\"encoding\"] = encoding\r\n for relation in ex[\"relations\"]:\r\n if not is_valid_relation(relation, valid_rel_labels):\r\n return False\r\n for span in ex[\"spans\"]:\r\n if not is_valid_span(span, valid_ner_labels, encoding):\r\n return False\r\n return True\r\n \r\ndef get_dataset(): \r\n loader_path = str(Path(__file__).parent \/ \"prodigy_dataset_builder.py\")\r\n ds = load_dataset(\r\n loader_path,\r\n name=\"prodigy-dataset\",\r\n data_files=sorted(file_paths),\r\n cache_dir=cache_dir,\r\n )[\"train\"]\r\n\r\n valid_ner_labels = set(vocab.ner_category)\r\n valid_relations = set(vocab.relation_types.keys())\r\n ds = ds.filter(\r\n filter_good_rows,\r\n fn_kwargs=dict(\r\n valid_rel_labels=valid_relations,\r\n valid_ner_labels=valid_ner_labels,\r\n tokenizer=vocab.tokenizer,\r\n ),\r\n keep_in_memory=True,\r\n num_proc=num_proc,\r\n )\r\n\r\n```\r\n\r\n`ds` is a `DatasetDict` produced by a jsonl dataset.\r\nThis runs fine on 1.11 but fails on 1.12\r\n\r\n**Stack Trace**\r\n\r\n\r\n\r\n## Expected results\r\n\r\nI expect 1.12 datasets filter to filter the dataset without raising as it does on 1.11\r\n\r\n## Actual results\r\n```\r\ntf_ner_rel_lib\/dataset.py:695: in load_prodigy_arrow_datasets_from_jsonl\r\n ds = ds.filter(\r\n..\/..\/..\/..\/.pyenv\/versions\/tf_ner_rel_lib\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py:185: in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n..\/..\/..\/..\/.pyenv\/versions\/tf_ner_rel_lib\/lib\/python3.8\/site-packages\/datasets\/fingerprint.py:398: in wrapper\r\n out = func(self, *args, **kwargs)\r\n..\/..\/..\/..\/.pyenv\/versions\/tf_ner_rel_lib\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py:2169: in filter\r\n indices = self.map(\r\n..\/..\/..\/..\/.pyenv\/versions\/tf_ner_rel_lib\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py:1686: in map\r\n return self._map_single(\r\n..\/..\/..\/..\/.pyenv\/versions\/tf_ner_rel_lib\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py:185: in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n..\/..\/..\/..\/.pyenv\/versions\/tf_ner_rel_lib\/lib\/python3.8\/site-packages\/datasets\/fingerprint.py:398: in wrapper\r\n out = func(self, *args, **kwargs)\r\n..\/..\/..\/..\/.pyenv\/versions\/tf_ner_rel_lib\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py:2048: in _map_single\r\n batch = apply_function_on_filtered_inputs(\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\ninputs = {'_input_hash': [2108817714, 1477695082, -1021597032, 2130671338, -1260483858, -1203431639, ...], '_task_hash': [18070...ons', 'relations', 'relations', ...], 'answer': ['accept', 'accept', 'accept', 'accept', 'accept', 'accept', ...], ...}\r\nindices = [0, 1, 2, 3, 4, 5, ...], check_same_num_examples = False, offset = 0\r\n\r\n def apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples=False, offset=0):\r\n \"\"\"Utility to apply the function on a selection of 
columns.\"\"\"\r\n nonlocal update_data\r\n fn_args = [inputs] if input_columns is None else [inputs[col] for col in input_columns]\r\n if offset == 0:\r\n effective_indices = indices\r\n else:\r\n effective_indices = [i + offset for i in indices] if isinstance(indices, list) else indices + offset\r\n processed_inputs = (\r\n> function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)\r\n )\r\nE TypeError: get_indices_from_mask_function() got an unexpected keyword argument 'valid_rel_labels'\r\n\r\n..\/..\/..\/..\/.pyenv\/versions\/tf_ner_rel_lib\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py:1939: TypeError\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.12.1\r\n- Platform: Mac\r\n- Python version: 3.8.9\r\n- PyArrow version: pyarrow==5.0.0\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2927\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2926","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2926\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2926\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2926\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2926","id":997463277,"node_id":"I_kwDODunzps47dBTt","number":2926,"title":"Error when downloading datasets to non-traditional cache directories","user":{"login":"dar-tau","id":45885627,"node_id":"MDQ6VXNlcjQ1ODg1NjI3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/45885627?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dar-tau","html_url":"https:\/\/github.com\/dar-tau","followers_url":"https:\/\/api.github.com\/users\/dar-tau\/followers","following_url":"https:\/\/api.github.com\/users\/dar-tau\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dar-tau\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dar-tau\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dar-tau\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dar-tau\/orgs","repos_url":"https:\/\/api.github.com\/users\/dar-tau\/repos","events_url":"https:\/\/api.github.com\/users\/dar-tau\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dar-tau\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631735986000,"updated_at":1631736135000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nWhen the cache directory is linked (soft link) to a directory on a NetApp device, the download fails. 
\r\n\r\n## Steps to reproduce the bug\r\n```bash\r\nln -s \/path\/to\/netapp\/.cache ~\/.cache\r\n```\r\n\r\n```python\r\nload_dataset(\"imdb\")\r\n```\r\n\r\n## Expected results\r\nSuccessfully loading IMDB dataset\r\n\r\n## Actual results\r\n```\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=33432835, \r\nnum_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0,\r\n dataset_name='imdb')}, {'expected': SplitInfo(name='test', num_bytes=32650697, num_examples=25000, dataset_name='imdb'),\r\n 'recorded': SplitInfo(name='test', num_bytes=659932, num_examples=503, dataset_name='imdb')}, {'expected':\r\n SplitInfo(name='unsupervised', num_bytes=67106814, num_examples=50000, dataset_name='imdb'), 'recorded':\r\n SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}]\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.1.2\r\n- Platform: Ubuntu \r\n- Python version: 3.8\r\n\r\n## Extra notes\r\nStranger yet, trying to debug the phenomenon, I found the range of results to vary a lot without clear direction:\r\n - With `cache_dir=\"\/path\/to\/netapp\/.cache\"` the same thing happens.\r\n - However, when linking `~\/netapp\/` to `\/path\/to\/netapp` *and* setting `cache_dir=\"~\/netapp\/.cache\/huggingface\/datasets\"` - it does work\r\n - On the other hand, when linking `~\/.cache` to `~\/netapp\/.cache` without using `cache_dir`, it does work anymore.\r\n\r\nWhile I could test it only for a NetApp device, it might have to do with any other mounted FS.\r\n\r\nThanks :)\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2926\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2925","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2925\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2925\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2925\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2925","id":997407034,"node_id":"PR_kwDODunzps4rzJ9s","number":2925,"title":"Add tutorial for no-code dataset 
upload","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Cool, love it ! :)\r\n\r\nFeel free to add a paragraph saying how to load the dataset:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"stevhliu\/demo\")\r\n\r\n# or to separate each csv file into several splits\r\ndata_files = {\"train\": \"train.csv\", \"test\": \"test.csv\"}\r\ndataset = load_dataset(\"stevhliu\/demo\", data_files=data_files)\r\nprint(dataset[\"train\"][0])\r\n```","Perfect, feel free to mark this PR ready for review :)\r\n\r\ncc @albertvillanova do you have any comment ? You can check the tutorial here:\r\nhttps:\/\/47389-250213286-gh.circle-artifacts.com\/0\/docs\/_build\/html\/no_code_upload.html\r\n\r\nMaybe we can just add a list of supported file types:\r\n- csv\r\n- json\r\n- json lines\r\n- text\r\n- parquet"],"created_at":1631732082000,"updated_at":1632248034000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2925","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2925","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2925.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2925.patch"},"body":"This PR is for a tutorial for uploading a dataset to the Hub. It relies on the Hub UI elements to upload a dataset, introduces the online tagging tool for creating tags, and the Dataset card template to get a head start on filling it out. 
The addition of this tutorial should make it easier for beginners to upload a dataset without accessing the terminal or knowing Git.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2925\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2924","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2924\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2924\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2924\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2924","id":997378113,"node_id":"I_kwDODunzps47cshB","number":2924,"title":"\"File name too long\" error for file locks","user":{"login":"gar1t","id":184949,"node_id":"MDQ6VXNlcjE4NDk0OQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/184949?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gar1t","html_url":"https:\/\/github.com\/gar1t","followers_url":"https:\/\/api.github.com\/users\/gar1t\/followers","following_url":"https:\/\/api.github.com\/users\/gar1t\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gar1t\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gar1t\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gar1t\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gar1t\/orgs","repos_url":"https:\/\/api.github.com\/users\/gar1t\/repos","events_url":"https:\/\/api.github.com\/users\/gar1t\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gar1t\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi, the filename here is less than 255\r\n```python\r\n>>> len(\"_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock\")\r\n154\r\n```\r\nso not sure why it's considered too long for your filesystem.\r\n(also note that the lock files we use always have smaller filenames than 255)\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/5d1a9f1e3c6c495dc0610b459e39d2eb8893f152\/src\/datasets\/utils\/filelock.py#L135-L135","Yes, you're right! I need to get you more info here. Either there's something going with the name itself that the file system doesn't like (an encoding that blows up the name length??) or perhaps there's something with the path that's causing the entire string to be used as a name. 
I haven't seen this on any system before and the Internet's not forthcoming with any info."],"created_at":1631729810000,"updated_at":1632232993000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\nGetting the following error when calling `load_dataset(\"gar1t\/test\")`:\r\n\r\n```\r\nOSError: [Errno 36] File name too long: '\/.cache\/huggingface\/datasets\/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock'\r\n```\r\n\r\n## Steps to reproduce the bug\r\n\r\nWhere the user cache dir (e.g. `~\/.cache`) is on a file system that limits filenames to 255 chars (e.g. ext4):\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nload_dataset(\"gar1t\/test\")\r\n```\r\n\r\n## Expected results\r\n\r\nExpect the function to return without an error.\r\n\r\n## Actual results\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/lib\/python3.9\/site-packages\/datasets\/load.py\", line 1112, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 644, in download_and_prepare\r\n self._save_info()\r\n File \"\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 765, in _save_info\r\n with FileLock(lock_path):\r\n File \"\/lib\/python3.9\/site-packages\/datasets\/utils\/filelock.py\", line 323, in __enter__\r\n self.acquire()\r\n File \"\/lib\/python3.9\/site-packages\/datasets\/utils\/filelock.py\", line 272, in acquire\r\n self._acquire()\r\n File \"\/lib\/python3.9\/site-packages\/datasets\/utils\/filelock.py\", line 403, in _acquire\r\n fd = os.open(self._lock_file, open_mode)\r\nOSError: [Errno 36] File name too long: '\/.cache\/huggingface\/datasets\/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock'\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.12.1\r\n- Platform: Linux-5.11.0-27-generic-x86_64-with-glibc2.31\r\n- Python version: 3.9.7\r\n- PyArrow version: 5.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2924\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2923","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2923\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2923\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2923\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2923","id":997351590,"node_id":"I_kwDODunzps47cmCm","number":2923,"title":"Loading an autonlp dataset raises in normal mode but not in streaming 
mode","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631727878000,"updated_at":1631727878000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\nThe same dataset (from autonlp) raises an error in normal mode, but does not raise in streaming mode\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"severo\/autonlp-data-sentiment_detection-3c8bcd36\", split=\"train\", streaming=False)\r\n## raises an error\r\n\r\nload_dataset(\"severo\/autonlp-data-sentiment_detection-3c8bcd36\", split=\"train\", streaming=True)\r\n## does not raise an error\r\n```\r\n\r\n## Expected results\r\n\r\nBoth calls should raise the same error\r\n\r\n## Actual results\r\n\r\nCall with streaming=False:\r\n\r\n```\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 5825.42it\/s]\r\nUsing custom data configuration autonlp-data-sentiment_detection-3c8bcd36-fe30267462d1d42b\r\nDownloading and preparing dataset 
json\/autonlp-data-sentiment_detection-3c8bcd36 to \/home\/slesage\/.cache\/huggingface\/datasets\/json\/autonlp-data-sentiment_detection-3c8bcd36-fe30267462d1d42b\/0.0.0\/d75ead8d5cfcbe67495df0f89bd262f0023257fbbbd94a730313295f3d756d50...\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5\/5 [00:00<00:00, 15923.71it\/s]\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5\/5 [00:00<00:00, 3346.88it\/s]\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1112, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 636, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 726, in _download_and_prepare\r\n 
self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 1187, in _prepare_split\r\n writer.write_table(table)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/datasets\/arrow_writer.py\", line 418, in write_table\r\n pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/datasets\/arrow_writer.py\", line 418, in \r\n pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)\r\n File \"pyarrow\/table.pxi\", line 1249, in pyarrow.lib.Table.__getitem__\r\n File \"pyarrow\/table.pxi\", line 1825, in pyarrow.lib.Table.column\r\n File \"pyarrow\/table.pxi\", line 1800, in pyarrow.lib.Table._ensure_integer_index\r\nKeyError: 'Field \"splits\" does not exist in table schema'\r\n```\r\n\r\nCall with `streaming=False`:\r\n\r\n```\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 6000.43it\/s]\r\nUsing custom data configuration 
autonlp-data-sentiment_detection-3c8bcd36-fe30267462d1d42b\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5\/5 [00:00<00:00, 46916.15it\/s]\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5\/5 [00:00<00:00, 148734.18it\/s]\r\n```\r\n\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.12.1.dev0\r\n- Platform: Linux-5.11.0-1017-aws-x86_64-with-glibc2.29\r\n- Python version: 3.8.11\r\n- PyArrow version: 4.0.1\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2923\/timeline","performed_via_github_app":null,"is_pull_request":false} 
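A minimal illustrative sketch (not part of the original report) consolidating the reproduction steps quoted in issue 2923 above; it assumes the `severo/autonlp-data-sentiment_detection-3c8bcd36` dataset is still reachable and that `datasets` is installed, and simply contrasts the two loading modes so the differing behaviour is visible in one run:

```python
from datasets import load_dataset

name = "severo/autonlp-data-sentiment_detection-3c8bcd36"

# Non-streaming mode: reported in the issue to fail with
# KeyError: 'Field "splits" does not exist in table schema'
try:
    load_dataset(name, split="train", streaming=False)
    print("non-streaming: loaded without error")
except Exception as err:
    print(f"non-streaming raised: {type(err).__name__}: {err}")

# Streaming mode: reported to yield examples without raising
streamed = load_dataset(name, split="train", streaming=True)
print("streaming first record:", next(iter(streamed)))
```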
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2922","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2922\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2922\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2922\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2922","id":997332662,"node_id":"PR_kwDODunzps4ry6-s","number":2922,"title":"Fix conversion of multidim arrays in list to arrow","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631726496000,"updated_at":1631726572000,"closed_at":1631726505000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2922","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2922","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2922.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2922.patch"},"body":"Arrow only supports 1-dim arrays. Previously we were converting all the numpy arrays to python list before instantiating arrow arrays to workaround this limitation.\r\nHowever in #2361 we started to keep numpy arrays in order to keep their dtypes.\r\nIt works when we pass any multi-dim numpy array (the conversion to arrow has been added on our side), but not for lists of multi-dim numpy arrays.\r\n\r\nIn this PR I added two strategies:\r\n- one that takes a list of multi-dim numpy arrays on returns an arrow array in an optimized way (more common case)\r\n- one that takes a list of possibly very nested data (lists, dicts, tuples) containing multi-dim arrays. This one is less optimized since it converts all the multi-dim numpy arrays into lists of 1-d arrays for compatibility with arrow. 
This strategy is simpler that just trying to create the arrow array from a possibly very nested data structure, but in the future we can improve it if needed.\r\n\r\nFix https:\/\/github.com\/huggingface\/datasets\/issues\/2921","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2922\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2921","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2921\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2921\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2921\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2921","id":997325424,"node_id":"I_kwDODunzps47cfpw","number":2921,"title":"Using a list of multi-dim numpy arrays raises an error \"can only convert 1-dimensional array values\"","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631725931000,"updated_at":1631726505000,"closed_at":1631726505000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"This error has been introduced in https:\/\/github.com\/huggingface\/datasets\/pull\/2361\r\n\r\nTo reproduce:\r\n```python\r\nimport numpy as np\r\nfrom datasets import Dataset\r\n\r\nd = Dataset.from_dict({\"a\": [np.zeros((2, 2))]})\r\n```\r\nraises\r\n```python\r\nTraceback (most recent call last):\r\n File \"playground\/ttest.py\", line 5, in \r\n d = Dataset.from_dict({\"a\": [np.zeros((2, 2))]}).with_format(\"torch\")\r\n File \"\/Users\/quentinlhoest\/Desktop\/hf\/nlp\/src\/datasets\/arrow_dataset.py\", line 458, in from_dict\r\n pa_table = InMemoryTable.from_pydict(mapping=mapping)\r\n File \"\/Users\/quentinlhoest\/Desktop\/hf\/nlp\/src\/datasets\/table.py\", line 365, in from_pydict\r\n return cls(pa.Table.from_pydict(*args, **kwargs))\r\n File \"pyarrow\/table.pxi\", line 1639, in pyarrow.lib.Table.from_pydict\r\n File \"pyarrow\/array.pxi\", line 332, in pyarrow.lib.asarray\r\n File \"pyarrow\/array.pxi\", line 223, in pyarrow.lib.array\r\n File \"pyarrow\/array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n File \"\/Users\/quentinlhoest\/Desktop\/hf\/nlp\/src\/datasets\/arrow_writer.py\", line 107, in __arrow_array__\r\n out = pa.array(self.data, type=type)\r\n File \"pyarrow\/array.pxi\", line 306, in 
pyarrow.lib.array\r\n File \"pyarrow\/array.pxi\", line 39, in pyarrow.lib._sequence_to_array\r\n File \"pyarrow\/error.pxi\", line 143, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow\/error.pxi\", line 99, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Can only convert 1-dimensional array values","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2921\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2920","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2920\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2920\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2920\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2920","id":997323014,"node_id":"PR_kwDODunzps4ry4_u","number":2920,"title":"Fix unwanted tqdm bar when accessing examples","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631725751000,"updated_at":1631726304000,"closed_at":1631726304000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2920","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2920","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2920.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2920.patch"},"body":"A change in #2814 added bad progress bars in `map_nested`. 
Now they're disabled by default\r\n\r\nFix #2919 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2920\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2919","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2919\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2919\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2919\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2919","id":997127487,"node_id":"I_kwDODunzps47bvU_","number":2919,"title":"Unwanted progress bars when accessing examples","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.gith
ub.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["doing a patch release now :)"],"created_at":1631714710000,"updated_at":1631726509000,"closed_at":1631726303000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"When accessing examples from a dataset formatted for pytorch, some progress bars appear when accessing examples:\r\n```python\r\nIn [1]: import datasets as ds \r\n\r\nIn [2]: d = ds.Dataset.from_dict({\"a\": [0, 1, 2]}).with_format(\"torch\") \r\n\r\nIn [3]: d[0] \r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 3172.70it\/s]\r\nOut[3]: {'a': tensor(0)}\r\n```\r\n\r\nThis is because the pytorch formatter calls `map_nested` that uses progress bars\r\n\r\ncc @sgugger ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2919\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2918","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2918\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2918\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2918\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2918","id":997063347,"node_id":"I_kwDODunzps47bfqz","number":2918,"title":"`Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming","user":{"login":"SBrandeis","id":33657802,"node_id":"MDQ6VXNlcjMzNjU3ODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33657802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SBrandeis","html_url":"https:\/\/github.com\/SBrandeis","followers_url":"https:\/\/api.github.com\/users\/SBrandeis\/followers","following_url":"https:\/\/api.github.com\/users\/SBrandeis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SBrandeis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SBrandeis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SBrandeis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SBrandeis\/orgs","repos_url":"https:\/\/api.github.com\/users\/SBrandeis\/repos","events_url":"https:\/\/api.github.com\/users\/SBrandeis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SBrandeis\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"},{"id":3287858981,"node_id":"MDU6TGFiZWwzMjg3ODU4OTgx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/streaming","name":"streaming","color":"fef2c0","default":false,"description":""}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @SBrandeis, thanks for reporting! 
^^\r\n\r\nI think this is an issue with `fsspec`: https:\/\/github.com\/intake\/filesystem_spec\/issues\/389\r\n\r\nI will ask them if they are planning to fix it...","Code to reproduce the bug: `ClientPayloadError: 400, message='Can not decode content-encoding: gzip'`\r\n```python\r\nIn [1]: import fsspec\r\n\r\nIn [2]: import json\r\n\r\nIn [3]: with fsspec.open('https:\/\/raw.githubusercontent.com\/allenai\/scitldr\/master\/SciTLDR-Data\/SciTLDR-FullText\/test.jsonl', encoding=\"utf-8\") as f:\r\n ...: for row in f:\r\n ...: data = json.loads(row)\r\n ...:\r\n---------------------------------------------------------------------------\r\nClientPayloadError Traceback (most recent call last)\r\n```","Thanks for investigating @albertvillanova ! \ud83e\udd17 "],"created_at":1631711167000,"updated_at":1632127898000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\nTrying to load the `\"FullText\"` config of the `\"scitldr\"` dataset with `streaming=True` raises an error from `aiohttp`:\r\n```python\r\nClientPayloadError: 400, message='Can not decode content-encoding: gzip'\r\n```\r\n\r\ncc @lhoestq \r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\n\r\niter_dset = iter(\r\n load_dataset(\"scitldr\", name=\"FullText\", split=\"test\", streaming=True)\r\n)\r\n\r\nnext(iter_dset)\r\n```\r\n\r\n## Expected results\r\nReturns the first sample of the dataset\r\n\r\n## Actual results\r\nCalling `__next__` crashes with the following Traceback:\r\n\r\n```python\r\n----> 1 next(dset_iter)\r\n\r\n~\\miniconda3\\envs\\datasets\\lib\\site-packages\\datasets\\iterable_dataset.py in __iter__(self)\r\n 339\r\n 340 def __iter__(self):\r\n--> 341 for key, example in self._iter():\r\n 342 if self.features:\r\n 343 # we encode the example for ClassLabel feature types for example\r\n\r\n~\\miniconda3\\envs\\datasets\\lib\\site-packages\\datasets\\iterable_dataset.py in _iter(self)\r\n 336 else:\r\n 337 ex_iterable = self._ex_iterable\r\n--> 338 yield from ex_iterable\r\n 339\r\n 340 def __iter__(self):\r\n\r\n~\\miniconda3\\envs\\datasets\\lib\\site-packages\\datasets\\iterable_dataset.py in __iter__(self)\r\n 76\r\n 77 def __iter__(self):\r\n---> 78 for key, example in self.generate_examples_fn(**self.kwargs):\r\n 79 yield key, example\r\n 80\r\n\r\n~\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\scitldr\\72d6e2195786c57e1d343066fb2cc4f93ea39c5e381e53e6ae7c44bbfd1f05ef\\scitldr.py in _generate_examples(self, filepath, split)\r\n 162\r\n 163 with open(filepath, encoding=\"utf-8\") as f:\r\n--> 164 for id_, row in enumerate(f):\r\n 165 data = json.loads(row)\r\n 166 if self.config.name == \"AIC\":\r\n\r\n~\\miniconda3\\envs\\datasets\\lib\\site-packages\\fsspec\\implementations\\http.py in read(self, length)\r\n 496 else:\r\n 497 length = min(self.size - self.loc, length)\r\n--> 498 return super().read(length)\r\n 499\r\n 500 async def async_fetch_all(self):\r\n\r\n~\\miniconda3\\envs\\datasets\\lib\\site-packages\\fsspec\\spec.py in read(self, length)\r\n 1481 # don't even bother calling fetch\r\n 1482 return b\"\"\r\n-> 1483 out = self.cache._fetch(self.loc, self.loc + length)\r\n 1484 self.loc += len(out)\r\n 1485 return out\r\n\r\n~\\miniconda3\\envs\\datasets\\lib\\site-packages\\fsspec\\caching.py in _fetch(self, start, end)\r\n 378 elif start < self.start:\r\n 379 if self.end - end > self.blocksize:\r\n--> 380 self.cache = self.fetcher(start, bend)\r\n 381 self.start = 
start\r\n 382 else:\r\n\r\n~\\miniconda3\\envs\\datasets\\lib\\site-packages\\fsspec\\asyn.py in wrapper(*args, **kwargs)\r\n 86 def wrapper(*args, **kwargs):\r\n 87 self = obj or args[0]\r\n---> 88 return sync(self.loop, func, *args, **kwargs)\r\n 89\r\n 90 return wrapper\r\n\r\n~\\miniconda3\\envs\\datasets\\lib\\site-packages\\fsspec\\asyn.py in sync(loop, func, timeout, *args, **kwargs)\r\n 67 raise FSTimeoutError\r\n 68 if isinstance(result[0], BaseException):\r\n---> 69 raise result[0]\r\n 70 return result[0]\r\n 71\r\n\r\n~\\miniconda3\\envs\\datasets\\lib\\site-packages\\fsspec\\asyn.py in _runner(event, coro, result, timeout)\r\n 23 coro = asyncio.wait_for(coro, timeout=timeout)\r\n 24 try:\r\n---> 25 result[0] = await coro\r\n 26 except Exception as ex:\r\n 27 result[0] = ex\r\n\r\n~\\miniconda3\\envs\\datasets\\lib\\site-packages\\fsspec\\implementations\\http.py in async_fetch_range(self, start, end)\r\n 538 if r.status == 206:\r\n 539 # partial content, as expected\r\n--> 540 out = await r.read()\r\n 541 elif \"Content-Length\" in r.headers:\r\n 542 cl = int(r.headers[\"Content-Length\"])\r\n\r\n~\\miniconda3\\envs\\datasets\\lib\\site-packages\\aiohttp\\client_reqrep.py in read(self)\r\n 1030 if self._body is None:\r\n 1031 try:\r\n-> 1032 self._body = await self.content.read()\r\n 1033 for trace in self._traces:\r\n 1034 await trace.send_response_chunk_received(\r\n\r\n~\\miniconda3\\envs\\datasets\\lib\\site-packages\\aiohttp\\streams.py in read(self, n)\r\n 342 async def read(self, n: int = -1) -> bytes:\r\n 343 if self._exception is not None:\r\n--> 344 raise self._exception\r\n 345\r\n 346 # migration problem; with DataQueue you have to catch\r\n\r\nClientPayloadError: 400, message='Can not decode content-encoding: gzip'\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.12.0\r\n- Platform: Windows-10-10.0.19041-SP0\r\n- Python version: 3.8.5\r\n- PyArrow version: 2.0.0\r\n- aiohttp version: 3.7.4.post0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2918\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2917","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2917\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2917\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2917\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2917","id":997041658,"node_id":"I_kwDODunzps47baX6","number":2917,"title":"windows download 
abnormal","user":{"login":"wei1826676931","id":52347799,"node_id":"MDQ6VXNlcjUyMzQ3Nzk5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/52347799?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/wei1826676931","html_url":"https:\/\/github.com\/wei1826676931","followers_url":"https:\/\/api.github.com\/users\/wei1826676931\/followers","following_url":"https:\/\/api.github.com\/users\/wei1826676931\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/wei1826676931\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/wei1826676931\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/wei1826676931\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/wei1826676931\/orgs","repos_url":"https:\/\/api.github.com\/users\/wei1826676931\/repos","events_url":"https:\/\/api.github.com\/users\/wei1826676931\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/wei1826676931\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Is there some kind of proxy that is configured in your browser that gives you access to internet ? If it's the case it could explain why it doesn't work in the code, since the proxy wouldn't be used","It is indeed an agency problem, thank you very, very much","Let me know if you have other questions :)\r\n\r\nClosing this issue now"],"created_at":1631709935000,"updated_at":1631812668000,"closed_at":1631812668000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nThe script clearly exists (accessible from the browser), but the script download fails on windows. Then I tried it again and it can be downloaded normally on linux. 
Why?\r\n## Steps to reproduce the bug\r\n```python3.7 + windows\r\n![image](https:\/\/user-images.githubusercontent.com\/52347799\/133436174-4303f847-55d5-434f-a749-08da3bb9b654.png)\r\n\r\n\r\n# Sample code to reproduce the bug\r\n```\r\n\r\n## Expected results\r\nIt can be downloaded normally.\r\n\r\n## Actual results\r\nIt can't.\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.11.0\r\n- Platform: Windows\r\n- Python version: 3.7\r\n- PyArrow version:\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2917\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2916","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2916\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2916\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2916\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2916","id":997003661,"node_id":"PR_kwDODunzps4rx5ua","number":2916,"title":"Add OpenAI's pass@k code evaluation metric","user":{"login":"lvwerra","id":8264887,"node_id":"MDQ6VXNlcjgyNjQ4ODc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8264887?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lvwerra","html_url":"https:\/\/github.com\/lvwerra","followers_url":"https:\/\/api.github.com\/users\/lvwerra\/followers","following_url":"https:\/\/api.github.com\/users\/lvwerra\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lvwerra\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lvwerra\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lvwerra\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lvwerra\/orgs","repos_url":"https:\/\/api.github.com\/users\/lvwerra\/repos","events_url":"https:\/\/api.github.com\/users\/lvwerra\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lvwerra\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> The implementation makes heavy use of multiprocessing which this PR does not touch. Is this conflicting with multiprocessing natively integrated in datasets?\r\n\r\nIt should work normally, but feel free to test it.\r\nThere is some documentation about using metrics in a distributed setup that uses multiprocessing [here](https:\/\/huggingface.co\/docs\/datasets\/loading.html?highlight=rank#distributed-setup)\r\nYou can test to spawn several processes where each process would load the metric. Then in each process you add some references\/predictions to the metric. Finally you call compute() in each process and on process 0 it should return the result on all the references\/predictions\r\n\r\nLet me know if you have questions or if I can help","Is there a good way to debug the Windows tests? 
I suspect it is an issue with `multiprocessing`, but I can't see the error messages."],"created_at":1631707543000,"updated_at":1631951964000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2916","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2916","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2916.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2916.patch"},"body":"This PR introduces the `code_eval` metric which implements [OpenAI's code evaluation harness](https:\/\/github.com\/openai\/human-eval) introduced in the [Codex paper](https:\/\/arxiv.org\/abs\/2107.03374). It is heavily based on the original implementation and just adapts the interface to follow the `predictions`\/`references` convention.\r\n\r\nThe addition of this metric should enable the evaluation against the code evaluation datasets added in #2897 and #2893.\r\n\r\nA few open questions:\r\n\r\n- The implementation makes heavy use of multiprocessing which this PR does not touch. Is this conflicting with multiprocessing natively integrated in `datasets`?\r\n- This metric executes generated Python code and as such poses the risk of executing malicious code. OpenAI addresses this issue by 1) commenting out the `exec` call in the code so the user has to actively uncomment it and read the warning, and 2) suggesting the use of a sandbox environment (gVisor container). Should we add a similar safeguard? E.g. a prompt that needs to be answered when initialising the metric? Or at least a warning message?\r\n- Naming: the implementation sticks to the `predictions`\/`references` naming; however, the references are not reference solutions but unit tests that check the solution. While reference solutions are also available, they are not used. 
Should the naming be adapted?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2916\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2915","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2915\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2915\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2915\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2915","id":996870071,"node_id":"PR_kwDODunzps4rxfWb","number":2915,"title":"Fix fsspec AbstractFileSystem access","user":{"login":"pierre-godard","id":3969168,"node_id":"MDQ6VXNlcjM5NjkxNjg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3969168?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/pierre-godard","html_url":"https:\/\/github.com\/pierre-godard","followers_url":"https:\/\/api.github.com\/users\/pierre-godard\/followers","following_url":"https:\/\/api.github.com\/users\/pierre-godard\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/pierre-godard\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/pierre-godard\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/pierre-godard\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/pierre-godard\/orgs","repos_url":"https:\/\/api.github.com\/users\/pierre-godard\/repos","events_url":"https:\/\/api.github.com\/users\/pierre-godard\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/pierre-godard\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631698760000,"updated_at":1631705724000,"closed_at":1631705724000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2915","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2915","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2915.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2915.patch"},"body":"This addresses the issue from #2914 by changing the way fsspec's AbstractFileSystem is accessed.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2915\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2914","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2914\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2914\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2914\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2914","id":996770168,"node_id":"I_kwDODunzps47aYF4","number":2914,"title":"Having a dependency defining fsspec entrypoint raises an AttributeError when importing 
datasets","user":{"login":"pierre-godard","id":3969168,"node_id":"MDQ6VXNlcjM5NjkxNjg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3969168?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/pierre-godard","html_url":"https:\/\/github.com\/pierre-godard","followers_url":"https:\/\/api.github.com\/users\/pierre-godard\/followers","following_url":"https:\/\/api.github.com\/users\/pierre-godard\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/pierre-godard\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/pierre-godard\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/pierre-godard\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/pierre-godard\/orgs","repos_url":"https:\/\/api.github.com\/users\/pierre-godard\/repos","events_url":"https:\/\/api.github.com\/users\/pierre-godard\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/pierre-godard\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Closed by #2915."],"created_at":1631692446000,"updated_at":1631724557000,"closed_at":1631724556000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nIn one of my project, I defined a custom fsspec filesystem with an entrypoint.\r\nMy guess is that by doing so, a variable named `spec` is created in the module `fsspec` (created by entering a for loop as there are entrypoints defined, see the loop in question [here](https:\/\/github.com\/intake\/filesystem_spec\/blob\/0589358d8a029ed6b60d031018f52be2eb721291\/fsspec\/__init__.py#L55)).\r\nSo that `fsspec.spec`, that was previously referring to the `spec` submodule, is now referring to that `spec` variable.\r\nThis make the import of datasets failing as it is using that `fsspec.spec`.\r\n\r\n## Steps to reproduce the bug\r\nI could reproduce the bug with a dummy poetry project.\r\n\r\nHere is the pyproject.toml:\r\n```toml\r\n[tool.poetry]\r\nname = \"debug-datasets\"\r\nversion = \"0.1.0\"\r\ndescription = \"\"\r\nauthors = [\"Pierre Godard\"]\r\n\r\n[tool.poetry.dependencies]\r\npython = \"^3.8\"\r\ndatasets = \"^1.11.0\"\r\n\r\n[tool.poetry.dev-dependencies]\r\n\r\n[build-system]\r\nrequires = [\"poetry-core>=1.0.0\"]\r\nbuild-backend = \"poetry.core.masonry.api\"\r\n\r\n[tool.poetry.plugins.\"fsspec.specs\"]\r\n\"file2\" = \"fsspec.implementations.local.LocalFileSystem\"\r\n```\r\n\r\nThe only other file being a `debug_datasets\/__init__.py` empty file.\r\n\r\nThe overall structure of the project is as follows:\r\n```\r\n.\r\n\u251c\u2500\u2500 pyproject.toml\r\n\u2514\u2500\u2500 debug_datasets\r\n \u2514\u2500\u2500 __init__.py\r\n```\r\n\r\nThen, within the project folder run:\r\n\r\n```\r\npoetry install\r\npoetry run python\r\n```\r\n\r\nAnd in the python interpreter, try to import `datasets`:\r\n\r\n```\r\nimport datasets\r\n```\r\n\r\n## Expected results\r\nThe import should run successfully.\r\n\r\n## Actual results\r\n\r\nHere is the trace of the error I get:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File 
\"\/home\/godarpi\/.cache\/pypoetry\/virtualenvs\/debug-datasets-JuFzTKL--py3.8\/lib\/python3.8\/site-packages\/datasets\/__init__.py\", line 33, in \r\n from .arrow_dataset import Dataset, concatenate_datasets\r\n File \"\/home\/godarpi\/.cache\/pypoetry\/virtualenvs\/debug-datasets-JuFzTKL--py3.8\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 48, in \r\n from .filesystems import extract_path_from_uri, is_remote_filesystem\r\n File \"\/home\/godarpi\/.cache\/pypoetry\/virtualenvs\/debug-datasets-JuFzTKL--py3.8\/lib\/python3.8\/site-packages\/datasets\/filesystems\/__init__.py\", line 30, in \r\n def is_remote_filesystem(fs: fsspec.spec.AbstractFileSystem) -> bool:\r\nAttributeError: 'EntryPoint' object has no attribute 'AbstractFileSystem'\r\n```\r\n\r\n## Suggested fix\r\n\r\n`datasets\/filesystems\/__init__.py`, line 30, replace:\r\n```\r\n def is_remote_filesystem(fs: fsspec.spec.AbstractFileSystem) -> bool:\r\n```\r\nby:\r\n```\r\n def is_remote_filesystem(fs: fsspec.AbstractFileSystem) -> bool:\r\n```\r\n\r\nI will come up with a PR soon if this effectively solves the issue.\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.11.0\r\n- Platform: WSL2 (Ubuntu 20.04.1 LTS)\r\n- Python version: 3.8.5\r\n- PyArrow version: 5.0.0\r\n- `fsspec` version: 2021.8.1\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2914\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2913","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2913\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2913\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2913\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2913","id":996436368,"node_id":"I_kwDODunzps47ZGmQ","number":2913,"title":"timit_asr dataset only includes one text phrase","user":{"login":"margotwagner","id":39107794,"node_id":"MDQ6VXNlcjM5MTA3Nzk0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/39107794?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/margotwagner","html_url":"https:\/\/github.com\/margotwagner","followers_url":"https:\/\/api.github.com\/users\/margotwagner\/followers","following_url":"https:\/\/api.github.com\/users\/margotwagner\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/margotwagner\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/margotwagner\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/margotwagner\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/margotwagner\/orgs","repos_url":"https:\/\/api.github.com\/users\/margotwagner\/repos","events_url":"https:\/\/api.github.com\/users\/margotwagner\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/margotwagner\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @margotwagner, \r\nThis bug was fixed in #1995. 
Upgrading the datasets should work (min v1.8.0 ideally)","Hi @margotwagner,\r\n\r\nYes, as @bhavitvyamalik has commented, this bug was fixed in `datasets` version 1.5.0. You need to update it, as your current version is 1.4.1:\r\n> Environment info\r\n> - `datasets` version: 1.4.1"],"created_at":1631653567000,"updated_at":1631693119000,"closed_at":1631693118000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nThe dataset 'timit_asr' only includes one text phrase. It only includes the transcription \"Would such an act of refusal be useful?\" multiple times rather than different phrases.\r\n\r\n## Steps to reproduce the bug\r\nNote: I am following the tutorial https:\/\/huggingface.co\/blog\/fine-tune-wav2vec2-english\r\n\r\n1. Install the dataset and other packages\r\n```python\r\n!pip install datasets>=1.5.0\r\n!pip install transformers==4.4.0\r\n!pip install soundfile\r\n!pip install jiwer\r\n```\r\n2. Load the dataset\r\n```python\r\nfrom datasets import load_dataset, load_metric\r\n\r\ntimit = load_dataset(\"timit_asr\")\r\n```\r\n3. Remove columns that we don't want\r\n```python\r\ntimit = timit.remove_columns([\"phonetic_detail\", \"word_detail\", \"dialect_region\", \"id\", \"sentence_type\", \"speaker_id\"])\r\n```\r\n4. Write a short function to display some random samples of the dataset.\r\n```python\r\nfrom datasets import ClassLabel\r\nimport random\r\nimport pandas as pd\r\nfrom IPython.display import display, HTML\r\n\r\ndef show_random_elements(dataset, num_examples=10):\r\n assert num_examples <= len(dataset), \"Can't pick more elements than there are in the dataset.\"\r\n picks = []\r\n for _ in range(num_examples):\r\n pick = random.randint(0, len(dataset)-1)\r\n while pick in picks:\r\n pick = random.randint(0, len(dataset)-1)\r\n picks.append(pick)\r\n \r\n df = pd.DataFrame(dataset[picks])\r\n display(HTML(df.to_html()))\r\n\r\nshow_random_elements(timit[\"train\"].remove_columns([\"file\"]))\r\n```\r\n\r\n## Expected results\r\n10 random different transcription phrases.\r\n\r\n## Actual results\r\n10 of the same transcription phrase \"Would such an act of refusal be useful?\"\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.4.1\r\n- Platform: macOS-10.15.7-x86_64-i386-64bit\r\n- Python version: 3.8.5\r\n- PyArrow version: not listed\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2913\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2912","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2912\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2912\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2912\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2912","id":996256005,"node_id":"PR_kwDODunzps4rvhgp","number":2912,"title":"Update link to Blog in docs 
footer","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631640194000,"updated_at":1631692763000,"closed_at":1631692763000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2912","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2912","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2912.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2912.patch"},"body":"Update link.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2912\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2911","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2911\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2911\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2911\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2911","id":996202598,"node_id":"PR_kwDODunzps4rvW7Y","number":2911,"title":"Fix exception 
chaining","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631636369000,"updated_at":1631804684000,"closed_at":1631804684000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2911","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2911","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2911.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2911.patch"},"body":"Fix exception chaining to avoid tracebacks with message: `During handling of the above exception, another exception occurred:`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2911\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2910","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2910\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2910\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2910\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2910","id":996149632,"node_id":"PR_kwDODunzps4rvL9N","number":2910,"title":"feat: \ud83c\udfb8 pass additional arguments to get private configs + 
info","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Included in https:\/\/github.com\/huggingface\/datasets\/pull\/2906"],"created_at":1631633059000,"updated_at":1631722749000,"closed_at":1631722746000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2910","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2910","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2910.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2910.patch"},"body":"`use_auth_token` can now be passed to the functions to get the configs\r\nor infos of private datasets on the hub","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2910\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2909","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2909\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2909\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2909\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2909","id":996002180,"node_id":"PR_kwDODunzps4rutdo","number":2909,"title":"fix anli 
splits","user":{"login":"zaidalyafeai","id":15667714,"node_id":"MDQ6VXNlcjE1NjY3NzE0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15667714?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/zaidalyafeai","html_url":"https:\/\/github.com\/zaidalyafeai","followers_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/followers","following_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/orgs","repos_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/repos","events_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631625035000,"updated_at":1631625035000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2909","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2909","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2909.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2909.patch"},"body":"I can't run the tests for dummy data, facing this error \r\n\r\n`ImportError while loading conftest '\/home\/zaid\/tmp\/fix_anli_splits\/datasets\/tests\/conftest.py'.\r\ntests\/conftest.py:10: in \r\n from datasets import config\r\nE ImportError: cannot import name 'config' from 'datasets' (unknown location)`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2909\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2908","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2908\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2908\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2908\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2908","id":995970612,"node_id":"PR_kwDODunzps4rumwW","number":2908,"title":"Update Zenodo metadata with creator names and 
affiliation","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631623177000,"updated_at":1631629765000,"closed_at":1631629765000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2908","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2908","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2908.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2908.patch"},"body":"This PR helps in prefilling author data when automatically generating the DOI after each release.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2908\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2907","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2907\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2907\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2907\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2907","id":995968152,"node_id":"PR_kwDODunzps4rumOy","number":2907,"title":"add story_cloze 
dataset","user":{"login":"zaidalyafeai","id":15667714,"node_id":"MDQ6VXNlcjE1NjY3NzE0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15667714?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/zaidalyafeai","html_url":"https:\/\/github.com\/zaidalyafeai","followers_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/followers","following_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/orgs","repos_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/repos","events_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631623013000,"updated_at":1631623013000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2907","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2907","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2907.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2907.patch"},"body":"@lhoestq I have spent some time but I still I can't succeed in correctly testing the dummy_data.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2907\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2906","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2906\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2906\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2906\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2906","id":995962905,"node_id":"PR_kwDODunzps4rulH-","number":2906,"title":"feat: \ud83c\udfb8 add a function to get a dataset config's split names","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> Should I 
add a section in https:\/\/github.com\/huggingface\/datasets\/blob\/master\/docs\/source\/load_hub.rst? (there is no section for get_dataset_infos)\r\n\r\nYes totally :) This tutorial should indeed mention this, given how fundamental it is"],"created_at":1631622682000,"updated_at":1632155739000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2906","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2906","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2906.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2906.patch"},"body":"Also: pass additional arguments (use_auth_token) to get private configs + info of private datasets on the hub\r\n\r\nQuestions:\r\n\r\n- I'm not sure how the versions work: I changed 1.12.1.dev0 to 1.12.1.dev1, was it correct?<\/strike> no -> reverted\r\n- Should I add a section in https:\/\/github.com\/huggingface\/datasets\/blob\/master\/docs\/source\/load_hub.rst? (there is no section for get_dataset_infos)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2906\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2905","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2905\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2905\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2905\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2905","id":995843964,"node_id":"PR_kwDODunzps4ruL5X","number":2905,"title":"Update BibTeX entry","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631614577000,"updated_at":1631622337000,"closed_at":1631622337000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2905","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2905","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2905.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2905.patch"},"body":"Update BibTeX 
entry.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2905\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2904","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2904\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2904\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2904\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2904","id":995814222,"node_id":"I_kwDODunzps47WutO","number":2904,"title":"FORCE_REDOWNLOAD does not work","user":{"login":"anoopkatti","id":5278299,"node_id":"MDQ6VXNlcjUyNzgyOTk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5278299?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/anoopkatti","html_url":"https:\/\/github.com\/anoopkatti","followers_url":"https:\/\/api.github.com\/users\/anoopkatti\/followers","following_url":"https:\/\/api.github.com\/users\/anoopkatti\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/anoopkatti\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/anoopkatti\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/anoopkatti\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/anoopkatti\/orgs","repos_url":"https:\/\/api.github.com\/users\/anoopkatti\/repos","events_url":"https:\/\/api.github.com\/users\/anoopkatti\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/anoopkatti\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Thanks for reporting. The error seems to happen only if you use compressed files.\r\n\r\nThe second dataset is prepared in another dataset cache directory than the first - which is normal, since the source file is different. 
However, it doesn't uncompress the new data file because it finds the old uncompressed data in the extraction cache directory.\r\n\r\nIf we fix the extraction cache mechanism to uncompress a local file if it changed then it should fix the issue.\r\nCurrently the extraction cache mechanism only takes into account the path of the compressed file, which is an issue."],"created_at":1631612726000,"updated_at":1632129275000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nWith GenerateMode.FORCE_REDOWNLOAD, the documentation says \r\n +------------------------------------+-----------+---------+\r\n | | Downloads | Dataset |\r\n +====================================+===========+=========+\r\n | `REUSE_DATASET_IF_EXISTS` (default)| Reuse | Reuse |\r\n +------------------------------------+-----------+---------+\r\n | `REUSE_CACHE_IF_EXISTS` | Reuse | Fresh |\r\n +------------------------------------+-----------+---------+\r\n | `FORCE_REDOWNLOAD` | Fresh | Fresh |\r\n +------------------------------------+-----------+---------+\r\n\r\nHowever, the old dataset is loaded even when FORCE_REDOWNLOAD is chosen.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n\r\nimport pandas as pd\r\nfrom datasets import load_dataset, GenerateMode\r\npd.DataFrame(range(5), columns=['numbers']).to_csv('\/tmp\/test.tsv.gz', index=False)\r\nee = load_dataset('csv', data_files=['\/tmp\/test.tsv.gz'], delimiter='\\t', split='train', download_mode=GenerateMode.FORCE_REDOWNLOAD)\r\nprint(ee)\r\npd.DataFrame(range(10), columns=['numerals']).to_csv('\/tmp\/test.tsv.gz', index=False)\r\nee = load_dataset('csv', data_files=['\/tmp\/test.tsv.gz'], delimiter='\\t', split='train', download_mode=GenerateMode.FORCE_REDOWNLOAD)\r\nprint(ee)\r\n\r\n```\r\n\r\n## Expected results\r\nDataset({\r\n features: ['numbers'],\r\n num_rows: 5\r\n})\r\nDataset({\r\n features: ['numerals'],\r\n num_rows: 10\r\n})\r\n\r\n## Actual results\r\nDataset({\r\n features: ['numbers'],\r\n num_rows: 5\r\n})\r\nDataset({\r\n features: ['numbers'],\r\n num_rows: 5\r\n})\r\n\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.8.0\r\n- Platform: Linux-4.14.181-108.257.amzn1.x86_64-x86_64-with-glibc2.10\r\n- Python version: 3.7.10\r\n- PyArrow version: 3.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2904\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2903","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2903\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2903\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2903\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2903","id":995715191,"node_id":"PR_kwDODunzps4rtxxV","number":2903,"title":"Fix xpathopen to accept positional 
arguments","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["thanks!"],"created_at":1631606570000,"updated_at":1631609481000,"closed_at":1631608847000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2903","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2903","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2903.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2903.patch"},"body":"Fix `xpathopen()` so that it also accepts positional arguments.\r\n\r\nFix #2901.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2903\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2902","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2902\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2902\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2902\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2902","id":995254216,"node_id":"MDU6SXNzdWU5OTUyNTQyMTY=","number":2902,"title":"Add WIT 
Dataset","user":{"login":"nateraw","id":32437151,"node_id":"MDQ6VXNlcjMyNDM3MTUx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32437151?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nateraw","html_url":"https:\/\/github.com\/nateraw","followers_url":"https:\/\/api.github.com\/users\/nateraw\/followers","following_url":"https:\/\/api.github.com\/users\/nateraw\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nateraw\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nateraw\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nateraw\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nateraw\/orgs","repos_url":"https:\/\/api.github.com\/users\/nateraw\/repos","events_url":"https:\/\/api.github.com\/users\/nateraw\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nateraw\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@hassiahk is working on it #2810 ","WikiMedia is now hosting the pixel values directly which should make it a lot easier!\r\nThe files can be found here:\r\nhttps:\/\/techblog.wikimedia.org\/2021\/09\/09\/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research\/\r\nhttps:\/\/analytics.wikimedia.org\/published\/datasets\/one-off\/caption_competition\/training\/image_pixels\/","> @hassiahk is working on it #2810\r\n\r\nThank you @bhavitvyamalik! Added this issue so we could track progress \ud83d\ude04 . Just linked the PR as well for visibility. 
"],"created_at":1631561929000,"updated_at":1631567400000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** *WIT*\r\n- **Description:** *Wikipedia-based Image Text Dataset*\r\n- **Paper:** *[WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning\r\n](https:\/\/arxiv.org\/abs\/2103.01913)*\r\n- **Data:** *https:\/\/github.com\/google-research-datasets\/wit*\r\n- **Motivation:** (excerpt from their Github README.md)\r\n\r\n> - The largest multimodal dataset (publicly available at the time of this writing) by the number of image-text examples.\r\n> - A massively multilingual dataset (first of its kind) with coverage for over 100+ languages.\r\n> - A collection of diverse set of concepts and real world entities.\r\n> - Brings forth challenging real-world test sets.\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2902\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2901","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2901\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2901\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2901\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2901","id":995232844,"node_id":"MDU6SXNzdWU5OTUyMzI4NDQ=","number":2901,"title":"Incompatibility with pytest","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Sorry, my bad... When implementing `xpathopen`, I just considered the use case in the COUNTER dataset... 
I'm fixing it!"],"created_at":1631560337000,"updated_at":1631608847000,"closed_at":1631608847000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\npytest complains about xpathopen \/ path.open(\"w\")\r\n\r\n## Steps to reproduce the bug\r\n\r\nCreate a test file, `test.py`:\r\n\r\n```python\r\nimport datasets as ds\r\ndef load_dataset():\r\n ds.load_dataset(\"counter\", split=\"train\", streaming=True)\r\n```\r\n\r\nAnd launch it with pytest:\r\n\r\n```bash\r\npython -m pytest test.py\r\n```\r\n\r\n## Expected results\r\n\r\nIt should give something like:\r\n\r\n```\r\ncollected 1 item\r\n\r\ntest.py . [100%]\r\n\r\n======= 1 passed in 3.15s =======\r\n```\r\n\r\n## Actual results\r\n\r\n```\r\n============================================================================================================================= test session starts ==============================================================================================================================\r\nplatform linux -- Python 3.8.11, pytest-6.2.5, py-1.10.0, pluggy-1.0.0\r\nrootdir: \/home\/slesage\/hf\/datasets-preview-backend, configfile: pyproject.toml\r\nplugins: anyio-3.3.1\r\ncollected 1 item\r\n\r\ntests\/queries\/test_rows.py . [100%]Traceback (most recent call last):\r\n File \"\/home\/slesage\/.pyenv\/versions\/3.8.11\/lib\/python3.8\/runpy.py\", line 194, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"\/home\/slesage\/.pyenv\/versions\/3.8.11\/lib\/python3.8\/runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/pytest\/__main__.py\", line 5, in \r\n raise SystemExit(pytest.console_main())\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/_pytest\/config\/__init__.py\", line 185, in console_main\r\n code = main()\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/_pytest\/config\/__init__.py\", line 162, in main\r\n ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main(\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/pluggy\/_hooks.py\", line 265, in __call__\r\n return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/pluggy\/_manager.py\", line 80, in _hookexec\r\n return self._inner_hookexec(hook_name, methods, kwargs, firstresult)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/pluggy\/_callers.py\", line 60, in _multicall\r\n return outcome.get_result()\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/pluggy\/_result.py\", line 60, in get_result\r\n raise ex[1].with_traceback(ex[2])\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/pluggy\/_callers.py\", line 39, in _multicall\r\n res = hook_impl.function(*args)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/_pytest\/main.py\", line 316, in pytest_cmdline_main\r\n return wrap_session(config, _main)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/_pytest\/main.py\", line 304, in wrap_session\r\n config.hook.pytest_sessionfinish(\r\n File 
\"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/pluggy\/_hooks.py\", line 265, in __call__\r\n return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/pluggy\/_manager.py\", line 80, in _hookexec\r\n return self._inner_hookexec(hook_name, methods, kwargs, firstresult)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/pluggy\/_callers.py\", line 55, in _multicall\r\n gen.send(outcome)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/_pytest\/terminal.py\", line 803, in pytest_sessionfinish\r\n outcome.get_result()\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/pluggy\/_result.py\", line 60, in get_result\r\n raise ex[1].with_traceback(ex[2])\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/pluggy\/_callers.py\", line 39, in _multicall\r\n res = hook_impl.function(*args)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/_pytest\/cacheprovider.py\", line 428, in pytest_sessionfinish\r\n config.cache.set(\"cache\/nodeids\", sorted(self.cached_nodeids))\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/_pytest\/cacheprovider.py\", line 188, in set\r\n f = path.open(\"w\")\r\nTypeError: xpathopen() takes 1 positional argument but 2 were given\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.12.0\r\n- Platform: Linux-5.11.0-1017-aws-x86_64-with-glibc2.29\r\n- Python version: 3.8.11\r\n- PyArrow version: 4.0.1\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2901\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2900","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2900\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2900\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2900\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2900","id":994922580,"node_id":"MDExOlB1bGxSZXF1ZXN0NzMyNzczNDkw","number":2900,"title":"Fix null sequence 
encoding","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631541308000,"updated_at":1631542663000,"closed_at":1631542662000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2900","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2900","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2900.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2900.patch"},"body":"The Sequence feature encoding was failing when a `None` sequence was used in a dataset.\r\n\r\nFix https:\/\/github.com\/huggingface\/datasets\/issues\/2892","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2900\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2899","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2899\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2899\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2899\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2899","id":994082432,"node_id":"MDU6SXNzdWU5OTQwODI0MzI=","number":2899,"title":"Dataset","user":{"login":"rcacho172","id":90449239,"node_id":"MDQ6VXNlcjkwNDQ5MjM5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/90449239?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rcacho172","html_url":"https:\/\/github.com\/rcacho172","followers_url":"https:\/\/api.github.com\/users\/rcacho172\/followers","following_url":"https:\/\/api.github.com\/users\/rcacho172\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rcacho172\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rcacho172\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rcacho172\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rcacho172\/orgs","repos_url":"https:\/\/api.github.com\/users\/rcacho172\/repos","events_url":"https:\/\/api.github.com\/users\/rcacho172\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rcacho172\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631432333000,"updated_at":1631463135000,"closed_at":1631463135000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\n- **Name:** *name of the dataset*\n- **Description:** *short description of the dataset (or link to social media or blog post)*\n- **Paper:** *link to the dataset paper if available*\n- **Data:** *link to the Github repository or current dataset location*\n- **Motivation:** *what are some good reasons to have this dataset*\n\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2899\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2898","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2898\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2898\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2898\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2898","id":994032814,"node_id":"MDU6SXNzdWU5OTQwMzI4MTQ=","number":2898,"title":"Hug 
emoji","user":{"login":"Jackg-08","id":90539794,"node_id":"MDQ6VXNlcjkwNTM5Nzk0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/90539794?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Jackg-08","html_url":"https:\/\/github.com\/Jackg-08","followers_url":"https:\/\/api.github.com\/users\/Jackg-08\/followers","following_url":"https:\/\/api.github.com\/users\/Jackg-08\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Jackg-08\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Jackg-08\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Jackg-08\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Jackg-08\/orgs","repos_url":"https:\/\/api.github.com\/users\/Jackg-08\/repos","events_url":"https:\/\/api.github.com\/users\/Jackg-08\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Jackg-08\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631417271000,"updated_at":1631463193000,"closed_at":1631463193000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\n- **Name:** *name of the dataset*\n- **Description:** *short description of the dataset (or link to social media or blog post)*\n- **Paper:** *link to the dataset paper if available*\n- **Data:** *link to the Github repository or current dataset location*\n- **Motivation:** *what are some good reasons to have this dataset*\n\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2898\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2897","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2897\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2897\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2897\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2897","id":993798386,"node_id":"MDExOlB1bGxSZXF1ZXN0NzMxOTA0ODk4","number":2897,"title":"Add OpenAI's HumanEval 
dataset","user":{"login":"lvwerra","id":8264887,"node_id":"MDQ6VXNlcjgyNjQ4ODc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8264887?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lvwerra","html_url":"https:\/\/github.com\/lvwerra","followers_url":"https:\/\/api.github.com\/users\/lvwerra\/followers","following_url":"https:\/\/api.github.com\/users\/lvwerra\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lvwerra\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lvwerra\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lvwerra\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lvwerra\/orgs","repos_url":"https:\/\/api.github.com\/users\/lvwerra\/repos","events_url":"https:\/\/api.github.com\/users\/lvwerra\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lvwerra\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I just fixed the class name, and added `[More Information Needed]` in empty sections in case people want to complete the dataset card :)"],"created_at":1631353067000,"updated_at":1631804531000,"closed_at":1631804531000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2897","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2897","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2897.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2897.patch"},"body":"This PR adds OpenAI's [HumanEval](https:\/\/github.com\/openai\/human-eval) dataset. The dataset consists of 164 handcrafted programming problems with solutions and unittests to verify solution. 
This dataset is useful to evaluate code generation models.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2897\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2896","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2896\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2896\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2896\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2896","id":993613113,"node_id":"MDExOlB1bGxSZXF1ZXN0NzMxNzcwMTE3","number":2896,"title":"add multi-proc in `to_csv`","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631309709000,"updated_at":1631309709000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2896","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2896","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2896.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2896.patch"},"body":"This PR extends the multi-proc method used in #2747 for`to_json` to `to_csv` as well. \r\n\r\nResults on my machine post benchmarking on `ascent_kb` dataset (giving ~45% improvement when compared to num_proc = 1):\r\n```\r\nTime taken on 1 num_proc, 10000 batch_size 674.2055702209473\r\nTime taken on 4 num_proc, 10000 batch_size 425.6553490161896\r\n\r\nTime taken on 1 num_proc, 50000 batch_size 623.5897650718689\r\nTime taken on 4 num_proc, 50000 batch_size 380.0402421951294\r\n\r\nTime taken on 4 num_proc, 100000 batch_size 361.7168130874634\r\n```\r\nThis is a WIP as writing tests is pending for this PR. 
\r\n\r\nI'm also exploring [this](https:\/\/arrow.apache.org\/docs\/python\/csv.html#incremental-writing) approach for which I'm using `pyarrow-5.0.0`.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2896\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2895","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2895\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2895\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2895\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2895","id":993462274,"node_id":"MDExOlB1bGxSZXF1ZXN0NzMxNjQ0NTY2","number":2895,"title":"Use pyarrow.Table.replace_schema_metadata instead of pyarrow.Table.cast","user":{"login":"arsarabi","id":12345848,"node_id":"MDQ6VXNlcjEyMzQ1ODQ4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12345848?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/arsarabi","html_url":"https:\/\/github.com\/arsarabi","followers_url":"https:\/\/api.github.com\/users\/arsarabi\/followers","following_url":"https:\/\/api.github.com\/users\/arsarabi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/arsarabi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/arsarabi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/arsarabi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/arsarabi\/orgs","repos_url":"https:\/\/api.github.com\/users\/arsarabi\/repos","events_url":"https:\/\/api.github.com\/users\/arsarabi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/arsarabi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631296617000,"updated_at":1632264601000,"closed_at":1632212315000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2895","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2895","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2895.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2895.patch"},"body":"This PR partially addresses #2252.\r\n\r\n``update_metadata_with_features`` uses ``Table.cast`` which slows down ``load_from_disk`` (and possibly other methods that use it) for very large datasets. Since ``update_metadata_with_features`` is only updating the schema metadata, it makes more sense to use ``pyarrow.Table.replace_schema_metadata`` which is much faster. 
This PR adds a ``replace_schema_metadata`` method to all table classes, and modifies ``update_metadata_with_features`` to use it instead of ``cast``.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2895\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2894","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2894\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2894\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2894\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2894","id":993375654,"node_id":"MDExOlB1bGxSZXF1ZXN0NzMxNTcxODc5","number":2894,"title":"Fix COUNTER dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631290049000,"updated_at":1631291265000,"closed_at":1631291264000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2894","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2894","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2894.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2894.patch"},"body":"Fix filename generating `FileNotFoundError`.\r\n\r\nRelated to #2866.\r\nCC: @severo.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2894\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2893","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2893\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2893\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2893\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2893","id":993342781,"node_id":"MDExOlB1bGxSZXF1ZXN0NzMxNTQ0NDQz","number":2893,"title":"add mbpp 
dataset","user":{"login":"lvwerra","id":8264887,"node_id":"MDQ6VXNlcjgyNjQ4ODc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8264887?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lvwerra","html_url":"https:\/\/github.com\/lvwerra","followers_url":"https:\/\/api.github.com\/users\/lvwerra\/followers","following_url":"https:\/\/api.github.com\/users\/lvwerra\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lvwerra\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lvwerra\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lvwerra\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lvwerra\/orgs","repos_url":"https:\/\/api.github.com\/users\/lvwerra\/repos","events_url":"https:\/\/api.github.com\/users\/lvwerra\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lvwerra\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I think it's fine to have the original schema"],"created_at":1631287650000,"updated_at":1631784942000,"closed_at":1631784942000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2893","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2893","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2893.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2893.patch"},"body":"This PR adds the mbpp dataset introduced by Google [here](https:\/\/github.com\/google-research\/google-research\/tree\/master\/mbpp) as mentioned in #2816.\r\n\r\nThe dataset contain two versions: a full and a sanitized one. They have a slightly different schema and it is current state the loading preserves the original schema. An open question is whether to harmonize the two schemas when loading the dataset or to preserve the original one. 
Since not all fields are overlapping the schema will not be exactly the same.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2893\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2892","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2892\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2892\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2892\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2892","id":993274572,"node_id":"MDU6SXNzdWU5OTMyNzQ1NzI=","number":2892,"title":"Error when encoding a dataset with None objects with a Sequence feature","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This has been fixed by https:\/\/github.com\/huggingface\/datasets\/pull\/2900\r\nWe're doing a new release 1.12 today to make the fix available :)"],"created_at":1631283103000,"updated_at":1631542693000,"closed_at":1631542662000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"There is an error when encoding a dataset with None objects with a Sequence feature\r\n\r\nTo reproduce:\r\n```python\r\nfrom datasets import Dataset, Features, Value, Sequence\r\ndata = {\"a\": [[0], None]}\r\nfeatures = Features({\"a\": Sequence(Value(\"int32\"))})\r\ndataset = Dataset.from_dict(data, features=features)\r\n```\r\nraises\r\n\r\n```python\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n in \r\n 2 data = {\"a\": [[0], None]}\r\n 3 features = Features({\"a\": Sequence(Value(\"int32\"))})\r\n----> 4 dataset = Dataset.from_dict(data, features=features)\r\n[...]\r\n~\/datasets\/features.py in encode_nested_example(schema, obj)\r\n 888 if isinstance(obj, str): # don't interpret a string as a list\r\n 889 raise ValueError(\"Got a string but expected a list instead: '{}'\".format(obj))\r\n--> 890 return [encode_nested_example(schema.feature, o) for o in obj]\r\n 891 # Object with special encoding:\r\n 892 # ClassLabel will convert from string to int, 
TranslationVariableLanguages does some checks\r\n\r\nTypeError: 'NoneType' object is not iterable\r\n```\r\n\r\nInstead, if should run without error, as if the `features` were not passed","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2892\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2891","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2891\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2891\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2891\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2891","id":993161984,"node_id":"MDExOlB1bGxSZXF1ZXN0NzMxMzkwNjM2","number":2891,"title":"[WIP] Allow dynamic first dimension for ArrayXD","user":{"login":"rpowalski","id":10357417,"node_id":"MDQ6VXNlcjEwMzU3NDE3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10357417?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rpowalski","html_url":"https:\/\/github.com\/rpowalski","followers_url":"https:\/\/api.github.com\/users\/rpowalski\/followers","following_url":"https:\/\/api.github.com\/users\/rpowalski\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rpowalski\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rpowalski\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rpowalski\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rpowalski\/orgs","repos_url":"https:\/\/api.github.com\/users\/rpowalski\/repos","events_url":"https:\/\/api.github.com\/users\/rpowalski\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rpowalski\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631274772000,"updated_at":1632142453000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2891","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2891","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2891.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2891.patch"},"body":"Add support for dynamic first dimension for ArrayXD features. See issue [#887](https:\/\/github.com\/huggingface\/datasets\/issues\/887).\r\nFollowing changes allow for `to_pylist` method of `ArrayExtensionArray` to return a list of numpy arrays where fist dimension can vary.\r\n\r\n@lhoestq Could you suggest how you want to extend test suit. 
For now I added only very limited testing.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2891\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2890","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2890\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2890\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2890\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2890","id":993074102,"node_id":"MDU6SXNzdWU5OTMwNzQxMDI=","number":2890,"title":"0x290B112ED1280537B24Ee6C268a004994a16e6CE","user":{"login":"rcacho172","id":90449239,"node_id":"MDQ6VXNlcjkwNDQ5MjM5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/90449239?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rcacho172","html_url":"https:\/\/github.com\/rcacho172","followers_url":"https:\/\/api.github.com\/users\/rcacho172\/followers","following_url":"https:\/\/api.github.com\/users\/rcacho172\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rcacho172\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rcacho172\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rcacho172\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rcacho172\/orgs","repos_url":"https:\/\/api.github.com\/users\/rcacho172\/repos","events_url":"https:\/\/api.github.com\/users\/rcacho172\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rcacho172\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631267477000,"updated_at":1631274329000,"closed_at":1631274329000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\n- **Name:** *name of the dataset*\n- **Description:** *short description of the dataset (or link to social media or blog post)*\n- **Paper:** *link to the dataset paper if available*\n- **Data:** *link to the Github repository or current dataset location*\n- **Motivation:** *what are some good reasons to have this dataset*\n\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2890\/timeline","performed_via_github_app":null,"is_pull_request":false} 
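To make the goal of #2891 concrete, here is a hedged sketch of what a dynamic first dimension for ArrayXD could look like; the `shape=(None, 3)` form and the per-example behaviour are assumptions drawn from the PR description and issue #887, not an API that exists at the time of this PR:

```python
# Hedged sketch only: assumes a future ArrayXD API where the first dimension
# may be None (as proposed in #2891 / #887); not runnable against datasets 1.12.
from datasets import Dataset, Features, Array2D

features = Features({"seq": Array2D(shape=(None, 3), dtype="float32")})
data = {"seq": [[[0.0, 0.0, 0.0]] * 2, [[1.0, 1.0, 1.0]] * 5]}  # 2 rows vs 5 rows

ds = Dataset.from_dict(data, features=features)
# Indexing / to_pylist() would return arrays whose first dimension varies per example.
print(len(ds[0]["seq"]), len(ds[1]["seq"]))
```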
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2889","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2889\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2889\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2889\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2889","id":992968382,"node_id":"MDU6SXNzdWU5OTI5NjgzODI=","number":2889,"title":"Coc","user":{"login":"Bwiggity","id":90444264,"node_id":"MDQ6VXNlcjkwNDQ0MjY0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/90444264?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Bwiggity","html_url":"https:\/\/github.com\/Bwiggity","followers_url":"https:\/\/api.github.com\/users\/Bwiggity\/followers","following_url":"https:\/\/api.github.com\/users\/Bwiggity\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Bwiggity\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Bwiggity\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Bwiggity\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Bwiggity\/orgs","repos_url":"https:\/\/api.github.com\/users\/Bwiggity\/repos","events_url":"https:\/\/api.github.com\/users\/Bwiggity\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Bwiggity\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631259127000,"updated_at":1631274354000,"closed_at":1631274354000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\n- **Name:** *name of the dataset*\n- **Description:** *short description of the dataset (or link to social media or blog post)*\n- **Paper:** *link to the dataset paper if available*\n- **Data:** *link to the Github repository or current dataset location*\n- **Motivation:** *what are some good reasons to have this dataset*\n\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2889\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2888","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2888\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2888\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2888\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2888","id":992676535,"node_id":"MDU6SXNzdWU5OTI2NzY1MzU=","number":2888,"title":"v1.11.1 release 
date","user":{"login":"fcakyon","id":34196005,"node_id":"MDQ6VXNlcjM0MTk2MDA1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/34196005?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/fcakyon","html_url":"https:\/\/github.com\/fcakyon","followers_url":"https:\/\/api.github.com\/users\/fcakyon\/followers","following_url":"https:\/\/api.github.com\/users\/fcakyon\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/fcakyon\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/fcakyon\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/fcakyon\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/fcakyon\/orgs","repos_url":"https:\/\/api.github.com\/users\/fcakyon\/repos","events_url":"https:\/\/api.github.com\/users\/fcakyon\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/fcakyon\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892912,"node_id":"MDU6TGFiZWwxOTM1ODkyOTEy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/question","name":"question","color":"d876e3","default":true,"description":"Further information is requested"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Probably 1.12 on monday :)\r\n","@albertvillanova i think this issue is still valid and should not be closed till `>1.11.0` is published :)"],"created_at":1631224395000,"updated_at":1631477915000,"closed_at":1631463339000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hello, i need to use latest features in one of my packages but there have been no new datasets release since 2 months ago.\r\n\r\nWhen do you plan to publush v1.11.1 release?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2888\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2887","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2887\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2887\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2887\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2887","id":992576305,"node_id":"MDExOlB1bGxSZXF1ZXN0NzMwODg4MTU3","number":2887,"title":"#2837 Use cache folder for 
lockfile","user":{"login":"Dref360","id":8976546,"node_id":"MDQ6VXNlcjg5NzY1NDY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8976546?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Dref360","html_url":"https:\/\/github.com\/Dref360","followers_url":"https:\/\/api.github.com\/users\/Dref360\/followers","following_url":"https:\/\/api.github.com\/users\/Dref360\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Dref360\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Dref360\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Dref360\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Dref360\/orgs","repos_url":"https:\/\/api.github.com\/users\/Dref360\/repos","events_url":"https:\/\/api.github.com\/users\/Dref360\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Dref360\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631217356000,"updated_at":1632231578000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2887","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2887","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2887.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2887.patch"},"body":"Fixes #2837 \r\n\r\nUse a cache folder directory to store the FileLock.\r\n\r\nThe issue was that the lock file was in a readonly folder.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2887\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2886","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2886\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2886\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2886\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2886","id":992534632,"node_id":"MDU6SXNzdWU5OTI1MzQ2MzI=","number":2886,"title":"Hj","user":{"login":"Noorasri","id":90416328,"node_id":"MDQ6VXNlcjkwNDE2MzI4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/90416328?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Noorasri","html_url":"https:\/\/github.com\/Noorasri","followers_url":"https:\/\/api.github.com\/users\/Noorasri\/followers","following_url":"https:\/\/api.github.com\/users\/Noorasri\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Noorasri\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Noorasri\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Noorasri\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Noorasri\/orgs","repos_url":"https:\/\/api.github.com\/users\/Noorasri\/repos","events_url":"https:\/\/api.github.com\/users\/Noorasri\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Noorasri\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631213932000,"updated_at":1631274389000,"closed_at":1631274389000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":null,"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2886\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2885","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2885\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2885\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2885\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2885","id":992160544,"node_id":"MDU6SXNzdWU5OTIxNjA1NDQ=","number":2885,"title":"Adding an Elastic Search index to a 
Dataset","user":{"login":"MotzWanted","id":36195371,"node_id":"MDQ6VXNlcjM2MTk1Mzcx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36195371?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/MotzWanted","html_url":"https:\/\/github.com\/MotzWanted","followers_url":"https:\/\/api.github.com\/users\/MotzWanted\/followers","following_url":"https:\/\/api.github.com\/users\/MotzWanted\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/MotzWanted\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/MotzWanted\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/MotzWanted\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/MotzWanted\/orgs","repos_url":"https:\/\/api.github.com\/users\/MotzWanted\/repos","events_url":"https:\/\/api.github.com\/users\/MotzWanted\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/MotzWanted\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi, is this bug deterministic in your poetry env ? I mean, does it always stop at 90% or is it random ?\r\n\r\nAlso, can you try using another version of Elasticsearch ? Maybe there's an issue with the one of you poetry env"],"created_at":1631190099000,"updated_at":1632128781000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nWhen trying to index documents from the squad dataset, the connection to ElasticSearch seems to break:\r\n\r\nReusing dataset squad (\/Users\/andreasmotz\/.cache\/huggingface\/datasets\/squad\/plain_text\/1.0.0\/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453)\r\n 90%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2589 | 9501\/10570 [00:01<00:00, 6335.61docs\/s]\r\n\r\nNo error is thrown, but the indexing breaks ~90%.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n# Sample code to reproduce the bug\r\nfrom datasets import load_dataset\r\nfrom elasticsearch import Elasticsearch\r\nes = Elasticsearch()\r\nsquad = load_dataset('squad', split='validation')\r\nindex_name = \"corpus\"\r\nes_config = {\r\n \"settings\": {\r\n \"number_of_shards\": 1,\r\n \"analysis\": {\"analyzer\": {\"stop_standard\": {\"type\": \"standard\", \" stopwords\": \"_english_\"}}},\r\n },\r\n \"mappings\": {\r\n \"properties\": {\r\n \"idx\" : {\"type\" : \"keyword\"},\r\n \"title\" : {\"type\" : \"keyword\"},\r\n \"text\": {\r\n \"type\": \"text\",\r\n \"analyzer\": \"standard\",\r\n \"similarity\": \"BM25\"\r\n },\r\n }\r\n },\r\n}\r\nclass IndexBuilder:\r\n \"\"\"\r\n Elastic search indexing of a corpus\r\n \"\"\"\r\n def __init__(\r\n self,\r\n *args,\r\n #corpus : None,\r\n dataset : squad,\r\n index_name = str,\r\n query = str,\r\n config = dict,\r\n **kwargs,\r\n ):\r\n #instantiate HuggingFace dataset\r\n self.dataset = dataset\r\n #instantiate ElasticSearch config\r\n self.config = config\r\n self.es = Elasticsearch()\r\n self.index_name = index_name\r\n self.query = query\r\n 
def elastic_index(self):\r\n print(self.es.info)\r\n self.es.indices.delete(index=self.index_name, ignore=[400, 404])\r\n search_index = self.dataset.add_elasticsearch_index(column='context', host='localhost', port='9200', es_index_name=self.index_name, es_index_config=self.config)\r\n return search_index\r\n def exact_match_method(self, index):\r\n scores, retrieved_examples = index.get_nearest_examples('context', query=self.query, k=1)\r\n return scores, retrieved_examples\r\nif __name__ == \"__main__\":\r\n print(type(squad))\r\n Index = IndexBuilder(dataset=squad, index_name='corpus_index', query='Where was Chopin born?', config=es_config)\r\n search_index = Index.elastic_index()\r\n scores, examples = Index.exact_match_method(search_index)\r\n print(scores, examples)\r\n for name in squad.column_names:\r\n print(type(squad[name]))\r\n```\r\n\r\n## Environment info\r\nWe run the code in Poetry. This might be the issue, since the script runs successfully in our local environment.\r\n\r\nPoetry:\r\n- Python version: 3.8\r\n- PyArrow: 4.0.1\r\n- Elasticsearch: 7.13.4\r\n- datasets: 1.10.2\r\n\r\nLocal:\r\n- Python version: 3.8\r\n- PyArrow: 3.0.0\r\n- Elasticsearch: 7.7.1\r\n- datasets: 1.7.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2885\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2884","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2884\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2884\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2884\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2884","id":992135698,"node_id":"MDExOlB1bGxSZXF1ZXN0NzMwNTA4MTE1","number":2884,"title":"Add IC, SI, ER tasks to SUPERB","user":{"login":"anton-l","id":26864830,"node_id":"MDQ6VXNlcjI2ODY0ODMw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26864830?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/anton-l","html_url":"https:\/\/github.com\/anton-l","followers_url":"https:\/\/api.github.com\/users\/anton-l\/followers","following_url":"https:\/\/api.github.com\/users\/anton-l\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/anton-l\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/anton-l\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/anton-l\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/anton-l\/orgs","repos_url":"https:\/\/api.github.com\/users\/anton-l\/repos","events_url":"https:\/\/api.github.com\/users\/anton-l\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/anton-l\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Sorry for the late PR, uploading 10+GB files to the hub through a VPN was an adventure :sweat_smile: ","Thank you so much for adding these subsets @anton-l! \r\n\r\n> These datasets either require manual downloads or have broken\/unstable links. 
You can get all necessary archives in this repo: https:\/\/huggingface.co\/datasets\/anton-l\/superb_source_data_dumps\/tree\/main\r\nAre we allowed to make these datasets public or would that violate the terms of their use?","@lewtun These ones all have non-permissive licences, so the mirrored data I linked is open only to the HF org for now. But we could try contacting the authors to ask if they'd like to host these with us. \nFor example VoxCeleb1 now has direct links (the ones in the script) that don't require form submission and passwords, but they ban IPs after each download for some reason :(","> @lewtun These ones all have non-permissive licences, so the mirrored data I linked is open only to the HF org for now. But we could try contacting the authors to ask if they'd like to host these with us.\r\n> For example VoxCeleb1 now has direct links (the ones in the script) that don't require form submission and passwords, but they ban IPs after each download for some reason :(\r\n\r\nI think there would be a lot of value added if the authors would be willing to host their data on the HF Hub! As an end-user of `datasets`, I've found I'm more likely to explore a dataset if I'm able to quickly pull the subsets without needing a manual download. Perhaps we can tell them that the Hub offers several advantages like versioning and interactive exploration (with `datasets-viewer`)?"],"created_at":1631188563000,"updated_at":1632129478000,"closed_at":1632128449000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2884","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2884","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2884.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2884.patch"},"body":"This PR adds 3 additional classification tasks to SUPERB\r\n\r\n#### Intent Classification\r\nDataset URL seems to be down at the moment :( See the note below.\r\nS3PRL source: https:\/\/github.com\/s3prl\/s3prl\/blob\/master\/s3prl\/downstream\/fluent_commands\/dataset.py\r\nInstructions: https:\/\/github.com\/s3prl\/s3prl\/tree\/master\/s3prl\/downstream#ic-intent-classification---fluent-speech-commands\r\n\r\n#### Speaker Identification\r\nManual download script:\r\n```\r\nmkdir VoxCeleb1\r\ncd VoxCeleb1\r\n \r\nwget https:\/\/thor.robots.ox.ac.uk\/~vgg\/data\/voxceleb\/vox1a\/vox1_dev_wav_partaa\r\nwget https:\/\/thor.robots.ox.ac.uk\/~vgg\/data\/voxceleb\/vox1a\/vox1_dev_wav_partab\r\nwget https:\/\/thor.robots.ox.ac.uk\/~vgg\/data\/voxceleb\/vox1a\/vox1_dev_wav_partac\r\nwget https:\/\/thor.robots.ox.ac.uk\/~vgg\/data\/voxceleb\/vox1a\/vox1_dev_wav_partad\r\ncat vox1_dev* > vox1_dev_wav.zip\r\nunzip vox1_dev_wav.zip\r\n \r\nwget https:\/\/thor.robots.ox.ac.uk\/~vgg\/data\/voxceleb\/vox1a\/vox1_test_wav.zip\r\nunzip vox1_test_wav.zip\r\n \r\n# download the official SUPERB train-dev-test split\r\nwget https:\/\/raw.githubusercontent.com\/s3prl\/s3prl\/master\/s3prl\/downstream\/voxceleb1\/veri_test_class.txt\r\n```\r\nS3PRL source: https:\/\/github.com\/s3prl\/s3prl\/blob\/master\/s3prl\/downstream\/voxceleb1\/dataset.py\r\nInstructions: https:\/\/github.com\/s3prl\/s3prl\/tree\/master\/s3prl\/downstream#sid-speaker-identification\r\n\r\n#### Intent Classification\r\nManual download requires going through a slow application process, see the note below.\r\nS3PRL source: 
https:\/\/github.com\/s3prl\/s3prl\/blob\/master\/s3prl\/downstream\/emotion\/IEMOCAP_preprocess.py\r\nInstructions: https:\/\/github.com\/s3prl\/s3prl\/tree\/master\/s3prl\/downstream#er-emotion-recognition\r\n\r\n#### :warning: Note\r\nThese datasets either require manual downloads or have broken\/unstable links. You can get all necessary archives in this repo: https:\/\/huggingface.co\/datasets\/anton-l\/superb_source_data_dumps\/tree\/main","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2884\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2883","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2883\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2883\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2883\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2883","id":991969875,"node_id":"MDExOlB1bGxSZXF1ZXN0NzMwMzYzNTQz","number":2883,"title":"Fix data URLs and metadata in DocRED dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631177734000,"updated_at":1631532271000,"closed_at":1631532271000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2883","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2883","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2883.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2883.patch"},"body":"The host of `docred` dataset has updated the `dev` data file. 
This PR:\r\n- Updates the dev URL\r\n- Updates dataset metadata\r\n\r\nThis PR also fixes the URL of the `train_distant` split, which was wrong.\r\n\r\nFix #2882.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2883\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2882","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2882\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2882\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2882\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2882","id":991800141,"node_id":"MDU6SXNzdWU5OTE4MDAxNDE=","number":2882,"title":"`load_dataset('docred')` results in a `NonMatchingChecksumError` ","user":{"login":"tmpr","id":51313597,"node_id":"MDQ6VXNlcjUxMzEzNTk3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/51313597?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tmpr","html_url":"https:\/\/github.com\/tmpr","followers_url":"https:\/\/api.github.com\/users\/tmpr\/followers","following_url":"https:\/\/api.github.com\/users\/tmpr\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tmpr\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tmpr\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tmpr\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tmpr\/orgs","repos_url":"https:\/\/api.github.com\/users\/tmpr\/repos","events_url":"https:\/\/api.github.com\/users\/tmpr\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tmpr\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @tmpr, thanks for reporting.\r\n\r\nTwo weeks ago (23th Aug), the host of the source `docred` dataset updated one of the files (`dev.json`): you can see it [here](https:\/\/drive.google.com\/drive\/folders\/1c5-0YwnoJx8NS6CV2f-NoTHR__BdkNqw).\r\n\r\nTherefore, the checksum needs to be updated.\r\n\r\nNormally, in the meantime, you could avoid the error by passing `ignore_verifications=True` to `load_dataset`. 
However, as the old link points to a non-existing file, the link must be updated too.\r\n\r\nI'm fixing all this.\r\n\r\n"],"created_at":1631166902000,"updated_at":1631532270000,"closed_at":1631532270000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nI get consistent `NonMatchingChecksumError: Checksums didn't match for dataset source files` errors when trying to execute `datasets.load_dataset('docred')`.\r\n\r\n## Steps to reproduce the bug\r\nIt is quasi only this code:\r\n```python\r\nimport datasets\r\ndata = datasets.load_dataset('docred')\r\n```\r\n\r\n## Expected results\r\nThe DocRED dataset should be loaded without any problems.\r\n\r\n## Actual results\r\n```\r\nNonMatchingChecksumError Traceback (most recent call last)\r\n in \r\n----> 1 d = datasets.load_dataset('docred')\r\n\r\n~\/anaconda3\/lib\/python3.8\/site-packages\/datasets\/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)\r\n 845 \r\n 846 # Download and prepare data\r\n--> 847 builder_instance.download_and_prepare(\r\n 848 download_config=download_config,\r\n 849 download_mode=download_mode,\r\n\r\n~\/anaconda3\/lib\/python3.8\/site-packages\/datasets\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)\r\n 613 logger.warning(\"HF google storage unreachable. Downloading and preparing it from source\")\r\n 614 if not downloaded_from_gcs:\r\n--> 615 self._download_and_prepare(\r\n 616 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 617 )\r\n\r\n~\/anaconda3\/lib\/python3.8\/site-packages\/datasets\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 673 # Checksums verification\r\n 674 if verify_infos:\r\n--> 675 verify_checksums(\r\n 676 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), \"dataset source files\"\r\n 677 )\r\n\r\n~\/anaconda3\/lib\/python3.8\/site-packages\/datasets\/utils\/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)\r\n 38 if len(bad_urls) > 0:\r\n 39 error_msg = \"Checksums didn't match\" + for_verification_name + \":\\n\"\r\n---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\n 41 logger.info(\"All the checksums matched successfully\" + for_verification_name)\r\n 42 \r\n\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https:\/\/drive.google.com\/uc?export=download&id=1fDmfUUo5G7gfaoqWWvK81u08m71TK2g7']\r\n```\r\n\r\n## Environment info\r\n- `datasets` version: 1.11.0\r\n- Platform: Linux-5.11.0-7633-generic-x86_64-with-glibc2.10\r\n- Python version: 3.8.5\r\n- PyArrow version: 5.0.0\r\n\r\nThis error also happened on my Windows-partition, after freshly installing python 3.9 and `datasets`.\r\n\r\n## Remarks\r\n\r\n- I have already called `rm -rf \/home\/\/.cache\/huggingface`, i.e., I have tried clearing the cache.\r\n- The problem does not exist for other datasets, i.e., it seems to be DocRED-specific.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2882\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2881","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2881\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2881\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2881\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2881","id":991639142,"node_id":"MDExOlB1bGxSZXF1ZXN0NzMwMDc1OTAy","number":2881,"title":"Add BIOSSES dataset","user":{"login":"bwang482","id":6764450,"node_id":"MDQ6VXNlcjY3NjQ0NTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6764450?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bwang482","html_url":"https:\/\/github.com\/bwang482","followers_url":"https:\/\/api.github.com\/users\/bwang482\/followers","following_url":"https:\/\/api.github.com\/users\/bwang482\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bwang482\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bwang482\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bwang482\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bwang482\/orgs","repos_url":"https:\/\/api.github.com\/users\/bwang482\/repos","events_url":"https:\/\/api.github.com\/users\/bwang482\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bwang482\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631147736000,"updated_at":1631542840000,"closed_at":1631542840000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2881","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2881","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2881.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2881.patch"},"body":"Adding the biomedical semantic sentence similarity dataset, BIOSSES, listed in \"Biomedical Datasets - BigScience Workshop 2021\"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2881\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2880","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2880\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2880\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2880\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2880","id":990877940,"node_id":"MDExOlB1bGxSZXF1ZXN0NzI5NDIzMDMy","number":2880,"title":"Extend support for streaming datasets that use pathlib.Path 
stem\/suffix","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631090563000,"updated_at":1631193209000,"closed_at":1631193209000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2880","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2880","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2880.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2880.patch"},"body":"This PR extends the support in streaming mode for datasets that use `pathlib`, by patching the properties `pathlib.Path.stem` and `pathlib.Path.suffix`.\r\n\r\nRelated to #2876, #2874, #2866.\r\n\r\nCC: @severo","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2880\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2879","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2879\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2879\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2879\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2879","id":990257404,"node_id":"MDU6SXNzdWU5OTAyNTc0MDQ=","number":2879,"title":"In v1.4.1, all TIMIT train transcripts are \"Would such an act of refusal be 
useful?\"","user":{"login":"rcgale","id":2279700,"node_id":"MDQ6VXNlcjIyNzk3MDA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2279700?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rcgale","html_url":"https:\/\/github.com\/rcgale","followers_url":"https:\/\/api.github.com\/users\/rcgale\/followers","following_url":"https:\/\/api.github.com\/users\/rcgale\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rcgale\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rcgale\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rcgale\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rcgale\/orgs","repos_url":"https:\/\/api.github.com\/users\/rcgale\/repos","events_url":"https:\/\/api.github.com\/users\/rcgale\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rcgale\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @rcgale, thanks for reporting.\r\n\r\nPlease note that this bug was fixed on `datasets` version 1.5.0: https:\/\/github.com\/huggingface\/datasets\/commit\/a23c73e526e1c30263834164f16f1fdf76722c8c#diff-f12a7a42d4673bb6c2ca5a40c92c29eb4fe3475908c84fd4ce4fad5dc2514878\r\n\r\nIf you update `datasets` version, that should work.\r\n\r\nOn the other hand, would it be possible for @patrickvonplaten to update the [blog post](https:\/\/huggingface.co\/blog\/fine-tune-wav2vec2-english) with the correct version of `datasets`?","I just proposed a change in the blog post.\r\n\r\nI had assumed there was a data format change that broke a previous version of the code, since presumably @patrickvonplaten tested the tutorial with the version they explicitly referenced. But that fix you linked suggests a problem in the code, which surprised me.\r\n\r\nI still wonder, though, is there a way for downloads to be invalidated server-side? If the client can announce its version during a download request, perhaps the server could reject known incompatibilities? It would save much valuable time if `datasets` raised an informative error on a known problem (\"Error: the requested data set requires `datasets>=1.5.0`.\"). This kind of API versioning is a prudent move anyhow, as there will surely come a time when you'll need to make a breaking change to data.","Also, thank you for a quick and helpful reply!"],"created_at":1631040825000,"updated_at":1631120119000,"closed_at":1631092348000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nUsing version 1.4.1 of `datasets`, TIMIT transcripts are all the same.\r\n\r\n## Steps to reproduce the bug\r\nI was following this tutorial\r\n- https:\/\/huggingface.co\/blog\/fine-tune-wav2vec2-english\r\n\r\nBut here's a distilled repro:\r\n```python\r\n!pip install datasets==1.4.1\r\nfrom datasets import load_dataset\r\ntimit = load_dataset(\"timit_asr\", cache_dir=\".\/temp\")\r\nunique_transcripts = set(timit[\"train\"][\"text\"])\r\nprint(unique_transcripts)\r\nassert len(unique_transcripts) > 1\r\n```\r\n## Expected results\r\nExpected the correct TIMIT data. 
Or an error saying that this version of `datasets` can't produce it.\r\n\r\n## Actual results\r\nEvery train transcript was \"Would such an act of refusal be useful?\" Every test transcript was \"The bungalow was pleasantly situated near the shore.\"\r\n\r\n## Environment info\r\n- `datasets` version: 1.4.1\r\n- Platform: Darwin-18.7.0-x86_64-i386-64bit\r\n- Python version: 3.7.9\r\n- PyTorch version (GPU?): 1.9.0 (False)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Using GPU in script?: tried both\r\n- Using distributed or parallel set-up in script?: no\r\n- \r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2879\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2878","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2878\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2878\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2878\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2878","id":990093316,"node_id":"MDU6SXNzdWU5OTAwOTMzMTY=","number":2878,"title":"NotADirectoryError: [WinError 267] During load_from_disk","user":{"login":"Grassycup","id":1875064,"node_id":"MDQ6VXNlcjE4NzUwNjQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1875064?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Grassycup","html_url":"https:\/\/github.com\/Grassycup","followers_url":"https:\/\/api.github.com\/users\/Grassycup\/followers","following_url":"https:\/\/api.github.com\/users\/Grassycup\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Grassycup\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Grassycup\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Grassycup\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Grassycup\/orgs","repos_url":"https:\/\/api.github.com\/users\/Grassycup\/repos","events_url":"https:\/\/api.github.com\/users\/Grassycup\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Grassycup\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631027705000,"updated_at":1631027705000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nTrying to load saved dataset or dataset directory from Amazon S3 on a Windows machine fails.\r\nPerforming the same operation succeeds on non-windows environment (AWS Sagemaker).\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n# Followed https:\/\/huggingface.co\/docs\/datasets\/filesystems.html#loading-a-processed-dataset-from-s3\r\n\r\nfrom datasets import load_from_disk\r\nfrom datasets.filesystems import S3FileSystem\r\n\r\n\r\ns3_file = \"output of save_to_disk\"\r\n\r\ns3_filesystem = S3FileSystem()\r\n\r\nload_from_disk(s3_file, fs=s3_filesystem)\r\n```\r\n\r\n## Expected results\r\nload_from_disk succeeds without error\r\n\r\n## Actual results\r\nSeems like it 
succeeds in pulling the file into a windows temp directory, as it exists in my system, but fails to process it.\r\n```\r\nException ignored in: \r\nTraceback (most recent call last):\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\weakref.py\", line 566, in __call__\r\n return info.func(*info.args, **(info.kwargs or {}))\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\tempfile.py\", line 817, in _cleanup\r\n cls._rmtree(name)\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\tempfile.py\", line 813, in _rmtree\r\n _shutil.rmtree(name, onerror=onerror)\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\shutil.py\", line 740, in rmtree\r\n return _rmtree_unsafe(path, onerror)\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\shutil.py\", line 613, in _rmtree_unsafe\r\n _rmtree_unsafe(fullname, onerror)\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\shutil.py\", line 613, in _rmtree_unsafe\r\n _rmtree_unsafe(fullname, onerror)\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\shutil.py\", line 613, in _rmtree_unsafe\r\n _rmtree_unsafe(fullname, onerror)\r\n [Previous line repeated 2 more times]\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\shutil.py\", line 618, in _rmtree_unsafe\r\n onerror(os.unlink, fullname, sys.exc_info())\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\tempfile.py\", line 805, in onerror\r\n cls._rmtree(path)\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\tempfile.py\", line 813, in _rmtree\r\n _shutil.rmtree(name, onerror=onerror)\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\shutil.py\", line 740, in rmtree\r\n return _rmtree_unsafe(path, onerror)\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\shutil.py\", line 599, in _rmtree_unsafe\r\n onerror(os.scandir, path, sys.exc_info())\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\shutil.py\", line 596, in _rmtree_unsafe\r\n with os.scandir(path) as scandir_it:\r\nNotADirectoryError: [WinError 267] The directory name is invalid: 'C:\\\\Users\\\\grassycup\\\\AppData\\\\Local\\\\Temp\\\\tmp45f_qbma\\\\tests3bucket\\\\output\\\\test_output\\\\train\\\\dataset.arrow'\r\nException ignored in: \r\nTraceback (most recent call last):\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\weakref.py\", line 566, in __call__\r\n return info.func(*info.args, **(info.kwargs or {}))\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\tempfile.py\", line 817, in _cleanup\r\n cls._rmtree(name)\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\tempfile.py\", line 813, in _rmtree\r\n _shutil.rmtree(name, onerror=onerror)\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\shutil.py\", line 740, in rmtree\r\n return _rmtree_unsafe(path, onerror)\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\shutil.py\", line 613, in _rmtree_unsafe\r\n _rmtree_unsafe(fullname, onerror)\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\shutil.py\", line 613, in _rmtree_unsafe\r\n _rmtree_unsafe(fullname, onerror)\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\shutil.py\", line 613, in _rmtree_unsafe\r\n _rmtree_unsafe(fullname, onerror)\r\n [Previous line repeated 2 more times]\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\shutil.py\", line 618, in 
_rmtree_unsafe\r\n onerror(os.unlink, fullname, sys.exc_info())\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\tempfile.py\", line 805, in onerror\r\n cls._rmtree(path)\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\tempfile.py\", line 813, in _rmtree\r\n _shutil.rmtree(name, onerror=onerror)\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\shutil.py\", line 740, in rmtree\r\n return _rmtree_unsafe(path, onerror)\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\shutil.py\", line 599, in _rmtree_unsafe\r\n onerror(os.scandir, path, sys.exc_info())\r\n File \"C:\\Users\\grassycup\\Anaconda3\\envs\\hello.world\\lib\\shutil.py\", line 596, in _rmtree_unsafe\r\n with os.scandir(path) as scandir_it:\r\nNotADirectoryError: [WinError 267] The directory name is invalid:\r\n'C:\\\\Users\\\\grassycup\\\\AppData\\\\Local\\\\Temp\\\\tmp45f_qbma\\\\tests3bucket\\\\output\\\\test_output\\\\train\\\\dataset.arrow'\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.11.0\r\n- Platform: Windows-10-10.0.19042-SP0\r\n- Python version: 3.8.11\r\n- PyArrow version: 3.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2878\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2877","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2877\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2877\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2877\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2877","id":990027249,"node_id":"MDU6SXNzdWU5OTAwMjcyNDk=","number":2877,"title":"Don't keep the dummy data folder or dataset_infos.json when resolving data files","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631023744000,"updated_at":1631023744000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"When there's no dataset script, all the data files of a folder or a repository on the Hub are loaded as data 
files.\r\n\r\nThere are already a few exceptions:\r\n- files starting with \".\" are ignored\r\n- the dataset card \"README.md\" is ignored\r\n- any file named \"config.json\" is ignored (currently it isn't used anywhere, but it could be used in the future to define splits or configs for example, but not 100% sure)\r\n\r\nHowever any data files in a folder named \"dummy\" should be ignored as well as they should only be used to test the dataset.\r\nSame for \"dataset_infos.json\" which should only be used to get the `dataset.info`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2877\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2876","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2876\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2876\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2876\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2876","id":990001079,"node_id":"MDExOlB1bGxSZXF1ZXN0NzI4NjU3MDc2","number":2876,"title":"Extend support for streaming datasets that use pathlib.Path.glob","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I am thinking that ideally we should call `fs.glob()` instead...","Thanks, @lhoestq: the idea of adding the mock filesystem is to avoid network calls and reduce testing time ;) \r\n\r\nI have added `rglob` as well and fixed some bugs."],"created_at":1631022225000,"updated_at":1631267449000,"closed_at":1631267448000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2876","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2876","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2876.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2876.patch"},"body":"This PR extends the support in streaming mode for datasets that use `pathlib`, by patching the method `pathlib.Path.glob`.\r\n\r\nRelated to #2874, #2866.\r\n\r\nCC: @severo","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2876\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2875","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2875\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2875\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2875\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2875","id":989919398,"node_id":"MDU6SXNzdWU5ODk5MTkzOTg=","number":2875,"title":"Add Congolese Swahili speech datasets","user":{"login":"osanseviero","id":7246357,"node_id":"MDQ6VXNlcjcyNDYzNTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7246357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/osanseviero","html_url":"https:\/\/github.com\/osanseviero","followers_url":"https:\/\/api.github.com\/users\/osanseviero\/followers","following_url":"https:\/\/api.github.com\/users\/osanseviero\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/osanseviero\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/osanseviero\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/osanseviero\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/osanseviero\/orgs","repos_url":"https:\/\/api.github.com\/users\/osanseviero\/repos","events_url":"https:\/\/api.github.com\/users\/osanseviero\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/osanseviero\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"},{"id":2725241052,"node_id":"MDU6TGFiZWwyNzI1MjQxMDUy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/speech","name":"speech","color":"d93f0b","default":false,"description":""}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1631016830000,"updated_at":1631016830000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** Congolese Swahili speech corpora\r\n- **Data:** https:\/\/gamayun.translatorswb.org\/data\/\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n\r\nAlso related: https:\/\/mobile.twitter.com\/OktemAlp\/status\/1435196393631764482","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2875\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2874","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2874\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2874\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2874\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2874","id":989685328,"node_id":"MDExOlB1bGxSZXF1ZXN0NzI4Mzg2Mjg4","number":2874,"title":"Support streaming datasets that use 
pathlib","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I've tried https:\/\/github.com\/huggingface\/datasets\/issues\/2866 again, and I get the same error.\r\n\r\n```python\r\nimport datasets as ds\r\nds.load_dataset('counter', split=\"train\", streaming=False)\r\n```","@severo Issue #2866 is not fully fixed yet: multiple patches need to be implemented for `pathlib`, as that dataset uses quite a lot of `pathlib` functions... \ud83d\ude05 ","No worry and no stress, I just wanted to check for that case :) I'm very happy that you're working on issues I'm interested in!"],"created_at":1631000149000,"updated_at":1631039122000,"closed_at":1631014875000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2874","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2874","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2874.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2874.patch"},"body":"This PR extends the support in streaming mode for datasets that use `pathlib.Path`.\r\n\r\nRelated to: #2866.\r\nCC: @severo ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2874\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2873","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2873\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2873\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2873\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2873","id":989587695,"node_id":"MDExOlB1bGxSZXF1ZXN0NzI4MzA0MTMw","number":2873,"title":"adding 
swedish_medical_ner","user":{"login":"bwang482","id":6764450,"node_id":"MDQ6VXNlcjY3NjQ0NTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6764450?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bwang482","html_url":"https:\/\/github.com\/bwang482","followers_url":"https:\/\/api.github.com\/users\/bwang482\/followers","following_url":"https:\/\/api.github.com\/users\/bwang482\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bwang482\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bwang482\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bwang482\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bwang482\/orgs","repos_url":"https:\/\/api.github.com\/users\/bwang482\/repos","events_url":"https:\/\/api.github.com\/users\/bwang482\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bwang482\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi, what's the current status of this request? It says Changes requested, but I can't see what changes?","Hi, it looks like this PR includes changes to other files that `swedish_medical_ner`.\r\n\r\nFeel free to remove these changes, or simply create a new PR that only contains the addition of the dataset"],"created_at":1630989893000,"updated_at":1631911657000,"closed_at":1631911657000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2873","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2873","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2873.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2873.patch"},"body":"Adding the Swedish Medical NER dataset, listed in \"Biomedical Datasets - BigScience Workshop 2021\"\r\n\r\nCode refactored ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2873\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2872","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2872\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2872\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2872\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2872","id":989453069,"node_id":"MDExOlB1bGxSZXF1ZXN0NzI4MTkzMjkz","number":2872,"title":"adding 
swedish_medical_ner","user":{"login":"bwang482","id":6764450,"node_id":"MDQ6VXNlcjY3NjQ0NTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6764450?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bwang482","html_url":"https:\/\/github.com\/bwang482","followers_url":"https:\/\/api.github.com\/users\/bwang482\/followers","following_url":"https:\/\/api.github.com\/users\/bwang482\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bwang482\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bwang482\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bwang482\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bwang482\/orgs","repos_url":"https:\/\/api.github.com\/users\/bwang482\/repos","events_url":"https:\/\/api.github.com\/users\/bwang482\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bwang482\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1630965652000,"updated_at":1630989392000,"closed_at":1630989392000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2872","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2872","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2872.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2872.patch"},"body":"Adding the Swedish Medical NER dataset, listed in \"Biomedical Datasets - BigScience Workshop 2021\"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2872\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2871","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2871\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2871\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2871\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2871","id":989436088,"node_id":"MDU6SXNzdWU5ODk0MzYwODg=","number":2871,"title":"datasets.config.PYARROW_VERSION has no attribute 
'major'","user":{"login":"bwang482","id":6764450,"node_id":"MDQ6VXNlcjY3NjQ0NTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6764450?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bwang482","html_url":"https:\/\/github.com\/bwang482","followers_url":"https:\/\/api.github.com\/users\/bwang482\/followers","following_url":"https:\/\/api.github.com\/users\/bwang482\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bwang482\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bwang482\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bwang482\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bwang482\/orgs","repos_url":"https:\/\/api.github.com\/users\/bwang482\/repos","events_url":"https:\/\/api.github.com\/users\/bwang482\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bwang482\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I have changed line 288 to `if int(datasets.config.PYARROW_VERSION.split(\".\")[0]) < 3:` just to get around it.","Hi @bwang482,\r\n\r\nI'm sorry but I'm not able to reproduce your bug.\r\n\r\nPlease note that in our current master branch, we made a commit (d03223d4d64b89e76b48b00602aba5aa2f817f1e) that simultaneously modified:\r\n- test_dataset_common.py: https:\/\/github.com\/huggingface\/datasets\/commit\/d03223d4d64b89e76b48b00602aba5aa2f817f1e#diff-a1bc225bd9a5bade373d1f140e24d09cbbdc97971c2f73bb627daaa803ada002L289 that introduces the usage of `datasets.config.PYARROW_VERSION.major`\r\n- but also changed config.py: https:\/\/github.com\/huggingface\/datasets\/commit\/d03223d4d64b89e76b48b00602aba5aa2f817f1e#diff-e021fcfc41811fb970fab889b8d245e68382bca8208e63eaafc9a396a336f8f2L40, so that `datasets.config.PYARROW_VERSION.major` exists\r\n","Sorted. Thanks!","Reopening this. 
Although the `test_dataset_common.py` script works fine now.\r\n\r\nHas this got something to do with my pull request not passing `ci\/circleci: run_dataset_script_tests_pyarrow` tests?\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/pull\/2873","Hi @bwang482,\r\n\r\nIf you click on `Details` (on the right of your non passing CI test names: `ci\/circleci: run_dataset_script_tests_pyarrow`), you can have more information about the non-passing tests.\r\n\r\nFor example, for [\"ci\/circleci: run_dataset_script_tests_pyarrow_1\" details](https:\/\/circleci.com\/gh\/huggingface\/datasets\/46324?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link), you can see that the only non-passing test has to do with the dataset card (missing information in the `README.md` file): `test_changed_dataset_card`\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests\/test_dataset_cards.py::test_changed_dataset_card[swedish_medical_ner]\r\n= 1 failed, 3214 passed, 2874 skipped, 2 xfailed, 1 xpassed, 15 warnings in 175.59s (0:02:55) =\r\n```\r\n\r\nTherefore, your PR non-passing test has nothing to do with this issue."],"created_at":1630962417000,"updated_at":1631091112000,"closed_at":1631091112000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"In the test_dataset_common.py script, line 288-289\r\n\r\n```\r\nif datasets.config.PYARROW_VERSION.major < 3:\r\n packaged_datasets = [pd for pd in packaged_datasets if pd[\"dataset_name\"] != \"parquet\"]\r\n```\r\n\r\nwhich throws the error below. `datasets.config.PYARROW_VERSION` itself return the string '4.0.1'. I have tested this on both datasets.__version_=='1.11.0' and '1.9.0'. I am using Mac OS.\r\n\r\n```\r\nimport datasets\r\ndatasets.config.PYARROW_VERSION.major\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n\/var\/folders\/1f\/0wqmlgp90qjd5mpj53fnjq440000gn\/T\/ipykernel_73361\/2547517336.py in \r\n 1 import datasets\r\n----> 2 datasets.config.PYARROW_VERSION.major\r\n\r\nAttributeError: 'str' object has no attribute 'major'\r\n```\r\n\r\n## Environment info\r\n- `datasets` version: 1.11.0\r\n- Platform: Darwin-20.6.0-x86_64-i386-64bit\r\n- Python version: 3.7.11\r\n- PyArrow version: 4.0.1\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2871\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2870","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2870\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2870\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2870\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2870","id":988276859,"node_id":"MDExOlB1bGxSZXF1ZXN0NzI3MjI4Njk5","number":2870,"title":"Fix three typos in two files for 
documentation","user":{"login":"leny-mi","id":25124853,"node_id":"MDQ6VXNlcjI1MTI0ODUz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25124853?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/leny-mi","html_url":"https:\/\/github.com\/leny-mi","followers_url":"https:\/\/api.github.com\/users\/leny-mi\/followers","following_url":"https:\/\/api.github.com\/users\/leny-mi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/leny-mi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/leny-mi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/leny-mi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/leny-mi\/orgs","repos_url":"https:\/\/api.github.com\/users\/leny-mi\/repos","events_url":"https:\/\/api.github.com\/users\/leny-mi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/leny-mi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1630756183000,"updated_at":1630916481000,"closed_at":1630916375000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2870","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2870","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2870.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2870.patch"},"body":"Changed \"bacth_size\" to \"batch_size\" (2x)\r\nChanged \"intsructions\" to \"instructions\"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2870\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2869","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2869\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2869\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2869\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2869","id":987676420,"node_id":"MDU6SXNzdWU5ODc2NzY0MjA=","number":2869,"title":"TypeError: 'NoneType' object is not 
callable","user":{"login":"Chenfei-Kang","id":40911446,"node_id":"MDQ6VXNlcjQwOTExNDQ2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/40911446?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Chenfei-Kang","html_url":"https:\/\/github.com\/Chenfei-Kang","followers_url":"https:\/\/api.github.com\/users\/Chenfei-Kang\/followers","following_url":"https:\/\/api.github.com\/users\/Chenfei-Kang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Chenfei-Kang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Chenfei-Kang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Chenfei-Kang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Chenfei-Kang\/orgs","repos_url":"https:\/\/api.github.com\/users\/Chenfei-Kang\/repos","events_url":"https:\/\/api.github.com\/users\/Chenfei-Kang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Chenfei-Kang\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi, @Chenfei-Kang.\r\n\r\nI'm sorry, but I'm not able to reproduce your bug:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"glue\", 'cola')\r\nds\r\n```\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 8551\r\n })\r\n validation: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 1043\r\n })\r\n test: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 1063\r\n })\r\n})\r\n```\r\n\r\nCould you please give more details and environment info (platform, PyArrow version)?","> Hi, @Chenfei-Kang.\r\n> \r\n> I'm sorry, but I'm not able to reproduce your bug:\r\n> \r\n> ```python\r\n> from datasets import load_dataset\r\n> \r\n> ds = load_dataset(\"glue\", 'cola')\r\n> ds\r\n> ```\r\n> \r\n> ```\r\n> DatasetDict({\r\n> train: Dataset({\r\n> features: ['sentence', 'label', 'idx'],\r\n> num_rows: 8551\r\n> })\r\n> validation: Dataset({\r\n> features: ['sentence', 'label', 'idx'],\r\n> num_rows: 1043\r\n> })\r\n> test: Dataset({\r\n> features: ['sentence', 'label', 'idx'],\r\n> num_rows: 1063\r\n> })\r\n> })\r\n> ```\r\n> \r\n> Could you please give more details and environment info (platform, PyArrow version)?\r\n\r\nSorry to reply you so late.\r\nplatform: pycharm 2021 + anaconda with python 3.7\r\nPyArrow version: 5.0.0\r\nhuggingface-hub: 0.0.16\r\ndatasets: 1.9.0\r\n","- For the platform, we need to know the operating system of your machine. Could you please run the command `datasets-cli env` and copy-and-paste its output below?\r\n- In relation with the error, you just gave us the error type and message (`TypeError: 'NoneType' object is not callable`). Could you please copy-paste the complete stack trace, so that we know exactly which part of the code threw the error?","> * For the platform, we need to know the operating system of your machine. Could you please run the command `datasets-cli env` and copy-and-paste its output below?\r\n> * In relation with the error, you just gave us the error type and message (`TypeError: 'NoneType' object is not callable`). 
Could you please copy-paste the complete stack trace, so that we know exactly which part of the code threw the error?\r\n\r\n1. For the platform, here are the output:\r\n - datasets` version: 1.11.0\r\n - Platform: Windows-10-10.0.19041-SP0\r\n - Python version: 3.7.10\r\n - PyArrow version: 5.0.0\r\n2. For the code and error\uff1a\r\n ```python\r\n from datasets import load_dataset, load_metric\r\n dataset = load_dataset(\"glue\", \"cola\")\r\n ```\r\n ```python\r\n Traceback (most recent call last):\r\n ....\r\n ....\r\n File \"my_file.py\", line 2, in \r\n dataset = load_dataset(\"glue\", \"cola\")\r\n File \"My environments\\lib\\site-packages\\datasets\\load.py\", line 830, in load_dataset\r\n **config_kwargs,\r\n File \"My environments\\lib\\site-packages\\datasets\\load.py\", line 710, in load_dataset_builder\r\n **config_kwargs,\r\n TypeError: 'NoneType' object is not callable\r\n ```\r\n Thank you!","For that environment, I am sorry but I can't reproduce the bug: I can load the dataset without any problem.","One naive question: do you have internet access from the machine where you execute the code?","> For that environment, I am sorry but I can't reproduce the bug: I can load the dataset without any problem.\r\n\r\nBut I can download other task dataset such as `dataset = load_dataset('squad')`. I don't know what went wrong. Thank you so much!"],"created_at":1630668459000,"updated_at":1631102998000,"closed_at":1631093095000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\nTypeError: 'NoneType' object is not callable\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset, load_metric\r\ndataset = datasets.load_dataset(\"glue\", 'cola')\r\n```\r\n\r\n## Expected results\r\nA clear and concise description of the expected results.\r\n\r\n## Actual results\r\nSpecify the actual results or traceback.\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.11.0\r\n- Platform:\r\n- Python version: 3.7\r\n- PyArrow version:\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2869\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2868","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2868\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2868\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2868\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2868","id":987139146,"node_id":"MDU6SXNzdWU5ODcxMzkxNDY=","number":2868,"title":"Add Common Objects in 3D 
(CO3D)","user":{"login":"nateraw","id":32437151,"node_id":"MDQ6VXNlcjMyNDM3MTUx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32437151?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nateraw","html_url":"https:\/\/github.com\/nateraw","followers_url":"https:\/\/api.github.com\/users\/nateraw\/followers","following_url":"https:\/\/api.github.com\/users\/nateraw\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nateraw\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nateraw\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nateraw\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nateraw\/orgs","repos_url":"https:\/\/api.github.com\/users\/nateraw\/repos","events_url":"https:\/\/api.github.com\/users\/nateraw\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nateraw\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1630614972000,"updated_at":1630614972000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** *Common Objects in 3D (CO3D)*\r\n- **Description:** *See blog post [here](https:\/\/ai.facebook.com\/blog\/common-objects-in-3d-dataset-for-3d-reconstruction)*\r\n- **Paper:** *[link to paper](https:\/\/arxiv.org\/abs\/2109.00512)*\r\n- **Data:** *[link to data](https:\/\/ai.facebook.com\/datasets\/co3d-downloads\/)*\r\n- **Motivation:** *excerpt from above blog post:*\r\n\r\n> As the first data set of its kind, CO3D will aptly enable reconstruction of real-life 3D objects. Indeed, CO3D already provides training data to enable our NeRFormer to tackle the new-view synthesis (NVS) task. 
Here, photorealistic NVS is a major step on the path to fully immersive AR\/VR effects, where objects can be virtually transported across different environments, which will allow connecting users by sharing or recollecting their experiences.\r\n> \r\n> Besides practical applications in AR\/VR, we hope that the data set will become a standard testbed for the recent proliferation of methods (including NeRFormer, Implicit Differentiable Renderer, NeRF, and others) that reconstruct 3D scenes by means of an implicit shape model.\r\n> \r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2868\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2867","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2867\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2867\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2867\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2867","id":986971224,"node_id":"MDExOlB1bGxSZXF1ZXN0NzI2MTE3NzAw","number":2867,"title":"Add CaSiNo dataset","user":{"login":"kushalchawla","id":8416863,"node_id":"MDQ6VXNlcjg0MTY4NjM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8416863?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/kushalchawla","html_url":"https:\/\/github.com\/kushalchawla","followers_url":"https:\/\/api.github.com\/users\/kushalchawla\/followers","following_url":"https:\/\/api.github.com\/users\/kushalchawla\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/kushalchawla\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/kushalchawla\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/kushalchawla\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/kushalchawla\/orgs","repos_url":"https:\/\/api.github.com\/users\/kushalchawla\/repos","events_url":"https:\/\/api.github.com\/users\/kushalchawla\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/kushalchawla\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq \r\n\r\nJust a request to look at the dataset. Please let me know if any changes are necessary before merging it into the repo. Thank you.","Hey @lhoestq \r\n\r\nThanks for merging it. One question: I still cannot find the dataset on https:\/\/huggingface.co\/datasets. Does it take some time or did I miss something?","Hi ! It takes a few hours or a day for the list of datasets on the website to be updated ;)"],"created_at":1630602383000,"updated_at":1631805174000,"closed_at":1631784224000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2867","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2867","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2867.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2867.patch"},"body":"Hi. I request you to add our dataset to the repository. 
\r\n\r\nThis data was recently published at NAACL 2021: https:\/\/aclanthology.org\/2021.naacl-main.254.pdf","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2867\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2866","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2866\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2866\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2866\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2866","id":986706676,"node_id":"MDU6SXNzdWU5ODY3MDY2NzY=","number":2866,"title":"\"counter\" dataset raises an error in normal mode, but not in streaming mode","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @severo, thanks for reporting.\r\n\r\nJust note that currently not all canonical datasets support streaming mode: this is one case!\r\n\r\nAll datasets that use `pathlib` joins (using `\/`) instead of `os.path.join` (as in this dataset) do not support streaming mode yet.","OK. Do you think it's possible to detect this, and raise an exception (maybe `NotImplementedError`, or a specific `StreamingError`)?","We should definitely support datasets using `pathlib` in streaming mode...\r\n\r\nFor non-supported datasets in streaming mode, we have already a request of raising an error\/warning: see #2654.","Hi @severo, please note that \"counter\" dataset will be streamable (at least until it arrives at the missing file, error already in normal mode) once these PRs are merged:\r\n- #2874\r\n- #2876\r\n- #2880\r\n\r\nI have tested it. 
\ud83d\ude09 ","Now (on master), we get:\r\n\r\n```\r\nimport datasets as ds\r\nds.load_dataset('counter', split=\"train\", streaming=False)\r\n```\r\n\r\n```\r\nUsing custom data configuration default\r\nDownloading and preparing dataset counter\/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to \/home\/slesage\/.cache\/huggingface\/datasets\/counter\/default\/1.0.0\/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...\r\nTraceback (most recent call last):\r\n File \"\/home\/slesage\/hf\/datasets\/src\/datasets\/builder.py\", line 726, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/home\/slesage\/hf\/datasets\/src\/datasets\/builder.py\", line 1124, in _prepare_split\r\n for key, record in utils.tqdm(\r\n File \"\/home\/slesage\/hf\/datasets\/.venv\/lib\/python3.8\/site-packages\/tqdm\/std.py\", line 1185, in __iter__\r\n for obj in iterable:\r\n File \"\/home\/slesage\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/counter\/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9\/counter.py\", line 161, in _generate_examples\r\n with derived_file.open(encoding=\"utf-8\") as f:\r\n File \"\/home\/slesage\/.pyenv\/versions\/3.8.11\/lib\/python3.8\/pathlib.py\", line 1222, in open\r\n return io.open(self, mode, buffering, encoding, errors, newline,\r\n File \"\/home\/slesage\/.pyenv\/versions\/3.8.11\/lib\/python3.8\/pathlib.py\", line 1078, in _opener\r\n return self._accessor.open(self, flags, mode)\r\nFileNotFoundError: [Errno 2] No such file or directory: '\/home\/slesage\/.cache\/huggingface\/datasets\/downloads\/extracted\/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211\/COUNTER\/0032p.xml'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/home\/slesage\/hf\/datasets\/src\/datasets\/load.py\", line 1112, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/home\/slesage\/hf\/datasets\/src\/datasets\/builder.py\", line 636, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/home\/slesage\/hf\/datasets\/src\/datasets\/builder.py\", line 728, in _download_and_prepare\r\n raise OSError(\r\nOSError: Cannot find data file.\r\nOriginal error:\r\n[Errno 2] No such file or directory: '\/home\/slesage\/.cache\/huggingface\/datasets\/downloads\/extracted\/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211\/COUNTER\/0032p.xml'\r\n```\r\n\r\nThe error is now the same with or without streaming. I close the issue, thanks @albertvillanova and @lhoestq!\r\n","Note that we might want to open an issue to fix the \"counter\" dataset by itself, but I let it up to you.","Fixed here: https:\/\/github.com\/huggingface\/datasets\/pull\/2894. 
Thanks @albertvillanova "],"created_at":1630588253000,"updated_at":1631291369000,"closed_at":1631277085000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\n`counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.\r\n\r\n## Steps to reproduce the bug\r\n\r\n```python\r\n>>> import datasets as ds\r\n>>> a = ds.load_dataset('counter', split=\"train\", streaming=False)\r\nUsing custom data configuration default\r\nDownloading and preparing dataset counter\/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to \/home\/slesage\/.cache\/huggingface\/datasets\/counter\/default\/1.0.0\/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...\r\nTraceback (most recent call last):\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 726, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 1124, in _prepare_split\r\n for key, record in utils.tqdm(\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/tqdm\/std.py\", line 1185, in __iter__\r\n for obj in iterable:\r\n File \"\/home\/slesage\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/counter\/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9\/counter.py\", line 161, in _generate_examples\r\n with derived_file.open(encoding=\"utf-8\") as f:\r\n File \"\/home\/slesage\/.pyenv\/versions\/3.8.11\/lib\/python3.8\/pathlib.py\", line 1222, in open\r\n return io.open(self, mode, buffering, encoding, errors, newline,\r\n File \"\/home\/slesage\/.pyenv\/versions\/3.8.11\/lib\/python3.8\/pathlib.py\", line 1078, in _opener\r\n return self._accessor.open(self, flags, mode)\r\nFileNotFoundError: [Errno 2] No such file or directory: '\/home\/slesage\/.cache\/huggingface\/datasets\/downloads\/extracted\/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211\/COUNTER\/0032p.xml'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 1112, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 636, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/home\/slesage\/hf\/datasets-preview-backend\/.venv\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 728, in _download_and_prepare\r\n raise OSError(\r\nOSError: Cannot find data file.\r\nOriginal error:\r\n[Errno 2] No such file or directory: '\/home\/slesage\/.cache\/huggingface\/datasets\/downloads\/extracted\/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211\/COUNTER\/0032p.xml'\r\n```\r\n\r\n```python\r\n>>> import datasets as ds\r\n>>> b = ds.load_dataset('counter', split=\"train\", streaming=True)\r\nUsing custom data configuration default\r\n>>> list(b)\r\n[]\r\n```\r\n\r\n## Expected results\r\n\r\nAn exception should be raised in streaming mode\r\n\r\n## Actual results\r\n\r\nNo exception is raised in streaming mode: there is no way to tell if something 
has broken or if the dataset is simply empty.\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.11.1.dev0\r\n- Platform: Linux-5.11.0-1016-aws-x86_64-with-glibc2.29\r\n- Python version: 3.8.11\r\n- PyArrow version: 4.0.1\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2866\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2865","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2865\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2865\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2865\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2865","id":986460698,"node_id":"MDExOlB1bGxSZXF1ZXN0NzI1NjY1ODgx","number":2865,"title":"Add MultiEURLEX dataset","user":{"login":"iliaschalkidis","id":1626984,"node_id":"MDQ6VXNlcjE2MjY5ODQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1626984?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/iliaschalkidis","html_url":"https:\/\/github.com\/iliaschalkidis","followers_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/followers","following_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/orgs","repos_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/repos","events_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq, we have this new cool multilingual dataset coming at EMNLP 2021. It would be really nice if we could have it in Hugging Face asap. Thanks! ","Hi @lhoestq, I adopted most of your suggestions:\r\n\r\n- Dummy data files reduced, including the 2 smallest documents per subset JSONL.\r\n- README was updated with the publication URL and instructions on how to download and use label descriptors. Excessive newlines were deleted.\r\n\r\nI would prefer to keep the label list in a pure format (original ids), to enable people to combine those with more information or possibly in the future explore the dataset, find inconsistencies and fix those to release a new version. ","Thanks for the changes :)\r\n\r\nRegarding the labels:\r\n\r\nIf you use the ClassLabel feature type, the only change is that it will store the ids as integers instead of (currently) string.\r\nThe advantage is that if people want to know what id corresponds to which label name, they can use `classlabel.int2str`. It is also the format that helps automate model training for classification in `transformers`.\r\n\r\nLet me know if that sounds good to you or if you still want to stick with the labels as they are now.","Hey @lhoestq, thanks for providing this information. This sounds great. I updated my code accordingly to use `ClassLabel`. 
Could you please provide a minimal example of how `classlabel.int2str` works in practice in my case, where labels are a sequence?\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('multi_eurlex', 'all_languages')\r\n# Read strs from the labels (list of integers) for the 1st sample of the training split\r\n```\r\n\r\nI would like to include this in the README file.\r\n\r\nCould you also provide some info on how I could define the supervized key to automate model training, as you said?\r\n\r\nThanks!","Thanks for the update :)\r\n\r\nHere is an example of usage:\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('multi_eurlex', 'all_languages', split='train')\r\nclasslabel = dataset.features[\"labels\"].feature\r\nprint(dataset[0][\"labels\"])\r\n# [1, 20, 7, 3, 0]\r\nprint(classlabel.int2str(dataset[0][\"labels\"]))\r\n# ['100160', '100155', '100158', '100147', '100149']\r\n```\r\n\r\nThe ClassLabel is simply used to define the `id2label` dictionary of classification models, to make the ids match between the model and the dataset. There nothing more to do :p \r\n\r\nI think one last thing to do is just update the `dataset_infos.json` file and we'll be good !","Everything is ready! \ud83d\udc4d \r\n"],"created_at":1630575744000,"updated_at":1631274606000,"closed_at":1631274606000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2865","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2865","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2865.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2865.patch"},"body":"**Add new MultiEURLEX Dataset**\r\n\r\nMultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource). Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of EU. 
As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels); this is multi-label classification task (given the text, predict multiple labels).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2865\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2864","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2864\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2864\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2864\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2864","id":986159438,"node_id":"MDExOlB1bGxSZXF1ZXN0NzI1MzkyNjcw","number":2864,"title":"Fix data URL in ToTTo dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/8","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/8","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/8\/labels","id":6968069,"node_id":"MI_kwDODunzps4AalMF","number":8,"title":"1.12","description":"Next minor 
release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":5,"closed_issues":1,"state":"open","created_at":1626881696000,"updated_at":1630565260000,"due_on":1630306800000,"closed_at":null},"comments":[],"created_at":1630560308000,"updated_at":1630565260000,"closed_at":1630565260000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2864","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2864","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2864.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2864.patch"},"body":"Data source host changed their data URL: google-research-datasets\/ToTTo@cebeb43.\r\n\r\nFix #2860.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2864\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2863","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2863\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2863\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2863\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2863","id":986156755,"node_id":"MDExOlB1bGxSZXF1ZXN0NzI1MzkwMTkx","number":2863,"title":"Update dataset 
URL","user":{"login":"mrm8488","id":3653789,"node_id":"MDQ6VXNlcjM2NTM3ODk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3653789?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mrm8488","html_url":"https:\/\/github.com\/mrm8488","followers_url":"https:\/\/api.github.com\/users\/mrm8488\/followers","following_url":"https:\/\/api.github.com\/users\/mrm8488\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mrm8488\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mrm8488\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mrm8488\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mrm8488\/orgs","repos_url":"https:\/\/api.github.com\/users\/mrm8488\/repos","events_url":"https:\/\/api.github.com\/users\/mrm8488\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mrm8488\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Superseded by PR #2864.\r\n\r\n@mrm8488 next time you would like to work on an issue, you can first self-assign it to you (by writing `#self-assign` in a comment on the issue). That way, other people can see you are already working on it and there are not multiple people working on the same issue. \ud83d\ude09 "],"created_at":1630560138000,"updated_at":1630570250000,"closed_at":1630570250000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2863","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2863","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2863.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2863.patch"},"body":null,"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2863\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2862","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2862\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2862\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2862\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2862","id":985763001,"node_id":"MDU6SXNzdWU5ODU3NjMwMDE=","number":2862,"title":"Only retain relevant statistics in certain 
metrics","user":{"login":"ZhaofengWu","id":11954789,"node_id":"MDQ6VXNlcjExOTU0Nzg5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11954789?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ZhaofengWu","html_url":"https:\/\/github.com\/ZhaofengWu","followers_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/followers","following_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/orgs","repos_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/repos","events_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1630534690000,"updated_at":1630534690000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"**Is your feature request related to a problem? Please describe.**\r\nAs I understand, in the `add_batch()` function, the raw predictions and references are kept (in memory?) until `compute()` is called.\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/e248247518140d5b0527ce2843a1a327e2902059\/src\/datasets\/metric.py#L423-L442\r\n\r\nThis takes O(n) memory. However, for many (most?) metrics, this is not necessary. E.g., for accuracy, only the # correct and # total need to be recorded.\r\n\r\n**Describe the solution you'd like**\r\nProbably an inheritance hierarchy where `\"predictions\"` and `\"references\"` are not always the two keys for the final metric computation. 
Each metric should create and maintain its own relevant statistics, again for example, `\"n_correct\"` and `\"n_total\"` for accuracy.\r\n\r\nI believe the metrics in AllenNLP (https:\/\/github.com\/allenai\/allennlp\/tree\/39c40fe38cd2fd36b3465b0b3c031f54ec824160\/allennlp\/training\/metrics) can be used as a good reference.\r\n\r\n**Describe alternatives you've considered**\r\nAt least `Metric.compute()` shouldn't hard-code `\"predictions\"` and `\"references\"` so that custom subclasses may override this behavior.\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/e248247518140d5b0527ce2843a1a327e2902059\/src\/datasets\/metric.py#L399-L400","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2862\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2861","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2861\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2861\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2861\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2861","id":985081871,"node_id":"MDExOlB1bGxSZXF1ZXN0NzI0NDM2OTcw","number":2861,"title":"fix: \ud83d\udc1b be more specific when catching exceptions","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["To give more context: after our discussion, if I understood properly, you are trying to fix a call to `datasets` that takes 15 minutes: https:\/\/github.com\/huggingface\/datasets-preview-backend\/issues\/17 Is this right?\r\n\r\n","Yes, that's it. And to do that I'm trying to use https:\/\/pypi.org\/project\/stopit\/, which will raise a stopit.TimeoutException exception. But currently, if this exception is raised, it's caught and considered as a \"FileNotFoundError\" while it should not be caught. ","And what about passing the `timeout` parameter instead?","It might be a good idea, but I would have to add a timeout argument to several methods, I'm not sure we want that (I want to ensure all my queries in https:\/\/github.com\/huggingface\/datasets-preview-backend\/tree\/master\/src\/datasets_preview_backend\/queries resolve in a given time, be it with an error in case of timeout, or with the successful response). 
The methods are `prepare_module`, `import_main_class`, *builder_cls.*`get_all_exported_dataset_infos`, `load_dataset_builder`, and `load_dataset`","I understand, you are trying to find a fix for your use case. OK.\r\n\r\nJust note that it is also an issue for `datasets` users. Once #2859 is fixed in `datasets`, you will no longer have this issue...","Closing, since 1. my problem is more about #2859, and I was asking for that change in order to make a hack work on my side, 2. if we want to change how exceptions are handled, we surely want to do it on all the codebase, not only in this particular case."],"created_at":1630498692000,"updated_at":1630576416000,"closed_at":1630576323000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2861","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2861","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2861.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2861.patch"},"body":"The same specific exception is caught in other parts of the same\r\nfunction.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2861\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2860","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2860\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2860\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2860\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2860","id":985013339,"node_id":"MDU6SXNzdWU5ODUwMTMzMzk=","number":2860,"title":"Cannot download TOTTO dataset","user":{"login":"mrm8488","id":3653789,"node_id":"MDQ6VXNlcjM2NTM3ODk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3653789?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mrm8488","html_url":"https:\/\/github.com\/mrm8488","followers_url":"https:\/\/api.github.com\/users\/mrm8488\/followers","following_url":"https:\/\/api.github.com\/users\/mrm8488\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mrm8488\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mrm8488\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mrm8488\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mrm8488\/orgs","repos_url":"https:\/\/api.github.com\/users\/mrm8488\/repos","events_url":"https:\/\/api.github.com\/users\/mrm8488\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mrm8488\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hola @mrm8488, thanks for reporting.\r\n\r\nApparently, the data source host changed their URL one week ago: https:\/\/github.com\/google-research-datasets\/ToTTo\/commit\/cebeb430ec2a97747e704d16a9354f7d9073ff8f\r\n\r\nI'm fixing it."],"created_at":1630494250000,"updated_at":1630565260000,"closed_at":1630565260000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Error: Couldn't find file at https:\/\/storage.googleapis.com\/totto\/totto_data.zip\r\n\r\n`datasets version: 1.11.0`\r\n# How to reproduce:\r\n\r\n```py\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('totto')\r\n```\r\n\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2860\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2859","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2859\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2859\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2859\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2859","id":984324500,"node_id":"MDU6SXNzdWU5ODQzMjQ1MDA=","number":2859,"title":"Loading allenai\/c4 in streaming mode does too many HEAD requests","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"},{"id":3287858981,"node_id":"MDU6TGFiZWwzMjg3ODU4OTgx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/streaming","name":"streaming","color":"fef2c0","default":false,"description":""}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["https:\/\/github.com\/huggingface\/datasets\/blob\/6c766f9115d686182d76b1b937cb27e099c45d68\/src\/datasets\/builder.py#L179-L186"],"created_at":1630444264000,"updated_at":1630484194000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"This does 60,000+ HEAD requests to get all the ETags of all the data files:\r\n```python\r\nfrom datasets import load_dataset\r\nload_dataset(\"allenai\/c4\", streaming=True)\r\n```\r\nIt makes loading the dataset completely impractical.\r\n\r\nThe ETags are used to compute the config id (it must depend on the data files being used).\r\nInstead of using the ETags, we could simply use the commit hash of the dataset repository on the hub, as well and the glob pattern used to resolve the files (here it's `*` by default, to load all the files of the repository)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2859\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2858","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2858\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2858\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2858\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2858","id":984145568,"node_id":"MDExOlB1bGxSZXF1ZXN0NzIzNjEzNzQ0","number":2858,"title":"Fix s3fs version in CI","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1630433143000,"updated_at":1630935215000,"closed_at":1630445391000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2858","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2858","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2858.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2858.patch"},"body":"The latest s3fs version has new constrains on aiobotocore, and therefore on boto3 and botocore\r\n\r\nThis PR changes the constrains to avoid the new conflicts\r\n\r\nIn particular it pins the version of s3fs.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2858\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2857","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2857\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2857\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2857\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2857","id":984093938,"node_id":"MDExOlB1bGxSZXF1ZXN0NzIzNTY5OTE4","number":2857,"title":"Update: Openwebtext - update 
size","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI error in unrelated to this PR and fixed on master"],"created_at":1630429863000,"updated_at":1631007872000,"closed_at":1631007872000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2857","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2857","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2857.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2857.patch"},"body":"Update the size of the Openwebtext dataset\r\n\r\nI also regenerated the dataset_infos.json but the data file checksum didn't change, and the number of examples either (8013769 examples)\r\n\r\nrelated to #2839 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2857\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2856","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2856\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2856\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2856\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2856","id":983876734,"node_id":"MDExOlB1bGxSZXF1ZXN0NzIzMzg2NzIw","number":2856,"title":"fix: \ud83d\udc1b remove URL's query string only if it's 
?dl=1","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1630417207000,"updated_at":1630419732000,"closed_at":1630419732000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2856","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2856","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2856.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2856.patch"},"body":"A lot of URL use the query strings, for example\r\nhttp:\/\/opus.nlpl.eu\/download.php?f=Bianet\/v1\/moses\/en-ku.txt.zip, we\r\nmust not remove it when trying to detect the protocol. We thus remove it\r\nonly in the case of the query string being ?dl=1 which occurs on dropbox\r\nand dl.orangedox.com. 
Also: add unit tests.\r\n\r\nSee https:\/\/github.com\/huggingface\/datasets\/pull\/2843 for the original\r\ndiscussion.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2856\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2855","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2855\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2855\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2855\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2855","id":983858229,"node_id":"MDExOlB1bGxSZXF1ZXN0NzIzMzcxMTIy","number":2855,"title":"Fix windows CI CondaError","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1630416122000,"updated_at":1630416934000,"closed_at":1630416933000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2855","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2855","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2855.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2855.patch"},"body":"From this thread: https:\/\/github.com\/conda\/conda\/issues\/6057\r\n\r\nWe can fix the conda error\r\n```\r\nCondaError: Cannot link a source that does not exist.\r\nC:\\Users\\...\\Anaconda3\\Scripts\\conda.exe\r\n```\r\n\r\nby doing\r\n```bash\r\nconda update conda\r\n```\r\n\r\nbefore doing any install in the windows CI","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2855\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2854","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2854\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2854\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2854\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2854","id":983726084,"node_id":"MDExOlB1bGxSZXF1ZXN0NzIzMjU3NDg5","number":2854,"title":"Fix caching when moving 
script","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Merging since the CI failure is unrelated to this PR"],"created_at":1630407515000,"updated_at":1630415616000,"closed_at":1630415616000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2854","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2854","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2854.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2854.patch"},"body":"When caching the result of a `map` function, the hash that is computed depends on many properties of this function, such as all the python objects it uses, its code and also the location of this code.\r\n\r\nUsing the full path of the python script for the location of the code makes the hash change if a script like `run_mlm.py` is moved.\r\n\r\nI changed this by simply using the base name of the script instead of the full path.\r\n\r\nNote that this change also affects the hash of the code used from imported modules, but I think it's fine. 
Indeed it hashes the code of the imported modules anyway, so the location of the python files of the imported modules doesn't matter when computing the hash.\r\n\r\nClose https:\/\/github.com\/huggingface\/datasets\/issues\/2825","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2854\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2853","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2853\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2853\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2853\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2853","id":983692026,"node_id":"MDExOlB1bGxSZXF1ZXN0NzIzMjI4NDY3","number":2853,"title":"Add AMI dataset","user":{"login":"cahya-wirawan","id":7669893,"node_id":"MDQ6VXNlcjc2Njk4OTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7669893?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cahya-wirawan","html_url":"https:\/\/github.com\/cahya-wirawan","followers_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/followers","following_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/orgs","repos_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/repos","events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1630405141000,"updated_at":1632131136000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2853","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2853","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2853.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2853.patch"},"body":"This is an initial commit for AMI dataset","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2853\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2852","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2852\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2852\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2852\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2852","id":983609352,"node_id":"MDExOlB1bGxSZXF1ZXN0NzIzMTU4Mzc4","number":2852,"title":"Fix: linnaeus - fix 
url","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Merging since the CI error is unrelated this this PR"],"created_at":1630399873000,"updated_at":1630415530000,"closed_at":1630415529000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2852","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2852","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2852.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2852.patch"},"body":"The url was causing a `ConnectionError` because of the \"\/\" at the end\r\n\r\nClose https:\/\/github.com\/huggingface\/datasets\/issues\/2821","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2852\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2851","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2851\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2851\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2851\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2851","id":982789593,"node_id":"MDExOlB1bGxSZXF1ZXN0NzIyNDg4MDY2","number":2851,"title":"Update `column_names` showed as `:func:` in 
exploring.st","user":{"login":"ClementRomac","id":8899812,"node_id":"MDQ6VXNlcjg4OTk4MTI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8899812?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ClementRomac","html_url":"https:\/\/github.com\/ClementRomac","followers_url":"https:\/\/api.github.com\/users\/ClementRomac\/followers","following_url":"https:\/\/api.github.com\/users\/ClementRomac\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ClementRomac\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ClementRomac\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ClementRomac\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ClementRomac\/orgs","repos_url":"https:\/\/api.github.com\/users\/ClementRomac\/repos","events_url":"https:\/\/api.github.com\/users\/ClementRomac\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ClementRomac\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1630329706000,"updated_at":1630485731000,"closed_at":1630421146000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2851","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2851","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2851.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2851.patch"},"body":"Hi, \r\n\r\nOne mention of `column_names` in exploring.st was showing it as `:func:` instead of `:attr:`.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2851\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2850","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2850\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2850\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2850\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2850","id":982654644,"node_id":"MDU6SXNzdWU5ODI2NTQ2NDQ=","number":2850,"title":"Wound segmentation 
datasets","user":{"login":"osanseviero","id":7246357,"node_id":"MDQ6VXNlcjcyNDYzNTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7246357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/osanseviero","html_url":"https:\/\/github.com\/osanseviero","followers_url":"https:\/\/api.github.com\/users\/osanseviero\/followers","following_url":"https:\/\/api.github.com\/users\/osanseviero\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/osanseviero\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/osanseviero\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/osanseviero\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/osanseviero\/orgs","repos_url":"https:\/\/api.github.com\/users\/osanseviero\/repos","events_url":"https:\/\/api.github.com\/users\/osanseviero\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/osanseviero\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1630320272000,"updated_at":1630320272000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** Wound segmentation datasets\r\n- **Description:** annotated wound image dataset \r\n- **Paper:** https:\/\/www.nature.com\/articles\/s41598-020-78799-w\r\n- **Data:** https:\/\/github.com\/uwm-bigdata\/wound-segmentation\r\n- **Motivation:** Interesting simple image dataset, useful for segmentation, with visibility due to http:\/\/www.miccai.org\/special-interest-groups\/challenges\/ and https:\/\/fusc.grand-challenge.org\/\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2850\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2849","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2849\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2849\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2849\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2849","id":982631420,"node_id":"MDU6SXNzdWU5ODI2MzE0MjA=","number":2849,"title":"Add Open Catalyst Project 
Dataset","user":{"login":"osanseviero","id":7246357,"node_id":"MDQ6VXNlcjcyNDYzNTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7246357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/osanseviero","html_url":"https:\/\/github.com\/osanseviero","followers_url":"https:\/\/api.github.com\/users\/osanseviero\/followers","following_url":"https:\/\/api.github.com\/users\/osanseviero\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/osanseviero\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/osanseviero\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/osanseviero\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/osanseviero\/orgs","repos_url":"https:\/\/api.github.com\/users\/osanseviero\/repos","events_url":"https:\/\/api.github.com\/users\/osanseviero\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/osanseviero\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1630318479000,"updated_at":1630318479000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** Open Catalyst 2020 (OC20) Dataset\r\n- **Website:** https:\/\/opencatalystproject.org\/\r\n- **Data:** https:\/\/github.com\/Open-Catalyst-Project\/ocp\/blob\/master\/DATASET.md\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2849\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2848","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2848\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2848\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2848\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2848","id":981953908,"node_id":"MDExOlB1bGxSZXF1ZXN0NzIxODYyMDQx","number":2848,"title":"Update 
README.md","user":{"login":"odellus","id":4686956,"node_id":"MDQ6VXNlcjQ2ODY5NTY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4686956?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/odellus","html_url":"https:\/\/github.com\/odellus","followers_url":"https:\/\/api.github.com\/users\/odellus\/followers","following_url":"https:\/\/api.github.com\/users\/odellus\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/odellus\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/odellus\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/odellus\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/odellus\/orgs","repos_url":"https:\/\/api.github.com\/users\/odellus\/repos","events_url":"https:\/\/api.github.com\/users\/odellus\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/odellus\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Merging since the CI error is unrelated to this PR and fixed on master"],"created_at":1630195106000,"updated_at":1631007632000,"closed_at":1631007632000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2848","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2848","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2848.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2848.patch"},"body":"Changed 'Tain' to 'Train'.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2848\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2847","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2847\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2847\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2847\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2847","id":981589693,"node_id":"MDExOlB1bGxSZXF1ZXN0NzIxNjA3OTA0","number":2847,"title":"fix regex to accept negative 
timezone","user":{"login":"jadermcs","id":7156771,"node_id":"MDQ6VXNlcjcxNTY3NzE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7156771?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jadermcs","html_url":"https:\/\/github.com\/jadermcs","followers_url":"https:\/\/api.github.com\/users\/jadermcs\/followers","following_url":"https:\/\/api.github.com\/users\/jadermcs\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jadermcs\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jadermcs\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jadermcs\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jadermcs\/orgs","repos_url":"https:\/\/api.github.com\/users\/jadermcs\/repos","events_url":"https:\/\/api.github.com\/users\/jadermcs\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jadermcs\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1630097645000,"updated_at":1631565590000,"closed_at":1631007263000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2847","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2847","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2847.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2847.patch"},"body":"fix #2846","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2847\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2846","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2846\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2846\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2846\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2846","id":981587590,"node_id":"MDU6SXNzdWU5ODE1ODc1OTA=","number":2846,"title":"Negative timezone","user":{"login":"jadermcs","id":7156771,"node_id":"MDQ6VXNlcjcxNTY3NzE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7156771?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jadermcs","html_url":"https:\/\/github.com\/jadermcs","followers_url":"https:\/\/api.github.com\/users\/jadermcs\/followers","following_url":"https:\/\/api.github.com\/users\/jadermcs\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jadermcs\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jadermcs\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jadermcs\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jadermcs\/orgs","repos_url":"https:\/\/api.github.com\/users\/jadermcs\/repos","events_url":"https:\/\/api.github.com\/users\/jadermcs\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jadermcs\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Fixed by #2847."],"created_at":1630097433000,"updated_at":1631274667000,"closed_at":1631274667000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nThe load_dataset method do not accept a parquet file with a negative timezone, as it has the following regex:\r\n```\r\n\"^(s|ms|us|ns),\\s*tz=([a-zA-Z0-9\/_+:]*)$\"\r\n```\r\nSo a valid timestap ```timestamp[us, tz=-03:00]``` returns an error when loading parquet files.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n# Where the timestamp column has a tz of -03:00\r\ndatasets = load_dataset('parquet', data_files={'train': train_files, 'validation': validation_files,\r\n 'test': test_files}, cache_dir=\".\/cache_teste\/\")\r\n```\r\n\r\n## Expected results\r\nThe -03:00 is a valid tz so the regex should accept this without raising an error.\r\n\r\n## Actual results\r\nAs this regex disaproves a valid tz it raises the following error:\r\n```python\r\nraise ValueError(\r\n f\"{datasets_dtype} is not a validly formatted string representation of a pyarrow timestamp.\"\r\n f\"Examples include timestamp[us] or timestamp[us, tz=America\/New_York]\"\r\n f\"See: https:\/\/arrow.apache.org\/docs\/python\/generated\/pyarrow.timestamp.html#pyarrow.timestamp\"\r\n )\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.11.0\r\n- Platform: Ubuntu 20.04\r\n- Python version: 3.8\r\n- PyArrow version: 5.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2846\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2845","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2845\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2845\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2845\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2845","id":981487861,"node_id":"MDU6SXNzdWU5ODE0ODc4NjE=","number":2845,"title":"[feature request] adding easy to remember `datasets.cache_dataset()` + 
`datasets.is_dataset_cached()`","user":{"login":"stas00","id":10676103,"node_id":"MDQ6VXNlcjEwNjc2MTAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10676103?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stas00","html_url":"https:\/\/github.com\/stas00","followers_url":"https:\/\/api.github.com\/users\/stas00\/followers","following_url":"https:\/\/api.github.com\/users\/stas00\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stas00\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stas00\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stas00\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stas00\/orgs","repos_url":"https:\/\/api.github.com\/users\/stas00\/repos","events_url":"https:\/\/api.github.com\/users\/stas00\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stas00\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1630088511000,"updated_at":1630088645000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Often, there is a need to prepare a dataset but not use it immediately, e.g. think tests suite setup, so it'd be really useful to be able to do:\r\n\r\n``` \r\nif not datasets.is_dataset_cached(ds): datasets.cache_dataset(ds)\r\n```\r\n\r\nThis can already be done with:\r\n```\r\nbuilder = load_dataset_builder(ds)\r\nif not os.path.idsir(builder.cache_dir):\r\n builder.download_and_prepare()\r\n```\r\n\r\nbut the current way is a way less intuitive and much harder to remember than the proposed API, IMHO. 
\r\n\r\nOne more way is to do:\r\n\r\n```\r\n_ = load_dataset(ds)\r\n```\r\nbut it wastes resources loading the dataset when it's not needed.\r\n\r\nthis has been discussed at https:\/\/huggingface.slack.com\/archives\/C01229B19EX\/p1630021912025800\r\n\r\nThank you!\r\n\r\n@lhoestq \r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2845\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2844","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2844\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2844\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2844\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2844","id":981382806,"node_id":"MDExOlB1bGxSZXF1ZXN0NzIxNDQzMjY2","number":2844,"title":"Fix: wikicorpus - fix keys","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The CI error is unrelated to this PR\r\n\r\n... 
merging !"],"created_at":1630079766000,"updated_at":1630937248000,"closed_at":1630937247000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2844","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2844","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2844.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2844.patch"},"body":"As mentioned in https:\/\/github.com\/huggingface\/datasets\/issues\/2552, there is a duplicate keys error in `wikicorpus`.\r\n\r\nI fixed that by taking into account the file index in the keys","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2844\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2843","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2843\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2843\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2843\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2843","id":981317775,"node_id":"MDExOlB1bGxSZXF1ZXN0NzIxMzkwODA5","number":2843,"title":"Fix extraction protocol inference from urls with params","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the windows error is just a CircleCI issue","It works, eg https:\/\/observablehq.com\/@huggingface\/datasets-preview-backend-client#{%22datasetId%22%3A%22discovery%22} and https:\/\/datasets-preview.huggingface.tech\/rows?dataset=discovery&config=discovery&split=train","Nice !"],"created_at":1630075257000,"updated_at":1630343509000,"closed_at":1630329121000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2843","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2843","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2843.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2843.patch"},"body":"Previously it was unable to infer the compression protocol for files at URLs like\r\n```\r\nhttps:\/\/foo.bar\/train.json.gz?dl=1\r\n```\r\nbecause of the query parameters.\r\n\r\nI fixed that, this should allow 10+ datasets to work in streaming mode:\r\n```\r\n 
\"discovery\",\r\n \"emotion\",\r\n \"grail_qa\",\r\n \"guardian_authorship\",\r\n \"pragmeval\",\r\n \"simple_questions_v2\",\r\n \"versae\/adobo\",\r\n \"w-nicole\/childes_data\",\r\n \"w-nicole\/childes_data_no_tags_\",\r\n \"w-nicole\/childes_data_with_tags\",\r\n \"w-nicole\/childes_data_with_tags_\"\r\n```\r\n\r\ncc @severo ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2843\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2842","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2842\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2842\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2842\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2842","id":980725899,"node_id":"MDU6SXNzdWU5ODA3MjU4OTk=","number":2842,"title":"always requiring the username in the dataset name when there is one","user":{"login":"stas00","id":10676103,"node_id":"MDQ6VXNlcjEwNjc2MTAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10676103?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stas00","html_url":"https:\/\/github.com\/stas00","followers_url":"https:\/\/api.github.com\/users\/stas00\/followers","following_url":"https:\/\/api.github.com\/users\/stas00\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stas00\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stas00\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stas00\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stas00\/orgs","repos_url":"https:\/\/api.github.com\/users\/stas00\/repos","events_url":"https:\/\/api.github.com\/users\/stas00\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stas00\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or 
request"}],"state":"open","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["From what I can understand, you want the saved arrow file directory to have username as well instead of just dataset name if it was downloaded with the user prefix?","I don't think the user cares of how this is done, but the 2nd command should fail, IMHO, as its dataset name is invalid:\r\n```\r\n# first run\r\npython -c \"from datasets import load_dataset; load_dataset('stas\/openwebtext-10k')\"\r\n# now run immediately\r\npython -c \"from datasets import load_dataset; load_dataset('openwebtext-10k')\"\r\n# the second command should fail, but it doesn't fail now.\r\n```\r\n\r\nMoreover, if someone were to create `openwebtext-10k` w\/o the prefix, they will now get the wrong dataset, if they previously downloaded `stas\/openwebtext-10k`.\r\n\r\nAnd if there are 2 users with the same dataset name `foo\/ds` and `bar\/ds` - currently this won't work to get the correct dataset.\r\n\r\nSo really there 3 unrelated issues hiding in the current behavior."],"created_at":1630020713000,"updated_at":1631899087000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Me and now another person have been bitten by the `datasets`'s non-strictness on requiring a dataset creator's username when it's due.\r\n\r\nSo both of us started with `stas\/openwebtext-10k`, somewhere along the lines lost `stas\/` and continued using `openwebtext-10k` and it all was good until we published the software and things broke, since there is no `openwebtext-10k`\r\n\r\nSo this feature request is asking to 
tighten the checking and not allow dataset loading if it was downloaded with the user prefix, but then attempted to be used w\/o it.\r\n\r\nThe same in code:\r\n\r\n```\r\n# first run\r\npython -c \"from datasets import load_dataset; load_dataset('stas\/openwebtext-10k')\"\r\n# now run immediately\r\npython -c \"from datasets import load_dataset; load_dataset('openwebtext-10k')\"\r\n# the second command should fail, but it doesn't fail now.\r\n```\r\n\r\nPlease let me know if I explained myself clearly.\r\n\r\nThank you!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2842\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2841","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2841\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2841\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2841\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2841","id":980497321,"node_id":"MDU6SXNzdWU5ODA0OTczMjE=","number":2841,"title":"Adding GLUECoS Hinglish and Spanglish code-switching bemchmark","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1630000059000,"updated_at":1630000059000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** GLUECoS\r\n- **Description:** a Microsoft Benchmark to evaluate code-switching for only two language pairs but a variety of tasks\r\n- **Paper:** https:\/\/aclanthology.org\/2020.acl-main.329\/\r\n- **Data:** https:\/\/github.com\/microsoft\/GLUECoS\r\n- **Motivation:** We currently only have [one other](https:\/\/huggingface.co\/datasets\/lince) dataset for code-switching\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2841\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2840","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2840\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2840\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2840\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2840","id":980489074,"node_id":"MDU6SXNzdWU5ODA0ODkwNzQ=","number":2840,"title":"How can I compute BLEU-4 score use `load_metric` ?","user":{"login":"Doragd","id":26213546,"node_id":"MDQ6VXNlcjI2MjEzNTQ2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26213546?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Doragd","html_url":"https:\/\/github.com\/Doragd","followers_url":"https:\/\/api.github.com\/users\/Doragd\/followers","following_url":"https:\/\/api.github.com\/users\/Doragd\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Doragd\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Doragd\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Doragd\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Doragd\/orgs","repos_url":"https:\/\/api.github.com\/users\/Doragd\/repos","events_url":"https:\/\/api.github.com\/users\/Doragd\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Doragd\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1629999397000,"updated_at":1630052004000,"closed_at":1630052004000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I have found the sacrebleu metric. 
But, I do not know the difference between it and BLEU-4.\r\nIf I want to compute BLEU-4 score, what can i do?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2840\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2839","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2839\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2839\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2839\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2839","id":980271715,"node_id":"MDU6SXNzdWU5ODAyNzE3MTU=","number":2839,"title":"OpenWebText: NonMatchingSplitsSizesError","user":{"login":"thomasw21","id":24695242,"node_id":"MDQ6VXNlcjI0Njk1MjQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24695242?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomasw21","html_url":"https:\/\/github.com\/thomasw21","followers_url":"https:\/\/api.github.com\/users\/thomasw21\/followers","following_url":"https:\/\/api.github.com\/users\/thomasw21\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomasw21\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomasw21\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomasw21\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomasw21\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomasw21\/repos","events_url":"https:\/\/api.github.com\/users\/thomasw21\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomasw21\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting, I'm updating the verifications metadata","I just regenerated the verifications metadata and noticed that nothing changed: the data file is fine (the checksum didn't change), and the number of examples is still 8013769. Not sure how you managed to get 7982430 examples.\r\n\r\nCan you try to delete your cache ( by default at `~\/.cache\/huggingface\/datasets`) and try again please ?\r\nAlso, on which platform are you (linux\/macos\/windows) ?","I'll try without deleting the whole cache (we have large datasets already stored). I was under the impression that `download_mode=\"force_redownload\"` would bypass cache.\r\nSorry plateform should be linux (Redhat version 8.1)","Hi @thomasw21 , are you still having this issue after clearing your cache ?","Sorry I haven't had time to work on this. I'll close and re-open if I can't figure out why I'm having this issue. 
Thanks for taking a look !"],"created_at":1629985826000,"updated_at":1632233560000,"closed_at":1632233383000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\nWhen downloading `openwebtext`, I'm getting:\r\n```\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=39769494896, num_examples=8013769, dataset_name='openwebtext'), 'recorded': SplitInfo(name='train', num_bytes=39611023912, num_examples=7982430, dataset_name='openwebtext')}]\r\n```\r\n\r\nI suspect that the file we download from has changed since the size doesn't look like to match with documentation\r\n\r\n`Downloading: 0%| | 0.00\/12.9G [00:00\r\n- `datasets` version: 1.10.2\r\n- Platform: linux (Redhat version 8.1)\r\n- Python version: 3.8\r\n- PyArrow version: 4.0.1\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2839\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2838","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2838\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2838\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2838\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2838","id":980067186,"node_id":"MDExOlB1bGxSZXF1ZXN0NzIwMzcxMDk5","number":2838,"title":"Add error_bad_chunk to the JSON loader","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1629972452000,"updated_at":1629972486000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2838","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2838","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2838.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2838.patch"},"body":"Add the `error_bad_chunk` parameter to the JSON loader.\r\n\r\nSetting `error_bad_chunk=False` allows to skip an unparsable chunk of JSON data without raising an error.\r\n\r\nAdditional note:\r\n\r\nIn case of an unparsable JSON chunk, the JSON loader no longer tries to load the full JSON (which could take a lot of time in streaming mode) to get the JSON fields that the user may have 
forgotten to pass. Ex : for squad-like data, the user has to pass `field=\"data\"` to tell the loader to get the list of examples from this field.\r\n\r\nTODO: update docs\r\n\r\ncc @lvwerra ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2838\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2837","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2837\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2837\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2837\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2837","id":979298297,"node_id":"MDU6SXNzdWU5NzkyOTgyOTc=","number":2837,"title":"prepare_module issue when loading from read-only fs","user":{"login":"Dref360","id":8976546,"node_id":"MDQ6VXNlcjg5NzY1NDY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8976546?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Dref360","html_url":"https:\/\/github.com\/Dref360","followers_url":"https:\/\/api.github.com\/users\/Dref360\/followers","following_url":"https:\/\/api.github.com\/users\/Dref360\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Dref360\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Dref360\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Dref360\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Dref360\/orgs","repos_url":"https:\/\/api.github.com\/users\/Dref360\/repos","events_url":"https:\/\/api.github.com\/users\/Dref360\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Dref360\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hello, I opened #2887 to fix this."],"created_at":1629904886000,"updated_at":1632155301000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\nWhen we use prepare_module from a readonly file system, we create a FileLock using the `local_path`.\r\nThis path is not necessarily writable.\r\n\r\n`lock_path = local_path + \".lock\"`\r\n\r\n\r\n## Steps to reproduce the bug\r\n\r\nRun `load_dataset` on a readonly python loader file.\r\n```python\r\nds = load_dataset(\r\n python_loader, data_files={\"train\": train_path, \"test\": test_path}\r\n )\r\n```\r\n\r\nwhere `python_loader` is a path to a file located in a readonly folder.\r\n\r\n## Expected results\r\nThis should work I think?\r\n\r\n## Actual results\r\n\r\n```python\r\n return load_dataset(\r\n File \"\/usr\/local\/lib\/python3.8\/dist-packages\/datasets\/load.py\", line 711, in load_dataset\r\n module_path, hash, resolved_file_path = prepare_module(\r\n File \"\/usr\/local\/lib\/python3.8\/dist-packages\/datasets\/load.py\", line 465, in prepare_module\r\n with FileLock(lock_path):\r\n File \"\/usr\/local\/lib\/python3.8\/dist-packages\/datasets\/utils\/filelock.py\", line 314, in __enter__\r\n self.acquire()\r\n File 
\"\/usr\/local\/lib\/python3.8\/dist-packages\/datasets\/utils\/filelock.py\", line 263, in acquire\r\n self._acquire()\r\n File \"\/usr\/local\/lib\/python3.8\/dist-packages\/datasets\/utils\/filelock.py\", line 378, in _acquire\r\n fd = os.open(self._lock_file, open_mode)\r\nOSError: [Errno 30] Read-only file system: 'YOUR_FILE.py.lock'\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.7.0\r\n- Platform: macOS-10.15.7-x86_64-i386-64bit\r\n- Python version: 3.8.8\r\n- PyArrow version: 3.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2837\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2836","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2836\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2836\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2836\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2836","id":979230142,"node_id":"MDExOlB1bGxSZXF1ZXN0NzE5NjY5MDUy","number":2836,"title":"Optimize Dataset.filter to only compute the indices to keep","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Maybe worth updating the docs here as well?","Yup, will do !"],"created_at":1629902482000,"updated_at":1631631113000,"closed_at":1631548221000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2836","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2836","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2836.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2836.patch"},"body":"Optimize `Dataset.filter` to only compute the indices of the rows to keep, instead of creating a new Arrow table with the rows to keep. 
Creating a new table was an issue because it could take a lot of disk space.\r\n\r\nThis will be useful to process audio datasets for example cc @patrickvonplaten ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2836\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2835","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2835\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2835\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2835\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2835","id":979209394,"node_id":"MDExOlB1bGxSZXF1ZXN0NzE5NjUxOTE4","number":2835,"title":"Update: timit_asr - make the dataset streamable","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1629901369000,"updated_at":1631020547000,"closed_at":1631020546000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2835","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2835","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2835.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2835.patch"},"body":"The TIMIT ASR dataset had two issues that was preventing it from being streamable:\r\n\r\n1. it was missing a call to `open` before `pd.read_csv`\r\n2. 
it was using `os.path.dirname` which is not supported for streaming\r\n\r\nI made the dataset streamable by using `open` to load the CSV, and by adding the support for `os.path.dirname` in dataset scripts to stream data\r\n\r\nYou can now do\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ntimit_asr = load_dataset(\"timit_asr\", streaming=True)\r\nprint(next(iter(timit_asr[\"train\"])))\r\n```\r\nprints:\r\n\r\n```json\r\n{\"file\": \"zip:\/\/data\/TRAIN\/DR4\/MMDM0\/SI681.WAV::https:\/\/data.deepai.org\/timit.zip\",\r\n\"phonetic_detail\": {\"start\": [0, 1960, 2466, 3480, 4000, 5960, 7480, 7880, 9400, 9960, 10680, 13480, 15680, 15880, 16920, 18297, 18882, 19480, 21723, 22516, 24040, 25190, 27080, 28160, 28560, 30120, 31832, 33240, 34640, 35968, 37720],\r\n\"utterance\": [\"h#\", \"w\", \"ix\", \"dcl\", \"s\", \"ah\", \"tcl\", \"ch\", \"ix\", \"n\", \"ae\", \"kcl\", \"t\", \"ix\", \"v\", \"r\", \"ix\", \"f\", \"y\", \"ux\", \"zh\", \"el\", \"bcl\", \"b\", \"iy\", \"y\", \"ux\", \"s\", \"f\", \"el\", \"h#\"],\r\n\"stop\": [1960, 2466, 3480, 4000, 5960, 7480, 7880, 9400, 9960, 10680, 13480, 15680, 15880, 16920, 18297, 18882, 19480, 21723, 22516, 24040, 25190, 27080, 28160, 28560, 30120, 31832, 33240, 34640, 35968, 37720, 39920]},\r\n\"sentence_type\": \"SI\", \"id\": \"SI681\",\r\n\"speaker_id\": \"MMDM0\",\r\n\"dialect_region\": \"DR4\",\r\n\"text\": \"Would such an act of refusal be useful?\",\r\n\"word_detail\": {\r\n \"start\": [1960, 4000, 9400, 10680, 15880, 18297, 27080, 30120],\r\n \"utterance\": [\"would\", \"such\", \"an\", \"act\", \"of\", \"refusal\", \"be\", \"useful\"],\r\n \"stop\": [4000, 9400, 10680, 15880, 18297, 27080, 30120, 37720]\r\n}}\r\n```\r\n\r\ncc @patrickvonplaten @vrindaprabhu","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2835\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2834","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2834\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2834\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2834\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2834","id":978309749,"node_id":"MDExOlB1bGxSZXF1ZXN0NzE4OTE5NjQ0","number":2834,"title":"Fix IndexError by ignoring empty 
RecordBatch","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1629824773000,"updated_at":1629825678000,"closed_at":1629825678000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2834","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2834","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2834.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2834.patch"},"body":"We need to ignore the empty record batches for the interpolation search to work correctly when querying arrow tables\r\n\r\nClose #2833\r\n\r\ncc @SaulLu ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2834\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2833","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2833\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2833\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2833\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2833","id":978296140,"node_id":"MDU6SXNzdWU5NzgyOTYxNDA=","number":2833,"title":"IndexError when accessing first element of a Dataset if first RecordBatch is 
empty","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1629823760000,"updated_at":1629825677000,"closed_at":1629825677000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"The computation of the offsets of the underlying Table of a Dataset has some issues if the first RecordBatch is empty.\r\n\r\n```python\r\nfrom datasets import Dataset\r\nimport pyarrow as pa\r\n\r\npa_table = pa.Table.from_pydict({\"a\": [1]})\r\npa_table2 = pa.Table.from_pydict({\"a\": []}, schema=pa_table.schema)\r\nds_table = pa.concat_tables([pa_table2, pa_table])\r\n\r\ndataset = 
Dataset(ds_table)\r\n\r\nprint([len(b) for b in dataset.data._batches])\r\n# [0, 1]\r\n\r\nprint(dataset.data._offsets)\r\n# [0 0 1] (should be [0, 1])\r\n\r\ndataset[0]\r\n```\r\nraises\r\n```python\r\n---------------------------------------------------------------------------\r\nIndexError Traceback (most recent call last)\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/table.py in _interpolation_search(arr, x)\r\n 90 else:\r\n 91 i, j = i, k\r\n---> 92 raise IndexError(f\"Invalid query '{x}' for size {arr[-1] if len(arr) else 'none'}.\")\r\n 93 \r\n 94 \r\n\r\nIndexError: Invalid query '0' for size 1.\r\n```\r\n\r\nThis can be fixed by ignoring empty batches when computing `table._batches` and `table._offsets`\r\n\r\ncc @SaulLu ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2833\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2832","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2832\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2832\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2832\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2832","id":978012800,"node_id":"MDU6SXNzdWU5NzgwMTI4MDA=","number":2832,"title":"Logging levels not taken into account","user":{"login":"LysandreJik","id":30755778,"node_id":"MDQ6VXNlcjMwNzU1Nzc4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/30755778?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/LysandreJik","html_url":"https:\/\/github.com\/LysandreJik","followers_url":"https:\/\/api.github.com\/users\/LysandreJik\/followers","following_url":"https:\/\/api.github.com\/users\/LysandreJik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/LysandreJik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/LysandreJik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/LysandreJik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/LysandreJik\/orgs","repos_url":"https:\/\/api.github.com\/users\/LysandreJik\/repos","events_url":"https:\/\/api.github.com\/users\/LysandreJik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/LysandreJik\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1629805841000,"updated_at":1629805841000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\nThe `logging` module isn't working as intended relative to the levels to set.\r\n\r\n## Steps to reproduce the bug\r\n\r\n```python\r\nfrom datasets import logging\r\n\r\nlogging.set_verbosity_debug()\r\nlogger = logging.get_logger()\r\n\r\nlogger.error(\"ERROR\")\r\nlogger.warning(\"WARNING\")\r\nlogger.info(\"INFO\")\r\nlogger.debug(\"DEBUG\"\r\n```\r\n\r\n## Expected results\r\n\r\nI expect all logs to be output since I'm putting a `debug` level.\r\n\r\n## Actual results\r\n\r\nOnly the two first logs 
are output.\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.11.0\r\n- Platform: Linux-5.13.9-arch1-1-x86_64-with-glibc2.33\r\n- Python version: 3.9.6\r\n- PyArrow version: 5.0.0\r\n\r\n## To go further\r\n\r\nThis logging issue appears in `datasets` but not in `transformers`. It happens because there is no handler defined for the logger. When no handler is defined, the `logging` library will output a one-off error to stderr, using a `StderrHandler` with level `WARNING`.\r\n\r\n`transformers` sets a default `StreamHandler` [here](https:\/\/github.com\/huggingface\/transformers\/blob\/5c6eca71a983bae2589eed01e5c04fcf88ba5690\/src\/transformers\/utils\/logging.py#L86)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2832\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2831","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2831\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2831\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2831\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2831","id":977864600,"node_id":"MDU6SXNzdWU5Nzc4NjQ2MDA=","number":2831,"title":"ArrowInvalid when mapping dataset with missing values","user":{"login":"uniquefine","id":12694730,"node_id":"MDQ6VXNlcjEyNjk0NzMw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12694730?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/uniquefine","html_url":"https:\/\/github.com\/uniquefine","followers_url":"https:\/\/api.github.com\/users\/uniquefine\/followers","following_url":"https:\/\/api.github.com\/users\/uniquefine\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/uniquefine\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/uniquefine\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/uniquefine\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/uniquefine\/orgs","repos_url":"https:\/\/api.github.com\/users\/uniquefine\/repos","events_url":"https:\/\/api.github.com\/users\/uniquefine\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/uniquefine\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! It fails because of the feature type inference.\r\n\r\nBecause the first 1000 examples all have null values in the \"match\" field, then it infers that the type for this field is `null` type before writing the data on disk. 
But as soon as it tries to map an example with a non-null \"match\" field, then it fails.\r\n\r\nTo fix that you can either:\r\n- increase the writer_batch_size to >2000 (default is 1000) so that some non-null values will be in the first batch written to disk\r\n```python\r\ndatasets = datasets.map(lambda e: {'labels': e['match']}, remove_columns=['id'], writer_batch_size=2000)\r\n```\r\n- OR force the feature type with:\r\n```python\r\nfrom datasets import Features, Value\r\n\r\nfeatures = Features({\r\n 'conflict': Value('int64'),\r\n 'date': Value('string'),\r\n 'headline': Value('string'),\r\n 'match': Value('float64'),\r\n 'label': Value('float64')\r\n})\r\ndatasets = datasets.map(lambda e: {'labels': e['match']}, remove_columns=['id'], features=features)\r\n```"],"created_at":1629795042000,"updated_at":1630419334000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nI encountered an `ArrowInvalid` when mapping dataset with missing values. \r\nHere are the files for a minimal example. The exception is only thrown when the first line in the csv has a missing value (if you move the last line to the top it isn't thrown).\r\n[data_small.csv](https:\/\/github.com\/huggingface\/datasets\/files\/7037838\/data_small.csv)\r\n[data.csv](https:\/\/github.com\/huggingface\/datasets\/files\/7037842\/data.csv)\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndatasets = load_dataset(\"csv\", data_files=['data_small.csv'])\r\n\r\ndatasets = datasets.map(lambda e: {'labels': e['match']},\r\n remove_columns=['id'])\r\n```\r\n\r\n## Expected results\r\nNo error\r\n\r\n## Actual results\r\n```\r\nFile \"pyarrow\/error.pxi\", line 84, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Invalid null value\r\n```\r\n\r\n## Environment info\r\n- `datasets` version: 1.5.0\r\n- Platform: Linux-5.11.0-25-generic-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- PyTorch version (GPU?): 1.7.1+cpu (False)\r\n- Tensorflow version (GPU?): 2.4.1 (False)\r\n- Using GPU in script?: no\r\n- Using distributed or parallel set-up in script?: no\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2831\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2830","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2830\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2830\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2830\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2830","id":977563947,"node_id":"MDExOlB1bGxSZXF1ZXN0NzE4MjkyMTM2","number":2830,"title":"Add imagefolder 
dataset","user":{"login":"nateraw","id":32437151,"node_id":"MDQ6VXNlcjMyNDM3MTUx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32437151?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nateraw","html_url":"https:\/\/github.com\/nateraw","followers_url":"https:\/\/api.github.com\/users\/nateraw\/followers","following_url":"https:\/\/api.github.com\/users\/nateraw\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nateraw\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nateraw\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nateraw\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nateraw\/orgs","repos_url":"https:\/\/api.github.com\/users\/nateraw\/repos","events_url":"https:\/\/api.github.com\/users\/nateraw\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nateraw\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq @albertvillanova it would be super cool if we could get the Image Classification task to work with this. I'm not sure how to have the dataset find the unique label names _after_ the dataset has been loaded. Is that even possible? \r\n\r\nMy hacky community version [here](https:\/\/huggingface.co\/datasets\/nateraw\/image-folder) does this, but it wouldn't pass the test suite here. Any thoughts?","Hi ! Dataset builders that require some `data_files` like `csv` or `json` are handled differently that actual dataset scripts.\r\n\r\nIn particular:\r\n- they are placed directly in the `src` folder of the lib so that you can use it without internet connection (more exactly in `src\/datasets\/packaged_modules\/.py`). So feel free to move the dataset python file there. You also need to register it in `src\/datasets\/packaked_modules.__init__.py`\r\n- they are handled a bit differently in our test suite (see the `PackagedDatasetTest` class in `test_dataset_common.py`). To be able to test the builder with your dummy data, you just need to modify `get_packaged_dataset_dummy_data_files` in `test_dataset_common.py` to return the right `data_files` for your builder. The dummy data can stay in `datasets\/image_folder\/dummy`\r\n\r\nLet me know if you have questions or if I can help !","Hey @lhoestq , I actually already did both of those things. I'm trying to get the `image-classification` task to work now. \r\n\r\nFor example...When you run `ds = load_dataset('imagefolder', data_files='my_files')`, with a directory called `.\/my_files` that looks like this:\r\n\r\n```\r\nmy_files\r\n----| Cat\r\n--------| image1.jpg\r\n--------| ...\r\n----| Dog\r\n--------| image1.jpg\r\n--------| ...\r\n```\r\n\r\n...We should set the dataset's `labels` feature to `datasets.features.ClassLabel(names=['cat', 'dog'])` dynamically with class names we find by getting a list of directories in `my_files` (via `data_files`). Otherwise the `datasets.tasks.ImageClassification` task will break, as the `labels` feature is not a `ClassLabel`.\r\n\r\nI couldn't figure out how to access the `data_files` in the builder's `_info` function in a way that would pass in the test suite. ","Nice ! Then maybe you can use `self.config.data_files` in `_info()` ?\r\nWhat error are you getting in the test suite ?\r\n\r\nAlso note that `data_files` was first developed to accept paths to actual files, not directories. 
In particular, it fetches the metadata of all the data_files to get a unique hash for the caching mechanism. So we may need to do a few changes first."],"created_at":1629761646000,"updated_at":1631005601000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2830","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2830","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2830.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2830.patch"},"body":"A generic imagefolder dataset inspired by `torchvision.datasets.ImageFolder`. \r\n\r\nResolves #2508 \r\n\r\n---\r\n\r\nExample Usage:\r\n\r\n[![Open In Colab](https:\/\/colab.research.google.com\/assets\/colab-badge.svg)](https:\/\/colab.research.google.com\/gist\/nateraw\/954fa8cba4ff806f6147a782fa9efd1a\/imagefolder-official-example.ipynb)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2830\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2829","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2829\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2829\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2829\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2829","id":977233360,"node_id":"MDU6SXNzdWU5NzcyMzMzNjA=","number":2829,"title":"Optimize streaming from TAR archives","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"},{"id":3287858981,"node_id":"MDU6TGFiZWwzMjg3ODU4OTgx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/streaming","name":"streaming","color":"fef2c0","default":false,"description":""}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1629737800000,"updated_at":1631180268000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"Hi ! As you know TAR has some constraints for data streaming. While it is optimized for buffering, the files in the TAR archive **need to be streamed in order**. 
It means that we can't choose which file to stream from, and this notation is to be avoided for TAR archives:\r\n```\r\ntar:\/\/books_large_p1.txt::https:\/\/storage.googleapis.com\/huggingface-nlp\/datasets\/bookcorpus\/bookcorpus.tar.bz2\r\n```\r\nInstead, I suggest we implement `iter_archive` for the `StreamingDownloadManager`.\r\nThe regular `DownloadManager` already has it.\r\n\r\nThen we will have to update the json\/txt\/csv\/etc. loaders to make them use `iter_archive` on TAR archives.\r\n\r\nThat's also what Tensorflow Datasets is doing in this case.\r\nSee this [dataset](https:\/\/github.com\/tensorflow\/datasets\/blob\/93895059c80a9e05805e8f32a2e310f66a23fc98\/tensorflow_datasets\/image_classification\/flowers.py) for example.\r\n\r\nTherefore instead of doing\r\n```python\r\nuncompressed = dl_manager.extract(tar_archive)\r\nfilename = \"books_large_p1.txt\"\r\nwith open(os.path.join(uncompressed, filename)) as f:\r\n for line in f:\r\n ...\r\n```\r\nwe'll do\r\n```python\r\nfor filename, f in dl_manager.iter_archive(tar_archive):\r\n for line in f:\r\n ...\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2829\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2828","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2828\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2828\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2828\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2828","id":977181517,"node_id":"MDExOlB1bGxSZXF1ZXN0NzE3OTYwODg3","number":2828,"title":"Add code-mixed Kannada Hope speech dataset","user":{"login":"adeepH","id":46108405,"node_id":"MDQ6VXNlcjQ2MTA4NDA1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/46108405?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/adeepH","html_url":"https:\/\/github.com\/adeepH","followers_url":"https:\/\/api.github.com\/users\/adeepH\/followers","following_url":"https:\/\/api.github.com\/users\/adeepH\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/adeepH\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/adeepH\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/adeepH\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/adeepH\/orgs","repos_url":"https:\/\/api.github.com\/users\/adeepH\/repos","events_url":"https:\/\/api.github.com\/users\/adeepH\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/adeepH\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1629734109000,"updated_at":1632153950000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2828","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2828","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2828.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2828.patch"},"body":"## Adding a Dataset\r\n- **Name:** *KanHope*\r\n- **Description:** *A code-mixed English-Kannada dataset for Hope speech detection*\r\n- **Paper:** 
*https:\/\/arxiv.org\/abs\/2108.04616* \r\n- **Data:** *https:\/\/github.com\/adeepH\/KanHope\/tree\/main\/dataset*\r\n- **Motivation:** *The dataset is amongst the very few resources available for code-mixed low-resourced Dravidian languages of India*","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2828\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2827","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2827\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2827\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2827\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2827","id":976976552,"node_id":"MDExOlB1bGxSZXF1ZXN0NzE3Nzg3MjEw","number":2827,"title":"add a text classification dataset","user":{"login":"adeepH","id":46108405,"node_id":"MDQ6VXNlcjQ2MTA4NDA1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/46108405?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/adeepH","html_url":"https:\/\/github.com\/adeepH","followers_url":"https:\/\/api.github.com\/users\/adeepH\/followers","following_url":"https:\/\/api.github.com\/users\/adeepH\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/adeepH\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/adeepH\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/adeepH\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/adeepH\/orgs","repos_url":"https:\/\/api.github.com\/users\/adeepH\/repos","events_url":"https:\/\/api.github.com\/users\/adeepH\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/adeepH\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1629721481000,"updated_at":1629733878000,"closed_at":1629733878000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2827","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2827","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2827.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2827.patch"},"body":null,"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2827\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2826","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2826\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2826\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2826\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2826","id":976974254,"node_id":"MDU6SXNzdWU5NzY5NzQyNTQ=","number":2826,"title":"Add a Text Classification dataset: 
KanHope","user":{"login":"adeepH","id":46108405,"node_id":"MDQ6VXNlcjQ2MTA4NDA1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/46108405?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/adeepH","html_url":"https:\/\/github.com\/adeepH","followers_url":"https:\/\/api.github.com\/users\/adeepH\/followers","following_url":"https:\/\/api.github.com\/users\/adeepH\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/adeepH\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/adeepH\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/adeepH\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/adeepH\/orgs","repos_url":"https:\/\/api.github.com\/users\/adeepH\/repos","events_url":"https:\/\/api.github.com\/users\/adeepH\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/adeepH\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! In your script it looks like you're trying to load the dataset `bn_hate_speech,`, not KanHope.\r\n\r\nMoreover the error `KeyError: ' '` means that you have a feature of type ClassLabel, but for a certain example of the dataset, it looks like the label is empty (it's just a string with a space). Can you make sure that the data don't have missing labels, and that your dataset script parses the labels correctly ?"],"created_at":1629721318000,"updated_at":1630416346000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** *KanHope*\r\n- **Description:** *A code-mixed English-Kannada dataset for Hope speech detection*\r\n- **Paper:** *https:\/\/arxiv.org\/abs\/2108.04616* (I am the author of the paper}\r\n- **Author:** *[AdeepH](https:\/\/github.com\/adeepH)*\r\n- **Data:** *https:\/\/github.com\/adeepH\/KanHope\/tree\/main\/dataset*\r\n- **Motivation:** *The dataset is amongst the very few resources available for code-mixed Dravidian languages*\r\n\r\n- I tried following the steps as per the instructions. However, could not resolve an error. 
Any help would be appreciated.\r\n\r\n- The dataset card and the scripts for the dataset *https:\/\/github.com\/adeepH\/datasets\/tree\/multilingual-hope-speech\/datasets\/mhs_eval*\r\n\r\n```\r\nUsing custom data configuration default\r\nDownloading and preparing dataset bn_hate_speech\/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/root\/.cache\/huggingface\/datasets\/bn_hate_speech\/default\/0.0.0\/5f417ddc89777278abd29988f909f39495f0ec802090f7d8fa63b5bffb121762...\r\n---------------------------------------------------------------------------\r\nKeyError Traceback (most recent call last)\r\n in ()\r\n 1 from datasets import load_dataset\r\n 2 \r\n----> 3 data = load_dataset('\/content\/bn')\r\n\r\n9 frames\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)\r\n 850 ignore_verifications=ignore_verifications,\r\n 851 try_from_hf_gcs=try_from_hf_gcs,\r\n--> 852 use_auth_token=use_auth_token,\r\n 853 )\r\n 854 \r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)\r\n 614 if not downloaded_from_gcs:\r\n 615 self._download_and_prepare(\r\n--> 616 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 617 )\r\n 618 # Sync info\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 691 try:\r\n 692 # Prepare split will record examples associated to the split\r\n--> 693 self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 694 except OSError as e:\r\n 695 raise OSError(\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/builder.py in _prepare_split(self, split_generator)\r\n 1107 disable=bool(logging.get_verbosity() == logging.NOTSET),\r\n 1108 ):\r\n-> 1109 example = self.info.features.encode_example(record)\r\n 1110 writer.write(example, key)\r\n 1111 finally:\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/features.py in encode_example(self, example)\r\n 1015 \"\"\"\r\n 1016 example = cast_to_python_objects(example)\r\n-> 1017 return encode_nested_example(self, example)\r\n 1018 \r\n 1019 def encode_batch(self, batch):\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/features.py in encode_nested_example(schema, obj)\r\n 863 if isinstance(schema, dict):\r\n 864 return {\r\n--> 865 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)\r\n 866 }\r\n 867 elif isinstance(schema, (list, tuple)):\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/features.py in (.0)\r\n 863 if isinstance(schema, dict):\r\n 864 return {\r\n--> 865 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)\r\n 866 }\r\n 867 elif isinstance(schema, (list, tuple)):\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/features.py in encode_nested_example(schema, obj)\r\n 890 # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks\r\n 891 elif isinstance(schema, (ClassLabel, TranslationVariableLanguages, 
Value, _ArrayXD)):\r\n--> 892 return schema.encode_example(obj)\r\n 893 # Other object should be directly convertible to a native Arrow type (like Translation and Translation)\r\n 894 return obj\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/features.py in encode_example(self, example_data)\r\n 665 # If a string is given, convert to associated integer\r\n 666 if isinstance(example_data, str):\r\n--> 667 example_data = self.str2int(example_data)\r\n 668 \r\n 669 # Allowing -1 to mean no label.\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/features.py in str2int(self, values)\r\n 623 if value not in self._str2int:\r\n 624 value = str(value).strip()\r\n--> 625 output.append(self._str2int[str(value)])\r\n 626 else:\r\n 627 # No names provided, try to integerize\r\n\r\nKeyError: ' '\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2826\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2825","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2825\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2825\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2825\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2825","id":976584926,"node_id":"MDU6SXNzdWU5NzY1ODQ5MjY=","number":2825,"title":"The datasets.map function does not load cached dataset after moving python script","user":{"login":"hobbitlzy","id":35392624,"node_id":"MDQ6VXNlcjM1MzkyNjI0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35392624?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hobbitlzy","html_url":"https:\/\/github.com\/hobbitlzy","followers_url":"https:\/\/api.github.com\/users\/hobbitlzy\/followers","following_url":"https:\/\/api.github.com\/users\/hobbitlzy\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hobbitlzy\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hobbitlzy\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hobbitlzy\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hobbitlzy\/orgs","repos_url":"https:\/\/api.github.com\/users\/hobbitlzy\/repos","events_url":"https:\/\/api.github.com\/users\/hobbitlzy\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hobbitlzy\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["This also happened to me on COLAB.\r\nDetails:\r\nI ran the `run_mlm.py` in two different notebooks. \r\nIn the first notebook, I do tokenization since I can get 4 CPU cores without any GPUs, and save the cache into a folder which I copy to drive.\r\nIn the second notebook, I copy the cache folder from drive and re-run the run_mlm.py script (this time I uncomment the trainer code which happens after the tokenization)\r\n\r\nNote: I didn't change anything in the arguments, not even the preprocessing_num_workers\r\n ","Thanks for reporting ! This is indeed a bug, I'm looking into it","#2854 fixed the issue :)\r\n\r\nWe'll do a new release of `datasets` soon to make the fix available.\r\nIn the meantime, feel free to try it out by installing `datasets` from source\r\n\r\nIf you have other issues or any question, feel free to re-open the issue :)"],"created_at":1629689017000,"updated_at":1630415681000,"closed_at":1630415616000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nThe datasets.map function caches the processed data to a certain directory. When the map function is called another time with totally the same parameters, the cached data are supposed to be reloaded instead of re-processing. However, it doesn't reuse cached data sometimes. 
I use the common data processing in different tasks, the datasets are processed again, the only difference is that I run them in different files.\r\n\r\n## Steps to reproduce the bug\r\nJust run the following codes in different .py files.\r\n```python\r\nif __name__ == '__main__':\r\n from datasets import load_dataset\r\n from transformers import AutoTokenizer\r\n raw_datasets = load_dataset(\"wikitext\", \"wikitext-2-raw-v1\")\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(\"bert-base-uncased\")\r\n\r\n\r\n def tokenize_function(examples):\r\n return tokenizer(examples[\"text\"], padding=\"max_length\", truncation=True)\r\n\r\n\r\n tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)\r\n```\r\n\r\n## Expected results\r\nThe map function should reload data in the second or any later runs.\r\n\r\n## Actual results\r\nThe processing happens in each run.\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.8.0\r\n- Platform: linux\r\n- Python version: 3.7.6\r\n- PyArrow version: 3.0.0\r\n\r\nThis is the first time I report a bug. If there is any problem or confusing description, please let me know \ud83d\ude04.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2825\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2824","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2824\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2824\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2824\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2824","id":976394721,"node_id":"MDExOlB1bGxSZXF1ZXN0NzE3MzIyMzY5","number":2824,"title":"Fix defaults in cache_dir docstring in load.py","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1629643717000,"updated_at":1629984212000,"closed_at":1629978916000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2824","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2824","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2824.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2824.patch"},"body":"Fix defaults in the `cache_dir` 
docstring.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2824\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2823","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2823\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2823\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2823\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2823","id":976135355,"node_id":"MDU6SXNzdWU5NzYxMzUzNTU=","number":2823,"title":"HF_DATASETS_CACHE variable in Windows","user":{"login":"rp2839","id":8453798,"node_id":"MDQ6VXNlcjg0NTM3OTg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8453798?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rp2839","html_url":"https:\/\/github.com\/rp2839","followers_url":"https:\/\/api.github.com\/users\/rp2839\/followers","following_url":"https:\/\/api.github.com\/users\/rp2839\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rp2839\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rp2839\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rp2839\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rp2839\/orgs","repos_url":"https:\/\/api.github.com\/users\/rp2839\/repos","events_url":"https:\/\/api.github.com\/users\/rp2839\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rp2839\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Agh - I'm a muppet. No quote marks are needed.\r\nset HF_DATASETS_CACHE = C:\\Datasets\r\nworks as intended."],"created_at":1629551864000,"updated_at":1629552011000,"closed_at":1629552011000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I can't seem to use a custom Cache directory in Windows. 
I have tried:\r\n\r\nset HF_DATASETS_CACHE = \"C:\\Datasets\"\r\nset HF_DATASETS_CACHE = \"C:\/Datasets\"\r\nset HF_DATASETS_CACHE = \"C:\\\\Datasets\"\r\nset HF_DATASETS_CACHE = \"r'C:\\Datasets'\"\r\nset HF_DATASETS_CACHE = \"\\Datasets\"\r\nset HF_DATASETS_CACHE = \"\/Datasets\"\r\n\r\nIn each instance I get the \"[WinError 123] The filename, directory name, or volume label syntax is incorrect\" error when attempting to load a dataset","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2823\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2822","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2822\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2822\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2822\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2822","id":975744463,"node_id":"MDExOlB1bGxSZXF1ZXN0NzE2ODUxMTAy","number":2822,"title":"Add url prefix convention for many compression formats","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for the feedback :) I will also complete the documentation to explain this convention","I just added some documentation about how streaming works with chained URLs.\r\n\r\nI will also add some docs about how to use chained URLs directly in `load_dataset` in #2662, since #2662 does change the documentation already and to avoid having to resolve conflicts.","Merging this one now, next step is resolve the conflicts in #2662 and update the docs for URL chaining :)\r\n\r\nThere is also the glob feature of zip files that I need to add, to be able to do this for example:\r\n```python\r\nload_dataset(\"json\", data_files=\"zip:\/\/*::https:\/\/foo.bar\/archive.zip\")\r\n```"],"created_at":1629475883000,"updated_at":1629734356000,"closed_at":1629734354000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2822","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2822","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2822.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2822.patch"},"body":"## Intro\r\n\r\nWhen doing dataset streaming, the uncompression of compressed files is done on the 
fly using `fsspec`.\r\n\r\nIn particular, the download manager method `download_and_extract` doesn't return a path to the local download and extracted file, but instead a chained URL so that the uncompression can be done when the file is opened. A few examples of chained URLS:\r\n- `gz:\/\/file.txt::https:\/\/foo.bar\/file.txt.gz`\r\n- `bz2:\/\/file.txt::https:\/\/foo.bar\/file.txt.bz2`\r\n- `zip:\/\/::https:\/\/foo.bar\/archive.zip`\r\n- `tar:\/\/::https:\/\/foo.bar\/archive.tar.gz` (the TAR uncompression includes gz, bz2 etc. uncompression in `fsspec`)\r\n\r\nThis syntax is highly inspired by the `fsspec` URL chaining syntax from https:\/\/filesystem-spec.readthedocs.io\/en\/latest\/features.html#url-chaining\r\n\r\nThis url prefixing allows `open` to know what kind of uncompression to do in a dataset script when doing\r\n```python\r\ndef _generate_examples(self, urlpath):\r\n with open(urlpath) as f:\r\n ....\r\n```\r\n\r\n## What it changes\r\n\r\nThis changes the previous behavior from https:\/\/github.com\/huggingface\/datasets\/pull\/2786 , in which `open` was trying to infer the compression automatically. Infering the compression made it impossible to know whether the user wanted `open` to return compressed data (as the default behavior of the buitin open), or the uncompressed data. By adding uncompression prefixes to the URL, `open` know directly if it has to uncompress or not, and also which protocol to use.\r\n\r\n## Additional notes\r\n\r\nThis PR should close https:\/\/github.com\/huggingface\/datasets\/issues\/2813\r\n\r\nIt should also close this PR https:\/\/github.com\/huggingface\/datasets\/pull\/2811 since the oscar dataset script won't try to uncompress twice anymore\r\n\r\nNote that I had to temporarily remove the support for passing tar and zip files to `data_files` for streaming to make it work, since it makes it ambiguous whether a zip file passed as `data_files` should be uncompressed or not. 
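For reference, a short sketch of how the underlying `fsspec` chaining and compression options look when used directly; the `example.com` URLs are placeholders, and the `gz://`/`bz2://` prefixes described above are a `datasets`-side convention layered on top of this:

```python
import fsspec

# zip:// is a standard fsspec protocol, chained onto a remote HTTP file with "::"
# (requires the fsspec HTTP filesystem to be installed).
with fsspec.open("zip://inner/file.txt::https://example.com/archive.zip", "rt") as f:
    first_line = f.readline()

# gzip decompression is requested through the compression argument rather than
# a protocol prefix.
with fsspec.open("https://example.com/file.txt.gz", "rt", compression="gzip") as f:
    text = f.read()
```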
IMO we can make it work again by changing the syntax to make the glob explicit:\r\n```python\r\nload_dataset(\"json\", data_files=\"zip:\/\/*.jsonl::https:\/\/foo.bar\/archive.zip\")\r\n```\r\nThis is the exact same convention as fsspec and it removes all ambiguities\r\n\r\ncc @albertvillanova @lewtun ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2822\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2821","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2821\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2821\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2821\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2821","id":975556032,"node_id":"MDU6SXNzdWU5NzU1NTYwMzI=","number":2821,"title":"Cannot load linnaeus dataset","user":{"login":"NielsRogge","id":48327001,"node_id":"MDQ6VXNlcjQ4MzI3MDAx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/48327001?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/NielsRogge","html_url":"https:\/\/github.com\/NielsRogge","followers_url":"https:\/\/api.github.com\/users\/NielsRogge\/followers","following_url":"https:\/\/api.github.com\/users\/NielsRogge\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/NielsRogge\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/NielsRogge\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/NielsRogge\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/NielsRogge\/orgs","repos_url":"https:\/\/api.github.com\/users\/NielsRogge\/repos","events_url":"https:\/\/api.github.com\/users\/NielsRogge\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/NielsRogge\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting ! #2852 fixed this error\r\n\r\nWe'll do a new release of `datasets` soon :)"],"created_at":1629461715000,"updated_at":1630415582000,"closed_at":1630415529000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nThe [linnaeus](https:\/\/huggingface.co\/datasets\/linnaeus) dataset cannot be loaded. 
To reproduce:\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndatasets = load_dataset(\"linnaeus\")\r\n```\r\nThis results in:\r\n```\r\nDownloading and preparing dataset linnaeus\/linnaeus (download: 17.36 MiB, generated: 8.74 MiB, post-processed: Unknown size, total: 26.10 MiB) to \/root\/.cache\/huggingface\/datasets\/linnaeus\/linnaeus\/1.0.0\/2ff05dbc256108233262f596e09e322dbc3db067202de14286913607cd9cb704...\r\n---------------------------------------------------------------------------\r\nConnectionError Traceback (most recent call last)\r\n in ()\r\n 1 from datasets import load_dataset\r\n 2 \r\n----> 3 datasets = load_dataset(\"linnaeus\")\r\n\r\n11 frames\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/utils\/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)\r\n 603 raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\n 604 _raise_if_offline_mode_is_enabled(f\"Tried to reach {url}\")\r\n--> 605 raise ConnectionError(\"Couldn't reach {}\".format(url))\r\n 606 \r\n 607 # Try a second time\r\n\r\nConnectionError: Couldn't reach https:\/\/drive.google.com\/u\/0\/uc?id=1OletxmPYNkz2ltOr9pyT0b0iBtUWxslh&export=download\/\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2821\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2820","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2820\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2820\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2820\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2820","id":975210712,"node_id":"MDU6SXNzdWU5NzUyMTA3MTI=","number":2820,"title":"Downloading \u201creddit\u201d dataset keeps timing out.","user":{"login":"smeyerhot","id":43877130,"node_id":"MDQ6VXNlcjQzODc3MTMw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/43877130?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/smeyerhot","html_url":"https:\/\/github.com\/smeyerhot","followers_url":"https:\/\/api.github.com\/users\/smeyerhot\/followers","following_url":"https:\/\/api.github.com\/users\/smeyerhot\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/smeyerhot\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/smeyerhot\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/smeyerhot\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/smeyerhot\/orgs","repos_url":"https:\/\/api.github.com\/users\/smeyerhot\/repos","events_url":"https:\/\/api.github.com\/users\/smeyerhot\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/smeyerhot\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["```\r\nUsing custom data configuration default\r\nDownloading and preparing dataset reddit\/default (download: 2.93 GiB, generated: 17.64 GiB, 
post-processed: Unknown size, total: 20.57 GiB) to \/Volumes\/My Passport for Mac\/og-chat-data\/reddit\/default\/1.0.0\/98ba5abea674d3178f7588aa6518a5510dc0c6fa8176d9653a3546d5afcb3969...\r\nDownloading: 13%\r\n403M\/3.14G [44:39<2:27:09, 310kB\/s]\r\n---------------------------------------------------------------------------\r\ntimeout Traceback (most recent call last)\r\n\/usr\/local\/anaconda3\/envs\/og-data-env\/lib\/python3.9\/site-packages\/urllib3\/response.py in _error_catcher(self)\r\n 437 try:\r\n--> 438 yield\r\n 439 \r\n\r\n\/usr\/local\/anaconda3\/envs\/og-data-env\/lib\/python3.9\/site-packages\/urllib3\/response.py in read(self, amt, decode_content, cache_content)\r\n 518 cache_content = False\r\n--> 519 data = self._fp.read(amt) if not fp_closed else b\"\"\r\n 520 if (\r\n\r\n\/usr\/local\/anaconda3\/envs\/og-data-env\/lib\/python3.9\/http\/client.py in read(self, amt)\r\n 458 b = bytearray(amt)\r\n--> 459 n = self.readinto(b)\r\n 460 return memoryview(b)[:n].tobytes()\r\n\r\n\/usr\/local\/anaconda3\/envs\/og-data-env\/lib\/python3.9\/http\/client.py in readinto(self, b)\r\n 502 # (for example, reading in 1k chunks)\r\n--> 503 n = self.fp.readinto(b)\r\n 504 if not n and b:\r\n\r\n\/usr\/local\/anaconda3\/envs\/og-data-env\/lib\/python3.9\/socket.py in readinto(self, b)\r\n 703 try:\r\n--> 704 return self._sock.recv_into(b)\r\n 705 except timeout:\r\n\r\n\/usr\/local\/anaconda3\/envs\/og-data-env\/lib\/python3.9\/ssl.py in recv_into(self, buffer, nbytes, flags)\r\n 1240 self.__class__)\r\n-> 1241 return self.read(nbytes, buffer)\r\n 1242 else:\r\n\r\n\/usr\/local\/anaconda3\/envs\/og-data-env\/lib\/python3.9\/ssl.py in read(self, len, buffer)\r\n 1098 if buffer is not None:\r\n-> 1099 return self._sslobj.read(len, buffer)\r\n 1100 else:\r\n\r\ntimeout: The read operation timed out\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nReadTimeoutError Traceback (most recent call last)\r\n\/usr\/local\/anaconda3\/envs\/og-data-env\/lib\/python3.9\/site-packages\/requests\/models.py in generate()\r\n 757 try:\r\n--> 758 for chunk in self.raw.stream(chunk_size, decode_content=True):\r\n 759 yield chunk\r\n\r\n\/usr\/local\/anaconda3\/envs\/og-data-env\/lib\/python3.9\/site-packages\/urllib3\/response.py in stream(self, amt, decode_content)\r\n 575 while not is_fp_closed(self._fp):\r\n--> 576 data = self.read(amt=amt, decode_content=decode_content)\r\n 577 \r\n\r\n\/usr\/local\/anaconda3\/envs\/og-data-env\/lib\/python3.9\/site-packages\/urllib3\/response.py in read(self, amt, decode_content, cache_content)\r\n 540 # Content-Length are caught.\r\n--> 541 raise IncompleteRead(self._fp_bytes_read, self.length_remaining)\r\n 542 \r\n\r\n\/usr\/local\/anaconda3\/envs\/og-data-env\/lib\/python3.9\/contextlib.py in __exit__(self, type, value, traceback)\r\n 134 try:\r\n--> 135 self.gen.throw(type, value, traceback)\r\n 136 except StopIteration as exc:\r\n\r\n\/usr\/local\/anaconda3\/envs\/og-data-env\/lib\/python3.9\/site-packages\/urllib3\/response.py in _error_catcher(self)\r\n 442 # there is yet no clean way to get at it from this context.\r\n--> 443 raise ReadTimeoutError(self._pool, None, \"Read timed out.\")\r\n 444 \r\n\r\nReadTimeoutError: HTTPSConnectionPool(host='zenodo.org', port=443): Read timed out.\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nConnectionError Traceback (most recent call last)\r\n\/var\/folders\/3f\/md0t9sgj6rz8xy01fskttqdc0000gn\/T\/ipykernel_89016\/1133441872.py in \r\n 1 from datasets 
import load_dataset\r\n 2 \r\n----> 3 dataset = load_dataset(\"reddit\", ignore_verifications=True, cache_dir=\"\/Volumes\/My Passport for Mac\/og-chat-data\")\r\n\r\n\/usr\/local\/anaconda3\/envs\/og-data-env\/lib\/python3.9\/site-packages\/datasets\/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)\r\n 845 \r\n 846 # Download and prepare data\r\n--> 847 builder_instance.download_and_prepare(\r\n 848 download_config=download_config,\r\n 849 download_mode=download_mode,\r\n\r\n\/usr\/local\/anaconda3\/envs\/og-data-env\/lib\/python3.9\/site-packages\/datasets\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)\r\n 613 logger.warning(\"HF google storage unreachable. Downloading and preparing it from source\")\r\n 614 if not downloaded_from_gcs:\r\n--> 615 self._download_and_prepare(\r\n 616 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 617 )\r\n\r\n\/usr\/local\/anaconda3\/envs\/og-data-env\/lib\/python3.9\/site-packages\/datasets\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 669 split_dict = SplitDict(dataset_name=self.name)\r\n 670 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 671 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 672 \r\n 673 # Checksums verification\r\n\r\n~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/reddit\/98ba5abea674d3178f7588aa6518a5510dc0c6fa8176d9653a3546d5afcb3969\/reddit.py in _split_generators(self, dl_manager)\r\n 73 def _split_generators(self, dl_manager):\r\n 74 \"\"\"Returns SplitGenerators.\"\"\"\r\n---> 75 dl_path = dl_manager.download_and_extract(_URL)\r\n 76 return [\r\n 77 datasets.SplitGenerator(\r\n\r\n\/usr\/local\/anaconda3\/envs\/og-data-env\/lib\/python3.9\/site-packages\/datasets\/utils\/download_manager.py in download_and_extract(self, url_or_urls)\r\n 287 extracted_path(s): `str`, extracted paths of given URL(s).\r\n 288 \"\"\"\r\n--> 289 return self.extract(self.download(url_or_urls))\r\n 290 \r\n 291 def get_recorded_sizes_checksums(self):\r\n\r\n\/usr\/local\/anaconda3\/envs\/og-data-env\/lib\/python3.9\/site-packages\/datasets\/utils\/download_manager.py in download(self, url_or_urls)\r\n 195 \r\n 196 start_time = datetime.now()\r\n--> 197 downloaded_path_or_paths = map_nested(\r\n 198 download_func,\r\n 199 url_or_urls,\r\n\r\n\/usr\/local\/anaconda3\/envs\/og-data-env\/lib\/python3.9\/site-packages\/datasets\/utils\/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types)\r\n 194 # Singleton\r\n 195 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\r\n--> 196 return function(data_struct)\r\n 197 \r\n 198 disable_tqdm = bool(logger.getEffectiveLevel() > logging.INFO) or not utils.is_progress_bar_enabled()\r\n\r\n\/usr\/local\/anaconda3\/envs\/og-data-env\/lib\/python3.9\/site-packages\/datasets\/utils\/download_manager.py in _download(self, url_or_filename, download_config)\r\n 218 # append the relative path to the base_path\r\n 219 url_or_filename = url_or_path_join(self._base_path, url_or_filename)\r\n--> 220 return cached_path(url_or_filename, 
download_config=download_config)\r\n 221 \r\n 222 def iter_archive(self, path):\r\n\r\n\/usr\/local\/anaconda3\/envs\/og-data-env\/lib\/python3.9\/site-packages\/datasets\/utils\/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)\r\n 286 if is_remote_url(url_or_filename):\r\n 287 # URL, so get it from the cache (downloading if necessary)\r\n--> 288 output_path = get_from_cache(\r\n 289 url_or_filename,\r\n 290 cache_dir=cache_dir,\r\n\r\n\/usr\/local\/anaconda3\/envs\/og-data-env\/lib\/python3.9\/site-packages\/datasets\/utils\/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)\r\n 643 ftp_get(url, temp_file)\r\n 644 else:\r\n--> 645 http_get(\r\n 646 url,\r\n 647 temp_file,\r\n\r\n\/usr\/local\/anaconda3\/envs\/og-data-env\/lib\/python3.9\/site-packages\/datasets\/utils\/file_utils.py in http_get(url, temp_file, proxies, resume_size, headers, cookies, timeout, max_retries)\r\n 451 disable=bool(logging.get_verbosity() == logging.NOTSET),\r\n 452 )\r\n--> 453 for chunk in response.iter_content(chunk_size=1024):\r\n 454 if chunk: # filter out keep-alive new chunks\r\n 455 progress.update(len(chunk))\r\n\r\n\/usr\/local\/anaconda3\/envs\/og-data-env\/lib\/python3.9\/site-packages\/requests\/models.py in generate()\r\n 763 raise ContentDecodingError(e)\r\n 764 except ReadTimeoutError as e:\r\n--> 765 raise ConnectionError(e)\r\n 766 else:\r\n 767 # Standard file-like object.\r\n\r\nConnectionError: HTTPSConnectionPool(host='zenodo.org', port=443): Read timed out.\r\n```","Hey @lhoestq should I try to fix this issue ?","It also doesn't seem to be \"smart caching\" and I received an error about a file not being found...","To be clear, the error I get when I try to \"re-instantiate\" the download after failure is: \r\n```\r\nOSError: Cannot find data file. \r\nOriginal error:\r\n[Errno 20] Not a directory: \/.cache\/huggingface\/datasets\/downloads\/1ec12301abba4daa60eb3a90e53529b5b173296b22dc3bef3186e205c75e594c\/corpus-webis-tldr-17.json'\r\n```","Here is a new error:\r\n```\r\nConnectionError: Couldn't reach https:\/\/zenodo.org\/record\/1043504\/files\/corpus-webis-tldr-17.zip?download=1\r\n```","Hi ! Since https:\/\/github.com\/huggingface\/datasets\/pull\/2803 we've changed the time out from 10sec to 100sec.\r\nThis should prevent the `ReadTimeoutError`. Feel free to try it out by installing `datasets` from source\r\n```\r\npip install git+https:\/\/github.com\/huggingface\/datasets.git\r\n```\r\n\r\nWhen re-running your code you said you get a `OSError`, could you try deleting the file at the path returned by the error ? (the one after `[Errno 20] Not a directory:`). Ideally when a download fails you should be able to re-run it without error; there might be an issue here.\r\n\r\nFinally not sure what we can do about `ConnectionError`, this must be an issue from zenodo. If it happens you simply need to try again\r\n","@lhoestq thanks for the update. The directory specified by the OSError ie. \r\n```\r\n1ec12301abba4daa60eb3a90e53529b5b173296b22dc3bef3186e205c75e594c\/corpus-webis-tldr-17.json \r\n```\r\n was not actually in that directory so I can't delete it. ","Oh, then could you try deleting the parent directory `1ec12301abba4daa60eb3a90e53529b5b173296b22dc3bef3186e205c75e594c` instead ?\r\nThis way the download manager will know that it has to uncompress the data again","It seems to have worked. It only took like 20min! 
I think the extra timeout length did the trick! One thing is that it downloaded a total of 41gb instead of 20gb but at least it finished. ","Great ! The timeout change will be available in the next release of `datasets` :)"],"created_at":1629427956000,"updated_at":1631112722000,"closed_at":1631112722000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nA clear and concise description of what the bug is.\r\nEverytime I try and download the reddit dataset it times out before finishing and I have to try again.\r\n\r\nThere is some timeout error that I will post once it happens again.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"reddit\", ignore_verifications=True, cache_dir=\"\/Volumes\/My Passport for Mac\/og-chat-data\")\r\n```\r\n\r\n## Expected results\r\nA clear and concise description of the expected results.\r\n\r\nI would expect the download to finish, or at least provide a parameter to extend the read timeout window.\r\n\r\n## Actual results\r\nSpecify the actual results or traceback.\r\n\r\nShown below in error message.\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.11.0\r\n- Platform: macOS \r\n- Python version: 3.9.6 (conda env)\r\n- PyArrow version: N\/A\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2820\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2819","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2819\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2819\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2819\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2819","id":974683155,"node_id":"MDExOlB1bGxSZXF1ZXN0NzE1OTUyMjE1","number":2819,"title":"Added XL-Sum dataset","user":{"login":"abhik1505040","id":49608995,"node_id":"MDQ6VXNlcjQ5NjA4OTk1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/49608995?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhik1505040","html_url":"https:\/\/github.com\/abhik1505040","followers_url":"https:\/\/api.github.com\/users\/abhik1505040\/followers","following_url":"https:\/\/api.github.com\/users\/abhik1505040\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhik1505040\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhik1505040\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhik1505040\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhik1505040\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhik1505040\/repos","events_url":"https:\/\/api.github.com\/users\/abhik1505040\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhik1505040\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for adding this one ! I just did some minor changes and set the timeout back to 100sec instead of 1000","The CI failure is unrelated to this PR - let me take a look","> Thanks for adding this one! 
I just did some minor changes and set the timeout back to 100sec instead of 1000\r\n\r\nThank you for updating the language tags. I tried timeout values up to 300 sec on my local machine, but some of the larger files still get timed out. Although this could have been a network issue on my end, have you verified that 100 sec works for all files?","Well the main issue with google drive - even before the time out issues - is that it has a daily quota of downloads per file.\r\nTherefore if many people start downloading this dataset, it will be unavailable until the quota is reset the next day.\r\n\r\nSo ideally it would be nice if the data were hosted elsewhere than Google drive, to avoid the quota and time out issue.\r\nHF can probably help with hosting the data if needed","> Well the main issue with google drive - even before the time out issues - is that it has a daily quota of downloads per file.\r\n> Therefore if many people start downloading this dataset, it will be unavailable until the quota is reset the next day.\r\n> \r\n> So ideally it would be nice if the data were hosted elsewhere than Google drive, to avoid the quota and time out issue.\r\n> HF can probably help with hosting the data if needed\r\n\r\nIt'd be great if the dataset can be hosted in HF. How should I proceed here though? Upload the dataset files as a community dataset and update the links in this pull request or is there a more straightforward way?","Hi ! Ideally everything should be in the same place, so feel free to create a community dataset on the Hub and upload your data files as well as you dataset script (and also the readme.md and dataset_infos.json).\r\n\r\nThe only change you have to do in your dataset script is use a relative path to your data files instead of urls.\r\nFor example if your repository looks like this:\r\n```\r\nxlsum\/\r\n\u251c\u2500\u2500 data\/\r\n\u2502 \u251c\u2500\u2500 amharic_XLSum_v2.0.tar.bz2\r\n\u2502 \u251c\u2500\u2500 ...\r\n\u2502 \u2514\u2500\u2500 yoruba_XLSum_v2.0.tar.bz2\r\n\u251c\u2500\u2500 xlsum.py\r\n\u251c\u2500\u2500 README.md\r\n\u2514\u2500\u2500 dataset_infos.json\r\n```\r\nThen you just need to pass `\"data\/amharic_XLSum_v2.0.tar.bz2\"` to `dl_manager.download_and_extract(...)`, instead of an url.\r\n\r\nLocally you can test that it's working as expected with\r\n```python\r\nload_dataset(\"path\/to\/my\/directory\/named\/xlsum\")\r\n```\r\n\r\nThen once it's on the Hub, you can load it with\r\n```python\r\nload_dataset(\"username\/xlsum\")\r\n```\r\n\r\nLet me know if you have questions :)","Thank you for your detailed response regarding the community dataset building process. However, will this pull request be merged into the main branch?","If XL-sum is available via the Hub we don't need to add it again in the `datasets` github repo ;)"],"created_at":1629380865000,"updated_at":1631613247000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2819","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2819","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2819.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2819.patch"},"body":"Added XL-Sum dataset published in ACL-IJCNLP 2021. (https:\/\/aclanthology.org\/2021.findings-acl.413\/). 
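A minimal sketch of the repository-relative approach described in the comments above, assuming a hypothetical `train.jsonl` inside the archive and illustrative feature names; only the relative path passed to `download_and_extract` is taken from the discussion:

```python
import json
import os

import datasets


class Xlsum(datasets.GeneratorBasedBuilder):
    """Hypothetical minimal builder using repository-relative data files."""

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {"text": datasets.Value("string"), "summary": datasets.Value("string")}
            )
        )

    def _split_generators(self, dl_manager):
        # A relative path inside the dataset repository instead of a URL:
        data_dir = dl_manager.download_and_extract("data/amharic_XLSum_v2.0.tar.bz2")
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"filepath": os.path.join(data_dir, "train.jsonl")},
            )
        ]

    def _generate_examples(self, filepath):
        with open(filepath, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                yield idx, json.loads(line)
```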
The default timeout values in `src\/datasets\/utils\/file_utls.py` were increased to enable downloading from the original google drive links.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2819\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2818","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2818\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2818\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2818\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2818","id":974552009,"node_id":"MDU6SXNzdWU5NzQ1NTIwMDk=","number":2818,"title":"cannot load data from my loacal path","user":{"login":"yang-collect","id":46920280,"node_id":"MDQ6VXNlcjQ2OTIwMjgw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/46920280?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yang-collect","html_url":"https:\/\/github.com\/yang-collect","followers_url":"https:\/\/api.github.com\/users\/yang-collect\/followers","following_url":"https:\/\/api.github.com\/users\/yang-collect\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yang-collect\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yang-collect\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yang-collect\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yang-collect\/orgs","repos_url":"https:\/\/api.github.com\/users\/yang-collect\/repos","events_url":"https:\/\/api.github.com\/users\/yang-collect\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yang-collect\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! The `data_files` parameter must be a string, a list\/tuple or a python dict.\r\n\r\nCan you check the type of your `config.train_path` please ? 
Or use `data_files=str(config.train_path)` ?"],"created_at":1629371610000,"updated_at":1630399576000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nI just want to directly load data from my local path,but find a bug.And I compare it with pandas to provide my local path is real.\r\n\r\nhere is my code\r\n```python3\r\n# print my local path\r\nprint(config.train_path)\r\n# read data and print data length\r\ntarin=pd.read_csv(config.train_path)\r\nprint(len(tarin))\r\n\r\n# loading data by load_dataset \r\ndata = load_dataset('csv',data_files=config.train_path)\r\n\r\nprint(len(data))\r\n```\r\n\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nC:\\Users\\wie\\Documents\\\u9879\u76ee\\\u6587\u672c\u5206\u7c7b\\data\\train.csv\r\n7613\r\nTraceback (most recent call last):\r\n File \"c:\/Users\/wie\/Documents\/\u9879\u76ee\/\u6587\u672c\u5206\u7c7b\/lib\/DataPrecess.py\", line 17, in \r\n data = load_dataset('csv',data_files=config.train_path)\r\n File \"C:\\Users\\wie\\Miniconda3\\lib\\site-packages\\datasets\\load.py\", line 830, in load_dataset\r\n **config_kwargs,\r\n File \"C:\\Users\\wie\\Miniconda3\\lib\\site-packages\\datasets\\load.py\", line 710, in load_dataset_builder\r\n **config_kwargs,\r\n File \"C:\\Users\\wie\\Miniconda3\\lib\\site-packages\\datasets\\builder.py\", line 271, in __init__\r\n **config_kwargs,\r\n File \"C:\\Users\\wie\\Miniconda3\\lib\\site-packages\\datasets\\builder.py\", line 386, in _create_builder_config\r\n config_kwargs, custom_features=custom_features, use_auth_token=self.use_auth_token\r\n File \"C:\\Users\\wie\\Miniconda3\\lib\\site-packages\\datasets\\builder.py\", line 156, in create_config_id\r\n raise ValueError(\"Please provide a valid `data_files` in `DatasetBuilder`\")\r\nValueError: Please provide a valid `data_files` in `DatasetBuilder`\r\n```\r\n\r\n## Expected results\r\nA clear and concise description of the expected results.\r\n\r\n## Actual results\r\nSpecify the actual results or traceback.\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.11.0\r\n- Platform: win10\r\n- Python version: 3.7.9\r\n- PyArrow version: 5.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2818\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2817","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2817\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2817\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2817\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2817","id":974486051,"node_id":"MDExOlB1bGxSZXF1ZXN0NzE1NzgzMDQ3","number":2817,"title":"Rename The Pile 
subsets","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Sounds good. Should we also have a \u201cthe_pile\u201d dataset with the subsets as configuration?","I think the main `the_pile` datasets will be the one that is the mix of all the subsets: https:\/\/the-eye.eu\/public\/AI\/pile\/\r\n\r\nWe can also add configurations for each subset, and even allow users to specify the subsets they want:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"the_pile\", subsets=[\"openwebtext2\", \"books3\", \"hn\"])\r\n```\r\n\r\nWe're alrady doing something similar for mC4, where users can specify the list of languages they want to load."],"created_at":1629366982000,"updated_at":1629735850000,"closed_at":1629735849000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2817","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2817","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2817.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2817.patch"},"body":"After discussing with @yjernite we think it's better to have the subsets of The Pile explicitly have \"the_pile\" in their names.\r\n\r\nI'm doing the changes for the subsets that @richarddwang added:\r\n- [x] books3 -> the_pile_books3 https:\/\/github.com\/huggingface\/datasets\/pull\/2801\r\n- [x] stack_exchange -> the_pile_stack_exchange https:\/\/github.com\/huggingface\/datasets\/pull\/2803\r\n- [x] openwebtext2 -> the_pile_openwebtext2 https:\/\/github.com\/huggingface\/datasets\/pull\/2802\r\n\r\nFor consistency we should also rename `bookcorpusopen` to `the_pile_bookcorpus` IMO, but let me know what you think.\r\n(we can just add a deprecation message to `bookcorpusopen` for now and add `the_pile_bookcorpus`)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2817\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2816","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2816\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2816\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2816\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2816","id":974031404,"node_id":"MDU6SXNzdWU5NzQwMzE0MDQ=","number":2816,"title":"Add Mostly Basic Python Problems Dataset","user":{"login":"osanseviero","id":7246357,"node_id":"MDQ6VXNlcjcyNDYzNTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7246357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/osanseviero","html_url":"https:\/\/github.com\/osanseviero","followers_url":"https:\/\/api.github.com\/users\/osanseviero\/followers","following_url":"https:\/\/api.github.com\/users\/osanseviero\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/osanseviero\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/osanseviero\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/osanseviero\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/osanseviero\/orgs","repos_url":"https:\/\/api.github.com\/users\/osanseviero\/repos","events_url":"https:\/\/api.github.com\/users\/osanseviero\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/osanseviero\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I started working on that."],"created_at":1629318519000,"updated_at":1631261060000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** Mostly Basic Python Problems Dataset\r\n- **Description:** The benchmark consists of around 1,000 crowd-sourced Python programming problems, designed to be solvable by entry level programmers, covering programming fundamentals, standard library functionality, and so on. 
Each problem consists of a task description, code solution and 3 automated test cases.\r\n- **Paper:** *link to the dataset paper if available*\r\n- **Data:** https:\/\/github.com\/google-research\/google-research\/tree\/master\/mbpp\r\n- **Motivation:** Simple, small dataset related to coding problems.\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2816\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2815","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2815\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2815\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2815\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2815","id":973862024,"node_id":"MDExOlB1bGxSZXF1ZXN0NzE1MjUxNDQ5","number":2815,"title":"Tiny typo fixes of \"fo\" -> \"of\"","user":{"login":"aronszanto","id":9934829,"node_id":"MDQ6VXNlcjk5MzQ4Mjk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9934829?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/aronszanto","html_url":"https:\/\/github.com\/aronszanto","followers_url":"https:\/\/api.github.com\/users\/aronszanto\/followers","following_url":"https:\/\/api.github.com\/users\/aronszanto\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/aronszanto\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/aronszanto\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/aronszanto\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/aronszanto\/orgs","repos_url":"https:\/\/api.github.com\/users\/aronszanto\/repos","events_url":"https:\/\/api.github.com\/users\/aronszanto\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/aronszanto\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1629304571000,"updated_at":1629360182000,"closed_at":1629360182000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2815","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2815","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2815.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2815.patch"},"body":"Noticed a few of these when reading docs- feel free to ignore the PR and just fix on some main contributor branch if more helpful. Thanks for the great library! 
:)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2815\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2814","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2814\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2814\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2814\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2814","id":973632645,"node_id":"MDExOlB1bGxSZXF1ZXN0NzE1MDUwODc4","number":2814,"title":"Bump tqdm version","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1629291089000,"updated_at":1629294251000,"closed_at":1629293990000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2814","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2814","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2814.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2814.patch"},"body":"The recently released tqdm 4.62.1 includes a fix for PermissionError on Windows (submitted by me in https:\/\/github.com\/tqdm\/tqdm\/pull\/1207), which means we can remove expensive `gc.collect` calls by bumping tqdm to that version. 
This PR does exactly that and, additionally, fixes a `disable_tqdm` definition that would previously, if used, raise a PermissionError on Windows.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2814\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2813","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2813\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2813\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2813\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2813","id":973470580,"node_id":"MDU6SXNzdWU5NzM0NzA1ODA=","number":2813,"title":"Remove compression from xopen","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":2067400324,"node_id":"MDU6TGFiZWwyMDY3NDAwMzI0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/generic%20discussion","name":"generic discussion","color":"c5def5","default":false,"description":"Generic discussion on the library"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["After discussing with @lhoestq, a reasonable alternative:\r\n- `download_manager.extract(urlpath)` adds prefixes to `urlpath` in the same way as `fsspec` does for protocols, but we implement custom prefixes for all compression formats: \r\n `bz2::http:\/\/domain.org\/filename.bz2`\r\n- `xopen` parses the `urlpath` and extracts the `compression` parameter and passes it to `fsspec.open`:\r\n `fsspec.open(\"http:\/\/domain.org\/filename.bz2\", compression=\"bz2\")`\r\n\r\nPros:\r\n- clean solution that continues giving support to all compression formats\r\n- no breaking change when opening non-decompressed files: if no compression-protocol-like is passed, fsspec.open does not uncompress (passes compression=None)\r\n\r\nCons:\r\n- we create a \"private\" convention for the format of `urlpath`: although similar to `fsspec` protocols, we add custom prefixes for the `compression` argument"],"created_at":1629279359000,"updated_at":1629734354000,"closed_at":1629734354000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"We implemented support for streaming with 2 requirements:\r\n- transparent use for the end user: just needs to pass the parameter `streaming=True`\r\n- no 
additional work for the contributors: previous loading scripts should also work in streaming mode with no (or minor) changes; and new loading scripts should not involve additional code to support streaming\r\n\r\nIn order to fulfill these requirements, streaming implementation patched some Python functions:\r\n- the `open(urlpath)` function was patched with `fsspec.open(urlpath)`\r\n- the `os.path.join(urlpath, *others)` function was patched in order to add to `urlpath` hops (`::`) and extractor protocols (`zip:\/\/`), which are required by `fsspec.open`\r\n\r\nRecently, we implemented support for streaming all archive+compression formats: zip, tar, gz, bz2, lz4, xz, zst; tar.gz, tar.bz2,...\r\nUnder the hood, the implementation:\r\n- passes an additional parameter `compression` to `fsspec.open`, so that it performs the decompression on the fly: `fsspec.open(urlpath, compression=...)`\r\n\r\nSome concerns have been raised about passing the parameter `compression` to `fsspec.open`:\r\n- https:\/\/github.com\/huggingface\/datasets\/pull\/2786#discussion_r689550254\r\n- #2811 \r\n\r\nThe main argument is that if `open` decompresses the file and afterwards we call `gzip.open` on it, that will raise an error in `oscar` dataset:\r\n```python\r\ngzip.open(open(urlpath\r\n```\r\nWhile this is true:\r\n- it is not natural\/usual to call `open` inside `gzip.open` (never seen this before)\r\n- indeed, this was recently (2 months ago) coded that way in `datasets` in order to allow streaming support (with previous implementation of streaming)\r\n\r\nIn this particular case, there is a natural fix solution: #2811:\r\n- Revert the `open` inside the `gzip.open` (change done 2 months ago): `gzip.open(open(urlpath` => `gzip.open(urlpath`\r\n- Patch `gzip.open(urlpath` with `fsspec.open(urlpath, compression=\"gzip\"` \r\n\r\nAre there other issues apart from this?\r\n\r\nNote that there is an issue just because the open inside of the gzip.open. There is no issue in the other cases where datasets loading scripts use just\r\n- `gzip.open` \r\n- `open` (after having called dl_manager.download_and_extract)\r\n\r\nTODO:\r\n- [ ] Is this really an issue? Please enumerate the `datasets` loading scripts where this is problematic.\r\n - For the moment, there are only 3 datasets where we have an `open` inside a `gzip.open`:\r\n - oscar (since 23 June), mc4 (since 2 July) and c4 (since 2 July)\r\n - In the 3 datasets, the only reason to put an open inside a gzip.open was indeed to force supporting streaming\r\n- [ ] If this is indeed an issue, which are the possible alternatives? 
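A short sketch of the two behaviours compared above, with placeholder paths and URLs: the plain `gzip.open` call used after a regular download, and the `fsspec.open(..., compression="gzip")` equivalent used when streaming:

```python
import gzip
import fsspec

# Local, non-streaming case: the loading script opens the downloaded file directly.
with gzip.open("downloads/file.jsonl.gz", "rt", encoding="utf-8") as f:
    first = f.readline()

# Streaming case: the same read expressed with fsspec, where decompression
# is requested explicitly through the compression argument.
with fsspec.open("https://example.com/file.jsonl.gz", "rt", compression="gzip") as f:
    first = f.readline()
```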
Pros\/cons?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2813\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2812","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2812\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2812\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2812\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2812","id":972936889,"node_id":"MDU6SXNzdWU5NzI5MzY4ODk=","number":2812,"title":"arXiv Dataset verification problem","user":{"login":"eladsegal","id":13485709,"node_id":"MDQ6VXNlcjEzNDg1NzA5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13485709?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/eladsegal","html_url":"https:\/\/github.com\/eladsegal","followers_url":"https:\/\/api.github.com\/users\/eladsegal\/followers","following_url":"https:\/\/api.github.com\/users\/eladsegal\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/eladsegal\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/eladsegal\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/eladsegal\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/eladsegal\/orgs","repos_url":"https:\/\/api.github.com\/users\/eladsegal\/repos","events_url":"https:\/\/api.github.com\/users\/eladsegal\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/eladsegal\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1629223308000,"updated_at":1629223308000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n`dataset_infos.json` for `arxiv_dataset` contains a fixed number of training examples, however the data (downloaded from an external source) is updated every week with additional examples.\r\nTherefore, loading the dataset without `ignore_verifications=True` results in a verification error.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2812\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2811","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2811\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2811\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2811\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2811","id":972522480,"node_id":"MDExOlB1bGxSZXF1ZXN0NzE0MTAzNDIy","number":2811,"title":"Fix stream 
oscar","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["One additional note: if we can try to not change the code of oscar.py too often, I'm sure users that have it in their cache directory will be happy to not have to redownload it every time they update the library ;)\r\n\r\n(since changing the code changes the cache directory of the dataset)","I don't think this is confusing for users because users don't even know we have patched `open`. The only thing users care is that if the pass `streaming=True`, they want to be able to load the dataset in streaming mode.\r\n\r\nI don't see any other dataset where patching `open` with `fsspec.open`+`compression` is an \"underlying issue\". 
Are there other datasets where this is an issue?\r\n\r\nThe only dataset where this was an issue is in oscar and the issue is indeed due to the additional `open` you added inside `zip.open`.","Closing this one since https:\/\/github.com\/huggingface\/datasets\/pull\/2822 reverted the change of behavior of `open`"],"created_at":1629195059000,"updated_at":1629973575000,"closed_at":1629973574000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2811","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2811","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2811.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2811.patch"},"body":"Previously, an additional `open` was added to oscar to make it stream-compatible: 587bbb94e891b22863b312b99696e32708c379f4.\r\n\r\nThis was argued that might be problematic: https:\/\/github.com\/huggingface\/datasets\/pull\/2786#discussion_r690045921\r\n\r\nThis PR:\r\n- removes that additional `open`\r\n- patches `gzip.open` with `xopen` + `compression=\"gzip\"`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2811\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2810","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2810\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2810\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2810\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2810","id":972040022,"node_id":"MDExOlB1bGxSZXF1ZXN0NzEzNjkzMTI1","number":2810,"title":"Add WIT Dataset","user":{"login":"hassiahk","id":13920778,"node_id":"MDQ6VXNlcjEzOTIwNzc4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13920778?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hassiahk","html_url":"https:\/\/github.com\/hassiahk","followers_url":"https:\/\/api.github.com\/users\/hassiahk\/followers","following_url":"https:\/\/api.github.com\/users\/hassiahk\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hassiahk\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hassiahk\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hassiahk\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hassiahk\/orgs","repos_url":"https:\/\/api.github.com\/users\/hassiahk\/repos","events_url":"https:\/\/api.github.com\/users\/hassiahk\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hassiahk\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1629142449000,"updated_at":1629220458000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2810","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2810","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2810.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2810.patch"},"body":"Adds Google's [WIT](https:\/\/github.com\/google-research-datasets\/wit) 
dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2810\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2809","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2809\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2809\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2809\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2809","id":971902613,"node_id":"MDExOlB1bGxSZXF1ZXN0NzEzNTc2Njcz","number":2809,"title":"Add Beans Dataset","user":{"login":"nateraw","id":32437151,"node_id":"MDQ6VXNlcjMyNDM3MTUx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32437151?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nateraw","html_url":"https:\/\/github.com\/nateraw","followers_url":"https:\/\/api.github.com\/users\/nateraw\/followers","following_url":"https:\/\/api.github.com\/users\/nateraw\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nateraw\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nateraw\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nateraw\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nateraw\/orgs","repos_url":"https:\/\/api.github.com\/users\/nateraw\/repos","events_url":"https:\/\/api.github.com\/users\/nateraw\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nateraw\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1629130953000,"updated_at":1629978147000,"closed_at":1629978147000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2809","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2809","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2809.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2809.patch"},"body":"Adds the [beans](https:\/\/github.com\/AI-Lab-Makerere\/ibean\/) image classification dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2809\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2808","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2808\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2808\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2808\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2808","id":971882320,"node_id":"MDU6SXNzdWU5NzE4ODIzMjA=","number":2808,"title":"Enable streaming for Wikipedia 
corpora","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1629129552000,"updated_at":1629129552000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"**Is your feature request related to a problem? Please describe.**\r\nSeveral of the [Wikipedia corpora](https:\/\/huggingface.co\/datasets?search=wiki) on the Hub involve quite large files that would be a good candidate for streaming. Currently it is not possible to stream these corpora:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# Throws ValueError: Builder wikipedia is not streamable.\r\nwiki_dataset_streamed = load_dataset(\"wikipedia\", \"20200501.en\", split=\"train\", streaming=True)\r\n```\r\n\r\nGiven that these corpora are derived from Wikipedia dumps in XML format which are then processed with Apache Beam, I am not sure whether streaming is possible in principle. 
The goal of this issue is to discuss whether this feature even makes sense :)\r\n\r\n**Describe the solution you'd like**\r\nIt would be nice to be able to stream Wikipedia corpora from the Hub with something like\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nwiki_dataset_streamed = load_dataset(\"wikipedia\", \"20200501.en\", split=\"train\", streaming=True)\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2808\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2807","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2807\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2807\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2807\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2807","id":971849863,"node_id":"MDExOlB1bGxSZXF1ZXN0NzEzNTMxNjIw","number":2807,"title":"Add cats_vs_dogs dataset","user":{"login":"nateraw","id":32437151,"node_id":"MDQ6VXNlcjMyNDM3MTUx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32437151?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nateraw","html_url":"https:\/\/github.com\/nateraw","followers_url":"https:\/\/api.github.com\/users\/nateraw\/followers","following_url":"https:\/\/api.github.com\/users\/nateraw\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nateraw\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nateraw\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nateraw\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nateraw\/orgs","repos_url":"https:\/\/api.github.com\/users\/nateraw\/repos","events_url":"https:\/\/api.github.com\/users\/nateraw\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nateraw\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1629127271000,"updated_at":1630341325000,"closed_at":1630341324000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2807","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2807","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2807.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2807.patch"},"body":"Adds Microsoft's [Cats vs. 
Dogs](https:\/\/www.microsoft.com\/en-us\/download\/details.aspx?id=54765) dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2807\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2806","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2806\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2806\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2806\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2806","id":971625449,"node_id":"MDExOlB1bGxSZXF1ZXN0NzEzMzM5NDUw","number":2806,"title":"Fix streaming tar files from canonical datasets","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["In case it's relevant for this PR, I'm finding that I cannot stream the `bookcorpus` dataset (using the `master` branch of `datasets`), which is a `.tar.bz2` file:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nbooks_dataset_streamed = load_dataset(\"bookcorpus\", split=\"train\", streaming=True)\r\n# Throws a 404 HTTP error\r\nnext(iter(books_dataset_streamed))\r\n```\r\n\r\nThe full stack trace is:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nClientResponseError Traceback (most recent call last)\r\n in ()\r\n----> 1 next(iter(books_dataset_streamed))\r\n\r\n11 frames\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/iterable_dataset.py in __iter__(self)\r\n 339 \r\n 340 def __iter__(self):\r\n--> 341 for key, example in self._iter():\r\n 342 if self.features:\r\n 343 # we encode the example for ClassLabel feature types for example\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/iterable_dataset.py in _iter(self)\r\n 336 else:\r\n 337 ex_iterable = self._ex_iterable\r\n--> 338 yield from ex_iterable\r\n 339 \r\n 340 def __iter__(self):\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/iterable_dataset.py in __iter__(self)\r\n 76 \r\n 77 def __iter__(self):\r\n---> 78 for key, example in self.generate_examples_fn(**self.kwargs):\r\n 79 yield key, example\r\n 80 
\r\n\r\n\/root\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/bookcorpus\/44662c4a114441c35200992bea923b170e6f13f2f0beb7c14e43759cec498700\/bookcorpus.py in _generate_examples(self, directory)\r\n 98 for txt_file in files:\r\n 99 with open(txt_file, mode=\"r\", encoding=\"utf-8\") as f:\r\n--> 100 for line in f:\r\n 101 yield _id, {\"text\": line.strip()}\r\n 102 _id += 1\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/fsspec\/implementations\/http.py in read(self, length)\r\n 496 else:\r\n 497 length = min(self.size - self.loc, length)\r\n--> 498 return super().read(length)\r\n 499 \r\n 500 async def async_fetch_all(self):\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/fsspec\/spec.py in read(self, length)\r\n 1481 # don't even bother calling fetch\r\n 1482 return b\"\"\r\n-> 1483 out = self.cache._fetch(self.loc, self.loc + length)\r\n 1484 self.loc += len(out)\r\n 1485 return out\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/fsspec\/caching.py in _fetch(self, start, end)\r\n 374 ):\r\n 375 # First read, or extending both before and after\r\n--> 376 self.cache = self.fetcher(start, bend)\r\n 377 self.start = start\r\n 378 elif start < self.start:\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/fsspec\/asyn.py in wrapper(*args, **kwargs)\r\n 86 def wrapper(*args, **kwargs):\r\n 87 self = obj or args[0]\r\n---> 88 return sync(self.loop, func, *args, **kwargs)\r\n 89 \r\n 90 return wrapper\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/fsspec\/asyn.py in sync(loop, func, timeout, *args, **kwargs)\r\n 67 raise FSTimeoutError\r\n 68 if isinstance(result[0], BaseException):\r\n---> 69 raise result[0]\r\n 70 return result[0]\r\n 71 \r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/fsspec\/asyn.py in _runner(event, coro, result, timeout)\r\n 23 coro = asyncio.wait_for(coro, timeout=timeout)\r\n 24 try:\r\n---> 25 result[0] = await coro\r\n 26 except Exception as ex:\r\n 27 result[0] = ex\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/fsspec\/implementations\/http.py in async_fetch_range(self, start, end)\r\n 535 # range request outside file\r\n 536 return b\"\"\r\n--> 537 r.raise_for_status()\r\n 538 if r.status == 206:\r\n 539 # partial content, as expected\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/aiohttp\/client_reqrep.py in raise_for_status(self)\r\n 1003 status=self.status,\r\n 1004 message=self.reason,\r\n-> 1005 headers=self.headers,\r\n 1006 )\r\n 1007 \r\n\r\nClientResponseError: 404, message='Not Found', url=URL('https:\/\/storage.googleapis.com\/huggingface-nlp\/datasets\/bookcorpus\/bookcorpus.tar.bz2\/books_large_p1.txt')\r\n```\r\n\r\nLet me know if this is unrelated and I'll open a separate issue :)\r\n\r\nEnvironment info:\r\n\r\n```\r\n- `datasets` version: 1.11.1.dev0\r\n- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.11\r\n- PyArrow version: 3.0.0\r\n```","@lewtun: `.tar.compression-extension` files are not supported yet. That is the objective of this PR.","> @lewtun: `.tar.compression-extension` files are not supported yet. 
That is the objective of this PR.\r\n\r\nthanks for the context and the great work on the streaming features (right now i'm writing the streaming section of the HF course, so am acting like a beta tester \ud83d\ude04)","@lewtun this PR fixes previous issue with xjoin:\r\n\r\nGiven:\r\n```python\r\nxjoin(\r\n \"https:\/\/storage.googleapis.com\/huggingface-nlp\/datasets\/bookcorpus\/bookcorpus.tar.bz2\",\r\n \"books_large_p1.txt\"\r\n)\r\n```\r\n\r\n- Before it gave: \r\n `\"https:\/\/storage.googleapis.com\/huggingface-nlp\/datasets\/bookcorpus\/bookcorpus.tar.bz2\/books_large_p1.txt\"`\r\n thus raising the 404 error\r\n\r\n- Now it gives:\r\n `tar:\/\/books_large_p1.txt::https:\/\/storage.googleapis.com\/huggingface-nlp\/datasets\/bookcorpus\/bookcorpus.tar.bz2`\r\n (this is the expected format for `fsspec`) and additionally passes the parameter `compression=\"bz2\"`.\r\n See: https:\/\/github.com\/huggingface\/datasets\/pull\/2806\/files#diff-97bb2d08db65ce3b679aefc43cadad76d053c1e58ecc315e49b80873d0fbdabeR15"],"created_at":1629112228000,"updated_at":1629197780000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2806","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2806","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2806.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2806.patch"},"body":"Previous PR #2800 implemented support to stream remote tar files when passing the parameter `data_files`: they required a glob string `\"*\"`.\r\n\r\nHowever, this glob string creates an error when streaming canonical datasets (with a `join` after the `open`).\r\n\r\nThis PR fixes this issue and allows streaming tar files both from:\r\n- canonical datasets scripts and\r\n- data files.\r\n\r\nThis PR also adds support for compressed tar files: `.tar.gz`, `.tar.bz2`,...\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2806\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2805","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2805\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2805\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2805\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2805","id":971436456,"node_id":"MDExOlB1bGxSZXF1ZXN0NzEzMTc3MTI4","number":2805,"title":"Fix streaming zip files from canonical 
datasets","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1629097900000,"updated_at":1629110040000,"closed_at":1629110040000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2805","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2805","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2805.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2805.patch"},"body":"Previous PR #2798 fixed streaming remote zip files when passing the parameter `data_files`.\r\n\r\nHowever, that broke streaming zip files used in canonical `datasets` scripts, which normally have a subsequent `join()` (patched with `xjoin()`) after the `StreamingDownloadManager.download_and_extract()` is called.\r\n\r\nThis PR fixes this issue and allows streaming zip files both from:\r\n- canonical datasets scripts and\r\n- data files.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2805\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2804","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2804\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2804\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2804\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2804","id":971353437,"node_id":"MDExOlB1bGxSZXF1ZXN0NzEzMTA2NTMw","number":2804,"title":"Add 
Food-101","user":{"login":"nateraw","id":32437151,"node_id":"MDQ6VXNlcjMyNDM3MTUx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32437151?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nateraw","html_url":"https:\/\/github.com\/nateraw","followers_url":"https:\/\/api.github.com\/users\/nateraw\/followers","following_url":"https:\/\/api.github.com\/users\/nateraw\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nateraw\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nateraw\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nateraw\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nateraw\/orgs","repos_url":"https:\/\/api.github.com\/users\/nateraw\/repos","events_url":"https:\/\/api.github.com\/users\/nateraw\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nateraw\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1629087975000,"updated_at":1629469893000,"closed_at":1629377286000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2804","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2804","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2804.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2804.patch"},"body":"Adds image classification dataset [Food-101](https:\/\/data.vision.ee.ethz.ch\/cvl\/datasets_extra\/food-101\/).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2804\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2803","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2803\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2803\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2803\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2803","id":970858928,"node_id":"MDExOlB1bGxSZXF1ZXN0NzEyNzQxODMz","number":2803,"title":"add stack exchange","user":{"login":"richarddwang","id":17963619,"node_id":"MDQ6VXNlcjE3OTYzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17963619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/richarddwang","html_url":"https:\/\/github.com\/richarddwang","followers_url":"https:\/\/api.github.com\/users\/richarddwang\/followers","following_url":"https:\/\/api.github.com\/users\/richarddwang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/richarddwang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/richarddwang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/richarddwang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/richarddwang\/orgs","repos_url":"https:\/\/api.github.com\/users\/richarddwang\/repos","events_url":"https:\/\/api.github.com\/users\/richarddwang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/richarddwang\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! 
Merging this one since it's all good :)\r\n\r\nHowever I think it would also be better to actually rename it `the_pile_stack_exchange` to make things clearer and to avoid name collisions in the future. I would like to do the same for `books3` as well.\r\n\r\nIf you don't mind I'll open a PR to do the renaming","\r\n> If you don't mind I'll open a PR to do the renaming\r\n\r\n@lhoestq That will be nice !!\r\n"],"created_at":1628928662000,"updated_at":1629367653000,"closed_at":1629360458000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2803","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2803","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2803.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2803.patch"},"body":"stack exchange is part of EleutherAI\/The Pile, but AFAIK, The Pile dataset blend all sub datasets together thus we are not able to use just one of its sub dataset from The Pile data. So I create an independent dataset using The Pile preliminary components.\r\n\r\nI also change default `timeout` to 100 seconds instead of 10 seconds, otherwise I keep getting read time out when downloading source data of stack exchange and cc100 dataset.\r\n\r\nWhen I was creating dataset card. I found there is room for creating \/ editing dataset card. I've made it an issue. #2797\r\n\r\nAlso I am wondering whether the import of The Pile dataset is actively undertaken (because I may need it recently)? #1675","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2803\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2802","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2802\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2802\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2802\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2802","id":970848302,"node_id":"MDExOlB1bGxSZXF1ZXN0NzEyNzM0MTc3","number":2802,"title":"add openwebtext2","user":{"login":"richarddwang","id":17963619,"node_id":"MDQ6VXNlcjE3OTYzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17963619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/richarddwang","html_url":"https:\/\/github.com\/richarddwang","followers_url":"https:\/\/api.github.com\/users\/richarddwang\/followers","following_url":"https:\/\/api.github.com\/users\/richarddwang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/richarddwang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/richarddwang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/richarddwang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/richarddwang\/orgs","repos_url":"https:\/\/api.github.com\/users\/richarddwang\/repos","events_url":"https:\/\/api.github.com\/users\/richarddwang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/richarddwang\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["It seems we need to `pip install jsonlines` to pass the checks ?","Hi ! 
Do you really need `jsonlines` ? I think it simply uses `json.loads` under the hood.\r\n\r\nCurrently the test are failing because `jsonlines` is not part of the extra requirements `TESTS_REQUIRE` in setup.py\r\n\r\nSo either you can replace `jsonlines` with a simple for loop on the lines of the files and use `json.loads`, or you can add `TESTS_REQUIRE` to the test requirements (but in this case users will have to install it as well).","Thanks for your suggestion. I now know `io` and json lines format better and has changed `jsonlines` to just `readlines`."],"created_at":1628924943000,"updated_at":1629727574000,"closed_at":1629727574000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2802","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2802","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2802.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2802.patch"},"body":"openwebtext2 is part of EleutherAI\/The Pile, but AFAIK, The Pile dataset blend all sub datasets together thus we are not able to use just one of its sub dataset from The Pile data. So I create an independent dataset using The Pile preliminary components.\r\n\r\nWhen I was creating dataset card. I found there is room for creating \/ editing dataset card. I've made it an issue. #2797\r\n\r\nAlso I am wondering whether the import of The Pile dataset is actively undertaken (because I may need it recently)? #1675","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2802\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2801","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2801\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2801\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2801\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2801","id":970844617,"node_id":"MDExOlB1bGxSZXF1ZXN0NzEyNzMwODEz","number":2801,"title":"add books3","user":{"login":"richarddwang","id":17963619,"node_id":"MDQ6VXNlcjE3OTYzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17963619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/richarddwang","html_url":"https:\/\/github.com\/richarddwang","followers_url":"https:\/\/api.github.com\/users\/richarddwang\/followers","following_url":"https:\/\/api.github.com\/users\/richarddwang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/richarddwang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/richarddwang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/richarddwang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/richarddwang\/orgs","repos_url":"https:\/\/api.github.com\/users\/richarddwang\/repos","events_url":"https:\/\/api.github.com\/users\/richarddwang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/richarddwang\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> When I was creating dataset card. I found there is room for creating \/ editing dataset card. I've made it an issue. 
#2797\r\n\r\nThanks for the message, we'll definitely improve this\r\n\r\n> Also I am wondering whether the import of The Pile dataset is actively undertaken (because I may need it recently)? #1675\r\n\r\nWell currently no, but I think @lewtun was about to do it (though he's currently on vacations)","> > Also I am wondering whether the import of The Pile dataset is actively undertaken (because I may need it recently)? #1675\r\n> \r\n> Well currently no, but I think @lewtun was about to do it (though he's currently on vacations)\r\n\r\nyes i plan to start working on this next week #2185 \r\n\r\none question for @richarddwang - do you know if eleutherai happened to also release the \"existing\" datasets like enron emails and opensubtitles? \r\n\r\nin appendix c of their paper, they provide details on how they extracted these datasets, but it would be nice if we could just point to a url so we can be as close as possible to original implementation.","@lewtun \r\n\r\n> yes i plan to start working on this next week\r\n\r\nNice! Looking forward to it.\r\n\r\n> one question for @richarddwang - do you know if eleutherai happened to also release the \"existing\" datasets like enron emails and opensubtitles?\r\n\r\nSadly, I don't know any existing dataset of enron emails, but I believe opensubtitles dataset is hosted at here. https:\/\/the-eye.eu\/public\/AI\/pile_preliminary_components\/\r\n![image](https:\/\/user-images.githubusercontent.com\/17963619\/130061667-8c17985a-1c2f-432f-89f0-66a5288611b8.png)\r\n","thanks for the link @richarddwang! i think that corpus is actually the youtube subtitles one and my impression is that eleutherai have only uploaded the 14 new datasets they created. i've contacted one of the authors so hopefully they can share some additional info for us :)\r\n\r\nbtw it might take a while to put together all the corpora if i also need to preprocess them (e.g. the open subtitles \/ enron email etc), but i expect no longer than a few weeks."],"created_at":1628924665000,"updated_at":1629391389000,"closed_at":1629301019000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2801","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2801","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2801.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2801.patch"},"body":"books3 is part of EleutherAI\/The Pile, but AFAIK, The Pile dataset blend all sub datasets together thus we are not able to use just one of its sub dataset from The Pile data. So I create an independent dataset using The Pile preliminary components.\r\n\r\nWhen I was creating dataset card. I found there is room for creating \/ editing dataset card. I've made it an issue. #2797 \r\n\r\nAlso I am wondering whether the import of The Pile dataset is actively undertaken (because I may need it recently)? 
#1675","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2801\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2800","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2800\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2800\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2800\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2800","id":970819988,"node_id":"MDExOlB1bGxSZXF1ZXN0NzEyNzExNTcx","number":2800,"title":"Support streaming tar files","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Why do we need the custom `readline` for exactly ? 
feel free to add a comment to say why it's needed"],"created_at":1628916017000,"updated_at":1629972150000,"closed_at":1628916957000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2800","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2800","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2800.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2800.patch"},"body":"This PR adds support to stream tar files by using the `fsspec` tar protocol.\r\n\r\nIt also uses the custom `readline` implemented in PR #2786.\r\n\r\nThe corresponding test is implemented in PR #2786.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2800\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2799","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2799\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2799\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2799\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2799","id":970507351,"node_id":"MDU6SXNzdWU5NzA1MDczNTE=","number":2799,"title":"Loading JSON throws ArrowNotImplementedError","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lewtun, thanks for reporting.\r\n\r\nApparently, `pyarrow.json` tries to cast timestamp-like fields in your JSON file to pyarrow timestamp type, and it fails with `ArrowNotImplementedError`.\r\n\r\nI will investigate if there is a way to tell pyarrow not to try that timestamp casting.","I think the issue is more complex than that...\r\n\r\nI just took one of your JSON lines and pyarrow.json read it without problem.","> I just took one of your JSON lines an pyarrow.json read it without problem.\r\n\r\nyes, and for some peculiar reason the error is non-deterministic (i was eventually able to load the whole dataset by just re-running the `load_dataset` cell multiple times \ud83e\udd14)\r\n\r\nthanks for looking into this \ud83d\ude4f !","I think the error is generated by the 
`pyarrow.json.read()` option: `read_options=paj.ReadOptions(block_size=block_size)`...\r\ncc: @lhoestq ","The code works fine on my side.\r\nNot sure what's going on here :\/\r\n\r\nI remember we did a few changes in the JSON loader in #2638 , did you do an update `datasets` when debugging this ?\r\n","OK after upgrading `datasets` to v1.12.1 the issue seems to have gone away. Closing this now :)","Oops, I spoke too soon \ud83d\ude13 \r\n\r\nAfter deleting the cache and trying the above code snippet again I am hitting the same error. You can also reproduce it in the Colab notebook I linked to in the issue description. "],"created_at":1628868708000,"updated_at":1632304346000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nI have created a [dataset](https:\/\/huggingface.co\/datasets\/lewtun\/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below).\r\n\r\nCuriously, there is no problem loading the dataset with `pandas` which suggests some incorrect type inference is being made on the `datasets` side. For example, the stack trace indicates that some URL fields are being parsed as timestamps.\r\n\r\nYou can find a Colab notebook which reproduces the error [here](https:\/\/colab.research.google.com\/drive\/1YUCM0j1vx5ZrouQbYSzal6RwB4-Aoh4o?usp=sharing).\r\n\r\n**Edit:** If one repeatedly tries to load the dataset, it _eventually_ works but I think it would still be good to understand why it fails in the first place :)\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\nfrom huggingface_hub import hf_hub_url\r\nimport pandas as pd\r\n\r\n# returns https:\/\/huggingface.co\/datasets\/lewtun\/github-issues-test\/resolve\/main\/issues-datasets.jsonl\r\ndata_files = hf_hub_url(repo_id=\"lewtun\/github-issues-test\", filename=\"issues-datasets.jsonl\", repo_type=\"dataset\")\r\n# throws ArrowNotImplementedError\r\ndset = load_dataset(\"json\", data_files=data_files, split=\"test\")\r\n# no problem with pandas ...\r\ndf = pd.read_json(data_files, orient=\"records\", lines=True)\r\ndf.head()\r\n```\r\n\r\n## Expected results\r\nI can load any line-separated JSON file, similar to `pandas`.\r\n\r\n## Actual results\r\n```\r\n---------------------------------------------------------------------------\r\nArrowNotImplementedError Traceback (most recent call last)\r\n in ()\r\n----> 1 dset = load_dataset(\"json\", data_files=data_files, split=\"test\")\r\n\r\n9 frames\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/pyarrow\/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowNotImplementedError: JSON conversion to struct, open_issues: int64, closed_issues: int64, state: timestamp[s], created_at: timestamp[s], updated_at: timestamp[s], due_on: timestamp[s], closed_at: timestamp[s]> is not supported\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.11.0\r\n- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.11\r\n- PyArrow version: 3.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2799\/timeline","performed_via_github_app":null,"is_pull_request":false} 
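For context on the `fsspec` mechanics discussed in the threads above (the `compression` parameter and the chained `zip://...::https://...` URLs produced by the patched `os.path.join`), here is a minimal sketch, assuming `fsspec` (plus `aiohttp` for HTTP URLs) is installed; the URLs and file names are hypothetical placeholders, not actual dataset locations:

```python
import fsspec  # assumed available; it is the streaming backend discussed above

# Decompress a remote gzip file on the fly via the `compression` parameter
# (hypothetical URL used purely for illustration):
with fsspec.open("https://example.com/data.jsonl.gz", mode="rt", compression="gzip") as f:
    first_line = f.readline()

# Read one member of a remote zip archive via fsspec URL chaining ("::"),
# i.e. the `zip://` extractor protocol prepended to the base URL:
with fsspec.open("zip://sample.jsonl::https://example.com/sample.zip", mode="rt") as f:
    first_line = f.readline()
```

This only illustrates the underlying `fsspec` calls; it does not rely on any private `datasets` helpers.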
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2798","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2798\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2798\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2798\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2798","id":970493126,"node_id":"MDExOlB1bGxSZXF1ZXN0NzEyNDM3ODc2","number":2798,"title":"Fix streaming zip files","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! I don't fully understand this change @albertvillanova \r\nThe `_extract` method used to return the compound URL that points to the root of the inside of the archive.\r\nThis way users can use the usual os.path.join or other functions to point to the relevant files. 
I don't see why you're using a glob pattern ?","This change is to allow this:\r\n```python\r\ndata_files = f\"https:\/\/huggingface.co\/datasets\/albertvillanova\/datasets-tests-compression\/resolve\/main\/sample.zip\"\r\nds = load_dataset(\"json\", split=\"train\", data_files=data_files, streaming=True)\r\nassert isinstance(ds, IterableDataset)\r\n```\r\nNote that in this case the user will not call os.path.join.\r\n\r\nBefore this PR it gave error because pointing to the root, without any subsequent join, gives error:\r\n```python\r\nfsspec.open(\"zip:\/\/::https:\/\/huggingface.co\/datasets\/albertvillanova\/datasets-tests-compression\/resolve\/main\/sample.zip\")\r\n```"],"created_at":1628867821000,"updated_at":1629123410000,"closed_at":1628869108000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2798","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2798","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2798.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2798.patch"},"body":"Currently, streaming remote zip data files gives `FileNotFoundError` message:\r\n```python\r\ndata_files = f\"https:\/\/huggingface.co\/datasets\/albertvillanova\/datasets-tests-compression\/resolve\/main\/sample.zip\"\r\nds = load_dataset(\"json\", split=\"train\", data_files=data_files, streaming=True)\r\nnext(iter(ds))\r\n```\r\n\r\nThis PR fixes it by adding a glob string.\r\n\r\nThe corresponding test is implemented in PR #2786.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2798\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2797","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2797\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2797\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2797\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2797","id":970331634,"node_id":"MDU6SXNzdWU5NzAzMzE2MzQ=","number":2797,"title":"Make creating\/editing dataset cards easier, by editing on site and dumping info from test 
command.","user":{"login":"richarddwang","id":17963619,"node_id":"MDQ6VXNlcjE3OTYzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17963619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/richarddwang","html_url":"https:\/\/github.com\/richarddwang","followers_url":"https:\/\/api.github.com\/users\/richarddwang\/followers","following_url":"https:\/\/api.github.com\/users\/richarddwang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/richarddwang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/richarddwang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/richarddwang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/richarddwang\/orgs","repos_url":"https:\/\/api.github.com\/users\/richarddwang\/repos","events_url":"https:\/\/api.github.com\/users\/richarddwang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/richarddwang\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1628855689000,"updated_at":1628930529000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"**Is your feature request related to a problem? Please describe.**\r\n\r\nCreating and editing dataset cards should be but not that easy\r\n- If other else know Some information I don't know (bias of dataset, dataset curation, supported dataset, ...), he\/she should know the description on hf.co comes from README.md under github huggingface\/datasets\/datasets\/the dataset, and willing to make a pr to add or fix information.\r\n- Many information is also saved in `dataset_info.json` (citaion, description), but still need to write it down to README.md again.\r\n- Contributor need to pip install and start a local server just for tagging the dataset's size. And contributor may be creating the dataset on lab's server, which can't open a browser. \r\n- if any one proposes a new tag, it doesn't show in the list that another creator see. (a stackoverflow way may be ideal)\r\n- dataset card generator web app doesn't generate the necessary subsecion `Contributions` for us.\r\n\r\n**Describe the solution you'd like**\r\n- Everyone (or at least the author\/contributor) can edit the description, information, tags of the dataset, on hf.co website. Just like wikipedia+stackoverflow\r\n- We can infer the actual data size, citation, data instance, ... 
from `dataset_info.json` and `dataset.arrow` via `dataset-cli test`\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2797\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2796","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2796\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2796\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2796\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2796","id":970235846,"node_id":"MDExOlB1bGxSZXF1ZXN0NzEyMjE1ODM2","number":2796,"title":"add cedr dataset","user":{"login":"naumov-al","id":22640075,"node_id":"MDQ6VXNlcjIyNjQwMDc1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22640075?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/naumov-al","html_url":"https:\/\/github.com\/naumov-al","followers_url":"https:\/\/api.github.com\/users\/naumov-al\/followers","following_url":"https:\/\/api.github.com\/users\/naumov-al\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/naumov-al\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/naumov-al\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/naumov-al\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/naumov-al\/orgs","repos_url":"https:\/\/api.github.com\/users\/naumov-al\/repos","events_url":"https:\/\/api.github.com\/users\/naumov-al\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/naumov-al\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> Hi ! Thanks a lot for adding this one :)\r\n> \r\n> Good job with the dataset card and the dataset script !\r\n> \r\n> I left a few suggestions\r\n\r\nThank you very much for your helpful suggestions. 
I have tried to carry them all out."],"created_at":1628847455000,"updated_at":1630080096000,"closed_at":1630080096000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2796","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2796","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2796.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2796.patch"},"body":null,"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2796\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2794","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2794\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2794\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2794\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2794","id":969728545,"node_id":"MDU6SXNzdWU5Njk3Mjg1NDU=","number":2794,"title":"Warnings and documentation about pickling incorrect","user":{"login":"mbforbes","id":1170062,"node_id":"MDQ6VXNlcjExNzAwNjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1170062?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mbforbes","html_url":"https:\/\/github.com\/mbforbes","followers_url":"https:\/\/api.github.com\/users\/mbforbes\/followers","following_url":"https:\/\/api.github.com\/users\/mbforbes\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mbforbes\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mbforbes\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mbforbes\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mbforbes\/orgs","repos_url":"https:\/\/api.github.com\/users\/mbforbes\/repos","events_url":"https:\/\/api.github.com\/users\/mbforbes\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mbforbes\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1628809753000,"updated_at":1628809771000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\nI have a docs bug and a closely related docs enhancement suggestion!\r\n\r\n### Bug\r\n\r\nThe warning and documentation say \"either `dill` or `pickle`\" for fingerprinting. But it seems that `dill`, which is installed by `datasets` by default, _must_ work, or else the fingerprinting fails.\r\n\r\nWarning:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/450b9174765374111e5c6daab0ed294bc3d9b639\/src\/datasets\/fingerprint.py#L262\r\n\r\nDocs:\r\n\r\n> For a transform to be hashable, it needs to be pickleable using dill or pickle.\r\n> \u2013 [docs](https:\/\/huggingface.co\/docs\/datasets\/processing.html#fingerprinting)\r\n\r\nFor my code, `pickle` works, but `dill` fails. 
The `dill` failure has already been reported in https:\/\/github.com\/huggingface\/datasets\/issues\/2643. However, the `dill` failure causes a hashing failure in the datasets library, without any backing off to `pickle`. This implies that it's not the case that either `dill` **or** `pickle` can work, but rather that `dill` must work if it is installed. I think this is more accurate wording, since it is installed and used by default:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/c93525dc291346e54212567fa72d7d607befe937\/setup.py#L83\r\n\r\n... and the hashing will fail if `dill` fails.\r\n\r\n### Enhancement\r\n\r\nI think it'd be very helpful to add to the documentation how to debug hashing failures. It took me a while to figure out how to diagnose this. There is a very nice two-liner by @lhoestq in https:\/\/github.com\/huggingface\/datasets\/issues\/2516#issuecomment-865173139:\r\n\r\n```python\r\nfrom datasets.fingerprint import Hasher\r\nHasher.hash(my_object)\r\n```\r\n\r\nI think adding this to the docs will help future users quickly debug any hashing troubles of their own :-)\r\n\r\n## Steps to reproduce the bug\r\n\r\n`dill` but not `pickle` hashing failure in https:\/\/github.com\/huggingface\/datasets\/issues\/2643\r\n\r\n## Expected results\r\nIf either `dill` or `pickle` can successfully hash, the hashing will succeed.\r\n\r\n## Actual results\r\nIf `dill` cannot hash, the hashing fails, even when `pickle` could succeed.\r\n\r\n## Environment info\r\n\r\n\r\n- `datasets` version: 1.9.0\r\n- Platform: Linux-5.8.0-1038-gcp-x86_64-with-glibc2.31\r\n- Python version: 3.9.6\r\n- PyArrow version: 4.0.1\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2794\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2793","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2793\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2793\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2793\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2793","id":968967773,"node_id":"MDExOlB1bGxSZXF1ZXN0NzExMDQ4NDY2","number":2793,"title":"Fix type hint for 
data_files","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1628779357000,"updated_at":1628782529000,"closed_at":1628782529000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2793","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2793","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2793.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2793.patch"},"body":"Fix type hint for `data_files` in signatures and docstrings.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2793\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2792","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2792\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2792\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2792\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2792","id":968650274,"node_id":"MDExOlB1bGxSZXF1ZXN0NzEwNzUyMjc0","number":2792,"title":"Update: GooAQ - add train\/val\/test 
splits","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@albertvillanova my tests are failing here:\r\n```\r\ndataset_name = 'gooaq'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True, use_local_dummy_data=True)\r\n\r\ntests\/test_dataset_common.py:234: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests\/test_dataset_common.py:187: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\nWhen I try loading dataset on local machine it works fine. Any suggestions on how can I avoid this error?","Thanks for the help, @albertvillanova! All tests are passing now."],"created_at":1628768418000,"updated_at":1630079925000,"closed_at":1630079894000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2792","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2792","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2792.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2792.patch"},"body":"[GooAQ](https:\/\/github.com\/allenai\/gooaq) dataset was recently updated after splits were added for the same. 
This PR contains new updated GooAQ with train\/val\/test splits and updated README as well.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2792\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2791","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2791\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2791\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2791\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2791","id":968360314,"node_id":"MDExOlB1bGxSZXF1ZXN0NzEwNDgxNDAy","number":2791,"title":"Fix typo in cnn_dailymail","user":{"login":"omaralsayed","id":42531544,"node_id":"MDQ6VXNlcjQyNTMxNTQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42531544?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/omaralsayed","html_url":"https:\/\/github.com\/omaralsayed","followers_url":"https:\/\/api.github.com\/users\/omaralsayed\/followers","following_url":"https:\/\/api.github.com\/users\/omaralsayed\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/omaralsayed\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/omaralsayed\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/omaralsayed\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/omaralsayed\/orgs","repos_url":"https:\/\/api.github.com\/users\/omaralsayed\/repos","events_url":"https:\/\/api.github.com\/users\/omaralsayed\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/omaralsayed\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1628757522000,"updated_at":1628767079000,"closed_at":1628767079000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2791","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2791","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2791.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2791.patch"},"body":null,"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2791\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2790","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2790\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2790\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2790\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2790","id":967772181,"node_id":"MDExOlB1bGxSZXF1ZXN0NzA5OTI3NjM2","number":2790,"title":"Fix typo in 
test_dataset_common","user":{"login":"nateraw","id":32437151,"node_id":"MDQ6VXNlcjMyNDM3MTUx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32437151?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nateraw","html_url":"https:\/\/github.com\/nateraw","followers_url":"https:\/\/api.github.com\/users\/nateraw\/followers","following_url":"https:\/\/api.github.com\/users\/nateraw\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nateraw\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nateraw\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nateraw\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nateraw\/orgs","repos_url":"https:\/\/api.github.com\/users\/nateraw\/repos","events_url":"https:\/\/api.github.com\/users\/nateraw\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nateraw\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1628730629000,"updated_at":1628767889000,"closed_at":1628767889000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2790","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2790","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2790.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2790.patch"},"body":null,"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2790\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2789","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2789\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2789\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2789\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2789","id":967361934,"node_id":"MDExOlB1bGxSZXF1ZXN0NzA5NTQwMzY5","number":2789,"title":"Updated dataset description of DaNE","user":{"login":"KennethEnevoldsen","id":23721977,"node_id":"MDQ6VXNlcjIzNzIxOTc3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23721977?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/KennethEnevoldsen","html_url":"https:\/\/github.com\/KennethEnevoldsen","followers_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/followers","following_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/orgs","repos_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/repos","events_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for finishing it 
@albertvillanova "],"created_at":1628711928000,"updated_at":1628784659000,"closed_at":1628784361000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2789","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2789","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2789.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2789.patch"},"body":null,"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2789\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2788","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2788\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2788\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2788\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2788","id":967149389,"node_id":"MDU6SXNzdWU5NjcxNDkzODk=","number":2788,"title":"How to sample every file in a list of files making up a split in a dataset when loading?","user":{"login":"brijow","id":11220949,"node_id":"MDQ6VXNlcjExMjIwOTQ5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11220949?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/brijow","html_url":"https:\/\/github.com\/brijow","followers_url":"https:\/\/api.github.com\/users\/brijow\/followers","following_url":"https:\/\/api.github.com\/users\/brijow\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/brijow\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/brijow\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/brijow\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/brijow\/orgs","repos_url":"https:\/\/api.github.com\/users\/brijow\/repos","events_url":"https:\/\/api.github.com\/users\/brijow\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/brijow\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! 
This is not possible just with `load_dataset`.\r\n\r\nYou can do something like this instead:\r\n```python\r\nseed=42\r\ndata_files_dict = {\r\n \"train\": [train_file1, train_file2],\r\n \"test\": [test_file1, test_file2],\r\n \"val\": [val_file1, val_file2]\r\n}\r\ndataset = datasets.load_dataset(\r\n \"csv\",\r\n data_files=data_files_dict,\r\n).shuffle(seed=seed)\r\n\r\nsample_dataset = {splitname: split.select(range(8)) for splitname, split in dataset.items()}\r\n```\r\n\r\nAnother alternative is loading each file separately with `split=\"train[:8]\"` and then use `concatenate_datasets` to merge the sample of each file."],"created_at":1628703801000,"updated_at":1629738742000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I am loading a dataset with multiple train, test, and validation files like this:\r\n\r\n```\r\ndata_files_dict = {\r\n \"train\": [train_file1, train_file2],\r\n \"test\": [test_file1, test_file2],\r\n \"val\": [val_file1, val_file2]\r\n}\r\ndataset = datasets.load_dataset(\r\n \"csv\",\r\n data_files=data_files_dict,\r\n split=['train[:8]', 'test[:8]', 'val[:8]']\r\n)\r\n\r\n```\r\n\r\nHowever, this only selects the first 8 rows from train_file1, test_file1, val_file1, since they are the first files in the lists.\r\n\r\nI'm trying to formulate a split argument that can sample from each file specified in my list of files that make up each split.\r\n\r\nIs this type of splitting supported? If so, how can I do it?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2788\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2787","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2787\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2787\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2787\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2787","id":967018406,"node_id":"MDU6SXNzdWU5NjcwMTg0MDY=","number":2787,"title":"ConnectionError: Couldn't reach https:\/\/raw.githubusercontent.com","user":{"login":"jinec","id":39627475,"node_id":"MDQ6VXNlcjM5NjI3NDc1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/39627475?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jinec","html_url":"https:\/\/github.com\/jinec","followers_url":"https:\/\/api.github.com\/users\/jinec\/followers","following_url":"https:\/\/api.github.com\/users\/jinec\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jinec\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jinec\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jinec\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jinec\/orgs","repos_url":"https:\/\/api.github.com\/users\/jinec\/repos","events_url":"https:\/\/api.github.com\/users\/jinec\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jinec\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["the bug code locate in \uff1a\r\n if data_args.task_name is not None:\r\n # Downloading and loading a dataset from the hub.\r\n datasets = load_dataset(\"glue\", data_args.task_name, cache_dir=model_args.cache_dir)","Hi @jinec,\r\n\r\nFrom time to time we get this kind of `ConnectionError` coming from the github.com website: https:\/\/raw.githubusercontent.com\r\n\r\nNormally, it should work if you wait a little and then retry.\r\n\r\nCould you please confirm if the problem persists?","cannot connect\uff0ceven by Web browser\uff0cplease check that there is some problems\u3002","I can access https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.7.0\/datasets\/glue\/glue.py without problem...","> I can access https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.7.0\/datasets\/glue\/glue.py without problem...\r\n\r\nI can not access https:\/\/raw.githubusercontent.com\/huggingface\/datasets either, I am in China","Finally i can access it, by the superfast software. Thanks"],"created_at":1628698741000,"updated_at":1629299359000,"closed_at":1629299358000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hello,\r\nI am trying to run run_glue.py and it gives me this error -\r\n\r\nTraceback (most recent call last):\r\n File \"E:\/BERT\/pytorch_hugging\/transformers\/examples\/pytorch\/text-classification\/run_glue.py\", line 546, in \r\n main()\r\n File \"E:\/BERT\/pytorch_hugging\/transformers\/examples\/pytorch\/text-classification\/run_glue.py\", line 250, in main\r\n datasets = load_dataset(\"glue\", data_args.task_name, cache_dir=model_args.cache_dir)\r\n File \"C:\\install\\Anaconda3\\envs\\huggingface\\lib\\site-packages\\datasets\\load.py\", line 718, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"C:\\install\\Anaconda3\\envs\\huggingface\\lib\\site-packages\\datasets\\load.py\", line 320, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"C:\\install\\Anaconda3\\envs\\huggingface\\lib\\site-packages\\datasets\\utils\\file_utils.py\", line 291, in cached_path\r\n use_auth_token=download_config.use_auth_token,\r\n File \"C:\\install\\Anaconda3\\envs\\huggingface\\lib\\site-packages\\datasets\\utils\\file_utils.py\", line 623, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.7.0\/datasets\/glue\/glue.py\r\n\r\nTrying to do python run_glue.py --model_name_or_path\r\nbert-base-cased\r\n--task_name\r\nmrpc\r\n--do_train\r\n--do_eval\r\n--max_seq_length\r\n128\r\n--per_device_train_batch_size\r\n32\r\n--learning_rate\r\n2e-5\r\n--num_train_epochs\r\n3\r\n--output_dir\r\n.\/tmp\/mrpc\/\r\n\r\nIs this something on my end? 
From what I can tell, this was re-fixeded by @fullyz a few months ago.\r\nThank you!\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2787\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2786","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2786\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2786\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2786\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2786","id":966282934,"node_id":"MDExOlB1bGxSZXF1ZXN0NzA4NTQwMzU0","number":2786,"title":"Support streaming compressed files","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1628672526000,"updated_at":1629178119000,"closed_at":1629095779000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2786","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2786","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2786.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2786.patch"},"body":"Add support to stream compressed files (current options in fsspec):\r\n- bz2\r\n- lz4\r\n- xz\r\n- zstd\r\n\r\ncc: @lewtun ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2786\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2783","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2783\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2783\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2783\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2783","id":965461382,"node_id":"MDExOlB1bGxSZXF1ZXN0NzA3NzcxOTM3","number":2783,"title":"Add KS task to 
SUPERB","user":{"login":"anton-l","id":26864830,"node_id":"MDQ6VXNlcjI2ODY0ODMw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26864830?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/anton-l","html_url":"https:\/\/github.com\/anton-l","followers_url":"https:\/\/api.github.com\/users\/anton-l\/followers","following_url":"https:\/\/api.github.com\/users\/anton-l\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/anton-l\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/anton-l\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/anton-l\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/anton-l\/orgs","repos_url":"https:\/\/api.github.com\/users\/anton-l\/repos","events_url":"https:\/\/api.github.com\/users\/anton-l\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/anton-l\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["thanks a lot for implementing this @anton-l !!\r\n\r\ni won't have time to review this while i'm away, so happy for @albertvillanova and @patrickvonplaten to decide when to merge :)","@albertvillanova thanks! Everything should be ready now :)","> The _background_noise_\/_silence_ audio files are much longer than others, so they require some sort of slicing for downstream training. I decided to leave the implementation of that up to the users, since TFDS and s3prl take different approaches (either slicing wavs deterministically, or subsampling randomly at runtime)\r\n\r\n@anton-l I was thinking that maybe we could give some hints in the dataset card (in a Usage section); something similar as for diarization: https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/superb\/README.md#example-of-usage\r\nNote that for diarization it is not yet finished: we have to test it and then provide an end-to-end example: https:\/\/github.com\/huggingface\/datasets\/pull\/2661\/files#r680224909 ","@albertvillanova yeah, I'm not sure how to best implement it in pure `datasets` yet. It's something like this, where `sample_noise()` needs to be called from a pytorch batch collator or other framework-specific variant:\r\n\r\n```python\r\ndef map_to_array(example):\r\n import soundfile as sf\r\n\r\n speech_array, sample_rate = sf.read(example[\"file\"])\r\n example[\"speech\"] = speech_array\r\n example[\"sample_rate\"] = sample_rate\r\n return example\r\n\r\n\r\ndef sample_noise(example):\r\n # Use a version of this function in a stateless way to extract random 1 sec slices of background noise\r\n # on each epoch\r\n from random import randint\r\n\r\n # _silence_ audios are longer than 1 sec\r\n if example[\"label\"] == \"_silence_\":\r\n random_offset = randint(0, len(example[\"speech\"]) - example[\"sample_rate\"] - 1)\r\n example[\"speech\"] = example[\"speech\"][random_offset : random_offset + example[\"sample_rate\"]]\r\n\r\n return example\r\n```","I see... Yes, not trivial indeed. Maybe for the moment you could add those functions above to the README (as it is the case for now in diarization)? 
What do you think?"],"created_at":1628633647000,"updated_at":1628786701000,"closed_at":1628713157000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2783","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2783","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2783.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2783.patch"},"body":"Add the KS (keyword spotting) task as described in the [SUPERB paper](https:\/\/arxiv.org\/abs\/2105.01051).\r\n\r\n- [s3prl instructions](https:\/\/github.com\/s3prl\/s3prl\/blob\/master\/s3prl\/downstream\/README.md#ks-keyword-spotting)\r\n- [s3prl implementation](https:\/\/github.com\/s3prl\/s3prl\/blob\/master\/s3prl\/downstream\/speech_commands\/dataset.py)\r\n- [TFDS implementation](https:\/\/github.com\/tensorflow\/datasets\/blob\/master\/tensorflow_datasets\/audio\/speech_commands.py)\r\n\r\nSome notable quirks:\r\n- The dataset is originally single-archive (train+val+test all in one), but the test set has a \"canonical\" distribution in a separate archive, which is also used here (see `_split_ks_files()`). \r\n- The `_background_noise_`\/`_silence_` audio files are much longer than others, so they require some sort of slicing for downstream training. I decided to leave the implementation of that up to the users, since TFDS and s3prl take different approaches (either slicing wavs deterministically, or subsampling randomly at runtime)\r\n\r\nRelated to #2619.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2783\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2782","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2782\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2782\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2782\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2782","id":964858439,"node_id":"MDExOlB1bGxSZXF1ZXN0NzA3MjQ5NDE5","number":2782,"title":"Fix renaming of corpus_bleu 
args","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1628593354000,"updated_at":1628594167000,"closed_at":1628594167000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2782","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2782","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2782.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2782.patch"},"body":"Last `sacrebleu` release (v2.0.0) has renamed `sacrebleu.corpus_bleu` args from `(sys_stream, ref_streams)` to `(hipotheses, references)`: https:\/\/github.com\/mjpost\/sacrebleu\/pull\/152\/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15\r\n\r\nThis PR passes the args without parameter names, so that it is valid for all versions of `sacrebleu`.\r\n\r\nThis is a partial hotfix of #2781.\r\n\r\nClose #2781.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2782\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2781","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2781\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2781\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2781\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2781","id":964805351,"node_id":"MDU6SXNzdWU5NjQ4MDUzNTE=","number":2781,"title":"Latest v2.0.0 release of sacrebleu has broken some 
metrics","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1628589581000,"updated_at":1628594167000,"closed_at":16285
94167000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nAfter `sacrebleu` v2.0.0 release (see changes here: https:\/\/github.com\/mjpost\/sacrebleu\/pull\/152\/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15), some of `datasets` metrics are broken:\r\n- Default tokenizer `sacrebleu.DEFAULT_TOKENIZER` no longer exists:\r\n - #2739\r\n - #2778\r\n- Bleu tokenizers are no longer accessible with `sacrebleu.TOKENIZERS`:\r\n - #2779\r\n- `corpus_bleu` args have been renamed from `(sys_stream, ref_streams)` to `(hipotheses, references)`: \r\n - #2782 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2781\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2780","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2780\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2780\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2780\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2780","id":964794764,"node_id":"MDExOlB1bGxSZXF1ZXN0NzA3MTk2NjA3","number":2780,"title":"VIVOS dataset for Vietnamese ASR","user":{"login":"binh234","id":57580923,"node_id":"MDQ6VXNlcjU3NTgwOTIz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/57580923?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/binh234","html_url":"https:\/\/github.com\/binh234","followers_url":"https:\/\/api.github.com\/users\/binh234\/followers","following_url":"https:\/\/api.github.com\/users\/binh234\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/binh234\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/binh234\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/binh234\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/binh234\/orgs","repos_url":"https:\/\/api.github.com\/users\/binh234\/repos","events_url":"https:\/\/api.github.com\/users\/binh234\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/binh234\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1628588856000,"updated_at":1628766570000,"closed_at":1628766570000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2780","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2780","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2780.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2780.patch"},"body":null,"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2780\/timeline","performed_via_github_app":null,"is_pull_request":true} 
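For context on the `sacrebleu` breakage tracked in #2781 and the fix in #2782 above: a minimal sketch of a version-agnostic `corpus_bleu` call that passes the arguments positionally, so it works whether the keyword names are `(sys_stream, ref_streams)` (pre-2.0.0) or `(hypotheses, references)` (2.0.0+). The example sentences are placeholders, not taken from any dataset or metric script.

```python
import sacrebleu

# Placeholder system outputs and one reference stream aligned with them.
hypotheses = ["the cat sat on the mat"]
references = [["the cat is sitting on the mat"]]

# Positional arguments avoid the keyword names that were renamed in sacrebleu v2.0.0.
score = sacrebleu.corpus_bleu(hypotheses, references)
print(score.score)
```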
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2779","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2779\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2779\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2779\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2779","id":964775085,"node_id":"MDExOlB1bGxSZXF1ZXN0NzA3MTgwNTgw","number":2779,"title":"Fix sacrebleu tokenizers","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1628587467000,"updated_at":1628593388000,"closed_at":1628593074000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2779","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2779","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2779.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2779.patch"},"body":"Last `sacrebleu` release (v2.0.0) has removed `sacrebleu.TOKENIZERS`: https:\/\/github.com\/mjpost\/sacrebleu\/pull\/152\/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15\r\n\r\nThis PR makes a hot fix of the bug by using a private function in `sacrebleu`: `sacrebleu.metrics.bleu._get_tokenizer()`.\r\n\r\nEventually, this should be further fixed in order to use only public functions.\r\n\r\nThis is a partial hotfix of #2781.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2779\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2778","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2778\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2778\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2778\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2778","id":964737422,"node_id":"MDExOlB1bGxSZXF1ZXN0NzA3MTQ5MTk2","number":2778,"title":"Do not pass tokenize to 
sacrebleu","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1628584837000,"updated_at":1628589817000,"closed_at":1628589817000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2778","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2778","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2778.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2778.patch"},"body":"Last `sacrebleu` release (v2.0.0) has removed `sacrebleu.DEFAULT_TOKENIZER`: https:\/\/github.com\/mjpost\/sacrebleu\/pull\/152\/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15\r\n\r\nThis PR does not pass `tokenize` to `sacrebleu` (note that the user cannot pass it anyway) and `sacrebleu` will use its default, no matter where it is and how it is called.\r\n\r\nRelated to #2739.\r\n\r\nThis is a partial hotfix of #2781.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2778\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2777","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2777\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2777\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2777\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2777","id":964696380,"node_id":"MDExOlB1bGxSZXF1ZXN0NzA3MTEzNzg3","number":2777,"title":"Use packaging to handle 
versions","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1628581899000,"updated_at":1629294987000,"closed_at":1629294987000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2777","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2777","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2777.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2777.patch"},"body":"Use packaging module to handle\/validate\/check versions of Python packages.\r\n\r\nRelated to #2769.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2777\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2776","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2776\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2776\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2776\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2776","id":964400596,"node_id":"MDU6SXNzdWU5NjQ0MDA1OTY=","number":2776,"title":"document `config.HF_DATASETS_OFFLINE` and 
precedence","user":{"login":"stas00","id":10676103,"node_id":"MDQ6VXNlcjEwNjc2MTAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10676103?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stas00","html_url":"https:\/\/github.com\/stas00","followers_url":"https:\/\/api.github.com\/users\/stas00\/followers","following_url":"https:\/\/api.github.com\/users\/stas00\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stas00\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stas00\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stas00\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stas00\/orgs","repos_url":"https:\/\/api.github.com\/users\/stas00\/repos","events_url":"https:\/\/api.github.com\/users\/stas00\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stas00\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1628544197000,"updated_at":1628544197000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"https:\/\/github.com\/huggingface\/datasets\/pull\/1976 implemented `HF_DATASETS_OFFLINE`, but:\r\n1. `config.HF_DATASETS_OFFLINE` is not documented\r\n2. the precedence is not documented (env, config)\r\n\r\nI'm thinking it probably should be similar to what it says https:\/\/huggingface.co\/docs\/datasets\/loading_datasets.html#from-the-huggingface-hub about `datasets.config.IN_MEMORY_MAX_SIZE`:\r\n\r\nQuote:\r\n> The default in \ud83e\udd17 Datasets is to memory-map the dataset on disk unless you set datasets.config.IN_MEMORY_MAX_SIZE different from 0 bytes (default). In that case, the dataset will be copied in-memory if its size is smaller than datasets.config.IN_MEMORY_MAX_SIZE bytes, and memory-mapped otherwise. 
This behavior can be enabled by setting either the configuration option datasets.config.IN_MEMORY_MAX_SIZE (higher precedence) or the environment variable HF_DATASETS_IN_MEMORY_MAX_SIZE (lower precedence) to nonzero.\r\n\r\nContext: trying to use `config.HF_DATASETS_OFFLINE` here:\r\nhttps:\/\/github.com\/bigscience-workshop\/Megatron-DeepSpeed\/pull\/48\r\nbut are uncertain if it's safe, since it's not documented as a public API.\r\n\r\nThank you!\r\n\r\n@lhoestq, @albertvillanova ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2776\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2775","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2775\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2775\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2775\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2775","id":964303626,"node_id":"MDU6SXNzdWU5NjQzMDM2MjY=","number":2775,"title":"`generate_random_fingerprint()` deterministic with \ud83e\udd17Transformers' `set_seed()`","user":{"login":"mbforbes","id":1170062,"node_id":"MDQ6VXNlcjExNzAwNjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1170062?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mbforbes","html_url":"https:\/\/github.com\/mbforbes","followers_url":"https:\/\/api.github.com\/users\/mbforbes\/followers","following_url":"https:\/\/api.github.com\/users\/mbforbes\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mbforbes\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mbforbes\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mbforbes\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mbforbes\/orgs","repos_url":"https:\/\/api.github.com\/users\/mbforbes\/repos","events_url":"https:\/\/api.github.com\/users\/mbforbes\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mbforbes\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I dug into what I believe is the root of this issue and added a repro in my comment. If this is better addressed as a cross-team issue, let me know and I can open an issue in the Transformers repo","Hi !\r\n\r\nIMO we shouldn't try to modify `set_seed` from transformers but maybe make `datasets` have its own RNG just to generate random fingerprints.\r\n\r\nAny opinion on this @LysandreJik ?","Yes, this sounds good @lhoestq "],"created_at":1628537331000,"updated_at":1629966654000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\n**Update:** I dug into this to try to reproduce the underlying issue, and I believe it's that `set_seed()` from the `transformers` library makes the \"random\" fingerprint identical each time. 
I believe this is still a bug, because `datasets` is used exactly this way in `transformers` after `set_seed()` has been called, and I think that using `set_seed()` is a standard procedure to aid reproducibility. I've added more details to reproduce this below.\r\n\r\nHi there! I'm using my own local dataset and custom preprocessing function. My preprocessing function seems to be unpickle-able, perhaps because it is from a closure (will debug this separately). I get this warning, which is expected:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/450b9174765374111e5c6daab0ed294bc3d9b639\/src\/datasets\/fingerprint.py#L260-L265\r\n\r\nHowever, what's not expected is that the `datasets` actually _does_ seem to cache and reuse this dataset between runs! After that line, the next thing that's logged looks like:\r\n\r\n```text\r\n Loading cached processed dataset at \/home\/xxx\/.cache\/huggingface\/datasets\/csv\/default-xxx\/0.0.0\/xxx\/cache-xxx.arrow\r\n```\r\n\r\nThe path is exactly the same each run (e.g., last 26 runs).\r\n\r\nThis becomes a problem because I'll pass in the `--max_eval_samples` flag to the HuggingFace example script I'm running off of ([run_swag.py](https:\/\/github.com\/huggingface\/transformers\/blob\/master\/examples\/pytorch\/multiple-choice\/run_swag.py)). The fact that the cached dataset is reused means this flag gets ignored. I'll try to load 100 examples, and it will load the full cached 1,000,000.\r\n\r\nI think that\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/450b9174765374111e5c6daab0ed294bc3d9b639\/src\/datasets\/fingerprint.py#L248\r\n\r\n... is actually consistent because randomness is being controlled in HuggingFace\/Transformers for reproducibility. I've added a demo of this below.\r\n\r\n## Steps to reproduce the bug\r\n\r\n```python\r\n# Contents of print_fingerprint.py\r\nfrom transformers import set_seed\r\nfrom datasets.fingerprint import generate_random_fingerprint\r\nset_seed(42)\r\nprint(generate_random_fingerprint())\r\n```\r\n\r\n```bash\r\nfor i in {0..10}; do\r\n python print_fingerprint.py\r\ndone\r\n\r\n1c80317fa3b1799d\r\n1c80317fa3b1799d\r\n1c80317fa3b1799d\r\n1c80317fa3b1799d\r\n1c80317fa3b1799d\r\n1c80317fa3b1799d\r\n1c80317fa3b1799d\r\n1c80317fa3b1799d\r\n1c80317fa3b1799d\r\n1c80317fa3b1799d\r\n1c80317fa3b1799d\r\n```\r\n\r\n## Expected results\r\nAfter the \"random hash\" warning is emitted, a random hash is generated, and no outdated cached datasets are reused.\r\n\r\n## Actual results\r\nAfter the \"random hash\" warning is emitted, an identical hash is generated each time, and an outdated cached dataset is reused each run.\r\n\r\n## Environment info\r\n\r\n\r\n- `datasets` version: 1.9.0\r\n- Platform: Linux-5.8.0-1038-gcp-x86_64-with-glibc2.31\r\n- Python version: 3.9.6\r\n- PyArrow version: 4.0.1","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2775\/timeline","performed_via_github_app":null,"is_pull_request":false} 
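The fix direction suggested in the comments above (give `datasets` its own RNG for fingerprints instead of patching `set_seed`) could look roughly like the sketch below. The helper name is hypothetical and this is not the actual library code; it only illustrates drawing entropy from the OS so that `transformers.set_seed()` cannot make successive "random" fingerprints collide:

```python
# Illustrative sketch only (hypothetical helper, not the real datasets internals):
# draw the fingerprint from the OS entropy pool rather than the globally seeded
# random/numpy state, so set_seed(42) no longer yields the same hash every run.
import secrets

def generate_independent_fingerprint(nbits: int = 64) -> str:
    # secrets is unaffected by random.seed / numpy.random.seed / torch.manual_seed
    return secrets.token_hex(nbits // 8)

print(generate_independent_fingerprint())  # differs across runs even after set_seed(42)
```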
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2774","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2774\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2774\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2774\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2774","id":963932199,"node_id":"MDExOlB1bGxSZXF1ZXN0NzA2NDY2MDc0","number":2774,"title":"Prevent .map from using multiprocessing when loading from cache","user":{"login":"thomasw21","id":24695242,"node_id":"MDQ6VXNlcjI0Njk1MjQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24695242?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomasw21","html_url":"https:\/\/github.com\/thomasw21","followers_url":"https:\/\/api.github.com\/users\/thomasw21\/followers","following_url":"https:\/\/api.github.com\/users\/thomasw21\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomasw21\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomasw21\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomasw21\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomasw21\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomasw21\/repos","events_url":"https:\/\/api.github.com\/users\/thomasw21\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomasw21\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I'm guessing tests are failling, because this was pushed before https:\/\/github.com\/huggingface\/datasets\/pull\/2779 was merged? cc @albertvillanova ","Hi @thomasw21, yes you are right: those failing tests were fixed with #2779.\r\n\r\nWould you mind to merge current upstream master branch and push again?\r\n```\r\ngit checkout sequential_map_when_cached\r\ngit fetch upstream master\r\ngit merge upstream\/master\r\ngit push -u origin sequential_map_when_cached\r\n```","Thanks for working on this ! I'm sure we can figure something out ;)\r\n\r\nCurrently `map` starts a process to apply the map function on each shard. If the shard has already been processed, then the process that has been spawned loads the processed shard from the cache and returns it.\r\n\r\nI think we should be able to simply not start a process if a shard is already processed and cached.\r\nThis way:\r\n- you won't need to specify `sequential=True`\r\n- it won't create new processes if the dataset is already processed and cached\r\n- it will properly reload each processed shard that is cached\r\n\r\nTo know if we have to start a new process for a shard you can use the function `update_fingerprint` from fingerprint.py to know the expected fingerprint of the processed shard.\r\nThen, if the shard has already been processed, there will be a cache file named `cached-.arrow` and you can load it with\r\n```\r\nDataset.from_file(path_to_cache_file, info=self.info, split=self.split)\r\n```\r\n\r\nLet me know if that makes sense !","Yes that makes total sense, I tried to initially do that, except the way fingerprint is handled doesn't allow to easily manipulate such a field. Typically the fingerprinting is handled in `@fingerprint_transform` which has a bunch of params that aren't quite easy to extract. 
Those params are used to manipulate args, kwargs in fancy ways in order to finally obtain a dictionary used for fingerprint. I could duplicate everything, but this look like a very risky thing to do. I'll take a look if I can make something work with `inspect` if I can make a very simple wrapper.\r\n\r\nA much more simpler solution I think is adding an optional `shard: Optional[int] = None` parameter. If None, we use the number of proc as the number of shards, otherwise we pass down the expected number of shards and use either sequential\/multiprocessing (with arbitrary number of workers) to load the shards? This would allow the weird case where one wants a large number of shards with a limited amount of processes. Not the smartest thing to do, but it's not an absurd behaviour. Would this be acceptable?","@lhoestq friendly ping as I feel it's up for review.","The CI error is unrelated to the changes of this PR - it looks like an SSL issue with conda"],"created_at":1628511098000,"updated_at":1631182828000,"closed_at":1631182828000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2774","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2774","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2774.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2774.patch"},"body":"## Context\r\n\r\nOn our setup, we use different setup to train vs proprocessing datasets. Usually we are able to obtain a high number of cpus to preprocess, which allows us to use `num_proc` however we can't use as many during training phase. Currently if we use `num_proc={whatever the preprocessing value was}` we load from cache, but we get:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"lib\/python3.8\/site-packages\/multiprocess\/pool.py\", line 131, in worker\r\n put((job, i, result))\r\n File \"lib\/python3.8\/site-packages\/multiprocess\/queues.py\", line 371, in put\r\n self._writer.send_bytes(obj)\r\n File \"lib\/python3.8\/site-packages\/multiprocess\/connection.py\", line 203, in send_bytes\r\n self._send_bytes(m[offset:offset + size])\r\n File \"lib\/python3.8\/site-packages\/multiprocess\/connection.py\", line 414, in _send_bytes\r\n self._send(header + buf)\r\n File \"lib\/python3.8\/site-packages\/multiprocess\/connection.py\", line 371, in _send\r\n n = write(self._handle, buf)\r\nBrokenPipeError: [Errno 32] Broken pipe\r\n```\r\n\r\nOur current guess, is that we're spawning too many processes compared to the number of cpus available, and it's running OOM. Also we're loading this in DDP setting which means that for each gpu, I need to spawn a high number of processes to match the preprocessing fingerprint.\r\n\r\nInstead what we suggest:\r\n - Allow loading shard sequentially, sharing the same fingerprint as the multiprocessed one, in order to leverage multiprocessing when we actually generate the cache, and remove it when loading from cache.\r\n\r\n## Current issues\r\n\r\n~I'm having a hard time making fingerprints match. 
For some reason, the multiprocessing and the sequential version generate two different hash.~\r\n\r\n**EDIT**: Turns out multiprocessing and sequential have different `transform` value for fingerprinting (check `fingerprint_transform`) when running `_map_single`:\r\n - sequential : `datasets.arrow_dataset.Dataset._map_single`\r\n - multiprocessing: `datasets.arrow_dataset._map_single`\r\n \r\n This discrepancy is caused by multiprocessing pickling the transformer function, it doesn't seem to keep the `Dataset` hierarchy. I'm still unclear on why `func.__qual_name__` isn't handled correctly in multiprocessing. But replacing `__qualname__` by `__name__` fixes the issue.\r\n\r\n## What was done\r\n\r\n~We try to prevent the usage of multiprocessing when loading a dataset. Instead we load all cached shards sequentially.~\r\n\r\nI couldn't find a nice way to obtain the cached_file_name and check they all exist before deciding to use the multiprocessing flow or not. Instead I expose an optional boolean `sequential` in `map` method.\r\n\r\n## TODO\r\n - [x] Check that the multiprocessed version and the sequential version output the same output\r\n - [x] Check that sequential can load multiprocessed\r\n - [x] Check that multiprocessed can load sequential\r\n \r\n ## Test\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nfrom multiprocessing import Pool\r\nimport random\r\n\r\ndef process(batch, rng):\r\n length = len(batch[\"text\"])\r\n return {**batch, \"processed_text\": [f\"PROCESSED {rng.random()}\" for _ in range(length)]}\r\n\r\ndataset = load_dataset(\"stas\/openwebtext-10k\", split=\"train\")\r\nprint(dataset.column_names)\r\nprint(type(dataset))\r\n\r\nrng = random.Random(42)\r\ndataset1 = dataset.map(process, batched=True, batch_size=50, num_proc=4, fn_kwargs={\"rng\": rng})\r\n\r\n# This one should be loaded from cache\r\nrng = random.Random(42)\r\ndataset2 = dataset.map(process, batched=True, batch_size=50, num_proc=4, fn_kwargs={\"rng\": rng}, sequential=True)\r\n\r\n# Just to check that the random generator was correct\r\nprint(dataset1[-1][\"processed_text\"])\r\nprint(dataset2[-1][\"processed_text\"])\r\n```\r\n \r\n ## Other solutions\r\n\r\nI chose to load everything sequentially, but we can probably find a way to load shards in parallel using another number of workers (essentially this would be an argument not used for fingerprinting, allowing to allow `m` shards using `n` processes, which would be very useful when same dataset have to be loaded on two different setup, and we still want to leverage cache).\r\n\r\nAlso we can use a env variable similarly to `TOKENIZERS_PARALLELISM` as this seems generally setup related (though this changes slightly if we use multiprocessing).\r\ncc @lhoestq (since I had asked you previously on `num_proc` being used for fingerprinting). 
Don't know if this is acceptable.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2774\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2773","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2773\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2773\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2773\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2773","id":963730497,"node_id":"MDU6SXNzdWU5NjM3MzA0OTc=","number":2773,"title":"Remove dataset_infos.json","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"},{"id":2067400324,"node_id":"MDU6TGFiZWwyMDY3NDAwMzI0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/generic%20discussion","name":"generic discussion","color":"c5def5","default":false,"description":"Generic discussion on the library"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1628494999000,"updated_at":1628494999000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"**Is your feature request related to a problem? 
Please describe.**\r\nAs discussed, there are infos in the `dataset_infos.json` which are redundant and we could have them only in the README file.\r\n\r\nOthers could be migrated to the README, like: \"dataset_size\", \"size_in_bytes\", \"download_size\", \"splits.split_name.[num_bytes, num_examples]\",...\r\n\r\nHowever, there are others that do not seem too meaningful in the README, like the checksums.\r\n\r\n**Describe the solution you'd like**\r\nOpen a discussion to decide what to do with the `dataset_infos.json` files: which information to be migrated and\/or which information to be kept.\r\n\r\ncc: @julien-c @lhoestq ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2773\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2772","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2772\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2772\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2772\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2772","id":963348834,"node_id":"MDU6SXNzdWU5NjMzNDg4MzQ=","number":2772,"title":"Remove returned feature constrain","user":{"login":"PosoSAgapo","id":33200481,"node_id":"MDQ6VXNlcjMzMjAwNDgx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33200481?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/PosoSAgapo","html_url":"https:\/\/github.com\/PosoSAgapo","followers_url":"https:\/\/api.github.com\/users\/PosoSAgapo\/followers","following_url":"https:\/\/api.github.com\/users\/PosoSAgapo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/PosoSAgapo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/PosoSAgapo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/PosoSAgapo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/PosoSAgapo\/orgs","repos_url":"https:\/\/api.github.com\/users\/PosoSAgapo\/repos","events_url":"https:\/\/api.github.com\/users\/PosoSAgapo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/PosoSAgapo\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1628395290000,"updated_at":1628412481000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"In the current version, the returned value of the map function has to be list or ndarray. However, this makes it unsuitable for many tasks. In NLP, many features are sparse like verb words, noun chunks, if we want to assign different values to different words, which will result in a large sparse matrix if we only score useful words like verb words. \r\n\r\nMostly, when using it on large scale, saving it as a whole takes a lot of disk storage and making it hard to read, the normal method is saving it in sparse form. 
However, the NumPy does not support sparse, therefore I have to use PyTorch or scipy to transform a matrix into special sparse form, which is not a form that can be transformed into list or ndarry. This violates the feature constraints of the map function. \r\n\r\nI do appreciate the convenience of Datasets package, but I do not think the compulsory datatype constrain is necessary, in some cases, we just cannot transform it into a list or ndarray due to some reasons. Any way to fix this? Or what I can do to disable the compulsory datatype constrain?\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2772\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2771","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2771\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2771\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2771\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2771","id":963257036,"node_id":"MDExOlB1bGxSZXF1ZXN0NzA1OTExMDMw","number":2771,"title":"[WIP][Common Voice 7] Add common voice 7.0","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! 
I think the name `common_voice_7` is fine :)\r\nMoreover if the dataset_infos.json is missing I'm pretty sure you don't need to specify `ignore_verifications=True`"],"created_at":1628352070000,"updated_at":1629294237000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2771","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2771","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2771.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2771.patch"},"body":"This PR allows to load the new common voice dataset manually as explained when doing: \r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\".\/datasets\/datasets\/common_voice_7\", \"ab\")\r\n```\r\n\r\n=>\r\n\r\n```\r\n Please follow the manual download instructions:\r\n\r\n You need to manually the dataset from `https:\/\/commonvoice.mozilla.org\/en\/datasets`.\r\n Make sure you choose the version `Common Voice Corpus 7.0`.\r\n Choose a language of your choice and find the corresponding language-id, *e.g.*, `Abkhaz` with language-id `ab`. The following language-ids are available:\r\n\r\n ['ab', 'ar', 'as', 'az', 'ba', 'bas', 'be', 'bg', 'br', 'ca', 'cnh', 'cs', 'cv', 'cy', 'de', 'dv', 'el', 'en', 'eo', 'es', 'et', 'eu', 'fa', 'fi', 'fr', 'fy-NL', 'ga-IE', 'gl', 'gn', 'ha', 'hi', 'hsb', 'hu', 'hy-AM', 'ia', 'id', 'it', 'ja', 'ka', 'kab', 'kk', 'kmr', 'ky', 'lg', 'lt', 'lv', 'mn', 'mt', 'nl', 'or', 'pa-IN', 'pl', 'pt', 'rm-sursilv', 'rm-vallader', 'ro', 'ru', 'rw', 'sah', 'sk', 'sl', 'sr', 'sv-SE', 'ta', 'th', 'tr', 'tt', 'ug', 'uk', 'ur', 'uz', 'vi', 'vot', 'zh-CN', 'zh-HK', 'zh-TW']\r\n\r\n Next, you will have to enter your email address to download the dataset in the `tar.gz` format. Save the file under .\r\n The file should then be extracted with: ``tar -xvzf `` which will extract a folder called ``cv-corpus-7.0-2021-07-21``.\r\n The dataset can then be loaded with `datasets.load_dataset(\"common_voice\", , data_dir=\"\", ignore_verifications=True).\r\n```\r\n\r\nHaving followed those instructions one can then download the data as follows: \r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\".\/datasets\/datasets\/common_voice_7\", \"ab\", data_dir=\".\/cv-corpus-7.0-2021-07-21\/\", ignore_verifications=True)\r\n```\r\n\r\n## TODO\r\n- [ ] Discuss naming. Is the name ok here \"common_voice_7\"? The dataset script differs only really in one point from `common_voice.py` in that all the metadata is different (more hours etc...) and that it has to use manual data dir for now\r\n- [ ] Ideally we should get a bundled download link. For `common_voice.py` there is a bundled download link: `https:\/\/voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com\/cv-corpus-6.1-2020-12-11\/{}.tar.gz` that allows one to directly download the data. However such a link is missing for Common Voice 7. I guess we should try to contact common voice about it and ask whether we could host the data or help otherwise somehow. See: https:\/\/github.com\/common-voice\/common-voice-bundler\/issues\/15 cc @yjernite \r\n- [ ] I did not compute the dataset.json and it would mean that I'd have to download 76 datasets totalling around 1TB manually before running the checksum command. This just takes too much time. For now the user will have to add a `ignore_verifications=True` to download the data. 
This step would also be much easier if we could get a bundled link\r\n- [ ] Add dummy data","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2771\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2770","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2770\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2770\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2770\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2770","id":963246512,"node_id":"MDExOlB1bGxSZXF1ZXN0NzA1OTAzMzIy","number":2770,"title":"Add support for fast tokenizer in BertScore","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1628348403000,"updated_at":1628512483000,"closed_at":1628507785000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2770","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2770","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2770.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2770.patch"},"body":"This PR adds support for a fast tokenizer in BertScore, which has been added recently to the lib.\r\nFixes #2765 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2770\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2769","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2769\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2769\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2769\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2769","id":963240802,"node_id":"MDExOlB1bGxSZXF1ZXN0NzA1ODk5MTYy","number":2769,"title":"Allow PyArrow from 
source","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1628346404000,"updated_at":1628523519000,"closed_at":1628523519000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2769","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2769","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2769.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2769.patch"},"body":"When installing pyarrow from source the version is:\r\n\r\n```python\r\n>>> import pyarrow; pyarrow.__version__\r\n'2.1.0.dev612'\r\n```\r\n\r\n-> however this breaks the install check at init of `datasets`. 
This PR makes sure that everything coming after the last `'.'` is removed.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2769\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2768","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2768\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2768\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2768\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2768","id":963229173,"node_id":"MDU6SXNzdWU5NjMyMjkxNzM=","number":2768,"title":"`ArrowInvalid: Added column's length must match table's length.` after using `select`","user":{"login":"lvwerra","id":8264887,"node_id":"MDQ6VXNlcjgyNjQ4ODc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8264887?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lvwerra","html_url":"https:\/\/github.com\/lvwerra","followers_url":"https:\/\/api.github.com\/users\/lvwerra\/followers","following_url":"https:\/\/api.github.com\/users\/lvwerra\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lvwerra\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lvwerra\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lvwerra\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lvwerra\/orgs","repos_url":"https:\/\/api.github.com\/users\/lvwerra\/repos","events_url":"https:\/\/api.github.com\/users\/lvwerra\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lvwerra\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi,\r\n\r\nthe `select` method creates an indices mapping and doesn't modify the underlying PyArrow table by default for better performance. To modify the underlying table after the `select` call, call `flatten_indices` on the dataset object as follows:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"tweets_hate_speech_detection\")['train'].select(range(128))\r\nds = ds.flatten_indices()\r\nds = ds.add_column('ones', [1]*128)\r\n```","Thanks for the question @lvwerra. And thanks for the answer @mariosasko. ^^"],"created_at":1628342249000,"updated_at":1628508403000,"closed_at":1628508403000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nI would like to add a column to a downsampled dataset. However I get an error message saying the length don't match with the length of the unsampled dataset indicated. I suspect that the dataset size is not updated when calling `select`.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"tweets_hate_speech_detection\")['train'].select(range(128))\r\nds = ds.add_column('ones', [1]*128)\r\n```\r\n\r\n## Expected results\r\nI would expect a new column named `ones` filled with `1`. When I check the length of `ds` it says `128`. 
Interestingly, it works when calling `ds = ds.map(lambda x: x)` before adding the column.\r\n\r\n## Actual results\r\nSpecify the actual results or traceback.\r\n```python\r\n---------------------------------------------------------------------------\r\nArrowInvalid Traceback (most recent call last)\r\n\/var\/folders\/l4\/2905jygx4tx5jv8_kn03vxsw0000gn\/T\/ipykernel_6301\/868709636.py in \r\n 1 from datasets import load_dataset\r\n 2 ds = load_dataset(\"tweets_hate_speech_detection\")['train'].select(range(128))\r\n----> 3 ds = ds.add_column('ones', [0]*128)\r\n\r\n~\/git\/semantic-clustering\/env\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py in wrapper(*args, **kwargs)\r\n 183 }\r\n 184 # apply actual function\r\n--> 185 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n 186 datasets: List[\"Dataset\"] = list(out.values()) if isinstance(out, dict) else [out]\r\n 187 # re-apply format to the output\r\n\r\n~\/git\/semantic-clustering\/env\/lib\/python3.8\/site-packages\/datasets\/fingerprint.py in wrapper(*args, **kwargs)\r\n 395 # Call actual function\r\n 396 \r\n--> 397 out = func(self, *args, **kwargs)\r\n 398 \r\n 399 # Update fingerprint of in-place transforms + update in-place history of transforms\r\n\r\n~\/git\/semantic-clustering\/env\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py in add_column(self, name, column, new_fingerprint)\r\n 2965 column_table = InMemoryTable.from_pydict({name: column})\r\n 2966 # Concatenate tables horizontally\r\n-> 2967 table = ConcatenationTable.from_tables([self._data, column_table], axis=1)\r\n 2968 # Update features\r\n 2969 info = self.info.copy()\r\n\r\n~\/git\/semantic-clustering\/env\/lib\/python3.8\/site-packages\/datasets\/table.py in from_tables(cls, tables, axis)\r\n 715 table_blocks = to_blocks(table)\r\n 716 blocks = _extend_blocks(blocks, table_blocks, axis=axis)\r\n--> 717 return cls.from_blocks(blocks)\r\n 718 \r\n 719 @property\r\n\r\n~\/git\/semantic-clustering\/env\/lib\/python3.8\/site-packages\/datasets\/table.py in from_blocks(cls, blocks)\r\n 663 return cls(table, blocks)\r\n 664 else:\r\n--> 665 table = cls._concat_blocks_horizontally_and_vertically(blocks)\r\n 666 return cls(table, blocks)\r\n 667 \r\n\r\n~\/git\/semantic-clustering\/env\/lib\/python3.8\/site-packages\/datasets\/table.py in _concat_blocks_horizontally_and_vertically(cls, blocks)\r\n 623 if not tables:\r\n 624 continue\r\n--> 625 pa_table_horizontally_concatenated = cls._concat_blocks(tables, axis=1)\r\n 626 pa_tables_to_concat_vertically.append(pa_table_horizontally_concatenated)\r\n 627 return cls._concat_blocks(pa_tables_to_concat_vertically, axis=0)\r\n\r\n~\/git\/semantic-clustering\/env\/lib\/python3.8\/site-packages\/datasets\/table.py in _concat_blocks(blocks, axis)\r\n 612 else:\r\n 613 for name, col in zip(table.column_names, table.columns):\r\n--> 614 pa_table = pa_table.append_column(name, col)\r\n 615 return pa_table\r\n 616 else:\r\n\r\n~\/git\/semantic-clustering\/env\/lib\/python3.8\/site-packages\/pyarrow\/table.pxi in pyarrow.lib.Table.append_column()\r\n\r\n~\/git\/semantic-clustering\/env\/lib\/python3.8\/site-packages\/pyarrow\/table.pxi in pyarrow.lib.Table.add_column()\r\n\r\n~\/git\/semantic-clustering\/env\/lib\/python3.8\/site-packages\/pyarrow\/error.pxi in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\n~\/git\/semantic-clustering\/env\/lib\/python3.8\/site-packages\/pyarrow\/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowInvalid: Added column's length must match table's length. 
Expected length 31962 but got length 128\r\n```\r\n\r\n## Environment info\r\n- `datasets` version: 1.11.0\r\n- Platform: macOS-10.16-x86_64-i386-64bit\r\n- Python version: 3.8.5\r\n- PyArrow version: 5.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2768\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2767","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2767\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2767\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2767\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2767","id":963002120,"node_id":"MDU6SXNzdWU5NjMwMDIxMjA=","number":2767,"title":"equal operation to perform unbatch for huggingface datasets ","user":{"login":"dorooddorood606","id":79288051,"node_id":"MDQ6VXNlcjc5Mjg4MDUx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/79288051?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dorooddorood606","html_url":"https:\/\/github.com\/dorooddorood606","followers_url":"https:\/\/api.github.com\/users\/dorooddorood606\/followers","following_url":"https:\/\/api.github.com\/users\/dorooddorood606\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dorooddorood606\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dorooddorood606\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dorooddorood606\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dorooddorood606\/orgs","repos_url":"https:\/\/api.github.com\/users\/dorooddorood606\/repos","events_url":"https:\/\/api.github.com\/users\/dorooddorood606\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dorooddorood606\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq \r\nMaybe this is clearer to explain like this, currently map function, map one example to \"one\" modified one, lets assume we want to map one example to \"multiple\" examples, in which we do not know in advance how many examples they would be per each entry. I greatly appreciate telling me how I can handle this operation, thanks a lot","Hi,\r\nthis is also my question on how to perform similar operation as \"unbatch\" in tensorflow in great huggingface dataset library. \r\nthanks.","Hi,\r\n\r\n`Dataset.map` in the batched mode allows you to map a single row to multiple rows. 
So to perform \"unbatch\", you can do the following:\r\n```python\r\nimport collections\r\n\r\ndef unbatch(batch):\r\n new_batch = collections.defaultdict(list)\r\n keys = batch.keys()\r\n for values in zip(*batch.values()):\r\n ex = {k: v for k, v in zip(keys, values)}\r\n inputs = f\"record query: {ex['query']} entities: {', '.join(ex['entities'])} passage: {ex['passage']}\"\r\n new_batch[\"inputs\"].extend([inputs] * len(ex[\"answers\"]))\r\n new_batch[\"targets\"].extend(ex[\"answers\"])\r\n return new_batch\r\n\r\ndset = dset.map(unbatch, batched=True, remove_columns=dset.column_names)\r\n```","Dear @mariosasko \r\nFirst, thank you very much for coming back to me on this, I appreciate it a lot. I tried this solution, I am getting errors, do you mind\r\ngiving me one test example to be able to run your code, to understand better the format of the inputs to your function?\r\nin this function https:\/\/github.com\/google-research\/text-to-text-transfer-transformer\/blob\/3c58859b8fe72c2dbca6a43bc775aa510ba7e706\/t5\/data\/preprocessors.py#L952 they copy each example to the number of \"answers\", do you mean one should not do the copying part and use directly your function? \r\n\r\n\r\nthank you very much for your help and time.","Hi @mariosasko \r\nI think finally I got this, I think you mean to do things in one step, here is the full example for completeness:\r\n\r\n```\r\ndef unbatch(batch):\r\n new_batch = collections.defaultdict(list)\r\n keys = batch.keys()\r\n for values in zip(*batch.values()):\r\n ex = {k: v for k, v in zip(keys, values)}\r\n # updates the passage.\r\n passage = ex['passage']\r\n passage = re.sub(r'(\\.|\\?|\\!|\\\"|\\')\\n@highlight\\n', r'\\1 ', passage)\r\n passage = re.sub(r'\\n@highlight\\n', '. ', passage)\r\n inputs = f\"record query: {ex['query']} entities: {', '.join(ex['entities'])} passage: {passage}\"\r\n # duplicates the samples based on number of answers.\r\n num_answers = len(ex[\"answers\"])\r\n num_duplicates = np.maximum(1, num_answers)\r\n new_batch[\"inputs\"].extend([inputs] * num_duplicates) #len(ex[\"answers\"]))\r\n new_batch[\"targets\"].extend(ex[\"answers\"] if num_answers > 0 else [\"\"])\r\n return new_batch\r\n\r\ndata = datasets.load_dataset('super_glue', 'record', split=\"train\", script_version=\"master\")\r\ndata = data.map(unbatch, batched=True, remove_columns=data.column_names)\r\n```\r\n\r\nThanks a lot again, this was a super great way to do it."],"created_at":1628279152000,"updated_at":1628366181000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI need to use \"unbatch\" operation in tensorflow on a huggingface dataset, I could not find this operation, could you kindly direct me how I can do it, here is the problem I am trying to solve:\r\n\r\nI am considering \"record\" dataset in SuperGlue and I need to replicate each entery of the dataset for each answer, to make it similar to what T5 originally did:\r\n\r\nhttps:\/\/github.com\/google-research\/text-to-text-transfer-transformer\/blob\/3c58859b8fe72c2dbca6a43bc775aa510ba7e706\/t5\/data\/preprocessors.py#L925\r\n\r\nHere please find an example:\r\n\r\n For example, a typical example from ReCoRD might look like\r\n {\r\n 'passsage': 'This is the passage.',\r\n 'query': 'A @placeholder is a bird.',\r\n 'entities': ['penguin', 'potato', 'pigeon'],\r\n 'answers': ['penguin', 'pigeon'],\r\n }\r\n and I need a prosessor which would turn this example into the following two examples:\r\n {\r\n 'inputs': 'record query: A 
@placeholder is a bird. entities: penguin, '\r\n 'potato, pigeon passage: This is the passage.',\r\n 'targets': 'penguin',\r\n }\r\n and\r\n {\r\n 'inputs': 'record query: A @placeholder is a bird. entities: penguin, '\r\n 'potato, pigeon passage: This is the passage.',\r\n 'targets': 'pigeon',\r\n }\r\n\r\n\r\nFor doing this, one need unbatch, as each entry can map to multiple samples depending on the number of answers, I am not sure how to perform this operation with huggingface datasets library and greatly appreciate your help\r\n\r\n@lhoestq \r\n\r\nThank you very much.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2767\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2766","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2766\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2766\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2766\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2766","id":962994198,"node_id":"MDExOlB1bGxSZXF1ZXN0NzA1NzAyNjM5","number":2766,"title":"fix typo (ShuffingConfig -> ShufflingConfig)","user":{"login":"daleevans","id":4944007,"node_id":"MDQ6VXNlcjQ5NDQwMDc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4944007?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/daleevans","html_url":"https:\/\/github.com\/daleevans","followers_url":"https:\/\/api.github.com\/users\/daleevans\/followers","following_url":"https:\/\/api.github.com\/users\/daleevans\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/daleevans\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/daleevans\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/daleevans\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/daleevans\/orgs","repos_url":"https:\/\/api.github.com\/users\/daleevans\/repos","events_url":"https:\/\/api.github.com\/users\/daleevans\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/daleevans\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1628278300000,"updated_at":1628605023000,"closed_at":1628605022000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2766","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2766","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2766.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2766.patch"},"body":"pretty straightforward, it should be Shuffling instead of Shuffing","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2766\/timeline","performed_via_github_app":null,"is_pull_request":true} 
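As a quick sanity check of the unbatch recipe discussed in the thread above, a self-contained toy run (made-up one-row dataset, not the actual ReCoRD data) shows how `map(batched=True)` lets a single input row expand into several output rows:

```python
# Toy illustration of the unbatch pattern; only the one-row-to-many-rows
# behaviour of map(batched=True) matters here, the data is invented.
from datasets import Dataset

ds = Dataset.from_dict({
    "query": ["A @placeholder is a bird."],
    "answers": [["penguin", "pigeon"]],
})

def unbatch(batch):
    new_batch = {"inputs": [], "targets": []}
    for query, answers in zip(batch["query"], batch["answers"]):
        # one input row contributes len(answers) output rows
        new_batch["inputs"].extend([f"record query: {query}"] * len(answers))
        new_batch["targets"].extend(answers)
    return new_batch

unbatched = ds.map(unbatch, batched=True, remove_columns=ds.column_names)
print(len(ds), len(unbatched))   # 1 2
print(unbatched["targets"])      # ['penguin', 'pigeon']
```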
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2765","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2765\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2765\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2765\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2765","id":962861395,"node_id":"MDU6SXNzdWU5NjI4NjEzOTU=","number":2765,"title":"BERTScore Error","user":{"login":"gagan3012","id":49101362,"node_id":"MDQ6VXNlcjQ5MTAxMzYy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/49101362?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gagan3012","html_url":"https:\/\/github.com\/gagan3012","followers_url":"https:\/\/api.github.com\/users\/gagan3012\/followers","following_url":"https:\/\/api.github.com\/users\/gagan3012\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gagan3012\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gagan3012\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gagan3012\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gagan3012\/orgs","repos_url":"https:\/\/api.github.com\/users\/gagan3012\/repos","events_url":"https:\/\/api.github.com\/users\/gagan3012\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gagan3012\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi,\r\n\r\nThe `use_fast_tokenizer` argument has been recently added to the bert-score lib. I've opened a PR with the fix. 
In the meantime, you can try to downgrade the version of bert-score with the following command to make the code work:\r\n```\r\npip uninstall bert-score\r\npip install \"bert-score<0.3.10\"\r\n```"],"created_at":1628265537000,"updated_at":1628507785000,"closed_at":1628507785000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nA clear and concise description of what the bug is.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\npredictions = [\"hello there\", \"general kenobi\"]\r\nreferences = [\"hello there\", \"general kenobi\"]\r\nbert = load_metric('bertscore')\r\nbert.compute(predictions=predictions, references=references,lang='en')\r\n```\r\n\r\n# Bug\r\n`TypeError: get_hash() missing 1 required positional argument: 'use_fast_tokenizer'`\r\n\r\n\r\n## Environment info\r\n\r\n- `datasets` version:\r\n- Platform: Colab \r\n- Python version:\r\n- PyArrow version:\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2765\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2764","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2764\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2764\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2764\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2764","id":962554799,"node_id":"MDExOlB1bGxSZXF1ZXN0NzA1MzI3MDQ5","number":2764,"title":"Add DER metric for SUPERB speaker diarization task","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1628241156000,"updated_at":1628244413000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2764","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2764","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2764.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2764.patch"},"body":null,"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2764\/timeline","performed_via_github_app":null,"is_pull_request":true} 
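Once the fast-tokenizer fix referenced above lands, usage along the following lines should work; note that the `use_fast_tokenizer` keyword is an assumption about the patched metric's signature rather than documented API, and until then pinning `bert-score<0.3.10` as suggested in the comment remains the workaround:

```python
# Hedged example: assumes the patched bertscore metric forwards a
# use_fast_tokenizer flag to the bert-score library (not guaranteed here).
from datasets import load_metric

predictions = ["hello there", "general kenobi"]
references = ["hello there", "general kenobi"]

bertscore = load_metric("bertscore")
results = bertscore.compute(
    predictions=predictions,
    references=references,
    lang="en",
    use_fast_tokenizer=True,  # assumption: exposed after the fix
)
print(results["f1"])
```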
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2763","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2763\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2763\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2763\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2763","id":961895523,"node_id":"MDU6SXNzdWU5NjE4OTU1MjM=","number":2763,"title":"English wikipedia datasets is not clean","user":{"login":"lucadiliello","id":23355969,"node_id":"MDQ6VXNlcjIzMzU1OTY5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23355969?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lucadiliello","html_url":"https:\/\/github.com\/lucadiliello","followers_url":"https:\/\/api.github.com\/users\/lucadiliello\/followers","following_url":"https:\/\/api.github.com\/users\/lucadiliello\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lucadiliello\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lucadiliello\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lucadiliello\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lucadiliello\/orgs","repos_url":"https:\/\/api.github.com\/users\/lucadiliello\/repos","events_url":"https:\/\/api.github.com\/users\/lucadiliello\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lucadiliello\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Certain users might need these data (for training or simply to explore\/index the dataset).\r\n\r\nFeel free to implement a map function that gets rid of these paragraphs and process the wikipedia dataset with it before training"],"created_at":1628174244000,"updated_at":1629738016000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nWikipedia english dumps contain many wikipedia paragraphs like \"References\", \"Category:\" and \"See Also\" that should not be used for training.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n# Sample code to reproduce the bug\r\nfrom datasets import load_dataset\r\nw = load_dataset('wikipedia', '20200501.en')\r\nprint(w['train'][0]['text'])\r\n```\r\n\r\n> 'Yangliuqing () is a market town in Xiqing District, in the western suburbs of Tianjin, People\\'s Republic of China. Despite its relatively small size, it has been named since 2006 in the \"famous historical and cultural market towns in China\".\\n\\nIt is best known in China for creating nianhua or Yangliuqing nianhua. For more than 400 years, Yangliuqing has in effect specialised in the creation of these woodcuts for the New Year. 
wood block prints using vivid colourschemes to portray traditional scenes of children\\'s games often interwoven with auspiciouse objects.\\n\\n, it had 27 residential communities () and 25 villages under its administration.\\n\\nShi Family Grand Courtyard\\n\\nShi Family Grand Courtyard (Ti\u0101nj\u012bn Sh\u00ed Ji\u0101 D\u00e0 Yu\u00e0n, \u5929\u6d25\u77f3\u5bb6\u5927\u9662) is situated in Yangliuqing Town of Xiqing District, which is the former residence of wealthy merchant Shi Yuanshi - the 4th son of Shi Wancheng, one of the eight great masters in Tianjin. First built in 1875, it covers over 6,000 square meters, including large and small yards and over 200 folk houses, a theater and over 275 rooms that served as apartments and places of business and worship for this powerful family. Shifu Garden, which finished its expansion in October 2003, covers 1,200 square meters, incorporates the elegance of imperial garden and delicacy of south garden. Now the courtyard of Shi family covers about 10,000 square meters, which is called the first mansion in North China. Now it serves as the folk custom museum in Yangliuqing, which has a large collection of folk custom museum in Yanliuqing, which has a large collection of folk art pieces like Yanliuqing New Year pictures, brick sculpture.\\n\\nShi\\'s ancestor came from Dong\\'e County in Shandong Province, engaged in water transport of grain. As the wealth gradually accumulated, the Shi Family moved to Yangliuqing and bought large tracts of land and set up their residence. Shi Yuanshi came from the fourth generation of the family, who was a successful businessman and a good household manager, and the residence was thus enlarged for several times until it acquired the present scale. It is believed to be the first mansion in the west of Tianjin.\\n\\nThe residence is symmetric based on the axis formed by a passageway in the middle, on which there are four archways. On the east side of the courtyard, there are traditional single-story houses with rows of rooms around the four sides, which was once the living area for the Shi Family. The rooms on north side were the accountants\\' office. On the west are the major constructions including the family hall for worshipping Buddha, theater and the south reception room. On both sides of the residence are side yard rooms for maids and servants.\\n\\nToday, the Shi mansion, located in the township of Yangliuqing to the west of central Tianjin, stands as a surprisingly well-preserved monument to China\\'s pre-revolution mercantile spirit. It also serves as an on-location shoot for many of China\\'s popular historical dramas. 
Many of the rooms feature period furniture, paintings and calligraphy, and the extensive Shifu Garden.\\n\\nPart of the complex has been turned into the Yangliuqing Museum, which includes displays focused on symbolic aspects of the courtyards\\' construction, local folk art and customs, and traditional period furnishings and crafts.\\n\\n**See also \\n\\nList of township-level divisions of Tianjin\\n\\nReferences \\n\\n http:\/\/arts.cultural-china.com\/en\/65Arts4795.html\\n\\nCategory:Towns in Tianjin'**\r\n\r\n## Expected results\r\nI expect no junk in the data.\r\n\r\n## Actual results\r\nSpecify the actual results or traceback.\r\n\r\n## Environment info\r\n- `datasets` version: 1.10.2\r\n- Platform: macOS-10.15.7-x86_64-i386-64bit\r\n- Python version: 3.8.5\r\n- PyArrow version: 3.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2763\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2762","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2762\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2762\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2762\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2762","id":961652046,"node_id":"MDU6SXNzdWU5NjE2NTIwNDY=","number":2762,"title":"Add RVL-CDIP dataset","user":{"login":"NielsRogge","id":48327001,"node_id":"MDQ6VXNlcjQ4MzI3MDAx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/48327001?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/NielsRogge","html_url":"https:\/\/github.com\/NielsRogge","followers_url":"https:\/\/api.github.com\/users\/NielsRogge\/followers","following_url":"https:\/\/api.github.com\/users\/NielsRogge\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/NielsRogge\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/NielsRogge\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/NielsRogge\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/NielsRogge\/orgs","repos_url":"https:\/\/api.github.com\/users\/NielsRogge\/repos","events_url":"https:\/\/api.github.com\/users\/NielsRogge\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/NielsRogge\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["cc @nateraw "],"created_at":1628157425000,"updated_at":1628157442000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** RVL-CDIP\r\n- **Description:** The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. 
The images are sized so their largest dimension does not exceed 1000 pixels.\r\n- **Paper:** https:\/\/www.cs.cmu.edu\/~aharley\/icdar15\/\r\n- **Data:** https:\/\/www.cs.cmu.edu\/~aharley\/rvl-cdip\/\r\n- **Motivation:** I'm currently adding LayoutLMv2 and LayoutXLM to HuggingFace Transformers. LayoutLM (v1) already exists in the library. This dataset has a large value for document image classification (i.e. classifying scanned documents). LayoutLM models obtain SOTA on this dataset, so would be great to directly use it in notebooks.\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2762\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2761","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2761\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2761\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2761\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2761","id":961568287,"node_id":"MDU6SXNzdWU5NjE1NjgyODc=","number":2761,"title":"Error loading C4 realnewslike dataset","user":{"login":"danshirron","id":32061512,"node_id":"MDQ6VXNlcjMyMDYxNTEy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32061512?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/danshirron","html_url":"https:\/\/github.com\/danshirron","followers_url":"https:\/\/api.github.com\/users\/danshirron\/followers","following_url":"https:\/\/api.github.com\/users\/danshirron\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/danshirron\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/danshirron\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/danshirron\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/danshirron\/orgs","repos_url":"https:\/\/api.github.com\/users\/danshirron\/repos","events_url":"https:\/\/api.github.com\/users\/danshirron\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/danshirron\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @danshirron, \r\n`c4` was updated few days back by @lhoestq. 
The new configs are `['en', 'en.noclean', 'en.realnewslike', 'en.webtextlike'].` You'll need to remove any older version of this dataset you previously downloaded and then run `load_dataset` again with new configuration.","@bhavitvyamalik @lhoestq , just tried the above and got:\r\n>>> a=datasets.load_dataset('c4','en.realnewslike')\r\nDownloading: 3.29kB [00:00, 1.66MB\/s] \r\nDownloading: 2.40MB [00:00, 12.6MB\/s] \r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/home\/dshirron\/.local\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 819, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"\/home\/dshirron\/.local\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 701, in load_dataset_builder\r\n builder_instance: DatasetBuilder = builder_cls(\r\n File \"\/home\/dshirron\/.local\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 1049, in __init__\r\n super(GeneratorBasedBuilder, self).__init__(*args, **kwargs)\r\n File \"\/home\/dshirron\/.local\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 268, in __init__\r\n self.config, self.config_id = self._create_builder_config(\r\n File \"\/home\/dshirron\/.local\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 360, in _create_builder_config\r\n raise ValueError(\r\nValueError: BuilderConfig en.realnewslike not found. Available: ['en', 'realnewslike', 'en.noblocklist', 'en.noclean']\r\n>>> \r\n\r\ndatasets version is 1.11.0\r\n","I think I had an older version of datasets installed and that's why I commented the old configurations in my last comment, my bad! I re-checked and updated it to latest version (`datasets==1.11.0`) and it's showing `available configs: ['en', 'realnewslike', 'en.noblocklist', 'en.noclean']`. \r\n\r\nI tried `raw_datasets = load_dataset('c4', 'realnewslike')` and the download started. Make sure you don't have any old copy of this dataset and you download it fresh using the latest version of datasets. Sorry for the mix up!","It works. I probably had some issue with the cache. after cleaning it im able to download the dataset. Thanks"],"created_at":1628151418000,"updated_at":1628451874000,"closed_at":1628451874000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nError loading C4 realnewslike dataset. 
Validation part mismatch\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n raw_datasets = load_dataset('c4', 'realnewslike', cache_dir=model_args.cache_dir)\r\n## Expected results\r\nsuccess on data loading\r\n## Actual results\r\nDownloading: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 15.3M\/15.3M [00:00<00:00, 28.1MB\/s]Traceback (most recent call last): \r\n File \"run_mlm_tf.py\", line 794, in \r\n main() \r\n File \"run_mlm_tf.py\", line 425, in main \r\n raw_datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name, cache_dir=model_args.cache_dir) File \"\/home\/dshirron\/.local\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 843, in load_dataset \r\n builder_instance.download_and_prepare( \r\n File \"\/home\/dshirron\/.local\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 608, in download_and_prepare \r\n self._download_and_prepare( \r\n File \"\/home\/dshirron\/.local\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 698, in _download_and_prepare verify_splits(self.info.splits, split_dict) File \"\/home\/dshirron\/.local\/lib\/python3.8\/site-packages\/datasets\/utils\/info_utils.py\", line 74, in verify_splits \r\n raise NonMatchingSplitsSizesError(str(bad_splits)) \r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='validation', num_bytes=38165657946, num_examples=13799838, dataset_name='c4'), 'recorded': SplitInfo(name='validation', num_bytes=37875873, num_examples=13863, dataset_name='c4')}] \r\n\r\n## Environment info\r\n- `datasets` version: 1.10.2\r\n- Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- PyArrow version: 4.0.1","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2761\/timeline","performed_via_github_app":null,"is_pull_request":false} 
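The resolution reached in the thread for issue #2761 above amounts to upgrading `datasets` to 1.11.0, removing any stale cached copy of `c4`, and using the config name `realnewslike` rather than the older `en.realnewslike`. A minimal sketch of that call is below; it only restates the command from the last comments and makes no assumption beyond having a recent `datasets` release and enough disk space for the C4 download.

```python
from datasets import load_dataset

# "realnewslike" is the config name expected by datasets>=1.11.0;
# the "en.realnewslike" name tried in the first comment no longer exists.
# Any stale cached copy of the dataset should be cleared first.
raw_datasets = load_dataset("c4", "realnewslike")
print(raw_datasets)
```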
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2760","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2760\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2760\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2760\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2760","id":961372667,"node_id":"MDU6SXNzdWU5NjEzNzI2Njc=","number":2760,"title":"Add Nuswide dataset","user":{"login":"shivangibithel","id":19774925,"node_id":"MDQ6VXNlcjE5Nzc0OTI1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19774925?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/shivangibithel","html_url":"https:\/\/github.com\/shivangibithel","followers_url":"https:\/\/api.github.com\/users\/shivangibithel\/followers","following_url":"https:\/\/api.github.com\/users\/shivangibithel\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/shivangibithel\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/shivangibithel\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/shivangibithel\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/shivangibithel\/orgs","repos_url":"https:\/\/api.github.com\/users\/shivangibithel\/repos","events_url":"https:\/\/api.github.com\/users\/shivangibithel\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/shivangibithel\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1628132441000,"updated_at":1628132441000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** *NUSWIDE*\r\n- **Description:** *[A Real-World Web Image Dataset from National University of Singapore](https:\/\/lms.comp.nus.edu.sg\/wp-content\/uploads\/2019\/research\/nuswide\/NUS-WIDE.html)*\r\n- **Paper:** *[here](https:\/\/lms.comp.nus.edu.sg\/wp-content\/uploads\/2019\/research\/nuswide\/nuswide-civr2009.pdf)*\r\n- **Data:** *[here](https:\/\/github.com\/wenting-zhao\/nuswide)*\r\n- **Motivation:** *This dataset is a benchmark in the Text Retrieval task.*\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2760\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2759","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2759\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2759\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2759\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2759","id":960636572,"node_id":"MDU6SXNzdWU5NjA2MzY1NzI=","number":2759,"title":"the 
meteor metric seems not consist with the official version","user":{"login":"jianguda","id":9079360,"node_id":"MDQ6VXNlcjkwNzkzNjA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9079360?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jianguda","html_url":"https:\/\/github.com\/jianguda","followers_url":"https:\/\/api.github.com\/users\/jianguda\/followers","following_url":"https:\/\/api.github.com\/users\/jianguda\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jianguda\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jianguda\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jianguda\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jianguda\/orgs","repos_url":"https:\/\/api.github.com\/users\/jianguda\/repos","events_url":"https:\/\/api.github.com\/users\/jianguda\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jianguda\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["the issue is caused by the differences between varied meteor versions:\r\nmeteor1.0 is for https:\/\/aclanthology.org\/W07-0734.pdf\r\nmeteor1.5 is for https:\/\/aclanthology.org\/W14-3348.pdf\r\n\r\nhere is a very similar issue in NLTK\r\nhttps:\/\/github.com\/nltk\/nltk\/issues\/2655","Hi @jianguda, thanks for reporting.\r\n\r\nCurrently, at \ud83e\udd17 `datasets` we are using METEOR 1.0 (indeed using NLTK: `from nltk.translate import meteor_score`): See the [citation here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/metrics\/meteor\/meteor.py#L23-L35).\r\n\r\nIf there is some open source implementation of METEOR 1.5, that could be an interesting contribution! \ud83d\ude09 "],"created_at":1628091197000,"updated_at":1628097534000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nThe computed meteor score seems strange because the value is very different from the scores computed by other tools. For example, I use the meteor score computed by [NLGeval](https:\/\/github.com\/Maluuba\/nlg-eval) as the reference (which reuses the official jar file for the computation)\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_metric\r\nfrom nlgeval import NLGEval, compute_individual_metrics\r\n\r\nmeteor = load_metric('meteor')\r\npredictions = [\"It is a guide to action which ensures that the military always obeys the commands of the party\"]\r\nreferences = [\"It is a guide to action that ensures that the military will forever heed Party commands\"]\r\nresults = meteor.compute(predictions=predictions, references=references)\r\n# print the actual result\r\nprint(round(results[\"meteor\"], 4))\r\nmetrics_dict = compute_individual_metrics(references, predictions[0])\r\n# print the expected result\r\nprint(round(metrics_dict[\"METEOR\"], 4))\r\n```\r\nBy the way, you need to install the `nlg-eval` library first. 
Please check the installation guide [here](https:\/\/github.com\/Maluuba\/nlg-eval#setup), thanks!\r\n\r\n## Expected results\r\n`0.4474`\r\n\r\n## Actual results\r\n`0.7398`\r\n\r\n## Environment info\r\n- `datasets` version: 1.10.2\r\n- Platform: macOS-10.16-x86_64-i386-64bit\r\n- Python version: 3.8.5\r\n- PyArrow version: 4.0.1\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2759\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2758","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2758\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2758\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2758\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2758","id":960206575,"node_id":"MDExOlB1bGxSZXF1ZXN0NzAzMjQ5Nzky","number":2758,"title":"Raise ManualDownloadError when loading a dataset that requires previous manual download","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1628072395000,"updated_at":1628076990000,"closed_at":1628076990000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2758","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2758","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2758.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2758.patch"},"body":"This PR implements the raising of a `ManualDownloadError` when loading a dataset that requires previous manual download, and this is missing.\r\n\r\nThe `ManualDownloadError` is raised whether the dataset is loaded in normal or streaming mode.\r\n\r\nClose #2749.\r\n\r\ncc: @severo ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2758\/timeline","performed_via_github_app":null,"is_pull_request":true} 
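PR #2758 just above introduces a `ManualDownloadError` for datasets that require a manual download (see also issue #2749 further down, whose comments note that such builders expose a non-None `manual_download_instructions`). The sketch below shows one way to check for that situation before attempting to stream; it assumes `load_dataset_builder` is importable from the top-level package, as in recent `datasets` versions, and is an illustration rather than the exact code added by the PR.

```python
from datasets import load_dataset, load_dataset_builder

# Per the comment in #2749: builders of datasets that need manual download
# have a non-None `manual_download_instructions` attribute.
builder = load_dataset_builder("reclor")
if builder.manual_download_instructions is not None:
    # Surface the instructions instead of the opaque TypeError that
    # streaming mode used to raise (the behaviour #2758 addresses).
    print(builder.manual_download_instructions)
else:
    dataset = load_dataset("reclor", streaming=True)
```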
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2757","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2757\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2757\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2757\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2757","id":959984081,"node_id":"MDU6SXNzdWU5NTk5ODQwODE=","number":2757,"title":"Unexpected type after `concatenate_datasets`","user":{"login":"JulesBelveze","id":32683010,"node_id":"MDQ6VXNlcjMyNjgzMDEw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32683010?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JulesBelveze","html_url":"https:\/\/github.com\/JulesBelveze","followers_url":"https:\/\/api.github.com\/users\/JulesBelveze\/followers","following_url":"https:\/\/api.github.com\/users\/JulesBelveze\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JulesBelveze\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JulesBelveze\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JulesBelveze\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JulesBelveze\/orgs","repos_url":"https:\/\/api.github.com\/users\/JulesBelveze\/repos","events_url":"https:\/\/api.github.com\/users\/JulesBelveze\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JulesBelveze\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @JulesBelveze, thanks for your question.\r\n\r\nNote that \ud83e\udd17 `datasets` internally store their data in Apache Arrow format.\r\n\r\nHowever, when accessing dataset columns, by default they are returned as native Python objects (lists in this case).\r\n\r\nIf you would like their columns to be returned in a more suitable format for your use case (torch arrays), you can use the method `set_format()`:\r\n```python\r\nconcat_dataset.set_format(type=\"torch\")\r\n```\r\n\r\nYou have detailed information in our docs:\r\n- [Using a Dataset with PyTorch\/Tensorflow](https:\/\/huggingface.co\/docs\/datasets\/torch_tensorflow.html)\r\n- [Dataset.set_format()](https:\/\/huggingface.co\/docs\/datasets\/package_reference\/main_classes.html#datasets.Dataset.set_format)","Thanks @albertvillanova it indeed\u00a0did the job \ud83d\ude03 \r\nThanks for your answer!"],"created_at":1628061039000,"updated_at":1628092884000,"closed_at":1628092883000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nI am trying to concatenate two `Dataset` using `concatenate_datasets` but it turns out that after concatenation the features are casted from `torch.Tensor` to `list`. \r\nIt then leads to a weird tensors when trying to convert it to a `DataLoader`. 
However, if I use each `Dataset` separately everything behave as expected.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n>>> featurized_teacher\r\nDataset({\r\n features: ['t_labels', 't_input_ids', 't_token_type_ids', 't_attention_mask'],\r\n num_rows: 502\r\n})\r\n>>> for f in featurized_teacher.features:\r\n print(featurized_teacher[f].shape)\r\ntorch.Size([502])\r\ntorch.Size([502, 300])\r\ntorch.Size([502, 300])\r\ntorch.Size([502, 300])\r\n\r\n>>> featurized_student\r\nDataset({\r\n features: ['s_features', 's_labels'],\r\n num_rows: 502\r\n})\r\n>>> for f in featurized_student.features:\r\n print(featurized_student[f].shape)\r\ntorch.Size([502, 64])\r\ntorch.Size([502])\r\n```\r\nThe shapes seem alright to me. Then the results after concatenation are as follow:\r\n```python\r\n>>> concat_dataset = datasets.concatenate_datasets([featurized_student, featurized_teacher], axis=1)\r\n>>> type(concat_dataset[\"t_labels\"])\r\n\r\n```\r\nOne would expect to obtain the same type as the one before concatenation.\r\n\r\nAm I doing something wrong here? Any idea on how to fix this unexpected behavior?\r\n\r\n## Environment info\r\n- `datasets` version: 1.9.0\r\n- Platform: macOS-10.14.6-x86_64-i386-64bit\r\n- Python version: 3.9.5\r\n- PyArrow version: 3.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2757\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2756","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2756\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2756\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2756\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2756","id":959255646,"node_id":"MDExOlB1bGxSZXF1ZXN0NzAyMzk4Mjk1","number":2756,"title":"Fix metadata JSON for ubuntu_dialogs_corpus 
dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1628005739000,"updated_at":1628070205000,"closed_at":1628070205000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2756","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2756","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2756.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2756.patch"},"body":"Related to #2743.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2756\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2755","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2755\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2755\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2755\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2755","id":959115888,"node_id":"MDExOlB1bGxSZXF1ZXN0NzAyMjgwMjI4","number":2755,"title":"Fix metadata JSON for turkish_movie_sentiment 
dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1627997144000,"updated_at":1628068014000,"closed_at":1628068013000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2755","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2755","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2755.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2755.patch"},"body":"Related to #2743.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2755\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2754","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2754\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2754\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2754\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2754","id":959105577,"node_id":"MDExOlB1bGxSZXF1ZXN0NzAyMjcxMjM4","number":2754,"title":"Generate metadata JSON for telugu_books 
dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1627996492000,"updated_at":1628066942000,"closed_at":1628066942000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2754","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2754","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2754.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2754.patch"},"body":"Related to #2743.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2754\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2753","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2753\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2753\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2753\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2753","id":959036995,"node_id":"MDExOlB1bGxSZXF1ZXN0NzAyMjEyMjMz","number":2753,"title":"Generate metadata JSON for reclor 
dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1627991549000,"updated_at":1628064435000,"closed_at":1628064435000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2753","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2753","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2753.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2753.patch"},"body":"Related to #2743.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2753\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2752","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2752\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2752\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2752\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2752","id":959023608,"node_id":"MDExOlB1bGxSZXF1ZXN0NzAyMjAxMjAy","number":2752,"title":"Generate metadata JSON for lm1b 
dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1627990496000,"updated_at":1628059240000,"closed_at":1628059239000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2752","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2752","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2752.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2752.patch"},"body":"Related to #2743.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2752\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2751","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2751\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2751\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2751\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2751","id":959021262,"node_id":"MDExOlB1bGxSZXF1ZXN0NzAyMTk5MjA5","number":2751,"title":"Update metadata for wikihow 
dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1627990317000,"updated_at":1628005929000,"closed_at":1628005929000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2751","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2751","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2751.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2751.patch"},"body":"Update metadata for wikihow dataset:\r\n- Remove leading new line character in description and citation\r\n- Update metadata JSON\r\n- Remove no longer necessary `urls_checksums\/checksums.txt` file\r\n\r\nRelated to #2748.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2751\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2750","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2750\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2750\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2750\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2750","id":958984730,"node_id":"MDU6SXNzdWU5NTg5ODQ3MzA=","number":2750,"title":"Second concatenation of datasets produces 
errors","user":{"login":"Aktsvigun","id":36672861,"node_id":"MDQ6VXNlcjM2NjcyODYx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36672861?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Aktsvigun","html_url":"https:\/\/github.com\/Aktsvigun","followers_url":"https:\/\/api.github.com\/users\/Aktsvigun\/followers","following_url":"https:\/\/api.github.com\/users\/Aktsvigun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Aktsvigun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Aktsvigun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Aktsvigun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Aktsvigun\/orgs","repos_url":"https:\/\/api.github.com\/users\/Aktsvigun\/repos","events_url":"https:\/\/api.github.com\/users\/Aktsvigun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Aktsvigun\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["@albertvillanova ","Hi @Aktsvigun, thanks for reporting.\r\n\r\nI'm investigating this.","Hi @albertvillanova ,\r\nany update on this? 
Can I probably help in some way?","Hi @Aktsvigun! We are planning to address this issue before our next release, in a couple of weeks at most. \ud83d\ude05 \r\n\r\nIn the meantime, if you would like to contribute, feel free to open a Pull Request. You are welcome. Here you can find more information: [How to contribute to Datasets?](CONTRIBUTING.md)"],"created_at":1627987624000,"updated_at":1628595727000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\n\r\nI am need to concatenate my dataset with others several times, and after I concatenate it for the second time, the features of features (e.g. tags names) are collapsed. This hinders, for instance, the usage of tokenize function with `data.map`.\r\n\r\n```\r\nfrom datasets import load_dataset, concatenate_datasets\r\n\r\ndata = load_dataset('trec')['train']\r\nconcatenated = concatenate_datasets([data, data])\r\nconcatenated_2 = concatenate_datasets([concatenated, concatenated])\r\nprint('True features of features:', concatenated.features)\r\nprint('\\nProduced features of features:', concatenated_2.features)\r\n```\r\noutputs \r\n\r\n```\r\nTrue features of features: {'label-coarse': ClassLabel(num_classes=6, names=['DESC', 'ENTY', 'ABBR', 'HUM', 'NUM', 'LOC'], names_file=None, id=None), 'label-fine': ClassLabel(num_classes=47, names=['manner', 'cremat', 'animal', 'exp', 'ind', 'gr', 'title', 'def', 'date', 'reason', 'event', 'state', 'desc', 'count', 'other', 'letter', 'religion', 'food', 'country', 'color', 'termeq', 'city', 'body', 'dismed', 'mount', 'money', 'product', 'period', 'substance', 'sport', 'plant', 'techmeth', 'volsize', 'instru', 'abb', 'speed', 'word', 'lang', 'perc', 'code', 'dist', 'temp', 'symbol', 'ord', 'veh', 'weight', 'currency'], names_file=None, id=None), 'text': Value(dtype='string', id=None)}\r\n\r\nProduced features of features: {'label-coarse': Value(dtype='int64', id=None), 'label-fine': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None)}\r\n```\r\n\r\nI am using `datasets` v.1.11.0","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2750\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2749","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2749\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2749\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2749\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2749","id":958968748,"node_id":"MDU6SXNzdWU5NTg5Njg3NDg=","number":2749,"title":"Raise a proper exception when trying to stream a dataset that requires to manually download 
files","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @severo, thanks for reporting.\r\n\r\nAs discussed, datasets requiring manual download should be:\r\n- programmatically identifiable\r\n- properly handled with more clear error 
message when trying to load them with streaming\r\n\r\nIn relation with programmatically identifiability, note that for datasets requiring manual download, their builder have a property `manual_download_instructions` which is not None:\r\n```python\r\n# Dataset requiring manual download:\r\nbuilder.manual_download_instructions is not None\r\n```","Thanks @albertvillanova "],"created_at":1627986387000,"updated_at":1628499215000,"closed_at":1628076990000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\nAt least for 'reclor', 'telugu_books', 'turkish_movie_sentiment', 'ubuntu_dialogs_corpus', 'wikihow', trying to `load_dataset` in streaming mode raises a `TypeError` without any detail about why it fails.\r\n\r\n## Steps to reproduce the bug\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"reclor\", streaming=True)\r\n```\r\n\r\n## Expected results\r\n\r\nIdeally: raise a specific exception, something like `ManualDownloadError`.\r\n\r\nOr at least give the reason in the message, as when we load in normal mode:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"reclor\")\r\n```\r\n\r\n```\r\nAssertionError: The dataset reclor with config default requires manual data.\r\n Please follow the manual download instructions: to use ReClor you need to download it manually. Please go to its homepage (http:\/\/whyu.me\/reclor\/) fill the google\r\n form and you will receive a download link and a password to extract it.Please extract all files in one folder and use the path folder in datasets.load_dataset('reclor', data_dir='path\/to\/folder\/folder_name')\r\n .\r\n Manual data can be loaded with `datasets.load_dataset(reclor, data_dir='')\r\n```\r\n\r\n## Actual results\r\n\r\n```\r\nTypeError: expected str, bytes or os.PathLike object, not NoneType\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.11.0\r\n- Platform: macOS-11.5-x86_64-i386-64bit\r\n- Python version: 3.8.11\r\n- PyArrow version: 4.0.1\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2749\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2748","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2748\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2748\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2748\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2748","id":958889041,"node_id":"MDExOlB1bGxSZXF1ZXN0NzAyMDg4NTk4","number":2748,"title":"Generate metadata JSON for wikihow 
dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1627980940000,"updated_at":1627985871000,"closed_at":1627985871000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2748","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2748","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2748.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2748.patch"},"body":"Related to #2743.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2748\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2747","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2747\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2747\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2747\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2747","id":958867627,"node_id":"MDExOlB1bGxSZXF1ZXN0NzAyMDcwOTgy","number":2747,"title":"add multi-proc in 
`to_json`","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thank you for working on this, @bhavitvyamalik \r\n\r\n10% is not solving the issue, we want 5-10x faster on a machine that has lots of resources, but limited processing time.\r\n\r\nSo let's benchmark it on an instance with many more cores, I can test with 12 on my dev box and 40 on JZ. \r\n\r\nCould you please share the test I could run with both versions?\r\n\r\nShould we also test the sharded version I shared in https:\/\/github.com\/huggingface\/datasets\/issues\/2663#issue-946552273 so optionally 3 versions to test.","Since I was facing `OSError: [Errno 12] Cannot allocate memory` in CircleCI tests, I've added `num_proc` option instead of always using full `cpu_count`. You can test both v1 and v2 through this branch (some redundancy needs to be removed). \r\n\r\nUpdate: I was able to convert into json which took 50% less time as compared to v1 on `ascent_kb` dataset. Will post the benchmarking script with results here.","Here are the benchmarks with the current branch for both v1 and v2 (dataset: `ascent_kb`, 8.9M samples):\r\n| batch_size | time (in sec) | time (in sec) |\r\n|------------|---------------|---------------|\r\n| | num_proc = 1 | num_proc = 4 |\r\n| 10k | 185.56 | 170.11 |\r\n| 50k | 175.79 | 86.84 |\r\n| **100k** | 191.09 | **78.35** |\r\n| 125k | 198.28 | 90.89 |\r\n\r\nIncreasing the batch size on my machine helped in making v2 around 50% faster as compared to v1. Timings may vary depending on the machine. I'm including the benchmarking script as well. 
CircleCI errors are unrelated (something related to `bertscore`)\r\n```\r\nimport time\r\nfrom datasets import load_dataset\r\nimport pathlib\r\nimport os\r\nfrom pathlib import Path\r\nimport shutil\r\nimport gc\r\n\r\nbatch_sizes = [10_000, 50_000, 100_000, 125_000]\r\nnum_procs = [1, 4] # change this according to your machine\r\n\r\nSAVE_LOC = \".\/new_dataset.json\"\r\n\r\nfor batch in batch_sizes:\r\n    for num in num_procs:\r\n        dataset = load_dataset(\"ascent_kb\")\r\n\r\n        local_start = time.time()\r\n        ans = dataset['train'].to_json(SAVE_LOC, batch_size=batch, num_proc=num)\r\n        local_end = time.time() - local_start\r\n\r\n        print(f\"Time taken on {num} num_proc and {batch} batch_size: \", local_end)\r\n\r\n        # remove that dataset and its contents from cache and newly generated json\r\n        new_json = pathlib.Path(SAVE_LOC)\r\n        new_json.unlink()\r\n\r\n        try:\r\n            shutil.rmtree(os.path.join(str(Path.home()), \".cache\", \"huggingface\"))\r\n        except OSError as e:\r\n            print(\"Error: %s - %s.\" % (e.filename, e.strerror))\r\n\r\n        gc.collect()\r\n```\r\nThis will download the dataset in every iteration and run `to_json`. I didn't run multiple iterations of `to_json` here (for a specific batch_size and num_proc) and take the average time, as I found that v1 got faster after the 1st iteration (maybe it's caching somewhere). Since you'll be doing this operation only once, I thought it would be better to report how both v1 and v2 performed in a single iteration only. \r\n\r\nImportant: the benchmarking script will delete the newly generated json and `~\/.cache\/huggingface\/` after every iteration so that it doesn't end up using any cached data (just to be on the safe side)","Thank you for sharing the benchmark, @bhavitvyamalik. Your results look promising.\r\n\r\nBut if I remember correctly, the sharded version at https:\/\/github.com\/huggingface\/datasets\/issues\/2663#issue-946552273 was much faster. So we probably should compare to it as well? And if it's faster, then at least document that manual sharding version?\r\n\r\n-------\r\n\r\nThat's a dangerous benchmark as it'd wipe out many other HF things. Why not wipe out:\r\n```\r\n~\/.cache\/huggingface\/datasets\/ascent_kb\/\r\n```\r\n\r\nRunning the benchmark now.","Weird, I tried to adapt your benchmark to using shards and the program no longer works. It instead quickly uses up all available RAM and hangs. Has something changed recently in `datasets`? 
You can try:\r\n\r\n```\r\nimport time\r\nfrom datasets import load_dataset\r\nimport pathlib\r\nimport os\r\nfrom pathlib import Path\r\nimport shutil\r\nimport gc\r\nfrom multiprocessing import cpu_count, Process, Queue\r\n\r\nbatch_sizes = [10_000, 50_000, 100_000, 125_000]\r\nnum_procs = [1, 8] # change this according to your machine\r\n\r\nDATASET_NAME = (\"ascent_kb\")\r\nnum_shards = [1, 8]\r\nfor batch in batch_sizes:\r\n    for shards in num_shards:\r\n        dataset = load_dataset(DATASET_NAME)[\"train\"]\r\n        #print(dataset)\r\n\r\n        def process_shard(idx):\r\n            print(f\"Sharding {idx}\")\r\n            ds_shard = dataset.shard(shards, idx, contiguous=True)\r\n            # ds_shard = ds_shard.shuffle() # remove contiguous=True above if shuffling\r\n            print(f\"Saving {DATASET_NAME}-{idx}.jsonl\")\r\n            ds_shard.to_json(f\"{DATASET_NAME}-{idx}.jsonl\", orient=\"records\", lines=True, force_ascii=False)\r\n\r\n        local_start = time.time()\r\n        queue = Queue()\r\n        processes = [Process(target=process_shard, args=(idx,)) for idx in range(shards)]\r\n        for p in processes:\r\n            p.start()\r\n\r\n        for p in processes:\r\n            p.join()\r\n        local_end = time.time() - local_start\r\n\r\n        print(f\"Time taken on {shards} shards and {batch} batch_size: \", local_end)\r\n```\r\n\r\nJust be careful so that it won't crash your compute environment, as it almost crashed mine.","So this part seems to no longer work:\r\n```\r\ndataset = load_dataset(\"ascent_kb\")[\"train\"]\r\nds_shard = dataset.shard(1, 0, contiguous=True)\r\nds_shard.to_json(\"ascent_kb-0.jsonl\", orient=\"records\", lines=True, force_ascii=False)\r\n```","If you are using `to_json` without any `num_proc` or with `num_proc=1`, then essentially it'll fall back to v1 only, and I've kept it as it is (the tests were passing as well)\r\n\r\n> That's a dangerous benchmark as it'd wipe out many other HF things. Why not wipe out:\r\n\r\nThat's because some dataset related files were still left inside `~\/.cache\/huggingface\/datasets` folder. You can wipe off datasets folder inside your cache maybe\r\n\r\n> dataset = load_dataset(\"ascent_kb\")[\"train\"]\r\n> ds_shard = dataset.shard(1, 0, contiguous=True)\r\n> ds_shard.to_json(\"ascent_kb-0.jsonl\", orient=\"records\", lines=True, force_ascii=False)\r\n\r\nI tried this with the `lama` dataset (1.3M) and it worked fine. Trying it with `ascent_kb` currently, will update it here.","I don't think the issue has anything to do with your work, @bhavitvyamalik. I forgot to mention I tested and saw the same problem with the latest datasets release.\r\n\r\nInteresting, I tried your suggestion. This:\r\n```\r\npython -c 'import datasets; ds=\"lama\"; dataset = datasets.load_dataset(ds)[\"train\"]; \\\r\ndataset.shard(1, 0, contiguous=True).to_json(f\"{ds}-0.jsonl\", orient=\"records\", lines=True, force_ascii=False)'\r\n```\r\nworks fine and takes just a few GBs to complete.\r\n\r\nthis on the other hand blows up memory-wise:\r\n```\r\npython -c 'import datasets; ds=\"ascent_kb\"; dataset = datasets.load_dataset(ds)[\"train\"]; \\\r\ndataset.shard(1, 0, contiguous=True).to_json(f\"{ds}-0.jsonl\", orient=\"records\", lines=True, force_ascii=False)'\r\n```\r\nand I have to kill it before it uses up all RAM. (I have 128GB of it, so it should be more than enough)","> That's because some dataset related files were still left inside ~\/.cache\/huggingface\/datasets folder. 
You can wipe off datasets folder inside your cache maybe\r\n\r\nI think recent datasets added a method that will print out the path for all the different components for a given dataset, I can't recall the name though. It was when we were discussing a janitor program to clear up space selectively.","> and I have to kill it before it uses up all RAM. (I have 128GB of it, so it should be more than enough)\r\n\r\nSame thing just happened on my machine too. Memory leak somewhere maybe? Even if you were to load this dataset in your memory it shouldn't take more than 4GB. You were earlier doing this for `oscar` dataset. Is it working fine for that?","Hmm, looks like `datasets` has changed and won't accept my currently cached oscar-en (crashes), so I'd rather not download 0.5TB again. \r\n\r\nWere you able to reproduce the memory blow up with `ascent_kb`? It's should be a much quicker task to verify.\r\n\r\nBut yes, oscar worked just fine with `.shard()` which is what I used to process it fast.","What I tried is:\r\n```\r\nHF_DATASETS_OFFLINE=1 HF_DATASETS_CACHE=cache python -c 'import datasets; ds=\"oscar\"; \\\r\ndataset = datasets.load_dataset(ds, \"unshuffled_deduplicated_en\")[\"train\"]; \\\r\ndataset.shard(1000000, 0, contiguous=True).to_json(f\"{ds}-0.jsonl\", orient=\"records\", lines=True, force_ascii=False)'\r\n```\r\nand got:\r\n```\r\nUsing the latest cached version of the module from \/gpfswork\/rech\/six\/commun\/modules\/datasets_modules\/datasets\/oscar\/e4f06cecc7ae02f7adf85640b4019bf476d44453f251a1d84aebae28b0f8d51d (last modified on Fri Aug 6 01:52:35 2021) since it couldn't be found locally at oscar\/oscar.py or remotely (OfflineModeIsEnabled).\r\nReusing dataset oscar (cache\/oscar\/unshuffled_deduplicated_en\/1.0.0\/e4f06cecc7ae02f7adf85640b4019bf476d44453f251a1d84aebae28b0f8d51d)\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/gpfswork\/rech\/six\/commun\/conda\/stas\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 755, in load_dataset\r\n ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)\r\n File \"\/gpfswork\/rech\/six\/commun\/conda\/stas\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 737, in as_dataset\r\n datasets = utils.map_nested(\r\n File \"\/gpfswork\/rech\/six\/commun\/conda\/stas\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py\", line 203, in map_nested\r\n mapped = [\r\n File \"\/gpfswork\/rech\/six\/commun\/conda\/stas\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py\", line 204, in \r\n _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)\r\n File \"\/gpfswork\/rech\/six\/commun\/conda\/stas\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py\", line 142, in _single_map_nested\r\n return function(data_struct)\r\n File \"\/gpfswork\/rech\/six\/commun\/conda\/stas\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 764, in _build_single_dataset\r\n ds = self._as_dataset(\r\n File \"\/gpfswork\/rech\/six\/commun\/conda\/stas\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 834, in _as_dataset\r\n dataset_kwargs = ArrowReader(self._cache_dir, self.info).read(\r\n File \"\/gpfswork\/rech\/six\/commun\/conda\/stas\/lib\/python3.8\/site-packages\/datasets\/arrow_reader.py\", line 217, in read\r\n return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)\r\n File 
\"\/gpfswork\/rech\/six\/commun\/conda\/stas\/lib\/python3.8\/site-packages\/datasets\/arrow_reader.py\", line 238, in read_files\r\n pa_table = self._read_files(files, in_memory=in_memory)\r\n File \"\/gpfswork\/rech\/six\/commun\/conda\/stas\/lib\/python3.8\/site-packages\/datasets\/arrow_reader.py\", line 173, in _read_files\r\n pa_table: Table = self._get_table_from_filename(f_dict, in_memory=in_memory)\r\n File \"\/gpfswork\/rech\/six\/commun\/conda\/stas\/lib\/python3.8\/site-packages\/datasets\/arrow_reader.py\", line 308, in _get_table_from_filename\r\n table = ArrowReader.read_table(filename, in_memory=in_memory)\r\n File \"\/gpfswork\/rech\/six\/commun\/conda\/stas\/lib\/python3.8\/site-packages\/datasets\/arrow_reader.py\", line 327, in read_table\r\n return table_cls.from_file(filename)\r\n File \"\/gpfswork\/rech\/six\/commun\/conda\/stas\/lib\/python3.8\/site-packages\/datasets\/table.py\", line 450, in from_file\r\n table = _memory_mapped_arrow_table_from_file(filename)\r\n File \"\/gpfswork\/rech\/six\/commun\/conda\/stas\/lib\/python3.8\/site-packages\/datasets\/table.py\", line 43, in _memory_mapped_arrow_table_from_file\r\n memory_mapped_stream = pa.memory_map(filename)\r\n File \"pyarrow\/io.pxi\", line 782, in pyarrow.lib.memory_map\r\n File \"pyarrow\/io.pxi\", line 743, in pyarrow.lib.MemoryMappedFile._open\r\n File \"pyarrow\/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow\/error.pxi\", line 99, in pyarrow.lib.check_status\r\nOSError: Memory mapping file failed: Cannot allocate memory\r\n```","> Were you able to reproduce the memory blow up with ascent_kb? It's should be a much quicker task to verify.\r\n\r\nYes, this blows up memory-wise on my machine too. \r\n\r\nI found that a [similar error](https:\/\/discuss.huggingface.co\/t\/saving-memory-with-run-mlm-py-with-wikipedia-datasets\/4160) was posted on the forum on 5th March. Since you already knew how much time [#2663 comment](https:\/\/github.com\/huggingface\/datasets\/issues\/2663#issue-946552273) took, can you try benchmarking v1 and v2 for now maybe until we have a fix for this memory blow up?","OK, so I benchmarked using \"lama\" though it's too small for this kind of test, since the sharding is much slower than one thread here.\r\n\r\nResults: https:\/\/gist.github.com\/stas00\/dc1597a1e245c5915cfeefa0eee6902c\r\n\r\nSo sharding does really bad there, and your json over procs is doing great!\r\n\r\nAny suggestions to a somewhat bigger dataset, but not too big? say 10 times of lama?","Looks great! I had a few questions\/suggestions related to `benchmark-datasets-to_json.py`:\r\n \r\n1. You have used only 10_000 and 100_000 batch size. Including more batch sizes may help you find the perfect batch size for your machine and even give you some extra speed-up. \r\nFor eg, I found `load_dataset(\"cc100\", lang=\"eu\")` with batch size 125_000 took less time as compared to batch size 100_000 (71.16 sec v\/s 67.26 sec) since this dataset has 2 fields only `['id', 'text']`, so that's why we can go for higher batch size here. \r\n \r\n2. Why have you used `num_procs` 1 and 4 only? \r\n\r\nYou can use:\r\n1. `dataset = load_dataset(\"cc100\", lang=\"af\")`. Even though it has only 2 fields but there are around 9.9 mil samples. (lama had around 1.3 mil samples)\r\n2. `dataset = load_dataset(\"cc100\", lang=\"eu\")` -> 16 mil samples. (if you want something more than 9.9 mil)\r\n3. 
`dataset = load_dataset(\"neural_code_search\", 'search_corpus')` -> 4.7 mil samples","Thank you, @bhavitvyamalik \r\n\r\nMy apologies, at the moment I have not found time to do more benchmark with the proposed other datasets. I will try to do it later, but I don't want it to hold your PR, it's definitely a great improvement based on the benchmarks I did run! And the comparison to sharded is really just of interest to me to see if it's on par or slower.\r\n\r\nSo if other reviewers are happy, this definitely looks like a great improvement to me and addresses the request I made in the first place.\r\n\r\n> Why have you used num_procs 1 and 4 only?\r\n\r\nOh, no particular reason, I was just comparing to 4 shards on my desktop. Typically it's sufficient to go from 1 to 2-4 to see whether the distributed approach is faster or not. Once hit larger numbers you often run into bottlenecks like IO, and then numbers can be less representative. I hope it makes sense.","Tested it with a larger dataset (`srwac`) and memory utilisation remained constant with no swap memory used. @lhoestq should I also add test for the same? Last time I tried this, I got `OSError: [Errno 12] Cannot allocate memory` in CircleCI tests"],"created_at":1627979413000,"updated_at":1631541397000,"closed_at":1631541397000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2747","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2747","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2747.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2747.patch"},"body":"Closes #2663. I've tried adding multiprocessing in `to_json`. Here's some benchmarking I did to compare the timings of current version (say v1) and multi-proc version (say v2). I did this with `cpu_count` 4 (2015 Macbook Air)\r\n\r\n1. Dataset name: `ascent_kb` - 8.9M samples (all samples were used, reporting this for a single run)\r\nv1- ~225 seconds for converting whole dataset to json\r\nv2- ~200 seconds for converting whole dataset to json\r\n\r\n2. Dataset name: `lama` - 1.3M samples (all samples were used, reporting this for 2 runs)\r\nv1- ~26 seconds for converting whole dataset to json\r\nv2- ~23.6 seconds for converting whole dataset to json\r\n\r\nI think it's safe to say that v2 is 10% faster as compared to v1. Timings may improve further with better configuration.\r\n\r\nThe only bottleneck I feel is writing to file from the output list. If we can improve that aspect then timings may improve further. \r\n\r\nLet me know if any changes\/improvements can be done in this @stas00, @lhoestq, @albertvillanova. 
@lhoestq even suggested to extend this work with other export methods as well like `csv` or `parquet`.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2747\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2746","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2746\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2746\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2746\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2746","id":958551619,"node_id":"MDU6SXNzdWU5NTg1NTE2MTk=","number":2746,"title":"Cannot load `few-nerd` dataset","user":{"login":"Mehrad0711","id":28717374,"node_id":"MDQ6VXNlcjI4NzE3Mzc0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28717374?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Mehrad0711","html_url":"https:\/\/github.com\/Mehrad0711","followers_url":"https:\/\/api.github.com\/users\/Mehrad0711\/followers","following_url":"https:\/\/api.github.com\/users\/Mehrad0711\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Mehrad0711\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Mehrad0711\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Mehrad0711\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Mehrad0711\/orgs","repos_url":"https:\/\/api.github.com\/users\/Mehrad0711\/repos","events_url":"https:\/\/api.github.com\/users\/Mehrad0711\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Mehrad0711\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @Mehrad0711,\r\n\r\nI'm afraid there is no \"canonical\" Hugging Face dataset named \"few-nerd\".\r\n\r\nThere are 2 kinds of datasets hosted at the Hugging Face Hub:\r\n- canonical datasets (their identifier contains no slash \"\/\"): we, the Hugging Face team, supervise their implementation and we make sure they work correctly by means of our test suite\r\n- community datasets (their identifier contains a slash \"\/\", where before the slash it is the username or the organization name): those datasets are uploaded to the Hub by the community, and we, the Hugging Face team, do not supervise them; it is the responsibility of the user\/organization implementing them properly if they want them to be used by other users.\r\n\r\nIn this specific case, there is no \"canonical\" dataset named \"few-nerd\". 
On the other hand, there are two \"community\" datasets named \"few-nerd\":\r\n- [\"nbroad\/few-nerd\"](https:\/\/huggingface.co\/datasets\/nbroad\/few-nerd)\r\n- [\"dfki-nlp\/few-nerd\"](https:\/\/huggingface.co\/datasets\/dfki-nlp\/few-nerd)\r\n\r\nIf they were properly implemented, you should be able to load them this way:\r\n```python\r\n# \"nbroad\/few-nerd\" community dataset\r\nds = load_dataset(\"nbroad\/few-nerd\", \"supervised\")\r\n\r\n# \"dfki-nlp\/few-nerd\" community dataset\r\nds = load_dataset(\"dfki-nlp\/few-nerd\", \"supervised\")\r\n```\r\n\r\nHowever, they are not correctly implemented and both of them give errors:\r\n- \"nbroad\/few-nerd\":\r\n ```\r\n TypeError: expected str, bytes or os.PathLike object, not dict\r\n ```\r\n- \"dfki-nlp\/few-nerd\":\r\n ```\r\n ConnectionError: Couldn't reach https:\/\/cloud.tsinghua.edu.cn\/f\/09265750ae6340429827\/?dl=1\r\n ```\r\n\r\nYou could try to contact their users\/organizations to inform them about their bugs and ask them if they are planning to fix them. Alternatively you could try to implement your own script for this dataset.","Thanks @albertvillanova for your detailed explanation! I will resort to my own scripts for now. "],"created_at":1627942737000,"updated_at":1628019944000,"closed_at":1628019943000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\nCannot load `few-nerd` dataset.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\nload_dataset('few-nerd', 'supervised')\r\n```\r\n\r\n## Actual results\r\n\r\nExecuting above code will give the following error:\r\n\r\n```\r\nUsing the latest cached version of the module from \/Users\/Mehrad\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/few-nerd\/62464ace912a40a0f33a11a8310f9041c9dc3590ff2b3c77c14d83ca53cfec53 (last modified on Wed Jun 2 11:34:25 2021) since it couldn't be found locally at \/Users\/Mehrad\/Documents\/GitHub\/genienlp\/few-nerd\/few-nerd.py, or remotely (FileNotFoundError).\r\nDownloading and preparing dataset few_nerd\/supervised (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/Users\/Mehrad\/.cache\/huggingface\/datasets\/few_nerd\/supervised\/0.0.0\/62464ace912a40a0f33a11a8310f9041c9dc3590ff2b3c77c14d83ca53cfec53...\r\nTraceback (most recent call last):\r\n File \"\/Users\/Mehrad\/opt\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 693, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/Users\/Mehrad\/opt\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 1107, in _prepare_split\r\n disable=bool(logging.get_verbosity() == logging.NOTSET),\r\n File \"\/Users\/Mehrad\/opt\/anaconda3\/lib\/python3.7\/site-packages\/tqdm\/std.py\", line 1133, in __iter__\r\n for obj in iterable:\r\n File \"\/Users\/Mehrad\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/few-nerd\/62464ace912a40a0f33a11a8310f9041c9dc3590ff2b3c77c14d83ca53cfec53\/few-nerd.py\", line 196, in _generate_examples\r\n with open(filepath, encoding=\"utf-8\") as f:\r\nFileNotFoundError: [Errno 2] No such file or directory: '\/Users\/Mehrad\/.cache\/huggingface\/datasets\/downloads\/supervised\/train.json'\r\n```\r\nThe bug is probably in identifying and downloading the dataset. 
If I download the json splits directly from [link](https:\/\/github.com\/nbroad1881\/few-nerd\/tree\/main\/uncompressed) and put them under the downloads directory, they will be processed into arrow format correctly. \r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.11.0\r\n- Python version: 3.8\r\n- PyArrow version: 1.0.1\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2746\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2745","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2745\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2745\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2745\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2745","id":958269579,"node_id":"MDExOlB1bGxSZXF1ZXN0NzAxNTc0Mjcz","number":2745,"title":"added semeval18_emotion_classification dataset","user":{"login":"maxpel","id":31095360,"node_id":"MDQ6VXNlcjMxMDk1MzYw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/31095360?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/maxpel","html_url":"https:\/\/github.com\/maxpel","followers_url":"https:\/\/api.github.com\/users\/maxpel\/followers","following_url":"https:\/\/api.github.com\/users\/maxpel\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/maxpel\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/maxpel\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/maxpel\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/maxpel\/orgs","repos_url":"https:\/\/api.github.com\/users\/maxpel\/repos","events_url":"https:\/\/api.github.com\/users\/maxpel\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/maxpel\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["For training the multilabel classifier, I would combine the labels into a list, for example for the English dataset:\r\n\r\n```\r\ndfpre=pd.read_csv(path+\"2018-E-c-En-train.txt\",sep=\"\\t\")\r\ndfpre['list'] = dfpre[dfpre.columns[2:]].values.tolist()\r\ndf = dfpre[['Tweet', 'list']].copy()\r\ndf.rename(columns={'list': 'labels'}, inplace=True)\r\n```","Hi @maxpel , have you had a chance to take my comments into account ?\r\n\r\nLet me know if you have questions or if I can help :)","Hi @lhoestq ! I did take your comments into account, changed the naming and tried to add dummy data (manually). I am not sure if the dummy data is correct, maybe you can take a look at that.\r\nThe model card is still missing as I am currently very busy.","Thanks ! The dummy data looks all good, good job :)\r\n\r\nThe CI error can be fixed by merging `master` into your branch\r\n```bash\r\ngit fetch upstream\r\ngit merge upstream\/master\r\n```","Hi! I just added the model card and I did the merge you showed above. Should I then add and commit again? 
The CI error is still there right now."],"created_at":1627918795000,"updated_at":1632217715000,"closed_at":1632217715000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2745","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2745","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2745.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2745.patch"},"body":"I added the data set of SemEval 2018 Task 1 (Subtask 5) for emotion detection in three languages.\r\n\r\n```\r\ndatasets-cli test datasets\/semeval18_emotion_classification\/ --save_infos --all_configs\r\n\r\nRUN_SLOW=1 pytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_semeval18_emotion_classification\r\n```\r\nBoth commands ran successfully.\r\n\r\nI couldn't create the dummy data (the files are tsvs but have .txt ending, maybe that's the problem?) and therefore the test on the dummy data fails, maybe someone can help here.\r\n\r\nI also formatted the code:\r\n```\r\nblack --line-length 119 --target-version py36 datasets\/semeval18_emotion_classification\/\r\nisort datasets\/semeval18_emotion_classification\/\r\nflake8 datasets\/semeval18_emotion_classification\/\r\n```\r\nThat's the publication for reference:\r\n\r\nMohammad, S., Bravo-Marquez, F., Salameh, M., & Kiritchenko, S. (2018). SemEval-2018 task 1: Affect in tweets. Proceedings of the 12th International Workshop on Semantic Evaluation, 1\u201317. https:\/\/doi.org\/10.18653\/v1\/S18-1001","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2745\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2744","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2744\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2744\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2744\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2744","id":958146637,"node_id":"MDExOlB1bGxSZXF1ZXN0NzAxNDY4NDcz","number":2744,"title":"Fix key by recreating metadata JSON for journalists_questions 
dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1627910873000,"updated_at":1627982734000,"closed_at":1627982733000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2744","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2744","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2744.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2744.patch"},"body":"Close #2743.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2744\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2743","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2743\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2743\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2743\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2743","id":958119251,"node_id":"MDU6SXNzdWU5NTgxMTkyNTE=","number":2743,"title":"Dataset JSON is 
incorrect","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["As discussed, the metadata JSON files must be regenerated because the keys were nor properly generated and they will not be read by the builder:\r\n> Indeed there is some 
problem\/bug while reading the datasets_info.json file: there is a mismatch with the config.name keys in the file...\r\nIn the meanwhile, in order to be able to use the datasets_info.json file content, you can create the builder without passing the name :\r\n```\r\nIn [25]: builder = datasets.load_dataset_builder(\"journalists_questions\")\r\nIn [26]: builder.info.splits\r\nOut[26]: {'train': SplitInfo(name='train', num_bytes=342296, num_examples=10077, dataset_name='journalists_questions')}\r\n```\r\n\r\nAfter regenerating the metadata JSON file for this dataset, I get the right key:\r\n```\r\n{\"plain_text\": {\"description\": \"The journalists_questions corpus (\r\n```","Thanks!"],"created_at":1627909286000,"updated_at":1627985217000,"closed_at":1627982733000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\nThe JSON file generated for https:\/\/github.com\/huggingface\/datasets\/blob\/573f3d35081cee239d1b962878206e9abe6cde91\/datasets\/journalists_questions\/journalists_questions.py is https:\/\/github.com\/huggingface\/datasets\/blob\/573f3d35081cee239d1b962878206e9abe6cde91\/datasets\/journalists_questions\/dataset_infos.json.\r\n\r\nThe only config should be `plain_text`, but the first key in the JSON is `journalists_questions` (the dataset id) instead.\r\n\r\n```json\r\n{\r\n \"journalists_questions\": {\r\n \"description\": \"The journalists_questions corpus (version 1.0) is a collection of 10K human-written Arabic\\ntweets manually labeled for question identification over Arabic tweets posted by journalists.\\n\",\r\n ...\r\n```\r\n\r\n## Steps to reproduce the bug\r\n\r\nLook at the files.\r\n\r\n## Expected results\r\n\r\nThe first key should be `plain_text`:\r\n\r\n```json\r\n{\r\n \"plain_text\": {\r\n \"description\": \"The journalists_questions corpus (version 1.0) is a collection of 10K human-written Arabic\\ntweets manually labeled for question identification over Arabic tweets posted by journalists.\\n\",\r\n ...\r\n```\r\n\r\n## Actual results\r\n\r\n```json\r\n{\r\n \"journalists_questions\": {\r\n \"description\": \"The journalists_questions corpus (version 1.0) is a collection of 10K human-written Arabic\\ntweets manually labeled for question identification over Arabic tweets posted by journalists.\\n\",\r\n ...\r\n```\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2743\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2742","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2742\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2742\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2742\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2742","id":958114064,"node_id":"MDU6SXNzdWU5NTgxMTQwNjQ=","number":2742,"title":"Improve detection of streamable file 
types","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["maybe we should rather attempt to download a `Range` from the server and see if it works?"],"created_at":1627908909000,"updated_at":1627922149000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"**Is your feature request related to a problem? Please describe.**\r\n\r\n```python\r\nfrom datasets import load_dataset_builder\r\nfrom datasets.utils.streaming_download_manager import StreamingDownloadManager\r\nbuilder = load_dataset_builder(\"journalists_questions\", name=\"plain_text\")\r\nbuilder._split_generators(StreamingDownloadManager(base_path=builder.base_path))\r\n```\r\n\r\nraises\r\n\r\n```\r\nNotImplementedError: Extraction protocol for file at https:\/\/drive.google.com\/uc?export=download&id=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_U9U is not implemented yet\r\n```\r\n\r\nBut the file at https:\/\/drive.google.com\/uc?export=download&id=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_U9U is a text file and it can be streamed:\r\n\r\n```bash\r\ncurl --header \"Range: bytes=0-100\" -L https:\/\/drive.google.com\/uc\\?export\\=download\\&id\\=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_U9U\r\n506938088174940160 yes 1\r\n302221719412830209 yes 1\r\n289761704907268096 yes 1\r\n513820885032378369 yes %\r\n```\r\n\r\nYet, it's wrongly categorized as a file type that cannot be streamed because the test is currently based on 1. the presence of a file extension at the end of the URL (here: no extension), and 2. the inclusion of this extension in a list of supported formats.\r\n\r\n**Describe the solution you'd like**\r\n\r\nIn the case of an URL (instead of a local path), ask for the MIME type, and decide on that value? 
Note that it would not work in that case, because the value of `content_type` is `text\/html; charset=UTF-8`.\r\n\r\n**Describe alternatives you've considered**\r\n\r\nAdd a variable in the dataset script to set the data format by hand.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2742\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2741","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2741\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2741\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2741\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2741","id":957979559,"node_id":"MDU6SXNzdWU5NTc5Nzk1NTk=","number":2741,"title":"Add Hypersim dataset","user":{"login":"osanseviero","id":7246357,"node_id":"MDQ6VXNlcjcyNDYzNTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7246357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/osanseviero","html_url":"https:\/\/github.com\/osanseviero","followers_url":"https:\/\/api.github.com\/users\/osanseviero\/followers","following_url":"https:\/\/api.github.com\/users\/osanseviero\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/osanseviero\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/osanseviero\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/osanseviero\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/osanseviero\/orgs","repos_url":"https:\/\/api.github.com\/users\/osanseviero\/repos","events_url":"https:\/\/api.github.com\/users\/osanseviero\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/osanseviero\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1627898810000,"updated_at":1627898810000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** Hypersim\r\n- **Description:** photorealistic synthetic dataset for holistic indoor scene understanding\r\n- **Paper:** *link to the dataset paper if available*\r\n- **Data:** https:\/\/github.com\/apple\/ml-hypersim\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2741\/timeline","performed_via_github_app":null,"is_pull_request":false} 
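As a rough illustration of the `Range`-probe idea floated in the comments of issue 2742 above, the sketch below sends a ranged GET to the Google Drive URL quoted in that issue and inspects the status code and content type. This is only a minimal sketch: the function name, the 206/`text/html` heuristic, and the use of `requests` are assumptions for illustration, not the actual `datasets` streaming-detection code.

```python
# Minimal sketch of probing whether a remote file can be streamed, instead of
# relying on the URL's file extension (the heuristic criticized in issue 2742).
import requests

def probe_streamability(url: str, probe_bytes: int = 100) -> bool:
    # Ask the server for only the first bytes of the file.
    response = requests.get(
        url,
        headers={"Range": f"bytes=0-{probe_bytes}"},
        allow_redirects=True,
        stream=True,
        timeout=10,
    )
    content_type = response.headers.get("Content-Type", "")
    # 206 Partial Content means the server honoured the Range header; a plain 200
    # may still be streamable, but a text/html answer (e.g. a Google Drive
    # interstitial page, as noted in the issue) is a warning sign.
    return response.status_code == 206 or (
        response.status_code == 200 and not content_type.startswith("text/html")
    )

if __name__ == "__main__":
    print(probe_streamability(
        "https://drive.google.com/uc?export=download&id=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_U9U"
    ))
```

As the issue notes, a content-type check alone is not sufficient for this particular URL, which is why the sketch combines it with the Range response status.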
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2740","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2740\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2740\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2740\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2740","id":957911035,"node_id":"MDExOlB1bGxSZXF1ZXN0NzAxMjY0NTI3","number":2740,"title":"Update release instructions","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1627893960000,"updated_at":1627915196000,"closed_at":1627915196000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2740","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2740","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2740.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2740.patch"},"body":"Update release instructions.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2740\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2739","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2739\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2739\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2739\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2739","id":957751260,"node_id":"MDExOlB1bGxSZXF1ZXN0NzAxMTI0ODQ3","number":2739,"title":"Pass tokenize to sacrebleu only if explicitly passed by 
user","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1627880945000,"updated_at":1627964617000,"closed_at":1627964617000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2739","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2739","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2739.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2739.patch"},"body":"Next `sacrebleu` release (v2.0.0) will remove `sacrebleu.DEFAULT_TOKENIZER`: https:\/\/github.com\/mjpost\/sacrebleu\/pull\/152\/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15\r\n\r\nThis PR passes `tokenize` to `sacrebleu` only if explicitly passed by the user, otherwise it will not pass it (and `sacrebleu` will use its default, no matter where it is and how it is called).\r\n\r\nClose: #2737.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2739\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2738","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2738\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2738\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2738\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2738","id":957517746,"node_id":"MDExOlB1bGxSZXF1ZXN0NzAwOTI5NzA4","number":2738,"title":"Sunbird AI Ugandan low resource language 
dataset","user":{"login":"ak3ra","id":12105163,"node_id":"MDQ6VXNlcjEyMTA1MTYz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12105163?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ak3ra","html_url":"https:\/\/github.com\/ak3ra","followers_url":"https:\/\/api.github.com\/users\/ak3ra\/followers","following_url":"https:\/\/api.github.com\/users\/ak3ra\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ak3ra\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ak3ra\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ak3ra\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ak3ra\/orgs","repos_url":"https:\/\/api.github.com\/users\/ak3ra\/repos","events_url":"https:\/\/api.github.com\/users\/ak3ra\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ak3ra\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @ak3ra , have you had a chance to take my comments into account ?\r\n\r\nLet me know if you have questions or if I can help :)"],"created_at":1627831080000,"updated_at":1631008047000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2738","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2738","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2738.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2738.patch"},"body":"Multi-way parallel text corpus of 5 key Ugandan languages for the task of machine translation. ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2738\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2737","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2737\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2737\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2737\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2737","id":957124881,"node_id":"MDU6SXNzdWU5NTcxMjQ4ODE=","number":2737,"title":"SacreBLEU 
update","user":{"login":"devrimcavusoglu","id":46989091,"node_id":"MDQ6VXNlcjQ2OTg5MDkx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/46989091?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/devrimcavusoglu","html_url":"https:\/\/github.com\/devrimcavusoglu","followers_url":"https:\/\/api.github.com\/users\/devrimcavusoglu\/followers","following_url":"https:\/\/api.github.com\/users\/devrimcavusoglu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/devrimcavusoglu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/devrimcavusoglu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/devrimcavusoglu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/devrimcavusoglu\/orgs","repos_url":"https:\/\/api.github.com\/users\/devrimcavusoglu\/repos","events_url":"https:\/\/api.github.com\/users\/devrimcavusoglu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/devrimcavusoglu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @devrimcavusoglu, \r\nI tried your code with latest version of `datasets`and `sacrebleu==1.5.1` and it's running fine after changing one small thing:\r\n```\r\nsacrebleu = datasets.load_metric('sacrebleu')\r\npredictions = [\"It is a guide to action which ensures that the military always obeys the commands of the party\"]\r\nreferences = [[\"It is a guide to action that ensures that the military will forever heed Party commands\"]] # double brackets here should do the work\r\nresults = sacrebleu.compute(predictions=predictions, references=references)\r\nprint(results)\r\noutput: {'score': 41.180376356915765, 'counts': [11, 8, 6, 4], 'totals': [18, 17, 16, 15], 'precisions': [61.111111111111114, 47.05882352941177, 37.5, 26.666666666666668], 'bp': 1.0, 'sys_len': 18, 'ref_len': 16}\r\n```","@bhavitvyamalik hmm. I forgot double brackets, but still didn't work when used it with double brackets. It may be an isseu with platform (using win-10 currently), or versions. What is your platform and your version info for datasets, python, and sacrebleu ?","You can check that here, I've reproduced your code in [Google colab](https:\/\/colab.research.google.com\/drive\/1X90fHRgMLKczOVgVk7NDEw_ciZFDjaCM?usp=sharing). Looks like there was some issue in `sacrebleu` which was fixed later from what I've found [here](https:\/\/github.com\/pytorch\/fairseq\/issues\/2049#issuecomment-622367967). Upgrading `sacrebleu` to latest version should work.","It seems that next release of `sacrebleu` (v2.0.0) will break our `datasets` implementation to compute it. 
See my Google Colab: https:\/\/colab.research.google.com\/drive\/1SKmvvjQi6k_3OHsX5NPkZdiaJIfXyv9X?usp=sharing\r\n\r\nI'm reopening this Issue and making a Pull Request to fix it."],"created_at":1627689188000,"updated_at":1627964617000,"closed_at":1627964617000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"With the latest release of [sacrebleu](https:\/\/github.com\/mjpost\/sacrebleu), `datasets.metrics.sacrebleu` is broken, and getting error.\r\n\r\n AttributeError: module 'sacrebleu' has no attribute 'DEFAULT_TOKENIZER'\r\n\r\nthis happens since in new version of sacrebleu there is no `DEFAULT_TOKENIZER`, but sacrebleu.py tries to import it anyways. This can be fixed currently with fixing `sacrebleu==1.5.0`\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nsacrebleu= datasets.load_metric('sacrebleu')\r\npredictions = [\"It is a guide to action which ensures that the military always obeys the commands of the party\"]\r\nreferences = [\"It is a guide to action that ensures that the military will forever heed Party commands\"]\r\nresults = sacrebleu.compute(predictions=predictions, references=references)\r\nprint(results)\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.11.0\r\n- Platform: Windows-10-10.0.19041-SP0\r\n- Python version: Python 3.8.0\r\n- PyArrow version: 5.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2737\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2736","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2736\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2736\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2736\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2736","id":956895199,"node_id":"MDU6SXNzdWU5NTY4OTUxOTk=","number":2736,"title":"Add Microsoft Building Footprints dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new 
dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Motivation: this can be a useful dataset for researchers working on climate change adaptation, urban studies, geography, etc. I'll see if I can figure out how to add it!"],"created_at":1627661828000,"updated_at":1627707739000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** Microsoft Building Footprints\r\n- **Description:** With the goal to increase the coverage of building footprint data available as open data for OpenStreetMap and humanitarian efforts, we have released millions of building footprints as open data available to download free of charge.\r\n- **Paper:** *link to the dataset paper if available*\r\n- **Data:** https:\/\/www.microsoft.com\/en-us\/maps\/building-footprints\r\n- **Motivation:** this can be a useful dataset for researchers working on climate change adaptation, urban studies, geography, etc.\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n\r\nReported by: @sashavor","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2736\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2735","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2735\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2735\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2735\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2735","id":956889365,"node_id":"MDU6SXNzdWU5NTY4ODkzNjU=","number":2735,"title":"Add Open Buildings dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1627661319000,"updated_at":1627707685000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- 
**Name:** Open Buildings\r\n- **Description:** A dataset of building footprints to support social good applications.\r\n\r\n Building footprints are useful for a range of important applications, from population estimation, urban planning and humanitarian response, to environmental and climate science. This large-scale open dataset contains the outlines of buildings derived from high-resolution satellite imagery in order to support these types of uses. The project being based in Ghana, the current focus is on the continent of Africa.\r\n\r\n See: \"Mapping Africa's Buildings with Satellite Imagery\" https:\/\/ai.googleblog.com\/2021\/07\/mapping-africas-buildings-with.html\r\n- **Paper:** https:\/\/arxiv.org\/abs\/2107.12283\r\n- **Data:** https:\/\/sites.research.google\/open-buildings\/\r\n- **Motivation:** *what are some good reasons to have this dataset*\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n\r\nReported by: @osanseviero ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2735\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2734","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2734\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2734\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2734\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2734","id":956844874,"node_id":"MDExOlB1bGxSZXF1ZXN0NzAwMzc4NjI4","number":2734,"title":"Update BibTeX entry","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1627658571000,"updated_at":1627660078000,"closed_at":1627660078000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2734","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2734","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2734.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2734.patch"},"body":"Update BibTeX 
entry.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2734\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2733","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2733\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2733\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2733\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2733","id":956725476,"node_id":"MDExOlB1bGxSZXF1ZXN0NzAwMjc1NDMy","number":2733,"title":"Add missing parquet known extension","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1627650080000,"updated_at":1627651471000,"closed_at":1627651470000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2733","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2733","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2733.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2733.patch"},"body":"This code was failing because the parquet extension wasn't recognized:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nbase_url = \"https:\/\/storage.googleapis.com\/huggingface-nlp\/cache\/datasets\/wikipedia\/20200501.en\/1.0.0\/\"\r\ndata_files = {\"train\": base_url + \"wikipedia-train.parquet\"}\r\nwiki = load_dataset(\"parquet\", data_files=data_files, split=\"train\", streaming=True)\r\n```\r\n\r\nIt raises\r\n```python\r\nNotImplementedError: Extraction protocol for file at https:\/\/storage.googleapis.com\/huggingface-nlp\/cache\/datasets\/wikipedia\/20200501.en\/1.0.0\/wikipedia-train.parquet is not implemented yet\r\n```\r\n\r\nI added `parquet` to the list of known extensions\r\n\r\nEDIT: added pickle, conllu, xml extensions as well","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2733\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2732","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2732\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2732\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2732\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2732","id":956676360,"node_id":"MDExOlB1bGxSZXF1ZXN0NzAwMjMzMzQy","number":2732,"title":"Updated TTC4900 Dataset","user":{"login":"yavuzKomecoglu","id":5150963,"node_id":"MDQ6VXNlcjUxNTA5NjM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5150963?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yavuzKomecoglu","html_url":"https:\/\/github.com\/yavuzKomecoglu","followers_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/followers","following_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/orgs","repos_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/repos","events_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq, l\u00fctfen bu PR'\u0131 g\u00f6zden ge\u00e7irebilir misiniz?","> Thanks ! 
This looks all good now :)\r\n\r\nThanks"],"created_at":1627645934000,"updated_at":1627660851000,"closed_at":1627660694000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2732","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2732","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2732.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2732.patch"},"body":"- The source address of the TTC4900 dataset of [@savasy](https:\/\/github.com\/savasy) has been updated for direct download.\r\n- Updated readme.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2732\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2731","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2731\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2731\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2731\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2731","id":956087452,"node_id":"MDExOlB1bGxSZXF1ZXN0Njk5NzQwMjg5","number":2731,"title":"Adding to_tf_dataset method","user":{"login":"Rocketknight1","id":12866554,"node_id":"MDQ6VXNlcjEyODY2NTU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12866554?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Rocketknight1","html_url":"https:\/\/github.com\/Rocketknight1","followers_url":"https:\/\/api.github.com\/users\/Rocketknight1\/followers","following_url":"https:\/\/api.github.com\/users\/Rocketknight1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Rocketknight1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Rocketknight1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Rocketknight1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Rocketknight1\/orgs","repos_url":"https:\/\/api.github.com\/users\/Rocketknight1\/repos","events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Rocketknight1\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This seems to be working reasonably well in testing, and performance is way better. `tf.py_function` has been dropped for an input generator, but I moved as much of the code as possible outside the generator to allow TF to compile it correctly. I also avoid `tf.RaggedTensor` at all costs, and do the shuffle in the dataset followed by accessing sequential chunks, instead of shuffling an index tensor. The combination of all of these gives us a more flexible data loader as well as a ~20X boost in performance compared to the first solution.","I made a change to the `TFFormatter` in this PR that will need some changes to the tests, so I wanted to ping @lhoestq and anyone else before I made those changes.\r\n\r\nThe key problem is that up until now the `TFFormatter` always returns `RaggedTensor`, created using the very slow `tf.ragged.constant` function. 
This is a big performance penalty, but it's also (imo) surprising for users - `RaggedTensor` handles tensors where one dimension has variable length. This is a good choice for tokenized datasets with variable sequence length, but it's an odd choice when the non-batch dimensions are constant, such as in image datasets, or in datasets where all samples are padded to the same length (e.g. for TPU training).\r\n\r\nThe change I made was to try to return standard `Tensor` objects instead of `RaggedTensor` when all the samples in the batch had the same shape, and if that was not the case to fall back to fast `RaggedTensor` creation with `tf.ragged.stack`, and only falling back to the very slow `tf.ragged.constant` function as a last resort. I think this will match user expectations in most cases and greatly improve performance, but it's a (very slightly) breaking change, so any feedback is welcome!","Also I really can't emphasize enough how slow `tf.ragged.constant` is, it's bad enough to create a data pipeline bottleneck in more or less any training setup:\r\n![image](https:\/\/user-images.githubusercontent.com\/12866554\/131121785-4fbe942a-1ca4-4af6-a9da-cd6d5ea67b30.png)\r\n","Hi @lhoestq, the tests have been modified and everything is passing. The Windows tests look to be failing for an unrelated reason, but other than that I'm ready to merge if you are!","Hi @Rocketknight1 ! Feel free to merge `master` into this branch to fix and run the full CI :)","@lhoestq rebased onto master and it looks good! I'm doing some testing with new notebook examples, but are you happy to merge if that looks good?","@lhoestq No, I'm happy to merge it as-is and add documentation afterwards!"],"created_at":1627582225000,"updated_at":1631800254000,"closed_at":1631800254000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2731","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2731","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2731.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2731.patch"},"body":"Oh my **god** do not merge this yet, it's just a draft.\r\n\r\nI've added a method (via a mixin) to the `arrow_dataset.Dataset` class that automatically converts our Dataset classes to TF Dataset classes ready for training. It hopefully has most of the features we want, including streaming from disk (no need to load the whole dataset in memory!), correct shuffling, variable-length batches to reduce compute, and correct support for unusual padding. It achieves that by calling the tokenizer `pad` method in the middle of a TF compute graph via a very hacky call to `tf.py_function`, which is heretical but seems to work.\r\n\r\nA number of issues need to be resolved before it's ready to merge, though:\r\n\r\n1) Is a MixIn the right way to do this? Do other classes besides `arrow_dataset.Dataset` need this method too?\r\n2) Needs an argument to support constant-length batches for TPU training - this is easy to add and I'll do it soon.\r\n3) Needs the user to supply the list of columns to drop from the arrow `Dataset`. 
Is there some automatic way to get the columns we want, or see which columns were added by the tokenizer?\r\n4) Assumes the label column is always present and always called \"label\" - this is probably not great, but I'm not sure what the 'correct' thing to do here is.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2731\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2730","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2730\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2730\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2730\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2730","id":955987834,"node_id":"MDU6SXNzdWU5NTU5ODc4MzQ=","number":2730,"title":"Update CommonVoice with new release","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["cc @patrickvonplaten?","Does anybody know if there is a bundled link, which would allow direct data download instead of manual? \r\nSomething similar to: `https:\/\/voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com\/cv-corpus-6.1-2020-12-11\/ab.tar.gz` ? cc @patil-suraj \r\n","Also see: https:\/\/github.com\/common-voice\/common-voice-bundler\/issues\/15"],"created_at":1627574399000,"updated_at":1628353159000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** CommonVoice mid-2021 release\r\n- **Description:** more data in CommonVoice: Languages that have increased the most by percentage are Thai (almost 20x growth, from 12 hours to 250 hours), Luganda (almost 9x growth, from 8 to 80), Esperanto (7x growth, from 100 to 840), and Tamil (almost 8x, from 24 to 220).\r\n- **Paper:** https:\/\/discourse.mozilla.org\/t\/common-voice-2021-mid-year-dataset-release\/83812\r\n- **Data:** https:\/\/commonvoice.mozilla.org\/en\/datasets\r\n- **Motivation:** More data and more varied. 
I think we just need to add configs in the existing dataset script.\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2730\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2729","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2729\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2729\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2729\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2729","id":955920489,"node_id":"MDExOlB1bGxSZXF1ZXN0Njk5NTk5MjA4","number":2729,"title":"Fix IndexError while loading Arabic Billion Words dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1627570022000,"updated_at":1627650235000,"closed_at":1627650235000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2729","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2729","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2729.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2729.patch"},"body":"Catch `IndexError` and ignore that record.\r\n\r\nClose #2727.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2729\/timeline","performed_via_github_app":null,"is_pull_request":true} 
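PR 2729 above fixes issue 2727 by catching the `IndexError` raised for corpus records whose article text was removed at the source and skipping those records. A minimal, hypothetical sketch of that catch-and-skip pattern (not the actual `arabic_billion_words` loader code; tag extraction is simplified here with a regex):

```python
import re

def extract_tag(record: str, tag: str) -> str:
    # Simplified stand-in for the loader's tag extraction: return the first
    # <tag>...</tag> match; raises IndexError when the tag is absent
    # (e.g. removed articles that only keep ID, URL and Headline).
    matches = re.findall(rf"<{tag}>(.*?)</{tag}>", record, flags=re.DOTALL)
    return matches[0]

def generate_examples(records):
    # Yield one example per record, ignoring records without a Text tag,
    # as done in PR 2729.
    for idx, record in enumerate(records):
        try:
            text = extract_tag(record, "Text")
            url = extract_tag(record, "URL")
        except IndexError:
            continue
        yield idx, {"url": url, "text": text}
```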
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2728","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2728\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2728\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2728\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2728","id":955892970,"node_id":"MDU6SXNzdWU5NTU4OTI5NzA=","number":2728,"title":"Concurrent use of same dataset (already downloaded)","user":{"login":"PierreColombo","id":22492839,"node_id":"MDQ6VXNlcjIyNDkyODM5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22492839?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/PierreColombo","html_url":"https:\/\/github.com\/PierreColombo","followers_url":"https:\/\/api.github.com\/users\/PierreColombo\/followers","following_url":"https:\/\/api.github.com\/users\/PierreColombo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/PierreColombo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/PierreColombo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/PierreColombo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/PierreColombo\/orgs","repos_url":"https:\/\/api.github.com\/users\/PierreColombo\/repos","events_url":"https:\/\/api.github.com\/users\/PierreColombo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/PierreColombo\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Launching simultaneous job relying on the same datasets try some writing issue. I guess it is unexpected since I only need to load some already downloaded file.","If i have two jobs that use the same dataset. 
I got :\r\n\r\n\r\n File \"compute_measures.py\", line 181, in \r\n train_loader, val_loader, test_loader = get_dataloader(args)\r\n File \"\/gpfsdswork\/projects\/rech\/toto\/intRAOcular\/dataset_utils.py\", line 69, in get_dataloader\r\n dataset_train = load_dataset('paws', \"labeled_final\", split='train', download_mode=\"reuse_cache_if_exists\")\r\n File \"\/gpfslocalsup\/pub\/anaconda-py3\/2020.02\/envs\/pytorch-gpu-1.7.0\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 748, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"\/gpfslocalsup\/pub\/anaconda-py3\/2020.02\/envs\/pytorch-gpu-1.7.0\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 582, in download_and_prepare\r\n self._save_info()\r\n File \"\/gpfslocalsup\/pub\/anaconda-py3\/2020.02\/envs\/pytorch-gpu-1.7.0\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 690, in _save_info\r\n self.info.write_to_directory(self._cache_dir)\r\n File \"\/gpfslocalsup\/pub\/anaconda-py3\/2020.02\/envs\/pytorch-gpu-1.7.0\/lib\/python3.7\/site-packages\/datasets\/info.py\", line 195, in write_to_directory\r\n with open(os.path.join(dataset_info_dir, config.LICENSE_FILENAME), \"wb\") as f:\r\nFileNotFoundError: [Errno 2] No such file or directory: '\/gpfswork\/rech\/toto\/datasets\/paws\/labeled_final\/1.1.0\/09d8fae989bb569009a8f5b879ccf2924d3e5cd55bfe2e89e6dab1c0b50ecd34.incomplete\/LICENSE'","You can probably have a solution much faster than me (first time I use the library). But I suspect some write function are used when loading the dataset from cache.","I have the same issue:\r\n```\r\nTraceback (most recent call last):\r\n File \"\/dccstor\/tslm\/envs\/anaconda3\/envs\/trf-a100\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 652, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/dccstor\/tslm\/envs\/anaconda3\/envs\/trf-a100\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 1040, in _prepare_split\r\n with ArrowWriter(features=self.info.features, path=fpath) as writer:\r\n File \"\/dccstor\/tslm\/envs\/anaconda3\/envs\/trf-a100\/lib\/python3.9\/site-packages\/datasets\/arrow_writer.py\", line 192, in __init__\r\n self.stream = pa.OSFile(self._path, \"wb\")\r\n File \"pyarrow\/io.pxi\", line 829, in pyarrow.lib.OSFile.__cinit__\r\n File \"pyarrow\/io.pxi\", line 844, in pyarrow.lib.OSFile._open_writable\r\n File \"pyarrow\/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow\/error.pxi\", line 97, in pyarrow.lib.check_status\r\nFileNotFoundError: [Errno 2] Failed to open local file '\/dccstor\/tslm-gen\/.cache\/csv\/default-387f1f95c084d4df\/0.0.0\/2dc6629a9ff6b5697d82c25b73731dd440507a69cbce8b425db50b751e8fcfd0.incomplete\/csv-validation.arrow'. 
Detail: [errno 2] No such file or directory\r\nDuring handling of the above exception, another exception occurred:\r\nTraceback (most recent call last):\r\n File \"\/dccstor\/tslm\/elron\/tslm-gen\/train.py\", line 510, in \r\n main()\r\n File \"\/dccstor\/tslm\/elron\/tslm-gen\/train.py\", line 246, in main\r\n datasets = prepare_dataset(dataset_args, logger)\r\n File \"\/dccstor\/tslm\/elron\/tslm-gen\/data.py\", line 157, in prepare_dataset\r\n datasets = load_dataset(extension, data_files=data_files, split=dataset_split, cache_dir=dataset_args.dataset_cache_dir, na_filter=False, download_mode=dataset_args.dataset_generate_mode)\r\n File \"\/dccstor\/tslm\/envs\/anaconda3\/envs\/trf-a100\/lib\/python3.9\/site-packages\/datasets\/load.py\", line 742, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/dccstor\/tslm\/envs\/anaconda3\/envs\/trf-a100\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 574, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/dccstor\/tslm\/envs\/anaconda3\/envs\/trf-a100\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 654, in _download_and_prepare\r\n raise OSError(\r\nOSError: Cannot find data file. \r\nOriginal error:\r\n[Errno 2] Failed to open local file '\/dccstor\/tslm-gen\/.cache\/csv\/default-387f1f95c084d4df\/0.0.0\/2dc6629a9ff6b5697d82c25b73731dd440507a69cbce8b425db50b751e8fcfd0.incomplete\/csv-validation.arrow'. Detail: [errno 2] No such file or directory\r\n```"],"created_at":1627568318000,"updated_at":1627889157000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nWhen launching several jobs at the same time loading the same dataset trigger some errors see (last comments).\r\n\r\n## Steps to reproduce the bug\r\nexport HF_DATASETS_CACHE=\/gpfswork\/rech\/toto\/datasets\r\nfor MODEL in \"bert-base-uncased\" \"roberta-base\" \"distilbert-base-cased\"; do # \"bert-base-uncased\" \"bert-large-cased\" \"roberta-large\" \"albert-base-v1\" \"albert-large-v1\"; do\r\n for TASK_NAME in \"mrpc\" \"rte\" 'imdb' \"paws\" \"mnli\"; do\r\n export OUTPUT_DIR=${MODEL}_${TASK_NAME}\r\n sbatch --job-name=${OUTPUT_DIR} \\\r\n --gres=gpu:1 \\\r\n --no-requeue \\\r\n --cpus-per-task=10 \\\r\n --hint=nomultithread \\\r\n --time=1:00:00 \\\r\n --output=jobinfo\/${OUTPUT_DIR}_%j.out \\\r\n --error=jobinfo\/${OUTPUT_DIR}_%j.err \\\r\n --qos=qos_gpu-t4 \\\r\n --wrap=\"module purge; module load pytorch-gpu\/py3\/1.7.0 ; export HF_DATASETS_OFFLINE=1; export HF_DATASETS_CACHE=\/gpfswork\/rech\/toto\/datasets; python compute_measures.py --seed=$SEED --saving_path=results --batch_size=$BATCH_SIZE --task_name=$TASK_NAME --model_name=\/gpfswork\/rech\/toto\/transformers_models\/$MODEL\"\r\n\r\n done\r\ndone\r\n\r\n\r\n\r\n```python\r\n# Sample code to reproduce the bug\r\n dataset_train = load_dataset('imdb', split='train', download_mode=\"reuse_cache_if_exists\")\r\n dataset_train = dataset_train.map(lambda e: tokenizer(e['text'], truncation=True, padding='max_length'),\r\n batched=True).select(list(range(args.filter)))\r\n\r\n dataset_val = load_dataset('imdb', split='train', download_mode=\"reuse_cache_if_exists\")\r\n dataset_val = dataset_val.map(lambda e: tokenizer(e['text'], truncation=True, padding='max_length'),\r\n batched=True).select(list(range(args.filter, args.filter + 5000)))\r\n\r\n dataset_test = load_dataset('imdb', split='test', download_mode=\"reuse_cache_if_exists\")\r\n dataset_test = dataset_test.map(lambda e: 
tokenizer(e['text'], truncation=True, padding='max_length'),\r\n batched=True)\r\n```\r\n\r\n## Expected results\r\nI believe I am doing something wrong with the objects. \r\n\r\n## Actual results\r\nTraceback (most recent call last):\r\n File \"\/gpfslocalsup\/pub\/anaconda-py3\/2020.02\/envs\/pytorch-gpu-1.7.0\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 652, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/gpfslocalsup\/pub\/anaconda-py3\/2020.02\/envs\/pytorch-gpu-1.7.0\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 983, in _prepare_split\r\n check_duplicates=True,\r\n File \"\/gpfslocalsup\/pub\/anaconda-py3\/2020.02\/envs\/pytorch-gpu-1.7.0\/lib\/python3.7\/site-packages\/datasets\/arrow_writer.py\", line 192, in __init__\r\n self.stream = pa.OSFile(self._path, \"wb\")\r\n File \"pyarrow\/io.pxi\", line 829, in pyarrow.lib.OSFile.__cinit__\r\n File \"pyarrow\/io.pxi\", line 844, in pyarrow.lib.OSFile._open_writable\r\n File \"pyarrow\/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow\/error.pxi\", line 97, in pyarrow.lib.check_status\r\nFileNotFoundError: [Errno 2] Failed to open local file '\/gpfswork\/rech\/tts\/unm25jp\/datasets\/paws\/labeled_final\/1.1.0\/09d8fae989bb569009a8f5b879ccf2924d3e5cd55bfe2e89e6dab1c0b50ecd34.incomplete\/paws-test.arrow'. Detail: [errno 2] No such file or directory\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"compute_measures.py\", line 181, in \r\n train_loader, val_loader, test_loader = get_dataloader(args)\r\n File \"\/gpfsdswork\/projects\/rech\/toto\/intRAOcular\/dataset_utils.py\", line 69, in get_dataloader\r\n dataset_train = load_dataset('paws', \"labeled_final\", split='train', download_mode=\"reuse_cache_if_exists\")\r\n File \"\/gpfslocalsup\/pub\/anaconda-py3\/2020.02\/envs\/pytorch-gpu-1.7.0\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 748, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"\/gpfslocalsup\/pub\/anaconda-py3\/2020.02\/envs\/pytorch-gpu-1.7.0\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 575, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/gpfslocalsup\/pub\/anaconda-py3\/2020.02\/envs\/pytorch-gpu-1.7.0\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 658, in _download_and_prepare\r\n + str(e)\r\nOSError: Cannot find data file.\r\nOriginal error:\r\n[Errno 2] Failed to open local file '\/gpfswork\/rech\/toto\/datasets\/paws\/labeled_final\/1.1.0\/09d8fae989bb569009a8f5b879ccf2924d3e5cd55bfe2e89e6dab1c0b50ecd34.incomplete\/paws-test.arrow'. 
Detail: [errno 2] No such file or directory\r\n\r\n## Environment info\r\n\r\n- `datasets` version: datasets==1.8.0\r\n- Platform: linux (jeanzay)\r\n- Python version: pyarrow==2.0.0\r\n- PyArrow version: 3.7.8\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2728\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2727","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2727\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2727\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2727\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2727","id":955812149,"node_id":"MDU6SXNzdWU5NTU4MTIxNDk=","number":2727,"title":"Error in loading the Arabic Billion Words Corpus","user":{"login":"M-Salti","id":9285264,"node_id":"MDQ6VXNlcjkyODUyNjQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9285264?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/M-Salti","html_url":"https:\/\/github.com\/M-Salti","followers_url":"https:\/\/api.github.com\/users\/M-Salti\/followers","following_url":"https:\/\/api.github.com\/users\/M-Salti\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/M-Salti\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/M-Salti\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/M-Salti\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/M-Salti\/orgs","repos_url":"https:\/\/api.github.com\/users\/M-Salti\/repos","events_url":"https:\/\/api.github.com\/users\/M-Salti\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/M-Salti\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["I modified the dataset loading script to catch the `IndexError` and inspect the records at which the error is happening, and I found this:\r\nFor the `Techreen` config, the error happens in 36 records when trying to find the `Text` or `Dateline` tags. 
All these 36 records look something like:\r\n```\r\n\r\n TRN_ARB_0248167<\/ID>\r\n http:\/\/tishreen.news.sy\/tishreen\/public\/read\/248240<\/URL>\r\n Removed, because the original articles was in English<\/Headline>\r\n<\/Techreen>\r\n```\r\n\r\nand all the 288 faulty records in the `Almustaqbal` config look like:\r\n```\r\n\r\n MTL_ARB_0028398<\/ID>\r\n \r\n http:\/\/www.almustaqbal.com\/v4\/article.aspx?type=NP&ArticleID=179015<\/URL>\r\n Removed because it is not available in the original site<\/Headline>\r\n<\/Almustaqbal>\r\n```\r\n\r\nso the error is happening because the articles were removed and so the associated records lack the `Text` tag.\r\n\r\nIn this case, I think we just need to catch the `IndexError` and ignore (pass) it.\r\n","Thanks @M-Salti for reporting this issue and for your investigation.\r\n\r\nIndeed, those `IndexError` should be catched and the corresponding record should be ignored.\r\n\r\nI'm opening a Pull Request to fix it."],"created_at":1627563189000,"updated_at":1627650235000,"closed_at":1627650235000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nI get `IndexError: list index out of range` when trying to load the `Techreen` and `Almustaqbal` configs of the dataset.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nload_dataset(\"arabic_billion_words\", \"Techreen\")\r\nload_dataset(\"arabic_billion_words\", \"Almustaqbal\")\r\n```\r\n\r\n## Expected results\r\nThe datasets load succefully.\r\n\r\n## Actual results\r\n```python\r\n_extract_tags(self, sample, tag)\r\n 139 if len(out) > 0:\r\n 140 break\r\n--> 141 return out[0]\r\n 142 \r\n 143 def _clean_text(self, text):\r\n\r\nIndexError: list index out of range\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.10.2\r\n- Platform: Ubuntu 18.04.5 LTS\r\n- Python version: 3.7.11\r\n- PyArrow version: 3.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2727\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2726","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2726\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2726\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2726\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2726","id":955674388,"node_id":"MDExOlB1bGxSZXF1ZXN0Njk5Mzg5MDk1","number":2726,"title":"Typo fix 
`tokenize_exemple`","user":{"login":"shabie","id":30535146,"node_id":"MDQ6VXNlcjMwNTM1MTQ2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/30535146?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/shabie","html_url":"https:\/\/github.com\/shabie","followers_url":"https:\/\/api.github.com\/users\/shabie\/followers","following_url":"https:\/\/api.github.com\/users\/shabie\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/shabie\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/shabie\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/shabie\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/shabie\/orgs","repos_url":"https:\/\/api.github.com\/users\/shabie\/repos","events_url":"https:\/\/api.github.com\/users\/shabie\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/shabie\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1627553017000,"updated_at":1627560025000,"closed_at":1627560025000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2726","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2726","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2726.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2726.patch"},"body":"There is a small typo in the main README.md","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2726\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2725","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2725\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2725\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2725\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2725","id":955020776,"node_id":"MDExOlB1bGxSZXF1ZXN0Njk4ODMwNjYw","number":2725,"title":"Pass use_auth_token to 
request_etags","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1627488809000,"updated_at":1627490282000,"closed_at":1627490282000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2725","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2725","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2725.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2725.patch"},"body":"Fix #2724.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2725\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2724","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2724\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2724\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2724\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2724","id":954919607,"node_id":"MDU6SXNzdWU5NTQ5MTk2MDc=","number":2724,"title":"404 Error when loading remote data files from private 
repo","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["I guess the issue is when computing the ETags of the remote files. 
Indeed `use_auth_token` must be passed to `request_etags` here:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/35b5e4bc0cb2ed896e40f3eb2a4aa3de1cb1a6c5\/src\/datasets\/builder.py#L160-L160","Yes, I remember having properly implemented that: \r\n- https:\/\/github.com\/huggingface\/datasets\/commit\/7a9c62f7cef9ecc293f629f859d4375a6bd26dc8#diff-f933ce41f71c6c0d1ce658e27de62cbe0b45d777e9e68056dd012ac3eb9324f7R160\r\n- https:\/\/github.com\/huggingface\/datasets\/pull\/2628\/commits\/6350a03b4b830339a745f7b1da46ece784ca734c\r\n\r\nBut a subsequent refactoring accidentally removed it...","I have opened a PR to fix it @lewtun."],"created_at":1627482263000,"updated_at":1627534729000,"closed_at":1627490281000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nWhen loading remote data files from a private repo, a 404 error is raised.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nurl = hf_hub_url(\"lewtun\/asr-preds-test\", \"preds.jsonl\", repo_type=\"dataset\")\r\ndset = load_dataset(\"json\", data_files=url, use_auth_token=True)\r\n# HTTPError: 404 Client Error: Not Found for url: https:\/\/huggingface.co\/datasets\/lewtun\/asr-preds-test\/resolve\/main\/preds.jsonl\r\n```\r\n\r\n## Expected results\r\nLoad dataset.\r\n\r\n## Actual results\r\n404 Error.\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2724\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2723","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2723\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2723\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2723\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2723","id":954864104,"node_id":"MDExOlB1bGxSZXF1ZXN0Njk4Njk0NDMw","number":2723,"title":"Fix en subset by modifying dataset_info with correct validation 
infos","user":{"login":"thomasw21","id":24695242,"node_id":"MDQ6VXNlcjI0Njk1MjQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24695242?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomasw21","html_url":"https:\/\/github.com\/thomasw21","followers_url":"https:\/\/api.github.com\/users\/thomasw21\/followers","following_url":"https:\/\/api.github.com\/users\/thomasw21\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomasw21\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomasw21\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomasw21\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomasw21\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomasw21\/repos","events_url":"https:\/\/api.github.com\/users\/thomasw21\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomasw21\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1627479379000,"updated_at":1627485743000,"closed_at":1627485743000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2723","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2723","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2723.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2723.patch"},"body":"- Related to: #2682 \r\n\r\nWe correct the values of `en` subset concerning the expected validation values (both `num_bytes` and `num_examples`.\r\n\r\nInstead of having:\r\n\r\n`{\"name\": \"validation\", \"num_bytes\": 828589180707, \"num_examples\": 364868892, \"dataset_name\": \"c4\"}`\r\n\r\nWe replace with correct values:\r\n\r\n`{\"name\": \"validation\", \"num_bytes\": 825767266, \"num_examples\": 364608, \"dataset_name\": \"c4\"}`\r\n\r\nThere are still issues with validation with other subsets, but I can't download all the files, unzip to check for the correct number of bytes. (If you have a fast way to obtain those values for other subsets, I can do this in this PR ... 
otherwise I can't spend those resources)\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2723\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2722","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2722\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2722\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2722\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2722","id":954446053,"node_id":"MDU6SXNzdWU5NTQ0NDYwNTM=","number":2722,"title":"Missing cache file","user":{"login":"PosoSAgapo","id":33200481,"node_id":"MDQ6VXNlcjMzMjAwNDgx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33200481?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/PosoSAgapo","html_url":"https:\/\/github.com\/PosoSAgapo","followers_url":"https:\/\/api.github.com\/users\/PosoSAgapo\/followers","following_url":"https:\/\/api.github.com\/users\/PosoSAgapo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/PosoSAgapo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/PosoSAgapo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/PosoSAgapo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/PosoSAgapo\/orgs","repos_url":"https:\/\/api.github.com\/users\/PosoSAgapo\/repos","events_url":"https:\/\/api.github.com\/users\/PosoSAgapo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/PosoSAgapo\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This could be solved by going to the glue\/ directory and delete sst2 directory, then load the dataset again will help you redownload the dataset.","Hi ! 
Not sure why this file was missing, but yes the way to fix this is to delete the sst2 directory and to reload the dataset"],"created_at":1627444327000,"updated_at":1627463223000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Strangely missing cache file after I restart my program again.\r\n\r\n`glue_dataset = datasets.load_dataset('glue', 'sst2')`\r\n\r\n`FileNotFoundError: [Errno 2] No such file or directory: \/Users\/chris\/.cache\/huggingface\/datasets\/glue\/sst2\/1.0.0\/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96d6053ad\/dataset_info.json'`\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2722\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2721","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2721\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2721\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2721\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2721","id":954238230,"node_id":"MDExOlB1bGxSZXF1ZXN0Njk4MTY0Njg3","number":2721,"title":"Deal with the bad check in test_load.py","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! I did a change for this test already in #2662 :\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/00686c46b7aaf6bfcd4102cec300a3c031284a5a\/tests\/test_load.py#L312-L316\r\n\r\n(though I have to change the variable name `m_combined_path` to `m_url` or something)\r\n\r\nI guess it's ok to remove this check for now :)"],"created_at":1627417403000,"updated_at":1627466314000,"closed_at":1627462398000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2721","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2721","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2721.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2721.patch"},"body":"This PR removes a check that's been added in #2684. My intention with this check was to capture an URL in the error message, but instead, it captures a substring of the previous regex match in the test function. 
Another option would be to replace this check with:\r\n```python\r\nm_paths = re.findall(r\"\\S*_dummy\/_dummy.py\\b\", str(exc_info.value)) # on Linux this will match an URL as well as a local_path due to different os.sep, so take the last element (an URL always comes last in the list)\r\nassert len(m_paths) > 0 and is_remote_url(m_paths[-1]) # is_remote_url comes from datasets.utils.file_utils\r\n```\r\n\r\n@lhoestq Let me know which one of these two approaches (delete or replace) do you prefer?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2721\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2720","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2720\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2720\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2720\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2720","id":954024426,"node_id":"MDExOlB1bGxSZXF1ZXN0Njk3OTgxNjMx","number":2720,"title":"fix: \ud83d\udc1b fix two typos","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1627401017000,"updated_at":1627411097000,"closed_at":1627411096000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2720","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2720","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2720.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2720.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2720\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2719","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2719\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2719\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2719\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2719","id":953932416,"node_id":"MDU6SXNzdWU5NTM5MzI0MTY=","number":2719,"title":"Use ETag in 
streaming mode to detect resource updates","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1627395429000,"updated_at":1627395429000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"**Is your feature request related to a problem? Please describe.**\r\n\r\nI want to cache data I generate from processing a dataset I've loaded in streaming mode, but I've currently no way to know if the remote data has been updated or not, thus I don't know when to invalidate my cache.\r\n\r\n**Describe the solution you'd like**\r\n\r\nTake the ETag of the data files into account and provide it (directly or through a hash) to give a signal that I can invalidate my cache.\r\n\r\n**Describe alternatives you've considered**\r\n\r\nNone\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2719\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2718","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2718\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2718\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2718\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2718","id":953360663,"node_id":"MDExOlB1bGxSZXF1ZXN0Njk3NDE0NTQy","number":2718,"title":"New documentation 
structure","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I just did some minor changes + added some content in these sections: share, about arrow, about cache\r\n\r\nFeel free to mark this PR as ready for review ! :)","I just separated the `Share` How-to page into three pages: share, dataset_script and dataset_card.\r\n\r\nThis way in the share page we can explain in more details how to share a community or a canonical dataset - focus in their differences and the steps to upload them.\r\n\r\nAlso given that making a dataset script or a dataset card both require several steps, I feel like it's better to have dedicated pages for them.\r\n\r\nLet me know what you think @stevhliu and others. We can still revert this change if you feel like it was better with everything in the same place.","I just added some minor changes to match the style, fix typos, etc. Great work on the conceptual guides, I learned a lot from them and I'm sure they will help a lot of other people too!\r\n\r\nI am fine with splitting `Share` into three separate pages. I think this probably makes it easier for users to navigate, instead of having to scroll up and down on a really long single page.","Thanks a lot for all the suggestions ! I'm doing the final changes based on the remaining comments, then we can merge and release v1.12 of `datasets` and the new documentation ^^","Alright I think I took all the suggestions and comments into account :)\r\nThanks everyone for the help !"],"created_at":1627341313000,"updated_at":1631553653000,"closed_at":1631553652000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2718","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2718","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2718.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2718.patch"},"body":"Organize Datasets documentation into four documentation types to improve clarity and discoverability of content.\r\n\r\n**Content to add in the very short term (feel free to add anything I'm missing):**\r\n- A discussion on why Datasets uses Arrow that includes some context and background about why we use Arrow. Would also be great to talk about Datasets speed and performance here, and if you can share any benchmarking\/tests you did, that would be awesome! 
Finally, a discussion about how memory-mapping frees the user from RAM constraints would be very helpful.\r\n- Explain why you would want to disable or override verifications when loading a dataset.\r\n- If possible, include a code sample of when the number of elements in the field of an output dictionary aren\u2019t the same as the other fields in the output dictionary (taken from the [note](https:\/\/huggingface.co\/docs\/datasets\/processing.html#augmenting-the-dataset) here).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2718\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2717","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2717\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2717\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2717\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2717","id":952979976,"node_id":"MDExOlB1bGxSZXF1ZXN0Njk3MDkzNDEx","number":2717,"title":"Fix shuffle on IterableDataset that disables batching in case any functions were mapped","user":{"login":"amankhandelia","id":7098967,"node_id":"MDQ6VXNlcjcwOTg5Njc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7098967?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/amankhandelia","html_url":"https:\/\/github.com\/amankhandelia","followers_url":"https:\/\/api.github.com\/users\/amankhandelia\/followers","following_url":"https:\/\/api.github.com\/users\/amankhandelia\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/amankhandelia\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/amankhandelia\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/amankhandelia\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/amankhandelia\/orgs","repos_url":"https:\/\/api.github.com\/users\/amankhandelia\/repos","events_url":"https:\/\/api.github.com\/users\/amankhandelia\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/amankhandelia\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1627310542000,"updated_at":1627322654000,"closed_at":1627317006000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2717","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2717","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2717.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2717.patch"},"body":"Made a very minor change to fix the issue#2716. 
Added the missing argument in the constructor call.\r\n\r\nAs discussed in the bug report, the change is made to prevent the `shuffle` method call from resetting the value of `batched` attribute in `MappedExamplesIterable`\r\n\r\nFix #2716.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2717\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2716","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2716\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2716\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2716\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2716","id":952902778,"node_id":"MDU6SXNzdWU5NTI5MDI3Nzg=","number":2716,"title":"Calling shuffle on IterableDataset will disable batching in case any functions were mapped","user":{"login":"amankhandelia","id":7098967,"node_id":"MDQ6VXNlcjcwOTg5Njc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7098967?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/amankhandelia","html_url":"https:\/\/github.com\/amankhandelia","followers_url":"https:\/\/api.github.com\/users\/amankhandelia\/followers","following_url":"https:\/\/api.github.com\/users\/amankhandelia\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/amankhandelia\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/amankhandelia\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/amankhandelia\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/amankhandelia\/orgs","repos_url":"https:\/\/api.github.com\/users\/amankhandelia\/repos","events_url":"https:\/\/api.github.com\/users\/amankhandelia\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/amankhandelia\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi :) Good catch ! 
Feel free to open a PR if you want to contribute, this would be very welcome ;)","Have raised the PR [here](https:\/\/github.com\/huggingface\/datasets\/pull\/2717)","Fixed by #2717."],"created_at":1627305899000,"updated_at":1627322683000,"closed_at":1627322683000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"When using dataset in streaming mode, if one applies `shuffle` method on the dataset and `map` method for which `batched=True` than the batching operation will not happen, instead `batched` will be set to `False`\r\n\r\nI did RCA on the dataset codebase, the problem is emerging from [this line of code](https:\/\/github.com\/huggingface\/datasets\/blob\/d25a0bf94d9f9a9aa6cabdf5b450b9c327d19729\/src\/datasets\/iterable_dataset.py#L197) here as it is\r\n`self.ex_iterable.shuffle_data_sources(seed), function=self.function, batch_size=self.batch_size`, as one can see it is missing batched argument, which means that the iterator fallsback to default constructor value, which in this case is `False`.\r\nTo remedy the problem we can change this line to\r\n`self.ex_iterable.shuffle_data_sources(seed), function=self.function, batched=self.batched, batch_size=self.batch_size`\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2716\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2715","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2715\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2715\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2715\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2715","id":952845229,"node_id":"MDExOlB1bGxSZXF1ZXN0Njk2OTc5MjQ1","number":2715,"title":"Update PAN-X data URL in XTREME dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Merging since the CI is just about missing infos in the dataset 
card"],"created_at":1627302077000,"updated_at":1627306079000,"closed_at":1627306079000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2715","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2715","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2715.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2715.patch"},"body":"Related to #2710, #2691.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2715\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2714","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2714\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2714\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2714\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2714","id":952580820,"node_id":"MDU6SXNzdWU5NTI1ODA4MjA=","number":2714,"title":"add more precise information for size","user":{"login":"pennyl67","id":1493902,"node_id":"MDQ6VXNlcjE0OTM5MDI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1493902?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/pennyl67","html_url":"https:\/\/github.com\/pennyl67","followers_url":"https:\/\/api.github.com\/users\/pennyl67\/followers","following_url":"https:\/\/api.github.com\/users\/pennyl67\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/pennyl67\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/pennyl67\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/pennyl67\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/pennyl67\/orgs","repos_url":"https:\/\/api.github.com\/users\/pennyl67\/repos","events_url":"https:\/\/api.github.com\/users\/pennyl67\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/pennyl67\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["We already have this information in the dataset_infos.json files of each dataset.\r\nMaybe we can parse these files in the backend to return their content with the endpoint at huggingface.co\/api\/datasets\r\n\r\nFor now if you want to access this info you have to load the json for each dataset. For example:\r\n- for a dataset on github like `squad` \r\n- https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/master\/datasets\/squad\/dataset_infos.json\r\n- for a community dataset on the hub like `lhoestq\/squad`:\r\n https:\/\/huggingface.co\/datasets\/lhoestq\/squad\/resolve\/main\/dataset_infos.json"],"created_at":1627283463000,"updated_at":1627290985000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"For the import into ELG, we would like a more precise description of the size of the dataset, instead of the current size categories. The size can be expressed in bytes, or any other preferred size unit. 
As suggested in the slack channel, perhaps this could be computed with a regex for existing datasets.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2714\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2713","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2713\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2713\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2713\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2713","id":952515256,"node_id":"MDExOlB1bGxSZXF1ZXN0Njk2Njk3MzU0","number":2713,"title":"Enumerate all ner_tags values in WNUT 17 dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1627276936000,"updated_at":1627291855000,"closed_at":1627291855000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2713","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2713","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2713.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2713.patch"},"body":"This PR does:\r\n- Enumerate all ner_tags in dataset card Data Fields section\r\n- Add all metadata tags to dataset card\r\n\r\nClose #2709.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2713\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2710","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2710\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2710\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2710\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2710","id":951723326,"node_id":"MDExOlB1bGxSZXF1ZXN0Njk2MDYyNjAy","number":2710,"title":"Update WikiANN data 
URL","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["We have to update the URL in the XTREME benchmark as well:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/0dfc639cec450ed8762a997789a2ed63e63cdcf2\/datasets\/xtreme\/xtreme.py#L411-L411\r\n\r\n"],"created_at":1627057761000,"updated_at":1627292063000,"closed_at":1627292063000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2710","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2710","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2710.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2710.patch"},"body":"WikiANN data source URL is no longer accessible: 404 error from Dropbox.\r\n\r\nWe have decided to host it at Hugging Face. 
This PR updates the data source URL, the metadata JSON file and the dataset card.\r\n\r\nClose #2691.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2710\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2709","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2709\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2709\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2709\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2709","id":951534757,"node_id":"MDU6SXNzdWU5NTE1MzQ3NTc=","number":2709,"title":"Missing documentation for wnut_17 (ner_tags)","user":{"login":"maxpel","id":31095360,"node_id":"MDQ6VXNlcjMxMDk1MzYw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/31095360?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/maxpel","html_url":"https:\/\/github.com\/maxpel","followers_url":"https:\/\/api.github.com\/users\/maxpel\/followers","following_url":"https:\/\/api.github.com\/users\/maxpel\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/maxpel\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/maxpel\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/maxpel\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/maxpel\/orgs","repos_url":"https:\/\/api.github.com\/users\/maxpel\/repos","events_url":"https:\/\/api.github.com\/users\/maxpel\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/maxpel\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @maxpel, thanks for reporting this issue.\r\n\r\nIndeed, the documentation in the dataset card is not complete. I\u2019m opening a Pull Request to fix it.\r\n\r\nAs the paper explains, there are 6 entity types and we have ordered them alphabetically: `corporation`, `creative-work`, `group`, `location`, `person` and `product`. \r\n\r\nEach of these entity types has 2 possible IOB2 format tags: \r\n- `B-`: to indicate that the token is the beginning of an entity name, and the \r\n- `I-`: to indicate that the token is inside an entity name. \r\n\r\nAdditionally, there is the standalone IOB2 tag \r\n- `O`: that indicates that the token belongs to no named entity. \r\n\r\nIn total there are 13 possible tags, which correspond to the following integer numbers:\r\n\r\n0. `O`\r\n1. `B-corporation`\r\n2. `I-corporation`\r\n3. `B-creative-work`\r\n4. `I-creative-work`\r\n5. `B-group`\r\n6. `I-group`\r\n7. `B-location`\r\n8. `I-location`\r\n9. `B-person`\r\n10. `I-person`\r\n11. `B-product`\r\n12. 
`I-product`"],"created_at":1627043132000,"updated_at":1627291855000,"closed_at":1627291855000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"On the info page of the wnut_17 data set (https:\/\/huggingface.co\/datasets\/wnut_17), the model output of ner-tags is only documented for these 5 cases:\r\n\r\n`ner_tags: a list of classification labels, with possible values including O (0), B-corporation (1), I-corporation (2), B-creative-work (3), I-creative-work (4).`\r\n\r\nI trained a model with the data and it gives me 13 classes:\r\n\r\n```\r\n\"id2label\": {\r\n \"0\": 0,\r\n \"1\": 1,\r\n \"2\": 2,\r\n \"3\": 3,\r\n \"4\": 4,\r\n \"5\": 5,\r\n \"6\": 6,\r\n \"7\": 7,\r\n \"8\": 8,\r\n \"9\": 9,\r\n \"10\": 10,\r\n \"11\": 11,\r\n \"12\": 12\r\n }\r\n\r\n \"label2id\": {\r\n \"0\": 0,\r\n \"1\": 1,\r\n \"10\": 10,\r\n \"11\": 11,\r\n \"12\": 12,\r\n \"2\": 2,\r\n \"3\": 3,\r\n \"4\": 4,\r\n \"5\": 5,\r\n \"6\": 6,\r\n \"7\": 7,\r\n \"8\": 8,\r\n \"9\": 9\r\n }\r\n```\r\nThe paper (https:\/\/www.aclweb.org\/anthology\/W17-4418.pdf) explains those 6 categories, but the ordering does not match:\r\n\r\n```\r\n1. person\r\n2. location (including GPE, facility)\r\n3. corporation\r\n4. product (tangible goods, or well-defined\r\nservices)\r\n5. creative-work (song, movie, book and\r\nso on)\r\n6. group (subsuming music band, sports team,\r\nand non-corporate organisations)\r\n```\r\nI would be very helpful for me, if somebody could clarify the model ouputs and explain the \"B-\" and \"I-\" prefixes to me.\r\n\r\nReally great work with that and the other packages, I couldn't believe that training the model with that data was basically a one-liner!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2709\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2708","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2708\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2708\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2708\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2708","id":951092660,"node_id":"MDU6SXNzdWU5NTEwOTI2NjA=","number":2708,"title":"QASC: incomplete training set 
","user":{"login":"danyaljj","id":2441454,"node_id":"MDQ6VXNlcjI0NDE0NTQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2441454?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/danyaljj","html_url":"https:\/\/github.com\/danyaljj","followers_url":"https:\/\/api.github.com\/users\/danyaljj\/followers","following_url":"https:\/\/api.github.com\/users\/danyaljj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/danyaljj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/danyaljj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/danyaljj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/danyaljj\/orgs","repos_url":"https:\/\/api.github.com\/users\/danyaljj\/repos","events_url":"https:\/\/api.github.com\/users\/danyaljj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/danyaljj\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @danyaljj, thanks for reporting.\r\n\r\nUnfortunately, I have not been able to reproduce your problem. My train split has 8134 examples:\r\n```ipython\r\nIn [10]: ds[\"train\"]\r\nOut[10]:\r\nDataset({\r\n features: ['id', 'question', 'choices', 'answerKey', 'fact1', 'fact2', 'combinedfact', 'formatted_question'],\r\n num_rows: 8134\r\n})\r\n\r\nIn [11]: ds[\"train\"].shape\r\nOut[11]: (8134, 8)\r\n```\r\nand the content of the last 5 examples is:\r\n```ipython\r\nIn [12]: for i in range(8129, 8134):\r\n ...: print(json.dumps(ds[\"train\"][i]))\r\n ...:\r\n{\"id\": \"3KAKFY4PGU1LGXM77JAK2700NGCI3X\", \"question\": \"Chitin can be used for protection by whom?\", \"choices\": {\"text\": [\"Fungi\", \"People\", \"Man\", \"Fish\", \"trees\", \"Dogs\", \"animal\", \"Birds\"], \"label\": [\"A\", \"B\",\r\n \"C\", \"D\", \"E\", \"F\", \"G\", \"H\"]}, \"answerKey\": \"D\", \"fact1\": \"scales are used for protection by scaled animals\", \"fact2\": \"Fish scales are also composed of chitin.\", \"combinedfact\": \"Chitin can be used for prote\r\nction by fish.\", \"formatted_question\": \"Chitin can be used for protection by whom? (A) Fungi (B) People (C) Man (D) Fish (E) trees (F) Dogs (G) animal (H) Birds\"}\r\n{\"id\": \"336YQZE83VDAQVZ26HW59X51JZ9M5M\", \"question\": \"Which type of animal uses plates for protection?\", \"choices\": {\"text\": [\"squids\", \"reptiles\", \"sea urchins\", \"fish\", \"amphibians\", \"Frogs\", \"mammals\", \"salm\r\non\"], \"label\": [\"A\", \"B\", \"C\", \"D\", \"E\", \"F\", \"G\", \"H\"]}, \"answerKey\": \"B\", \"fact1\": \"scales are used for protection by scaled animals\", \"fact2\": \"Reptiles have scales or plates.\", \"combinedfact\": \"Reptiles use\r\n their plates for protection.\", \"formatted_question\": \"Which type of animal uses plates for protection? 
(A) squids (B) reptiles (C) sea urchins (D) fish (E) amphibians (F) Frogs (G) mammals (H) salmon\"}\r\n{\"id\": \"3WZ36BJEV3FGS66VGOOUYX0LN8GTBU\", \"question\": \"What are used for protection by fish?\", \"choices\": {\"text\": [\"scales\", \"fins\", \"streams.\", \"coral\", \"gills\", \"Collagen\", \"mussels\", \"whiskers\"], \"label\": [\"\r\nA\", \"B\", \"C\", \"D\", \"E\", \"F\", \"G\", \"H\"]}, \"answerKey\": \"A\", \"fact1\": \"scales are used for protection by scaled animals\", \"fact2\": \"Fish are backboned aquatic animals.\", \"combinedfact\": \"scales are used for prote\r\nction by fish \", \"formatted_question\": \"What are used for protection by fish? (A) scales (B) fins (C) streams. (D) coral (E) gills (F) Collagen (G) mussels (H) whiskers\"}\r\n{\"id\": \"3Z2R0DQ0JHDKFAO2706OYIXGNA4E28\", \"question\": \"What are pangolins covered in?\", \"choices\": {\"text\": [\"tunicates\", \"Echinoids\", \"shells\", \"exoskeleton\", \"blastoids\", \"barrel-shaped\", \"protection\", \"white\"\r\n], \"label\": [\"A\", \"B\", \"C\", \"D\", \"E\", \"F\", \"G\", \"H\"]}, \"answerKey\": \"G\", \"fact1\": \"scales are used for protection by scaled animals\", \"fact2\": \"Pangolins have an elongate and tapering body covered above with ov\r\nerlapping scales.\", \"combinedfact\": \"Pangolins are covered in overlapping protection.\", \"formatted_question\": \"What are pangolins covered in? (A) tunicates (B) Echinoids (C) shells (D) exoskeleton (E) blastoids\r\n (F) barrel-shaped (G) protection (H) white\"}\r\n{\"id\": \"3PMBY0YE272GIWPNWIF8IH5RBHVC9S\", \"question\": \"What are covered with protection?\", \"choices\": {\"text\": [\"apples\", \"trees\", \"coral\", \"clams\", \"roses\", \"wings\", \"hats\", \"fish\"], \"label\": [\"A\", \"B\", \"C\", \"D\r\n\", \"E\", \"F\", \"G\", \"H\"]}, \"answerKey\": \"H\", \"fact1\": \"scales are used for protection by scaled animals\", \"fact2\": \"Fish are covered with scales.\", \"combinedfact\": \"Fish are covered with protection\", \"formatted_q\r\nuestion\": \"What are covered with protection? (A) apples (B) trees (C) coral (D) clams (E) roses (F) wings (G) hats (H) fish\"}\r\n```\r\n\r\nCould you please load again your dataset and print its shape, like this:\r\n```python\r\nds = load_dataset(\"qasc\", split=\"train)\r\nprint(ds.shape)\r\n```\r\nand confirm which is your output?","Hmm .... it must have been a mistake on my side. Sorry for the hassle! "],"created_at":1626991184000,"updated_at":1627047007000,"closed_at":1627047007000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nThe training instances are not loaded properly. 
\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"qasc\", script_version='1.10.2')\r\n \r\ndef load_instances(split): \r\n instances = dataset[split]\r\n print(f\"split: {split} - size: {len(instances)}\")\r\n for x in instances:\r\n print(json.dumps(x))\r\n\r\n\r\nload_instances('test')\r\nload_instances('validation')\r\nload_instances('train')\r\n```\r\n\r\n## results\r\nFor test and validation, we can see the examples in the output (which is good!): \r\n```\r\nsplit: test - size: 920\r\n{\"answerKey\": \"\", \"choices\": {\"label\": [\"A\", \"B\", \"C\", \"D\", \"E\", \"F\", \"G\", \"H\"], \"text\": [\"Anthax\", \"under water\", \"uterus\", \"wombs\", \"two\", \"moles\", \"live\", \"embryo\"]}, \"combinedfact\": \"\", \"fact1\": \"\", \"fact2\": \"\", \"formatted_question\": \"What type of birth do therian mammals have? (A) Anthax (B) under water (C) uterus (D) wombs (E) two (F) moles (G) live (H) embryo\", \"id\": \"3C44YUNSI1OBFBB8D36GODNOZN9DPA\", \"question\": \"What type of birth do therian mammals have?\"}\r\n{\"answerKey\": \"\", \"choices\": {\"label\": [\"A\", \"B\", \"C\", \"D\", \"E\", \"F\", \"G\", \"H\"], \"text\": [\"Corvidae\", \"arthropods\", \"birds\", \"backbones\", \"keratin\", \"Jurassic\", \"front paws\", \"Parakeets.\"]}, \"combinedfact\": \"\", \"fact1\": \"\", \"fact2\": \"\", \"formatted_question\": \"By what time had mouse-sized viviparous mammals evolved? (A) Corvidae (B) arthropods (C) birds (D) backbones (E) keratin (F) Jurassic (G) front paws (H) Parakeets.\", \"id\": \"3B1NLC6UGZVERVLZFT7OUYQLD1SGPZ\", \"question\": \"By what time had mouse-sized viviparous mammals evolved?\"}\r\n{\"answerKey\": \"\", \"choices\": {\"label\": [\"A\", \"B\", \"C\", \"D\", \"E\", \"F\", \"G\", \"H\"], \"text\": [\"Reduced friction\", \"causes infection\", \"vital to a good life\", \"prevents water loss\", \"camouflage from consumers\", \"Protection against predators\", \"spur the growth of the plant\", \"a smooth surface\"]}, \"combinedfact\": \"\", \"fact1\": \"\", \"fact2\": \"\", \"formatted_question\": \"What does a plant's skin do? (A) Reduced friction (B) causes infection (C) vital to a good life (D) prevents water loss (E) camouflage from consumers (F) Protection against predators (G) spur the growth of the plant (H) a smooth surface\", \"id\": \"3QRYMNZ7FYGITFVSJET3PS0F4S0NT9\", \"question\": \"What does a plant's skin do?\"}\r\n...\r\n```\r\nHowever, only a few instances are loaded for the training split, which is not correct. 
\r\n\r\n## Environment info\r\n- `datasets` version: '1.10.2' \r\n- Platform: MaxOS \r\n- Python version:3.7\r\n- PyArrow version: 3.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2708\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2707","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2707\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2707\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2707\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2707","id":950812945,"node_id":"MDU6SXNzdWU5NTA4MTI5NDU=","number":2707,"title":"404 Not Found Error when loading LAMA dataset","user":{"login":"dwil2444","id":26467159,"node_id":"MDQ6VXNlcjI2NDY3MTU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26467159?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dwil2444","html_url":"https:\/\/github.com\/dwil2444","followers_url":"https:\/\/api.github.com\/users\/dwil2444\/followers","following_url":"https:\/\/api.github.com\/users\/dwil2444\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dwil2444\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dwil2444\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dwil2444\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dwil2444\/orgs","repos_url":"https:\/\/api.github.com\/users\/dwil2444\/repos","events_url":"https:\/\/api.github.com\/users\/dwil2444\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dwil2444\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @dwil2444! I was able to reproduce your error when I downgraded to v1.1.2. Updating to the latest version of Datasets fixed the error for me :)","Hi @dwil2444, thanks for reporting.\r\n\r\nCould you please confirm which `datasets` version you were using and if the problem persists after you update it to the latest version: `pip install -U datasets`?\r\n\r\nThanks @stevhliu for the hint to fix this! ;)","@stevhliu @albertvillanova updating to the latest version of datasets did in fact fix this issue. Thanks a lot for your help!"],"created_at":1626969153000,"updated_at":1627309747000,"closed_at":1627309747000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"The [LAMA](https:\/\/huggingface.co\/datasets\/viewer\/?dataset=lama) probing dataset is not available for download: \r\n\r\nSteps to Reproduce: \r\n\r\n1. `from datasets import load_dataset`\r\n2. `dataset = load_dataset('lama', 'trex')`. 
\r\n\r\n\r\nResults: \r\n`FileNotFoundError: Couldn't find file locally at lama\/lama.py, or remotely at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.2\/datasets\/lama\/lama.py or https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/datasets\/datasets\/lama\/lama.py`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2707\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2706","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2706\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2706\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2706\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2706","id":950606561,"node_id":"MDExOlB1bGxSZXF1ZXN0Njk1MTI3ODgz","number":2706,"title":"Update BibTeX entry","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1626956969000,"updated_at":1626957780000,"closed_at":1626957780000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2706","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2706","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2706.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2706.patch"},"body":"Update BibTeX entry.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2706\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2705","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2705\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2705\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2705\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2705","id":950488583,"node_id":"MDU6SXNzdWU5NTA0ODg1ODM=","number":2705,"title":"404 not found error on loading WIKIANN 
dataset","user":{"login":"ronbutan","id":39296659,"node_id":"MDQ6VXNlcjM5Mjk2NjU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/39296659?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ronbutan","html_url":"https:\/\/github.com\/ronbutan","followers_url":"https:\/\/api.github.com\/users\/ronbutan\/followers","following_url":"https:\/\/api.github.com\/users\/ronbutan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ronbutan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ronbutan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ronbutan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ronbutan\/orgs","repos_url":"https:\/\/api.github.com\/users\/ronbutan\/repos","events_url":"https:\/\/api.github.com\/users\/ronbutan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ronbutan\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @ronbutan, thanks for reporting.\r\n\r\nYou are right: we have recently found that the link to the original PAN-X dataset (also called WikiANN), hosted at Dropbox, is no longer working.\r\n\r\nWe have opened an issue in the GitHub repository of the original dataset (afshinrahimi\/mmner#4) and we have also contacted the author by email to ask if they are planning to fix this issue. See the details here: https:\/\/github.com\/huggingface\/datasets\/issues\/2691#issuecomment-885463027\r\n\r\nI close this issue because it is the same as in #2691. 
Feel free to subscribe to that other issue to be informed about any updates."],"created_at":1626947750000,"updated_at":1627027652000,"closed_at":1627027652000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nUnable to retreive wikiann English dataset\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import list_datasets, load_dataset, list_metrics, load_metric\r\nWIKIANN = load_dataset(\"wikiann\",\"en\")\r\n```\r\n\r\n## Expected results\r\nColab notebook should display successful download status\r\n\r\n## Actual results\r\nFileNotFoundError: Couldn't find file at https:\/\/www.dropbox.com\/s\/12h3qqog6q4bjve\/panx_dataset.tar?dl=1\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.10.1\r\n- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.11\r\n- PyArrow version: 3.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2705\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2704","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2704\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2704\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2704\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2704","id":950483980,"node_id":"MDExOlB1bGxSZXF1ZXN0Njk1MDIzMTEz","number":2704,"title":"Fix pick default config name message","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1626947383000,"updated_at":1626948161000,"closed_at":1626948160000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2704","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2704","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2704.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2704.patch"},"body":"The error message to tell which config name to load is not displayed. \r\n\r\nThis is because in the code it was considering the config kwargs to be non-empty, which is a special case for custom configs created on the fly. 
It appears after this change: https:\/\/github.com\/huggingface\/datasets\/pull\/2659\r\n\r\nI fixed that by making the config kwargs empty by default, even if default parameters are passed\r\n\r\nFix https:\/\/github.com\/huggingface\/datasets\/issues\/2703","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2704\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2703","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2703\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2703\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2703\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2703","id":950482284,"node_id":"MDU6SXNzdWU5NTA0ODIyODQ=","number":2703,"title":"Bad message when config name is missing","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoes
tq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1626947243000,"updated_at":1626948160000,"closed_at":1626948160000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"When loading a dataset that have several configurations, we expect to see an error message if the user doesn't specify a config name.\r\n\r\nHowever in `datasets` 1.10.0 and 1.10.1 it doesn't show the right message:\r\n\r\n```python\r\nimport datasets\r\n\r\ndatasets.load_dataset(\"glue\")\r\n```\r\nraises\r\n```python\r\nAttributeError: 'BuilderConfig' object has no attribute 'text_features'\r\n```\r\ninstead of\r\n```python\r\nValueError: Config name is missing.\r\nPlease pick one among the available configs: ['cola', 'sst2', 'mrpc', 'qqp', 'stsb', 'mnli', 'mnli_mismatched', 'mnli_matched', 'qnli', 'rte', 'wnli', 'ax']\r\nExample of usage:\r\n `load_dataset('glue', 'cola')`\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2703\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2702","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2702\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2702\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2702\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2702","id":950448159,"node_id":"MDExOlB1bGxSZXF1ZXN0Njk0OTkyOTc1","number":2702,"title":"Update BibTeX 
entry","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1626944679000,"updated_at":1626945459000,"closed_at":1626945458000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2702","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2702","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2702.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2702.patch"},"body":"Update BibTeX entry.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2702\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2701","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2701\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2701\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2701\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2701","id":950422403,"node_id":"MDExOlB1bGxSZXF1ZXN0Njk0OTcxMzM3","number":2701,"title":"Fix download_mode 
docstrings","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1626942625000,"updated_at":1626946411000,"closed_at":1626946411000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2701","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2701","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2701.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2701.patch"},"body":"Fix `download_mode` docstrings.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2701\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2700","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2700\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2700\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2700\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2700","id":950276325,"node_id":"MDU6SXNzdWU5NTAyNzYzMjU=","number":2700,"title":"from datasets import Dataset is failing 
","user":{"login":"kswamy15","id":5582286,"node_id":"MDQ6VXNlcjU1ODIyODY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5582286?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/kswamy15","html_url":"https:\/\/github.com\/kswamy15","followers_url":"https:\/\/api.github.com\/users\/kswamy15\/followers","following_url":"https:\/\/api.github.com\/users\/kswamy15\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/kswamy15\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/kswamy15\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/kswamy15\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/kswamy15\/orgs","repos_url":"https:\/\/api.github.com\/users\/kswamy15\/repos","events_url":"https:\/\/api.github.com\/users\/kswamy15\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/kswamy15\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @kswamy15, thanks for reporting.\r\n\r\nWe are fixing this critical issue and making an urgent patch release of the `datasets` library today.\r\n\r\nIn the meantime, you can circumvent this issue by updating the `tqdm` library: `!pip install -U tqdm`"],"created_at":1626925883000,"updated_at":1626938625000,"closed_at":1626937747000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nA clear and concise description of what the bug is.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n# Sample code to reproduce the bug\r\nfrom datasets import Dataset\r\n```\r\n\r\n## Expected results\r\nA clear and concise description of the expected results.\r\n\r\n## Actual results\r\nSpecify the actual results or traceback.\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/utils\/file_utils.py in ()\r\n 25 import posixpath\r\n 26 import requests\r\n---> 27 from tqdm.contrib.concurrent import thread_map\r\n 28 \r\n 29 from .. 
import __version__, config, utils\r\n\r\nModuleNotFoundError: No module named 'tqdm.contrib.concurrent'\r\n\r\n---------------------------------------------------------------------------\r\nNOTE: If your import is failing due to a missing package, you can\r\nmanually install dependencies using either !pip or !apt.\r\n\r\nTo view examples of installing some common dependencies, click the\r\n\"Open Examples\" button below.\r\n---------------------------------------------------------------------------\r\n\r\n## Environment info\r\n\r\n- `datasets` version: latest version as of 07\/21\/2021\r\n- Platform: Google Colab\r\n- Python version: 3.7\r\n- PyArrow version:\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2700\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2699","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2699\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2699\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2699\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2699","id":950221226,"node_id":"MDU6SXNzdWU5NTAyMjEyMjY=","number":2699,"title":"cannot combine splits merging and streaming?","user":{"login":"eyaler","id":4436747,"node_id":"MDQ6VXNlcjQ0MzY3NDc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4436747?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/eyaler","html_url":"https:\/\/github.com\/eyaler","followers_url":"https:\/\/api.github.com\/users\/eyaler\/followers","following_url":"https:\/\/api.github.com\/users\/eyaler\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/eyaler\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/eyaler\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/eyaler\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/eyaler\/orgs","repos_url":"https:\/\/api.github.com\/users\/eyaler\/repos","events_url":"https:\/\/api.github.com\/users\/eyaler\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/eyaler\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! That's missing indeed. We'll try to implement this for the next version :)\r\n\r\nI guess we just need to implement #2564 first, and then we should be able to add support for splits combinations"],"created_at":1626916405000,"updated_at":1626942467000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"this does not work:\r\n`dataset = datasets.load_dataset('mc4','iw',split='train+validation',streaming=True)`\r\nwith error:\r\n`ValueError: Bad split: train+validation. 
Available splits: ['train', 'validation']`\r\n\r\nthese work:\r\n`dataset = datasets.load_dataset('mc4','iw',split='train+validation')`\r\n`dataset = datasets.load_dataset('mc4','iw',split='train',streaming=True)`\r\n`dataset = datasets.load_dataset('mc4','iw',split='validation',streaming=True)`\r\n\r\ni could not find a reference to this in the documentation and the error message is confusing. also would be nice to allow streaming for the merged splits","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2699\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2698","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2698\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2698\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2698\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2698","id":950159867,"node_id":"MDExOlB1bGxSZXF1ZXN0Njk0NzUxMzMw","number":2698,"title":"Ignore empty batch when writing","user":{"login":"pcuenca","id":1177582,"node_id":"MDQ6VXNlcjExNzc1ODI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1177582?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/pcuenca","html_url":"https:\/\/github.com\/pcuenca","followers_url":"https:\/\/api.github.com\/users\/pcuenca\/followers","following_url":"https:\/\/api.github.com\/users\/pcuenca\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/pcuenca\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/pcuenca\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/pcuenca\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/pcuenca\/orgs","repos_url":"https:\/\/api.github.com\/users\/pcuenca\/repos","events_url":"https:\/\/api.github.com\/users\/pcuenca\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/pcuenca\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1626906930000,"updated_at":1627311363000,"closed_at":1627305926000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2698","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2698","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2698.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2698.patch"},"body":"This prevents an schema update with unknown column types, as reported in #2644.\r\n\r\nThis is my first attempt at fixing the issue. I tested the following:\r\n- First batch returned by a batched map operation is empty.\r\n- An intermediate batch is empty.\r\n- `python -m unittest tests.test_arrow_writer` passes.\r\n\r\nHowever, `arrow_writer` looks like a pretty generic interface, I'm not sure if there are other uses I may have overlooked. 
Let me know if that's the case, or if a better approach would be preferable.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2698\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2697","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2697\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2697\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2697\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2697","id":950021623,"node_id":"MDExOlB1bGxSZXF1ZXN0Njk0NjMyODg0","number":2697,"title":"Fix import on Colab","user":{"login":"nateraw","id":32437151,"node_id":"MDQ6VXNlcjMyNDM3MTUx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32437151?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nateraw","html_url":"https:\/\/github.com\/nateraw","followers_url":"https:\/\/api.github.com\/users\/nateraw\/followers","following_url":"https:\/\/api.github.com\/users\/nateraw\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nateraw\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nateraw\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nateraw\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nateraw\/orgs","repos_url":"https:\/\/api.github.com\/users\/nateraw\/repos","events_url":"https:\/\/api.github.com\/users\/nateraw\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nateraw\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq @albertvillanova - It might be a good idea to have a patch release after this gets merged (presumably tomorrow morning when you're around). The Colab issue linked to this PR is a pretty big blocker. "],"created_at":1626894218000,"updated_at":1626937748000,"closed_at":1626937747000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2697","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2697","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2697.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2697.patch"},"body":"Fix #2695, fix #2700. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2697\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2696","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2696\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2696\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2696\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2696","id":949901726,"node_id":"MDExOlB1bGxSZXF1ZXN0Njk0NTMwODg3","number":2696,"title":"Add support for disable_progress_bar on Windows","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The CI failure seems unrelated to this PR (probably has something to do with Transformers)."],"created_at":1626885293000,"updated_at":1627306274000,"closed_at":1627292317000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2696","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2696","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2696.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2696.patch"},"body":"This PR is a continuation of #2667 and adds support for `utils.disable_progress_bar()` on Windows when using multiprocessing. 
This [answer](https:\/\/stackoverflow.com\/a\/6596695\/14095927) on SO explains it nicely why the current approach (with calling `utils.is_progress_bar_enabled()` inside `Dataset._map_single`) would not work on Windows.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2696\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2695","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2695\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2695\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2695\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2695","id":949864823,"node_id":"MDU6SXNzdWU5NDk4NjQ4MjM=","number":2695,"title":"Cannot import load_dataset on Colab","user":{"login":"bayartsogt-ya","id":43239645,"node_id":"MDQ6VXNlcjQzMjM5NjQ1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/43239645?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bayartsogt-ya","html_url":"https:\/\/github.com\/bayartsogt-ya","followers_url":"https:\/\/api.github.com\/users\/bayartsogt-ya\/followers","following_url":"https:\/\/api.github.com\/users\/bayartsogt-ya\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bayartsogt-ya\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bayartsogt-ya\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bayartsogt-ya\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bayartsogt-ya\/orgs","repos_url":"https:\/\/api.github.com\/users\/bayartsogt-ya\/repos","events_url":"https:\/\/api.github.com\/users\/bayartsogt-ya\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bayartsogt-ya\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I'm facing the same issue on Colab today too.\r\n\r\n```\r\nModuleNotFoundError Traceback (most recent call last)\r\n in ()\r\n 3 \r\n 4 from ray import tune\r\n----> 5 from datasets import DatasetDict, Dataset\r\n 6 from datasets import load_dataset, load_metric\r\n 7 from dataclasses import dataclass\r\n\r\n7 frames\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/utils\/file_utils.py in ()\r\n 25 import posixpath\r\n 26 import requests\r\n---> 27 from tqdm.contrib.concurrent import thread_map\r\n 28 \r\n 29 from .. import __version__, config, utils\r\n\r\nModuleNotFoundError: No module named 'tqdm.contrib.concurrent'\r\n\r\n---------------------------------------------------------------------------\r\nNOTE: If your import is failing due to a missing package, you can\r\nmanually install dependencies using either !pip or !apt.\r\n\r\nTo view examples of installing some common dependencies, click the\r\n\"Open Examples\" button below.\r\n---------------------------------------------------------------------------\r\n```","@phosseini \r\nI think it is related to [1.10.0](https:\/\/github.com\/huggingface\/datasets\/actions\/runs\/1052653701) release done 3 hours ago. 
(cc: @lhoestq )\r\nFor now I just downgraded to 1.9.0 and it is working fine.","> @phosseini\r\n> I think it is related to [1.10.0](https:\/\/github.com\/huggingface\/datasets\/actions\/runs\/1052653701) release done 3 hours ago. (cc: @lhoestq )\r\n> For now I just downgraded to 1.9.0 and it is working fine.\r\n\r\nSame here, downgraded to 1.9.0 for now and works fine.","Hi, \r\n\r\nupdating tqdm to the newest version resolves the issue for me. You can do this as follows in Colab:\r\n```\r\n!pip install tqdm --upgrade\r\n```","Hi @bayartsogt-ya and @phosseini, thanks for reporting.\r\n\r\nWe are fixing this critical issue and making an urgent patch release of the `datasets` library today.\r\n\r\nIn the meantime, as pointed out by @mariosasko, you can circumvent this issue by updating the `tqdm` library: \r\n```\r\n!pip install -U tqdm\r\n```"],"created_at":1626882771000,"updated_at":1626938785000,"closed_at":1626937747000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nGot tqdm concurrent module not found error during importing load_dataset from datasets.\r\n\r\n## Steps to reproduce the bug\r\nHere [colab notebook](https:\/\/colab.research.google.com\/drive\/1pErWWnVP4P4mVHjSFUtkePd8Na_Qirg4?usp=sharing) to reproduce the error\r\n\r\nOn colab:\r\n```python\r\n!pip install datasets\r\nfrom datasets import load_dataset\r\n```\r\n\r\n## Expected results\r\nWorks without error\r\n\r\n## Actual results\r\nSpecify the actual results or traceback.\r\n```\r\nModuleNotFoundError Traceback (most recent call last)\r\n in ()\r\n----> 1 from datasets import load_dataset, load_metric, Metric, MetricInfo, Features, Value\r\n 2 from sklearn.metrics import mean_squared_error\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/__init__.py in ()\r\n 31 )\r\n 32 \r\n---> 33 from .arrow_dataset import Dataset, concatenate_datasets\r\n 34 from .arrow_reader import ArrowReader, ReadInstruction\r\n 35 from .arrow_writer import ArrowWriter\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/arrow_dataset.py in ()\r\n 40 from tqdm.auto import tqdm\r\n 41 \r\n---> 42 from datasets.tasks.text_classification import TextClassification\r\n 43 \r\n 44 from . import config, utils\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/tasks\/__init__.py in ()\r\n 1 from typing import Optional\r\n 2 \r\n----> 3 from ..utils.logging import get_logger\r\n 4 from .automatic_speech_recognition import AutomaticSpeechRecognition\r\n 5 from .base import TaskTemplate\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/utils\/__init__.py in ()\r\n 19 \r\n 20 from . import logging\r\n---> 21 from .download_manager import DownloadManager, GenerateMode\r\n 22 from .file_utils import DownloadConfig, cached_path, hf_bucket_url, is_remote_url, temp_seed\r\n 23 from .mock_download_manager import MockDownloadManager\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/utils\/download_manager.py in ()\r\n 24 \r\n 25 from .. import config\r\n---> 26 from .file_utils import (\r\n 27 DownloadConfig,\r\n 28 cached_path,\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/utils\/file_utils.py in ()\r\n 25 import posixpath\r\n 26 import requests\r\n---> 27 from tqdm.contrib.concurrent import thread_map\r\n 28 \r\n 29 from .. 
import __version__, config, utils\r\n\r\nModuleNotFoundError: No module named 'tqdm.contrib.concurrent'\r\n```\r\n## Environment info\r\n\r\n- `datasets` version: 1.10.0\r\n- Platform: Colab\r\n- Python version: 3.7.11\r\n- PyArrow version: 3.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2695\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2694","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2694\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2694\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2694\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2694","id":949844722,"node_id":"MDExOlB1bGxSZXF1ZXN0Njk0NDg0NTcy","number":2694,"title":"fix: \ud83d\udc1b change string format to allow copy\/paste to work in bash","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1626881440000,"updated_at":1626950507000,"closed_at":1626950507000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2694","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2694","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2694.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2694.patch"},"body":"Before: copy\/paste resulted in an error because the square bracket\r\ncharacters `[]` are special characters in bash","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2694\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2693","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2693\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2693\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2693\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2693","id":949797014,"node_id":"MDExOlB1bGxSZXF1ZXN0Njk0NDQ1ODAz","number":2693,"title":"Fix OSCAR 
Esperanto","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1626878630000,"updated_at":1626879232000,"closed_at":1626879231000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2693","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2693","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2693.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2693.patch"},"body":"The Esperanto part (original) of OSCAR has the wrong number of examples:\r\n```python\r\nfrom datasets import load_dataset\r\nraw_datasets = load_dataset(\"oscar\", \"unshuffled_original_eo\")\r\n```\r\nraises\r\n```python\r\nNonMatchingSplitsSizesError:\r\n[{'expected': SplitInfo(name='train', num_bytes=314188336, num_examples=121171, dataset_name='oscar'),\r\n'recorded': SplitInfo(name='train', num_bytes=314064514, num_examples=121168, dataset_name='oscar')}]\r\n```\r\n\r\nI updated the number of expected examples in dataset_infos.json\r\n\r\ncc @sgugger ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2693\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2692","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2692\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2692\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2692\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2692","id":949765484,"node_id":"MDExOlB1bGxSZXF1ZXN0Njk0NDE4MDg1","number":2692,"title":"Update BibTeX 
entry","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1626877415000,"updated_at":1626881501000,"closed_at":1626881500000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2692","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2692","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2692.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2692.patch"},"body":"Update BibTeX entry","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2692\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2691","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2691\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2691\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2691\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2691","id":949758379,"node_id":"MDU6SXNzdWU5NDk3NTgzNzk=","number":2691,"title":"xtreme \/ pan-x cannot be 
downloaded","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @severo, thanks for reporting.\r\n\r\nHowever I have not been able to reproduce this issue. Could you please confirm if the problem persists for you?\r\n\r\nMaybe Dropbox (where the data source is hosted) was temporarily unavailable when you tried.","Hmmm, the file (https:\/\/www.dropbox.com\/s\/dl\/12h3qqog6q4bjve\/panx_dataset.tar) really seems to be unavailable... I tried from various connexions and machines and got the same 404 error. Maybe the dataset has been loaded from the cache in your case?","Yes @severo, weird... I could access the file when I answered to you, but now I cannot longer access it either... Maybe it was from the cache as you point out.\r\n\r\nAnyway, I have opened an issue in the GitHub repository responsible for the original dataset: https:\/\/github.com\/afshinrahimi\/mmner\/issues\/4\r\nI have also contacted the maintainer by email.\r\n\r\nI'll keep you informed with their answer.","Reply from the author\/maintainer: \r\n> Will fix the issue and let you know during the weekend.","The author told that apparently Dropbox has changed their policy and no longer allow downloading the file without having signed in first. 
The author asked Hugging Face to host their dataset."],"created_at":1626877085000,"updated_at":1627292062000,"closed_at":1627292062000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\nDataset xtreme \/ pan-x cannot be loaded\r\n\r\nSeems related to https:\/\/github.com\/huggingface\/datasets\/pull\/2326\r\n\r\n## Steps to reproduce the bug\r\n\r\n```python\r\ndataset = load_dataset(\"xtreme\", \"PAN-X.fr\")\r\n```\r\n\r\n## Expected results\r\n\r\nLoad the dataset\r\n\r\n## Actual results\r\n\r\n```\r\nFileNotFoundError: Couldn't find file at https:\/\/www.dropbox.com\/s\/12h3qqog6q4bjve\/panx_dataset.tar?dl=1\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.9.0\r\n- Platform: macOS-11.4-x86_64-i386-64bit\r\n- Python version: 3.8.11\r\n- PyArrow version: 4.0.1\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2691\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2690","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2690\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2690\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2690\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2690","id":949574500,"node_id":"MDExOlB1bGxSZXF1ZXN0Njk0MjU5MDc1","number":2690,"title":"Docs details","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for all the comments and for the corrections in the docs !\r\n\r\nAbout all the points you mentioned:\r\n\r\n> * the code samples assume the expected libraries have already been installed. Maybe add a section at start, or add it to every code sample. 
Something like `pip install datasets transformers torch 'datasets[streaming]'` (maybe just link to https:\/\/huggingface.co\/docs\/datasets\/installation.html + a one-liner that installs all the requirements \/ alternatively a requirements.txt file)\r\n\r\nYes good idea\r\n\r\n> * \"If you\u2019d like to play with the examples, you must install it from source.\" in https:\/\/huggingface.co\/docs\/datasets\/installation.html: it's not clear to me what this means (what are these \"examples\"?)\r\n\r\nIt refers to examples scripts inside the git repository of the library, see the `examples` folder in the `transformers` repo.\r\nWe don't have examples yet in the git repo of `datasets` as in transformers. So currently there are no examples. Maybe we can just remove this sentence from the docs for now\r\n\r\n> * in https:\/\/huggingface.co\/docs\/datasets\/loading_datasets.html: \"or AWS bucket if it\u2019s not already stored in the library\". It's the only place in the doc (aside from the docstring https:\/\/huggingface.co\/docs\/datasets\/package_reference\/loading_methods.html?highlight=aws bucket#datasets.list_datasets) where the \"AWS bucket\" is mentioned. It's not easy to understand what this means. Maybe explain more, and link to https:\/\/s3.amazonaws.com\/datasets.huggingface.co and\/or https:\/\/huggingface.co\/docs\/datasets\/filesystems.html.\r\n\r\nThis is outdated and must be replaced by\r\n```\r\nor from the Hugging Face Hub if it\u2019s not already stored in the library\r\n```\r\n\r\n> * example in https:\/\/huggingface.co\/docs\/datasets\/loading_datasets.html#manually-downloading-files is obsoleted by [Enable auto-download for PAN-X \/ Wikiann domain in XTREME\u00a0#2326](https:\/\/github.com\/huggingface\/datasets\/pull\/2326). Also: see [xtreme \/ pan-x cannot be downloaded\u00a0#2691](https:\/\/github.com\/huggingface\/datasets\/issues\/2691) for a bug on this specific dataset.\r\n\r\nWe can replace the `XTREME` `PANX` dataste by `matinf` instead for example\r\n\r\n> * in https:\/\/huggingface.co\/docs\/datasets\/loading_datasets.html#manually-downloading-files the doc says \"After you\u2019ve downloaded the files, you can point to the folder hosting them locally with the data_dir argument as follows:\", but the following example does not show how to use `data_dir`\r\n\r\nLet's add `data_dir=\"path\/to\/your\/downloaded\/data\"` for example\r\n\r\n> * in https:\/\/huggingface.co\/docs\/datasets\/loading_datasets.html#csv-files, it would be nice to have an URL to the csv loader reference (but I'm not sure there is one in the API reference). This comment applies in many places in the doc: I would want the API reference to contain doc for all the code\/functions\/classes... and I would want a lot more links inside the doc pointing to the API entries.\r\n\r\nCurrently there's no documentation for the CSV loader config. Maybe we can add the docstrings to the `CsvConfig` class to explain the parameters and how it works, and then redirect to the doc of this class in this section of the documentation.\r\n\r\n> * in the API reference (docstrings) I would prefer \"SOURCE\" to link to github instead of a copy of the code inside the docs site (eg. https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/load.py#L711 instead of https:\/\/huggingface.co\/docs\/datasets\/_modules\/datasets\/load.html#load_dataset)\r\n\r\nThis is the same as in `transformers`, not sure if this is a big issue\r\n\r\n> * it seems like not all the API is exposed in the doc. 
For example, there is no doc for [`disable_progress_bar`](https:\/\/github.com\/huggingface\/datasets\/search?q=disable_progress_bar), see https:\/\/huggingface.co\/docs\/datasets\/search.html?q=disable_progress_bar, even if the code contains docstrings. Does it mean that the function is not officially supported? (otherwise, maybe it also deserves a mention in https:\/\/huggingface.co\/docs\/datasets\/package_reference\/logging_methods.html)\r\n\r\nThe function `disable_progress_bar` should definitely be in the docs, thanks. We can add it to the logging methods\r\n\r\n> * in https:\/\/huggingface.co\/docs\/datasets\/loading_datasets.html?highlight=most%20efficient%20format%20have%20json%20files%20consisting%20multiple%20json%20objects#json-files, \"The most efficient format is to have JSON files consisting of multiple JSON objects, one per line, representing individual data rows:\", maybe link to https:\/\/en.wikipedia.org\/wiki\/JSON_streaming#Line-delimited_JSON and give it a name (\"line-delimited JSON\"? \"JSON Lines\" as in https:\/\/huggingface.co\/docs\/datasets\/processing.html#exporting-a-dataset-to-csv-json-parquet-or-to-python-objects ?)\r\n\r\nYes good idea !\r\n\r\n> * in https:\/\/huggingface.co\/docs\/datasets\/loading_datasets.html, for the local files sections, it would be nice to provide sample csv \/ json \/ text files to download, so that it's easier for the reader to try to load them (instead: they won't try)\r\n\r\nSure why not. Moreover the csv loader now supports remote files so you could just run the code pass an an URL to the sample csv file.\r\n\r\n> * the doc explains how to shard a dataset, but does not explain why and when a dataset should be sharded (I have no idea... for [parallelizing](https:\/\/huggingface.co\/docs\/datasets\/processing.html#multiprocessing)?). It does neither give an idea of the number of shards a dataset typically should have and why.\r\n\r\nThis can be used for distributed processing or just to use a percentage of the data. We can definitely give example of use cases\r\n\r\n> * the code example in https:\/\/huggingface.co\/docs\/datasets\/processing.html#mapping-in-a-distributed-setting does not work, because `training_args` has not been defined before in the doc.\r\n\r\n`training_args` comes from `transformers`, it's a practical way to define all your arguments to train a model. Maybe we can just import it from `transformers` and use it with the default values\r\n\r\n"],"created_at":1626864194000,"updated_at":1627411254000,"closed_at":1627411254000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2690","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2690","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2690.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2690.patch"},"body":"Some comments here:\r\n\r\n- the code samples assume the expected libraries have already been installed. Maybe add a section at start, or add it to every code sample. 
Something like `pip install datasets transformers torch 'datasets[streaming]'` (maybe just link to https:\/\/huggingface.co\/docs\/datasets\/installation.html + a one-liner that installs all the requirements \/ alternatively a requirements.txt file)\r\n- \"If you\u2019d like to play with the examples, you must install it from source.\" in https:\/\/huggingface.co\/docs\/datasets\/installation.html: it's not clear to me what this means (what are these \"examples\"?)\r\n- in https:\/\/huggingface.co\/docs\/datasets\/loading_datasets.html: \"or AWS bucket if it\u2019s not already stored in the library\". It's the only place in the doc (aside from the docstring https:\/\/huggingface.co\/docs\/datasets\/package_reference\/loading_methods.html?highlight=aws bucket#datasets.list_datasets) where the \"AWS bucket\" is mentioned. It's not easy to understand what this means. Maybe explain more, and link to https:\/\/s3.amazonaws.com\/datasets.huggingface.co and\/or https:\/\/huggingface.co\/docs\/datasets\/filesystems.html.\r\n- example in https:\/\/huggingface.co\/docs\/datasets\/loading_datasets.html#manually-downloading-files is obsoleted by https:\/\/github.com\/huggingface\/datasets\/pull\/2326. Also: see https:\/\/github.com\/huggingface\/datasets\/issues\/2691 for a bug on this specific dataset.\r\n- in https:\/\/huggingface.co\/docs\/datasets\/loading_datasets.html#manually-downloading-files the doc says \"After you\u2019ve downloaded the files, you can point to the folder hosting them locally with the data_dir argument as follows:\", but the following example does not show how to use `data_dir`\r\n- in https:\/\/huggingface.co\/docs\/datasets\/loading_datasets.html#csv-files, it would be nice to have an URL to the csv loader reference (but I'm not sure there is one in the API reference). This comment applies in many places in the doc: I would want the API reference to contain doc for all the code\/functions\/classes... and I would want a lot more links inside the doc pointing to the API entries.\r\n- in the API reference (docstrings) I would prefer \"SOURCE\" to link to github instead of a copy of the code inside the docs site (eg. https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/load.py#L711 instead of https:\/\/huggingface.co\/docs\/datasets\/_modules\/datasets\/load.html#load_dataset)\r\n- it seems like not all the API is exposed in the doc. For example, there is no doc for [`disable_progress_bar`](https:\/\/github.com\/huggingface\/datasets\/search?q=disable_progress_bar), see https:\/\/huggingface.co\/docs\/datasets\/search.html?q=disable_progress_bar, even if the code contains docstrings. Does it mean that the function is not officially supported? (otherwise, maybe it also deserves a mention in https:\/\/huggingface.co\/docs\/datasets\/package_reference\/logging_methods.html)\r\n- in https:\/\/huggingface.co\/docs\/datasets\/loading_datasets.html?highlight=most%20efficient%20format%20have%20json%20files%20consisting%20multiple%20json%20objects#json-files, \"The most efficient format is to have JSON files consisting of multiple JSON objects, one per line, representing individual data rows:\", maybe link to https:\/\/en.wikipedia.org\/wiki\/JSON_streaming#Line-delimited_JSON and give it a name (\"line-delimited JSON\"? 
\"JSON Lines\" as in https:\/\/huggingface.co\/docs\/datasets\/processing.html#exporting-a-dataset-to-csv-json-parquet-or-to-python-objects ?)\r\n- in https:\/\/huggingface.co\/docs\/datasets\/loading_datasets.html, for the local files sections, it would be nice to provide sample csv \/ json \/ text files to download, so that it's easier for the reader to try to load them (instead: they won't try)\r\n- the doc explains how to shard a dataset, but does not explain why and when a dataset should be sharded (I have no idea... for [parallelizing](https:\/\/huggingface.co\/docs\/datasets\/processing.html#multiprocessing)?). It does neither give an idea of the number of shards a dataset typically should have and why.\r\n- the code example in https:\/\/huggingface.co\/docs\/datasets\/processing.html#mapping-in-a-distributed-setting does not work, because `training_args` has not been defined before in the doc.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2690\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2689","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2689\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2689\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2689\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2689","id":949447104,"node_id":"MDU6SXNzdWU5NDk0NDcxMDQ=","number":2689,"title":"cannot save the dataset to disk after rename_column","user":{"login":"PaulLerner","id":25532159,"node_id":"MDQ6VXNlcjI1NTMyMTU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25532159?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/PaulLerner","html_url":"https:\/\/github.com\/PaulLerner","followers_url":"https:\/\/api.github.com\/users\/PaulLerner\/followers","following_url":"https:\/\/api.github.com\/users\/PaulLerner\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/PaulLerner\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/PaulLerner\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/PaulLerner\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/PaulLerner\/orgs","repos_url":"https:\/\/api.github.com\/users\/PaulLerner\/repos","events_url":"https:\/\/api.github.com\/users\/PaulLerner\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/PaulLerner\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! 
That's because you are trying to overwrite a file that is already open and being used.\r\nIndeed `foo\/dataset.arrow` is open and used by your `dataset` object.\r\n\r\nWhen you do `rename_column`, the resulting dataset reads the data from the same arrow file.\r\nIn other cases like when using `map` on the other hand, the resulting dataset reads the data from another arrow file that is the result of the map transform.\r\n\r\nTherefore overwriting a dataset after `rename_column` is not possible, but it is possible after `map`, since `rename_column` doesn't switch to using another arrow file (the actual data stay the same).","Ok, thanks for clearing it up :)"],"created_at":1626855220000,"updated_at":1626873064000,"closed_at":1626873064000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nIf you use `rename_column` and do no other modification, you will be unable to save the dataset using `save_to_disk`\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n# Sample code to reproduce the bug\r\nIn [1]: from datasets import Dataset, load_from_disk\r\nIn [5]: dataset=Dataset.from_dict({'foo': [0]})\r\nIn [7]: dataset.save_to_disk('foo')\r\nIn [8]: dataset=load_from_disk('foo')\r\nIn [10]: dataset=dataset.rename_column('foo', 'bar')\r\nIn [11]: dataset.save_to_disk('foo')\r\n---------------------------------------------------------------------------\r\nPermissionError Traceback (most recent call last)\r\n in \r\n----> 1 dataset.save_to_disk('foo')\r\n\r\n\/mnt\/beegfs\/projects\/meerqat\/anaconda3\/envs\/meerqat\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py in save_to_disk(self, dataset_path\r\n, fs)\r\n 597 if Path(dataset_path, config.DATASET_ARROW_FILENAME) in cache_files_paths:\r\n 598 raise PermissionError(\r\n--> 599 f\"Tried to overwrite {Path(dataset_path, config.DATASET_ARROW_FILENAME)} but a dataset can't overwrite itself.\"\r\n 600 )\r\n 601 if Path(dataset_path, config.DATASET_INDICES_FILENAME) in cache_files_paths:\r\n\r\nPermissionError: Tried to overwrite foo\/dataset.arrow but a dataset can't overwrite itself.\r\n```\r\n\r\nN. B. I created the dataset from dict to enable easy reproduction but the same happens if you load an existing dataset (e.g. 
starting from `In [8]`)\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.8.0\r\n- Platform: Linux-3.10.0-1160.11.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core\r\n- Python version: 3.7.10\r\n- PyArrow version: 3.0.0\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2689\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2688","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2688\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2688\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2688\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2688","id":949182074,"node_id":"MDU6SXNzdWU5NDkxODIwNzQ=","number":2688,"title":"hebrew language codes he and iw should be treated as aliases","user":{"login":"eyaler","id":4436747,"node_id":"MDQ6VXNlcjQ0MzY3NDc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4436747?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/eyaler","html_url":"https:\/\/github.com\/eyaler","followers_url":"https:\/\/api.github.com\/users\/eyaler\/followers","following_url":"https:\/\/api.github.com\/users\/eyaler\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/eyaler\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/eyaler\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/eyaler\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/eyaler\/orgs","repos_url":"https:\/\/api.github.com\/users\/eyaler\/repos","events_url":"https:\/\/api.github.com\/users\/eyaler\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/eyaler\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @eyaler, thanks for reporting.\r\n\r\nWhile you are true with respect the Hebrew language tag (\"iw\" is deprecated and \"he\" is the preferred value), in the \"mc4\" dataset (which is a derived dataset) we have kept the language tags present in the original dataset: [Google C4](https:\/\/www.tensorflow.org\/datasets\/catalog\/c4).","For discoverability on the website I updated the YAML tags at the top of the mC4 dataset card https:\/\/github.com\/huggingface\/datasets\/commit\/38288087b1b02f97586e0346e8f28f4960f1fd37\r\n\r\nOnce the website is updated, mC4 will be listed in https:\/\/huggingface.co\/datasets?filter=languages:he\r\n\r\n"],"created_at":1626822832000,"updated_at":1626885293000,"closed_at":1626885293000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"https:\/\/huggingface.co\/datasets\/mc4 not listed when searching for hebrew datasets (he) as it uses the older language code iw, preventing discoverability. 
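Coming back to the `save_to_disk` discussion above, a minimal sketch of the two workarounds that follow from the maintainer's explanation (directory names are placeholders):

```python
from datasets import Dataset, load_from_disk

dataset = Dataset.from_dict({"foo": [0]})
dataset.save_to_disk("data_v1")

dataset = load_from_disk("data_v1")
dataset = dataset.rename_column("foo", "bar")

# rename_column still reads from data_v1/dataset.arrow, so overwriting "data_v1"
# raises PermissionError; saving to a new directory works fine.
dataset.save_to_disk("data_v2")

# Alternatively, as noted in the reply above, a map() call writes a new arrow
# file, after which overwriting the original directory is possible again.
dataset = dataset.map(lambda example: example)
dataset.save_to_disk("data_v1")
```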
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2688\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2687","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2687\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2687\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2687\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2687","id":948890481,"node_id":"MDExOlB1bGxSZXF1ZXN0NjkzNjY1NDI2","number":2687,"title":"Minor documentation fix","user":{"login":"slowwavesleep","id":44175589,"node_id":"MDQ6VXNlcjQ0MTc1NTg5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/44175589?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/slowwavesleep","html_url":"https:\/\/github.com\/slowwavesleep","followers_url":"https:\/\/api.github.com\/users\/slowwavesleep\/followers","following_url":"https:\/\/api.github.com\/users\/slowwavesleep\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/slowwavesleep\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/slowwavesleep\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/slowwavesleep\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/slowwavesleep\/orgs","repos_url":"https:\/\/api.github.com\/users\/slowwavesleep\/repos","events_url":"https:\/\/api.github.com\/users\/slowwavesleep\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/slowwavesleep\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1626803003000,"updated_at":1626872695000,"closed_at":1626872695000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2687","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2687","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2687.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2687.patch"},"body":"Currently, [Writing a dataset loading script](https:\/\/huggingface.co\/docs\/datasets\/add_dataset.html) page has a small error. A link to `matinf` dataset in [_Dataset scripts of reference_](https:\/\/huggingface.co\/docs\/datasets\/add_dataset.html#dataset-scripts-of-reference) section actually leads to `xsquad`, instead. This PR fixes that. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2687\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2686","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2686\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2686\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2686\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2686","id":948811669,"node_id":"MDExOlB1bGxSZXF1ZXN0NjkzNTk4OTE3","number":2686,"title":"Fix bad config ids that name cache directories","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1626796845000,"updated_at":1626798435000,"closed_at":1626798435000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2686","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2686","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2686.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2686.patch"},"body":"`data_dir=None` was considered a dataset config parameter, hence creating a special config_id for all dataset being loaded.\r\nSince the config_id is used to name the cache directories, this leaded to datasets being regenerated for users.\r\n\r\nI fixed this by ignoring the value of `data_dir` when it's `None` when computing the config_id.\r\nI also added a test to make sure the cache directories are not unexpectedly renamed in the future.\r\n\r\nFix https:\/\/github.com\/huggingface\/datasets\/issues\/2683","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2686\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2685","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2685\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2685\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2685\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2685","id":948791572,"node_id":"MDExOlB1bGxSZXF1ZXN0NjkzNTgxNTk2","number":2685,"title":"Fix Blog Authorship Corpus dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Normally, I'm expecting errors from the validation of the README file... 
\ud83d\ude05 ","That is:\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests\/test_dataset_cards.py::test_changed_dataset_card[blog_authorship_corpus]\r\n==== 1 failed, 3182 passed, 2763 skipped, 16 warnings in 201.23s (0:03:21) =====\r\n```","@lhoestq, apart from the dataset card, everything is OK with this PR: I tested it locally."],"created_at":1626795890000,"updated_at":1626873118000,"closed_at":1626873118000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2685","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2685","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2685.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2685.patch"},"body":"This PR:\r\n- Update the JSON metadata file, which previously was raising a `NonMatchingSplitsSizesError`\r\n- Fix the codec of the data files (`latin_1` instead of `utf-8`), which previously was raising ` UnicodeDecodeError` for some files\r\n\r\nClose #2679.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2685\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2684","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2684\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2684\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2684\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2684","id":948771753,"node_id":"MDExOlB1bGxSZXF1ZXN0NjkzNTY0MDY4","number":2684,"title":"Print absolute local paths in load_dataset error messages","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1626794908000,"updated_at":1626986899000,"closed_at":1626962470000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2684","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2684","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2684.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2684.patch"},"body":"Use absolute local paths in the error messages of `load_dataset` as per 
@stas00's suggestion in https:\/\/github.com\/huggingface\/datasets\/pull\/2500#issuecomment-874891223 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2684\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2683","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2683\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2683\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2683\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2683","id":948721379,"node_id":"MDU6SXNzdWU5NDg3MjEzNzk=","number":2683,"title":"Cache directories changed due to recent changes in how config kwargs are handled","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organ
izations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1626791877000,"updated_at":1626798435000,"closed_at":1626798435000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"Since #2659 I can see weird cache directory names with hashes in the config id, even though no additional config kwargs are passed. For example:\r\n\r\n```python\r\nfrom datasets import load_dataset_builder\r\n\r\nc4_builder = load_dataset_builder(\"c4\", \"en\")\r\nprint(c4_builder.cache_dir)\r\n# \/Users\/quentinlhoest\/.cache\/huggingface\/datasets\/c4\/en-174d3b7155eb68db\/0.0.0\/...\r\n\r\n# instead of \r\n# \/Users\/quentinlhoest\/.cache\/huggingface\/datasets\/c4\/en\/0.0.0\/...\r\n```\r\nThis issue could be annoying since it would simply ignore old cache directories for users, and regenerate datasets\r\n\r\ncc @stas00 this is what you experienced a few days ago\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2683\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2682","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2682\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2682\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2682\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2682","id":948713137,"node_id":"MDExOlB1bGxSZXF1ZXN0NjkzNTE2NjU2","number":2682,"title":"Fix c4 expected files","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1626791371000,"updated_at":1626791891000,"closed_at":1626791890000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2682","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2682","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2682.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2682.patch"},"body":"Some files were not registered in the list of expected files to 
download\r\n\r\nFix https:\/\/github.com\/huggingface\/datasets\/issues\/2677","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2682\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2681","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2681\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2681\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2681\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2681","id":948708645,"node_id":"MDU6SXNzdWU5NDg3MDg2NDU=","number":2681,"title":"5 duplicate datasets","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Yes this was documented in the PR that added this hf->paperswithcode mapping (https:\/\/github.com\/huggingface\/datasets\/pull\/2404) and AFAICT those are slightly distinct datasets so I think it's a wontfix\r\n\r\nFor context on the paperswithcode mapping you can also refer to https:\/\/github.com\/huggingface\/huggingface_hub\/pull\/43 which contains a lot of background discussion ","Thanks for the antecedents. I close."],"created_at":1626791100000,"updated_at":1626795857000,"closed_at":1626795857000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\nIn 5 cases, I could find a dataset on Paperswithcode which references two Hugging Face datasets as dataset loaders. 
They are:\r\n\r\n- https:\/\/paperswithcode.com\/dataset\/multinli -> https:\/\/huggingface.co\/datasets\/multi_nli and https:\/\/huggingface.co\/datasets\/multi_nli_mismatch\r\n \r\n \"Capture\r\n\r\n- https:\/\/paperswithcode.com\/dataset\/squad -> https:\/\/huggingface.co\/datasets\/squad and https:\/\/huggingface.co\/datasets\/squad_v2\r\n- https:\/\/paperswithcode.com\/dataset\/narrativeqa -> https:\/\/huggingface.co\/datasets\/narrativeqa and https:\/\/huggingface.co\/datasets\/narrativeqa_manual\r\n- https:\/\/paperswithcode.com\/dataset\/hate-speech-and-offensive-language -> https:\/\/huggingface.co\/datasets\/hate_offensive and https:\/\/huggingface.co\/datasets\/hate_speech_offensive\r\n- https:\/\/paperswithcode.com\/dataset\/newsph-nli -> https:\/\/huggingface.co\/datasets\/newsph and https:\/\/huggingface.co\/datasets\/newsph_nli\r\n\r\nPossible solutions:\r\n- don't fix (it works)\r\n- for each pair of duplicate datasets, remove one, and create an alias to the other.\r\n\r\n## Steps to reproduce the bug\r\n\r\nVisit the Paperswithcode links, and look at the \"Dataset Loaders\" section\r\n\r\n## Expected results\r\n\r\nThere should only be one reference to a Hugging Face dataset loader\r\n\r\n## Actual results\r\n\r\nTwo Hugging Face dataset loaders\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2681\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2680","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2680\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2680\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2680\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2680","id":948649716,"node_id":"MDExOlB1bGxSZXF1ZXN0NjkzNDYyNzY3","number":2680,"title":"feat: \ud83c\udfb8 add paperswithcode id for qasper 
dataset","user":{"login":"severo","id":1676121,"node_id":"MDQ6VXNlcjE2NzYxMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1676121?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/severo","html_url":"https:\/\/github.com\/severo","followers_url":"https:\/\/api.github.com\/users\/severo\/followers","following_url":"https:\/\/api.github.com\/users\/severo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/severo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/severo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/severo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/severo\/orgs","repos_url":"https:\/\/api.github.com\/users\/severo\/repos","events_url":"https:\/\/api.github.com\/users\/severo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/severo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1626787349000,"updated_at":1626789850000,"closed_at":1626789850000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2680","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2680","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2680.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2680.patch"},"body":"The reverse reference exists on paperswithcode:\r\nhttps:\/\/paperswithcode.com\/dataset\/qasper","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2680\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2679","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2679\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2679\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2679\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2679","id":948506638,"node_id":"MDU6SXNzdWU5NDg1MDY2Mzg=","number":2679,"title":"Cannot load the blog_authorship_corpus due to codec 
errors","user":{"login":"izaskr","id":38069449,"node_id":"MDQ6VXNlcjM4MDY5NDQ5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38069449?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/izaskr","html_url":"https:\/\/github.com\/izaskr","followers_url":"https:\/\/api.github.com\/users\/izaskr\/followers","following_url":"https:\/\/api.github.com\/users\/izaskr\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/izaskr\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/izaskr\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/izaskr\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/izaskr\/orgs","repos_url":"https:\/\/api.github.com\/users\/izaskr\/repos","events_url":"https:\/\/api.github.com\/users\/izaskr\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/izaskr\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @izaskr, thanks for reporting.\r\n\r\nHowever the traceback you joined does not correspond to the codec error message: it is about other error `NonMatchingSplitsSizesError`. 
Maybe you missed some important part of your traceback...\r\n\r\nI'm going to have a look at the dataset anyway...","Hi @izaskr, thanks again for having reported this issue.\r\n\r\nAfter investigation, I have created a Pull Request (#2685) to fix several issues with this dataset:\r\n- the `NonMatchingSplitsSizesError`\r\n- the `UnicodeDecodeError`\r\n\r\nOnce the Pull Request merged into master, you will be able to load this dataset if you install `datasets` from our GitHub repository master branch. Otherwise, you will be able to use it after our next release, by updating `datasets`: `pip install -U datasets`.","@albertvillanova \r\nCan you shed light on how this fix works?\r\n\r\nWe're experiencing a similar issue. \r\n\r\nIf we run several runs (eg in a Wandb sweep) the first run \"works\" but then we get `NonMatchingSplitsSizesError`\r\n\r\n| run num | actual train examples # | expected example # | recorded example # |\r\n| ------- | -------------- | ----------------- | -------- |\r\n| 1 | 100 | 100 | 100 |\r\n| 2 | 102 | 100 | 102 |\r\n| 3 | 100 | 100 | 202 | \r\n| 4 | 40 | 100 | 40 |\r\n| 5 | 40 | 100 | 40 |\r\n| 6 | 40 | 100 | 40 | \r\n\r\n\r\nThe second through the nth all crash with \r\n\r\n```\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=19980970, num_examples=100, dataset_name='cies'), 'recorded': SplitInfo(name='train', num_bytes=40163811, num_examples=202, dataset_name='cies')}]\r\n\r\n```"],"created_at":1626776000000,"updated_at":1626886941000,"closed_at":1626873118000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nA codec error is raised while loading the blog_authorship_corpus. \r\n\r\n## Steps to reproduce the bug\r\n```\r\nfrom datasets import load_dataset\r\nraw_datasets = load_dataset(\"blog_authorship_corpus\")\r\n```\r\n\r\n\r\n## Expected results\r\nLoading the dataset without errors.\r\n\r\n## Actual results\r\nAn error similar to the one below was raised for (what seems like) every XML file.\r\n\/home\/izaskr\/.cache\/huggingface\/datasets\/downloads\/extracted\/7cf52524f6517e168604b41c6719292e8f97abbe8f731e638b13423f4212359a\/blogs\/788358.male.24.Arts.Libra.xml cannot be loaded. 
Error message: 'utf-8' codec can't decode byte 0xe7 in position 7551: invalid continuation byte\r\n\r\nTraceback (most recent call last): \r\n File \"\", line 1, in \r\n File \"\/home\/izaskr\/anaconda3\/envs\/local_vae_older\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 856, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/home\/izaskr\/anaconda3\/envs\/local_vae_older\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 583, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/home\/izaskr\/anaconda3\/envs\/local_vae_older\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 671, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"\/home\/izaskr\/anaconda3\/envs\/local_vae_older\/lib\/python3.8\/site-packages\/datasets\/utils\/info_utils.py\", line 74, in verify_splits\r\n raise NonMatchingSplitsSizesError(str(bad_splits))\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='train', num_bytes=614706451, num_examples=535568, dataset_name='blog_authorship_corpus')}, {'expected': SplitInfo(name='validation', num_bytes=37500394, num_examples=31277, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='validation', num_bytes=32553710, num_examples=28521, dataset_name='blog_authorship_corpus')}]\r\n\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.9.0\r\n- Platform: Linux-4.15.0-132-generic-x86_64-with-glibc2.10\r\n- Python version: 3.8.8\r\n- PyArrow version: 4.0.1\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2679\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2678","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2678\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2678\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2678\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2678","id":948471222,"node_id":"MDU6SXNzdWU5NDg0NzEyMjI=","number":2678,"title":"Import Error in Kaggle 
notebook","user":{"login":"prikmm","id":47216475,"node_id":"MDQ6VXNlcjQ3MjE2NDc1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47216475?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/prikmm","html_url":"https:\/\/github.com\/prikmm","followers_url":"https:\/\/api.github.com\/users\/prikmm\/followers","following_url":"https:\/\/api.github.com\/users\/prikmm\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/prikmm\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/prikmm\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/prikmm\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/prikmm\/orgs","repos_url":"https:\/\/api.github.com\/users\/prikmm\/repos","events_url":"https:\/\/api.github.com\/users\/prikmm\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/prikmm\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This looks like an issue with PyArrow. Did you try reinstalling it ?","@lhoestq I did, and then let pip handle the installation in `pip import datasets`. I also tried using conda but it gives the same error.\r\n\r\nEdit: pyarrow version on kaggle is 4.0.0, it gets replaced with 4.0.1. So, I don't think uninstalling will change anything.\r\n```\r\nInstall Trace of datasets:\r\n\r\nCollecting datasets\r\n Downloading datasets-1.9.0-py3-none-any.whl (262 kB)\r\n |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 262 kB 834 kB\/s eta 0:00:01\r\nRequirement already satisfied: dill in \/opt\/conda\/lib\/python3.7\/site-packages (from datasets) (0.3.4)\r\nCollecting pyarrow!=4.0.0,>=1.0.0\r\n Downloading pyarrow-4.0.1-cp37-cp37m-manylinux2014_x86_64.whl (21.8 MB)\r\n |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 21.8 MB 6.2 MB\/s eta 0:00:01\r\nRequirement already satisfied: importlib-metadata in \/opt\/conda\/lib\/python3.7\/site-packages (from datasets) (3.4.0)\r\nRequirement already satisfied: huggingface-hub<0.1.0 in \/opt\/conda\/lib\/python3.7\/site-packages (from datasets) (0.0.8)\r\nRequirement already satisfied: pandas in \/opt\/conda\/lib\/python3.7\/site-packages (from datasets) (1.2.4)\r\nRequirement already satisfied: requests>=2.19.0 in \/opt\/conda\/lib\/python3.7\/site-packages (from datasets) (2.25.1)\r\nRequirement already satisfied: fsspec>=2021.05.0 in \/opt\/conda\/lib\/python3.7\/site-packages (from datasets) (2021.6.1)\r\nRequirement already satisfied: multiprocess in \/opt\/conda\/lib\/python3.7\/site-packages (from datasets) (0.70.12.2)\r\nRequirement already satisfied: packaging in \/opt\/conda\/lib\/python3.7\/site-packages (from datasets) (20.9)\r\nCollecting xxhash\r\n Downloading xxhash-2.0.2-cp37-cp37m-manylinux2010_x86_64.whl (243 kB)\r\n 
|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 243 kB 23.7 MB\/s eta 0:00:01\r\nRequirement already satisfied: numpy>=1.17 in \/opt\/conda\/lib\/python3.7\/site-packages (from datasets) (1.19.5)\r\nRequirement already satisfied: tqdm>=4.27 in \/opt\/conda\/lib\/python3.7\/site-packages (from datasets) (4.61.1)\r\nRequirement already satisfied: filelock in \/opt\/conda\/lib\/python3.7\/site-packages (from huggingface-hub<0.1.0->datasets) (3.0.12)\r\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in \/opt\/conda\/lib\/python3.7\/site-packages (from requests>=2.19.0->datasets) (1.26.5)\r\nRequirement already satisfied: idna<3,>=2.5 in \/opt\/conda\/lib\/python3.7\/site-packages (from requests>=2.19.0->datasets) (2.10)\r\nRequirement already satisfied: certifi>=2017.4.17 in \/opt\/conda\/lib\/python3.7\/site-packages (from requests>=2.19.0->datasets) (2021.5.30)\r\nRequirement already satisfied: chardet<5,>=3.0.2 in \/opt\/conda\/lib\/python3.7\/site-packages (from requests>=2.19.0->datasets) (4.0.0)\r\nRequirement already satisfied: typing-extensions>=3.6.4 in \/opt\/conda\/lib\/python3.7\/site-packages (from importlib-metadata->datasets) (3.7.4.3)\r\nRequirement already satisfied: zipp>=0.5 in \/opt\/conda\/lib\/python3.7\/site-packages (from importlib-metadata->datasets) (3.4.1)\r\nRequirement already satisfied: pyparsing>=2.0.2 in \/opt\/conda\/lib\/python3.7\/site-packages (from packaging->datasets) (2.4.7)\r\nRequirement already satisfied: python-dateutil>=2.7.3 in \/opt\/conda\/lib\/python3.7\/site-packages (from pandas->datasets) (2.8.1)\r\nRequirement already satisfied: pytz>=2017.3 in \/opt\/conda\/lib\/python3.7\/site-packages (from pandas->datasets) (2021.1)\r\nRequirement already satisfied: six>=1.5 in \/opt\/conda\/lib\/python3.7\/site-packages (from python-dateutil>=2.7.3->pandas->datasets) (1.15.0)\r\nInstalling collected packages: xxhash, pyarrow, datasets\r\n Attempting uninstall: pyarrow\r\n Found existing installation: pyarrow 4.0.0\r\n Uninstalling pyarrow-4.0.0:\r\n Successfully uninstalled pyarrow-4.0.0\r\nSuccessfully installed datasets-1.9.0 pyarrow-4.0.1 xxhash-2.0.2\r\nWARNING: Running pip as root will break packages and permissions. You should install packages reliably by using venv: https:\/\/pip.pypa.io\/warnings\/venv\r\n```","You may need to restart your kaggle notebook after installing a newer version of `pyarrow`.\r\n\r\nIf it doesn't work we'll probably have to create an issue on [arrow's JIRA](https:\/\/issues.apache.org\/jira\/projects\/ARROW\/issues\/), and maybe ask kaggle why it could fail","> You may need to restart your kaggle notebook before after installing a newer version of `pyarrow`.\r\n> \r\n> If it doesn't work we'll probably have to create an issue on [arrow's JIRA](https:\/\/issues.apache.org\/jira\/projects\/ARROW\/issues\/), and maybe ask kaggle why it could fail\r\n\r\nIt works after restarting.\r\nMy bad, I forgot to restart the notebook. 
Sorry for the trouble!"],"created_at":1626773318000,"updated_at":1626875966000,"closed_at":1626872582000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nNot able to import datasets library in kaggle notebooks\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n!pip install datasets\r\nimport datasets\r\n```\r\n\r\n## Expected results\r\nNo such error\r\n\r\n## Actual results\r\n```\r\nImportError Traceback (most recent call last)\r\n in \r\n----> 1 import datasets\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/__init__.py in \r\n 31 )\r\n 32 \r\n---> 33 from .arrow_dataset import Dataset, concatenate_datasets\r\n 34 from .arrow_reader import ArrowReader, ReadInstruction\r\n 35 from .arrow_writer import ArrowWriter\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py in \r\n 36 import pandas as pd\r\n 37 import pyarrow as pa\r\n---> 38 import pyarrow.compute as pc\r\n 39 from multiprocess import Pool, RLock\r\n 40 from tqdm.auto import tqdm\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/pyarrow\/compute.py in \r\n 16 # under the License.\r\n 17 \r\n---> 18 from pyarrow._compute import ( # noqa\r\n 19 Function,\r\n 20 FunctionOptions,\r\n\r\nImportError: \/opt\/conda\/lib\/python3.7\/site-packages\/pyarrow\/_compute.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZNK5arrow7compute15KernelSignature8ToStringEv\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.9.0\r\n- Platform: Kaggle\r\n- Python version: 3.7.10\r\n- PyArrow version: 4.0.1\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2678\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2677","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2677\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2677\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2677\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2677","id":948429788,"node_id":"MDU6SXNzdWU5NDg0Mjk3ODg=","number":2677,"title":"Error when downloading C4","user":{"login":"Aktsvigun","id":36672861,"node_id":"MDQ6VXNlcjM2NjcyODYx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36672861?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Aktsvigun","html_url":"https:\/\/github.com\/Aktsvigun","followers_url":"https:\/\/api.github.com\/users\/Aktsvigun\/followers","following_url":"https:\/\/api.github.com\/users\/Aktsvigun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Aktsvigun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Aktsvigun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Aktsvigun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Aktsvigun\/orgs","repos_url":"https:\/\/api.github.com\/users\/Aktsvigun\/repos","events_url":"https:\/\/api.github.com\/users\/Aktsvigun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Aktsvigun\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something 
isn't working"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi Thanks for reporting !\r\nIt looks like these files are not correctly reported in the list of expected files to download, let me fix that ;)","Alright this is fixed now. We'll do a new release soon to make the fix available.\r\n\r\nIn the meantime feel free to simply pass `ignore_verifications=True` to `load_dataset` to skip this error","@lhoestq thank you for such a quick feedback!"],"created_at":1626770250000,"updated_at":1626792091000,"closed_at":1626791890000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\nI am trying to download `en` corpus from C4 dataset. However, I get an error caused by validation files download (see image). My code is very primitive:\r\n`datasets.load_dataset('c4', 'en')`\r\n\r\nIs this a bug or do I have some configurations missing on my server? 
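As suggested in the comments above, a minimal sketch of the temporary workaround until the fix is released (the `ignore_verifications` argument existed in `datasets` 1.x and skips the checksum and split-size checks; note that the full `en` config of C4 is several hundred GB):

```python
from datasets import load_dataset

# Skip the expected-files / split-size verification that raises the error above.
dataset = load_dataset("c4", "en", ignore_verifications=True)
```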
\r\nThanks!\r\n\r\n\r\n\"\u0421\u043d\u0438\u043c\u043e\u043a","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2677\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2676","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2676\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2676\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2676\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2676","id":947734909,"node_id":"MDExOlB1bGxSZXF1ZXN0NjkyNjc2NTg5","number":2676,"title":"Increase json reader block_size automatically","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1626706274000,"updated_at":1626717099000,"closed_at":1626717098000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2676","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2676","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2676.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2676.patch"},"body":"Currently some files can't be read with the default parameters of the JSON lines reader.\r\nFor example this one:\r\nhttps:\/\/huggingface.co\/datasets\/thomwolf\/codeparrot\/resolve\/main\/file-000000000006.json.gz\r\n\r\nraises a pyarrow error:\r\n```python\r\nArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)\r\n```\r\n\r\nThe block size that is used is the default one by pyarrow (related to this [jira issue](https:\/\/issues.apache.org\/jira\/browse\/ARROW-9612)).\r\n\r\nTo fix this issue I changed the block_size to increase automatically if there is a straddling issue when parsing a batch of json lines.\r\n\r\nBy default the value is `chunksize \/\/ 32` in order to leverage multithreading, and it doubles every time a straddling issue occurs. 
The block_size is then reset for each file.\r\n\r\ncc @thomwolf @albertvillanova ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2676\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2675","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2675\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2675\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2675\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2675","id":947657732,"node_id":"MDExOlB1bGxSZXF1ZXN0NjkyNjEwNTA1","number":2675,"title":"Parallelize ETag requests","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1626701442000,"updated_at":1626723205000,"closed_at":1626723205000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2675","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2675","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2675.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2675.patch"},"body":"Since https:\/\/github.com\/huggingface\/datasets\/pull\/2628 we use the ETag of the remote data files to compute the directory in the cache where a dataset is saved. This is useful in order to reload the dataset from the cache only if the remote files haven't changed.\r\n\r\nIn this PR I made the ETag requests parallel using multithreading. 
There is also a tqdm progress bar that shows up if there are more than 16 data files.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2675\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2674","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2674\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2674\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2674\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2674","id":947338202,"node_id":"MDExOlB1bGxSZXF1ZXN0NjkyMzMzODU3","number":2674,"title":"Fix sacrebleu parameter name","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1626678446000,"updated_at":1626682023000,"closed_at":1626682023000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2674","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2674","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2674.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2674.patch"},"body":"DONE:\r\n- Fix parameter name: `smooth` to `smooth_method`.\r\n- Improve kwargs description.\r\n- Align docs on using a metric.\r\n- Add example of passing additional arguments in using metrics.\r\n\r\nRelated to #2669.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2674\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2673","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2673\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2673\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2673\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2673","id":947300008,"node_id":"MDExOlB1bGxSZXF1ZXN0NjkyMzAxMTgw","number":2673,"title":"Fix potential DuplicatedKeysError in 
SQuAD","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1626674880000,"updated_at":1626678483000,"closed_at":1626678483000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2673","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2673","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2673.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2673.patch"},"body":"DONE:\r\n- Fix potential DuplicatedKeysError by ensuring keys are unique.\r\n- Align examples in the docs with SQuAD code.\r\n\r\nWe should promote as a good practice that the keys should be programmatically generated as unique, instead of read from data (which might not be unique).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2673\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2672","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2672\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2672\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2672\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2672","id":947294605,"node_id":"MDExOlB1bGxSZXF1ZXN0NjkyMjk2NDQ4","number":2672,"title":"Fix potential DuplicatedKeysError in 
LibriSpeech","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1626674449000,"updated_at":1626676137000,"closed_at":1626676136000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2672","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2672","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2672.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2672.patch"},"body":"DONE:\r\n- Fix unnecessary path join.\r\n- Fix potential DuplicatedKeysError by ensuring keys are unique.\r\n\r\nWe should promote as a good practice that the keys should be programmatically generated as unique, instead of read from data (which might not be unique).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2672\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2671","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2671\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2671\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2671\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2671","id":947273875,"node_id":"MDExOlB1bGxSZXF1ZXN0NjkyMjc5MTM0","number":2671,"title":"Mesinesp development and training data sets have been 
added.","user":{"login":"aslihanuysall","id":32900185,"node_id":"MDQ6VXNlcjMyOTAwMTg1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32900185?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/aslihanuysall","html_url":"https:\/\/github.com\/aslihanuysall","followers_url":"https:\/\/api.github.com\/users\/aslihanuysall\/followers","following_url":"https:\/\/api.github.com\/users\/aslihanuysall\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/aslihanuysall\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/aslihanuysall\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/aslihanuysall\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/aslihanuysall\/orgs","repos_url":"https:\/\/api.github.com\/users\/aslihanuysall\/repos","events_url":"https:\/\/api.github.com\/users\/aslihanuysall\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/aslihanuysall\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["It'll be new pull request with new commits."],"created_at":1626671678000,"updated_at":1626679948000,"closed_at":1626677150000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2671","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2671","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2671.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2671.patch"},"body":"https:\/\/zenodo.org\/search?page=1&size=20&q=mesinesp, Mesinesp has Medical Semantic Indexed records in Spanish. Indexing is done using DeCS codes, a sort of Spanish equivalent to MeSH terms.\r\nThe Mesinesp (Spanish BioASQ track, see https:\/\/temu.bsc.es\/mesinesp) development set has a total of 750 records.\r\nThe Mesinesp (Spanish BioASQ track, see https:\/\/temu.bsc.es\/mesinesp) training set has a total of 369,368 records. 
\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2671\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2670","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2670\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2670\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2670\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2670","id":947120709,"node_id":"MDU6SXNzdWU5NDcxMjA3MDk=","number":2670,"title":"Using sharding to parallelize indexing","user":{"login":"ggdupont","id":5583410,"node_id":"MDQ6VXNlcjU1ODM0MTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5583410?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ggdupont","html_url":"https:\/\/github.com\/ggdupont","followers_url":"https:\/\/api.github.com\/users\/ggdupont\/followers","following_url":"https:\/\/api.github.com\/users\/ggdupont\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ggdupont\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ggdupont\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ggdupont\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ggdupont\/orgs","repos_url":"https:\/\/api.github.com\/users\/ggdupont\/repos","events_url":"https:\/\/api.github.com\/users\/ggdupont\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ggdupont\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1626643586000,"updated_at":1626643586000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"**Is your feature request related to a problem? Please describe.**\r\nCreating an elasticsearch index on a large dataset could be quite slow and cannot be parallelized on shards (the index creation is colliding)\r\n\r\n**Describe the solution you'd like**\r\nWhen working on dataset shards, if an index already exists, its mapping should be checked and if compatible, the indexing process should continue with the shard data. \r\n\r\nAdditionally, at the end of the process, the `_indexes` dict should be sent back to the original dataset object (from which the shards have been created) to allow using the index for later filtering on the whole dataset.\r\n\r\n**Describe alternatives you've considered**\r\nEach dataset shard could create independent partial indices. Then, at the whole dataset level, all indices should be referred to in the `_indexes` dict and be used in querying through `get_nearest_examples()`. 
The drawback is that the scores will be computed independently on the partial indices, leading to inconsistent values for most scoring based on corpus-level statistics (tf\/idf, BM25).\r\n\r\n**Additional context**\r\nThe objective is to parallelize the index creation to speed up the process (i.e. putting more load on the ES server, which is fine to handle a large load) while later enabling search on the whole dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2670\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2669","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2669\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2669\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2669\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2669","id":946982998,"node_id":"MDU6SXNzdWU5NDY5ODI5OTg=","number":2669,"title":"Metric kwargs are not passed to underlying external metric f1_score","user":{"login":"BramVanroy","id":2779410,"node_id":"MDQ6VXNlcjI3Nzk0MTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2779410?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/BramVanroy","html_url":"https:\/\/github.com\/BramVanroy","followers_url":"https:\/\/api.github.com\/users\/BramVanroy\/followers","following_url":"https:\/\/api.github.com\/users\/BramVanroy\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/BramVanroy\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/BramVanroy\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/BramVanroy\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/BramVanroy\/orgs","repos_url":"https:\/\/api.github.com\/users\/BramVanroy\/repos","events_url":"https:\/\/api.github.com\/users\/BramVanroy\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/BramVanroy\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @BramVanroy, thanks for reporting.\r\n\r\nFirst, note that `\"min\"` is not an allowed value for `average`. According to scikit-learn [documentation](https:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.metrics.f1_score.html), `average` can only take the values: `{\"micro\", \"macro\", \"samples\", \"weighted\", \"binary\"} or None, default=\"binary\"`.\r\n\r\nSecond, you should take into account that all additional metric-specific argument should be passed in the method `compute` (and not in the method `load_metric`). You can find more information in our documentation: https:\/\/huggingface.co\/docs\/datasets\/using_metrics.html#computing-the-metric-scores\r\n\r\nSo for example, if you would like to calculate the macro-averaged F1 score, you should use:\r\n```python\r\nimport datasets\r\n\r\nf1 = datasets.load_metric(\"f1\", keep_in_memory=True)\r\nf1.add_batch(predictions=[0,2,3], references=[1, 2, 3])\r\nf1.compute(average=\"macro\")\r\n```","Thanks, that was it. A bit strange though, since `load_metric` had an argument `metric_init_kwargs`. 
I assume that that's for specific initialisation arguments whereas `average` is for the function itself."],"created_at":1626597151000,"updated_at":1626633365000,"closed_at":1626607144000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nWhen I want to use F1 score with average=\"min\", this keyword argument does not seem to be passed through to the underlying sklearn metric. This is evident because [sklearn](https:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.metrics.f1_score.html) throws an error telling me so.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nimport datasets\r\nf1 = datasets.load_metric(\"f1\", keep_in_memory=True, average=\"min\")\r\nf1.add_batch(predictions=[0,2,3], references=[1, 2, 3])\r\nf1.compute()\r\n```\r\n\r\n## Expected results\r\nNo error, because `average=\"min\"` should be passed correctly to f1_score in sklearn.\r\n\r\n## Actual results\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"C:\\Users\\bramv\\.virtualenvs\\pipeline-TpEsXVex\\lib\\site-packages\\datasets\\metric.py\", line 402, in compute\r\n output = self._compute(predictions=predictions, references=references, **kwargs)\r\n File \"C:\\Users\\bramv\\.cache\\huggingface\\modules\\datasets_modules\\metrics\\f1\\82177930a325d4c28342bba0f116d73f6d92fb0c44cd67be32a07c1262b61cfe\\f1.py\", line 97, in _compute\r\n \"f1\": f1_score(\r\n File \"C:\\Users\\bramv\\.virtualenvs\\pipeline-TpEsXVex\\lib\\site-packages\\sklearn\\utils\\validation.py\", line 63, in inner_f\r\n return f(*args, **kwargs)\r\n File \"C:\\Users\\bramv\\.virtualenvs\\pipeline-TpEsXVex\\lib\\site-packages\\sklearn\\metrics\\_classification.py\", line 1071, in f1_score\r\n return fbeta_score(y_true, y_pred, beta=1, labels=labels,\r\n File \"C:\\Users\\bramv\\.virtualenvs\\pipeline-TpEsXVex\\lib\\site-packages\\sklearn\\utils\\validation.py\", line 63, in inner_f\r\n return f(*args, **kwargs)\r\n File \"C:\\Users\\bramv\\.virtualenvs\\pipeline-TpEsXVex\\lib\\site-packages\\sklearn\\metrics\\_classification.py\", line 1195, in fbeta_score\r\n _, _, f, _ = precision_recall_fscore_support(y_true, y_pred,\r\n File \"C:\\Users\\bramv\\.virtualenvs\\pipeline-TpEsXVex\\lib\\site-packages\\sklearn\\utils\\validation.py\", line 63, in inner_f\r\n return f(*args, **kwargs)\r\n File \"C:\\Users\\bramv\\.virtualenvs\\pipeline-TpEsXVex\\lib\\site-packages\\sklearn\\metrics\\_classification.py\", line 1464, in precision_recall_fscore_support\r\n labels = _check_set_wise_labels(y_true, y_pred, average, labels,\r\n File \"C:\\Users\\bramv\\.virtualenvs\\pipeline-TpEsXVex\\lib\\site-packages\\sklearn\\metrics\\_classification.py\", line 1294, in _check_set_wise_labels\r\n raise ValueError(\"Target is %s but average='binary'. Please \"\r\nValueError: Target is multiclass but average='binary'. 
Please choose another average setting, one of [None, 'micro', 'macro', 'weighted'].\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.9.0\r\n- Platform: Windows-10-10.0.19041-SP0\r\n- Python version: 3.9.2\r\n- PyArrow version: 4.0.1","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2669\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2668","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2668\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2668\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2668\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2668","id":946867622,"node_id":"MDExOlB1bGxSZXF1ZXN0NjkxOTY1MTY1","number":2668,"title":"Add Russian SuperGLUE","user":{"login":"slowwavesleep","id":44175589,"node_id":"MDQ6VXNlcjQ0MTc1NTg5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/44175589?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/slowwavesleep","html_url":"https:\/\/github.com\/slowwavesleep","followers_url":"https:\/\/api.github.com\/users\/slowwavesleep\/followers","following_url":"https:\/\/api.github.com\/users\/slowwavesleep\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/slowwavesleep\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/slowwavesleep\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/slowwavesleep\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/slowwavesleep\/orgs","repos_url":"https:\/\/api.github.com\/users\/slowwavesleep\/repos","events_url":"https:\/\/api.github.com\/users\/slowwavesleep\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/slowwavesleep\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Added the missing label classes and their explanations (to the best of my understanding)","Thanks a lot ! Once the last comment about the label names is addressed we can merge :)"],"created_at":1626543688000,"updated_at":1627559431000,"closed_at":1627559431000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2668","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2668","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2668.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2668.patch"},"body":"Hi,\r\n\r\nThis adds the [Russian SuperGLUE](https:\/\/russiansuperglue.com\/) dataset. 
For the most part I reused the code for the original SuperGLUE, although there are some relatively minor differences in the structure that I accounted for.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2668\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2667","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2667\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2667\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2667\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2667","id":946861908,"node_id":"MDExOlB1bGxSZXF1ZXN0NjkxOTYwNzc3","number":2667,"title":"Use tqdm from tqdm_utils","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The current CI failure is due to modifications in the dataset script.","Merging since the CI is only failing because of dataset card issues, which is unrelated to this PR"],"created_at":1626541595000,"updated_at":1626716350000,"closed_at":1626715920000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2667","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2667","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2667.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2667.patch"},"body":"This PR replaces `tqdm` from the `tqdm` lib with `tqdm` from `datasets.utils.tqdm_utils`. With this change, it's possible to disable progress bars just by calling `disable_progress_bar`. Note this doesn't work on Windows when using multiprocessing due to how global variables are shared between processes. Currently, there is no easy way to disable progress bars in a multiprocess setting on Windows (patching logging with `datasets.utils.logging.get_verbosity = lambda: datasets.utils.logging.NOTSET` doesn't seem to work as well), so adding support for this is a future goal. 
Additionally, this PR adds a unit (\"ba\" for batches) to the bar printed by `Dataset.to_json` (this change is motivated by https:\/\/github.com\/huggingface\/datasets\/issues\/2657).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2667\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2666","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2666\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2666\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2666\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2666","id":946825140,"node_id":"MDExOlB1bGxSZXF1ZXN0NjkxOTMzMDM1","number":2666,"title":"Adds CodeClippy dataset [WIP]","user":{"login":"arampacha","id":69807323,"node_id":"MDQ6VXNlcjY5ODA3MzIz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/69807323?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/arampacha","html_url":"https:\/\/github.com\/arampacha","followers_url":"https:\/\/api.github.com\/users\/arampacha\/followers","following_url":"https:\/\/api.github.com\/users\/arampacha\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/arampacha\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/arampacha\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/arampacha\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/arampacha\/orgs","repos_url":"https:\/\/api.github.com\/users\/arampacha\/repos","events_url":"https:\/\/api.github.com\/users\/arampacha\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/arampacha\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1626528724000,"updated_at":1626685794000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2666","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2666","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2666.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2666.patch"},"body":"CodeClippy is an opensource code dataset scrapped from github during flax-jax-community-week\r\nhttps:\/\/the-eye.eu\/public\/AI\/training_data\/code_clippy_data\/","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2666\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2665","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2665\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2665\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2665\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2665","id":946822036,"node_id":"MDExOlB1bGxSZXF1ZXN0NjkxOTMwNjky","number":2665,"title":"Adds APPS dataset to the hub 
[WIP]","user":{"login":"arampacha","id":69807323,"node_id":"MDQ6VXNlcjY5ODA3MzIz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/69807323?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/arampacha","html_url":"https:\/\/github.com\/arampacha","followers_url":"https:\/\/api.github.com\/users\/arampacha\/followers","following_url":"https:\/\/api.github.com\/users\/arampacha\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/arampacha\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/arampacha\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/arampacha\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/arampacha\/orgs","repos_url":"https:\/\/api.github.com\/users\/arampacha\/repos","events_url":"https:\/\/api.github.com\/users\/arampacha\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/arampacha\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1626527597000,"updated_at":1626544607000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2665","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2665","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2665.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2665.patch"},"body":"A loading script for [APPS dataset](https:\/\/github.com\/hendrycks\/apps) ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2665\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2663","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2663\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2663\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2663\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2663","id":946552273,"node_id":"MDU6SXNzdWU5NDY1NTIyNzM=","number":2663,"title":"[`to_json`] add multi-proc sharding 
support","user":{"login":"stas00","id":10676103,"node_id":"MDQ6VXNlcjEwNjc2MTAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10676103?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stas00","html_url":"https:\/\/github.com\/stas00","followers_url":"https:\/\/api.github.com\/users\/stas00\/followers","following_url":"https:\/\/api.github.com\/users\/stas00\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stas00\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stas00\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stas00\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stas00\/orgs","repos_url":"https:\/\/api.github.com\/users\/stas00\/repos","events_url":"https:\/\/api.github.com\/users\/stas00\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stas00\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @stas00, \r\nI want to work on this issue and I was thinking why don't we use `imap` [in this loop](https:\/\/github.com\/huggingface\/datasets\/blob\/440b14d0dd428ae1b25881aa72ba7bbb8ad9ff84\/src\/datasets\/io\/json.py#L99)? This way, using offset (which is being used to slice the pyarrow table) we can convert pyarrow table to `json` using multiprocessing. I've a small code snippet for some clarity:\r\n```\r\nresult = list(\r\n pool.imap(self._apply_df, [(offset, batch_size) for offset in range(0, len(self.dataset), batch_size)])\r\n )\r\n```\r\n`_apply_df` is a function which will return `batch.to_pandas().to_json(path_or_buf=None, orient=\"records\", lines=True)` which is basically json version of the batched pyarrow table. Later on we can concatenate it to form json file? \r\n\r\nI think the only downside here is to write file from `imap` output (output would be a list and we'll need to iterate over it and write in a file) which might add a little overhead cost. What do you think about this?","Followed up in https:\/\/github.com\/huggingface\/datasets\/pull\/2747"],"created_at":1626464510000,"updated_at":1631541397000,"closed_at":1631541397000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"As discussed on slack it appears that `to_json` is quite slow on huge datasets like OSCAR.\r\n\r\nI implemented sharded saving, which is much much faster - but the tqdm bars all overwrite each other, so it's hard to make sense of the progress, so if possible ideally this multi-proc support could be implemented internally in `to_json` via `num_proc` argument. 
I guess `num_proc` will be the number of shards?\r\n\r\nI think the user will need to use this feature wisely, since too many processes writing to say normal style HD is likely to be slower than one process.\r\n\r\nI'm not sure whether the user should be responsible to concatenate the shards at the end or `datasets`, either way works for my needs.\r\n\r\nThe code I was using:\r\n\r\n```\r\nfrom multiprocessing import cpu_count, Process, Queue\r\n\r\n[...]\r\n\r\nfiltered_dataset = concat_dataset.map(filter_short_documents, batched=True, batch_size=256, num_proc=cpu_count())\r\n\r\nDATASET_NAME = \"oscar\"\r\nSHARDS = 10\r\ndef process_shard(idx):\r\n print(f\"Sharding {idx}\")\r\n ds_shard = filtered_dataset.shard(SHARDS, idx, contiguous=True)\r\n # ds_shard = ds_shard.shuffle() # remove contiguous=True above if shuffling\r\n print(f\"Saving {DATASET_NAME}-{idx}.jsonl\")\r\n ds_shard.to_json(f\"{DATASET_NAME}-{idx}.jsonl\", orient=\"records\", lines=True, force_ascii=False)\r\n\r\nqueue = Queue()\r\nprocesses = [Process(target=process_shard, args=(idx,)) for idx in range(SHARDS)]\r\nfor p in processes:\r\n p.start()\r\n\r\nfor p in processes:\r\n p.join()\r\n```\r\n\r\nThank you!\r\n\r\n@lhoestq ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2663\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2662","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2662\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2662\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2662\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2662","id":946470815,"node_id":"MDExOlB1bGxSZXF1ZXN0NjkxNjM5MjU5","number":2662,"title":"Load Dataset from the Hub (NO DATASET SCRIPT)","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This is ready for review now :)\r\n\r\nI would love to have some feedback on the changes in load.py @albertvillanova. 
There are many changes so if you have questions let me know, especially on the `resolve_data_files` functions and on the changes in `prepare_module`.\r\n\r\nAnd @thomwolf if you want to take a look at the documentation, feel free to share your suggestions :)","I took your comments into account thanks !\r\nAnd I made `aiohttp` a required dependency :)","Just updated the documentation :)\r\n[share_datasets.html](https:\/\/45532-250213286-gh.circle-artifacts.com\/0\/docs\/_build\/html\/share_dataset.html)\r\n\r\nLet me know if you have some comments","Merging this one :) \r\n\r\nWe can try to integrate the changes in the docs to #2718 @stevhliu !","Baked this into the [docs](https:\/\/44335-250213286-gh.circle-artifacts.com\/0\/docs\/_build\/html\/loading.html#hugging-face-hub) already, let me know if there is anything else I should add! :)"],"created_at":1626456118000,"updated_at":1629903181000,"closed_at":1629901088000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2662","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2662","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2662.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2662.patch"},"body":"## Load the data from any Dataset repository on the Hub\r\n\r\nThis PR adds support for loading datasets from any dataset repository on the hub, without requiring any dataset script.\r\n\r\nAs a user it's now possible to create a repo and upload some csv\/json\/text\/parquet files, and then be able to load the data in one line. Here is an example with the `allenai\/c4` repository that contains a lot of compressed json lines files:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndata_files = {\"train\": \"en\/c4-train.*.json.gz\"}\r\nc4 = load_dataset(\"allenai\/c4\", data_files=data_files, split=\"train\", streaming=True)\r\n\r\nprint(c4.n_shards)\r\n# 1024\r\nprint(next(iter(c4)))\r\n# {'text': 'Beginners BBQ Class Takin...'}\r\n```\r\n\r\nBy default it loads all the files, but as shown in the example you can choose the ones you want with unix style patterns.\r\n\r\nOf course it's still possible to use dataset scripts since they offer the most flexibility.\r\n\r\n## Implementation details\r\n\r\nIt uses `huggingface_hub` to list the files in a dataset repository.\r\n\r\nIf you provide a path to a local directory instead of a repository name, it works the same way but it uses `glob`.\r\n\r\nDepending on the data files available, or passed in the `data_files` parameter, one of the available builders will be used among the csv, json, text and parquet builders.\r\n\r\nBecause of this, it's not possible to load both csv and json files at once. 
In this case you have to load them separately and then concatenate the two datasets for example.\r\n\r\n## TODO\r\n\r\n- [x] tests\r\n- [x] docs\r\n- [x] when huggingface_hub gets a new release, update the CI and the setup.py\r\n\r\nClose https:\/\/github.com\/huggingface\/datasets\/issues\/2629","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2662\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2661","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2661\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2661\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2661\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2661","id":946446967,"node_id":"MDExOlB1bGxSZXF1ZXN0NjkxNjE5MzAz","number":2661,"title":"Add SD task for SUPERB","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I make a summary about our discussion with @lewtun and @Narsil on the agreed schema for this dataset and the additional steps required to generate the 2D array labels:\r\n- The labels for this dataset are a 2D array:\r\n Given an example:\r\n ```python\r\n {\"record_id\": record_id, \"file\": file, \"start\": start, \"end\": end, \"speakers\": [...]}\r\n ```\r\n the labels are a 2D array of shape `(num_frames, num_speakers)` where `num_frames = end - start` and `num_speakers = 2`.\r\n- In order to avoid a too large dataset (too large disk space), `datasets` does not store the 2D array label. 
Instead, we store a compact form:\r\n ```\r\n \"speakers\": [\r\n {\"speaker_id\": speaker_0_id, \"start\": start_0_speaker_0, \"end\": end_0_speaker_0},\r\n {\"speaker_id\": speaker_0_id, \"start\": start_1_speaker_0, \"end\": end_1_speaker_0},\r\n {\"speaker_id\": speaker_1_id, \"start\": start_0_speaker_1, \"end\": end_0_speaker_1},\r\n ],\r\n ```\r\n - Once loaded the dataset, an additional step is required to generate the 2D array label from this compact form\r\n - This additional step should be a modified version of the s3prl method `_get_labeled_speech`:\r\n - Original s3prl `_get_labeled_speech` includes 2 functionalities: reading the audio file and transforming it into an array, and generating the label 2D array; I think we should separate these 2 functionalities\r\n - Original s3prl `_get_labeled_speech` performs 2 steps to generate the labels:\r\n - Transform start\/end seconds (float) into frame numbers (int): I have already done this step to generate the dataset\r\n - Generate the 2D array label from the frame numbers\r\n\r\nI also ping @osanseviero and @lhoestq to include them in the loop.","Here I would like to discuss (and agree) one of the decisions I made, as I'm not completely satisfied with it: to transform the seconds (float) into frame numbers (int) to generate this dataset.\r\n\r\n- A priori, the most natural and general choice would be to preserve the seconds (float), because:\r\n - this is the way the raw data comes from\r\n - the transformation into frame numbers depends on the sample rate, frame_shift and subsampling\r\n\r\nHowever, I finally decided to transform seconds into frame numbers because:\r\n- for SUPERB, sampling rate, frame_shift and subsampling are fixed (`rate = 16_000`, `frame_shift = 160`, `subsampling = 1`)\r\n- it makes easier the post-processing, as labels are generated from sample numbers: labels are a 2D array of shape `(num_frames, num_speakers)`\r\n- the number of examples depends on the number of frames:\r\n - if an example has more than 2_000 frames, then it is split into 2 examples. This is the case for `record_id = \"7859-102521-0017_3983-5371-0014\"`, which has 2_452 frames and it is split into 2 examples:\r\n ```\r\n {\"record_id\": \"7859-102521-0017_3983-5371-0014\", \"start\"= 0, \"end\": 2_000,...},\r\n {\"record_id\": \"7859-102521-0017_3983-5371-0014\", \"start\"= 2_000, \"end\": 2_452,...},\r\n ```\r\n\r\nAs I told you, I'm not totally convinced of this decision, and I would really appreciate your opinion.\r\n\r\ncc: @lewtun @Narsil @osanseviero @lhoestq ","It makes total sense to prepare the data to be in a format that can actually be used for model training and evaluation. That's one of the roles of this lib :)\r\n\r\nSo for me it's ok to use frames as a unit instead of seconds. Just pinging @patrickvonplaten in case he has ever played with such audio tasks and has some advice. 
For the context: the task is to classify which speaker is speaking, let us know if you are aware of any convenient\/standard format for this.\r\n\r\nAlso I'm not sure why you have to split an example if it's longer that 2,000 frames ?","> Also I'm not sure why you have to split an example if it's longer that 2,000 frames ?\r\n\r\nIt is a convention in SUPERB benchmark.","Note that if we agree to leave the dataset as it is now, 2 additional custom functions must be used:\r\n- one to generate the 2D array labels\r\n- one to load the audio file into an array, but taking into account start\/end to cut the audio\r\n\r\nIs there a way we can give these functions ready to be used? Or should we leave this entirely to the end user? This is not trivial...","You could add an example of usage in the dataset card, as it is done for other audio datasets","@albertvillanova this simple function can be edited simply to add the start\/stop cuts \r\n\r\nhttps:\/\/github.com\/huggingface\/transformers\/blob\/master\/src\/transformers\/pipelines\/automatic_speech_recognition.py#L29 ","Does this function work on windows ?","Windows ? What is it ? (Not sure not able to test, it's directly calling ffmpeg binary, so depending on the setup it could but can't say for sure without testing)\r\n","It's one of the OS we're supposed to support :P (for the better and for the worse)","> Note that if we agree to leave the dataset as it is now, 2 additional custom functions must be used:\r\n> \r\n> * one to generate the 2D array labels\r\n> * one to load the audio file into an array, but taking into account start\/end to cut the audio\r\n> \r\n> Is there a way we can give these functions ready to be used? Or should we leave this entirely to the end user? This is not trivial...\r\n\r\n+1 on providing the necessary functions on the dataset card. 
aside from that, the current implementation looks great from my perspective!"],"created_at":1626453801000,"updated_at":1628096633000,"closed_at":1628096633000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2661","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2661","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2661.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2661.patch"},"body":"Include the SD (Speaker Diarization) task as described in the [SUPERB paper](https:\/\/arxiv.org\/abs\/2105.01051) and `s3prl` [instructions](https:\/\/github.com\/s3prl\/s3prl\/tree\/master\/s3prl\/downstream#sd-speaker-diarization).\r\n\r\nTODO:\r\n- [x] Generate the LibriMix corpus\r\n- [x] Prepare the corpus for diarization\r\n- [x] Upload these files to the superb-data repo\r\n- [x] Transcribe the corresponding s3prl processing of these files into our superb loading script\r\n- [x] README: tags + description sections\r\n- ~~Add DER metric~~ (we leave the DER metric for a follow-up PR)\r\n\r\nRelated to #2619.\r\n\r\nClose #2653.\r\n\r\ncc: @lewtun ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2661\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2660","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2660\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2660\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2660\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2660","id":946316180,"node_id":"MDExOlB1bGxSZXF1ZXN0NjkxNTA4NzE0","number":2660,"title":"Move checks from _map_single to map","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq This one has been open for a while. 
Could you please take a look?","@lhoestq Ready for the final review!","I forgot to update the signature of `DatasetDict.map`, so did that now."],"created_at":1626443613000,"updated_at":1630937543000,"closed_at":1630937543000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2660","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2660","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2660.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2660.patch"},"body":"The goal of this PR is to remove duplicated checks in the `map` logic to execute them only once whenever possible (`fn_kwargs`, `input_columns`, ...). Additionally, this PR improves the consistency (to align it with `input_columns`) of the `remove_columns` check by adding support for a single string value, which is then wrapped into a list. ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2660\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2659","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2659\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2659\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2659\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2659","id":946155407,"node_id":"MDExOlB1bGxSZXF1ZXN0NjkxMzcwNzU3","number":2659,"title":"Allow dataset config kwargs to be None","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1626431138000,"updated_at":1626439567000,"closed_at":1626439567000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2659","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2659","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2659.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2659.patch"},"body":"Close https:\/\/github.com\/huggingface\/datasets\/issues\/2658\r\n\r\nThe dataset config kwargs that were set to None we simply ignored.\r\nThis was an issue when None has some meaning for certain parameters of certain builders, like the `sep` parameter of the \"csv\" builder that allows to 
infer to separator.\r\n\r\ncc @SBrandeis ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2659\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2658","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2658\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2658\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2658\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2658","id":946139532,"node_id":"MDU6SXNzdWU5NDYxMzk1MzI=","number":2658,"title":"Can't pass `sep=None` to load_dataset(\"csv\", ...) to infer the separator via pandas.read_csv","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/
lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1626429944000,"updated_at":1626439566000,"closed_at":1626439566000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"When doing `load_dataset(\"csv\", sep=None)`, the `sep` passed to `pd.read_csv` is still the default `sep=\",\"` instead, which makes it impossible to make the csv loader infer the separator.\r\n\r\nRelated to https:\/\/github.com\/huggingface\/datasets\/pull\/2656\r\n\r\ncc @SBrandeis ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2658\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2657","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2657\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2657\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2657\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2657","id":945822829,"node_id":"MDU6SXNzdWU5NDU4MjI4Mjk=","number":2657,"title":"`to_json` reporting enhancements","user":{"login":"stas00","id":10676103,"node_id":"MDQ6VXNlcjEwNjc2MTAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10676103?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stas00","html_url":"https:\/\/github.com\/stas00","followers_url":"https:\/\/api.github.com\/users\/stas00\/followers","following_url":"https:\/\/api.github.com\/users\/stas00\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stas00\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stas00\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stas00\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stas00\/orgs","repos_url":"https:\/\/api.github.com\/users\/stas00\/repos","events_url":"https:\/\/api.github.com\/users\/stas00\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stas00\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1626391938000,"updated_at":1626392033000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"While using `to_json` 2 things came to mind that would have made the experience easier on the user:\r\n\r\n1. Could we have a `desc` arg for the tqdm use and a fallback to just `to_json` so that it'd be clear to the user what's happening? Surely, one can just print the description before calling json, but I thought perhaps it'd help to have it self-identify like you did for other progress bars recently.\r\n\r\n2. 
It took me a while to make sense of the reported numbers:\r\n```\r\n 22%|\u2588\u2588\u258f | 1536\/7076 [12:30:57<44:09:42, 28.70s\/it]\r\n```\r\nSo iteration here happens to be 10K samples, and the total is 70M records. But the user does't know that, so the progress bar is perfect, but the numbers it reports are meaningless until one discovers that 1it=10K samples. And one still has to convert these in the head - so it's not quick. Not exactly sure what's the best way to approach this, perhaps it can be part of `desc`? or report M or K, so it'd be built-in if it were to print, e.g.:\r\n```\r\n 22%|\u2588\u2588\u258f | 15360K\/70760K [12:30:57<44:09:42, 28.70s\/it]\r\n```\r\nor \r\n```\r\n 22%|\u2588\u2588\u258f | 15.36M\/70.76M [12:30:57<44:09:42, 28.70s\/it]\r\n```\r\n(while of course remaining friendly to small datasets)\r\n\r\nI forget if tqdm lets you add a magnitude identifier to the running count.\r\n\r\nThank you!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2657\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2656","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2656\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2656\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2656\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2656","id":945421790,"node_id":"MDExOlB1bGxSZXF1ZXN0NjkwNzUzNjA3","number":2656,"title":"Change `from_csv` default arguments","user":{"login":"SBrandeis","id":33657802,"node_id":"MDQ6VXNlcjMzNjU3ODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33657802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SBrandeis","html_url":"https:\/\/github.com\/SBrandeis","followers_url":"https:\/\/api.github.com\/users\/SBrandeis\/followers","following_url":"https:\/\/api.github.com\/users\/SBrandeis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SBrandeis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SBrandeis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SBrandeis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SBrandeis\/orgs","repos_url":"https:\/\/api.github.com\/users\/SBrandeis\/repos","events_url":"https:\/\/api.github.com\/users\/SBrandeis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SBrandeis\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This is not the default in pandas right ?\r\nWe try to align our CSV loader with the pandas API.\r\n\r\nMoreover according to their documentation, the python parser is used when sep is None, which might not be the fastest one.\r\n\r\nMaybe users could just specify `sep=None` themselves ?\r\nIn this case we should add some documentation about 
this"],"created_at":1626358146000,"updated_at":1626431006000,"closed_at":1626431006000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2656","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2656","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2656.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2656.patch"},"body":"Passing `sep=None` to pandas's `read_csv` lets pandas guess the CSV file's separator\r\n\r\nThis PR allows users to use this pandas's feature by passing `sep=None` to `Dataset.from_csv`:\r\n\r\n```python\r\nDataset.from_csv(\r\n ...,\r\n sep=None\r\n)\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2656\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2655","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2655\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2655\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2655\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2655","id":945382723,"node_id":"MDU6SXNzdWU5NDUzODI3MjM=","number":2655,"title":"Allow the selection of multiple columns at once","user":{"login":"Dref360","id":8976546,"node_id":"MDQ6VXNlcjg5NzY1NDY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8976546?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Dref360","html_url":"https:\/\/github.com\/Dref360","followers_url":"https:\/\/api.github.com\/users\/Dref360\/followers","following_url":"https:\/\/api.github.com\/users\/Dref360\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Dref360\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Dref360\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Dref360\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Dref360\/orgs","repos_url":"https:\/\/api.github.com\/users\/Dref360\/repos","events_url":"https:\/\/api.github.com\/users\/Dref360\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Dref360\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! I was looking into this and hope you can clarify a point. Your my_dataset variable would be of type DatasetDict which means the alternative you've described (dict comprehension) is what makes sense. \r\nIs there a reason why you wouldn't want to convert my_dataset to a pandas df if you'd like to use it like one? Please let me know if I'm missing something.","Hi! 
Sorry for the delay.\r\n\r\nIn this case, the dataset would be a `datasets.Dataset` and we want to select multiple columns, the `idx` and `label` columns for example.\r\n\r\nMy issue is that my dataset is too big for memory if I load everything into pandas."],"created_at":1626355845000,"updated_at":1627054857000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"**Is your feature request related to a problem? Please describe.**\r\n\r\nSimilar to pandas, it would be great if we could select multiple columns at once.\r\n\r\n\r\n**Describe the solution you'd like**\r\n```python\r\nmy_dataset = ... # Has columns ['idx', 'sentence', 'label']\r\nidx, label = my_dataset[['idx', 'label']]\r\n```\r\n\r\n**Describe alternatives you've considered**\r\nwe can do `[dataset[col] for col in ('idx', 'label')]`\r\n\r\n**Additional context**\r\nThis is of course very minor.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2655\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2654","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2654\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2654\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2654\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2654","id":945167231,"node_id":"MDU6SXNzdWU5NDUxNjcyMzE=","number":2654,"title":"Give a user feedback if the dataset he loads is streamable or not","user":{"login":"philschmid","id":32632186,"node_id":"MDQ6VXNlcjMyNjMyMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32632186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/philschmid","html_url":"https:\/\/github.com\/philschmid","followers_url":"https:\/\/api.github.com\/users\/philschmid\/followers","following_url":"https:\/\/api.github.com\/users\/philschmid\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/philschmid\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/philschmid\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/philschmid\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/philschmid\/orgs","repos_url":"https:\/\/api.github.com\/users\/philschmid\/repos","events_url":"https:\/\/api.github.com\/users\/philschmid\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/philschmid\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or 
request"}],"state":"open","locked":false,"assignee":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"assignees":[{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["#self-assign","I understand it already raises a `NotImplementedError` exception, eg:\r\n\r\n```\r\n>>> dataset = load_dataset(\"journalists_questions\", name=\"plain_text\", split=\"train\", streaming=True)\r\n\r\n[...]\r\nNotImplementedError: Extraction protocol for file at https:\/\/drive.google.com\/uc?export=download&id=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_U9U is not implemented yet\r\n```\r\n"],"created_at":1626340047000,"updated_at":1627902201000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"**Is your feature request related to a problem? Please describe.**\r\nI would love to know if a `dataset` is with the current implementation streamable or not. \r\n\r\n**Describe the solution you'd like**\r\nWe could show a warning when a dataset is loaded with `load_dataset('...',streaming=True)` when its lot streamable, e.g. if it is an archive. 
\r\n\r\n**Describe alternatives you've considered**\r\nAdd a new metadata tag for \"streaming\"\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2654\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2653","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2653\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2653\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2653\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2653","id":945102321,"node_id":"MDU6SXNzdWU5NDUxMDIzMjE=","number":2653,"title":"Add SD task for SUPERB","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new 
dataset"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/7","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/7","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/7\/labels","id":6931350,"node_id":"MDk6TWlsZXN0b25lNjkzMTM1MA==","number":7,"title":"1.11","description":"Next minor 
release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":2,"state":"closed","created_at":1625809740000,"updated_at":1630560843000,"due_on":1627628400000,"closed_at":1630560843000},"comments":["Note that this subset requires us to:\r\n\r\n* generate the LibriMix corpus from LibriSpeech\r\n* prepare the corpus for diarization\r\n\r\nAs suggested by @lhoestq we should perform these steps locally and add the prepared data to this public repo on the Hub: https:\/\/huggingface.co\/datasets\/superb\/superb-data\r\n\r\nThen we can use the URLs for the files to load the data in `superb`'s dataset loading script.\r\n\r\nFor consistency, I suggest we name the folders in `superb-data` in the same way as the configs in the dataset loading script - e.g. use `sd` for speech diarization in both places :)","@lewtun @lhoestq: \r\n\r\nI have already generated the LibriMix corpus and prepared the corpus for diarization. 
The output is 3 dirs (train, dev, test), each one containing 6 files: reco2dur rttm segments spk2utt utt2spk wav.scp\r\n\r\nNext steps:\r\n- Upload these files to the superb-data repo\r\n- Transcribe the corresponding s3prl processing of these files into our superb loading script\r\n\r\nNote that processing of these files is a bit more intricate than usual datasets: https:\/\/github.com\/s3prl\/s3prl\/blob\/master\/s3prl\/downstream\/diarization\/dataset.py#L233\r\n\r\n"],"created_at":1626335500000,"updated_at":1628096632000,"closed_at":1628096632000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"Include the SD (Speaker Diarization) task as described in the [SUPERB paper](https:\/\/arxiv.org\/abs\/2105.01051) and `s3prl` [instructions](https:\/\/github.com\/s3prl\/s3prl\/tree\/master\/s3prl\/downstream#sd-speaker-diarization).\r\n\r\nSteps:\r\n- [x] Generate the LibriMix corpus\r\n- [x] Prepare the corpus for diarization\r\n- [x] Upload these files to the superb-data repo\r\n- [x] Transcribe the corresponding s3prl processing of these files into our superb loading script\r\n- [ ] README: tags + description sections\r\n\r\nRelated to #2619.\r\n\r\ncc: @lewtun \r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2653\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2652","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2652\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2652\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2652\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2652","id":944865924,"node_id":"MDExOlB1bGxSZXF1ZXN0NjkwMjg0MTI4","number":2652,"title":"Fix logging 
docstring","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1626304798000,"updated_at":1626608466000,"closed_at":1626343051000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2652","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2652","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2652.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2652.patch"},"body":"Remove \"no tqdm bars\" from the docstring in the logging module to align it with the changes introduced in #2534.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2652\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2651","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2651\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2651\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2651\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2651","id":944796961,"node_id":"MDU6SXNzdWU5NDQ3OTY5NjE=","number":2651,"title":"Setting log level higher than warning does not suppress progress 
bar","user":{"login":"Isa-rentacs","id":1147443,"node_id":"MDQ6VXNlcjExNDc0NDM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1147443?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Isa-rentacs","html_url":"https:\/\/github.com\/Isa-rentacs","followers_url":"https:\/\/api.github.com\/users\/Isa-rentacs\/followers","following_url":"https:\/\/api.github.com\/users\/Isa-rentacs\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Isa-rentacs\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Isa-rentacs\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Isa-rentacs\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Isa-rentacs\/orgs","repos_url":"https:\/\/api.github.com\/users\/Isa-rentacs\/repos","events_url":"https:\/\/api.github.com\/users\/Isa-rentacs\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Isa-rentacs\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi,\r\n\r\nyou can suppress progress bars by patching logging as follows:\r\n```python\r\nimport datasets\r\nimport logging\r\ndatasets.logging.get_verbosity = lambda: logging.NOTSET\r\n# map call ...\r\n```","Thank you, it worked :)","See https:\/\/github.com\/huggingface\/datasets\/issues\/2528 for reference","Note also that you can disable the progress bar with\r\n\r\n```python\r\nfrom datasets.utils import disable_progress_bar\r\ndisable_progress_bar()\r\n```\r\n\r\nSee https:\/\/github.com\/huggingface\/datasets\/blob\/8814b393984c1c2e1800ba370de2a9f7c8644908\/src\/datasets\/utils\/tqdm_utils.py#L84"],"created_at":1626296811000,"updated_at":1627045390000,"closed_at":1626320495000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nI would like to disable progress bars for `.map` method (and other methods like `.filter` and `load_dataset` as well).\r\nAccording to #1627 one can suppress it by setting log level higher than `warning`, however doing so doesn't suppress it with version 1.9.0.\r\n\r\nI also tried to set `DATASETS_VERBOSITY` environment variable to `error` or `critical` but it also didn't work.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nimport datasets\r\n\r\nfrom datasets.utils.logging import set_verbosity_error\r\n\r\nset_verbosity_error()\r\n\r\ndef dummy_map(batch):\r\n return batch\r\n\r\ncommon_voice_train = datasets.load_dataset(\"common_voice\", \"de\", split=\"train\")\r\ncommon_voice_test = datasets.load_dataset(\"common_voice\", \"de\", split=\"test\")\r\n\r\ncommon_voice_train.map(dummy_map)\r\n```\r\n\r\n## Expected results\r\n- The progress bar for `.map` call won't be shown\r\n\r\n## Actual results\r\n- The progress bar for `.map` is still shown \r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.9.0\r\n- Platform: Linux-5.4.0-1045-aws-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.5\r\n- PyArrow version: 4.0.1\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2651\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2650","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2650\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2650\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2650\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2650","id":944672565,"node_id":"MDU6SXNzdWU5NDQ2NzI1NjU=","number":2650,"title":"[load_dataset] shard and parallelize the process","user":{"login":"stas00","id":10676103,"node_id":"MDQ6VXNlcjEwNjc2MTAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10676103?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stas00","html_url":"https:\/\/github.com\/stas00","followers_url":"https:\/\/api.github.com\/users\/stas00\/followers","following_url":"https:\/\/api.github.com\/users\/stas00\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stas00\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stas00\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stas00\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stas00\/orgs","repos_url":"https:\/\/api.github.com\/users\/stas00\/repos","events_url":"https:\/\/api.github.com\/users\/stas00\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stas00\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1626285898000,"updated_at":1626285916000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"- Some huge datasets take forever to build the first time. (e.g. 
oscar\/en) as it's done in a single cpu core.\r\n- If the build crashes, everything done up to that point gets lost\r\n\r\nRequest: Shard the build over multiple arrow files, which would enable:\r\n- much faster build by parallelizing the build process\r\n- if the process crashed, the completed arrow files don't need to be re-built again\r\n\r\nThank you!\r\n\r\n@lhoestq ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2650\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2649","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2649\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2649\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2649\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2649","id":944651229,"node_id":"MDU6SXNzdWU5NDQ2NTEyMjk=","number":2649,"title":"adding progress bar \/ ETA for `load_dataset`","user":{"login":"stas00","id":10676103,"node_id":"MDQ6VXNlcjEwNjc2MTAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10676103?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stas00","html_url":"https:\/\/github.com\/stas00","followers_url":"https:\/\/api.github.com\/users\/stas00\/followers","following_url":"https:\/\/api.github.com\/users\/stas00\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stas00\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stas00\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stas00\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stas00\/orgs","repos_url":"https:\/\/api.github.com\/users\/stas00\/repos","events_url":"https:\/\/api.github.com\/users\/stas00\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stas00\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1626284079000,"updated_at":1626284280000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Please consider:\r\n```\r\nDownloading and preparing dataset oscar\/unshuffled_deduplicated_en (download: 462.40 GiB, generated: 1.18 TiB, post-processed: Unknown size, total: 1.63 TiB) to cache\/oscar\/unshuffled_deduplicated_en\/1.0.0\/84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2...\r\nHF google storage unreachable. Downloading and preparing it from source\r\n```\r\nand no indication whatsoever of whether things work well or when it'll be done. It's important to have an estimated completion time for when doing slurm jobs since some instances have a cap on run-time.\r\n\r\nI think for this particular job it sat for 30min in total silence and then after 30min it started generating:\r\n```\r\n897850 examples [07:24, 10286.71 examples\/s]\r\n```\r\nwhich is already great!\r\n\r\nRequest: \r\n1. ETA - knowing how many hours to allocate for a slurm job\r\n2. 
progress bar - helps to know things are working and aren't stuck and where we are at.\r\n\r\nThank you!\r\n\r\n@lhoestq \r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2649\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2648","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2648\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2648\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2648\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2648","id":944484522,"node_id":"MDU6SXNzdWU5NDQ0ODQ1MjI=","number":2648,"title":"Add web_split dataset for Paraphase and Rephrase benchmark","user":{"login":"bhadreshpsavani","id":26653468,"node_id":"MDQ6VXNlcjI2NjUzNDY4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26653468?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhadreshpsavani","html_url":"https:\/\/github.com\/bhadreshpsavani","followers_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/followers","following_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/repos","events_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or 
request"}],"state":"open","locked":false,"assignee":{"login":"bhadreshpsavani","id":26653468,"node_id":"MDQ6VXNlcjI2NjUzNDY4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26653468?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhadreshpsavani","html_url":"https:\/\/github.com\/bhadreshpsavani","followers_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/followers","following_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/repos","events_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/received_events","type":"User","site_admin":false},"assignees":[{"login":"bhadreshpsavani","id":26653468,"node_id":"MDQ6VXNlcjI2NjUzNDY4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26653468?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhadreshpsavani","html_url":"https:\/\/github.com\/bhadreshpsavani","followers_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/followers","following_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/repos","events_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["#take"],"created_at":1626272676000,"updated_at":1626272772000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe:\r\nFor getting simple sentences from complex sentence there are dataset and task like wiki_split that is available in hugging face datasets. This web_split is a very similar dataset. 
There some research paper which states that by combining these two datasets we if we train the model it will yield better results on both tests data.\r\n\r\nThis dataset is made from web NLG data.\r\n\r\nAll the dataset related details are provided in the below repository\r\n\r\nGithub link: https:\/\/github.com\/shashiongithub\/Split-and-Rephrase\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2648\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2647","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2647\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2647\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2647\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2647","id":944424941,"node_id":"MDExOlB1bGxSZXF1ZXN0Njg5OTExMzky","number":2647,"title":"Fix anchor in README","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/6","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6\/labels","id":6836458,"node_id":"MDk6TWlsZXN0b25lNjgzNjQ1OA==","number":6,"title":"1.10","description":"Next minor 
release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":29,"state":"closed","created_at":1623178113000,"updated_at":1626881809000,"due_on":1628146800000,"closed_at":1626881809000},"comments":[],"created_at":1626268964000,"updated_at":1626608478000,"closed_at":1626331847000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2647","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2647","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2647.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2647.patch"},"body":"I forgot to push this fix in #2611, so I'm sending it now. ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2647\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2646","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2646\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2646\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2646\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2646","id":944379954,"node_id":"MDU6SXNzdWU5NDQzNzk5NTQ=","number":2646,"title":"downloading of yahoo_answers_topics dataset 
failed","user":{"login":"vikrant7k","id":66781249,"node_id":"MDQ6VXNlcjY2NzgxMjQ5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/66781249?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vikrant7k","html_url":"https:\/\/github.com\/vikrant7k","followers_url":"https:\/\/api.github.com\/users\/vikrant7k\/followers","following_url":"https:\/\/api.github.com\/users\/vikrant7k\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vikrant7k\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vikrant7k\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vikrant7k\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vikrant7k\/orgs","repos_url":"https:\/\/api.github.com\/users\/vikrant7k\/repos","events_url":"https:\/\/api.github.com\/users\/vikrant7k\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vikrant7k\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! I just tested and it worked fine today for me.\r\n\r\nI think this is because the dataset is stored on Google Drive which has a quota limit for the number of downloads per day, see this similar issue https:\/\/github.com\/huggingface\/datasets\/issues\/996 \r\n\r\nFeel free to try again today, now that the quota was reset"],"created_at":1626265865000,"updated_at":1626340516000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nI get an error datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files when I try to download the yahoo_answers_topics dataset\r\n\r\n## Steps to reproduce the bug\r\n self.dataset = load_dataset(\r\n 'yahoo_answers_topics', cache_dir=self.config['yahoo_cache_dir'], split='train[:90%]')\r\n# Sample code to reproduce the bug\r\n self.dataset = load_dataset(\r\n 'yahoo_answers_topics', cache_dir=self.config['yahoo_cache_dir'], split='train[:90%]')\r\n\r\n## Expected results\r\nA clear and concise description of the expected results.\r\n\r\n\r\n## Actual results\r\nSpecify the actual results or traceback.\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2646\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2645","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2645\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2645\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2645\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2645","id":944374284,"node_id":"MDU6SXNzdWU5NDQzNzQyODQ=","number":2645,"title":"load_dataset processing failed with OS error after downloading a 
dataset","user":{"login":"fake-warrior8","id":40395156,"node_id":"MDQ6VXNlcjQwMzk1MTU2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/40395156?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/fake-warrior8","html_url":"https:\/\/github.com\/fake-warrior8","followers_url":"https:\/\/api.github.com\/users\/fake-warrior8\/followers","following_url":"https:\/\/api.github.com\/users\/fake-warrior8\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/fake-warrior8\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/fake-warrior8\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/fake-warrior8\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/fake-warrior8\/orgs","repos_url":"https:\/\/api.github.com\/users\/fake-warrior8\/repos","events_url":"https:\/\/api.github.com\/users\/fake-warrior8\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/fake-warrior8\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! It looks like an issue with pytorch.\r\n\r\nCould you try to run `import torch` and see if it raises an error ?","> Hi ! It looks like an issue with pytorch.\r\n> \r\n> Could you try to run `import torch` and see if it raises an error ?\r\n\r\nIt works. Thank you!"],"created_at":1626265433000,"updated_at":1626341642000,"closed_at":1626341642000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nAfter downloading a dataset like opus100, there is a bug that \r\nOSError: Cannot find data file.\r\nOriginal error:\r\ndlopen: cannot load any more object with static TLS\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\nthis_dataset = load_dataset('opus100', 'af-en')\r\n```\r\n\r\n## Expected results\r\nthere is no error when running load_dataset.\r\n\r\n## Actual results\r\nSpecify the actual results or traceback.\r\n\r\nTraceback (most recent call last):\r\n File \"\/home\/anaconda3\/lib\/python3.6\/site-packages\/datasets\/builder.py\", line 652, in _download_and_prep\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/home\/anaconda3\/lib\/python3.6\/site-packages\/datasets\/builder.py\", line 989, in _prepare_split\r\n example = self.info.features.encode_example(record)\r\n File \"\/home\/anaconda3\/lib\/python3.6\/site-packages\/datasets\/features.py\", line 952, in encode_example\r\n example = cast_to_python_objects(example)\r\n File \"\/home\/anaconda3\/lib\/python3.6\/site-packages\/datasets\/features.py\", line 219, in cast_to_python_ob\r\n return _cast_to_python_objects(obj)[0]\r\n File \"\/home\/anaconda3\/lib\/python3.6\/site-packages\/datasets\/features.py\", line 165, in _cast_to_python_o\r\n import torch\r\n File \"\/home\/anaconda3\/lib\/python3.6\/site-packages\/torch\/__init__.py\", line 188, in \r\n _load_global_deps()\r\n File \"\/home\/anaconda3\/lib\/python3.6\/site-packages\/torch\/__init__.py\", line 141, in _load_global_deps\r\n ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)\r\n File \"\/home\/anaconda3\/lib\/python3.6\/ctypes\/__init__.py\", line 348, in __init__\r\n self._handle = _dlopen(self._name, 
mode)\r\nOSError: dlopen: cannot load any more object with static TLS\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"download_hub_opus100.py\", line 9, in \r\n this_dataset = load_dataset('opus100', language_pair)\r\n File \"\/home\/anaconda3\/lib\/python3.6\/site-packages\/datasets\/load.py\", line 748, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"\/home\/anaconda3\/lib\/python3.6\/site-packages\/datasets\/builder.py\", line 575, in download_and_prepa\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/home\/anaconda3\/lib\/python3.6\/site-packages\/datasets\/builder.py\", line 658, in _download_and_prep\r\n + str(e)\r\nOSError: Cannot find data file.\r\nOriginal error:\r\ndlopen: cannot load any more object with static TLS\r\n\r\n\r\n## Environment info\r\n- `datasets` version: 1.8.0\r\n- Platform: Linux-3.13.0-32-generic-x86_64-with-debian-jessie-sid\r\n- Python version: 3.6.6\r\n- PyArrow version: 3.0.0\r\n\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2645\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2644","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2644\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2644\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2644\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2644","id":944254748,"node_id":"MDU6SXNzdWU5NDQyNTQ3NDg=","number":2644,"title":"Batched `map` not allowed to return 0 items","user":{"login":"pcuenca","id":1177582,"node_id":"MDQ6VXNlcjExNzc1ODI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1177582?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/pcuenca","html_url":"https:\/\/github.com\/pcuenca","followers_url":"https:\/\/api.github.com\/users\/pcuenca\/followers","following_url":"https:\/\/api.github.com\/users\/pcuenca\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/pcuenca\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/pcuenca\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/pcuenca\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/pcuenca\/orgs","repos_url":"https:\/\/api.github.com\/users\/pcuenca\/repos","events_url":"https:\/\/api.github.com\/users\/pcuenca\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/pcuenca\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Thanks for reporting. Indeed it looks like type inference makes it fail. We should probably just ignore this step until a non-empty batch is passed.","Sounds good! Do you want me to propose a PR? 
I'm quite busy right now, but if it's not too urgent I could take a look next week.","Sure if you're interested feel free to open a PR :)\r\n\r\nYou can also ping me anytime if you have questions or if I can help !","Sorry to ping you, @lhoestq, did you have a chance to take a look at the proposed PR? Thank you!","Yes and it's all good, thank you :)\r\n\r\nFeel free to close this issue if it's good for you","Everything's good, thanks!"],"created_at":1626256699000,"updated_at":1627311315000,"closed_at":1627311315000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nI'm trying to use `map` to filter a large dataset by selecting rows that match an expensive condition (files referenced by one of the columns need to exist in the filesystem, so we have to `stat` them). According to [the documentation](https:\/\/huggingface.co\/docs\/datasets\/processing.html#augmenting-the-dataset), `a batch mapped function can take as input a batch of size N and return a batch of size M where M can be greater or less than N and can even be zero`.\r\n\r\nHowever, when the returned batch has a size of zero (neither item in the batch fulfilled the condition), we get an `index out of bounds` error. I think that `arrow_writer.py` is [trying to infer the returned types using the first element returned](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/arrow_writer.py#L100), but no elements were returned in this case.\r\n\r\nFor this error to happen, I'm returning a dictionary that contains empty lists for the keys I want to keep, see below. If I return an empty dictionary instead (no keys), then a different error eventually occurs.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\ndef select_rows(examples):\r\n # `key` is a column name that exists in the original dataset\r\n # The following line simulates no matches found, so we return an empty batch\r\n result = {'key': []}\r\n return result\r\n\r\nfiltered_dataset = dataset.map(\r\n select_rows,\r\n remove_columns = dataset.column_names,\r\n batched = True,\r\n num_proc = 1,\r\n desc = \"Selecting rows with images that exist\"\r\n)\r\n```\r\n\r\nThe code above immediately triggers the exception. If we use the following instead:\r\n\r\n```python\r\ndef select_rows(examples):\r\n # `key` is a column name that exists in the original dataset\r\n result = {'key': []} # or defaultdict or whatever\r\n \r\n # code to check for condition and append elements to result\r\n # some_items_found will be set to True if there were any matching elements in the batch\r\n \r\n return result if some_items_found else {}\r\n```\r\n\r\nThen it _seems_ to work, but it eventually fails with some sort of schema error. I believe it may happen when an empty batch is followed by a non-empty one, but haven't set up a test to verify it.\r\n\r\nIn my opinion, returning a dictionary with empty lists and valid column names should be accepted as a valid result with zero items.\r\n\r\n## Expected results\r\nThe dataset would be filtered and only the matching fields would be returned.\r\n\r\n## Actual results\r\nAn exception is encountered, as described. 
Using a workaround makes it fail further along the line.\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.9.1.dev0\r\n- Platform: Linux-5.4.0-53-generic-x86_64-with-glibc2.17\r\n- Python version: 3.8.10\r\n- PyArrow version: 4.0.1\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2644\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2643","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2643\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2643\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2643\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2643","id":944220273,"node_id":"MDU6SXNzdWU5NDQyMjAyNzM=","number":2643,"title":"Enum used in map functions will raise a RecursionError with dill.","user":{"login":"jorgeecardona","id":100702,"node_id":"MDQ6VXNlcjEwMDcwMg==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/100702?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jorgeecardona","html_url":"https:\/\/github.com\/jorgeecardona","followers_url":"https:\/\/api.github.com\/users\/jorgeecardona\/followers","following_url":"https:\/\/api.github.com\/users\/jorgeecardona\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jorgeecardona\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jorgeecardona\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jorgeecardona\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jorgeecardona\/orgs","repos_url":"https:\/\/api.github.com\/users\/jorgeecardona\/repos","events_url":"https:\/\/api.github.com\/users\/jorgeecardona\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jorgeecardona\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I'm running into this as well. (Thank you so much for reporting @jorgeecardona \u2014 was staring at this massive stack trace and unsure what exactly was wrong!)","Hi ! Thanks for reporting :)\r\n\r\nUntil this is fixed on `dill`'s side, we could implement a custom saving in our Pickler indefined in utils.py_utils.py\r\nThere is already a suggestion in this message about how to do it:\r\nhttps:\/\/github.com\/uqfoundation\/dill\/issues\/250#issuecomment-852566284\r\n\r\nLet me know if such a workaround could help, and feel free to open a PR if you want to contribute !"],"created_at":1626254168000,"updated_at":1629739417000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\nEnums used in functions pass to `map` will fail at pickling with a maximum recursion exception as described here: https:\/\/github.com\/uqfoundation\/dill\/issues\/250#issuecomment-852566284\r\n\r\nIn my particular case, I use an enum to define an argument with fixed options using the `TraininigArguments` dataclass as base class and the `HfArgumentParser`. 
In the same file I use a `ds.map` that tries to pickle the content of the module including the definition of the enum that runs into the dill bug described above.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\nfrom enum import Enum\r\n\r\nclass A(Enum):\r\n a = 'a'\r\n\r\ndef main():\r\n a = A.a\r\n \r\n def f(x):\r\n return {} if a == a.a else x\r\n \r\n ds = load_dataset('cnn_dailymail', '3.0.0')['test']\r\n ds = ds.map(f, num_proc=15)\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n\r\n## Expected results\r\nThe known problem with dill could be prevented as explained in the link above (workaround.) Since `HFArgumentParser` nicely uses the enum class for choices it makes sense to also deal with this bug under the hood.\r\n\r\n## Actual results\r\n\r\n```python\r\n File \"\/home\/xxxx\/miniconda3\/lib\/python3.8\/site-packages\/dill\/_dill.py\", line 1373, in save_type\r\n pickler.save_reduce(_create_type, (type(obj), obj.__name__,\r\n File \"\/home\/xxxx\/miniconda3\/lib\/python3.8\/pickle.py\", line 690, in save_reduce\r\n save(args)\r\n File \"\/home\/xxxx\/miniconda3\/lib\/python3.8\/pickle.py\", line 558, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/home\/xxxx\/miniconda3\/lib\/python3.8\/pickle.py\", line 899, in save_tuple\r\n save(element)\r\n File \"\/home\/xxxx\/miniconda3\/lib\/python3.8\/pickle.py\", line 534, in save\r\n self.framer.commit_frame()\r\n File \"\/home\/xxxx\/miniconda3\/lib\/python3.8\/pickle.py\", line 220, in commit_frame\r\n if f.tell() >= self._FRAME_SIZE_TARGET or force:\r\nRecursionError: maximum recursion depth exceeded while calling a Python object\r\n```\r\n\r\n## Environment info\r\n- `datasets` version: 1.8.0\r\n- Platform: Linux-5.9.0-4-amd64-x86_64-with-glibc2.10\r\n- Python version: 3.8.5\r\n- PyArrow version: 3.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2643\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2642","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2642\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2642\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2642\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2642","id":944175697,"node_id":"MDU6SXNzdWU5NDQxNzU2OTc=","number":2642,"title":"Support multi-worker with streaming dataset 
(IterableDataset).","user":{"login":"cccntu","id":31893406,"node_id":"MDQ6VXNlcjMxODkzNDA2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/31893406?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cccntu","html_url":"https:\/\/github.com\/cccntu","followers_url":"https:\/\/api.github.com\/users\/cccntu\/followers","following_url":"https:\/\/api.github.com\/users\/cccntu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cccntu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cccntu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cccntu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cccntu\/orgs","repos_url":"https:\/\/api.github.com\/users\/cccntu\/repos","events_url":"https:\/\/api.github.com\/users\/cccntu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cccntu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! This is a great idea :)\r\nI think we could have something similar to what we have in `datasets.Dataset.map`, i.e. a `num_proc` parameter that tells how many processes to spawn to parallelize the data processing. \r\n\r\nRegarding AUTOTUNE, this could be a nice feature as well, we could see how to add it in a second step"],"created_at":1626250978000,"updated_at":1626341854000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"**Is your feature request related to a problem? Please describe.**\r\nThe current `.map` does not support multi-process, CPU can become bottleneck if the pre-processing is complex (e.g. t5 span masking).\r\n\r\n**Describe the solution you'd like**\r\nIdeally `.map` should support multi-worker like tfds, with `AUTOTUNE`.\r\n\r\n**Describe alternatives you've considered**\r\nA simpler solution is to shard the dataset and process it in parallel with pytorch dataloader. 
The shard does not need to be of equal size.\r\n* https:\/\/pytorch.org\/docs\/stable\/data.html#torch.utils.data.IterableDataset\r\n\r\n**Additional context**\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2642\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2641","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2641\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2641\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2641\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2641","id":943838085,"node_id":"MDU6SXNzdWU5NDM4MzgwODU=","number":2641,"title":"load_dataset(\"financial_phrasebank\") NonMatchingChecksumError","user":{"login":"courtmckay","id":13956255,"node_id":"MDQ6VXNlcjEzOTU2MjU1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13956255?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/courtmckay","html_url":"https:\/\/github.com\/courtmckay","followers_url":"https:\/\/api.github.com\/users\/courtmckay\/followers","following_url":"https:\/\/api.github.com\/users\/courtmckay\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/courtmckay\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/courtmckay\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/courtmckay\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/courtmckay\/orgs","repos_url":"https:\/\/api.github.com\/users\/courtmckay\/repos","events_url":"https:\/\/api.github.com\/users\/courtmckay\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/courtmckay\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! It's probably because this dataset is stored on google drive and it has a per day quota limit. It should work if you retry, I was able to initiate the download.\r\n\r\nSimilar issue [here](https:\/\/github.com\/huggingface\/datasets\/issues\/2646)","Hi ! Loading the dataset works on my side as well.\r\nFeel free to try again and let us know if it works for you know","Thank you! I've been trying periodically for the past month, and no luck yet with this particular dataset. 
Just tried again and still hitting the checksum error.\r\n\r\nCode:\r\n\r\n`dataset = load_dataset(\"financial_phrasebank\", \"sentences_allagree\") `\r\n\r\nTraceback:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nNonMatchingChecksumError Traceback (most recent call last)\r\n in \r\n----> 1 dataset = load_dataset(\"financial_phrasebank\", \"sentences_allagree\")\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)\r\n 859 ignore_verifications=ignore_verifications,\r\n 860 try_from_hf_gcs=try_from_hf_gcs,\r\n--> 861 use_auth_token=use_auth_token,\r\n 862 )\r\n 863 \r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)\r\n 582 if not downloaded_from_gcs:\r\n 583 self._download_and_prepare(\r\n--> 584 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 585 )\r\n 586 # Sync info\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 642 if verify_infos:\r\n 643 verify_checksums(\r\n--> 644 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), \"dataset source files\"\r\n 645 )\r\n 646 \r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/utils\/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)\r\n 38 if len(bad_urls) > 0:\r\n 39 error_msg = \"Checksums didn't match\" + for_verification_name + \":\\n\"\r\n---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\n 41 logger.info(\"All the checksums matched successfully\" + for_verification_name)\r\n 42 \r\n\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https:\/\/www.researchgate.net\/profile\/Pekka_Malo\/publication\/251231364_FinancialPhraseBank-v10\/data\/0c96051eee4fb1d56e000000\/FinancialPhraseBank-v10.zip']\r\n```"],"created_at":1626211309000,"updated_at":1626701170000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nAttempting to download the financial_phrasebank dataset results in a NonMatchingChecksumError\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"financial_phrasebank\", 'sentences_allagree')\r\n```\r\n\r\n## Expected results\r\nI expect to see the financial_phrasebank dataset downloaded successfully\r\n\r\n## Actual results\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https:\/\/www.researchgate.net\/profile\/Pekka_Malo\/publication\/251231364_FinancialPhraseBank-v10\/data\/0c96051eee4fb1d56e000000\/FinancialPhraseBank-v10.zip']\r\n\r\n## Environment info\r\n- `datasets` version: 1.9.0\r\n- Platform: Linux-4.14.232-177.418.amzn2.x86_64-x86_64-with-debian-10.6\r\n- Python version: 3.7.10\r\n- PyArrow version: 4.0.1\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2641\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2640","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2640\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2640\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2640\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2640","id":943591055,"node_id":"MDExOlB1bGxSZXF1ZXN0Njg5MjAxMDkw","number":2640,"title":"Fix docstrings","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/6","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6\/labels","id":6836458,"node_id":"MDk6TWlsZXN0b25lNjgzNjQ1OA==","number":6,"title":"1.10","description":"Next minor 
release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":29,"state":"closed","created_at":1623178113000,"updated_at":1626881809000,"due_on":1628146800000,"closed_at":1626881809000},"comments":[],"created_at":1626192554000,"updated_at":1626331861000,"closed_at":1626329172000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2640","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2640","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2640.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2640.patch"},"body":"Fix rendering of some docstrings.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2640\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2639","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2639\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2639\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2639\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2639","id":943527463,"node_id":"MDExOlB1bGxSZXF1ZXN0Njg5MTQ3NDE5","number":2639,"title":"Refactor patching to specific 
submodule","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1626188925000,"updated_at":1626195169000,"closed_at":1626195169000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2639","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2639","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2639.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2639.patch"},"body":"Minor reorganization of the code, so that additional patching functions (not related to streaming) might be created.\r\n\r\nIn relation with the initial approach followed in #2631.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2639\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2638","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2638\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2638\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2638\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2638","id":943484913,"node_id":"MDExOlB1bGxSZXF1ZXN0Njg5MTA5NTg1","number":2638,"title":"Streaming for the Json 
loader","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["A note is that I think we should add a few indicator of status (as mentioned by @stas00 in #2649), probably at the (1) downloading, (2) extracting and (3) reading steps. In particular when loading many very large files it's interesting to know a bit where we are in the process.","I tested locally, and the builtin `json` loader is 4x slower than `pyarrow.json`. Thanks for the comment @albertvillanova !\r\n\r\nTherefore I switched back to using `pyarrow.json`, but only on the batch that is read. This way we don't have to deal with its `block_size`, and it only loads in memory one batch at a time."],"created_at":1626187026000,"updated_at":1626451172000,"closed_at":1626451171000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2638","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2638","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2638.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2638.patch"},"body":"It was not using `open` in the builder. 
Therefore `pyarrow.json.read_json` was downloading the full file to start yielding rows.\r\n\r\nMoreover, it appeared that `pyarrow.json.read_json` was not really suited for streaming as it was downloading too much data and failing if `block_size` was not properly configured (related to #2573).\r\n\r\nSo I switched to using `open` which is extended to support reading from remote file progressively, and I removed the pyarrow json reader which was not practical.\r\nInstead, I'm using the classical `json.loads` from the standard library.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2638\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2637","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2637\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2637\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2637\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2637","id":943290736,"node_id":"MDU6SXNzdWU5NDMyOTA3MzY=","number":2637,"title":"Add the CIDEr metric?","user":{"login":"zuujhyt","id":75845952,"node_id":"MDQ6VXNlcjc1ODQ1OTUy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/75845952?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/zuujhyt","html_url":"https:\/\/github.com\/zuujhyt","followers_url":"https:\/\/api.github.com\/users\/zuujhyt\/followers","following_url":"https:\/\/api.github.com\/users\/zuujhyt\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/zuujhyt\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/zuujhyt\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/zuujhyt\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/zuujhyt\/orgs","repos_url":"https:\/\/api.github.com\/users\/zuujhyt\/repos","events_url":"https:\/\/api.github.com\/users\/zuujhyt\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/zuujhyt\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1626178971000,"updated_at":1626178971000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\nI find the api in https:\/\/huggingface.co\/metrics quite useful.\r\nI am playing around with video\/image captioning task, where CIDEr is a popular metric.\r\nDo you plan to add this into the HF ```datasets``` library?\r\nThanks.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2637\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2636","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2636\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2636\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2636\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2636","id":943044514,"node_id":"MDExOlB1bGxSZXF1ZXN0Njg4NzEyMTY4","number":2636,"title":"Streaming for the Pandas loader","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1626167901000,"updated_at":1626187044000,"closed_at":1626187043000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2636","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2636","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2636.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2636.patch"},"body":"It was not using open in the builder. 
Therefore pd.read_pickle could fail when streaming from a private repo for example.\r\n\r\nIndeed, when streaming, open is extended to support reading from remote files and handles authentication to the HF Hub","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2636\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2635","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2635\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2635\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2635\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2635","id":943030999,"node_id":"MDExOlB1bGxSZXF1ZXN0Njg4Njk5OTM5","number":2635,"title":"Streaming for the CSV loader","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1626167338000,"updated_at":1626189578000,"closed_at":1626189577000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2635","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2635","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2635.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2635.patch"},"body":"It was not using `open` in the builder. 
Therefore `pd.read_csv` was downloading the full file to start yielding rows.\r\n\r\nIndeed, when streaming, `open` is extended to support reading from remote file progressively.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2635\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2634","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2634\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2634\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2634\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2634","id":942805621,"node_id":"MDExOlB1bGxSZXF1ZXN0Njg4NDk2Mzc2","number":2634,"title":"Inject ASR template for lj_speech dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/6","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6\/labels","id":6836458,"node_id":"MDk6TWlsZXN0b25lNjgzNjQ1OA==","number":6,"title":"1.10","description":"Next minor 
release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":29,"state":"closed","created_at":1623178113000,"updated_at":1626881809000,"due_on":1628146800000,"closed_at":1626881809000},"comments":[],"created_at":1626156294000,"updated_at":1626167109000,"closed_at":1626167109000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2634","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2634","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2634.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2634.patch"},"body":"Related to: #2565, #2633.\r\n\r\ncc: @lewtun ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2634\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2633","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2633\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2633\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2633\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2633","id":942396414,"node_id":"MDExOlB1bGxSZXF1ZXN0Njg4MTMwOTA5","number":2633,"title":"Update ASR 
tags","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/6","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6\/labels","id":6836458,"node_id":"MDk6TWlsZXN0b25lNjgzNjQ1OA==","number":6,"title":"1.10","description":"Next minor release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":29,"state":"closed","created_at":1623178113000,"updated_at":1626881809000,"due_on":1628146800000,"closed_at":1626881809000},"comments":[],"created_at":1626119911000,"updated_at":1626155126000,"closed_at":1626155113000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2633","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2633","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2633.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2633.patch"},"body":"This PR updates the ASR tags of the 5 datasets added in #2565 following the change of task categories in #2620 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2633\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2632","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2632\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2632\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2632\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2632","id":942293727,"node_id":"MDExOlB1bGxSZXF1ZXN0Njg4MDQyMjcw","number":2632,"title":"add image-classification task template","user":{"login":"nateraw","id":32437151,"node_id":"MDQ6VXNlcjMyNDM3MTUx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32437151?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nateraw","html_url":"https:\/\/github.com\/nateraw","followers_url":"https:\/\/api.github.com\/users\/nateraw\/followers","following_url":"https:\/\/api.github.com\/users\/nateraw\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nateraw\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nateraw\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nateraw\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nateraw\/orgs","repos_url":"https:\/\/api.github.com\/users\/nateraw\/repos","events_url":"https:\/\/api.github.com\/users\/nateraw\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nateraw\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Awesome!","Thanks for adding a new task template - great work @nateraw \ud83d\ude80 !"],"created_at":1626111663000,"updated_at":1626191068000,"closed_at":1626190096000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2632","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2632","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2632.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2632.patch"},"body":"Snippet below is the tl;dr, but you can try it out directly here:\r\n\r\n[![Open In Collab](https:\/\/colab.research.google.com\/assets\/colab-badge.svg)](https:\/\/colab.research.google.com\/gist\/nateraw\/005c025d41f0e48ae3d4ee61c0f20b70\/image-classification-task-template-demo.ipynb)\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('nateraw\/image-folder', data_files='PetImages\/')\r\n# DatasetDict({\r\n# train: Dataset({\r\n# features: ['file', 'labels'],\r\n# num_rows: 23410\r\n# })\r\n# })\r\n\r\nds = ds.prepare_for_task('image-classification')\r\n# DatasetDict({\r\n# train: Dataset({\r\n# features: ['image_file_path', 'labels'],\r\n# num_rows: 23410\r\n# })\r\n# })\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2632\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2631","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2631\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2631\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2631\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2631","id":942242271,"node_id":"MDExOlB1bGxSZXF1ZXN0Njg3OTk3MzM2","number":2631,"title":"Delete extracted files when loading dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Sure @stas00, it is still a draft pull request. :)","Yes, I noticed it after reviewing - my apologies.","The problem with this approach is that it also deletes the downloaded files (if they need not be extracted). \ud83d\ude1f ","> The problem with this approach is that it also deletes the downloaded files (if they need not be extracted). worried\r\n\r\nRight! These probably should not be deleted by default, but having an option for those users who are tight on disc space?","> Right! These probably should not be deleted by default, but having an option for those users who are tight on disc space?\r\n\r\nI propose leaving that for another PR, and leave this one handling only with \"extracted\" files. Is it OK for you? :) ","Awesome thanks !\r\nI just have one question: what about image\/audio datasets for which we store the path to the extracted file on the arrow data ?\r\nIn this case the default should be to keep the extracted files.\r\n\r\nSo for now I would just make `keep_extracted=True` by default until we have a way to separate extracted files that can be deleted and extracted files that are actual resources of the dataset.","@lhoestq, current implementation only deletes extracted \"files\", not extracted \"directories\", as it uses: `os.remove(path)`. I'm going to add a filter on files, so that this line does not throw an exception when passed a directory.\r\n\r\nFor audio datasets, the audio files are inside the extracted \"directory\", so they are not deleted.","I'm still more in favor of having `keep_extracted=True` by default:\r\n- When working with a dataset, you call `load_dataset` many times. By default we want to keep objects extracted to not extract them over and over again (it can take a long time). 
Then once you know what you're doing and you want to optimize disk space, you can do `keep_extracted=False`. Deleting the extracted files by default is a regression that can lead to slow downs for people calling `load_dataset` many times, which is common when experimenting\r\n- This behavior doesn't sound natural as a default behavior. In the rest of the library, things are cached and not removed unless you explicitly say do (`map` caching for example). Moreover the function in the download manager is called `download_and_extract`, not `download_and_extract_and_remove_extracted_files`\r\n\r\nLet me know what you think !","I think the main issue is that after doing some work users typically move on to other datasets and the amount of disc space used keeps on growing. So your logic is very sound and perhaps what's really needed is a cleansweep function that can go through **all** datasets and clean them up to the desired degree:\r\n\r\n- delete all extracted files\r\n- delete all sources\r\n- delete all caches\r\n- delete all caches that haven't been accessed in 6 months\r\n- delete completely old datasets that haven't been accessed in 6 months\r\n- more?\r\n\r\nSo a user can launch a little application, choose what they want to clean up and voila they have just freed up a huge amount of disc space. Makes me think of Ubuntu Tweak's Janitor app - very useful.\r\n\r\nAt the moment, this process of linting is very daunting and error-prone, especially due to all those dirs\/files with hash names.","@stas00 I've had the same idea. Instead of the full-fledged app, a simpler approach would be to add a new command to the CLI.","oh, CLI would be perfect. I didn't mean to request a GUI-one specifically, was just using it as an example.\r\n\r\nOne could even do a crontab to delete old datasets that haven't been accesses in X months.","@lhoestq I totally agree with you. I'm addressing that change.\r\n\r\n@stas00, @mariosasko, that could eventually be addressed in another pull request. 
The objective of this PR is:\r\n- add an option to pass to `load_dataset`, so that extracted files are deleted\r\n- do this deletion file per file, once the file has been already used to generate the cache Arrow file","I also like the idea of having a CLI tool to help users clean their cache and save disk space, good idea !"],"created_at":1626107973000,"updated_at":1626685699000,"closed_at":1626685699000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2631","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2631","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2631.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2631.patch"},"body":"Close #2481, close #2604, close #2591.\r\n\r\ncc: @stas00, @thomwolf, @BirgerMoell ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2631\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2630","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2630\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2630\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2630\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2630","id":942102956,"node_id":"MDU6SXNzdWU5NDIxMDI5NTY=","number":2630,"title":"Progress bars are not properly rendered in Jupyter notebook","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["To add my experience when trying to debug this issue:\r\n\r\nSeems like previously the workaround given [here](https:\/\/github.com\/tqdm\/tqdm\/issues\/485#issuecomment-473338308) worked around this issue. But with the latest version of jupyter\/tqdm I still get terminal warnings that IPython tried to send a message from a forked process.","Hi @mludv, thanks for the hint!!! :) \r\n\r\nWe will definitely take it into account to try to fix this issue... 
It seems somehow related to `multiprocessing` and `tqdm`..."],"created_at":1626098833000,"updated_at":1626160832000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nThe progress bars are not Jupyter widgets; regular progress bars appear (like in a terminal).\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nds.map(tokenize, num_proc=10)\r\n```\r\n\r\n## Expected results\r\nJupyter widgets displaying the progress bars.\r\n\r\n## Actual results\r\nSimple plane progress bars.\r\n\r\ncc: Reported by @thomwolf ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2630\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2629","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2629\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2629\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2629\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2629","id":941819205,"node_id":"MDU6SXNzdWU5NDE4MTkyMDU=","number":2629,"title":"Load datasets from the Hub without requiring a dataset script","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/a
pi.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["This is so cool, let us know if we can help with anything on the hub side (@Pierrci @elishowk) \ud83c\udf89 "],"created_at":1626079517000,"updated_at":1629901088000,"closed_at":1629901088000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"As a user I would like to be able to upload my csv\/json\/text\/parquet\/etc. files in a dataset repository on the Hugging Face Hub and be able to load this dataset with `load_dataset` without having to implement a dataset script.\r\n\r\nMoreover I would like to be able to specify which file goes into which split using the `data_files` argument.\r\n\r\nThis feature should be compatible with private repositories and dataset streaming.\r\n\r\nThis can be implemented by checking the extension of the files in the dataset repository and then by using the right dataset builder that is already packaged in the library (csv\/json\/text\/parquet\/etc.)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2629\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2628","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2628\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2628\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2628\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2628","id":941676404,"node_id":"MDExOlB1bGxSZXF1ZXN0Njg3NTE0NzQz","number":2628,"title":"Use ETag of remote data 
files","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/6","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6\/labels","id":6836458,"node_id":"MDk6TWlsZXN0b25lNjgzNjQ1OA==","number":6,"title":"1.10","description":"Next minor release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":29,"state":"closed","created_at":1623178113000,"updated_at":1626881809000,"due_on":1628146800000,"closed_at":1626881809000},"comments":[],"created_at":1626066610000,"updated_at":1626098914000,"closed_at":1626079207000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2628","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2628","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2628.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2628.patch"},"body":"Use ETag of remote data files to create config ID.\r\n\r\nRelated to #2616.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2628\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2627","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2627\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2627\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2627\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2627","id":941503349,"node_id":"MDExOlB1bGxSZXF1ZXN0Njg3MzczMDg1","number":2627,"title":"Minor fix tests with Windows paths","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/6","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6\/labels","id":6836458,"node_id":"MDk6TWlsZXN0b25lNjgzNjQ1OA==","number":6,"title":"1.10","description":"Next minor 
release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":29,"state":"closed","created_at":1623178113000,"updated_at":1626881809000,"due_on":1628146800000,"closed_at":1626881809000},"comments":[],"created_at":1626026148000,"updated_at":1626098927000,"closed_at":1626078890000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2627","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2627","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2627.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2627.patch"},"body":"Minor fix tests with Windows paths.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2627\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2626","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2626\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2626\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2626\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2626","id":941497830,"node_id":"MDExOlB1bGxSZXF1ZXN0Njg3MzY4OTMz","number":2626,"title":"Use correct logger in 
metrics.py","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/6","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6\/labels","id":6836458,"node_id":"MDk6TWlsZXN0b25lNjgzNjQ1OA==","number":6,"title":"1.10","description":"Next minor release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":29,"state":"closed","created_at":1623178113000,"updated_at":1626881809000,"due_on":1628146800000,"closed_at":1626881809000},"comments":[],"created_at":1626024150000,"updated_at":1626098934000,"closed_at":1626069269000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2626","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2626","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2626.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2626.patch"},"body":"Fixes #2624 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2626\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2625","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2625\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2625\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2625\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2625","id":941439922,"node_id":"MDU6SXNzdWU5NDE0Mzk5MjI=","number":2625,"title":"\u269b\ufe0f\ud83d\ude07\u2699\ufe0f\ud83d\udd11","user":{"login":"hustlen0mics","id":50596661,"node_id":"MDQ6VXNlcjUwNTk2NjYx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/50596661?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hustlen0mics","html_url":"https:\/\/github.com\/hustlen0mics","followers_url":"https:\/\/api.github.com\/users\/hustlen0mics\/followers","following_url":"https:\/\/api.github.com\/users\/hustlen0mics\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hustlen0mics\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hustlen0mics\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hustlen0mics\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hustlen0mics\/orgs","repos_url":"https:\/\/api.github.com\/users\/hustlen0mics\/repos","events_url":"https:\/\/api.github.com\/users\/hustlen0mics\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hustlen0mics\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1626005674000,"updated_at":1626069359000,"closed_at":1626069359000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2625\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2624","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2624\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2624\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2624\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2624","id":941318247,"node_id":"MDU6SXNzdWU5NDEzMTgyNDc=","number":2624,"title":"can't set verbosity for 
`metric.py`","user":{"login":"thomas-happify","id":66082334,"node_id":"MDQ6VXNlcjY2MDgyMzM0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/66082334?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomas-happify","html_url":"https:\/\/github.com\/thomas-happify","followers_url":"https:\/\/api.github.com\/users\/thomas-happify\/followers","following_url":"https:\/\/api.github.com\/users\/thomas-happify\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomas-happify\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomas-happify\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomas-happify\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomas-happify\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomas-happify\/repos","events_url":"https:\/\/api.github.com\/users\/thomas-happify\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomas-happify\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks @thomas-happify for reporting and thanks @mariosasko for the fix."],"created_at":1625948625000,"updated_at":1626069269000,"closed_at":1626069269000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n```\r\n[2021-07-10 20:13:11,528][datasets.utils.filelock][INFO] - Lock 139705371374976 acquired on \/root\/.cache\/huggingface\/metrics\/seqeval\/default\/default_experiment-1-0.arrow.lock\r\n[2021-07-10 20:13:11,529][datasets.arrow_writer][INFO] - Done writing 32 examples in 6100 bytes \/root\/.cache\/huggingface\/metrics\/seqeval\/default\/default_experiment-1-0.arrow.\r\n[2021-07-10 20:13:11,531][datasets.arrow_dataset][INFO] - Set __getitem__(key) output type to python objects for no columns (when key is int or slice) and don't output other (un-formatted) columns.\r\n[2021-07-10 20:13:11,543][\/conda\/envs\/myenv\/lib\/python3.8\/site-packages\/datasets\/metric.py][INFO] - Removing \/root\/.cache\/huggingface\/metrics\/seqeval\/default\/default_experiment-1-0.arrow\r\n```\r\nAs you can see, `datasets` logging come from different places. \r\n`filelock`, `arrow_writer` & `arrow_dataset` comes from `datasets.*` which are expected \r\nHowever, `metric.py` logging comes from `\/conda\/envs\/myenv\/lib\/python3.8\/site-packages\/datasets\/`\r\n\r\nSo when setting `datasets.utils.logging.set_verbosity_error()`, it still logs the last message which is annoying during evaluation. 
\r\n\r\nI had to do \r\n```\r\nlogging.getLogger(\"\/conda\/envs\/myenv\/lib\/python3.8\/site-packages\/datasets\/metric\").setLevel(logging.ERROR)\r\n``` \r\nto fully mute these messages\r\n\r\n## Expected results\r\nit shouldn't log these messages when setting `datasets.utils.logging.set_verbosity_error()`\r\n\r\n## Environment info\r\n\r\n- `datasets` version: tried both 1.8.0 & 1.9.0\r\n- Platform: Ubuntu 18.04.5 LTS \r\n- Python version: 3.8.10\r\n- PyArrow version: 3.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2624\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2623","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2623\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2623\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2623\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2623","id":941265342,"node_id":"MDExOlB1bGxSZXF1ZXN0Njg3MTk0MjM3","number":2623,"title":"[Metrics] added wiki_split metrics","user":{"login":"bhadreshpsavani","id":26653468,"node_id":"MDQ6VXNlcjI2NjUzNDY4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26653468?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhadreshpsavani","html_url":"https:\/\/github.com\/bhadreshpsavani","followers_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/followers","following_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/repos","events_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"assignees":[{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https
:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Looks all good to me thanks :)\r\nJust did some minor corrections in the docstring"],"created_at":1625928710000,"updated_at":1626272893000,"closed_at":1626129271000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2623","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2623","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2623.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2623.patch"},"body":"Fixes: #2606\r\n\r\nThis pull request adds combine metrics for the wikisplit or English sentence split task\r\n\r\nReviewer: @patrickvonplaten ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2623\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2622","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2622\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2622\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2622\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2622","id":941127785,"node_id":"MDU6SXNzdWU5NDExMjc3ODU=","number":2622,"title":"Integration with 
AugLy","user":{"login":"Darktex","id":890615,"node_id":"MDQ6VXNlcjg5MDYxNQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/890615?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Darktex","html_url":"https:\/\/github.com\/Darktex","followers_url":"https:\/\/api.github.com\/users\/Darktex\/followers","following_url":"https:\/\/api.github.com\/users\/Darktex\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Darktex\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Darktex\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Darktex\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Darktex\/orgs","repos_url":"https:\/\/api.github.com\/users\/Darktex\/repos","events_url":"https:\/\/api.github.com\/users\/Darktex\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Darktex\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi,\r\n\r\nyou can define your own custom formatting with `Dataset.set_transform()` and then run the tokenizer with the batches of augmented data as follows:\r\n```python\r\ndset = load_dataset(\"imdb\", split=\"train\") # Let's say we are working with the IMDB dataset\r\ndset.set_transform(lambda ex: {\"text\": augly_text_augmentation(ex[\"text\"])}, columns=\"text\", output_all_columns=True)\r\ndataloader = torch.utils.data.DataLoader(dset, batch_size=32)\r\nfor epoch in range(5):\r\n for batch in dataloader:\r\n tokenizer_output = tokenizer(batch.pop(\"text\"), padding=True, truncation=True, return_tensors=\"pt\")\r\n batch.update(tokenizer_output)\r\n output = model(**batch)\r\n ...\r\n```"],"created_at":1625875389000,"updated_at":1626023291000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"**Is your feature request related to a problem? Please describe.**\r\nFacebook recently launched a library, [AugLy](https:\/\/github.com\/facebookresearch\/AugLy) , that has a unified API for augmentations for image, video and text.\r\n\r\nIt would be pretty exciting to have it hooked up to HF libraries so that we can make NLP models robust to misspellings or to punctuation, or emojis etc. Plus, with Transformers supporting more CV use cases, having augmentations support becomes crucial.\r\n\r\n**Describe the solution you'd like**\r\nThe biggest difference between augmentations and preprocessing is that preprocessing happens only once, but you are running augmentations once per epoch. 
AugLy operates on text directly, so this breaks the typical workflow where we would run the tokenizer once, set format to pt tensors and be ready for the Dataloader.\r\n\r\n**Describe alternatives you've considered**\r\n\r\nOne possible way of implementing these is to make a custom Dataset class where getitem(i) runs the augmentation and the tokenizer every time, though this would slow training down considerably given we wouldn't even run the tokenizer in batches.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2622\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2621","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2621\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2621\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2621\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2621","id":940916446,"node_id":"MDExOlB1bGxSZXF1ZXN0Njg2OTE1Mzcw","number":2621,"title":"Use prefix to allow exceed Windows MAX_PATH","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Does this mean the `FileNotFoundError` that avoids infinite loop can be removed?","Yes, I think so...","Or maybe we could leave it in case a relative path exceeds the MAX_PATH limit?"," > Or maybe we could leave it in case a relative path exceeds the MAX_PATH limit?\r\n\r\nWhat about converting relative paths to absolute?","Nice ! Have you had a chance to test it on a windows machine with the max path limit enabled ? Afaik the CI doesn't have the path limit","Sure @lhoestq: I've tested on my machine... And this fixes most of the tests... 
\ud83d\ude05 "],"created_at":1625848793000,"updated_at":1626449292000,"closed_at":1626449291000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2621","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2621","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2621.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2621.patch"},"body":"By using this prefix, you can exceed the Windows MAX_PATH limit.\r\n\r\nSee: https:\/\/docs.microsoft.com\/en-us\/windows\/win32\/fileio\/naming-a-file?redirectedfrom=MSDN#win32-file-namespaces\r\n\r\nRelated to #2524, #2220.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2621\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2620","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2620\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2620\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2620\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2620","id":940893389,"node_id":"MDExOlB1bGxSZXF1ZXN0Njg2ODk3MDky","number":2620,"title":"Add speech processing tasks","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Are there any `task_categories:automatic-speech-recognition` dataset for which we should update the tags ?","> Are there any `task_categories:automatic-speech-recognition` dataset for which we should update the tags ?\r\n\r\nYes there's a few - I'll fix them tomorrow :)"],"created_at":1625846849000,"updated_at":1626114779000,"closed_at":1626111122000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2620","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2620","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2620.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2620.patch"},"body":"This PR replaces the `automatic-speech-recognition` task category with a broader `speech-processing` category. 
\r\n\r\nThe tasks associated with this category are derived from the [SUPERB benchmark](https:\/\/arxiv.org\/abs\/2105.01051), and ASR is included in this set.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2620\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2619","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2619\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2619\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2619\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2619","id":940858236,"node_id":"MDExOlB1bGxSZXF1ZXN0Njg2ODY3NDA4","number":2619,"title":"Add ASR task for SUPERB","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/6","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6\/labels","id":6836458,"node_id":"MDk6TWlsZXN0b25lNjgzNjQ1OA==","number":6,"title":"1.10","description":"Next minor 
release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":29,"state":"closed","created_at":1623178113000,"updated_at":1626881809000,"due_on":1628146800000,"closed_at":1626881809000},"comments":["Wait until #2620 is merged before pushing the README tags in this PR","> Thanks!\r\n> \r\n> One question: aren't you adding `task_templates` to the `_info` method (and to the `dataset_infos.json`?\r\n\r\ngreat catch! i've now added the asr task template (along with a mapping from superb task -> template) and updated the `dataset_infos.json` :) ","> Good!\r\n> \r\n> I have a suggested refactoring... Tell me what you think! :)\r\n\r\nyour approach is much more elegant - i've included your suggestions \ud83d\ude4f "],"created_at":1625843985000,"updated_at":1626339358000,"closed_at":1626180018000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2619","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2619","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2619.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2619.patch"},"body":"This PR starts building up the SUPERB benchmark by including the ASR task as described in the [SUPERB paper](https:\/\/arxiv.org\/abs\/2105.01051) and `s3prl` [instructions](https:\/\/github.com\/s3prl\/s3prl\/tree\/v0.2.0\/downstream#asr-automatic-speech-recognition).\r\n\r\nUsage:\r\n\r\n```python\r\nfrom datasets import load_dataset \r\n\r\nasr = load_dataset(\"superb\", \"asr\")\r\n# DatasetDict({\r\n# train: Dataset({\r\n# features: ['file', 'text', 'speaker_id', 'chapter_id', 'id'],\r\n# num_rows: 28539\r\n# })\r\n# validation: Dataset({\r\n# features: ['file', 'text', 'speaker_id', 'chapter_id', 'id'],\r\n# num_rows: 2703\r\n# })\r\n# test: Dataset({\r\n# features: ['file', 'text', 'speaker_id', 'chapter_id', 'id'],\r\n# num_rows: 2620\r\n# })\r\n# })\r\n```\r\n\r\nI've used the GLUE benchmark as a guide for filling out the README.\r\n\r\nTo move fast during the evaluation PoC I propose to merge one task at a time, so we can continue building the training \/ evaluation framework in parallel.\r\n\r\nNote: codewise this PR is ready for review - I'll add the missing YAML tags once #2620 is merged :)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2619\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2618","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2618\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2618\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2618\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2618","id":940852640,"node_id":"MDU6SXNzdWU5NDA4NTI2NDA=","number":2618,"title":"`filelock.py` Error","user":{"login":"liyucheng09","id":27999909,"node_id":"MDQ6VXNlcjI3OTk5OTA5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27999909?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/liyucheng09","html_url":"https:\/\/github.com\/liyucheng09","followers_url":"https:\/\/api.github.com\/users\/liyucheng09\/followers","following_url":"https:\/\/api.github.com\/users\/liyucheng09\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/liyucheng09\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/liyucheng09\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/liyucheng09\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/liyucheng09\/orgs","repos_url":"https:\/\/api.github.com\/users\/liyucheng09\/repos","events_url":"https:\/\/api.github.com\/users\/liyucheng09\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/liyucheng09\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @liyucheng09, thanks for reporting.\r\n\r\nApparently this issue has to do with your environment setup. One question: is your data in an NFS share? Some people have reported this error when using `fcntl` to write to an NFS share... If this is the case, then it might be that your NFS just may not be set up to provide file locks. You should ask your system administrator, or try these commands in the terminal:\r\n```shell\r\nsudo systemctl enable rpc-statd\r\nsudo systemctl start rpc-statd\r\n```"],"created_at":1625843569000,"updated_at":1626070830000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\nIt seems that the `filelock.py` went error. 
\r\n\r\n```\r\n>>> ds=load_dataset('xsum')\r\n\r\n^CTraceback (most recent call last):\r\n File \"\/user\/HS502\/yl02706\/.conda\/envs\/lyc\/lib\/python3.6\/site-packages\/datasets\/utils\/filelock.py\", line 402, in _acquire\r\n fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)\r\nOSError: [Errno 37] No locks available\r\n```\r\n\r\nAccording to error log, it is OSError, but there is an `except` in the `_acquire` function.\r\n\r\n```\r\n def _acquire(self):\r\n open_mode = os.O_WRONLY | os.O_CREAT | os.O_EXCL | os.O_TRUNC\r\n try:\r\n fd = os.open(self._lock_file, open_mode)\r\n except (IOError, OSError):\r\n pass\r\n else:\r\n self._lock_file_fd = fd\r\n return None\r\n```\r\n\r\nI don't know why it stucked rather than `pass` directly.\r\n\r\nI am not quite familiar with filelock operation, so any help is highly appriciated.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n\r\nds = load_dataset('xsum')\r\n```\r\n\r\n## Expected results\r\nA clear and concise description of the expected results.\r\n\r\n## Actual results\r\n```\r\n>>> ds=load_dataset('xsum')\r\n\r\n^CTraceback (most recent call last):\r\n File \"\/user\/HS502\/yl02706\/.conda\/envs\/lyc\/lib\/python3.6\/site-packages\/datasets\/utils\/filelock.py\", line 402, in _acquire\r\n fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)\r\nOSError: [Errno 37] No locks available\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/user\/HS502\/yl02706\/.conda\/envs\/lyc\/lib\/python3.6\/site-packages\/datasets\/load.py\", line 818, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"\/user\/HS502\/yl02706\/.conda\/envs\/lyc\/lib\/python3.6\/site-packages\/datasets\/load.py\", line 470, in prepare_module\r\n with FileLock(lock_path):\r\n File \"\/user\/HS502\/yl02706\/.conda\/envs\/lyc\/lib\/python3.6\/site-packages\/datasets\/utils\/filelock.py\", line 323, in __enter__\r\n self.acquire()\r\n File \"\/user\/HS502\/yl02706\/.conda\/envs\/lyc\/lib\/python3.6\/site-packages\/datasets\/utils\/filelock.py\", line 272, in acquire\r\n self._acquire()\r\n File \"\/user\/HS502\/yl02706\/.conda\/envs\/lyc\/lib\/python3.6\/site-packages\/datasets\/utils\/filelock.py\", line 402, in _acquire\r\n fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)\r\nKeyboardInterrupt\r\n```\r\n\r\n## Environment info\r\n\r\n\r\n- `datasets` version: 1.9.0\r\n- Platform: Linux-4.15.0-135-generic-x86_64-with-debian-buster-sid\r\n- Python version: 3.6.13\r\n- PyArrow version: 4.0.1\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2618\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2617","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2617\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2617\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2617\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2617","id":940846847,"node_id":"MDExOlB1bGxSZXF1ZXN0Njg2ODU3NzQz","number":2617,"title":"Fix missing EOL issue in to_json for old versions of 
pandas","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/6","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6\/labels","id":6836458,"node_id":"MDk6TWlsZXN0b25lNjgzNjQ1OA==","number":6,"title":"1.10","description":"Next minor release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":29,"state":"closed","created_at":1623178113000,"updated_at":1626881809000,"due_on":1628146800000,"closed_at":1626881809000},"comments":[],"created_at":1625843145000,"updated_at":1626098940000,"closed_at":1625844513000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2617","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2617","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2617.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2617.patch"},"body":"Some versions of pandas don't add an EOL at the end of the output of `to_json`.\r\nTherefore users could end up having two samples in the same line\r\n\r\nClose https:\/\/github.com\/huggingface\/datasets\/issues\/2615","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2617\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2616","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2616\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2616\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2616\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2616","id":940799038,"node_id":"MDExOlB1bGxSZXF1ZXN0Njg2ODE3NjYz","number":2616,"title":"Support remote data files","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/6","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6\/labels","id":6836458,"node_id":"MDk6TWlsZXN0b25lNjgzNjQ1OA==","number":6,"title":"1.10","description":"Next minor 
release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":29,"state":"closed","created_at":1623178113000,"updated_at":1626881809000,"due_on":1628146800000,"closed_at":1626881809000},"comments":["@lhoestq maybe we could also use (if available) the ETag of the remote file in `create_config_id`?","> @lhoestq maybe we could also use (if available) the ETag of the remote file in `create_config_id`?\r\n\r\nSure ! We can get the ETag with\r\n```python\r\nheaders = get_authentication_headers_for_url(url, use_auth_token=use_auth_token) # auth for private repos\r\netag = http_head(url, headers=headers).headers.get(\"ETag\")\r\n```\r\n\r\nSince the computation of the `config_id` is done in the `DatasetBuilder.__init__`, then this means that we need to add a new parameter `use_auth_token` in `DatasetBuilder.__init__`\r\n\r\nDoes that sound good ? 
We can add this in a following PR"],"created_at":1625839658000,"updated_at":1625847221000,"closed_at":1625847221000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2616","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2616","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2616.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2616.patch"},"body":"Add support for (streaming) remote data files:\r\n\r\n```python\r\ndata_files = f\"https:\/\/huggingface.co\/datasets\/{repo_id}\/resolve\/main\/{relative_file_path}\"\r\nds = load_dataset(\"json\", split=\"train\", data_files=data_files, streaming=True)\r\n```\r\n\r\ncc: @thomwolf ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2616\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2615","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2615\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2615\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2615\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2615","id":940794339,"node_id":"MDU6SXNzdWU5NDA3OTQzMzk=","number":2615,"title":"Jsonlines export error","user":{"login":"TevenLeScao","id":26709476,"node_id":"MDQ6VXNlcjI2NzA5NDc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26709476?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/TevenLeScao","html_url":"https:\/\/github.com\/TevenLeScao","followers_url":"https:\/\/api.github.com\/users\/TevenLeScao\/followers","following_url":"https:\/\/api.github.com\/users\/TevenLeScao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/TevenLeScao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/TevenLeScao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/TevenLeScao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/TevenLeScao\/orgs","repos_url":"https:\/\/api.github.com\/users\/TevenLeScao\/repos","events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting @TevenLeScao! I'm having a look...","(not sure what just happened on the assignations sorry)","For some reason this happens (both `datasets` version are on master) only on Python 3.6 and not Python 3.8.","@TevenLeScao we are using `pandas` to serialize the dataset to JSON Lines. So it must be due to pandas. 
Could you please check the pandas version causing the issue?","@TevenLeScao I have just checked it: this was a bug in `pandas` and it was fixed in version 1.2: https:\/\/github.com\/pandas-dev\/pandas\/pull\/36898","Thanks ! I'm creating a PR","Well I though it was me who has taken on this issue... \ud83d\ude05 ","Sorry, I was also talking to teven offline so I already had the PR ready before noticing x)","I was also already working in my PR... Nevermind. Next time we should pay attention if there is somebody (self-)assigned to an issue and if he\/she is still working on it before overtaking it... \ud83d\ude04 ","The fix is available on `master` @TevenLeScao , thanks for reporting"],"created_at":1625839325000,"updated_at":1625844547000,"closed_at":1625844513000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nWhen exporting large datasets in jsonlines (c4 in my case) the created file has an error every 9999 lines: the 9999th and 10000th are concatenated, thus breaking the jsonlines format. This sounds like it is related to batching, which is by 10000 by default\r\n\r\n## Steps to reproduce the bug\r\nThis what I'm running:\r\n\r\nin python:\r\n\r\n```\r\nfrom datasets import load_dataset\r\nptb = load_dataset(\"ptb_text_only\")\r\nptb[\"train\"].to_json(\"ptb.jsonl\")\r\n```\r\n\r\nthen out of python:\r\n\r\n```\r\nhead -10000 ptb.jsonl\r\n```\r\n\r\n## Expected results\r\nProperly separated lines\r\n\r\n## Actual results\r\nThe last line is a concatenation of two lines\r\n\r\n## Environment info\r\n\r\n\r\n- `datasets` version: 1.9.1.dev0\r\n- Platform: Linux-5.4.0-1046-gcp-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.6.9\r\n- PyArrow version: 4.0.1","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2615\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2614","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2614\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2614\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2614\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2614","id":940762427,"node_id":"MDExOlB1bGxSZXF1ZXN0Njg2Nzg2NTg3","number":2614,"title":"Convert numpy scalar to python float in Pearsonr 
output","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/6","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6\/labels","id":6836458,"node_id":"MDk6TWlsZXN0b25lNjgzNjQ1OA==","number":6,"title":"1.10","description":"Next minor release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":29,"state":"closed","created_at":1623178113000,"updated_at":1626881809000,"due_on":1628146800000,"closed_at":1626881809000},"comments":[],"created_at":1625836975000,"updated_at":1626099182000,"closed_at":1625839478000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2614","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2614","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2614.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2614.patch"},"body":"Following of https:\/\/github.com\/huggingface\/datasets\/pull\/2612","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2614\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2613","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2613\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2613\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2613\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2613","id":940759852,"node_id":"MDExOlB1bGxSZXF1ZXN0Njg2Nzg0MzY0","number":2613,"title":"Use ndarray.item instead of ndarray.tolist","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/6","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6\/labels","id":6836458,"node_id":"MDk6TWlsZXN0b25lNjgzNjQ1OA==","number":6,"title":"1.10","description":"Next minor 
release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":29,"state":"closed","created_at":1623178113000,"updated_at":1626881809000,"due_on":1628146800000,"closed_at":1626881809000},"comments":[],"created_at":1625836775000,"updated_at":1626099177000,"closed_at":1625838605000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2613","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2613","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2613.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2613.patch"},"body":"This PR follows up on #2612 to use `numpy.ndarray.item` instead of `numpy.ndarray.tolist` as the latter is somewhat confusing to the developer (even though it works).\r\n\r\nJudging from the `numpy` docs, `ndarray.item` is closer to what we want: https:\/\/numpy.org\/doc\/stable\/reference\/generated\/numpy.ndarray.item.html#numpy-ndarray-item\r\n\r\nPS. Sorry for the duplicate work here. 
I should have read the numpy docs more carefully in #2612 \r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2613\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2612","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2612\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2612\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2612\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2612","id":940604512,"node_id":"MDExOlB1bGxSZXF1ZXN0Njg2NjUwMjk3","number":2612,"title":"Return Python float instead of numpy.float64 in sklearn metrics","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/6","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6\/labels","id":6836458,"node_id":"MDk6TWlsZXN0b25lNjgzNjQ1OA==","number":6,"title":"1.10","description":"Next minor release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":29,"state":"closed","created_at":1623178113000,"updated_at":1626881809000,"due_on":1628146800000,"closed_at":1626881809000},"comments":["I opened an issue on the `sklearn` repo to understand 
why `numpy.float64` is the default: https:\/\/github.com\/scikit-learn\/scikit-learn\/discussions\/20490","It could be surprising at first to use `tolist()` on numpy scalars but it works ^^","did the same for Pearsonr here: https:\/\/github.com\/huggingface\/datasets\/pull\/2614"],"created_at":1625824089000,"updated_at":1626099173000,"closed_at":1625835834000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2612","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2612","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2612.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2612.patch"},"body":"This PR converts the return type of all `sklearn` metrics to be Python `float` instead of `numpy.float64`.\r\n\r\nThe reason behind this is that our Hub evaluation framework relies on converting benchmark-specific metrics to YAML ([example](https:\/\/huggingface.co\/datasets\/autonlp\/autonlp-benchmark-raft-neelalex__raft-test-neelalex__raft-predictions-3\/blob\/main\/README.md#L11)) and the `numpy.float64` format produces garbage like:\r\n\r\n```python\r\nimport yaml\r\nfrom datasets import load_metric\r\n\r\nmetric = load_metric(\"accuracy\")\r\nscore = metric.compute(predictions=[0,1], references=[0,1])\r\nprint(yaml.dump(score[\"accuracy\"])) # output below\r\n# !!python\/object\/apply:numpy.core.multiarray.scalar\r\n# - !!python\/object\/apply:numpy.dtype\r\n# args:\r\n# - f8\r\n# - false\r\n# - true\r\n# state: !!python\/tuple\r\n# - 3\r\n# - <\r\n# - null\r\n# - null\r\n# - null\r\n# - -1\r\n# - -1\r\n# - 0\r\n# - !!binary |\r\n# AAAAAAAA8D8=\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2612\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2611","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2611\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2611\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2611\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2611","id":940307053,"node_id":"MDExOlB1bGxSZXF1ZXN0Njg2Mzk5MjU3","number":2611,"title":"More consistent 
naming","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1625789357000,"updated_at":1626196399000,"closed_at":1626192510000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2611","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2611","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2611.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2611.patch"},"body":"As per @stas00's suggestion in #2500, this PR inserts a space between the logo and the lib name (`\ud83e\udd17Datasets` -> `\ud83e\udd17 Datasets`) for consistency with the Transformers lib. Additionally, more consistent names are used for Datasets Hub, etc.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2611\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2610","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2610\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2610\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2610\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2610","id":939899829,"node_id":"MDExOlB1bGxSZXF1ZXN0Njg2MDUwMzI5","number":2610,"title":"Add missing WikiANN language 
tags","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/6","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6\/labels","id":6836458,"node_id":"MDk6TWlsZXN0b25lNjgzNjQ1OA==","number":6,"title":"1.10","description":"Next minor release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":29,"state":"closed","created_at":1623178113000,"updated_at":1626881809000,"due_on":1628146800000,"closed_at":1626881809000},"comments":[],"created_at":1625753281000,"updated_at":1626099136000,"closed_at":1625759044000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2610","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2610","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2610.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2610.patch"},"body":"Add missing language tags for WikiANN datasets.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2610\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2609","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2609\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2609\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2609\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2609","id":939616682,"node_id":"MDExOlB1bGxSZXF1ZXN0Njg1ODA3MTMz","number":2609,"title":"Fix potential DuplicatedKeysError","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/6","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6\/labels","id":6836458,"node_id":"MDk6TWlsZXN0b25lNjgzNjQ1OA==","number":6,"title":"1.10","description":"Next minor release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":29,"state":"closed","created_at":1623178113000,"updated_at":1626881809000,"due_on":1628146800000,"closed_at":1626881809000},"comments":["Finally, I'm splitting this 
PR."],"created_at":1625733484000,"updated_at":1626099196000,"closed_at":1625848928000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2609","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2609","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2609.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2609.patch"},"body":"Fix potential DiplicatedKeysError by ensuring keys are unique.\r\n\r\nWe should promote as a good practice, that the keys should be programmatically generated as unique, instead of read from data (which might be not unique).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2609\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2608","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2608\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2608\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2608\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2608","id":938897626,"node_id":"MDExOlB1bGxSZXF1ZXN0Njg1MjAwMDYw","number":2608,"title":"Support streaming JSON files ","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/6","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6\/labels","id":6836458,"node_id":"MDk6TWlsZXN0b25lNjgzNjQ1OA==","number":6,"title":"1.10","description":"Next minor 
release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":29,"state":"closed","created_at":1623178113000,"updated_at":1626881809000,"due_on":1628146800000,"closed_at":1626881809000},"comments":[],"created_at":1625664622000,"updated_at":1626099151000,"closed_at":1625760521000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2608","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2608","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2608.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2608.patch"},"body":"Use open in JSON dataset builder, so that it can be patched with xopen for streaming.\r\n\r\nClose #2607.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2608\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2607","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2607\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2607\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2607\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2607","id":938796902,"node_id":"MDU6SXNzdWU5Mzg3OTY5MDI=","number":2607,"title":"Streaming local gzip compressed JSON line files is not 
working","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Updating to pyarrow-4.0.1 didn't fix the issue","Here is an exemple dataset with 2 of these compressed JSON files: 
https:\/\/huggingface.co\/datasets\/thomwolf\/github-python","Hi @thomwolf, thanks for reporting.\r\n\r\nIt seems this might be due to the fact that the JSON Dataset builder uses `pyarrow.json` (`paj.read_json`) to read the data without using the Python standard `open(file,...` (which is the one patched with `xopen` to work in streaming mode).\r\n\r\nThis has to be fixed.","Sorry for reopening this, but I'm having the same issue as @thomwolf when streaming a gzipped JSON Lines file from the hub. Or is that just not possible by definition?\r\nI installed `datasets`in editable mode from source (so probably includes the fix from #2608 ?): \r\n```\r\n>>> datasets.__version__\r\n'1.9.1.dev0'\r\n```\r\n\r\n```\r\n>>> msmarco = datasets.load_dataset(\"webis\/msmarco\", \"corpus\", streaming=True)\r\nUsing custom data configuration corpus-174d3b7155eb68db\r\n>>> msmarco_iter = iter(msmarco['train'])\r\n>>> print(next(msmarco_iter))\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/media\/ssd\/TREC\/msmarco\/datasets\/src\/datasets\/iterable_dataset.py\", line 338, in __iter__\r\n for key, example in self._iter():\r\n File \"\/media\/ssd\/TREC\/msmarco\/datasets\/src\/datasets\/iterable_dataset.py\", line 335, in _iter\r\n yield from ex_iterable\r\n File \"\/media\/ssd\/TREC\/msmarco\/datasets\/src\/datasets\/iterable_dataset.py\", line 78, in __iter__\r\n for key, example in self.generate_examples_fn(**self.kwargs):\r\n File \"\/home\/christopher\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/msmarco\/eb63dff8d83107168e973c7a655a6082d37e08d71b4ac39a0afada479c138745\/msmarco.py\", line 96, in _generate_examples\r\n with gzip.open(file, \"rt\", encoding=\"utf-8\") as f:\r\n File \"\/usr\/lib\/python3.6\/gzip.py\", line 53, in open\r\n binary_file = GzipFile(filename, gz_mode, compresslevel)\r\n File \"\/usr\/lib\/python3.6\/gzip.py\", line 163, in __init__\r\n fileobj = self.myfileobj = builtins.open(filename, mode or 'rb')\r\nFileNotFoundError: [Errno 2] No such file or directory: 'https:\/\/huggingface.co\/datasets\/webis\/msmarco\/resolve\/main\/msmarco_doc_00.gz'\r\n```\r\n\r\nLoading the dataset without streaming set to True, works fine.","Hi ! To make the streaming work, we extend `open` in the dataset builder to work with urls.\r\n\r\nTherefore you just need to use `open` before using `gzip.open`:\r\n```diff\r\n- with gzip.open(file, \"rt\", encoding=\"utf-8\") as f:\r\n+ with gzip.open(open(file, \"rb\"), \"rt\", encoding=\"utf-8\") as f:\r\n```\r\n\r\nYou can see that it is the case for oscar.py and c4.py for example:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/8814b393984c1c2e1800ba370de2a9f7c8644908\/datasets\/oscar\/oscar.py#L358-L358\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/8814b393984c1c2e1800ba370de2a9f7c8644908\/datasets\/c4\/c4.py#L88-L88\r\n\r\n","@lhoestq Sorry I missed that. 
Thank you Quentin!"],"created_at":1625657793000,"updated_at":1626774619000,"closed_at":1625760521000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nUsing streaming to iterate on local gzip compressed JSON files raise a file not exist error\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nstreamed_dataset = load_dataset('json', split='train', data_files=data_files, streaming=True)\r\n\r\nnext(iter(streamed_dataset))\r\n```\r\n\r\n## Actual results\r\n```\r\nFileNotFoundError Traceback (most recent call last)\r\n in \r\n----> 1 next(iter(streamed_dataset))\r\n\r\n~\/Documents\/GitHub\/datasets\/src\/datasets\/iterable_dataset.py in __iter__(self)\r\n 336 \r\n 337 def __iter__(self):\r\n--> 338 for key, example in self._iter():\r\n 339 if self.features:\r\n 340 # we encode the example for ClassLabel feature types for example\r\n\r\n~\/Documents\/GitHub\/datasets\/src\/datasets\/iterable_dataset.py in _iter(self)\r\n 333 else:\r\n 334 ex_iterable = self._ex_iterable\r\n--> 335 yield from ex_iterable\r\n 336 \r\n 337 def __iter__(self):\r\n\r\n~\/Documents\/GitHub\/datasets\/src\/datasets\/iterable_dataset.py in __iter__(self)\r\n 76 \r\n 77 def __iter__(self):\r\n---> 78 for key, example in self.generate_examples_fn(**self.kwargs):\r\n 79 yield key, example\r\n 80 \r\n\r\n~\/Documents\/GitHub\/datasets\/src\/datasets\/iterable_dataset.py in wrapper(**kwargs)\r\n 282 def wrapper(**kwargs):\r\n 283 python_formatter = PythonFormatter()\r\n--> 284 for key, table in generate_tables_fn(**kwargs):\r\n 285 batch = python_formatter.format_batch(table)\r\n 286 for i, example in enumerate(_batch_to_examples(batch)):\r\n\r\n~\/Documents\/GitHub\/datasets\/src\/datasets\/packaged_modules\/json\/json.py in _generate_tables(self, files, original_files)\r\n 85 file,\r\n 86 read_options=self.config.pa_read_options,\r\n---> 87 parse_options=self.config.pa_parse_options,\r\n 88 )\r\n 89 except pa.ArrowInvalid as err:\r\n\r\n~\/miniconda2\/envs\/datasets\/lib\/python3.7\/site-packages\/pyarrow\/_json.pyx in pyarrow._json.read_json()\r\n\r\n~\/miniconda2\/envs\/datasets\/lib\/python3.7\/site-packages\/pyarrow\/_json.pyx in pyarrow._json._get_reader()\r\n\r\n~\/miniconda2\/envs\/datasets\/lib\/python3.7\/site-packages\/pyarrow\/io.pxi in pyarrow.lib.get_input_stream()\r\n\r\n~\/miniconda2\/envs\/datasets\/lib\/python3.7\/site-packages\/pyarrow\/io.pxi in pyarrow.lib.get_native_file()\r\n\r\n~\/miniconda2\/envs\/datasets\/lib\/python3.7\/site-packages\/pyarrow\/io.pxi in pyarrow.lib.OSFile.__cinit__()\r\n\r\n~\/miniconda2\/envs\/datasets\/lib\/python3.7\/site-packages\/pyarrow\/io.pxi in pyarrow.lib.OSFile._open_readable()\r\n\r\n~\/miniconda2\/envs\/datasets\/lib\/python3.7\/site-packages\/pyarrow\/error.pxi in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\n~\/miniconda2\/envs\/datasets\/lib\/python3.7\/site-packages\/pyarrow\/error.pxi in pyarrow.lib.check_status()\r\n\r\nFileNotFoundError: [Errno 2] Failed to open local file 'gzip:\/\/file-000000000000.json::\/Users\/thomwolf\/github-dataset\/file-000000000000.json.gz'. 
Detail: [errno 2] No such file or directory\r\n```\r\n\r\n## Environment info\r\n- `datasets` version: 1.9.1.dev0\r\n- Platform: Darwin-19.6.0-x86_64-i386-64bit\r\n- Python version: 3.7.7\r\n- PyArrow version: 1.0.0","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2607\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2606","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2606\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2606\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2606\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2606","id":938763684,"node_id":"MDU6SXNzdWU5Mzg3NjM2ODQ=","number":2606,"title":"[Metrics] addition of wiki_split metrics","user":{"login":"bhadreshpsavani","id":26653468,"node_id":"MDQ6VXNlcjI2NjUzNDY4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26653468?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhadreshpsavani","html_url":"https:\/\/github.com\/bhadreshpsavani","followers_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/followers","following_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/repos","events_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"},{"id":2459308248,"node_id":"MDU6TGFiZWwyNDU5MzA4MjQ4","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/metric%20request","name":"metric request","color":"d4c5f9","default":false,"description":"Requesting to add a new 
metric"}],"state":"closed","locked":false,"assignee":{"login":"bhadreshpsavani","id":26653468,"node_id":"MDQ6VXNlcjI2NjUzNDY4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26653468?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhadreshpsavani","html_url":"https:\/\/github.com\/bhadreshpsavani","followers_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/followers","following_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/repos","events_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/received_events","type":"User","site_admin":false},"assignees":[{"login":"bhadreshpsavani","id":26653468,"node_id":"MDQ6VXNlcjI2NjUzNDY4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26653468?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhadreshpsavani","html_url":"https:\/\/github.com\/bhadreshpsavani","followers_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/followers","following_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/repos","events_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["#take"],"created_at":1625655364000,"updated_at":1626129271000,"closed_at":1626129271000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"**Is your feature request related to a problem? 
Please describe.**\r\nWhile training the model on sentence split the task in English we require to evaluate the trained model on `Exact Match`, `SARI` and `BLEU` score\r\nlike this \r\n![image](https:\/\/user-images.githubusercontent.com\/26653468\/124746876-ff5a3380-df3e-11eb-9a01-4b48db7a6694.png)\r\nWhile training we require metrics which can give all the output\r\n\r\nCurrently, we don't have an exact match for text normalized data\r\n\r\n**Describe the solution you'd like**\r\nA custom metrics for wiki_split that can calculate these three values and provide it in the form of a single dictionary\r\nFor exact match, we can refer to [this](https:\/\/github.com\/huggingface\/transformers\/blob\/master\/src\/transformers\/data\/metrics\/squad_metrics.py) \r\n\r\n**Describe alternatives you've considered**\r\nTwo metrics are already present one more can be added for an exact match then we can run all three metrics in training script\r\n\r\n#self-assign","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2606\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2605","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2605\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2605\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2605\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2605","id":938648164,"node_id":"MDExOlB1bGxSZXF1ZXN0Njg0OTkyODIz","number":2605,"title":"Make any ClientError trigger retry in streaming mode (e.g. ClientOSError)","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/6","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6\/labels","id":6836458,"node_id":"MDk6TWlsZXN0b25lNjgzNjQ1OA==","number":6,"title":"1.10","description":"Next minor 
release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":29,"state":"closed","created_at":1623178113000,"updated_at":1626881809000,"due_on":1628146800000,"closed_at":1626881809000},"comments":[],"created_at":1625647643000,"updated_at":1626099027000,"closed_at":1625648353000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2605","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2605","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2605.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2605.patch"},"body":"During the FLAX sprint some users have this error when streaming datasets:\r\n```python\r\naiohttp.client_exceptions.ClientOSError: [Errno 104] Connection reset by peer\r\n```\r\nThis error must trigger a retry instead of directly crashing\r\n\r\nTherefore I extended the error type that triggers the retry to be the base aiohttp error type: `ClientError`\r\nIn particular both `ClientOSError` and `ServerDisconnectedError` inherit from `ClientError`.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2605\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2604","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2604\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2604\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2604\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2604","id":938602237,"node_id":"MDU6SXNzdWU5Mzg2MDIyMzc=","number":2604,"title":"Add option to delete temporary files (e.g. 
extracted files) when loading dataset","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6","html_url":"https:\/\/github.com\/huggingface\/dat
asets\/milestone\/6","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6\/labels","id":6836458,"node_id":"MDk6TWlsZXN0b25lNjgzNjQ1OA==","number":6,"title":"1.10","description":"Next minor release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":29,"state":"closed","created_at":1623178113000,"updated_at":1626881809000,"due_on":1628146800000,"closed_at":1626881809000},"comments":["Hi !\r\nIf we want something more general, we could either\r\n1. delete the extracted files after the arrow data generation automatically, or \r\n2. delete each extracted file during the arrow generation right after it has been closed.\r\n\r\nSolution 2 is better to save disk space during the arrow generation. Is it what you had in mind ?\r\n\r\nThe API could look like\r\n```python\r\nload_dataset(..., delete_extracted_files_after_usage=True)\r\n```\r\n\r\nIn terms of implementation, here are some directions we could take for each solution:\r\n1. get the list of the extracted files from the DownloadManager and then delete them after the dataset is processed. This can be implemented in `download_and_prepare` I guess\r\n2. maybe wrap and mock `open` in the builder to make it delete the file when the file is closed.","Also, if I delete the extracted files they need to be re-extracted again instead of loading from the Arrow cache files","I think we already opened an issue about this topic (suggested by @stas00): duplicated of #2481?\r\n\r\nThis is in our TODO list... \ud83d\ude05 ","I think the deletion of each extracted file could be implemented in our CacheManager and ExtractManager (once merged to master: #2295, #2277). \ud83d\ude09 ","Oh yes sorry, I didn't check if this was a duplicate","Nevermind @thomwolf, I just mentioned the other issue so that both appear linked in GitHub and we do not forget to close both once we make the corresponding Pull Request... That was the main reason! \ud83d\ude04 ","Ok yes. I think this is an important feature to be able to use large datasets which are pretty much always compressed files.\r\n\r\nIn particular now this requires to keep the extracted file on the drive if you want to avoid reprocessing the dataset so in my case, this require using always ~400GB of drive instead of just 200GB (which is already significant). 
\r\n\r\nTwo nice features would be to:\r\n- allow to delete the extracted files without loosing the ability to load the dataset from the cached arrow-file\r\n- streamlined decompression when only the currently read file is extracted - this might require to read the list of files from the extracted archives before processing them?","Here is a sample dataset with 2 such large compressed JSON files for debugging: https:\/\/huggingface.co\/datasets\/thomwolf\/github-python","Note that I'm confirming that with the current master branch of dataset, deleting extracted files (without deleting the arrow cache file) lead to **re-extracting** these files when reloading the dataset instead of directly loading the arrow cache file.","Hi ! That's weird, it doesn't do that on my side (tested on master on my laptop by deleting the `extracted` folder in the download cache directory). You tested with one of the files at https:\/\/huggingface.co\/datasets\/thomwolf\/github-python that you have locally ?","Yes it\u2019s when I load local compressed JSON line files with load_dataset(\u2018json\u2019, data_files=\u2026) ","@thomwolf I'm sorry but I can't reproduce this problem. I'm also using: \r\n```python\r\nds = load_dataset(\"json\", split=\"train\", data_files=data_files, cache_dir=cache_dir)\r\n```\r\nafter having removed the extracted files:\r\n```python\r\nassert sorted((cache_dir \/ \"downloads\" \/ \"extracted\").iterdir()) == []\r\n```\r\n\r\nI get the logging message:\r\n```shell\r\nWARNING datasets.builder:builder.py:531 Reusing dataset json ...\r\n```","Do you confirm the extracted folder stays empty after reloading?","> \r\n> \r\n> Do you confirm the extracted folder stays empty after reloading?\r\n\r\nYes, I have the above mentioned assertion on the emptiness of the extracted folder:\r\n```python\r\nassert sorted((cache_dir \/ \"downloads\" \/ \"extracted\").iterdir()) == []\r\n```\r\n"],"created_at":1625644576000,"updated_at":1626685698000,"closed_at":1626685698000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"I'm loading a dataset constituted of 44 GB of compressed JSON files.\r\n\r\nWhen loading the dataset with the JSON script, extracting the files create about 200 GB of uncompressed files before creating the 180GB of arrow cache tables\r\n\r\nHaving a simple way to delete the extracted files after usage (or even better, to stream extraction\/delete) would be nice to avoid disk cluter.\r\n\r\nI can maybe tackle this one in the JSON script unless you want a more general solution.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2604\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2603","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2603\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2603\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2603\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2603","id":938588149,"node_id":"MDExOlB1bGxSZXF1ZXN0Njg0OTQ0ODcz","number":2603,"title":"Fix DuplicatedKeysError in 
omp","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/6","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6\/labels","id":6836458,"node_id":"MDk6TWlsZXN0b25lNjgzNjQ1OA==","number":6,"title":"1.10","description":"Next minor release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":29,"state":"closed","created_at":1623178113000,"updated_at":1626881809000,"due_on":1628146800000,"closed_at":1626881809000},"comments":[],"created_at":1625643512000,"updated_at":1626099041000,"closed_at":1625662595000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2603","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2603","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2603.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2603.patch"},"body":"Close #2598.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2603\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2602","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2602\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2602\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2602\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2602","id":938555712,"node_id":"MDExOlB1bGxSZXF1ZXN0Njg0OTE5MjMy","number":2602,"title":"Remove import of transformers","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/6","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6\/labels","id":6836458,"node_id":"MDk6TWlsZXN0b25lNjgzNjQ1OA==","number":6,"title":"1.10","description":"Next minor 
release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":29,"state":"closed","created_at":1623178113000,"updated_at":1626881809000,"due_on":1628146800000,"closed_at":1626881809000},"comments":[],"created_at":1625641098000,"updated_at":1626099022000,"closed_at":1625646531000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2602","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2602","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2602.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2602.patch"},"body":"When pickling a tokenizer within multiprocessing, check that is instance of transformers PreTrainedTokenizerBase without importing transformers.\r\n\r\nRelated to huggingface\/transformers#12549 and #502.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2602\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2601","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2601\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2601\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2601\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2601","id":938096396,"node_id":"MDExOlB1bGxSZXF1ZXN0Njg0NTQyNjY5","number":2601,"title":"Fix `filter` with multiprocessing in case all samples are 
discarded","user":{"login":"mxschmdt","id":4904985,"node_id":"MDQ6VXNlcjQ5MDQ5ODU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4904985?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mxschmdt","html_url":"https:\/\/github.com\/mxschmdt","followers_url":"https:\/\/api.github.com\/users\/mxschmdt\/followers","following_url":"https:\/\/api.github.com\/users\/mxschmdt\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mxschmdt\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mxschmdt\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mxschmdt\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mxschmdt\/orgs","repos_url":"https:\/\/api.github.com\/users\/mxschmdt\/repos","events_url":"https:\/\/api.github.com\/users\/mxschmdt\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mxschmdt\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/6","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6\/labels","id":6836458,"node_id":"MDk6TWlsZXN0b25lNjgzNjQ1OA==","number":6,"title":"1.10","description":"Next minor release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":29,"state":"closed","created_at":1623178113000,"updated_at":1626881809000,"due_on":1628146800000,"closed_at":1626881809000},"comments":[],"created_at":1625591188000,"updated_at":1626099035000,"closed_at":1625662231000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2601","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2601","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2601.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2601.patch"},"body":"Fixes #2600 \r\n\r\nAlso I moved the check for `num_proc` larger than dataset size added in #2566 up so that multiprocessing is not used with one process.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2601\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2600","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2600\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2600\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2600\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2600","id":938086745,"node_id":"MDU6SXNzdWU5MzgwODY3NDU=","number":2600,"title":"Crash when using multiprocessing (`num_proc` > 1) on `filter` and all samples are discarded","user":{"login":"mxschmdt","id":4904985,"node_id":"MDQ6VXNlcjQ5MDQ5ODU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4904985?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mxschmdt","html_url":"https:\/\/github.com\/mxschmdt","followers_url":"https:\/\/api.github.com\/users\/mxschmdt\/followers","following_url":"https:\/\/api.github.com\/users\/mxschmdt\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mxschmdt\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mxschmdt\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mxschmdt\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mxschmdt\/orgs","repos_url":"https:\/\/api.github.com\/users\/mxschmdt\/repos","events_url":"https:\/\/api.github.com\/users\/mxschmdt\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mxschmdt\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1625590405000,"updated_at":1625662231000,"closed_at":1625662231000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nIf `filter` is applied to a dataset using multiprocessing (`num_proc` > 1) and all sharded datasets are empty afterwards (due to all samples being discarded), the program crashes.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import Dataset\r\ndata = Dataset.from_dict({'id': [0,1]})\r\ndata.filter(lambda x: False, num_proc=2)\r\n```\r\n\r\n## Expected results\r\nAn empty table should be returned without crashing.\r\n\r\n## Actual results\r\n```\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/home\/user\/venv\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 185, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"\/home\/user\/venv\/lib\/python3.8\/site-packages\/datasets\/fingerprint.py\", line 397, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"\/home\/user\/venv\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 2143, in filter\r\n return self.map(\r\n File \"\/home\/user\/venv\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 1738, in map\r\n result = concatenate_datasets(transformed_shards)\r\n File \"\/home\/user\/venv\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 3267, in concatenate_datasets\r\n table = concat_tables(tables_to_concat, axis=axis)\r\n File 
\"\/home\/user\/venv\/lib\/python3.8\/site-packages\/datasets\/table.py\", line 853, in concat_tables\r\n return ConcatenationTable.from_tables(tables, axis=axis)\r\n File \"\/home\/user\/venv\/lib\/python3.8\/site-packages\/datasets\/table.py\", line 713, in from_tables\r\n blocks = to_blocks(tables[0])\r\nIndexError: list index out of range\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.9.0\r\n- Platform: Linux-5.12.11-300.fc34.x86_64-x86_64-with-glibc2.2.5\r\n- Python version: 3.8.10\r\n- PyArrow version: 3.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2600\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2599","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2599\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2599\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2599\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2599","id":937980229,"node_id":"MDExOlB1bGxSZXF1ZXN0Njg0NDQ2MTYx","number":2599,"title":"Update processing.rst with other export formats","user":{"login":"TevenLeScao","id":26709476,"node_id":"MDQ6VXNlcjI2NzA5NDc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26709476?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/TevenLeScao","html_url":"https:\/\/github.com\/TevenLeScao","followers_url":"https:\/\/api.github.com\/users\/TevenLeScao\/followers","following_url":"https:\/\/api.github.com\/users\/TevenLeScao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/TevenLeScao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/TevenLeScao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/TevenLeScao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/TevenLeScao\/orgs","repos_url":"https:\/\/api.github.com\/users\/TevenLeScao\/repos","events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/6","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6\/labels","id":6836458,"node_id":"MDk6TWlsZXN0b25lNjgzNjQ1OA==","number":6,"title":"1.10","description":"Next minor 
release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":29,"state":"closed","created_at":1623178113000,"updated_at":1626881809000,"due_on":1628146800000,"closed_at":1626881809000},"comments":[],"created_at":1625583038000,"updated_at":1626099016000,"closed_at":1625645148000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2599","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2599","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2599.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2599.patch"},"body":"Add other supported export formats than CSV in the docs.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2599\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2598","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2598\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2598\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2598\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2598","id":937930632,"node_id":"MDU6SXNzdWU5Mzc5MzA2MzI=","number":2598,"title":"Unable to download omp 
dataset","user":{"login":"erikadistefano","id":25797960,"node_id":"MDQ6VXNlcjI1Nzk3OTYw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25797960?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/erikadistefano","html_url":"https:\/\/github.com\/erikadistefano","followers_url":"https:\/\/api.github.com\/users\/erikadistefano\/followers","following_url":"https:\/\/api.github.com\/users\/erikadistefano\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/erikadistefano\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/erikadistefano\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/erikadistefano\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/erikadistefano\/orgs","repos_url":"https:\/\/api.github.com\/users\/erikadistefano\/repos","events_url":"https:\/\/api.github.com\/users\/erikadistefano\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/erikadistefano\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @erikadistefano , thanks for reporting the issue.\r\n\r\nI have created a Pull 
Request that should fix it. \r\n\r\nOnce merged into master, feel free to update your installed `datasets` library (either by installing it from our GitHub master branch or waiting until our next release) to be able to load omp dataset."],"created_at":1625580052000,"updated_at":1625662595000,"closed_at":1625662595000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nThe omp dataset cannot be downloaded because of a DuplicatedKeysError\r\n\r\n## Steps to reproduce the bug\r\nfrom datasets import load_dataset\r\nomp = load_dataset('omp', 'posts_labeled')\r\nprint(omp)\r\n\r\n## Expected results\r\nThis code should download the omp dataset and print the dictionary\r\n\r\n## Actual results\r\nDownloading and preparing dataset omp\/posts_labeled (download: 1.27 MiB, generated: 13.31 MiB, post-processed: Unknown size, total: 14.58 MiB) to \/home\/erika_distefano\/.cache\/huggingface\/datasets\/omp\/posts_labeled\/1.1.0\/2fe5b067be3bff1d4588d5b0cbb9b5b22ae1b9d5b026a8ff572cd389f862735b...\r\n0 examples [00:00, ? examples\/s]2021-07-06 09:43:55.868815: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.11.0\r\nTraceback (most recent call last): \r\n File \"\/home\/erika_distefano\/.local\/lib\/python3.6\/site-packages\/datasets\/builder.py\", line 990, in _prepare_split\r\n writer.write(example, key)\r\n File \"\/home\/erika_distefano\/.local\/lib\/python3.6\/site-packages\/datasets\/arrow_writer.py\", line 338, in write\r\n self.check_duplicate_keys()\r\n File \"\/home\/erika_distefano\/.local\/lib\/python3.6\/site-packages\/datasets\/arrow_writer.py\", line 349, in check_duplicate_keys\r\n raise DuplicatedKeysError(key)\r\ndatasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !\r\nFound duplicate Key: 3326\r\nKeys should be unique and deterministic in nature\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"hf_datasets.py\", line 32, in \r\n omp = load_dataset('omp', 'posts_labeled')\r\n File \"\/home\/erika_distefano\/.local\/lib\/python3.6\/site-packages\/datasets\/load.py\", line 748, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"\/home\/erika_distefano\/.local\/lib\/python3.6\/site-packages\/datasets\/builder.py\", line 575, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/home\/erika_distefano\/.local\/lib\/python3.6\/site-packages\/datasets\/builder.py\", line 652, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/home\/erika_distefano\/.local\/lib\/python3.6\/site-packages\/datasets\/builder.py\", line 992, in _prepare_split\r\n num_examples, num_bytes = writer.finalize()\r\n File \"\/home\/erika_distefano\/.local\/lib\/python3.6\/site-packages\/datasets\/arrow_writer.py\", line 409, in finalize\r\n self.check_duplicate_keys()\r\n File \"\/home\/erika_distefano\/.local\/lib\/python3.6\/site-packages\/datasets\/arrow_writer.py\", line 349, in check_duplicate_keys\r\n raise DuplicatedKeysError(key)\r\ndatasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !\r\nFound duplicate Key: 3326\r\nKeys should be unique and deterministic in nature\r\n\r\n## Environment info\r\n- `datasets` version: 1.8.0\r\n- Platform: Ubuntu 18.04.4 LTS\r\n- Python version: 3.6.9\r\n- PyArrow version: 
3.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2598\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2597","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2597\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2597\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2597\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2597","id":937917770,"node_id":"MDExOlB1bGxSZXF1ZXN0Njg0Mzk0MDIz","number":2597,"title":"Remove redundant prepare_module","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":2851292821,"node_id":"MDU6TGFiZWwyODUxMjkyODIx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/refactoring","name":"refactoring","color":"B67A40","default":false,"description":"Restructuring existing code without changing its external behavior"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/6","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6\/labels","id":6836458,"node_id":"MDk6TWlsZXN0b25lNjgzNjQ1OA==","number":6,"title":"1.10","description":"Next minor 
release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":29,"state":"closed","created_at":1623178113000,"updated_at":1626881809000,"due_on":1628146800000,"closed_at":1626881809000},"comments":[],"created_at":1625579265000,"updated_at":1626099052000,"closed_at":1625662906000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2597","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2597","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2597.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2597.patch"},"body":"I have noticed that after implementing `load_dataset_builder` (#2500), there is a redundant call to `prepare_module`.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2597\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2596","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2596\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2596\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2596\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2596","id":937598914,"node_id":"MDU6SXNzdWU5Mzc1OTg5MTQ=","number":2596,"title":"Transformer Class on 
dataset","user":{"login":"arita37","id":18707623,"node_id":"MDQ6VXNlcjE4NzA3NjIz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/18707623?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/arita37","html_url":"https:\/\/github.com\/arita37","followers_url":"https:\/\/api.github.com\/users\/arita37\/followers","following_url":"https:\/\/api.github.com\/users\/arita37\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/arita37\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/arita37\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/arita37\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/arita37\/orgs","repos_url":"https:\/\/api.github.com\/users\/arita37\/repos","events_url":"https:\/\/api.github.com\/users\/arita37\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/arita37\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Do you have an example in mind that shows how this could be useful ?","Example:\n\nMerge 2 datasets into one datasets\n\nLabel extraction from dataset\n\ndataset(text, label)\n \u2014> dataset(text, newlabel)\n\nTextCleaning.\n\n\nFor image dataset, \nTransformation are easier (ie linear algebra).\n\n\n\n\n\n\n> On Jul 6, 2021, at 17:39, Quentin Lhoest ***@***.***> wrote:\n> \n> \ufeff\n> Hi ! Do you have an example in mind that shows how this could be useful ?\n> \n> \u2014\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub, or unsubscribe.\n","There are already a few transformations that you can apply on a dataset using methods like `dataset.map()`.\r\nYou can find examples in the documentation here:\r\nhttps:\/\/huggingface.co\/docs\/datasets\/processing.html\r\n\r\nYou can merge two datasets with `concatenate_datasets()` or do label extraction with `dataset.map()` for example","Ok, sure.\n\nThanks for pointing on functional part.\nMy question is more\n\u201cPhilosophical\u201d\/Design perspective.\n\nThere are 2 perspetive:\n Add transformation methods to \n Dataset Class\n\n\n OR Create a Transformer Class\n which operates on Dataset Class.\n\nT(Dataset) \u2014> Dataset\n\ndatasetnew = MyTransform.transform(dataset)\ndatasetNew.save(path)\n\n\nWhat would be the difficulty\nof implementing a Transformer Class\noperating at dataset level ?\n\n\nthanks\n\n\n\n\n\n\n\n\n\n> On Jul 6, 2021, at 22:00, Quentin Lhoest ***@***.***> wrote:\n> \n> \ufeff\n> There are already a few transformations that you can apply on a dataset using methods like dataset.map().\n> You can find examples in the documentation here:\n> https:\/\/huggingface.co\/docs\/datasets\/processing.html\n> \n> You can merge two datasets with concatenate_datasets() or do label extraction with dataset.map() for example\n> \n> \u2014\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub, or unsubscribe.\n","I can imagine that this would be a useful API to implement processing pipelines as transforms. 
They could be used to perform higher level transforms compared to the atomic transforms allowed by methods like map, filter, etc.\r\n\r\nI guess if you find any transform that could be useful for text dataset processing, image dataset processing etc. we could definitely start having such transforms :)","Thanks for reply.\n\nWhat would be the constraints\nto have\nDataset \u2014> Dataset consistency ?\n\nMain issue would be\nlarger than memory dataset and\nserialization on disk.\n\nTechnically,\none still process at atomic level\nand try to wrap the full results\ninto Dataset\u2026. (!)\n\nWhat would you think ?\n\n\n\n\n\n\n\n\n> On Jul 7, 2021, at 16:51, Quentin Lhoest ***@***.***> wrote:\n> \n> \ufeff\n> I can imagine that this would be a useful API to implement processing pipelines as transforms. They could be used to perform higher level transforms compared to the atomic transforms allowed by methods like map, filter, etc.\n> \n> I guess if you find any transform that could be useful for text dataset processing, image dataset processing etc. we could definitely start having such transforms :)\n> \n> \u2014\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub, or unsubscribe.\n","We can be pretty flexible and not impose any constraints for transforms.\r\n\r\nMoreover, this library is designed to support datasets bigger than memory. The datasets are loaded from the disk via memory mapping, without filling up RAM. Even processing functions like `map` work in a batched fashion to not fill up your RAM. So this shouldn't be an issue","Ok thanks.\n\nBut, Dataset has various flavors.\nIn current design of Dataset,\n how the serialization on disk is done (?)\n\n\nThe main issue is serialization \nof newdataset= Transform(Dataset)\n (ie thats why am referring to Out Of memory dataset\u2026):\n\n Should be part of Transform or part of dataset ?\n\n\n\n\nMaybe, not, since the output is aimed to feed model in memory (?)\n\n\n\n\n\n\n\n\n> On Jul 7, 2021, at 18:04, Quentin Lhoest ***@***.***> wrote:\n> \n> \ufeff\n> We can be pretty flexible and not impose any constraints for transforms.\n> \n> Moreover, this library is designed to support datasets bigger than memory. The datasets are loaded from the disk via memory mapping, without filling up RAM. Even processing functions like map work in a batched fashion to not fill up your RAM. So this shouldn't be an issue\n> \n> \u2014\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub, or unsubscribe.\n","I'm not sure I understand, could you elaborate a bit more please ?\r\n\r\nEach dataset is a wrapper of a PyArrow Table that contains all the data. 
The table is loaded from an arrow file on the disk.\r\nWe have an ArrowWriter and ArrowReader class to write\/read arrow tables on disk or in in-memory buffers."],"created_at":1625556435000,"updated_at":1625732525000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Just wondering if you have intenttion to create\r\n\r\nTransformerClass :\r\n dataset --> dataset\r\n\r\nand make determnistic transformation (ie not fit).\r\n\r\n\r\n\r\n\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2596\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2595","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2595\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2595\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2595\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2595","id":937483120,"node_id":"MDU6SXNzdWU5Mzc0ODMxMjA=","number":2595,"title":"ModuleNotFoundError: No module named 'datasets.tasks' while importing common voice datasets","user":{"login":"profsatwinder","id":41314912,"node_id":"MDQ6VXNlcjQxMzE0OTEy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/41314912?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/profsatwinder","html_url":"https:\/\/github.com\/profsatwinder","followers_url":"https:\/\/api.github.com\/users\/profsatwinder\/followers","following_url":"https:\/\/api.github.com\/users\/profsatwinder\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/profsatwinder\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/profsatwinder\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/profsatwinder\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/profsatwinder\/orgs","repos_url":"https:\/\/api.github.com\/users\/profsatwinder\/repos","events_url":"https:\/\/api.github.com\/users\/profsatwinder\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/profsatwinder\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @profsatwinder.\r\n\r\nIt looks like you are using an old version of `datasets`. Please update it with `pip install -U datasets` and indicate if the problem persists.","@albertvillanova Thanks for the information. I updated it to 1.9.0 and the issue is resolved. Thanks again. 
"],"created_at":1625541655000,"updated_at":1625551189000,"closed_at":1625551189000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Error traceback:\r\n---------------------------------------------------------------------------\r\nModuleNotFoundError Traceback (most recent call last)\r\n in ()\r\n 1 from datasets import load_dataset, load_metric\r\n 2 \r\n----> 3 common_voice_train = load_dataset(\"common_voice\", \"pa-IN\", split=\"train+validation\")\r\n 4 common_voice_test = load_dataset(\"common_voice\", \"pa-IN\", split=\"test\")\r\n\r\n9 frames\r\n\/root\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/common_voice\/078d412587e9efeb0ae2e574da99c31e18844c496008d53dc5c60f4159ed639b\/common_voice.py in ()\r\n 19 \r\n 20 import datasets\r\n---> 21 from datasets.tasks import AutomaticSpeechRecognition\r\n 22 \r\n 23 \r\n\r\nModuleNotFoundError: No module named 'datasets.tasks'","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2595\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2594","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2594\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2594\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2594\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2594","id":937294772,"node_id":"MDExOlB1bGxSZXF1ZXN0NjgzODc0NjIz","number":2594,"title":"Fix BibTeX entry","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1625509450000,"updated_at":1625547578000,"closed_at":1625547578000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2594","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2594","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2594.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2594.patch"},"body":"Fix BibTeX entry.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2594\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2593","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2593\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2593\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2593\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2593","id":937242137,"node_id":"MDExOlB1bGxSZXF1ZXN0NjgzODMwMjcy","number":2593,"title":"Support pandas 1.3.0 read_csv","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1625503204000,"updated_at":1625505254000,"closed_at":1625505254000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2593","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2593","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2593.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2593.patch"},"body":"Workaround for this issue in pandas 1.3.0 : https:\/\/github.com\/pandas-dev\/pandas\/issues\/42387\r\n\r\nThe csv reader raises an error:\r\n```python\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/pandas\/io\/parsers\/readers.py in _refine_defaults_read(dialect, delimiter, delim_whitespace, engine, sep, error_bad_lines, warn_bad_lines, on_bad_lines, names, prefix, defaults)\r\n 1304 \r\n 1305 if names is not lib.no_default and prefix is not lib.no_default:\r\n-> 1306 raise ValueError(\"Specified named and prefix; you can only specify one.\")\r\n 1307 \r\n 1308 kwds[\"names\"] = None if names is lib.no_default else names\r\n\r\nValueError: Specified named and prefix; you can only specify one.\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2593\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2592","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2592\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2592\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2592\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2592","id":937060559,"node_id":"MDExOlB1bGxSZXF1ZXN0NjgzNjc2MjA4","number":2592,"title":"Add c4.noclean infos","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1625489500000,"updated_at":1625490953000,"closed_at":1625490952000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2592","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2592","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2592.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2592.patch"},"body":"Adding the data files checksums and the dataset size of the c4.noclean configuration of the C4 dataset","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2592\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2591","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2591\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2591\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2591\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2591","id":936957975,"node_id":"MDU6SXNzdWU5MzY5NTc5NzU=","number":2591,"title":"Cached dataset overflowing disk 
space","user":{"login":"BirgerMoell","id":1704131,"node_id":"MDQ6VXNlcjE3MDQxMzE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1704131?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/BirgerMoell","html_url":"https:\/\/github.com\/BirgerMoell","followers_url":"https:\/\/api.github.com\/users\/BirgerMoell\/followers","following_url":"https:\/\/api.github.com\/users\/BirgerMoell\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/BirgerMoell\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/BirgerMoell\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/BirgerMoell\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/BirgerMoell\/orgs","repos_url":"https:\/\/api.github.com\/users\/BirgerMoell\/repos","events_url":"https:\/\/api.github.com\/users\/BirgerMoell\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/BirgerMoell\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! I'm transferring this issue over to `datasets`","I'm using the datasets concatenate dataset to combine the datasets and then train.\r\ntrain_dataset = concatenate_datasets([dataset1, dataset2, common_voice_train])\r\n\r\n","Hi @BirgerMoell.\r\n\r\nYou have several options:\r\n- to set caching to be stored on a different path location, other than the default one (`~\/.cache\/huggingface\/datasets`):\r\n - either setting the environment variable `HF_DATASETS_CACHE` with the path to the new cache location\r\n - or by passing it with the parameter `cache_dir` when loading each of the datasets: `dataset = load_dataset(..., cache_dir=your_new_location)`\r\n\r\n You can get all the information in the docs: https:\/\/huggingface.co\/docs\/datasets\/loading_datasets.html#cache-directory\r\n- I wouldn't recommend disabling caching, because current implementation generates cache files anyway, although in a temporary directory and they are deleted when the session closes. See details here: https:\/\/huggingface.co\/docs\/datasets\/processing.html#enable-or-disable-caching\r\n- You could alternatively load the datasets in streaming mode. This is a new feature which allows loading the datasets without downloading the entire files. 
More information here: https:\/\/huggingface.co\/docs\/datasets\/dataset_streaming.html","Hi @BirgerMoell,\r\n\r\nWe are planning to add a new feature to datasets, which could be interesting in your case: Add the option to delete temporary files (decompressed files) from the cache directory (see: #2481, #2604).\r\n\r\nWe will ping you once this feature is implemented, so that the size of your cache directory will be considerably reduced."],"created_at":1625481799000,"updated_at":1626685699000,"closed_at":1626685699000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I'm training a Swedish Wav2vec2 model on a Linux GPU and having issues that the huggingface cached dataset folder is completely filling up my disk space (I'm training on a dataset of around 500 gb).\r\n\r\nThe cache folder is 500gb (and now my disk space is full).\r\n\r\nIs there a way to toggle caching or set the caching to be stored on a different device (I have another drive with 4 tb that could hold the caching files).\r\n\r\nThis might not technically be a bug, but I was unsure and I felt that the bug was the closest one.\r\n\r\nTraceback (most recent call last):\r\n File \"\/home\/birger\/miniconda3\/envs\/wav2vec2\/lib\/python3.7\/site-packages\/multiprocess\/pool.py\", line 121, in worker\r\n result = (True, func(*args, **kwds))\r\n File \"\/home\/birger\/miniconda3\/envs\/wav2vec2\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 186, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"\/home\/birger\/miniconda3\/envs\/wav2vec2\/lib\/python3.7\/site-packages\/datasets\/fingerprint.py\", line 397, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"\/home\/birger\/miniconda3\/envs\/wav2vec2\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 1983, in _map_single\r\n writer.finalize()\r\n File \"\/home\/birger\/miniconda3\/envs\/wav2vec2\/lib\/python3.7\/site-packages\/datasets\/arrow_writer.py\", line 418, in finalize\r\n self.pa_writer.close()\r\n File \"pyarrow\/ipc.pxi\", line 402, in pyarrow.lib._CRecordBatchWriter.close\r\n File \"pyarrow\/error.pxi\", line 97, in pyarrow.lib.check_status\r\nOSError: [Errno 28] Error writing bytes to file. 
Detail: [errno 28] No space left on device\r\n\"\"\"\r\n\r\nThe above exception was the direct cause of the following exception:\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2591\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2590","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2590\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2590\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2590\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2590","id":936954348,"node_id":"MDExOlB1bGxSZXF1ZXN0NjgzNTg1MDg2","number":2590,"title":"Add language tags","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1625481597000,"updated_at":1625482728000,"closed_at":1625482728000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2590","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2590","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2590.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2590.patch"},"body":"This PR adds some missing language tags needed for ASR datasets in #2565 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2590\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2589","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2589\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2589\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2589\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2589","id":936825060,"node_id":"MDExOlB1bGxSZXF1ZXN0NjgzNDc0OTQ0","number":2589,"title":"Support multilabel 
metrics","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/6","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6\/labels","id":6836458,"node_id":"MDk6TWlsZXN0b25lNjgzNjQ1OA==","number":6,"title":"1.10","description":"Next minor release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":29,"state":"closed","created_at":1623178113000,"updated_at":1626881809000,"due_on":1628146800000,"closed_at":1626881809000},"comments":["Hi ! 
Thanks for the fix :)\r\n\r\nIf I understand correctly, `OptionalSequence` doesn't have an associated arrow type that we know in advance unlike the other feature types, because it depends on the type of the examples.\r\n\r\nFor example, I tested this and it raises an error:\r\n```python\r\nimport datasets as ds\r\nimport pyarrow as pa\r\n\r\nfeatures = ds.Features({\"a\": ds.features.OptionalSequence(ds.Value(\"int32\"))})\r\nbatch = {\"a\": [[0]]}\r\n\r\nwriter = ds.ArrowWriter(features=features, stream=pa.BufferOutputStream())\r\nwriter.write_batch(batch)\r\n# ArrowInvalid: Could not convert [0] with type list: tried to convert to int\r\n```\r\nThis error happens because `features.type` is `StructType(struct)`.\r\n\r\nAnother way to add support for multilabel would be to have several configurations for these metrics. By default it would set the features without sequences, and for the multi label configuration it would use features with sequences. Let me know what you think","Hi @lhoestq, thanks for your feedback :)\r\n\r\nDefinitely, your suggested approach is simpler. I am going to refactor all my PR unless we could envision some other use cases where an OptionalSequence might be convenient, but for now I can't think of any..."],"created_at":1625473165000,"updated_at":1626099130000,"closed_at":1625733615000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2589","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2589","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2589.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2589.patch"},"body":"Currently, multilabel metrics are not supported because `predictions` and `references` are defined as `Value(\"int32\")`.\r\n\r\nThis PR creates a new feature type `OptionalSequence` which can act as either `Value(\"int32\")` or `Sequence(Value(\"int32\"))`, depending on the data passed.\r\n\r\n\r\nClose #2554.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2589\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2588","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2588\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2588\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2588\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2588","id":936795541,"node_id":"MDExOlB1bGxSZXF1ZXN0NjgzNDQ5Njky","number":2588,"title":"Fix 
test_is_small_dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/6","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6\/labels","id":6836458,"node_id":"MDk6TWlsZXN0b25lNjgzNjQ1OA==","number":6,"title":"1.10","description":"Next minor release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":29,"state":"closed","created_at":1623178113000,"updated_at":1626881809000,"due_on":1628146800000,"closed_at":1626881809000},"comments":[],"created_at":1625471186000,"updated_at":1626099011000,"closed_at":1625591370000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2588","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2588","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2588.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2588.patch"},"body":"Remove environment variable fixture `env_max_in_memory_dataset_size`. 
This fixture does not work because env variable is read in datasets.config when first loading datasets, and it is never reread during tests.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2588\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2587","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2587\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2587\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2587\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2587","id":936771339,"node_id":"MDExOlB1bGxSZXF1ZXN0NjgzNDI5NjQy","number":2587,"title":"Add aiohttp to tests extras require","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1625469241000,"updated_at":1625475878000,"closed_at":1625475878000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2587","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2587","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2587.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2587.patch"},"body":"Currently, none of the streaming tests are runned within our CI test suite, because the streaming tests require aiohttp and this is missing from our tests extras require dependencies.\r\n\r\nOur CI test suite should be exhaustive and test all the library functionalities.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2587\/timeline","performed_via_github_app":null,"is_pull_request":true} 
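As a hedged illustration of the point above (the config value is read once at import time, so changing the environment variable later has no effect), a common pytest pattern is to monkeypatch the already-imported config attribute; the attribute name below is illustrative, not necessarily the one used in the repository:

```python
import pytest

import datasets.config


@pytest.fixture
def small_max_in_memory_dataset_size(monkeypatch):
    # Patch the loaded config value directly instead of the env var,
    # which would only be re-read on a fresh import of `datasets`.
    monkeypatch.setattr(datasets.config, "MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES", 100)
    return 100
```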
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2586","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2586\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2586\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2586\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2586","id":936747588,"node_id":"MDExOlB1bGxSZXF1ZXN0NjgzNDEwMDU3","number":2586,"title":"Fix misalignment in SQuAD","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/6","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6\/labels","id":6836458,"node_id":"MDk6TWlsZXN0b25lNjgzNjQ1OA==","number":6,"title":"1.10","description":"Next minor 
release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":29,"state":"closed","created_at":1623178113000,"updated_at":1626881809000,"due_on":1628146800000,"closed_at":1626881809000},"comments":[],"created_at":1625467340000,"updated_at":1626099070000,"closed_at":1625663931000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2586","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2586","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2586.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2586.patch"},"body":"Fix misalignment between:\r\n- the answer text and\r\n- the answer_start within the context\r\n\r\nby keeping original leading blank spaces in the context.\r\n\r\nFix #2585.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2586\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2585","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2585\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2585\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2585\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2585","id":936484419,"node_id":"MDU6SXNzdWU5MzY0ODQ0MTk=","number":2585,"title":"sqaud_v2 dataset contains misalignment between the answer text and the context value at the answer 
index","user":{"login":"mmajurski","id":9354454,"node_id":"MDQ6VXNlcjkzNTQ0NTQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9354454?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mmajurski","html_url":"https:\/\/github.com\/mmajurski","followers_url":"https:\/\/api.github.com\/users\/mmajurski\/followers","following_url":"https:\/\/api.github.com\/users\/mmajurski\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mmajurski\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mmajurski\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mmajurski\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mmajurski\/orgs","repos_url":"https:\/\/api.github.com\/users\/mmajurski\/repos","events_url":"https:\/\/api.github.com\/users\/mmajurski\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mmajurski\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @mmajurski, thanks for reporting this issue.\r\n\r\nIndeed this misalignment arises because the source dataset context field contains leading 
blank spaces (and these are counted within the answer_start), while our datasets loading script removes these leading blank spaces.\r\n\r\nI'm going to fix our script so that all leading blank spaces in the source dataset are kept, and there is no misalignment between the answer text and the answer_start within the context.","If you are going to be altering the data cleaning from the source Squad dataset, here is one thing to consider.\r\nThere are occasional double spaces separating words which it might be nice to get rid of. \r\n\r\nEither way, thank you."],"created_at":1625413189000,"updated_at":1625663931000,"closed_at":1625663931000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nThe built in huggingface squad_v2 dataset that you can access via datasets.load_dataset contains mis-alignment between the answers['text'] and the characters in the context at the location specified by answers['answer_start'].\r\n\r\nFor example:\r\nid = '56d1f453e7d4791d009025bd'\r\nanswers = {'text': ['Pure Land'], 'answer_start': [146]}\r\nHowever the actual text in context at location 146 is 'ure Land,'\r\nWhich is an off-by-one error from the correct answer.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nimport datasets\r\n\r\ndef check_context_answer_alignment(example):\r\n for a_idx in range(len(example['answers']['text'])):\r\n # check raw dataset for answer consistency between context and answer\r\n answer_text = example['answers']['text'][a_idx]\r\n a_st_idx = example['answers']['answer_start'][a_idx]\r\n a_end_idx = a_st_idx + len(example['answers']['text'][a_idx])\r\n answer_text_from_context = example['context'][a_st_idx:a_end_idx]\r\n if answer_text != answer_text_from_context:\r\n #print(example['id'])\r\n return False\r\n return True\r\n\r\ndataset = datasets.load_dataset('squad_v2', split='train', keep_in_memory=True)\r\n\r\nstart_len = len(dataset)\r\ndataset = dataset.filter(check_context_answer_alignment,\r\n num_proc=1,\r\n keep_in_memory=True)\r\nend_len = len(dataset)\r\nprint('{} instances contain mis-alignment between the answer text and answer index.'.format(start_len - end_len))\r\n```\r\n\r\n## Expected results\r\nThis code should result in 0 rows being filtered out from the dataset.\r\n\r\n## Actual results\r\nThis filter command results in 258 rows being flagged as containing a discrepancy between the text contained within answers['text'] and the text in example['context'] at the answers['answer_start'] location.\r\n\r\nThis code will reproduce the problem and produce the following count:\r\n\"258 instances contain mis-alignment between the answer text and answer index.\"\r\n\r\n## Environment info\r\nSteps to rebuilt the Conda environment:\r\n```\r\n# create a virtual environment to stuff all these packages into\r\nconda create -n round8 python=3.8 -y\r\n\r\n# activate the virtual environment\r\nconda activate round8\r\n\r\n# install pytorch (best done through conda to handle cuda dependencies)\r\nconda install pytorch torchvision torchtext cudatoolkit=11.1 -c pytorch-lts -c nvidia\r\n\r\npip install jsonpickle transformers datasets matplotlib\r\n```\r\n\r\nOS: Ubuntu 20.04\r\nPython 3.8\r\n\r\nResult of `conda env export`:\r\n```\r\nname: round8\r\nchannels:\r\n - pytorch-lts\r\n - nvidia\r\n - defaults\r\ndependencies:\r\n - _libgcc_mutex=0.1=main\r\n - _openmp_mutex=4.5=1_gnu\r\n - blas=1.0=mkl\r\n - brotlipy=0.7.0=py38h27cfd23_1003\r\n - bzip2=1.0.8=h7b6447c_0\r\n - ca-certificates=2021.5.25=h06a4308_1\r\n - 
certifi=2021.5.30=py38h06a4308_0\r\n - cffi=1.14.5=py38h261ae71_0\r\n - chardet=4.0.0=py38h06a4308_1003\r\n - cryptography=3.4.7=py38hd23ed53_0\r\n - cudatoolkit=11.1.74=h6bb024c_0\r\n - ffmpeg=4.2.2=h20bf706_0\r\n - freetype=2.10.4=h5ab3b9f_0\r\n - gmp=6.2.1=h2531618_2\r\n - gnutls=3.6.15=he1e5248_0\r\n - idna=2.10=pyhd3eb1b0_0\r\n - intel-openmp=2021.2.0=h06a4308_610\r\n - jpeg=9b=h024ee3a_2\r\n - lame=3.100=h7b6447c_0\r\n - lcms2=2.12=h3be6417_0\r\n - ld_impl_linux-64=2.35.1=h7274673_9\r\n - libffi=3.3=he6710b0_2\r\n - libgcc-ng=9.3.0=h5101ec6_17\r\n - libgomp=9.3.0=h5101ec6_17\r\n - libidn2=2.3.1=h27cfd23_0\r\n - libopus=1.3.1=h7b6447c_0\r\n - libpng=1.6.37=hbc83047_0\r\n - libstdcxx-ng=9.3.0=hd4cf53a_17\r\n - libtasn1=4.16.0=h27cfd23_0\r\n - libtiff=4.2.0=h85742a9_0\r\n - libunistring=0.9.10=h27cfd23_0\r\n - libuv=1.40.0=h7b6447c_0\r\n - libvpx=1.7.0=h439df22_0\r\n - libwebp-base=1.2.0=h27cfd23_0\r\n - lz4-c=1.9.3=h2531618_0\r\n - mkl=2021.2.0=h06a4308_296\r\n - mkl-service=2.3.0=py38h27cfd23_1\r\n - mkl_fft=1.3.0=py38h42c9631_2\r\n - mkl_random=1.2.1=py38ha9443f7_2\r\n - ncurses=6.2=he6710b0_1\r\n - nettle=3.7.3=hbbd107a_1\r\n - ninja=1.10.2=hff7bd54_1\r\n - numpy=1.20.2=py38h2d18471_0\r\n - numpy-base=1.20.2=py38hfae3a4d_0\r\n - olefile=0.46=py_0\r\n - openh264=2.1.0=hd408876_0\r\n - openssl=1.1.1k=h27cfd23_0\r\n - pillow=8.2.0=py38he98fc37_0\r\n - pip=21.1.2=py38h06a4308_0\r\n - pycparser=2.20=py_2\r\n - pyopenssl=20.0.1=pyhd3eb1b0_1\r\n - pysocks=1.7.1=py38h06a4308_0\r\n - python=3.8.10=h12debd9_8\r\n - pytorch=1.8.1=py3.8_cuda11.1_cudnn8.0.5_0\r\n - readline=8.1=h27cfd23_0\r\n - requests=2.25.1=pyhd3eb1b0_0\r\n - setuptools=52.0.0=py38h06a4308_0\r\n - six=1.16.0=pyhd3eb1b0_0\r\n - sqlite=3.35.4=hdfb4753_0\r\n - tk=8.6.10=hbc83047_0\r\n - torchtext=0.9.1=py38\r\n - torchvision=0.9.1=py38_cu111\r\n - typing_extensions=3.7.4.3=pyha847dfd_0\r\n - urllib3=1.26.4=pyhd3eb1b0_0\r\n - wheel=0.36.2=pyhd3eb1b0_0\r\n - x264=1!157.20191217=h7b6447c_0\r\n - xz=5.2.5=h7b6447c_0\r\n - zlib=1.2.11=h7b6447c_3\r\n - zstd=1.4.9=haebb681_0\r\n - pip:\r\n - click==8.0.1\r\n - cycler==0.10.0\r\n - datasets==1.8.0\r\n - dill==0.3.4\r\n - filelock==3.0.12\r\n - fsspec==2021.6.0\r\n - huggingface-hub==0.0.8\r\n - joblib==1.0.1\r\n - jsonpickle==2.0.0\r\n - kiwisolver==1.3.1\r\n - matplotlib==3.4.2\r\n - multiprocess==0.70.12.2\r\n - packaging==20.9\r\n - pandas==1.2.4\r\n - pyarrow==3.0.0\r\n - pyparsing==2.4.7\r\n - python-dateutil==2.8.1\r\n - pytz==2021.1\r\n - regex==2021.4.4\r\n - sacremoses==0.0.45\r\n - tokenizers==0.10.3\r\n - tqdm==4.49.0\r\n - transformers==4.6.1\r\n - xxhash==2.0.2\r\nprefix: \/home\/mmajurski\/anaconda3\/envs\/round8\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2585\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2584","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2584\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2584\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2584\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2584","id":936049736,"node_id":"MDExOlB1bGxSZXF1ZXN0NjgyODY2Njc1","number":2584,"title":"wi_locness: reference latest leaderboard on 
codalab","user":{"login":"aseifert","id":4944799,"node_id":"MDQ6VXNlcjQ5NDQ3OTk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4944799?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/aseifert","html_url":"https:\/\/github.com\/aseifert","followers_url":"https:\/\/api.github.com\/users\/aseifert\/followers","following_url":"https:\/\/api.github.com\/users\/aseifert\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/aseifert\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/aseifert\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/aseifert\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/aseifert\/orgs","repos_url":"https:\/\/api.github.com\/users\/aseifert\/repos","events_url":"https:\/\/api.github.com\/users\/aseifert\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/aseifert\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1625257582000,"updated_at":1625475974000,"closed_at":1625475974000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2584","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2584","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2584.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2584.patch"},"body":"The dataset's author asked me to put this codalab link into the dataset's README.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2584\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2583","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2583\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2583\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2583\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2583","id":936034976,"node_id":"MDU6SXNzdWU5MzYwMzQ5NzY=","number":2583,"title":"Error iteration over IterableDataset using Torch 
DataLoader","user":{"login":"LeenaShekhar","id":12227436,"node_id":"MDQ6VXNlcjEyMjI3NDM2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12227436?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/LeenaShekhar","html_url":"https:\/\/github.com\/LeenaShekhar","followers_url":"https:\/\/api.github.com\/users\/LeenaShekhar\/followers","following_url":"https:\/\/api.github.com\/users\/LeenaShekhar\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/LeenaShekhar\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/LeenaShekhar\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/LeenaShekhar\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/LeenaShekhar\/orgs","repos_url":"https:\/\/api.github.com\/users\/LeenaShekhar\/repos","events_url":"https:\/\/api.github.com\/users\/LeenaShekhar\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/LeenaShekhar\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! This is because you first need to format the dataset for pytorch:\r\n\r\n```python\r\n>>> import torch\r\n>>> from datasets import load_dataset\r\n>>> dataset = load_dataset('oscar', \"unshuffled_deduplicated_en\", split='train', streaming=True)\r\n>>> torch_iterable_dataset = dataset.with_format(\"torch\")\r\n>>> assert isinstance(torch_iterable_dataset, torch.utils.data.IterableDataset)\r\n>>> dataloader = torch.utils.data.DataLoader(torch_iterable_dataset, batch_size=4)\r\n>>> next(iter(dataloader))\r\n{'id': tensor([0, 1, 2, 3]), 'text': ['Mtendere Village was inspired...]}\r\n```\r\n\r\nThis is because the pytorch dataloader expects a subclass of `torch.utils.data.IterableDataset`. Since you can't pass an arbitrary iterable to a pytorch dataloader, you first need to build an object that inherits from `torch.utils.data.IterableDataset` using `with_format(\"torch\")` for example.\r\n","Thank you for that and the example! \r\n\r\nWhat you said makes total sense; I just somehow missed that and assumed HF IterableDataset was a subclass of Torch IterableDataset. "],"created_at":1625255758000,"updated_at":1626771885000,"closed_at":1625528903000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nI have an IterableDataset (created using streaming=True) and I am trying to create batches using Torch DataLoader class by passing this IterableDataset to it. This throws error which is pasted below. I can do the same by using Torch IterableDataset. One thing I noticed is that in the former case when I look at the dataloader.sampler class I get torch.utils.data.sampler.SequentialSampler while the latter one gives torch.utils.data.dataloader._InfiniteConstantSampler. \r\n\r\nI am not sure if this is how it is meant to be used, but that's what seemed reasonable to me. \r\n\r\n## Steps to reproduce the bug\r\n\r\n1. 
Does not work.\r\n```python\r\n>>> from datasets import load_dataset\r\n>>> dataset = load_dataset('oscar', \"unshuffled_deduplicated_en\", split='train', streaming=True)\r\n>>> dataloader = torch.utils.data.DataLoader(dataset, batch_size=4)\r\n>>> dataloader.sampler\r\n\r\n>>> for batch in dataloader:\r\n... print(batch)\r\n```\r\n\r\n2. Works.\r\n```python\r\nimport torch\r\nfrom torch.utils.data import Dataset, IterableDataset, DataLoader\r\nclass CustomIterableDataset(IterableDataset):\r\n 'Characterizes a dataset for PyTorch'\r\n def __init__(self, data):\r\n 'Initialization'\r\n self.data = data\r\n\r\n\r\n def __iter__(self):\r\n return iter(self.data)\r\n\r\n\r\ndata = list(range(12))\r\ndataset = CustomIterableDataset(data)\r\ndataloader = DataLoader(dataset, batch_size=4)\r\nprint(\"dataloader: \", dataloader.sampler)\r\nfor batch in dataloader:\r\n print(batch)\r\n```\r\n\r\n## Expected results\r\nTo get batches of data with the batch size as 4. Output from the latter one (2) though Datasource is different here so actual data is different.\r\ndataloader: \r\ntensor([0, 1, 2, 3])\r\ntensor([4, 5, 6, 7])\r\ntensor([ 8, 9, 10, 11])\r\n\r\n## Actual results\r\n\r\n\r\n...\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/data\/leshekha\/lib\/HFDatasets\/lib\/python3.6\/site-packages\/torch\/utils\/data\/dataloader.py\", line 435, in __next__\r\n data = self._next_data()\r\n File \"\/data\/leshekha\/lib\/HFDatasets\/lib\/python3.6\/site-packages\/torch\/utils\/data\/dataloader.py\", line 474, in _next_data\r\n index = self._next_index() # may raise StopIteration\r\n File \"\/data\/leshekha\/lib\/HFDatasets\/lib\/python3.6\/site-packages\/torch\/utils\/data\/dataloader.py\", line 427, in _next_index\r\n return next(self._sampler_iter) # may raise StopIteration\r\n File \"\/data\/leshekha\/lib\/HFDatasets\/lib\/python3.6\/site-packages\/torch\/utils\/data\/sampler.py\", line 227, in __iter__\r\n for idx in self.sampler:\r\n File \"\/data\/leshekha\/lib\/HFDatasets\/lib\/python3.6\/site-packages\/torch\/utils\/data\/sampler.py\", line 67, in __iter__\r\n return iter(range(len(self.data_source)))\r\nTypeError: object of type 'IterableDataset' has no len()\r\n\r\n## Environment info\r\n\r\n- `datasets` version: '1.8.1.dev0'\r\n- Platform: Linux\r\n- Python version: Python 3.6.8\r\n- PyArrow version: '3.0.0'\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2583\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2582","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2582\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2582\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2582\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2582","id":935859104,"node_id":"MDExOlB1bGxSZXF1ZXN0NjgyNzAzNzg3","number":2582,"title":"Add skip and 
take","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq looks good. I tried with https:\/\/huggingface.co\/datasets\/vblagoje\/wikipedia_snippets_streamed and it worked nicely. I would add more unit tests for edge cases. What happens if the n is larger than the total number of samples? Just to make sure these cases are handled properly. ","Yup I'll add the tests thanks ;)\r\n\r\nMoreover, I just noticed something in your wiki snippets code. FYI you're using `++passage_counter ` at https:\/\/huggingface.co\/datasets\/vblagoje\/wikipedia_snippets_streamed\/blob\/main\/wikipedia_snippets_streamed.py#L102 but in python this doesn't increment the value @vblagoje ","Thanks @lhoestq - not easy to convert after 10+ years of Java"],"created_at":1625238619000,"updated_at":1625501200000,"closed_at":1625501199000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2582","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2582","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2582.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2582.patch"},"body":"As discussed in https:\/\/github.com\/huggingface\/datasets\/pull\/2375#discussion_r657084544 I added the `IterableDataset.skip` and `IterableDataset.take` methods that allows to do basic splitting of iterable datasets.\r\n\r\nYou can create new dataset with the first `n` examples using `IterableDataset.take()`, or you can get a dataset with the rest of the examples by skipping the first `n` examples with `IterableDataset.skip()`\r\n\r\nOne implementation detail:\r\n\r\nUsing `take` (or `skip`) prevents future dataset shuffling from shuffling the dataset shards, otherwise the taken examples could come from other shards. In this case it only uses the shuffle buffer.\r\nI would have loved to allow the shards of the taken examples to be shuffled anyway, but since we don't know in advance the length of each shard we don't know what shards to take or skip.\r\nI think this is ok though since users can shuffle before doing take or skip. 
I mentioned this in the documentation\r\n\r\ncc @vblagoje @lewtun ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2582\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2581","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2581\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2581\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2581\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2581","id":935783588,"node_id":"MDExOlB1bGxSZXF1ZXN0NjgyNjQwMDY4","number":2581,"title":"Faster search_batch for ElasticsearchIndex due to threading","user":{"login":"mwrzalik","id":1376337,"node_id":"MDQ6VXNlcjEzNzYzMzc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1376337?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mwrzalik","html_url":"https:\/\/github.com\/mwrzalik","followers_url":"https:\/\/api.github.com\/users\/mwrzalik\/followers","following_url":"https:\/\/api.github.com\/users\/mwrzalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mwrzalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mwrzalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mwrzalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mwrzalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/mwrzalik\/repos","events_url":"https:\/\/api.github.com\/users\/mwrzalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mwrzalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/6","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6\/labels","id":6836458,"node_id":"MDk6TWlsZXN0b25lNjgzNjQ1OA==","number":6,"title":"1.10","description":"Next minor 
release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":29,"state":"closed","created_at":1623178113000,"updated_at":1626881809000,"due_on":1628146800000,"closed_at":1626881809000},"comments":[],"created_at":1625233327000,"updated_at":1626099226000,"closed_at":1626083571000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2581","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2581","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2581.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2581.patch"},"body":"Hey, \r\nI think it makes sense to perform search_batch threaded, so ES can perform search in parallel.\r\nCheers!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2581\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2580","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2580\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2580\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2580\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2580","id":935767421,"node_id":"MDExOlB1bGxSZXF1ZXN0NjgyNjI2MTkz","number":2580,"title":"Fix Counter 
import","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1625232108000,"updated_at":1625236667000,"closed_at":1625236666000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2580","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2580","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2580.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2580.patch"},"body":"Import from `collections` instead of `typing`.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2580\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2579","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2579\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2579\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2579\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2579","id":935486894,"node_id":"MDExOlB1bGxSZXF1ZXN0NjgyMzkyNjYx","number":2579,"title":"Fix BibTeX 
entry","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1625209840000,"updated_at":1625211224000,"closed_at":1625211224000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2579","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2579","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2579.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2579.patch"},"body":"Add missing contributor to BibTeX entry.\r\n\r\ncc: @abhishekkrthakur @thomwolf ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2579\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2578","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2578\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2578\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2578\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2578","id":935187497,"node_id":"MDExOlB1bGxSZXF1ZXN0NjgyMTQ0OTY2","number":2578,"title":"Support Zstandard compressed 
files","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> What if people want to run some tests without having zstandard ?\r\n> Usually what we do is add a decorator @require_zstandard for example\r\n\r\n@lhoestq I think I'm missing something here...\r\n\r\nTests are a *development* tool (to ensure we deliver a good quality lib), not something we offer to the end users of the lib. Users of the lib just `pip install datasets` and no tests are delivered with the lib (`tests` directory is outside the `src` code dir). \r\n\r\nOn the contrary, developers (contributors) of the lib do need to be able to run tests (TDD). And because of that, they are required to install datasets differently: `pip install -e .[dev]`, so that all required developing (and testing) dependencies are properly installed (included `zstandard`).\r\n\r\nApart from `zsatandard`, there are many other dev\/test required dependencies for running tests, and we do not have a `@require_toto` for each and every of these dependencies in our tests: \r\n- `pytest` and `absl-py` (they are not dependencies in install_requires, but only in TEST_REQUIRE extras_require), \r\n- `boto3` (in test_filesystem.py), \r\n- `seqeval` (in test_metric_common.py), \r\n- `bs4` (used by eli5 and tested in test_hf_gcp.py)\r\n- ...\r\n\r\nSo IMHO, to run tests you should previously install datasets with dev or tests dependencies: either `pip install -e .[dev]` or `pip install -e .[tests]` (the latter to be used in CI testing-only part of the development cycle). And the tests should be written accordingly, assuming all tests dependencies are installed.","Hi !\r\nI was saying that because the other dependencies you mentioned are only required for _some_ tests. While here zstd is required for _all_ tests since it's imported in the conftest.py\r\nFeel free to keep it as it is right now, or maybe move the fixture to test_file_utils.py to allow users without zstd to run tests for their builders, dataset card etc. without issues","Thank you ! I think we can merge now","@lhoestq does this mean that the pile could have streaming support in the future? Afaik streaming doesnt support zstandard compressed type","> @lhoestq does this mean that the pile could have streaming support in the future? 
Afaik streaming doesnt support zstandard compressed type\r\n\r\njust for reference, i tried to stream one of the `.zst` files from [the pile](https:\/\/the-eye.eu\/public\/AI\/pile\/) using\r\n\r\n```python\r\ndata_files = [\"https:\/\/the-eye.eu\/public\/AI\/pile\/train\/00.jsonl.zst\"]\r\nstreamed_dataset = load_dataset('json', split='train', data_files=data_files, streaming=True)\r\n```\r\n\r\nand got the following error:\r\n\r\n```\r\nUsing custom data configuration default-4e71acadc389c254\r\n---------------------------------------------------------------------------\r\nNotImplementedError Traceback (most recent call last)\r\n\/tmp\/ipykernel_1187680\/10848115.py in \r\n 1 data_files = [\"https:\/\/the-eye.eu\/public\/AI\/pile\/train\/00.jsonl.zst\"]\r\n 2 \r\n----> 3 streamed_dataset = load_dataset('json', split='train', data_files=data_files, streaming=True)\r\n 4 \r\n\r\n~\/miniconda3\/envs\/hf\/lib\/python3.8\/site-packages\/datasets\/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)\r\n 835 # this extends the open and os.path.join functions for data streaming\r\n 836 extend_module_for_streaming(builder_instance.__module__, use_auth_token=use_auth_token)\r\n--> 837 return builder_instance.as_streaming_dataset(\r\n 838 split=split,\r\n 839 use_auth_token=use_auth_token,\r\n\r\n~\/miniconda3\/envs\/hf\/lib\/python3.8\/site-packages\/datasets\/builder.py in as_streaming_dataset(self, split, base_path, use_auth_token)\r\n 922 data_dir=self.config.data_dir,\r\n 923 )\r\n--> 924 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}\r\n 925 # By default, return all splits\r\n 926 if split is None:\r\n\r\n~\/miniconda3\/envs\/hf\/lib\/python3.8\/site-packages\/datasets\/packaged_modules\/json\/json.py in _split_generators(self, dl_manager)\r\n 50 if not self.config.data_files:\r\n 51 raise ValueError(f\"At least one data file must be specified, but got data_files={self.config.data_files}\")\r\n---> 52 data_files = dl_manager.download_and_extract(self.config.data_files)\r\n 53 if isinstance(data_files, (str, list, tuple)):\r\n 54 files = data_files\r\n\r\n~\/miniconda3\/envs\/hf\/lib\/python3.8\/site-packages\/datasets\/utils\/streaming_download_manager.py in download_and_extract(self, url_or_urls)\r\n 140 \r\n 141 def download_and_extract(self, url_or_urls):\r\n--> 142 return self.extract(self.download(url_or_urls))\r\n\r\n~\/miniconda3\/envs\/hf\/lib\/python3.8\/site-packages\/datasets\/utils\/streaming_download_manager.py in extract(self, path_or_paths)\r\n 115 \r\n 116 def extract(self, path_or_paths):\r\n--> 117 urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True)\r\n 118 return urlpaths\r\n 119 \r\n\r\n~\/miniconda3\/envs\/hf\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types)\r\n 202 num_proc = 1\r\n 203 if num_proc <= 1 or len(iterable) <= num_proc:\r\n--> 204 mapped = [\r\n 205 _single_map_nested((function, obj, types, None, True))\r\n 206 for obj in utils.tqdm(iterable, disable=disable_tqdm)\r\n\r\n~\/miniconda3\/envs\/hf\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py in (.0)\r\n 203 if num_proc <= 1 or len(iterable) <= num_proc:\r\n 204 mapped = [\r\n--> 205 _single_map_nested((function, obj, types, None, True))\r\n 206 for obj in 
utils.tqdm(iterable, disable=disable_tqdm)\r\n 207 ]\r\n\r\n~\/miniconda3\/envs\/hf\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py in _single_map_nested(args)\r\n 141 # Singleton first to spare some computation\r\n 142 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\r\n--> 143 return function(data_struct)\r\n 144 \r\n 145 # Reduce logging to keep things readable in multiprocessing with tqdm\r\n\r\n~\/miniconda3\/envs\/hf\/lib\/python3.8\/site-packages\/datasets\/utils\/streaming_download_manager.py in _extract(self, urlpath)\r\n 119 \r\n 120 def _extract(self, urlpath):\r\n--> 121 protocol = self._get_extraction_protocol(urlpath)\r\n 122 if protocol is None:\r\n 123 # no extraction\r\n\r\n~\/miniconda3\/envs\/hf\/lib\/python3.8\/site-packages\/datasets\/utils\/streaming_download_manager.py in _get_extraction_protocol(self, urlpath)\r\n 137 elif path.endswith(\".zip\"):\r\n 138 return \"zip\"\r\n--> 139 raise NotImplementedError(f\"Extraction protocol for file at {urlpath} is not implemented yet\")\r\n 140 \r\n 141 def download_and_extract(self, url_or_urls):\r\n\r\nNotImplementedError: Extraction protocol for file at https:\/\/the-eye.eu\/public\/AI\/pile\/train\/00.jsonl.zst is not implemented yet\r\n```\r\n\r\ni'm not sure whether @Shashi456 is referring to a fundamental limitation with \"streaming\" zstandard compression files or simply that we need to support the protocol in the streaming api of `datasets`\r\n\r\n","@lewtun our streaming mode patches the Python `open` function. I could have a look tomorrow if it is easily implementable for this case.","@lewtun, I have tested and yes, it is easily implementable. I've created a draft Pull Request with an implementation proposal: #2786.","thanks a lot @albertvillanova - now i can stream the pile :)"],"created_at":1625170954000,"updated_at":1628693184000,"closed_at":1625482227000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2578","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2578","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2578.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2578.patch"},"body":"Close #2572.\r\n\r\ncc: @thomwolf ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2578\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2576","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2576\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2576\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2576\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2576","id":934986761,"node_id":"MDExOlB1bGxSZXF1ZXN0NjgxOTc5MTA1","number":2576,"title":"Add 
mC4","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1625154685000,"updated_at":1625237456000,"closed_at":1625237455000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2576","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2576","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2576.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2576.patch"},"body":"AllenAI is now hosting the processed C4 and mC4 dataset in this repo: https:\/\/huggingface.co\/datasets\/allenai\/c4\r\nThanks a lot to them !\r\n\r\nIn this PR I added the mC4 dataset builder. It supports 108 languages\r\n\r\nYou can load it with\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nen_mc4 = load_dataset(\"mc4\", \"en\")\r\nfr_mc4 = load_dataset(\"mc4\", \"fr\")\r\nen_and_fr_mc4 = load_dataset(\"mc4\", languages=[\"en\", \"fr\"])\r\n```\r\n\r\nIt also supports streaming, if you don't want to download hundreds of GB of data:\r\n```python\r\nen_mc4 = load_dataset(\"mc4\", \"en\", streaming=True)\r\n```\r\n\r\nRegarding the dataset_infos.json, I will add them once I have them.\r\n\r\nAlso we can work on the dataset card at that will be at https:\/\/huggingface.co\/datasets\/mc4\r\nFor now I just added a link to https:\/\/huggingface.co\/datasets\/allenai\/c4 as well as a few sections","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2576\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2575","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2575\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2575\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2575\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2575","id":934876496,"node_id":"MDExOlB1bGxSZXF1ZXN0NjgxODg0OTgy","number":2575,"title":"Add 
C4","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1625147888000,"updated_at":1625237423000,"closed_at":1625237423000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2575","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2575","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2575.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2575.patch"},"body":"The old code for the C4 dataset was to generate the C4 with Apache Beam, as in Tensorflow Datasets.\r\nHowever AllenAI is now hosting the processed C4 dataset in this repo: https:\/\/huggingface.co\/datasets\/allenai\/c4\r\nThanks a lot to them for their amazing work !\r\n\r\nIn this PR I changed the script to download and prepare the data directly from this repo.\r\nIt has 4 variants: en, en.noblocklist, en.noclean, realnewslike\r\n\r\nYou can load it with\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nc4 = load_dataset(\"c4\", \"en\")\r\n```\r\n\r\nIt also supports streaming, if you don't want to download hundreds of GB of data:\r\n```python\r\nc4 = load_dataset(\"c4\", \"en\", streaming=True)\r\n```\r\n\r\nRegarding the dataset_infos.json, I haven't added the infos for en.noclean. 
I will add them once I have them.\r\n\r\nAlso we can work on the dataset card at https:\/\/huggingface.co\/datasets\/c4\r\nFor now I just added a link to https:\/\/huggingface.co\/datasets\/allenai\/c4 as well as a few sections","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2575\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2574","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2574\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2574\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2574\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2574","id":934632378,"node_id":"MDExOlB1bGxSZXF1ZXN0NjgxNjczMzYy","number":2574,"title":"Add streaming in load a dataset docs","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1625131973000,"updated_at":1625148742000,"closed_at":1625148741000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2574","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2574","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2574.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2574.patch"},"body":"Mention dataset streaming on the \"loading a dataset\" page of the documentation","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2574\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2573","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2573\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2573\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2573\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2573","id":934584745,"node_id":"MDU6SXNzdWU5MzQ1ODQ3NDU=","number":2573,"title":"Finding right block-size with JSON loading difficult for 
user","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This was actually a second error arising from a too small block-size in the json reader.\r\n\r\nFinding the right block size is difficult for the layman user"],"created_at":1625129315000,"updated_at":1625166653000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"As reported by @thomwolf, while loading a JSON Lines file with \"json\" loading script, he gets\r\n> json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 383)\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2573\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2572","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2572\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2572\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2572\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2572","id":934573767,"node_id":"MDU6SXNzdWU5MzQ1NzM3Njc=","number":2572,"title":"Support Zstandard compressed 
files","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1625128624000,"updated_at":1625482227000,"clo
sed_at":1625482227000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"Add support for Zstandard compressed files: https:\/\/facebook.github.io\/zstd\/","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2572\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2571","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2571\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2571\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2571\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2571","id":933791018,"node_id":"MDExOlB1bGxSZXF1ZXN0NjgwOTQ2NzQ1","number":2571,"title":"Filter expected warning log from transformers","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I think the failing test has nothing to do with my PR..."],"created_at":1625064499000,"updated_at":1625198897000,"closed_at":1625198897000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2571","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2571","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2571.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2571.patch"},"body":"Close #2569.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2571\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2570","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2570\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2570\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2570\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2570","id":933402521,"node_id":"MDExOlB1bGxSZXF1ZXN0NjgwNjEzNzc0","number":2570,"title":"Minor fix docs format for 
bertscore","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1625038932000,"updated_at":1625067061000,"closed_at":1625067061000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2570","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2570","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2570.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2570.patch"},"body":"Minor fix docs format for bertscore:\r\n- link to README\r\n- format of KWARGS_DESCRIPTION","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2570\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2569","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2569\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2569\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2569\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2569","id":933015797,"node_id":"MDU6SXNzdWU5MzMwMTU3OTc=","number":2569,"title":"Weights of model checkpoint not initialized for RobertaModel for 
Bertscore","user":{"login":"suzyahyah","id":2980993,"node_id":"MDQ6VXNlcjI5ODA5OTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2980993?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/suzyahyah","html_url":"https:\/\/github.com\/suzyahyah","followers_url":"https:\/\/api.github.com\/users\/suzyahyah\/followers","following_url":"https:\/\/api.github.com\/users\/suzyahyah\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/suzyahyah\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/suzyahyah\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/suzyahyah\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/suzyahyah\/orgs","repos_url":"https:\/\/api.github.com\/users\/suzyahyah\/repos","events_url":"https:\/\/api.github.com\/users\/suzyahyah\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/suzyahyah\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @suzyahyah, thanks for reporting.\r\n\r\nThe message you get is indeed not an error message, but a warning coming from Hugging Face 
`transformers`. The complete warning message is:\r\n```\r\nSome weights of the model checkpoint at roberta-large were not used when initializing RobertaModel: ['lm_head.decoder.weight', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.bias', 'lm_head.bias', 'lm_head.layer_norm.weight']\r\n- This IS expected if you are initializing RobertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing RobertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\n```\r\n\r\nIn this case, this behavior IS expected and you can safely ignore the warning message.\r\n\r\nThe reason is that you are just using RoBERTa to get the contextual embeddings of the input sentences\/tokens, thus leaving away its head layer, whose weights are ignored.\r\n\r\nFeel free to reopen this issue if you need further explanations.","Hi @suzyahyah, I have created a Pull Request to filter out that warning message in this specific case, since the behavior is as expected and the warning message can only cause confusion for users (as in your case)."],"created_at":1624992923000,"updated_at":1625123339000,"closed_at":1625038549000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"When applying bertscore out of the box, \r\n\r\n```Some weights of the model checkpoint at roberta-large were not used when initializing RobertaModel: ['lm_head.decoder.weight', 'lm_head.bias', 'lm_head.dense.bias', 'lm_head.layer_norm.bias', 'lm_head.dense.weight', 'lm_head.layer_norm.weight']```\r\n\r\nFollowing the typical usage from https:\/\/huggingface.co\/docs\/datasets\/loading_metrics.html\r\n\r\n```\r\nfrom datasets import load_metric\r\nmetric = load_metric('bertscore')\r\n\r\n# Example of typical usage\r\nfor batch in dataset:\r\n inputs, references = batch\r\n predictions = model(inputs)\r\n metric.add_batch(predictions=predictions, references=references)\r\nscore = metric.compute(lang=\"en\")\r\n#score = metric.compute(model_type=\"roberta-large\") # gives the same error\r\n```\r\n\r\nI am concerned about this because my usage shouldn't require any further fine-tuning and most people would expect to use BertScore out of the box? I realised the huggingface code is a wrapper around https:\/\/github.com\/Tiiiger\/bert_score, but I think this repo is anyway relying on the model code and weights from huggingface repo.... 
\r\n\r\n## Environment info\r\n- `datasets` version: 1.7.0\r\n- Platform: Linux-5.4.0-1041-aws-x86_64-with-glibc2.27\r\n- Python version: 3.9.5\r\n- PyArrow version: 3.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2569\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2568","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2568\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2568\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2568\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2568","id":932934795,"node_id":"MDExOlB1bGxSZXF1ZXN0NjgwMjE5MDU2","number":2568,"title":"Add interleave_datasets for map-style datasets","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1624987164000,"updated_at":1625132014000,"closed_at":1625132013000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2568","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2568","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2568.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2568.patch"},"body":"### Add interleave_datasets for map-style datasets\r\n\r\nAdd support for map-style datasets (i.e. `Dataset` objects) in `interleave_datasets`.\r\nIt was only supporting iterable datasets (i.e. 
`IterableDataset` objects).\r\n\r\n### Implementation details\r\n\r\nIt works by concatenating the datasets and then re-order the indices to make the new dataset.\r\n\r\n### TODO\r\n- [x] tests\r\n- [x] docs\r\n\r\nClose #2563 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2568\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2567","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2567\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2567\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2567\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2567","id":932933536,"node_id":"MDExOlB1bGxSZXF1ZXN0NjgwMjE3OTY3","number":2567,"title":"Add ASR task and new languages to resources","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1624987081000,"updated_at":1625132543000,"closed_at":1625132529000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2567","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2567","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2567.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2567.patch"},"body":"This PR adds a new `automatic-speech-recognition` task to the list of supported tasks in `tasks.json` and also includes a few new languages missing from `common_voice`.\r\n\r\nNote: I used the [Papers with Code list](https:\/\/www.paperswithcode.com\/area\/speech\/speech-recognition) as inspiration for the ASR subtasks","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2567\/timeline","performed_via_github_app":null,"is_pull_request":true} 
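The PR 2568 body above describes the map-style interleaving approach as "concatenating the datasets and then re-order[ing] the indices". Below is a minimal sketch of that idea using plain Python lists; the function name, the round-robin ordering, and the stop-at-shortest-source behaviour are illustrative assumptions only, not the library's actual implementation.

```python
def interleave_by_indices(datasets):
    # Sketch only: "datasets" are plain Python lists standing in for map-style datasets.
    # Step 1: concatenate conceptually; a global index is (dataset offset) + (local index).
    offsets = []
    total = 0
    for d in datasets:
        offsets.append(total)
        total += len(d)
    # Step 2: re-order the indices so examples alternate between sources,
    # stopping when the shortest source is exhausted (one possible strategy).
    order = []
    for i in range(min(len(d) for d in datasets)):
        for offset in offsets:
            order.append(offset + i)
    concatenated = [example for d in datasets for example in d]
    return [concatenated[i] for i in order]

# Example: interleave_by_indices([[1, 2, 3], ["a", "b"]]) -> [1, "a", 2, "b"]
```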
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2566","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2566\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2566\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2566\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2566","id":932804725,"node_id":"MDExOlB1bGxSZXF1ZXN0NjgwMTA2NzM0","number":2566,"title":"fix Dataset.map when num_procs > num rows","user":{"login":"connor-mccarthy","id":55268212,"node_id":"MDQ6VXNlcjU1MjY4MjEy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/55268212?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/connor-mccarthy","html_url":"https:\/\/github.com\/connor-mccarthy","followers_url":"https:\/\/api.github.com\/users\/connor-mccarthy\/followers","following_url":"https:\/\/api.github.com\/users\/connor-mccarthy\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/connor-mccarthy\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/connor-mccarthy\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/connor-mccarthy\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/connor-mccarthy\/orgs","repos_url":"https:\/\/api.github.com\/users\/connor-mccarthy\/repos","events_url":"https:\/\/api.github.com\/users\/connor-mccarthy\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/connor-mccarthy\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1624979227000,"updated_at":1625130673000,"closed_at":1625130673000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2566","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2566","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2566.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2566.patch"},"body":"closes #2470\r\n\r\n## Testing notes\r\nTo run updated tests:\r\n```sh\r\npytest tests\/test_arrow_dataset.py -k \"BaseDatasetTest and test_map_multiprocessing\" -s\r\n```\r\nWith Python code (to view warning):\r\n```python\r\nfrom datasets import Dataset\r\n\r\n\r\ndataset = Dataset.from_dict({\"x\": [\"sample\"]})\r\nprint(len(dataset))\r\ndataset.map(lambda x: x, num_proc=10)\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2566\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2565","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2565\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2565\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2565\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2565","id":932445439,"node_id":"MDExOlB1bGxSZXF1ZXN0Njc5Nzg3NTI4","number":2565,"title":"Inject templates for ASR 
datasets","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Wait until #2567 is merged so we can benefit from the tagger :)","thanks for the feedback @lhoestq! i've added the new language codes and this PR should be ready for a merge :)"],"created_at":1624960921000,"updated_at":1625495186000,"closed_at":1625495186000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2565","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2565","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2565.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2565.patch"},"body":"This PR adds ASR templates for 5 of the most common speech datasets on the Hub, where \"common\" is defined by the number of models trained on them.\r\n\r\nI also fixed a bunch of the tags in the READMEs \ud83d\ude0e ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2565\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2564","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2564\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2564\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2564\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2564","id":932389639,"node_id":"MDU6SXNzdWU5MzIzODk2Mzk=","number":2564,"title":"concatenate_datasets for iterable 
datasets","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1624957181000,"updated_at":1624957181000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"Currently `concatenate_datasets` only works for map-style `Dataset`.\r\n\r\nIt would be nice to have it work for `IterableDataset` objects as well.\r\n\r\nIt would simply chain the iterables of the iterable datasets.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2564\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2563","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2563\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2563\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2563\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2563","id":932387639,"node_id":"MDU6SXNzdWU5MzIzODc2Mzk=","number":2563,"title":"interleave_datasets for map-style datasets","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User"
,"site_admin":false}],"milestone":null,"comments":[],"created_at":1624957044000,"updated_at":1625132013000,"closed_at":1625132013000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"Currently the `interleave_datasets` functions only works for `IterableDataset`.\r\nLet's make it work for map-style `Dataset` objects as well.\r\n\r\nIt would work the same way: either alternate between the datasets in order or randomly given probabilities specified by the user.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2563\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2562","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2562\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2562\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2562\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2562","id":932333436,"node_id":"MDExOlB1bGxSZXF1ZXN0Njc5NjkyMjQ2","number":2562,"title":"Minor fix in loading metrics docs","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1624953311000,"updated_at":1624987282000,"closed_at":1624987282000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2562","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2562","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2562.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2562.patch"},"body":"Make some minor fixes in \"Loading metrics\" docs.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2562\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2561","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2561\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2561\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2561\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2561","id":932321725,"node_id":"MDU6SXNzdWU5MzIzMjE3MjU=","number":2561,"title":"Existing cache for local dataset builder file updates is ignored with `ignore_verifications=True`","user":{"login":"apsdehal","id":3616806,"node_id":"MDQ6VXNlcjM2MTY4MDY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3616806?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/apsdehal","html_url":"https:\/\/github.com\/apsdehal","followers_url":"https:\/\/api.github.com\/users\/apsdehal\/followers","following_url":"https:\/\/api.github.com\/users\/apsdehal\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/apsdehal\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/apsdehal\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/apsdehal\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/apsdehal\/orgs","repos_url":"https:\/\/api.github.com\/users\/apsdehal\/repos","events_url":"https:\/\/api.github.com\/users\/apsdehal\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/apsdehal\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! I just tried to reproduce what you said:\r\n- create a local builder class\r\n- use `load_dataset`\r\n- update the builder class code\r\n- use `load_dataset` again (with or without `ignore_verifications=True`)\r\nAnd it creates a new cache, as expected.\r\n\r\nWhat modifications did you do to your builder's code ?","Hi @lhoestq. Thanks for your reply. I just did minor modifications for which it should not regenerate cache (for e.g. Adding a print statement). 
Overall, regardless of cache miss, there should be an explicit option to allow reuse of existing cache if author knows cache shouldn't be affected.","The cache is based on the hash of the dataset builder's code, so changing the code makes it recompute the cache.\r\n\r\nYou could still rename the cache directory of your previous computation to the new expected cache directory if you want to avoid having to recompute it and if you're sure that it would generate the exact same result.\r\n\r\nThe verifications are data integrity verifications: it checks the checksums of the downloaded files, as well as the size of the generated splits.","Hi @apsdehal,\r\n\r\nIf you decide to follow @lhoestq's suggestion to rename the cache directory of your previous computation to the new expected cache directory, you can do the following to get the name of the new expected cache directory once #2500 is merged:\r\n```python\r\nfrom datasets import load_dataset_builder\r\ndataset_builder = load_dataset_builder(\"path\/to\/your\/dataset\")\r\nprint(dataset_builder.cache_dir)\r\n```\r\n\r\nThis way, you don't have to recompute the hash of the dataset script yourself each time you modify the script."],"created_at":1624952583000,"updated_at":1625057724000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nIf i have local file defining a dataset builder class and I load it using `load_dataset` functionality, the existing cache is ignored whenever the file is update even with `ignore_verifications=True`. This slows down debugging and cache generator for very large datasets.\r\n\r\n## Steps to reproduce the bug\r\n\r\n- Create a local dataset builder class\r\n- load the local builder class file using `load_dataset` and let the cache build\r\n- update the file's content\r\n- The cache should rebuilt.\r\n\r\n## Expected results\r\n\r\nWith `ignore_verifications=True`, `load_dataset` should pick up existing cache.\r\n\r\n## Actual results\r\n\r\nCreates new cache.\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.8.0\r\n- Platform: Linux-5.4.0-52-generic-x86_64-with-debian-bullseye-sid\r\n- Python version: 3.7.7\r\n- PyArrow version: 3.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2561\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2560","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2560\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2560\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2560\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2560","id":932143634,"node_id":"MDExOlB1bGxSZXF1ZXN0Njc5NTMyODk4","number":2560,"title":"fix Dataset.map when num_procs > num 
rows","user":{"login":"connor-mccarthy","id":55268212,"node_id":"MDQ6VXNlcjU1MjY4MjEy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/55268212?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/connor-mccarthy","html_url":"https:\/\/github.com\/connor-mccarthy","followers_url":"https:\/\/api.github.com\/users\/connor-mccarthy\/followers","following_url":"https:\/\/api.github.com\/users\/connor-mccarthy\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/connor-mccarthy\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/connor-mccarthy\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/connor-mccarthy\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/connor-mccarthy\/orgs","repos_url":"https:\/\/api.github.com\/users\/connor-mccarthy\/repos","events_url":"https:\/\/api.github.com\/users\/connor-mccarthy\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/connor-mccarthy\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Thanks for fixing this :)\r\n\r\nLooks like you have tons of changes due to code formatting.\r\nWe're using `black` for this, with a custom line length. To run our code formatting, you just need to run\r\n```\r\nmake style\r\n```\r\n\r\nThen for the windows error in the CI, I'm looking into it. It's probably just a file that isn't properly closed","CI is all green now ! Thanks :)\r\n\r\nThere are still many code formatting changes in your PR - probably due to the first commit you did.\r\nTo avoid conflicts with future PRs it would be nice to only have the changes related to the `num_proc` warning, and not have all those code formatting changes,\r\n\r\nCould you try remove those code formatting changes ?\r\n\r\nIf it's easier for you, you can make a new branch from `master` if needed","Thanks, @lhoestq! Apologies for the half-baked commits yesterday! I wasn\u2019t able to step back in to resolve those CI issues until this morning.\r\n\r\nAlso, I\u2019m surprised that `make style` isn\u2019t resolving the formatting changes. 
I\u2019m a bit stumped on that, so I\u2019m going to re-apply on a new branch and open a PR as you suggested."],"created_at":1624933451000,"updated_at":1624978818000,"closed_at":1624978411000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2560","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2560","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2560.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2560.patch"},"body":"closes #2470\r\n\r\n## Testing notes\r\nTo run updated tests:\r\n```sh\r\npytest tests\/test_arrow_dataset.py -k \"BaseDatasetTest and test_map_multiprocessing\" -s\r\n```\r\nWith Python code (to view warning):\r\n```python\r\nfrom datasets import Dataset\r\n\r\n\r\ndataset = Dataset.from_dict({\"x\": [\"sample\"]})\r\nprint(len(dataset))\r\ndataset.map(lambda x: x, num_proc=10)\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2560\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2559","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2559\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2559\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2559\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2559","id":931849724,"node_id":"MDU6SXNzdWU5MzE4NDk3MjQ=","number":2559,"title":"Memory usage consistently increases when processing a dataset with `.map`","user":{"login":"apsdehal","id":3616806,"node_id":"MDQ6VXNlcjM2MTY4MDY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3616806?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/apsdehal","html_url":"https:\/\/github.com\/apsdehal","followers_url":"https:\/\/api.github.com\/users\/apsdehal\/followers","following_url":"https:\/\/api.github.com\/users\/apsdehal\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/apsdehal\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/apsdehal\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/apsdehal\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/apsdehal\/orgs","repos_url":"https:\/\/api.github.com\/users\/apsdehal\/repos","events_url":"https:\/\/api.github.com\/users\/apsdehal\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/apsdehal\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Can you share the function you pass to `map` ?\r\nI know you mentioned it would be hard to share some code but this would really help to understand what happened"],"created_at":1624905118000,"updated_at":1624956180000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\nI have a HF dataset with image paths stored in it and I am trying to load those image paths using `.map` with `num_proc=80`. 
I am noticing that the memory usage consistently keeps on increasing with time. I tried using `DEFAULT_WRITER_BATCH_SIZE=10` in the builder to decrease arrow writer's batch size but that doesn't seem to help.\r\n\r\n## Steps to reproduce the bug\r\n\r\nProviding code as it is would be hard. I can provide a MVP if that helps.\r\n\r\n## Expected results\r\n\r\nMemory usage should become consistent after some time following the launch of processing.\r\n\r\n## Actual results\r\n\r\nMemory usage keeps on increasing.\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.8.0\r\n- Platform: Linux-5.4.0-52-generic-x86_64-with-debian-bullseye-sid\r\n- Python version: 3.7.7\r\n- PyArrow version: 3.0.0","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2559\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2558","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2558\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2558\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2558\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2558","id":931736647,"node_id":"MDExOlB1bGxSZXF1ZXN0Njc5MTg0Njk1","number":2558,"title":"Update: WebNLG - update checksums","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1624896997000,"updated_at":1624900997000,"closed_at":1624900996000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2558","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2558","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2558.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2558.patch"},"body":"The master branch changed so I computed the new checksums.\r\n\r\nI also pinned a specific revision so that it doesn't happen again in the future.\r\n\r\nFix https:\/\/github.com\/huggingface\/datasets\/issues\/2553","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2558\/timeline","performed_via_github_app":null,"is_pull_request":true} 
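Issue 2559 above reports memory climbing steadily while loading image paths through `.map` with `num_proc=80`, and mentions lowering the builder's `DEFAULT_WRITER_BATCH_SIZE`. As a related illustration only (nothing in the thread confirms it resolves this particular report), `Dataset.map` also accepts a `writer_batch_size` argument that controls how many processed examples are buffered before being flushed to the Arrow cache file; the data and parameter values below are placeholders:

```python
from datasets import Dataset

# Placeholder dataset of image paths; the reporter's actual dataset is not available here.
ds = Dataset.from_dict({"image_path": [f"img_{i}.png" for i in range(1_000)]})

def load_image(example):
    example["image_bytes"] = b"..."  # stand-in for real image loading
    return example

# A smaller writer_batch_size flushes rows to the cache file more often, so fewer
# processed examples sit in memory between Arrow writes. num_proc=4 and 100 are illustrative.
ds = ds.map(load_image, num_proc=4, writer_batch_size=100)
```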
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2557","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2557\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2557\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2557\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2557","id":931633823,"node_id":"MDExOlB1bGxSZXF1ZXN0Njc5MDk4ODg3","number":2557,"title":"Fix `fever` keys","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1624890422000,"updated_at":1624896690000,"closed_at":1624896689000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2557","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2557","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2557.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2557.patch"},"body":"The keys has duplicates since they were reset to 0 after each file.\r\n\r\nI fixed it by taking into account the file index as well.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2557\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2556","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2556\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2556\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2556\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2556","id":931595872,"node_id":"MDU6SXNzdWU5MzE1OTU4NzI=","number":2556,"title":"Better DuplicateKeysError error to help the user debug the 
issue","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1624888257000,"updated_at":1624888257000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"As mentioned in https:\/\/github.com\/huggingface\/datasets\/issues\/2552 it would be nice to improve the error message when a dataset fails to build because there are duplicate example keys.\r\n\r\nThe current one is\r\n```python\r\ndatasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !\r\nFound duplicate Key: 48\r\nKeys should be unique and deterministic in nature\r\n```\r\n\r\nand we could have something that guides the user to debugging the issue:\r\n```python\r\nDuplicateKeysError: both 42th and 1337th examples have the same keys `48`.\r\nPlease fix the dataset script at \r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2556\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2555","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2555\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2555\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2555\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2555","id":931585485,"node_id":"MDExOlB1bGxSZXF1ZXN0Njc5MDU4ODM3","number":2555,"title":"Fix code_search_net 
keys","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Fix #2552."],"created_at":1624887623000,"updated_at":1630571083000,"closed_at":1624889435000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2555","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2555","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2555.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2555.patch"},"body":"There were duplicate keys in the `code_search_net` dataset, as reported in https:\/\/github.com\/huggingface\/datasets\/issues\/2552\r\n\r\nI fixed the keys (it was an addition of the file and row indices, which was causing collisions)\r\n\r\nFix #2552.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2555\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2554","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2554\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2554\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2554\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2554","id":931453855,"node_id":"MDU6SXNzdWU5MzE0NTM4NTU=","number":2554,"title":"Multilabel metrics not 
supported","user":{"login":"GuillemGSubies","id":37592763,"node_id":"MDQ6VXNlcjM3NTkyNzYz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/37592763?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/GuillemGSubies","html_url":"https:\/\/github.com\/GuillemGSubies","followers_url":"https:\/\/api.github.com\/users\/GuillemGSubies\/followers","following_url":"https:\/\/api.github.com\/users\/GuillemGSubies\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/GuillemGSubies\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/GuillemGSubies\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/GuillemGSubies\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/GuillemGSubies\/orgs","repos_url":"https:\/\/api.github.com\/users\/GuillemGSubies\/repos","events_url":"https:\/\/api.github.com\/users\/GuillemGSubies\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/GuillemGSubies\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @GuillemGSubies, thanks for reporting.\r\n\r\nI have made a PR to fix this issue and allow metrics to be computed also for multilabel classification problems.","Looks nice, thank you very much! \ud83d\ude80 "],"created_at":1624878586000,"updated_at":1625733615000,"closed_at":1625733615000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"When I try to use a metric like F1 macro I get the following error:\r\n\r\n```\r\nTypeError: int() argument must be a string, a bytes-like object or a number, not 'list'\r\n```\r\nThere is an explicit casting here:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/fc79f61cbbcfa0e8c68b28c0a8257f17e768a075\/src\/datasets\/features.py#L274\r\n\r\nAnd looks like this is because here\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/fc79f61cbbcfa0e8c68b28c0a8257f17e768a075\/metrics\/f1\/f1.py#L88\r\n\r\nthe features can only be integers, so we cannot use that F1 for multilabel. 
Instead, if I create the following F1 (ints replaced with sequence of ints), it will work:\r\n\r\n```python\r\nclass F1(datasets.Metric):\r\n def _info(self):\r\n return datasets.MetricInfo(\r\n description=_DESCRIPTION,\r\n citation=_CITATION,\r\n inputs_description=_KWARGS_DESCRIPTION,\r\n features=datasets.Features(\r\n {\r\n \"predictions\": datasets.Sequence(datasets.Value(\"int32\")),\r\n \"references\": datasets.Sequence(datasets.Value(\"int32\")),\r\n }\r\n ),\r\n reference_urls=[\"https:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.metrics.f1_score.html\"],\r\n )\r\n\r\n def _compute(self, predictions, references, labels=None, pos_label=1, average=\"binary\", sample_weight=None):\r\n return {\r\n \"f1\": f1_score(\r\n references,\r\n predictions,\r\n labels=labels,\r\n pos_label=pos_label,\r\n average=average,\r\n sample_weight=sample_weight,\r\n ),\r\n }\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2554\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2553","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2553\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2553\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2553\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2553","id":931365926,"node_id":"MDU6SXNzdWU5MzEzNjU5MjY=","number":2553,"title":"load_dataset(\"web_nlg\") NonMatchingChecksumError","user":{"login":"alexandrethm","id":33730312,"node_id":"MDQ6VXNlcjMzNzMwMzEy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33730312?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/alexandrethm","html_url":"https:\/\/github.com\/alexandrethm","followers_url":"https:\/\/api.github.com\/users\/alexandrethm\/followers","following_url":"https:\/\/api.github.com\/users\/alexandrethm\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/alexandrethm\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/alexandrethm\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/alexandrethm\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/alexandrethm\/orgs","repos_url":"https:\/\/api.github.com\/users\/alexandrethm\/repos","events_url":"https:\/\/api.github.com\/users\/alexandrethm\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/alexandrethm\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi ! Thanks for reporting. This is due to the WebNLG repository that got updated today.\r\nI just pushed a fix at #2558 - this shouldn't happen anymore in the future.","This is fixed on `master` now :)\r\nWe'll do a new release soon !"],"created_at":1624872406000,"updated_at":1624901019000,"closed_at":1624900996000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi! 
It seems the WebNLG dataset gives a NonMatchingChecksumError.\r\n\r\n## Steps to reproduce the bug\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('web_nlg', name=\"release_v3.0_en\", split=\"dev\")\r\n```\r\n\r\nGives\r\n\r\n```\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https:\/\/gitlab.com\/shimorina\/webnlg-dataset\/-\/archive\/master\/webnlg-dataset-master.zip']\r\n```\r\n\r\n## Environment info\r\n- `datasets` version: 1.8.0\r\n- Platform: macOS-11.3.1-x86_64-i386-64bit\r\n- Python version: 3.9.4\r\n- PyArrow version: 3.0.0\r\n\r\nAlso tested on Linux, with python 3.6.8","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2553\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2552","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2552\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2552\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2552\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2552","id":931354687,"node_id":"MDU6SXNzdWU5MzEzNTQ2ODc=","number":2552,"title":"Keys should be unique error on code_search_net","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Two questions:\r\n- with `datasets-cli env` we don't have any information on the dataset script version used. Should we give access to this somehow? Either as a note in the Error message or as an argument with the name of the dataset to `datasets-cli env`?\r\n- I don't really understand why the id is duplicated in the code of `code_search_net`, how can I debug this actually?","Thanks for reporting. There was indeed an issue with the keys. The key was the addition of the file id and row id, which resulted in collisions. 
I just opened a PR to fix this at https:\/\/github.com\/huggingface\/datasets\/pull\/2555\r\n\r\nTo help users debug this kind of errors we could try to show a message like this\r\n```python\r\nDuplicateKeysError: both 42th and 1337th examples have the same keys `48`.\r\nPlease fix the dataset script at \r\n```\r\n\r\nThis way users who what to look for if they want to debug this issue. I opened an issue to track this: https:\/\/github.com\/huggingface\/datasets\/issues\/2556","and are we sure there are not a lot of datasets which are now broken with this change?","Thanks to the dummy data, we know for sure that most of them work as expected.\r\n`code_search_net` wasn't caught because the dummy data only have one dummy data file while the dataset script can actually load several of them using `os.listdir`. Let me take a look at all the other datasets that use `os.listdir` to see if the keys are alright","I found one issue on `fever` (PR here: https:\/\/github.com\/huggingface\/datasets\/pull\/2557)\r\nAll the other ones seem fine :)","Hi! Got same error when loading other dataset:\r\n```python3\r\nload_dataset('wikicorpus', 'raw_en')\r\n```\r\n\r\ntb:\r\n```pytb\r\n---------------------------------------------------------------------------\r\nDuplicatedKeysError Traceback (most recent call last)\r\n\/opt\/conda\/lib\/python3.8\/site-packages\/datasets\/builder.py in _prepare_split(self, split_generator)\r\n 1109 example = self.info.features.encode_example(record)\r\n-> 1110 writer.write(example, key)\r\n 1111 finally:\r\n\r\n\/opt\/conda\/lib\/python3.8\/site-packages\/datasets\/arrow_writer.py in write(self, example, key, writer_batch_size)\r\n 341 if self._check_duplicates:\r\n--> 342 self.check_duplicate_keys()\r\n 343 # Re-intializing to empty list for next batch\r\n\r\n\/opt\/conda\/lib\/python3.8\/site-packages\/datasets\/arrow_writer.py in check_duplicate_keys(self)\r\n 352 if hash in tmp_record:\r\n--> 353 raise DuplicatedKeysError(key)\r\n 354 else:\r\n\r\nDuplicatedKeysError: FAILURE TO GENERATE DATASET !\r\nFound duplicate Key: 519\r\nKeys should be unique and deterministic in nature\r\n```\r\n\r\nVersion: datasets==1.11.0","Fixed by #2555.","The wikicorpus issue has been fixed by https:\/\/github.com\/huggingface\/datasets\/pull\/2844\r\n\r\nWe'll do a new release of `datasets` soon :)"],"created_at":1624871720000,"updated_at":1630937310000,"closed_at":1630571129000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nLoading `code_search_net` seems not possible at the moment.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n>>> load_dataset('code_search_net')\r\nDownloading: 8.50kB [00:00, 3.09MB\/s] \r\nDownloading: 19.1kB [00:00, 10.1MB\/s] \r\nNo config specified, defaulting to: code_search_net\/all\r\nDownloading and preparing dataset code_search_net\/all (download: 4.77 GiB, generated: 5.99 GiB, post-processed: Unknown size, total: 10.76 GiB) to \/Users\/thomwolf\/.cache\/huggingface\/datasets\/code_search_net\/all\/1.0.0\/b3e8278faf5d67da1d06981efbeac3b76a2900693bd2239bbca7a4a3b0d6e52a...\r\nTraceback (most recent call last): \r\n File \"\/Users\/thomwolf\/Documents\/GitHub\/datasets\/src\/datasets\/builder.py\", line 1067, in _prepare_split\r\n writer.write(example, key)\r\n File \"\/Users\/thomwolf\/Documents\/GitHub\/datasets\/src\/datasets\/arrow_writer.py\", line 343, in write\r\n self.check_duplicate_keys()\r\n File \"\/Users\/thomwolf\/Documents\/GitHub\/datasets\/src\/datasets\/arrow_writer.py\", line 354, 
in check_duplicate_keys\r\n raise DuplicatedKeysError(key)\r\ndatasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !\r\nFound duplicate Key: 48\r\nKeys should be unique and deterministic in nature\r\n```\r\n\r\n## Environment info\r\n- `datasets` version: 1.8.1.dev0\r\n- Platform: macOS-10.15.7-x86_64-i386-64bit\r\n- Python version: 3.8.5\r\n- PyArrow version: 2.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2552\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2551","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2551\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2551\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2551\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2551","id":930967978,"node_id":"MDExOlB1bGxSZXF1ZXN0Njc4NTQzMjg1","number":2551,"title":"Fix FileSystems documentation","user":{"login":"connor-mccarthy","id":55268212,"node_id":"MDQ6VXNlcjU1MjY4MjEy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/55268212?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/connor-mccarthy","html_url":"https:\/\/github.com\/connor-mccarthy","followers_url":"https:\/\/api.github.com\/users\/connor-mccarthy\/followers","following_url":"https:\/\/api.github.com\/users\/connor-mccarthy\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/connor-mccarthy\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/connor-mccarthy\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/connor-mccarthy\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/connor-mccarthy\/orgs","repos_url":"https:\/\/api.github.com\/users\/connor-mccarthy\/repos","events_url":"https:\/\/api.github.com\/users\/connor-mccarthy\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/connor-mccarthy\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1624810722000,"updated_at":1624885795000,"closed_at":1624885794000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2551","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2551","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2551.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2551.patch"},"body":"### What this fixes:\r\nThis PR resolves several issues I discovered in the documentation on the `datasets.filesystems` module ([this page](https:\/\/huggingface.co\/docs\/datasets\/filesystems.html)).\r\n\r\n### What were the issues?\r\nWhen I originally tried implementing the code examples I faced several bugs attributed to:\r\n\r\n- out of date [botocore](https:\/\/github.com\/boto\/botocore) call signatures\r\n- capitalization errors in the `S3FileSystem` class name (written as `S3Filesystem` in one place)\r\n- call signature errors for the `S3FileSystem` class constructor (uses parameter `sessions` instead of `session` in some places) (see [`s3fs`](https:\/\/s3fs.readthedocs.io\/en\/latest\/api.html#s3fs.core.S3FileSystem) for where this constructor 
signature is defined)\r\n\r\n### Testing\/reviewing notes\r\nInstructions for generating the documentation locally: [here](https:\/\/github.com\/huggingface\/datasets\/tree\/master\/docs#generating-the-documentation).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2551\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2550","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2550\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2550\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2550\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2550","id":930951287,"node_id":"MDU6SXNzdWU5MzA5NTEyODc=","number":2550,"title":"Allow for incremental cumulative metric updates in a distributed setup","user":{"login":"eladsegal","id":13485709,"node_id":"MDQ6VXNlcjEzNDg1NzA5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13485709?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/eladsegal","html_url":"https:\/\/github.com\/eladsegal","followers_url":"https:\/\/api.github.com\/users\/eladsegal\/followers","following_url":"https:\/\/api.github.com\/users\/eladsegal\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/eladsegal\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/eladsegal\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/eladsegal\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/eladsegal\/orgs","repos_url":"https:\/\/api.github.com\/users\/eladsegal\/repos","events_url":"https:\/\/api.github.com\/users\/eladsegal\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/eladsegal\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1624806058000,"updated_at":1624814189000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Currently, using a metric allows for one of the following:\r\n- Per example\/batch metrics\r\n- Cumulative metrics over the whole data\r\n\r\nWhat I'd like is to have an efficient way to get cumulative metrics over the examples\/batches added so far, in order to display it as part of the progress bar during training\/evaluation.\r\n\r\nSince most metrics are just an average of per-example metrics (which aren't?), an efficient calculation can be done as follows:\r\n`((score_cumulative * n_cumulative) + (score_new * n_new)) \/ (n_cumulative+ n_new)`\r\nwhere `n` and `score` refer to number of examples and metric score, `cumulative` refers to the cumulative metric and `new` refers to the addition of new examples.\r\n\r\nIf you don't want to add this capability in the library, a simple solution exists so users can do it themselves:\r\nIt is easy to implement for a single process setup, but in a distributed one there is no way to get the correct `n_new`.\r\nThe solution for this is to return the number of examples that was used to compute 
the metrics in `.compute()` by adding the following line here:\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/5a3221785311d0ce86c2785b765e86bd6997d516\/src\/datasets\/metric.py#L402-L403\r\n```\r\noutput[\"number_of_examples\"] = len(predictions)\r\n```\r\nand also remove the log message here so it won't spam:\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/3db67f5ff6cbf807b129d2b4d1107af27623b608\/src\/datasets\/metric.py#L411\r\n\r\nIf this change is ok with you, I'll open a pull request.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2550\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2549","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2549\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2549\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2549\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2549","id":929819093,"node_id":"MDU6SXNzdWU5Mjk4MTkwOTM=","number":2549,"title":"Handling unlabeled datasets","user":{"login":"nelson-liu","id":7272031,"node_id":"MDQ6VXNlcjcyNzIwMzE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7272031?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nelson-liu","html_url":"https:\/\/github.com\/nelson-liu","followers_url":"https:\/\/api.github.com\/users\/nelson-liu\/followers","following_url":"https:\/\/api.github.com\/users\/nelson-liu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nelson-liu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nelson-liu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nelson-liu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nelson-liu\/orgs","repos_url":"https:\/\/api.github.com\/users\/nelson-liu\/repos","events_url":"https:\/\/api.github.com\/users\/nelson-liu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nelson-liu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @nelson-liu,\r\n\r\nYou can pass the parameter `features` to `load_dataset`: https:\/\/huggingface.co\/docs\/datasets\/_modules\/datasets\/load.html#load_dataset\r\n\r\nIf you look at the code of the MNLI script you referred in your question (https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/multi_nli\/multi_nli.py#L62-L77), you can see how the Features were originally specified. 
\r\n\r\nFeel free to use it as a template, customize it and pass it to `load_dataset` using the parameter `features`.","ah got it, thanks!"],"created_at":1624595543000,"updated_at":1624655277000,"closed_at":1624655276000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi!\r\n\r\nIs there a way for datasets to produce unlabeled instances (e.g., the `ClassLabel` can be nullable).\r\n\r\nFor example, I want to use the MNLI dataset reader ( https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/multi_nli\/multi_nli.py ) on a file that doesn't have the `gold_label` field. I tried setting `\"label\": data.get(\"gold_label\")`, but got the following error:\r\n\r\n```\r\n File \"\/home\/nfliu\/miniconda3\/envs\/debias\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 748, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"\/home\/nfliu\/miniconda3\/envs\/debias\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 575, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/home\/nfliu\/miniconda3\/envs\/debias\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 652, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/home\/nfliu\/miniconda3\/envs\/debias\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 989, in _prepare_split\r\n example = self.info.features.encode_example(record)\r\n File \"\/home\/nfliu\/miniconda3\/envs\/debias\/lib\/python3.7\/site-packages\/datasets\/features.py\", line 953, in encode_example\r\n return encode_nested_example(self, example)\r\n File \"\/home\/nfliu\/miniconda3\/envs\/debias\/lib\/python3.7\/site-packages\/datasets\/features.py\", line 848, in encode_nested_example\r\n k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)\r\n File \"\/home\/nfliu\/miniconda3\/envs\/debias\/lib\/python3.7\/site-packages\/datasets\/features.py\", line 848, in \r\n k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)\r\n File \"\/home\/nfliu\/miniconda3\/envs\/debias\/lib\/python3.7\/site-packages\/datasets\/features.py\", line 875, in encode_nested_example\r\n return schema.encode_example(obj)\r\n File \"\/home\/nfliu\/miniconda3\/envs\/debias\/lib\/python3.7\/site-packages\/datasets\/features.py\", line 653, in encode_example\r\n if not -1 <= example_data < self.num_classes:\r\nTypeError: '<=' not supported between instances of 'int' and 'NoneType'\r\n```\r\n\r\nWhat's the proper way to handle reading unlabeled datasets, especially for downstream usage with Transformers?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2549\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2548","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2548\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2548\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2548\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2548","id":929232831,"node_id":"MDU6SXNzdWU5MjkyMzI4MzE=","number":2548,"title":"Field order issue in loading 
json","user":{"login":"luyug","id":55288513,"node_id":"MDQ6VXNlcjU1Mjg4NTEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/55288513?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/luyug","html_url":"https:\/\/github.com\/luyug","followers_url":"https:\/\/api.github.com\/users\/luyug\/followers","following_url":"https:\/\/api.github.com\/users\/luyug\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/luyug\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/luyug\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/luyug\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/luyug\/orgs","repos_url":"https:\/\/api.github.com\/users\/luyug\/repos","events_url":"https:\/\/api.github.com\/users\/luyug\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/luyug\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @luyug, thanks for reporting.\r\n\r\nThe good news is that we fixed this issue only 9 days ago: #2507.\r\n\r\nThe patch is already in the master branch of our repository and it will be included in our next `datasets` release version 1.9.0.\r\n\r\nFeel free to reopen the issue if the problem persists."],"created_at":1624541393000,"updated_at":1624545403000,"closed_at":1624545245000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nThe `load_dataset` function expects columns in alphabetical order when loading json files.\r\n\r\nSimilar bug was previously reported for csv in #623 and fixed in #684.\r\n## Steps to reproduce the bug\r\n\r\nFor a json file `j.json`,\r\n```\r\n{\"c\":321, \"a\": 1, \"b\": 2}\r\n```\r\nRunning the following,\r\n```\r\nf= datasets.Features({'a': Value('int32'), 'b': Value('int32'), 'c': Value('int32')})\r\njson_data = datasets.load_dataset('json', data_files='j.json', features=f)\r\n```\r\n\r\n\r\n## Expected results\r\nA successful load.\r\n## Actual results\r\n```\r\nFile \"pyarrow\/table.pxi\", line 1409, in pyarrow.lib.Table.cast\r\nValueError: Target schema's field names are not matching the table's field names: ['c', 'a', 'b'], ['a', 'b', 'c']\r\n```\r\n\r\n## Environment info\r\n- `datasets` version: 1.8.0\r\n- Platform: Linux-3.10.0-957.1.3.el7.x86_64-x86_64-with-glibc2.10\r\n- Python version: 3.8.8\r\n- PyArrow version: 3.0.0\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2548\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2547","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2547\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2547\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2547\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2547","id":929192329,"node_id":"MDU6SXNzdWU5MjkxOTIzMjk=","number":2547,"title":"Dataset load_from_disk is too 
slow","user":{"login":"alexvaca0","id":35173563,"node_id":"MDQ6VXNlcjM1MTczNTYz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35173563?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/alexvaca0","html_url":"https:\/\/github.com\/alexvaca0","followers_url":"https:\/\/api.github.com\/users\/alexvaca0\/followers","following_url":"https:\/\/api.github.com\/users\/alexvaca0\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/alexvaca0\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/alexvaca0\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/alexvaca0\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/alexvaca0\/orgs","repos_url":"https:\/\/api.github.com\/users\/alexvaca0\/repos","events_url":"https:\/\/api.github.com\/users\/alexvaca0\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/alexvaca0\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! It looks like an issue with the virtual disk you are using.\r\n\r\nWe load datasets using memory mapping. In general it makes it possible to load very big files instantaneously since it doesn't have to read the file (it just assigns virtual memory to the file on disk).\r\nHowever there happens to be issues with virtual disks (for example on spot instances), for which memory mapping does a pass over the entire file, and this takes a while. We are discussing about this issue here: #2252 \r\n\r\nMemory mapping is something handled by the OS so we can't do much about it, though we're still trying to figure out what's causing this behavior exactly to see what we can do.","Okay, that's exactly my case, with spot instances... Therefore this isn't something we can change in any way to be able to load the dataset faster? I mean, what do you do internally at huggingface for being able to use spot instances with datasets efficiently?","There are no solutions yet unfortunately.\r\nWe're still trying to figure out a way to make the loading instantaneous on such disks, I'll keep you posted"],"created_at":1624538744000,"updated_at":1624632998000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"@lhoestq \r\n## Describe the bug\r\nIt's not normal that I have to wait 7-8 hours for a dataset to be loaded from disk, as there are no preprocessing steps, it's only loading it with load_from_disk. I have 96 cpus, however only 1 is used for this, which is inefficient. Moreover, its usage is at 1%... This is happening in the context of a language model training, therefore I'm wasting 100$ each time I have to load the dataset from disk again (because the spot instance was stopped by aws and I need to relaunch it for example). \r\n\r\n## Steps to reproduce the bug\r\nJust get the oscar in spanish (around 150GGB) and try to first save in disk and then load the processed dataset. 
It's not dependent on the task you're doing, it just depends on the size of the text dataset.\r\n\r\n## Expected results\r\nI expect the dataset to be loaded in a normal time, by using the whole machine for loading it, I mean if you store the dataset in multiple files (.arrow) and then load it from multiple files, you can use multiprocessing for that and therefore don't waste so much time. \r\n\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.8.0\r\n- Platform: Ubuntu 18\r\n- Python version: 3.8\r\n\r\n\r\nI've seen you're planning to include a streaming mode for load_dataset, but that only saves the downloading and processing time, that's not being a problem for me, you cannot save the pure loading from disk time, therefore that's not a solution for my use case or for anyone who wants to use your library for training a language model. ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2547\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2546","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2546\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2546\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2546\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2546","id":929091689,"node_id":"MDExOlB1bGxSZXF1ZXN0Njc2OTk2MjQ0","number":2546,"title":"Add license to the Cambridge English Write & Improve + LOCNESS dataset card","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1624531169000,"updated_at":1624531921000,"closed_at":1624531921000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2546","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2546","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2546.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2546.patch"},"body":"As noticed in https:\/\/github.com\/huggingface\/datasets\/pull\/2539, the licensing information was missing for this dataset.\r\n\r\nI added it and I also filled a few other empty sections.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2546\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2545","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2545\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2545\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2545\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2545","id":929016580,"node_id":"MDExOlB1bGxSZXF1ZXN0Njc2OTMxOTYw","number":2545,"title":"Fix DuplicatedKeysError in drop dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1624525839000,"updated_at":1624546628000,"closed_at":1624546628000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2545","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2545","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2545.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2545.patch"},"body":"Close #2542.\r\n\r\ncc: @VictorSanh.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2545\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2544","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2544\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2544\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2544\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2544","id":928900827,"node_id":"MDExOlB1bGxSZXF1ZXN0Njc2ODM1MjYz","number":2544,"title":"Fix logging 
levels","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1624516896000,"updated_at":1624628419000,"closed_at":1624628419000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2544","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2544","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2544.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2544.patch"},"body":"Sometimes default `datasets` logging can be too verbose. One approach could be reducing some logging levels, from info to debug, or from warning to info.\r\n\r\nClose #2543.\r\n\r\ncc: @stas00 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2544\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2543","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2543\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2543\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2543\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2543","id":928571915,"node_id":"MDU6SXNzdWU5Mjg1NzE5MTU=","number":2543,"title":"switching some low-level log.info's to 
log.debug?","user":{"login":"stas00","id":10676103,"node_id":"MDQ6VXNlcjEwNjc2MTAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10676103?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stas00","html_url":"https:\/\/github.com\/stas00","followers_url":"https:\/\/api.github.com\/users\/stas00\/followers","following_url":"https:\/\/api.github.com\/users\/stas00\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stas00\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stas00\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stas00\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stas00\/orgs","repos_url":"https:\/\/api.github.com\/users\/stas00\/repos","events_url":"https:\/\/api.github.com\/users\/stas00\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stas00\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @stas00, thanks for pointing out this issue with logging.\r\n\r\nI agree that `datasets` can sometimes be too verbose... 
I can create a PR and we could discuss there the choice of the log levels for different parts of the code."],"created_at":1624476415000,"updated_at":1624628419000,"closed_at":1624628419000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"In https:\/\/github.com\/huggingface\/transformers\/pull\/12276 we are now changing the examples to have `datasets` on the same log level as `transformers`, so that one setting can do a consistent logging across all involved components.\r\n\r\nThe trouble is that now we get a ton of these:\r\n\r\n```\r\n06\/23\/2021 12:15:31 - INFO - datasets.utils.filelock - Lock 139627640431136 acquired on \/home\/stas\/.cache\/huggingface\/metrics\/sacrebleu\/default\/default_experiment-1-0.arrow.lock\r\n06\/23\/2021 12:15:31 - INFO - datasets.arrow_writer - Done writing 50 examples in 12280 bytes \/home\/stas\/.cache\/huggingface\/metrics\/sacrebleu\/default\/default_experiment-1-0.arrow.\r\n06\/23\/2021 12:15:31 - INFO - datasets.arrow_dataset - Set __getitem__(key) output type to python objects for no columns (when key is int or slice) and don't output other (un-formatted) columns.\r\n06\/23\/2021 12:15:31 - INFO - datasets.utils.filelock - Lock 139627640431136 released on \/home\/stas\/.cache\/huggingface\/metrics\/sacrebleu\/default\/default_experiment-1-0.arrow.lock\r\n```\r\n\r\nMay I suggest that these can be `log.debug` as it's no informative to the user.\r\n\r\nMore examples: these are not informative - too much information:\r\n```\r\n06\/23\/2021 12:14:26 - INFO - datasets.load - Checking \/home\/stas\/.cache\/huggingface\/datasets\/downloads\/459933f1fe47711fad2f6ff8110014ff189120b45ad159ef5b8e90ea43a174fa.e23e7d1259a8c6274a82a42a8936dd1b87225302c6dc9b7261beb3bc2daac640.py for additional imports.\r\n06\/23\/2021 12:14:27 - INFO - datasets.builder - Constructing Dataset for split train, validation, test, from \/home\/stas\/.cache\/huggingface\/datasets\/wmt16\/ro-en\/1.0.0\/0d9fb3e814712c785176ad8cdb9f465fbe6479000ee6546725db30ad8a8b5f8a\r\n\r\n```\r\n\r\nWhile these are:\r\n```\r\n06\/23\/2021 12:14:27 - INFO - datasets.info - Loading Dataset Infos from \/home\/stas\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/wmt16\/0d9fb3e814712c785176ad8cdb9f465fbe6479000ee6546725db30ad8a8b5f8a\r\n06\/23\/2021 12:14:27 - WARNING - datasets.builder - Reusing dataset wmt16 (\/home\/stas\/.cache\/huggingface\/datasets\/wmt16\/ro-en\/1.0.0\/0d9fb3e814712c785176ad8cdb9f465fbe6479000ee6546725db30ad8a8b5f8a)\r\n```\r\n\r\nI also realize that `transformers` examples don't have do use `info` for `datasets` to let the default `warning` keep logging to less noisy.\r\n\r\nBut I think currently the log levels are slightly misused and skewed by 1 level. Many `warnings` will better be `info`s and most `info`s be `debug`.\r\n\r\ne.g.:\r\n\r\n```\r\n06\/23\/2021 12:14:27 - WARNING - datasets.builder - Reusing dataset wmt16 (\/home\/stas\/.cache\/huggingface\/datasets\/wmt16\/ro-en\/1.0.0\/0d9fb3e814712c785176ad8cdb9f465fbe6479000ee6546725db30ad8a8b5f8a)\r\n```\r\n\r\nwhy is this a warning? it is informing me that the cache is used, there is nothing to be worried about. I'd have it as `info`.\r\n\r\nWarnings are typically something that's bordering error or the first thing to check when things don't work as expected.\r\n\r\ninfrequent info is there to inform of the different stages or important events.\r\n\r\nEverything else is debug.\r\n\r\nAt least the way I understand things. 
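In the meantime, a user can turn the verbosity down from their own code (a small sketch using the `datasets` logging helpers; the warning level is just an example choice):

```python
from datasets.utils import logging as datasets_logging

# Show only warnings and errors from `datasets` (filelock, arrow_writer, ...);
# switch back to set_verbosity_info() or set_verbosity_debug() when debugging.
datasets_logging.set_verbosity_warning()
```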
\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2543\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2542","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2542\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2542\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2542\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2542","id":928540382,"node_id":"MDU6SXNzdWU5Mjg1NDAzODI=","number":2542,"title":"`datasets.keyhash.DuplicatedKeysError` for `drop` and `adversarial_qa\/adversarialQA`","user":{"login":"VictorSanh","id":16107619,"node_id":"MDQ6VXNlcjE2MTA3NjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16107619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/VictorSanh","html_url":"https:\/\/github.com\/VictorSanh","followers_url":"https:\/\/api.github.com\/users\/VictorSanh\/followers","following_url":"https:\/\/api.github.com\/users\/VictorSanh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/VictorSanh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/VictorSanh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/VictorSanh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/VictorSanh\/orgs","repos_url":"https:\/\/api.github.com\/users\/VictorSanh\/repos","events_url":"https:\/\/api.github.com\/users\/VictorSanh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/VictorSanh\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["very much related: https:\/\/github.com\/huggingface\/datasets\/pull\/2333","Hi @VictorSanh, thank you for reporting this issue with duplicated keys.\r\n\r\n- The issue with \"adversarial_qa\" was fixed 23 days ago: #2433. Current version of `datasets` (1.8.0) includes the patch.\r\n- I am investigating the issue with `drop`. 
I'll ping you to keep you informed.","Hi @VictorSanh, the issue is already fixed and merged into master branch and will be included in our next release version 1.9.0.","thank you!"],"created_at":1624473676000,"updated_at":1624657805000,"closed_at":1624546628000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nFailure to generate the datasets (`drop` and subset `adversarialQA` from `adversarial_qa`) because of duplicate keys.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\nload_dataset(\"drop\")\r\nload_dataset(\"adversarial_qa\", \"adversarialQA\")\r\n```\r\n\r\n## Expected results\r\nThe examples keys should be unique.\r\n\r\n## Actual results\r\n```bash\r\n>>> load_dataset(\"drop\")\r\nUsing custom data configuration default\r\nDownloading and preparing dataset drop\/default (download: 7.92 MiB, generated: 111.88 MiB, post-processed: Unknown size, total: 119.80 MiB) to \/home\/hf\/.cache\/huggingface\/datasets\/drop\/default\/0.1.0\/7a94f1e2bb26c4b5c75f89857c06982967d7416e5af935a9374b9bccf5068026...\r\nTraceback (most recent call last): \r\n File \"\", line 1, in \r\n File \"\/home\/hf\/dev\/promptsource\/.venv\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 751, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"\/home\/hf\/dev\/promptsource\/.venv\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 575, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/home\/hf\/dev\/promptsource\/.venv\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 652, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/home\/hf\/dev\/promptsource\/.venv\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 992, in _prepare_split\r\n num_examples, num_bytes = writer.finalize()\r\n File \"\/home\/hf\/dev\/promptsource\/.venv\/lib\/python3.7\/site-packages\/datasets\/arrow_writer.py\", line 409, in finalize\r\n self.check_duplicate_keys()\r\n File \"\/home\/hf\/dev\/promptsource\/.venv\/lib\/python3.7\/site-packages\/datasets\/arrow_writer.py\", line 349, in check_duplicate_keys\r\n raise DuplicatedKeysError(key)\r\ndatasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !\r\nFound duplicate Key: 28553293-d719-441b-8f00-ce3dc6df5398\r\nKeys should be unique and deterministic in nature\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.7.0\r\n- Platform: Linux-5.4.0-1044-gcp-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.10\r\n- PyArrow version: 3.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2542\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2541","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2541\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2541\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2541\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2541","id":928529078,"node_id":"MDExOlB1bGxSZXF1ZXN0Njc2NTIwNDgx","number":2541,"title":"update discofuse link cc 
@ekQ","user":{"login":"VictorSanh","id":16107619,"node_id":"MDQ6VXNlcjE2MTA3NjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16107619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/VictorSanh","html_url":"https:\/\/github.com\/VictorSanh","followers_url":"https:\/\/api.github.com\/users\/VictorSanh\/followers","following_url":"https:\/\/api.github.com\/users\/VictorSanh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/VictorSanh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/VictorSanh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/VictorSanh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/VictorSanh\/orgs","repos_url":"https:\/\/api.github.com\/users\/VictorSanh\/repos","events_url":"https:\/\/api.github.com\/users\/VictorSanh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/VictorSanh\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The CI is failing because the dataset tags for `discofuse` are missing. I'm merging this PR since this is unrelated to this PR, but feel free to open another PR to add the tags here if you have some time:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/19408f9fab85c79b966085574cd2da3b90959179\/datasets\/discofuse\/README.md#L1-L5\r\n\r\nThe missing tags are:\r\n```\r\n'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'pretty_name', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n```\r\nThanks again !"],"created_at":1624472698000,"updated_at":1624890891000,"closed_at":1624890890000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2541","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2541","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2541.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2541.patch"},"body":"Updating the discofuse link: https:\/\/github.com\/google-research-datasets\/discofuse\/commit\/fd4b120cb3dd19a417e7f3b5432010b574b5eeee","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2541\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2540","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2540\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2540\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2540\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2540","id":928433892,"node_id":"MDExOlB1bGxSZXF1ZXN0Njc2NDM5NTM1","number":2540,"title":"Remove task templates if required features are removed during 
`Dataset.map`","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1624465225000,"updated_at":1624545675000,"closed_at":1624541643000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2540","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2540","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2540.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2540.patch"},"body":"This PR fixes a bug reported by @craffel where removing a dataset's columns during `Dataset.map` triggered a `KeyError` because the `TextClassification` template tried to access the removed columns during `DatasetInfo.__post_init__`:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# `yelp_polarity` comes with a `TextClassification` template\r\nds = load_dataset(\"yelp_polarity\", split=\"test\")\r\nds\r\n# Dataset({\r\n# features: ['text', 'label'],\r\n# num_rows: 38000\r\n# })\r\n\r\n# Triggers KeyError: 'label' - oh noes!\r\nds.map(lambda x: {\"inputs\": 0}, remove_columns=ds.column_names)\r\n```\r\n\r\nI wrote a unit test to make sure I could reproduce the error and then patched a fix.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2540\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2539","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2539\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2539\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2539\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2539","id":927952429,"node_id":"MDExOlB1bGxSZXF1ZXN0Njc2MDI5MDY5","number":2539,"title":"remove wi_locness dataset due to licensing 
issues","user":{"login":"aseifert","id":4944799,"node_id":"MDQ6VXNlcjQ5NDQ3OTk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4944799?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/aseifert","html_url":"https:\/\/github.com\/aseifert","followers_url":"https:\/\/api.github.com\/users\/aseifert\/followers","following_url":"https:\/\/api.github.com\/users\/aseifert\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/aseifert\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/aseifert\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/aseifert\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/aseifert\/orgs","repos_url":"https:\/\/api.github.com\/users\/aseifert\/repos","events_url":"https:\/\/api.github.com\/users\/aseifert\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/aseifert\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! I'm sorry to hear that.\r\nThough we are not redistributing the dataset, we just provide a python script that downloads and process the dataset from its original source hosted at https:\/\/www.cl.cam.ac.uk\r\n\r\nTherefore I'm not sure what's the issue with licensing. What do you mean exactly ?","I think that the main issue is that the licesenses of the data are not made clear in the huggingface hub \u2013\u00a0other people wrongly assumed that the data was license-free, which resulted in commercial use, which is against the licenses.\r\nIs it possible to add the licenses from the original download to huggingface? that would help clear any confusion (licenses can be found here: https:\/\/www.cl.cam.ac.uk\/research\/nl\/bea2019st\/data\/wi+locness_v2.1.bea19.tar.gz)","Thanks for the clarification @SimonHFL \r\nYou're completely right, we need to show the licenses.\r\nI just added them here: https:\/\/huggingface.co\/datasets\/wi_locness#licensing-information","Hi guys, I'm one of the authors of this dataset. \r\n\r\nTo clarify, we're happy for you to keep the data in the repo on 2 conditions:\r\n1. You don't host the data yourself.\r\n2. You make it clear that anyone who downloads the data via HuggingFace should read and abide by the license. \r\n\r\nI think you've now met these conditions, so we're all good, but I just wanted to make it clear in case there are any issues in the future. Thanks again to @aseifert for bringing this to our attention! :)","Thanks for your message @chrisjbryant :)\r\nI'm closing this PR then.\r\n\r\nAnd thanks for reporting @aseifert"],"created_at":1624433732000,"updated_at":1624632762000,"closed_at":1624632762000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2539","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2539","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2539.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2539.patch"},"body":"It was brought to my attention that this dataset's license is not only missing, but also prohibits redistribution. 
I contacted the original author to apologize for this oversight and asked if we could still use it, but unfortunately we can't and the author kindly asked to take down this dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2539\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2538","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2538\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2538\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2538\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2538","id":927940691,"node_id":"MDU6SXNzdWU5Mjc5NDA2OTE=","number":2538,"title":"Loading partial dataset when debugging","user":{"login":"reachtarunhere","id":9061913,"node_id":"MDQ6VXNlcjkwNjE5MTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9061913?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/reachtarunhere","html_url":"https:\/\/github.com\/reachtarunhere","followers_url":"https:\/\/api.github.com\/users\/reachtarunhere\/followers","following_url":"https:\/\/api.github.com\/users\/reachtarunhere\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/reachtarunhere\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/reachtarunhere\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/reachtarunhere\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/reachtarunhere\/orgs","repos_url":"https:\/\/api.github.com\/users\/reachtarunhere\/repos","events_url":"https:\/\/api.github.com\/users\/reachtarunhere\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/reachtarunhere\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! `load_dataset` downloads the full dataset once and caches it, so that subsequent calls to `load_dataset` just reloads the dataset from your disk.\r\nThen when you specify a `split` in `load_dataset`, it will just load the requested split from the disk. If your specified split is a sliced split (e.g. `\"train[:10]\"`), then it will load the 10 first rows of the train split that you have on disk.\r\n\r\nTherefore, as long as you don't delete your cache, all your calls to `load_dataset` will be very fast. Except the first call that downloads the dataset of course ^^","That\u2019s a use case for the new streaming feature, no?","Hi @reachtarunhere.\r\n\r\nBesides the above insights provided by @lhoestq and @thomwolf, there is also a Dataset feature in progress (I plan to finish it this week): #2249, which will allow you, when calling `load_dataset`, to pass the option to download\/preprocess\/cache only some specific split(s), which will definitely speed up your workflow.\r\n\r\nIf this feature is interesting for you, I can ping you once it will be merged into the master branch.","Thanks all for responding.\r\n\r\nHey @albertvillanova \r\n\r\nThanks. Yes, I would be interested.\r\n\r\n@lhoestq I think even if a small split is specified it loads up the full dataset from the disk (please correct me if this is not the case). Because it does seem to be slow to me even on subsequent calls. 
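(For reference, the two lightweight options mentioned above look like this in code; `imdb` is the dataset named in this issue:)

```python
from datasets import load_dataset

# Sliced split: only the first 100 cached rows of the train split are loaded.
small_train = load_dataset("imdb", split="train[:100]")

# Or load the full split and keep a fraction of the indices afterwards.
train = load_dataset("imdb", split="train")
subset = train.select(range(100))
```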
There is no repeated downloading so it seems that the cache is working.\r\n\r\nI am not aware of the streaming feature @thomwolf mentioned. So I might need to read up on it.","@reshinthadithyan I use the .select function to have a fraction of indices."],"created_at":1624432792000,"updated_at":1627567833000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I am using PyTorch Lightning along with datasets (thanks for so many datasets already prepared and the great splits). \r\n\r\nEvery time I execute load_dataset for the imdb dataset it takes some time even if I specify a split involving very few samples. I guess this due to hashing as per the other issues.\r\n\r\nIs there a way to only load part of the dataset on load_dataset? This would really speed up my workflow.\r\nSomething like a debug mode would really help. Thanks!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2538\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2537","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2537\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2537\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2537\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2537","id":927472659,"node_id":"MDExOlB1bGxSZXF1ZXN0Njc1NjI1OTY3","number":2537,"title":"Add Parquet loader + from_parquet and to_parquet","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["`pyarrow` 1.0.0 doesn't support some types in parquet, we'll have to bump its minimum version.\r\n\r\nAlso I still need to add dummy data to test the parquet builder.","I had to bump the minimum pyarrow version to 3.0.0 to properly support parquet.\r\n\r\nEverything is ready for review now :)\r\nI reused pretty much the same tests we had for CSV","Done !\r\nNow we're still allowing pyarrow>=1.0.0, but when users want to use parquet features they're asked to update to 
pyarrow>=3.0.0"],"created_at":1624382903000,"updated_at":1625070663000,"closed_at":1625070658000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2537","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2537","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2537.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2537.patch"},"body":"Continuation of #2247 \r\n\r\nI added a \"parquet\" dataset builder, as well as the methods `Dataset.from_parquet` and `Dataset.to_parquet`.\r\nAs usual, the data are converted to arrow in a batched way to avoid loading everything in memory.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2537\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2536","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2536\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2536\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2536\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2536","id":927338639,"node_id":"MDU6SXNzdWU5MjczMzg2Mzk=","number":2536,"title":"Use `Audio` features for `AutomaticSpeechRecognition` task template","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or 
request"}],"state":"open","locked":false,"assignee":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["I'm just retaking and working on #2324. \ud83d\ude09 "],"created_at":1624374441000,"updated_at":1624375011000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"In #2533 we added a task template for speech recognition that relies on the file paths to the audio files. As pointed out by @SBrandeis this is brittle as it doesn't port easily across different OS'. \r\n\r\nThe solution is to use dedicated `Audio` features when casting the dataset. 
These features are not yet available in `datasets`, but should be included in the `AutomaticSpeechRecognition` template once they are.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2536\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2535","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2535\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2535\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2535\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2535","id":927334349,"node_id":"MDExOlB1bGxSZXF1ZXN0Njc1NTA3MTAw","number":2535,"title":"Improve Features docs","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1624374207000,"updated_at":1624455643000,"closed_at":1624455643000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2535","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2535","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2535.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2535.patch"},"body":"- Fix rendering and cross-references in Features docs\r\n- Add docstrings to Features methods","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2535\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2534","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2534\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2534\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2534\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2534","id":927201435,"node_id":"MDExOlB1bGxSZXF1ZXN0Njc1MzkzODg0","number":2534,"title":"Sync with transformers disabling 
NOTSET","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Nice thanks ! I think there are other places with\r\n```python\r\nnot_verbose = bool(logger.getEffectiveLevel() > WARNING)\r\n```\r\n\r\nCould you replace them as well ?","Sure @lhoestq! I was not sure if this change should only be circumscribed to `http_get`..."],"created_at":1624366461000,"updated_at":1624545767000,"closed_at":1624545767000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2534","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2534","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2534.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2534.patch"},"body":"Close #2528.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2534\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2533","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2533\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2533\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2533\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2533","id":927193264,"node_id":"MDExOlB1bGxSZXF1ZXN0Njc1Mzg2OTMw","number":2533,"title":"Add task template for automatic speech 
recognition","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@SBrandeis @lhoestq i've integrated your suggestions, so this is ready for another review :)","Merging if it's good for you @lewtun :)"],"created_at":1624365902000,"updated_at":1624464886000,"closed_at":1624463817000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2533","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2533","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2533.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2533.patch"},"body":"This PR adds a task template for automatic speech recognition. In this task, the input is a path to an audio file which the model consumes to produce a transcription.\r\n\r\nUsage:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nfrom datasets.tasks import AutomaticSpeechRecognition\r\n\r\nds = load_dataset(\"timit_asr\", split=\"train[:10]\")\r\n# Dataset({\r\n# features: ['file', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'],\r\n# num_rows: 10\r\n# })\r\n\r\ntask = AutomaticSpeechRecognition(audio_file_column=\"file\", transcription_column=\"text\")\r\nds.prepare_for_task(task)\r\n# Dataset({\r\n# features: ['audio_file', 'transcription'],\r\n# num_rows: 10\r\n# })\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2533\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2532","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2532\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2532\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2532\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2532","id":927063196,"node_id":"MDU6SXNzdWU5MjcwNjMxOTY=","number":2532,"title":"Tokenizer's normalization preprocessor cause misalignment in return_offsets_mapping for tokenizer classification 
task","user":{"login":"jerryIsHere","id":50871412,"node_id":"MDQ6VXNlcjUwODcxNDEy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/50871412?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jerryIsHere","html_url":"https:\/\/github.com\/jerryIsHere","followers_url":"https:\/\/api.github.com\/users\/jerryIsHere\/followers","following_url":"https:\/\/api.github.com\/users\/jerryIsHere\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jerryIsHere\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jerryIsHere\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jerryIsHere\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jerryIsHere\/orgs","repos_url":"https:\/\/api.github.com\/users\/jerryIsHere\/repos","events_url":"https:\/\/api.github.com\/users\/jerryIsHere\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jerryIsHere\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @jerryIsHere, thanks for reporting the issue. But are you sure this is a bug in HuggingFace **Datasets**?","> Hi @jerryIsHere, thanks for reporting the issue. But are you sure this is a bug in HuggingFace **Datasets**?\r\n\r\nOh, I am sorry\r\nI would reopen the post on huggingface\/transformers"],"created_at":1624356498000,"updated_at":1624425445000,"closed_at":1624425445000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"[This colab notebook](https:\/\/colab.research.google.com\/drive\/151gKyo0YIwnlznrOHst23oYH_a3mAe3Z?usp=sharing) implements a token classification input pipeline extending the logic from [this hugging example](https:\/\/huggingface.co\/transformers\/custom_datasets.html#tok-ner).\r\n\r\nThe pipeline works fine with most instance in different languages, but unfortunately, [the Japanese Kana ligature (a form of abbreviation? I don't know Japanese well)](https:\/\/en.wikipedia.org\/wiki\/Kana_ligature) break the alignment of `return_offsets_mapping`:\r\n![image](https:\/\/user-images.githubusercontent.com\/50871412\/122904371-db192700-d382-11eb-8917-1775db76db69.png)\r\n\r\nWithout the try catch block, it riase `ValueError: NumPy boolean array indexing assignment cannot assign 88 input values to the 87 output values where the mask is true`, example shown here [(another colab notebook)](https:\/\/colab.research.google.com\/drive\/1MmOqf3ppzzdKKyMWkn0bJy6DqzOO0SSm?usp=sharing)\r\n\r\nIt is clear that the normalizer is the process that break the alignment, as it is observed that `tokenizer._tokenizer.normalizer.normalize_str('\u30ff')` return '\u30b3\u30c8'.\r\n\r\nOne workaround is to include `tokenizer._tokenizer.normalizer.normalize_str` before the tokenizer preprocessing pipeline, which is also provided in the [first colab notebook](https:\/\/colab.research.google.com\/drive\/151gKyo0YIwnlznrOHst23oYH_a3mAe3Z?usp=sharing) with the name `udposTestDatasetWorkaround`.\r\n\r\nI guess similar logics should be included inside the tokenizer and the offsets_mapping generation process such that user don't need to include them in their code. 
But I don't understand the code of tokenizer well that I think I am not able to do this.\r\n\r\np.s.\r\n**I am using my own dataset building script in the provided example, but the script should be equivalent to the changes made by this [update](https:\/\/github.com\/huggingface\/datasets\/pull\/2466)**\r\n`get_dataset `is just a simple wrapping for `load_dataset`\r\nand the `tokenizer` is just `XLMRobertaTokenizerFast.from_pretrained(\"xlm-roberta-large\")`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2532\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2531","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2531\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2531\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2531\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2531","id":927017924,"node_id":"MDExOlB1bGxSZXF1ZXN0Njc1MjM2MDYz","number":2531,"title":"Fix dev version","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1624353430000,"updated_at":1624355230000,"closed_at":1624355229000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2531","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2531","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2531.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2531.patch"},"body":"The dev version that ends in `.dev0` should be greater than the current version.\r\nHowever it happens that `1.8.0 > 1.8.0.dev0` for example.\r\nTherefore we need to use `1.8.1.dev0` for example in this case.\r\n\r\nI updated the dev version to use `1.8.1.dev0`, and I also added a comment in the setup.py in the release steps about this.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2531\/timeline","performed_via_github_app":null,"is_pull_request":true} 
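As an illustration of the version-ordering point in the "Fix dev version" record above: under PEP 440 a `.dev0` pre-release sorts before the corresponding final release, which is why `1.8.0.dev0` compares as smaller than `1.8.0` and the dev version has to be bumped to `1.8.1.dev0`. A minimal check, assuming the `packaging` library is available (this snippet is illustrative and not taken from the PR itself):

```python
from packaging import version

# A ".dev0" release sorts *before* the matching final release under PEP 440,
# so a dev version based on the current release is not "greater" than it.
print(version.parse("1.8.0.dev0") < version.parse("1.8.0"))   # True
print(version.parse("1.8.1.dev0") > version.parse("1.8.0"))   # True
```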
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2530","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2530\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2530\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2530\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2530","id":927013773,"node_id":"MDExOlB1bGxSZXF1ZXN0Njc1MjMyNDk0","number":2530,"title":"Fixed label parsing in the ProductReviews dataset","user":{"login":"yavuzKomecoglu","id":5150963,"node_id":"MDQ6VXNlcjUxNTA5NjM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5150963?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yavuzKomecoglu","html_url":"https:\/\/github.com\/yavuzKomecoglu","followers_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/followers","following_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/orgs","repos_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/repos","events_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq, can you please review this PR?\r\nWhat exactly is the problem in the test case? Should it matter?","Hi ! Thanks for fixing this :)\r\n\r\nThe CI fails for two reasons:\r\n- the `pretty_name` tag is missing in yaml tags in .\/datasets\/turkish_product_reviews\/README.md. You can fix that by adding this in the yaml tags:\r\n```yaml\r\npretty_name: Turkish Product Reviews\r\n```\r\n- The test that runs the turkish_product_reviews.py file on the dummy_data.zip data returned 0 examples. Indeed it looks like you changed dummy_data.zip file and now it is an empty zip file. I think you can fix that by reverting your change to the dummy_data.zip file","> Hi ! Thanks for fixing this :)\r\n> \r\n> The CI fails for two reasons:\r\n> \r\n> * the `pretty_name` tag is missing in yaml tags in .\/datasets\/turkish_product_reviews\/README.md. You can fix that by adding this in the yaml tags:\r\n> \r\n> \r\n> ```yaml\r\n> pretty_name: Turkish Product Reviews\r\n> ```\r\n> \r\n> * The test that runs the turkish_product_reviews.py file on the dummy_data.zip data returned 0 examples. Indeed it looks like you changed dummy_data.zip file and now it is an empty zip file. I think you can fix that by reverting your change to the dummy_data.zip file\r\n\r\nMany thanks for the quick feedback.\r\nI made the relevant fixes but still got the error :(","> Thanks !\r\n> The CI was failing because of the dataset card that was missing some sections. I fixed that.\r\n> \r\n> It's all good now\r\n\r\nSuper. 
Thanks for the support."],"created_at":1624353165000,"updated_at":1624366520000,"closed_at":1624366360000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2530","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2530","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2530.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2530.patch"},"body":"Fixed issue with parsing dataset labels. ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2530\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2529","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2529\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2529\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2529\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2529","id":926378812,"node_id":"MDExOlB1bGxSZXF1ZXN0Njc0NjkxNjA5","number":2529,"title":"Add summarization template","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> Nice thanks !\r\n> Could you just move the test outside of the BaseDatasetTest class please ? Otherwise it will unnecessarily be run twice.\r\n\r\nsure, on it! thanks for the explanations about the `self._to` method :)","@lhoestq i've moved all the task template tests outside of `BaseDatasetTest` and collected them in their dedicated test case. (at some point i'll revisit this so we can just use `pytest` natively, but the PR is already getting out-of-scope :))"],"created_at":1624291711000,"updated_at":1624458131000,"closed_at":1624455010000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2529","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2529","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2529.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2529.patch"},"body":"This PR adds a task template for text summarization. 
As far as I can tell, we do not need to distinguish between \"extractive\" or \"abstractive\" summarization - both can be handled with this template.\r\n\r\nUsage:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nfrom datasets.tasks import Summarization\r\n\r\nds = load_dataset(\"xsum\", split=\"train\")\r\n# Dataset({\r\n# features: ['document', 'summary', 'id'],\r\n# num_rows: 204045\r\n# })\r\n\r\nsummarization = Summarization(text_column=\"document\", summary_column=\"summary\")\r\nds.prepare_for_task(summarization)\r\n# Dataset({\r\n# features: ['text', 'summary'],\r\n# num_rows: 204045\r\n# })\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2529\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2528","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2528\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2528\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2528\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2528","id":926314656,"node_id":"MDU6SXNzdWU5MjYzMTQ2NTY=","number":2528,"title":"Logging cannot be set to NOTSET similar to transformers","user":{"login":"joshzwiebel","id":34662010,"node_id":"MDQ6VXNlcjM0NjYyMDEw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/34662010?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/joshzwiebel","html_url":"https:\/\/github.com\/joshzwiebel","followers_url":"https:\/\/api.github.com\/users\/joshzwiebel\/followers","following_url":"https:\/\/api.github.com\/users\/joshzwiebel\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/joshzwiebel\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/joshzwiebel\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/joshzwiebel\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/joshzwiebel\/orgs","repos_url":"https:\/\/api.github.com\/users\/joshzwiebel\/repos","events_url":"https:\/\/api.github.com\/users\/joshzwiebel\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/joshzwiebel\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @joshzwiebel, thanks for reporting. We are going to align with `transformers`."],"created_at":1624287894000,"updated_at":1624545767000,"closed_at":1624545767000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nIn the transformers library you can set the verbosity level to logging.NOTSET to work around the usage of tqdm and IPywidgets, however in Datasets this is no longer possible. 
This is because transformers set the verbosity level of tqdm with [this](https:\/\/github.com\/huggingface\/transformers\/blob\/b53bc55ba9bb10d5ee279eab51a2f0acc5af2a6b\/src\/transformers\/file_utils.py#L1449) \r\n`disable=bool(logging.get_verbosity() == logging.NOTSET)`\r\nand datasets accomplishes this like [so](https:\/\/github.com\/huggingface\/datasets\/blob\/83554e410e1ab8c6f705cfbb2df7953638ad3ac1\/src\/datasets\/utils\/file_utils.py#L493)\r\n`not_verbose = bool(logger.getEffectiveLevel() > WARNING)`\r\n## Steps to reproduce the bug\r\n```python\r\nimport datasets\r\nimport logging\r\ndatasets.logging.get_verbosity = lambda : logging.NOTSET\r\ndatasets.load_dataset(\"patrickvonplaten\/librispeech_asr_dummy\")\r\n```\r\n\r\n## Expected results\r\nThe code should download and load the dataset as normal without displaying progress bars\r\n\r\n## Actual results\r\n```ImportError Traceback (most recent call last)\r\n in \r\n----> 1 datasets.load_dataset(\"patrickvonplaten\/librispeech_asr_dummy\")\r\n\r\n~\/venv\/lib\/python3.7\/site-packages\/datasets\/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, **config_kwargs)\r\n 713 dataset=True,\r\n 714 return_resolved_file_path=True,\r\n--> 715 use_auth_token=use_auth_token,\r\n 716 )\r\n 717 # Set the base path for downloads as the parent of the script location\r\n\r\n~\/venv\/lib\/python3.7\/site-packages\/datasets\/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, **download_kwargs)\r\n 350 file_path = hf_bucket_url(path, filename=name, dataset=False)\r\n 351 try:\r\n--> 352 local_path = cached_path(file_path, download_config=download_config)\r\n 353 except FileNotFoundError:\r\n 354 raise FileNotFoundError(\r\n\r\n~\/venv\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)\r\n 289 use_etag=download_config.use_etag,\r\n 290 max_retries=download_config.max_retries,\r\n--> 291 use_auth_token=download_config.use_auth_token,\r\n 292 )\r\n 293 elif os.path.exists(url_or_filename):\r\n\r\n~\/venv\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)\r\n 668 headers=headers,\r\n 669 cookies=cookies,\r\n--> 670 max_retries=max_retries,\r\n 671 )\r\n 672 \r\n\r\n~\/venv\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py in http_get(url, temp_file, proxies, resume_size, headers, cookies, timeout, max_retries)\r\n 493 initial=resume_size,\r\n 494 desc=\"Downloading\",\r\n--> 495 disable=not_verbose,\r\n 496 )\r\n 497 for chunk in response.iter_content(chunk_size=1024):\r\n\r\n~\/venv\/lib\/python3.7\/site-packages\/tqdm\/notebook.py in __init__(self, *args, **kwargs)\r\n 217 total = self.total * unit_scale if self.total else self.total\r\n 218 self.container = self.status_printer(\r\n--> 219 self.fp, total, self.desc, self.ncols)\r\n 220 self.sp = self.display\r\n 221 \r\n\r\n~\/venv\/lib\/python3.7\/site-packages\/tqdm\/notebook.py in status_printer(_, total, desc, ncols)\r\n 95 if IProgress is None: # #187 #451 #558 #872\r\n 96 raise ImportError(\r\n---> 97 \"IProgress not found. 
Please update jupyter and ipywidgets.\"\r\n 98 \" See https:\/\/ipywidgets.readthedocs.io\/en\/stable\"\r\n 99 \"\/user_install.html\")\r\n\r\nImportError: IProgress not found. Please update jupyter and ipywidgets. See https:\/\/ipywidgets.readthedocs.io\/en\/stable\/user_install.html\r\n```\r\n## Environment info\r\n\r\n- `datasets` version: 1.8.0\r\n- Platform: Linux-5.4.95-42.163.amzn2.x86_64-x86_64-with-debian-10.8\r\n- Python version: 3.7.10\r\n- PyArrow version: 3.0.0\r\nI am running this code on Deepnote and which important to this issue **does not** support IPywidgets\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2528\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2527","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2527\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2527\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2527\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2527","id":926031525,"node_id":"MDExOlB1bGxSZXF1ZXN0Njc0MzkzNjQ5","number":2527,"title":"Replace bad `n>1M` size tag","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1624268555000,"updated_at":1624288010000,"closed_at":1624288009000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2527","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2527","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2527.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2527.patch"},"body":"Some datasets were still using the old `n>1M` tag which has been replaced with tags `1M1M`.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2527\/timeline","performed_via_github_app":null,"is_pull_request":true} 
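To make the verbosity mismatch described in issue 2528 above concrete: `logging.NOTSET` is the numerically lowest level (0), so a `getEffectiveLevel() > WARNING` test can never treat it as "quiet", while the `transformers`-style check quoted in the issue compares against `NOTSET` explicitly. A minimal sketch of the two checks, with illustrative variable names rather than the actual `datasets` code:

```python
import logging

verbosity = logging.NOTSET  # 0; logging.WARNING is 30

# Check quoted from datasets in the issue: NOTSET is *not* above WARNING,
# so the progress bar stays enabled even though the user asked for silence.
not_verbose = bool(verbosity > logging.WARNING)       # False

# Check quoted from transformers in the issue: disable tqdm when NOTSET.
disable_tqdm = bool(verbosity == logging.NOTSET)      # True

print(not_verbose, disable_tqdm)
```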
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2526","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2526\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2526\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2526\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2526","id":925929228,"node_id":"MDU6SXNzdWU5MjU5MjkyMjg=","number":2526,"title":"Add COCO datasets","user":{"login":"NielsRogge","id":48327001,"node_id":"MDQ6VXNlcjQ4MzI3MDAx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/48327001?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/NielsRogge","html_url":"https:\/\/github.com\/NielsRogge","followers_url":"https:\/\/api.github.com\/users\/NielsRogge\/followers","following_url":"https:\/\/api.github.com\/users\/NielsRogge\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/NielsRogge\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/NielsRogge\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/NielsRogge\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/NielsRogge\/orgs","repos_url":"https:\/\/api.github.com\/users\/NielsRogge\/repos","events_url":"https:\/\/api.github.com\/users\/NielsRogge\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/NielsRogge\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1624261712000,"updated_at":1624261712000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** COCO\r\n- **Description:** COCO is a large-scale object detection, segmentation, and captioning dataset.\r\n- **Paper + website:** https:\/\/cocodataset.org\/#home\r\n- **Data:** https:\/\/cocodataset.org\/#download\r\n- **Motivation:** It would be great to have COCO available in HuggingFace datasets, as we are moving beyond just text. COCO includes multi-modalities (images + text), as well as a huge amount of images annotated with objects, segmentation masks, keypoints etc., on which models like DETR (which I recently added to HuggingFace Transformers) are trained. 
Currently, one needs to download everything from the website and place it in a local folder, but it would be much easier if we can directly access it through the datasets API.\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2526\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2525","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2525\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2525\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2525\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2525","id":925896358,"node_id":"MDExOlB1bGxSZXF1ZXN0Njc0Mjc5MTgy","number":2525,"title":"Use scikit-learn package rather than sklearn in setup.py","user":{"login":"lesteve","id":1680079,"node_id":"MDQ6VXNlcjE2ODAwNzk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1680079?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lesteve","html_url":"https:\/\/github.com\/lesteve","followers_url":"https:\/\/api.github.com\/users\/lesteve\/followers","following_url":"https:\/\/api.github.com\/users\/lesteve\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lesteve\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lesteve\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lesteve\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lesteve\/orgs","repos_url":"https:\/\/api.github.com\/users\/lesteve\/repos","events_url":"https:\/\/api.github.com\/users\/lesteve\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lesteve\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1624259065000,"updated_at":1624269673000,"closed_at":1624265853000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2525","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2525","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2525.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2525.patch"},"body":"The sklearn package is an historical thing and should probably not be used by anyone, see https:\/\/github.com\/scikit-learn\/scikit-learn\/issues\/8215#issuecomment-344679114 for some caveats.\r\n\r\nNote: this affects only TESTS_REQUIRE so I guess only developers not end users.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2525\/timeline","performed_via_github_app":null,"is_pull_request":true} 
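A hypothetical sketch of the kind of change PR 2525 above describes, swapping the historical `sklearn` PyPI alias for the real `scikit-learn` package in a test-requirements list; the actual entries in the repository's `setup.py` may differ:

```python
# Hypothetical excerpt of a setup.py requirements list, not the repository's
# actual setup.py: prefer the real package name over the deprecated alias.
TESTS_REQUIRE = [
    "pytest",
    "scikit-learn",  # was: "sklearn"
]
```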
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2524","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2524\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2524\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2524\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2524","id":925610934,"node_id":"MDExOlB1bGxSZXF1ZXN0Njc0MDQzNzk1","number":2524,"title":"Raise FileNotFoundError in WindowsFileLock","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Could you clarify what it fixes exactly and give more details please ? Especially why this is related to the windows hanging error ?","This has already been merged, but I'll clarify the idea of this PR. Before this merge, FileLock was the only component affected by the max path limit on Windows (that came to my notice) because of its infinite loop that would suppress errors. 
So instead of suppressing the `FileNotFoundError` that is thrown by `os.open` if the file name is longer than the max allowed path length, this PR reraises it to notify the user."],"created_at":1624199111000,"updated_at":1624874182000,"closed_at":1624870059000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2524","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2524","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2524.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2524.patch"},"body":"Closes #2443 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2524\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2523","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2523\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2523\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2523\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2523","id":925421008,"node_id":"MDU6SXNzdWU5MjU0MjEwMDg=","number":2523,"title":"Fr","user":{"login":"aDrIaNo34500","id":71971234,"node_id":"MDQ6VXNlcjcxOTcxMjM0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/71971234?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/aDrIaNo34500","html_url":"https:\/\/github.com\/aDrIaNo34500","followers_url":"https:\/\/api.github.com\/users\/aDrIaNo34500\/followers","following_url":"https:\/\/api.github.com\/users\/aDrIaNo34500\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/aDrIaNo34500\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/aDrIaNo34500\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/aDrIaNo34500\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/aDrIaNo34500\/orgs","repos_url":"https:\/\/api.github.com\/users\/aDrIaNo34500\/repos","events_url":"https:\/\/api.github.com\/users\/aDrIaNo34500\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/aDrIaNo34500\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1624118192000,"updated_at":1624128503000,"closed_at":1624128503000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"__Originally posted by @lewtun in https:\/\/github.com\/huggingface\/datasets\/pull\/2469__","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2523\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2522","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2522\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2522\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2522\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2522","id":925334379,"node_id":"MDU6SXNzdWU5MjUzMzQzNzk=","number":2522,"title":"Documentation Mistakes in 
Dataset: emotion","user":{"login":"GDGauravDutta","id":62606251,"node_id":"MDQ6VXNlcjYyNjA2MjUx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/62606251?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/GDGauravDutta","html_url":"https:\/\/github.com\/GDGauravDutta","followers_url":"https:\/\/api.github.com\/users\/GDGauravDutta\/followers","following_url":"https:\/\/api.github.com\/users\/GDGauravDutta\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/GDGauravDutta\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/GDGauravDutta\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/GDGauravDutta\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/GDGauravDutta\/orgs","repos_url":"https:\/\/api.github.com\/users\/GDGauravDutta\/repos","events_url":"https:\/\/api.github.com\/users\/GDGauravDutta\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/GDGauravDutta\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi,\r\n\r\nthis issue has been already reported in the dataset repo (https:\/\/github.com\/dair-ai\/emotion_dataset\/issues\/2), so this is a bug on their side."],"created_at":1624086537000,"updated_at":1624124296000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"As per documentation,\r\nDataset: emotion\r\nHomepage: https:\/\/github.com\/dair-ai\/emotion_dataset\r\n\r\nDataset: https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/emotion\/emotion.py\r\n\r\nPermalink: https:\/\/huggingface.co\/datasets\/viewer\/?dataset=emotion\r\n\r\nEmotion is a dataset of English Twitter messages with eight basic emotions: anger, anticipation, disgust, fear, joy, sadness, surprise, and trust. 
For more detailed information please refer to the paper.\r\n\r\nBut when we view the data, there are only 6 emotions, anger, fear, joy, sadness, surprise, and trust.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2522\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2521","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2521\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2521\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2521\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2521","id":925030685,"node_id":"MDExOlB1bGxSZXF1ZXN0NjczNTgxNzQ4","number":2521,"title":"Insert text classification template for Emotion dataset","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1624031779000,"updated_at":1624267351000,"closed_at":1624267351000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2521","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2521","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2521.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2521.patch"},"body":"This PR includes a template and updated `dataset_infos.json` for the `emotion` dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2521\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2520","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2520\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2520\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2520\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2520","id":925015004,"node_id":"MDU6SXNzdWU5MjUwMTUwMDQ=","number":2520,"title":"Datasets with tricky task 
templates","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[{"id":2067401494,"node_id":"MDU6TGFiZWwyMDY3NDAxNDk0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/Dataset%20discussion","name":"Dataset discussion","color":"72f99f","default":false,"description":"Discussions on the datasets"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1624030437000,"updated_at":1624031186000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"I'm collecting a list of datasets here that don't follow the \"standard\" taxonomy and require further investigation to implement task templates for.\r\n\r\n## Text classification\r\n\r\n* [hatexplain](https:\/\/huggingface.co\/datasets\/hatexplain): ostensibly a form of text classification, but not in the standard `(text, target)` format and each sample appears to be tokenized.\r\n* [muchocine](https:\/\/huggingface.co\/datasets\/muchocine): contains two candidate text columns (long-form and summary) which in principle requires two `TextClassification` templates which is not currently supported ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2520\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2519","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2519\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2519\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2519\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2519","id":924903240,"node_id":"MDExOlB1bGxSZXF1ZXN0NjczNDcyMzYy","number":2519,"title":"Improve performance of pandas arrow 
extractor","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Looks like this change\r\n```\r\npa_table[pa_table.column_names[0]].to_pandas(types_mapper=pandas_types_mapper)\r\n```\r\ndoesn't return a Series with the correct type.\r\nThis is related to https:\/\/issues.apache.org\/jira\/browse\/ARROW-9664\r\n\r\nSince the types_mapper isn't taken into account, the ArrayXD types are not converted to the correct pandas extension dtype","@lhoestq I think I found a workaround... \ud83d\ude09 ","For some reason the benchmarks are not run Oo","Anyway, merging.\r\nWe'll see on master how much speed ups we got"],"created_at":1624022681000,"updated_at":1624266366000,"closed_at":1624266366000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2519","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2519","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2519.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2519.patch"},"body":"While reviewing PR #2505, I noticed that pandas arrow extractor could be refactored to be faster.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2519\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2518","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2518\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2518\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2518\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2518","id":924654100,"node_id":"MDExOlB1bGxSZXF1ZXN0NjczMjU5Nzg1","number":2518,"title":"Add task templates for tydiqa and 
xquad","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Just tested TydiQA and it works fine :)"],"created_at":1624003594000,"updated_at":1624028477000,"closed_at":1624027833000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2518","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2518","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2518.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2518.patch"},"body":"This PR adds question-answering templates to the remaining datasets that are linked to a model on the Hub.\r\n\r\nNotes: \r\n\r\n* I could not test the tydiqa implementation since I don't have enough disk space \ud83d\ude22 . 
But I am confident the template works :)\r\n* there exist other datasets like `fquad` and `mlqa` which are candidates for question-answering templates, but some work is needed to handle the ordering of nested column described in #2434 \r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2518\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2517","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2517\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2517\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2517\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2517","id":924643345,"node_id":"MDExOlB1bGxSZXF1ZXN0NjczMjUwODk1","number":2517,"title":"Fix typo in MatthewsCorrelation class name","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1624002786000,"updated_at":1624005835000,"closed_at":1624005835000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2517","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2517","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2517.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2517.patch"},"body":"Close #2513.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2517\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2516","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2516\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2516\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2516\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2516","id":924597470,"node_id":"MDU6SXNzdWU5MjQ1OTc0NzA=","number":2516,"title":"datasets.map pickle issue resulting in invalid mapping 
function","user":{"login":"david-waterworth","id":5028974,"node_id":"MDQ6VXNlcjUwMjg5NzQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5028974?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/david-waterworth","html_url":"https:\/\/github.com\/david-waterworth","followers_url":"https:\/\/api.github.com\/users\/david-waterworth\/followers","following_url":"https:\/\/api.github.com\/users\/david-waterworth\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/david-waterworth\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/david-waterworth\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/david-waterworth\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/david-waterworth\/orgs","repos_url":"https:\/\/api.github.com\/users\/david-waterworth\/repos","events_url":"https:\/\/api.github.com\/users\/david-waterworth\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/david-waterworth\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! `map` calls `__getstate__` using `dill` to hash your map function. This is used by the caching mechanism to recover previously computed results. That's why you don't see any `__setstate__` call.\r\n\r\nWhy do you change an attribute of your tokenizer when `__getstate__` is called ?","@lhoestq because if I try to pickle my custom tokenizer (it contains a pure python pretokenization step in an otherwise rust backed tokenizer) I get\r\n\r\n> Exception: Error while attempting to pickle Tokenizer: Custom PreTokenizer cannot be serialized\r\n\r\nSo I remove the Custom PreTokenizer in `__getstate__` and then restore it in `__setstate__` (since it doesn't contain any state). This is what my `__getstate__` \/ `__setstate__` looks like:\r\n\r\n def __getstate__(self):\r\n \"\"\"\r\n Removes pre_tokenizer since it cannot be pickled\r\n \"\"\"\r\n logger.debug(\"Copy state dict\")\r\n out = self.__dict__.copy()\r\n logger.debug(\"Detaching pre_tokenizer\")\r\n out['_tokenizer'].pre_tokenizer = tokenizers.pre_tokenizers.Sequence([]) \r\n return out\r\n\r\n def __setstate__(self, d):\r\n \"\"\"\r\n Reinstates pre_tokenizer\r\n \"\"\"\r\n logger.debug(\"Reattaching pre_tokenizer\")\r\n self.__dict__ = d\r\n self.backend_tokenizer.pre_tokenizer = self._pre_tokenizer()\r\n\r\nIf this is the case can you think of another way of avoiding my issue?","Actually, maybe I need to deep copy `self.__dict__`? That way `self` isn't modified. That was my intention and I thought it was working - I'll double-check after the weekend.","Doing a deep copy results in the warning:\r\n\r\n> 06\/20\/2021 16:02:15 - WARNING - datasets.fingerprint - Parameter 'function'= of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. 
Subsequent hashing failures won't be showed.\r\n\r\n\r\n```\r\ndef __getstate__(self):\r\n \"\"\"\r\n Removes pre_tokenizer since it cannot be pickled\r\n \"\"\"\r\n logger.debug(\"Copy state dict\")\r\n out = copy.deepcopy(self.__dict__)\r\n logger.debug(\"Detaching pre_tokenizer\")\r\n out['_tokenizer'].pre_tokenizer = tokenizers.pre_tokenizers.Sequence([]) \r\n return out\r\n```","Looks like there is still an object that is not pickable in your `tokenize_function` function.\r\n\r\nYou can test if an object can be pickled and hashed by using \r\n```python\r\nfrom datasets.fingerprint import Hasher\r\n\r\nHasher.hash(my_object)\r\n```\r\n\r\nUnder the hood it pickles the object to compute its hash, so it calls `__getstate__` when applicable.","I figured it out, the problem is deep copy itself uses pickle (unless you implement `__deepcopy__`). So when I changed `__getstate__` it started throwing an error.\r\n\r\nI'm sure there's a better way of doing this, but in order to return the `__dict__` without the non-pikelable pre-tokeniser and without modifying self I removed the pre-tokenizers, did a deep copy and then re-generated it.\r\n\r\nIt does work - although I noticed Hasher doesn't call `__hash__` if the object being hashed implements it which I feel it should? If it did I could return a hash of the tokenizers.json file instead.\r\n\r\n```\r\n def __getstate__(self):\r\n \"\"\"\r\n Removes pre_tokenizer since it cannot be pickled\r\n \"\"\"\r\n logger.debug(\"Copy state dict\")\r\n self.backend_tokenizer.pre_tokenizer = tokenizers.pre_tokenizers.Sequence([])\r\n out = copy.deepcopy(self.__dict__) #self.__dict__.copy()\r\n self.backend_tokenizer.pre_tokenizer = self._pre_tokenizer()\r\n\r\n return out\r\n```\r\n","I'm glad you figured something out :)\r\n\r\nRegarding hashing: we're not using hashing for the same purpose as the python `__hash__` purpose (which is in general for dictionary lookups). For example it is allowed for python hashing to not return the same hash across sessions, while our hashing must return the same hashes across sessions for the caching to work properly."],"created_at":1623998846000,"updated_at":1624456069000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` \/ `__set_state__` mechanism. I think it's correct but it fails when I use it inside a function which is mapped to a dataset, i.e. 
in the manner of run_mlm.py and other huggingface scripts.\r\n\r\nThe following reproduces the issue - most likely I'm missing something\r\n\r\nA simulated tokeniser which can be pickled\r\n\r\n```\r\nclass CustomTokenizer:\r\n def __init__(self):\r\n self.state = \"init\"\r\n\r\n def __getstate__(self):\r\n print(\"__getstate__ called\")\r\n out = self.__dict__.copy()\r\n self.state = \"pickled\"\r\n return out\r\n \r\n def __setstate__(self, d):\r\n print(\"__setstate__ called\")\r\n self.__dict__ = d\r\n self.state = \"restored\"\r\n\r\ntokenizer = CustomTokenizer()\r\n```\r\n\r\nTest that it actually works - prints \"__getstate__ called\" and \"__setstate__ called\"\r\n```\r\nimport pickle\r\nserialized = pickle.dumps(tokenizer)\r\nrestored = pickle.loads(serialized)\r\nassert restored.state == \"restored\"\r\n```\r\n\r\nSimulate a function that tokenises examples, when dataset.map is called, this function \r\n```\r\ndef tokenize_function(examples):\r\n assert tokenizer.state == \"restored\" # this shouldn't fail but it does\r\n output = tokenizer(examples) # this will fail as tokenizer isn't really a tokenizer\r\n return output\r\n```\r\n\r\nUse map to simulate tokenization\r\n```\r\nimport glob\r\nfrom datasets import load_dataset\r\n\r\nassert tokenizer.state == \"restored\"\r\ntrain_files = glob.glob('train*.csv')\r\nvalidation_files = glob.glob('validation*.csv')\r\ndatasets = load_dataset(\"csv\", data_files=dict(train=train_files, validation=validation_files))\r\n\r\ntokenized_datasets = datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n)\r\n```\r\n\r\nWhat's happening is I can see that __getstate__ is called but not __setstate__, so the state of `tokenize_function` is invalid at the point that it's actually executed. This doesn't matter as far as I can see for the standard tokenizers as they don't use __getstate__ \/ __setstate__. 
I'm not sure if there's another hook I'm supposed to implement as well?\r\n\r\n---------------------------------------------------------------------------\r\nAssertionError Traceback (most recent call last)\r\n in \r\n 8 tokenized_datasets = datasets.map(\r\n 9 tokenize_function,\r\n---> 10 batched=True,\r\n 11 )\r\n\r\n~\/.pyenv\/versions\/3.7.6\/envs\/xxx\/lib\/python3.7\/site-packages\/datasets\/dataset_dict.py in map(self, function, with_indices, input_columns, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, desc)\r\n 487 desc=desc,\r\n 488 )\r\n--> 489 for k, dataset in self.items()\r\n 490 }\r\n 491 )\r\n\r\n~\/.pyenv\/versions\/3.7.6\/envs\/xxx\/lib\/python3.7\/site-packages\/datasets\/dataset_dict.py in (.0)\r\n 487 desc=desc,\r\n 488 )\r\n--> 489 for k, dataset in self.items()\r\n 490 }\r\n 491 )\r\n\r\n~\/.pyenv\/versions\/3.7.6\/envs\/xxx\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)\r\n 1633 fn_kwargs=fn_kwargs,\r\n 1634 new_fingerprint=new_fingerprint,\r\n-> 1635 desc=desc,\r\n 1636 )\r\n 1637 else:\r\n\r\n~\/.pyenv\/versions\/3.7.6\/envs\/xxx\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py in wrapper(*args, **kwargs)\r\n 184 }\r\n 185 # apply actual function\r\n--> 186 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n 187 datasets: List[\"Dataset\"] = list(out.values()) if isinstance(out, dict) else [out]\r\n 188 # re-apply format to the output\r\n\r\n~\/.pyenv\/versions\/3.7.6\/envs\/xxx\/lib\/python3.7\/site-packages\/datasets\/fingerprint.py in wrapper(*args, **kwargs)\r\n 395 # Call actual function\r\n 396 \r\n--> 397 out = func(self, *args, **kwargs)\r\n 398 \r\n 399 # Update fingerprint of in-place transforms + update in-place history of transforms\r\n\r\n~\/.pyenv\/versions\/3.7.6\/envs\/xxx\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, desc)\r\n 1961 indices,\r\n 1962 check_same_num_examples=len(input_dataset.list_indexes()) > 0,\r\n-> 1963 offset=offset,\r\n 1964 )\r\n 1965 except NumExamplesMismatch:\r\n\r\n~\/.pyenv\/versions\/3.7.6\/envs\/xxx\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)\r\n 1853 effective_indices = [i + offset for i in indices] if isinstance(indices, list) else indices + offset\r\n 1854 processed_inputs = (\r\n-> 1855 function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)\r\n 1856 )\r\n 1857 if update_data is None:\r\n\r\n in tokenize_function(examples)\r\n 1 def tokenize_function(examples):\r\n----> 2 assert tokenizer.state == \"restored\"\r\n 3 tokenizer(examples)\r\n 4 return examples\r\n\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2516\/timeline","performed_via_github_app":null,"is_pull_request":false} 
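The #2516 thread above notes that `Dataset.map` hashes the mapping function with `dill` (calling `__getstate__` where defined) to build its cache fingerprint, and that `datasets.fingerprint.Hasher` can be used to test this in isolation. A minimal sketch of that check follows; `tokenize_function` is a hypothetical placeholder for a function that closes over a custom tokenizer:

```python
# Sketch: verify that a map function can be pickled and hashed the same way
# Dataset.map's caching mechanism does. Hasher.hash pickles the object with dill,
# so any __getstate__ defined on objects it closes over is exercised here.
from datasets.fingerprint import Hasher

def tokenize_function(examples):
    # hypothetical map function; in the issue above it closes over a custom tokenizer
    return examples

print(Hasher.hash(tokenize_function))  # prints a deterministic hex fingerprint
```

If this call raises (or `map` logs the "couldn't be hashed properly" warning quoted above), the fingerprint falls back to a random value and the transform is recomputed on every run instead of being reloaded from the cache.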
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2515","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2515\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2515\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2515\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2515","id":924435447,"node_id":"MDExOlB1bGxSZXF1ZXN0NjczMDc3NTIx","number":2515,"title":"CRD3 dataset card","user":{"login":"wilsonyhlee","id":1937386,"node_id":"MDQ6VXNlcjE5MzczODY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1937386?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/wilsonyhlee","html_url":"https:\/\/github.com\/wilsonyhlee","followers_url":"https:\/\/api.github.com\/users\/wilsonyhlee\/followers","following_url":"https:\/\/api.github.com\/users\/wilsonyhlee\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/wilsonyhlee\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/wilsonyhlee\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/wilsonyhlee\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/wilsonyhlee\/orgs","repos_url":"https:\/\/api.github.com\/users\/wilsonyhlee\/repos","events_url":"https:\/\/api.github.com\/users\/wilsonyhlee\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/wilsonyhlee\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1623975847000,"updated_at":1624270724000,"closed_at":1624270724000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2515","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2515","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2515.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2515.patch"},"body":"This PR adds additional information to the CRD3 dataset card. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2515\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2514","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2514\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2514\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2514\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2514","id":924417172,"node_id":"MDU6SXNzdWU5MjQ0MTcxNzI=","number":2514,"title":"Can datasets remove duplicated rows?","user":{"login":"liuxinglan","id":16516583,"node_id":"MDQ6VXNlcjE2NTE2NTgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16516583?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/liuxinglan","html_url":"https:\/\/github.com\/liuxinglan","followers_url":"https:\/\/api.github.com\/users\/liuxinglan\/followers","following_url":"https:\/\/api.github.com\/users\/liuxinglan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/liuxinglan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/liuxinglan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/liuxinglan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/liuxinglan\/orgs","repos_url":"https:\/\/api.github.com\/users\/liuxinglan\/repos","events_url":"https:\/\/api.github.com\/users\/liuxinglan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/liuxinglan\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! For now this is probably the best option.\r\nWe might add a feature like this in the feature as well.\r\n\r\nDo you know any deduplication method that works on arbitrary big datasets without filling up RAM ?\r\nOtherwise we can have do the deduplication in memory like pandas but I feel like this is going to be limiting for some cases","Yes, I'd like to work on this feature once I'm done with #2500, but first I have to do some research, and see if the implementation wouldn't be too complex.\r\n\r\nIn the meantime, maybe [this lib](https:\/\/github.com\/TomScheffers\/pyarrow_ops) can help. However, note that this lib operates directly on pyarrow tables and relies only on `hash` to find duplicates (e.g. `-1` and `-2` have the same hash in Python 3, so this lib will treat them as duplicates), which doesn't make much sense.","> Hi ! For now this is probably the best option.\r\n> We might add a feature like this in the feature as well.\r\n> \r\n> Do you know any deduplication method that works on arbitrary big datasets without filling up RAM ?\r\n> Otherwise we can have do the deduplication in memory like pandas but I feel like this is going to be limiting for some cases\r\n\r\nGreat if this is can be done. Thanks!!\r\n\r\nNot sure if you are asking me. In any case I don't know of any unfortunately :( in practice if data is really large we normally do it with spark (only for info. 
I understand this is not useful in developing this library..)"],"created_at":1623972938000,"updated_at":1624261024000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"**Is your feature request related to a problem? Please describe.**\r\ni find myself more and more relying on datasets just to do all the preprocessing. One thing however, for removing duplicated rows, I couldn't find out how and am always converting datasets to pandas to do that..\r\n\r\n\r\n**Describe the solution you'd like**\r\nhave a functionality of \" remove duplicated rows\"\r\n\r\n**Describe alternatives you've considered**\r\nconvert dataset to pandas, remove duplicate, and convert back...\r\n\r\n\r\n**Additional context**\r\nno","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2514\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2513","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2513\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2513\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2513\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2513","id":924174413,"node_id":"MDU6SXNzdWU5MjQxNzQ0MTM=","number":2513,"title":"Corelation should be Correlation","user":{"login":"colbym-MM","id":71514164,"node_id":"MDQ6VXNlcjcxNTE0MTY0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/71514164?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/colbym-MM","html_url":"https:\/\/github.com\/colbym-MM","followers_url":"https:\/\/api.github.com\/users\/colbym-MM\/followers","following_url":"https:\/\/api.github.com\/users\/colbym-MM\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/colbym-MM\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/colbym-MM\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/colbym-MM\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/colbym-MM\/orgs","repos_url":"https:\/\/api.github.com\/users\/colbym-MM\/repos","events_url":"https:\/\/api.github.com\/users\/colbym-MM\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/colbym-MM\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/u
sers\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @colbym-MM, thanks for reporting. We are fixing it."],"created_at":1623950928000,"updated_at":1624005835000,"closed_at":1624005835000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"https:\/\/github.com\/huggingface\/datasets\/blob\/0e87e1d053220e8ecddfa679bcd89a4c7bc5af62\/metrics\/matthews_correlation\/matthews_correlation.py#L66","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2513\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2512","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2512\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2512\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2512\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2512","id":924069353,"node_id":"MDU6SXNzdWU5MjQwNjkzNTM=","number":2512,"title":"seqeval metric does not work with a recent version of sklearn: classification_report() got an unexpected keyword argument 
'output_dict'","user":{"login":"avidale","id":8642136,"node_id":"MDQ6VXNlcjg2NDIxMzY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8642136?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/avidale","html_url":"https:\/\/github.com\/avidale","followers_url":"https:\/\/api.github.com\/users\/avidale\/followers","following_url":"https:\/\/api.github.com\/users\/avidale\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/avidale\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/avidale\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/avidale\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/avidale\/orgs","repos_url":"https:\/\/api.github.com\/users\/avidale\/repos","events_url":"https:\/\/api.github.com\/users\/avidale\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/avidale\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Sorry, I was using an old version of sequeval"],"created_at":1623944162000,"updated_at":1623944767000,"closed_at":1623944767000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nA clear and concise description of what the bug is.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset, load_metric\r\nseqeval = load_metric(\"seqeval\")\r\nseqeval.compute(predictions=[['A']], references=[['A']])\r\n```\r\n\r\n## Expected results\r\nThe function computes a dict with metrics\r\n\r\n## Actual results\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n in \r\n 1 from datasets import load_dataset, load_metric\r\n 2 seqeval = load_metric(\"seqeval\")\r\n----> 3 seqeval.compute(predictions=[['A']], references=[['A']])\r\n\r\n~\/p3\/lib\/python3.7\/site-packages\/datasets\/metric.py in compute(self, *args, **kwargs)\r\n 396 references = self.data[\"references\"]\r\n 397 with temp_seed(self.seed):\r\n--> 398 output = self._compute(predictions=predictions, references=references, **kwargs)\r\n 399 \r\n 400 if self.buf_writer is not None:\r\n\r\n~\/.cache\/huggingface\/modules\/datasets_modules\/metrics\/seqeval\/81eda1ff004361d4fa48754a446ec69bb7aa9cf4d14c7215f407d1475941c5ff\/seqeval.py in _compute(self, predictions, references, suffix)\r\n 95 \r\n 96 def _compute(self, predictions, references, suffix=False):\r\n---> 97 report = classification_report(y_true=references, y_pred=predictions, suffix=suffix, output_dict=True)\r\n 98 report.pop(\"macro avg\")\r\n 99 report.pop(\"weighted avg\")\r\n\r\nTypeError: classification_report() got an unexpected keyword argument 'output_dict'\r\n```\r\n\r\n## Environment info\r\nsklearn=0.24\r\ndatasets=1.1.3\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2512\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2511","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2511\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2511\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2511\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2511","id":923762133,"node_id":"MDU6SXNzdWU5MjM3NjIxMzM=","number":2511,"title":"Add C4","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new 
dataset"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Update on this: I'm computing the checksums of the data files. 
It will be available soon","Added in #2575 :)"],"created_at":1623925864000,"updated_at":1625488618000,"closed_at":1625488617000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** *C4*\r\n- **Description:** *https:\/\/github.com\/allenai\/allennlp\/discussions\/5056*\r\n- **Paper:** *https:\/\/arxiv.org\/abs\/1910.10683*\r\n- **Data:** *https:\/\/huggingface.co\/datasets\/allenai\/c4*\r\n- **Motivation:** *Used a lot for pretraining*\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n\r\nShould fix https:\/\/github.com\/huggingface\/datasets\/issues\/1710","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2511\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2510","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2510\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2510\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2510\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2510","id":923735485,"node_id":"MDExOlB1bGxSZXF1ZXN0NjcyNDY3MzY3","number":2510,"title":"Add align_labels_with_mapping to DatasetDict","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1623924215000,"updated_at":1623926725000,"closed_at":1623926724000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2510","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2510","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2510.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2510.patch"},"body":"https:\/\/github.com\/huggingface\/datasets\/pull\/2457 added the `Dataset.align_labels_with_mapping` method.\r\nIn this PR I also added `DatasetDict.align_labels_with_mapping`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2510\/timeline","performed_via_github_app":null,"is_pull_request":true} 
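#2510 above extends `align_labels_with_mapping` (introduced in #2457) from `Dataset` to `DatasetDict`. A minimal sketch of the `Dataset`-level method, with hypothetical label names and mapping:

```python
# Sketch: re-map a ClassLabel column so its integer ids follow a given label2id
# convention (e.g. the one a pretrained model expects).
from datasets import ClassLabel, Dataset, Features, Value

features = Features({"text": Value("string"), "label": ClassLabel(names=["entailment", "neutral"])})
ds = Dataset.from_dict({"text": ["x", "y"], "label": [0, 1]}, features=features)

label2id = {"neutral": 0, "entailment": 1}
aligned = ds.align_labels_with_mapping(label2id, "label")

print(aligned.features["label"].names)  # label names reordered to match label2id
print(aligned["label"])                 # integer ids remapped accordingly
```

The `DatasetDict` version added in #2510 applies the same alignment to every split.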
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2509","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2509\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2509\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2509\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2509","id":922846035,"node_id":"MDExOlB1bGxSZXF1ZXN0NjcxNjcyMzU5","number":2509,"title":"Fix fingerprint when moving cache dir","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Windows, why are you doing this to me ?","Thanks @lhoestq, I'm starting reviewing this PR.","Yea issues on windows are about long paths, not long filenames.\r\nWe can make sure the lock filenames are not too long, but not for the paths","Took your suggestions into account @albertvillanova :)"],"created_at":1623861909000,"updated_at":1624287904000,"closed_at":1624287903000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2509","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2509","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2509.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2509.patch"},"body":"The fingerprint of a dataset changes if the cache directory is moved.\r\nI fixed that by setting the fingerprint to be the hash of:\r\n- the relative cache dir (dataset_name\/version\/config_id)\r\n- the requested split\r\n\r\nClose #2496 \r\n\r\nI had to fix an issue with the filelock filename that was too long (>255). It prevented the tests to run on my machine. I just added `hash_filename_if_too_long` in case this happens, to not get filenames longer than 255.\r\nWe usually have long filenames for filelocks because they are named after the path that is being locked. 
In case the path is a cache directory that has long directory names, then the filelock filename could en up being very long.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2509\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2508","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2508\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2508\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2508\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2508","id":921863173,"node_id":"MDU6SXNzdWU5MjE4NjMxNzM=","number":2508,"title":"Load Image Classification Dataset from Local ","user":{"login":"Jacobsolawetz","id":8428198,"node_id":"MDQ6VXNlcjg0MjgxOTg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8428198?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Jacobsolawetz","html_url":"https:\/\/github.com\/Jacobsolawetz","followers_url":"https:\/\/api.github.com\/users\/Jacobsolawetz\/followers","following_url":"https:\/\/api.github.com\/users\/Jacobsolawetz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Jacobsolawetz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Jacobsolawetz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Jacobsolawetz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Jacobsolawetz\/orgs","repos_url":"https:\/\/api.github.com\/users\/Jacobsolawetz\/repos","events_url":"https:\/\/api.github.com\/users\/Jacobsolawetz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Jacobsolawetz\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or 
request"}],"state":"open","locked":false,"assignee":{"login":"nateraw","id":32437151,"node_id":"MDQ6VXNlcjMyNDM3MTUx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32437151?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nateraw","html_url":"https:\/\/github.com\/nateraw","followers_url":"https:\/\/api.github.com\/users\/nateraw\/followers","following_url":"https:\/\/api.github.com\/users\/nateraw\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nateraw\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nateraw\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nateraw\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nateraw\/orgs","repos_url":"https:\/\/api.github.com\/users\/nateraw\/repos","events_url":"https:\/\/api.github.com\/users\/nateraw\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nateraw\/received_events","type":"User","site_admin":false},"assignees":[{"login":"nateraw","id":32437151,"node_id":"MDQ6VXNlcjMyNDM3MTUx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32437151?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nateraw","html_url":"https:\/\/github.com\/nateraw","followers_url":"https:\/\/api.github.com\/users\/nateraw\/followers","following_url":"https:\/\/api.github.com\/users\/nateraw\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nateraw\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nateraw\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nateraw\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nateraw\/orgs","repos_url":"https:\/\/api.github.com\/users\/nateraw\/repos","events_url":"https:\/\/api.github.com\/users\/nateraw\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nateraw\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi ! Is this folder structure a standard, a bit like imagenet ?\r\nIn this case maybe we can consider having a dataset loader for cifar-like, imagenet-like, squad-like, conll-like etc. datasets ?\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nmy_custom_cifar = load_dataset(\"cifar_like\", data_dir=\"path\/to\/data\/dir\")\r\n```\r\n\r\nLet me know what you think","Yep that would be sweet - closing for now as we found a workaround. ","@lhoestq I think we'll want a generic `image-folder` dataset (same as 'imagenet-like'). This is like `torchvision.datasets.ImageFolder`, and is something vision folks are used to seeing.","Opening this back up, since I'm planning on tackling this. Already posted a quick version of it on my account on the hub.\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('nateraw\/image-folder', data_files='PetImages\/')\r\n```"],"created_at":1623797013000,"updated_at":1626113034000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"**Is your feature request related to a problem? 
Please describe.**\r\nYes - we would like to load an image classification dataset with datasets without having to write a custom data loader.\r\n\r\n**Describe the solution you'd like**\r\n\r\nGiven a folder structure with images of each class in each folder, the ability to load these folders into a HuggingFace dataset like \"cifar10\".\r\n\r\n**Describe alternatives you've considered**\r\n\r\nImplement ViT training outside of the HuggingFace Trainer and without datasets (we did this but prefer to stay on the main path)\r\n\r\nWrite custom data loader logic\r\n\r\n**Additional context**\r\n\r\nWe're training ViT on custom dataset\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2508\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2507","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2507\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2507\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2507\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2507","id":921441962,"node_id":"MDExOlB1bGxSZXF1ZXN0NjcwNDQ0MDgz","number":2507,"title":"Rearrange JSON field names to match passed features schema field names","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/5","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/5","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/5\/labels","id":6808903,"node_id":"MDk6TWlsZXN0b25lNjgwODkwMw==","number":5,"title":"1.9","description":"Next minor 
release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":12,"state":"closed","created_at":1622477586000,"updated_at":1626099120000,"due_on":1625727600000,"closed_at":1625809807000},"comments":[],"created_at":1623766202000,"updated_at":1623840469000,"closed_at":1623840469000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2507","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2507","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2507.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2507.patch"},"body":"This PR depends on PR #2453 (which must be merged first).\r\n\r\nClose #2366.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2507\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2506","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2506\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2506\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2506\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2506","id":921435598,"node_id":"MDExOlB1bGxSZXF1ZXN0NjcwNDM4NTgx","number":2506,"title":"Add course 
banner","user":{"login":"sgugger","id":35901082,"node_id":"MDQ6VXNlcjM1OTAxMDgy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35901082?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sgugger","html_url":"https:\/\/github.com\/sgugger","followers_url":"https:\/\/api.github.com\/users\/sgugger\/followers","following_url":"https:\/\/api.github.com\/users\/sgugger\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sgugger\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sgugger\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sgugger\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sgugger\/orgs","repos_url":"https:\/\/api.github.com\/users\/sgugger\/repos","events_url":"https:\/\/api.github.com\/users\/sgugger\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sgugger\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1623765834000,"updated_at":1623774336000,"closed_at":1623774335000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2506","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2506","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2506.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2506.patch"},"body":"This PR adds a course banner similar to the one you can now see in the [Transformers repo](https:\/\/github.com\/huggingface\/transformers) that links to the course. Let me know if placement seems right to you or not, I can move it just below the badges too.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2506\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2505","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2505\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2505\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2505\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2505","id":921234797,"node_id":"MDExOlB1bGxSZXF1ZXN0NjcwMjY2NjQy","number":2505,"title":"Make numpy arrow extractor 
faster","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Looks like we have a nice speed up in some benchmarks. For example:\r\n- `read_formatted numpy 5000`: 4.584777 sec -> 0.487113 sec\r\n- `read_formatted torch 5000`: 4.565676 sec -> 1.289514 sec","Can we convert this draft to PR @lhoestq ?","Ready for review ! cc @vblagoje","@lhoestq I tried the branch and it works for me. Although performance trace now shows a speedup, the overall pre-training speed up is minimal. But that's on my plate to explore further. ","Thanks for investigating @vblagoje \r\n\r\n@albertvillanova , do you have any comments on this PR ? Otherwise I think we can merge it"],"created_at":1623751892000,"updated_at":1624874019000,"closed_at":1624874018000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2505","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2505","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2505.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2505.patch"},"body":"I changed the NumpyArrowExtractor to call directly to_numpy and see if it can lead to speed-ups as discussed in https:\/\/github.com\/huggingface\/datasets\/issues\/2498\r\n\r\nThis could make the numpy\/torch\/tf\/jax formatting faster","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2505\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2503","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2503\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2503\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2503\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2503","id":920636186,"node_id":"MDU6SXNzdWU5MjA2MzYxODY=","number":2503,"title":"SubjQA wrong boolean values in 
entries","user":{"login":"arnaudstiegler","id":26485052,"node_id":"MDQ6VXNlcjI2NDg1MDUy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26485052?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/arnaudstiegler","html_url":"https:\/\/github.com\/arnaudstiegler","followers_url":"https:\/\/api.github.com\/users\/arnaudstiegler\/followers","following_url":"https:\/\/api.github.com\/users\/arnaudstiegler\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/arnaudstiegler\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/arnaudstiegler\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/arnaudstiegler\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/arnaudstiegler\/orgs","repos_url":"https:\/\/api.github.com\/users\/arnaudstiegler\/repos","events_url":"https:\/\/api.github.com\/users\/arnaudstiegler\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/arnaudstiegler\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @arnaudstiegler, thanks for reporting. 
I'm investigating it.","@arnaudstiegler I have just checked that these mismatches are already present in the original dataset: https:\/\/github.com\/megagonlabs\/SubjQA\r\n\r\nWe are going to contact the dataset owners to report this.","I have:\r\n- opened an issue in their repo: https:\/\/github.com\/megagonlabs\/SubjQA\/issues\/3\r\n- written an email to all the paper authors","Please [see my response](https:\/\/github.com\/megagonlabs\/SubjQA\/issues\/3#issuecomment-905160010). There will be a fix in a couple of days."],"created_at":1623692566000,"updated_at":1629863526000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nSubjQA seems to have a boolean that's consistently wrong.\r\n\r\nIt defines:\r\n- question_subj_level: The subjectiviy level of the question (on a 1 to 5 scale with 1 being the most subjective).\r\n- is_ques_subjective: A boolean subjectivity label derived from question_subj_level (i.e., scores below 4 are considered as subjective)\r\n\r\nHowever, `is_ques_subjective` seems to have wrong values in the entire dataset.\r\n\r\nFor instance, in the example in the dataset card, we have:\r\n- \"question_subj_level\": 2\r\n- \"is_ques_subjective\": false\r\n\r\nHowever, according to the description, the question should be subjective since the `question_subj_level` is below 4\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2503\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2502","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2502\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2502\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2502\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2502","id":920623572,"node_id":"MDExOlB1bGxSZXF1ZXN0NjY5NzQ1MDA5","number":2502,"title":"JAX 
integration","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1623691463000,"updated_at":1624292150000,"closed_at":1624292149000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2502","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2502","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2502.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2502.patch"},"body":"Hi !\r\n\r\nI just added the \"jax\" formatting, as we already have for pytorch, tensorflow, numpy (and also pandas and arrow).\r\nIt does pretty much the same thing as the pytorch formatter except it creates jax.numpy.ndarray objects.\r\n\r\n```python\r\nfrom datasets import Dataset\r\n\r\nd = Dataset.from_dict({\"foo\": [[0., 1., 2.]]})\r\nd = d.with_format(\"jax\")\r\nd[0]\r\n# {'foo': DeviceArray([0., 1., 2.], dtype=float32)}\r\n```\r\n\r\nA few details:\r\n- The default integer precision for jax depends on the jax configuration `jax_enable_x64` (see [here](https:\/\/jax.readthedocs.io\/en\/latest\/notebooks\/Common_Gotchas_in_JAX.html#double-64bit-precision)), I took that into account. Unless `jax_enable_x64` is specified, it is int32 by default\r\n- AFAIK it's not possible to do a full conversion from arrow data to jax data. We are doing arrow -> numpy -> jax but the numpy -> jax part doesn't do zero copy unfortutanely (see [here](https:\/\/github.com\/google\/jax\/issues\/4486))\r\n- the env var for disabling JAX is `USE_JAX`. However I noticed that in `transformers` it is `USE_FLAX`. 
This is not an issue though IMO\r\n\r\nI also updated `convert_to_python_objects` to allow users to pass jax.numpy.ndarray objects to build a dataset.\r\n\r\nSince the `convert_to_python_objects` method became slow because it's the time when pytorch, tf (and now jax) are imported, I fixed it by checking the `sys.modules` to avoid unecessary import of pytorch, tf or jax.\r\n\r\nClose #2495","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2502\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2501","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2501\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2501\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2501\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2501","id":920579634,"node_id":"MDExOlB1bGxSZXF1ZXN0NjY5NzA3Nzc0","number":2501,"title":"Add Zenodo metadata file with license","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/5","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/5","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/5\/labels","id":6808903,"node_id":"MDk6TWlsZXN0b25lNjgwODkwMw==","number":5,"title":"1.9","description":"Next minor 
release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":12,"state":"closed","created_at":1622477586000,"updated_at":1626099120000,"due_on":1625727600000,"closed_at":1625809807000},"comments":[],"created_at":1623688092000,"updated_at":1623689382000,"closed_at":1623689382000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2501","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2501","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2501.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2501.patch"},"body":"This Zenodo metadata file fixes the name of the `Datasets` license appearing in the DOI as `\"Apache-2.0\"`, which otherwise by default is `\"other-open\"`.\r\n\r\nClose #2472. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2501\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2500","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2500\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2500\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2500\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2500","id":920471411,"node_id":"MDExOlB1bGxSZXF1ZXN0NjY5NjE2MjQ1","number":2500,"title":"Add load_dataset_builder","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @mariosasko, thanks for taking on this issue.\r\n\r\nJust a few logistic suggestions, as you are one of our most active contributors \u2764\ufe0f :\r\n- When you start working on an issue, you can self-assign it to you by commenting on the issue page with the keyword: `#self-assign`; we have implemented a GitHub Action to take care of that... \ud83d\ude09 \r\n- When you are still working on your Pull Request, instead of using the `[WIP]` in the PR name, you can instead create a *draft* pull request: use the drop-down (on the right of the *Create Pull Request* button) and select **Create Draft Pull Request**, then click **Draft Pull Request**.\r\n\r\nI hope you find these hints useful. \ud83e\udd17 ","@albertvillanova Thanks for the tips. When creating this PR, it slipped my mind that this should be a draft. GH has an option to convert already created PRs to draft PRs, but this requires write access for the repo, so maybe you can help.","Ready for the review!\r\n\r\nOne additional change. I've modified the `camelcase_to_snakecase`\/`snakecase_to_camelcase` conversion functions to fix conversion of the names with 2 or more underscores (e.g. `camelcase_to_snakecase(\"__DummyDataset__\")` would return `___dummy_dataset__`; notice one extra underscore at the beginning). 
The implementation is based on the [inflection](https:\/\/pypi.org\/project\/inflection\/) library.\r\n","Thank you for adding this feature, @mariosasko - this is really awesome!\r\n\r\nTried with:\r\n```\r\npython -c \"from datasets import load_dataset_builder; b = load_dataset_builder('openwebtext-10k'); print(b.cache_dir)\"\r\nUsing the latest cached version of the module from \/home\/stas\/.cache\/huggingface\/modules\/datasets_modules\/datasets\r\n\/openwebtext-10k\/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b (last modified on Wed May 12 \r\n20:22:53 2021) \r\n\r\nsince it couldn't be found locally at openwebtext-10k\/openwebtext-10k.py \r\n\r\nor remotely (FileNotFoundError).\r\n\r\n\/home\/stas\/.cache\/huggingface\/datasets\/openwebtext10k\/plain_text\/1.0.0\/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b\r\n```\r\n\r\nThe logger message (edited by me to add new lines to point the issues out) is a bit confusing to the user - that is what does `FileNotFoundError` refer to? \r\n\r\n1. May be replace `FileNotFoundError` with where it was looking for a file online. But then the remote file is there - it's found \r\n2. I'm not sure why it says \"since it couldn't be found locally\" - as it is locally found at the cache folder and again what does \" locally at openwebtext-10k\/openwebtext-10k.py\" mean - i.e. where does it look for it? Is it `.\/openwebtext-10k\/openwebtext-10k.py` it's looking for? or in some specific dir?\r\n\r\nIf the cached version always supersedes any other versions perhaps this is what it should say?\r\n```\r\nfound cached version at xxx, not looking for a local at yyy, not downloading remote at zzz\r\n```","Hi ! Thanks for the comments\r\n\r\nRegarding your last message:\r\nYou must pass `stas\/openwebtext-10k` as in `load_dataset` instead of `openwebtext-10k`. Otherwise it doesn't know how to retrieve the builder from the HF Hub.\r\n\r\nWhen you specify a dataset name without a slash, it tries to load a canonical dataset or it looks locally at .\/openwebtext-10k\/openwebtext-10k.py\r\nHere since `openwebtext-10k` is not a canonical dataset and doesn't exist locally at .\/openwebtext-10k\/openwebtext-10k.py: it raised a FileNotFoundError.\r\nAs a fallback it managed to find the dataset script in your cache and it used this one.","Oh, I see, so I actually used an incorrect input. so it was a user error. Correcting it:\r\n\r\n```\r\npython -c \"from datasets import load_dataset_builder; b = load_dataset_builder('stas\/openwebtext-10k'); print(b.cache_dir)\"\r\n\/home\/stas\/.cache\/huggingface\/datasets\/openwebtext10k\/plain_text\/1.0.0\/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b\r\n```\r\n\r\nNow there is no logger message. Got it!\r\n\r\nOK, I'm not sure the magical recovery it did in first place is most beneficial in the long run. I'd have rather it failed and said: \"incorrect input there is no such dataset as 'openwebtext-10k' at or \" - because if it doesn't fail I may leave it in the code and it'll fail later when another user tries to use my code and won't have the cache. Does it make sense? 
Giving me `this url` allows me to go to the datasets hub and realize that the dataset is missing the username qualifier.\r\n\r\n> Here since openwebtext-10k is not a canonical dataset and doesn't exist locally at .\/openwebtext-10k\/openwebtext-10k.py: it raised a FileNotFoundError.\r\n\r\nExcept it slapped the exception name to ` remotely (FileNotFoundError).` which makes no sense.\r\n\r\nPlus for the local it's not clear where is it looking relatively too when it gets `FileNotFoundError` - perhaps it'd help to use absolute path and use it in the message?\r\n\r\n---------------\r\n\r\nFinally, the logger format is not set up so the user gets a warning w\/o knowing it's a warning. As you can see it's missing the WARNING pre-amble in https:\/\/github.com\/huggingface\/datasets\/pull\/2500#issuecomment-874250500\r\n\r\ni.e. I had no idea it was warning me of something, I was just trying to make sense of the message that's why I started the discussion and otherwise I'd have completely missed the point of me making an error."],"created_at":1623680865000,"updated_at":1625789296000,"closed_at":1625481958000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2500","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2500","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2500.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2500.patch"},"body":"Adds the `load_dataset_builder` function. The good thing is that we can reuse this function to load the dataset info without downloading the dataset itself.\r\n\r\nTODOs:\r\n- [x] Add docstring and entry in the docs\r\n- [x] Add tests\r\n\r\nCloses #2484 \r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2500\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2499","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2499\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2499\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2499\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2499","id":920413021,"node_id":"MDU6SXNzdWU5MjA0MTMwMjE=","number":2499,"title":" Python Programming 
Puzzles","user":{"login":"VictorSanh","id":16107619,"node_id":"MDQ6VXNlcjE2MTA3NjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16107619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/VictorSanh","html_url":"https:\/\/github.com\/VictorSanh","followers_url":"https:\/\/api.github.com\/users\/VictorSanh\/followers","following_url":"https:\/\/api.github.com\/users\/VictorSanh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/VictorSanh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/VictorSanh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/VictorSanh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/VictorSanh\/orgs","repos_url":"https:\/\/api.github.com\/users\/VictorSanh\/repos","events_url":"https:\/\/api.github.com\/users\/VictorSanh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/VictorSanh\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["\ud83d\udc40 @TalSchuster","Thanks @VictorSanh!\r\nThere's also a [notebook](https:\/\/aka.ms\/python_puzzles) and [demo](https:\/\/aka.ms\/python_puzzles_study) available now to try out some of the puzzles"],"created_at":1623677238000,"updated_at":1623780854000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** Python Programming Puzzles\r\n- **Description:** Programming challenge called programming puzzles, as an objective and comprehensive evaluation of program synthesis\r\n- **Paper:** https:\/\/arxiv.org\/pdf\/2106.05784.pdf\r\n- **Data:** https:\/\/github.com\/microsoft\/PythonProgrammingPuzzles ([Scrolling through the data](https:\/\/github.com\/microsoft\/PythonProgrammingPuzzles\/blob\/main\/problems\/README.md))\r\n- **Motivation:** Spans a large range of difficulty, problems, and domains. 
A useful resource for evaluation as we don't have a clear understanding of the abilities and skills of extremely large LMs.\r\n\r\nNote: it's a growing dataset (contributions are welcome), so we'll need careful versioning for this dataset.\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2499\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2498","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2498\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2498\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2498\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2498","id":920411285,"node_id":"MDU6SXNzdWU5MjA0MTEyODU=","number":2498,"title":"Improve torch formatting performance","user":{"login":"vblagoje","id":458335,"node_id":"MDQ6VXNlcjQ1ODMzNQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/458335?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vblagoje","html_url":"https:\/\/github.com\/vblagoje","followers_url":"https:\/\/api.github.com\/users\/vblagoje\/followers","following_url":"https:\/\/api.github.com\/users\/vblagoje\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vblagoje\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vblagoje\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vblagoje\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vblagoje\/orgs","repos_url":"https:\/\/api.github.com\/users\/vblagoje\/repos","events_url":"https:\/\/api.github.com\/users\/vblagoje\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vblagoje\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["That\u2019s interesting thanks, let\u2019s see what we can do. Can you detail your last sentence? I\u2019m not sure I understand it well.","Hi ! I just re-ran a quick benchmark and using `to_numpy()` seems to be faster now:\r\n\r\n```python\r\nimport pyarrow as pa # I used pyarrow 3.0.0\r\nimport numpy as np\r\n\r\nn, max_length = 1_000, 512\r\nlow, high, size = 0, 2 << 16, (n, max_length)\r\n\r\ntable = pa.Table.from_pydict({\r\n \"input_ids\": np.random.default_rng(42).integers(low=low, high=high, size=size).tolist()\r\n})\r\n\r\n\r\n%%timeit\r\n_ = table.to_pandas()[\"input_ids\"].to_numpy()\r\n# 1.44 ms \u00b1 80.1 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each)\r\n\r\n%%timeit\r\n_ = table[\"input_ids\"].to_pandas().to_numpy()\r\n# 461 \u00b5s \u00b1 14.2 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each)\r\n\r\n%%timeit\r\n_ = table[\"input_ids\"].to_numpy()\r\n# 317 \u00b5s \u00b1 5.06 \u00b5s per loop (mean \u00b1 std. dev. 
of 7 runs, 1000 loops each)\r\n```\r\n\r\nCurrently the conversion from arrow to numpy is done in the NumpyArrowExtractor here:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/d6d0ede9486ffad7944642ca9a326e058b676788\/src\/datasets\/formatting\/formatting.py#L143-L166\r\n\r\nLet's update the NumpyArrowExtractor to call `to_numpy` directly and see how our github benchmarks evolve ?__","Sounds like a plan @lhoestq If you create a PR I'll pick it up and try it out right away! ","@lhoestq I can also prepare the PR, just lmk. ","I\u2019m not exactly sure how to read the graph but it seems that to_categorical take a lot of time here. Could you share more informations on the features\/stats of your datasets so we could maybe design a synthetic datasets that looks more similar for debugging testing?","I created https:\/\/github.com\/huggingface\/datasets\/pull\/2505 if you want to play with it @vblagoje ","> I\u2019m not exactly sure how to read the graph but it seems that to_categorical take a lot of time here. Could you share more informations on the features\/stats of your datasets so we could maybe design a synthetic datasets that looks more similar for debugging testing?\r\n\r\n@thomwolf starting from the top, each rectangle represents the cumulative amount of it takes to execute the method call. Therefore, format_batch in torch_formatter.py takes ~20 sec, and the largest portion of that call is taken by to_pandas call and the smaller portion (grey rectangle) by the other method invocation(s) in format_batch (series_to_numpy etc). \r\n\r\nFeatures of the dataset are BERT pre-training model input columns i.e:\r\n```\r\nf = Features({ \r\n \"input_ids\": Sequence(feature=Value(dtype=\"int32\")), \r\n \"attention_mask\": Sequence(feature=Value(dtype=\"int8\")), \r\n \"token_type_ids\": Sequence(feature=Value(dtype=\"int8\")), \r\n \"labels\": Sequence(feature=Value(dtype=\"int32\")), \r\n \"next_sentence_label\": Value(dtype=\"int8\")\r\n})\r\n```\r\n\r\nI'll work with @lhoestq till we get to the bottom of this one. \r\n ","@lhoestq the proposed branch is faster, but overall training speedup is a few percentage points. I couldn't figure out how to include the GitHub branch into setup.py, so I couldn't start NVidia optimized Docker-based pre-training run. But on bare metal, there is a slight improvement. I'll do some more performance traces. 
","Hi @vblagoje, to install Datasets from @lhoestq PR reference #2505, you can use:\r\n```shell\r\npip install git+ssh:\/\/git@github.com\/huggingface\/datasets.git@refs\/pull\/2505\/head#egg=datasets\r\n```","Hey @albertvillanova yes thank you, I am aware, I can easily pull it from a terminal command line but then I can't automate docker image builds as dependencies are picked up from setup.py and for some reason setup.py doesn't accept this string format.","@vblagoje in that case, you can add this to your `setup.py`:\r\n```python\r\n install_requires=[\r\n \"datasets @ git+ssh:\/\/git@github.com\/huggingface\/datasets.git@refs\/pull\/2505\/head\",\r\n```","@lhoestq @thomwolf @albertvillanova The new approach is definitely faster, dataloader now takes less than 3% cumulative time (pink rectangle two rectangles to the right of tensor.py backward invocation)\r\n\r\n![Screen Shot 2021-06-16 at 3 05 06 PM](https:\/\/user-images.githubusercontent.com\/458335\/122224432-19de4700-ce82-11eb-982f-d45d4bcc1e41.png)\r\n\r\nWhen we drill down into dataloader next invocation we get:\r\n\r\n![Screen Shot 2021-06-16 at 3 09 56 PM](https:\/\/user-images.githubusercontent.com\/458335\/122224976-a1c45100-ce82-11eb-8d40-59194740d616.png)\r\n\r\nAnd finally format_batch:\r\n\r\n![Screen Shot 2021-06-16 at 3 11 07 PM](https:\/\/user-images.githubusercontent.com\/458335\/122225132-cae4e180-ce82-11eb-8a16-967ab7c1c2aa.png)\r\n\r\n\r\nNot sure this could be further improved but this is definitely a decent step forward.\r\n\r\n","> ```python\r\n> datasets @ git+ssh:\/\/git@github.com\/huggingface\/datasets.git@refs\/pull\/2505\/head\r\n> ```\r\n\r\n@albertvillanova how would I replace datasets dependency in https:\/\/github.com\/huggingface\/transformers\/blob\/master\/setup.py as the above approach is not working. ","@vblagoje I tested my proposed approach before posting it here and it worked for me. \r\n\r\nIs it not working in your case because of the SSH protocol? In that case you could try the same approach but using HTTPS:\r\n```\r\n\"datasets @ git+https:\/\/github.com\/huggingface\/datasets.git@refs\/pull\/2505\/head\",\r\n``` ","Also note the blanks before and after the `@`.","@albertvillanova of course it works. Apologies. I needed to change datasets in all deps references , like [here](https:\/\/github.com\/huggingface\/transformers\/blob\/master\/setup.py#L235) for example. "],"created_at":1623677124000,"updated_at":1624269294000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"**Is your feature request related to a problem? Please describe.**\r\nIt would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors. \r\n\r\nA bit more background. I am working on LM pre-training using HF ecosystem. We use encoded HF Wikipedia and BookCorpus datasets. The training machines are similar to DGX-1 workstations. We use HF trainer torch.distributed training approach on a single machine with 8 GPUs.\r\n\r\nThe current performance is about 30% slower than NVidia optimized BERT [examples](https:\/\/github.com\/NVIDIA\/DeepLearningExamples\/tree\/master\/PyTorch\/LanguageModeling) baseline. Quite a bit of customized code and training loop tricks were used to achieve the baseline performance. It would be great to achieve the same performance while using nothing more than off the shelf HF ecosystem. 
Perhaps, in the future, with @stas00 work on deepspeed integration, it could even be exceeded. \r\n\r\n**Describe the solution you'd like**\r\nUsing profiling tools we've observed that appx. 25% of cumulative run time is spent on data loader next call.\r\n![dataloader_next](https:\/\/user-images.githubusercontent.com\/458335\/121895543-59742a00-ccee-11eb-85fb-f07715e3f1f6.png)\r\n\r\nAs you can observe most of the data loader next call is spent in HF datasets torch_formatter.py format_batch call. \r\n\r\nDigging a bit deeper into format_batch we can see the following profiler data:\r\n![torch_formatter](https:\/\/user-images.githubusercontent.com\/458335\/121895944-c7b8ec80-ccee-11eb-95d5-5875c5716c30.png)\r\n\r\nOnce again, a lot of time is spent in pyarrow table conversion to pandas which seems like an intermediary step. Offline @lhoestq told me that this approach was, for some unknown reason, faster than direct to numpy conversion. \r\n\r\n**Describe alternatives you've considered**\r\nI am not familiar with pyarrow and have not yet considered the alternatives to the current approach. \r\n\r\nMost of the online advice around data loader performance improvements revolve around increasing number of workers, using pin memory for copying tensors from host device to gpus but we've already tried these avenues without much performance improvement. Weights & Biases dashboard for the pre-training task reports CPU utilization of ~ 10%, GPUs are completely saturated (GPU utilization is above 95% on all GPUs), while disk utilization is above 90%. \r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2498\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2497","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2497\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2497\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2497\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2497","id":920250382,"node_id":"MDExOlB1bGxSZXF1ZXN0NjY5NDI3OTU3","number":2497,"title":"Use default cast for sliced list arrays if pyarrow >= 
4","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/5","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/5","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/5\/labels","id":6808903,"node_id":"MDk6TWlsZXN0b25lNjgwODkwMw==","number":5,"title":"1.9","description":"Next minor release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":12,"state":"closed","created_at":1622477586000,"updated_at":1626099120000,"due_on":1625727600000,"closed_at":1625809807000},"comments":["I believe we don't use PyArrow >= 4.0.0 because of some segfault issues:\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/1206ffbcd42dda415f6bfb3d5040708f50413c93\/setup.py#L78\r\nCan you confirm @lhoestq ?","@SBrandeis pyarrow version 4.0.1 has fixed that issue: #2489 \ud83d\ude09 "],"created_at":1623664967000,"updated_at":1623780378000,"closed_at":1623680677000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2497","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2497","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2497.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2497.patch"},"body":"From pyarrow version 4, it is supported to cast sliced 
lists.\r\n\r\nThis PR uses default pyarrow cast in Datasets to cast sliced list arrays if pyarrow version is >= 4.\r\n\r\nIn relation with PR #2461 and #2490.\r\n\r\ncc: @lhoestq, @abhi1thakur, @SBrandeis","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2497\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2496","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2496\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2496\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2496\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2496","id":920216314,"node_id":"MDU6SXNzdWU5MjAyMTYzMTQ=","number":2496,"title":"Dataset fingerprint changes after moving the cache directory, which prevent cache reload when using `map`","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/
users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1623662426000,"updated_at":1624287903000,"closed_at":1624287903000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"`Dataset.map` uses the dataset fingerprint (a hash) for caching.\r\nHowever the fingerprint seems to change when someone moves the cache directory of the dataset.\r\n\r\nThis is because it uses the default fingerprint generation:\r\n1. the dataset path is used to get the fingerprint\r\n2. the modification times of the arrow file is also used to get the fingerprint\r\n\r\nTo fix that we could set the fingerprint of the dataset to be a hash of (, , , ), i.e. a hash of the the cache path relative to the cache directory.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2496\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2495","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2495\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2495\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2495\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2495","id":920170030,"node_id":"MDU6SXNzdWU5MjAxNzAwMzA=","number":2495,"title":"JAX 
formatting","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1623659527000,"updated_at":1624292149000,"closed_at":1624292149000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"We already support pytorch, tensorflow, numpy, pandas and arrow dataset formatting. 
Let's add jax as well","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2495\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2494","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2494\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2494\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2494\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2494","id":920149183,"node_id":"MDU6SXNzdWU5MjAxNDkxODM=","number":2494,"title":"Improve docs on Enhancing performance","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1623658308000,"updated_at":1623658308000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"In the [\"Enhancing performance\"](https:\/\/huggingface.co\/docs\/datasets\/loading_datasets.html#enhancing-performance) section of docs, add specific use cases:\r\n- How to make datasets the fastest\r\n- How to make datasets take the less RAM\r\n- How to make datasets take the less hard drive mem\r\n\r\ncc: @thomwolf \r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2494\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2493","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2493\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2493\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2493\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2493","id":919833281,"node_id":"MDExOlB1bGxSZXF1ZXN0NjY5MDc4OTcw","number":2493,"title":"add tensorflow-macos 
support","user":{"login":"slayerjain","id":12831254,"node_id":"MDQ6VXNlcjEyODMxMjU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12831254?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/slayerjain","html_url":"https:\/\/github.com\/slayerjain","followers_url":"https:\/\/api.github.com\/users\/slayerjain\/followers","following_url":"https:\/\/api.github.com\/users\/slayerjain\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/slayerjain\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/slayerjain\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/slayerjain\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/slayerjain\/orgs","repos_url":"https:\/\/api.github.com\/users\/slayerjain\/repos","events_url":"https:\/\/api.github.com\/users\/slayerjain\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/slayerjain\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@albertvillanova done!"],"created_at":1623601208000,"updated_at":1623747186000,"closed_at":1623747186000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2493","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2493","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2493.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2493.patch"},"body":"ref - https:\/\/github.com\/huggingface\/datasets\/issues\/2068","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2493\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2492","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2492\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2492\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2492\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2492","id":919718102,"node_id":"MDExOlB1bGxSZXF1ZXN0NjY4OTkxODk4","number":2492,"title":"Eduge","user":{"login":"enod","id":6023883,"node_id":"MDQ6VXNlcjYwMjM4ODM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6023883?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/enod","html_url":"https:\/\/github.com\/enod","followers_url":"https:\/\/api.github.com\/users\/enod\/followers","following_url":"https:\/\/api.github.com\/users\/enod\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/enod\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/enod\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/enod\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/enod\/orgs","repos_url":"https:\/\/api.github.com\/users\/enod\/repos","events_url":"https:\/\/api.github.com\/users\/enod\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/enod\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1623561059000,"updated_at":1624355344000,"closed_at":1623840106000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2492","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2492","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2492.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2492.patch"},"body":"Hi, awesome folks behind the huggingface! \r\n\r\nHere is my PR for the text classification dataset in Mongolian.\r\n\r\nPlease do let me know in case you have anything to clarify. 
\r\n\r\nThanks & Regards,\r\nEnod","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2492\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2491","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2491\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2491\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2491\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2491","id":919714506,"node_id":"MDExOlB1bGxSZXF1ZXN0NjY4OTg5MTUw","number":2491,"title":"add eduge classification dataset","user":{"login":"enod","id":6023883,"node_id":"MDQ6VXNlcjYwMjM4ODM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6023883?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/enod","html_url":"https:\/\/github.com\/enod","followers_url":"https:\/\/api.github.com\/users\/enod\/followers","following_url":"https:\/\/api.github.com\/users\/enod\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/enod\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/enod\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/enod\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/enod\/orgs","repos_url":"https:\/\/api.github.com\/users\/enod\/repos","events_url":"https:\/\/api.github.com\/users\/enod\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/enod\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Closing this PR as I'll submit a new one - bug free"],"created_at":1623559021000,"updated_at":1623560808000,"closed_at":1623560798000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2491","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2491","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2491.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2491.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2491\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2490","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2490\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2490\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2490\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2490","id":919571385,"node_id":"MDExOlB1bGxSZXF1ZXN0NjY4ODc4NDA3","number":2490,"title":"Allow latest pyarrow 
version","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/5","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/5","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/5\/labels","id":6808903,"node_id":"MDk6TWlsZXN0b25lNjgwODkwMw==","number":5,"title":"1.9","description":"Next minor release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":12,"state":"closed","created_at":1622477586000,"updated_at":1626099120000,"due_on":1625727600000,"closed_at":1625809807000},"comments":["i need some help with this"],"created_at":1623507454000,"updated_at":1625590492000,"closed_at":1623657203000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2490","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2490","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2490.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2490.patch"},"body":"Allow latest pyarrow version, once that version 4.0.1 fixes the segfault bug introduced in version 4.0.0.\r\n\r\nClose #2489.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2490\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2489","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2489\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2489\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2489\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2489","id":919569749,"node_id":"MDU6SXNzdWU5MTk1Njk3NDk=","number":2489,"title":"Allow latest pyarrow version once segfault bug is fixed","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or 
request"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1623506992000,"updated_at":1623657203000,"closed_at":1623657203000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"As pointed out by @symeneses (see https:\/\/github.com\/huggingface\/datasets\/pull\/2268#issuecomment-860048613), pyarrow has fixed the segfault bug present in version 4.0.0 (see https:\/\/issues.apache.org\/jira\/browse\/ARROW-12568):\r\n- it was fixed on 3 May 2021\r\n- version 4.0.1 was released on 19 May 2021 with the bug fix","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2489\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2488","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2488\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2488\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2488\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2488","id":919500756,"node_id":"MDExOlB1bGxSZXF1ZXN0NjY4ODIwNDA1","number":2488,"title":"Set configurable downloaded datasets 
path","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/5","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/5","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/5\/labels","id":6808903,"node_id":"MDk6TWlsZXN0b25lNjgwODkwMw==","number":5,"title":"1.9","description":"Next minor release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":12,"state":"closed","created_at":1622477586000,"updated_at":1626099120000,"due_on":1625727600000,"closed_at":1625809807000},"comments":[],"created_at":1623488943000,"updated_at":1623662007000,"closed_at":1623659347000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2488","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2488","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2488.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2488.patch"},"body":"Part of #2480.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2488\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2487","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2487\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2487\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2487\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2487","id":919452407,"node_id":"MDExOlB1bGxSZXF1ZXN0NjY4Nzc5Mjk0","number":2487,"title":"Set configurable extracted datasets path","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/5","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/5","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/5\/labels","id":6808903,"node_id":"MDk6TWlsZXN0b25lNjgwODkwMw==","number":5,"title":"1.9","description":"Next minor release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":12,"state":"closed","created_at":1622477586000,"updated_at":1626099120000,"due_on":1625727600000,"closed_at":1625809807000},"comments":["Let me push a small fix... 
\ud83d\ude09 ","Thanks !"],"created_at":1623476849000,"updated_at":1623663017000,"closed_at":1623661376000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2487","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2487","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2487.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2487.patch"},"body":"Part of #2480.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2487\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2486","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2486\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2486\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2486\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2486","id":919174898,"node_id":"MDExOlB1bGxSZXF1ZXN0NjY4NTI2Njg3","number":2486,"title":"Add Rico Dataset","user":{"login":"ncoop57","id":7613470,"node_id":"MDQ6VXNlcjc2MTM0NzA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7613470?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ncoop57","html_url":"https:\/\/github.com\/ncoop57","followers_url":"https:\/\/api.github.com\/users\/ncoop57\/followers","following_url":"https:\/\/api.github.com\/users\/ncoop57\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ncoop57\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ncoop57\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ncoop57\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ncoop57\/orgs","repos_url":"https:\/\/api.github.com\/users\/ncoop57\/repos","events_url":"https:\/\/api.github.com\/users\/ncoop57\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ncoop57\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Thanks for adding this dataset :)\r\n\r\nRegarding your questions:\r\n1. We can have them as different configuration of the `rico` dataset\r\n2. Yes please use the path to the image and not open the image directly, so that we can let users open the image one at at time during training if they want to for example. In the future we'll have an Image feature type that will decode the encoded image data on the fly when accessing the examples.\r\n3. Feel free to keep the hierarchies as strings if they don't follow a fixed format\r\n4. You can just return the path\r\n\r\n"],"created_at":1623442661000,"updated_at":1631176166000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2486","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2486","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2486.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2486.patch"},"body":"Hi there!\r\n\r\nI'm wanting to add the Rico datasets for software engineering type data to y'alls awesome library. 
However, as I have started coding, I've ran into a few hiccups so I thought it best to open the PR early to get a bit of discussion on how the Rico datasets should be added to the `datasets` lib.\r\n\r\n1) There are 7 different datasets under Rico and so I was wondering, should I make a folder for each or should I put them as different configurations of a single dataset?\r\nYou can see the datasets available for Rico here: http:\/\/interactionmining.org\/rico\r\n\r\n2) As of right now, I have a semi working version of the first dataset which has pairs of screenshots and hierarchies from android applications. However, these screenshots are very large (1440, 2560, 3) and there are 66,000 of them so I am not able to perform the processing that the `datasets` lib does after downloading and extracting the dataset since I run out of memory very fast. Is there a way to have `datasets` lib not put everything into memory while it is processing the dataset?\r\n\r\n2.1) If there is not a way, would it be better to just return the path to the screenshots instead of the actual image?\r\n\r\n3) The hierarchies are JSON objects and looking through the documentation of `datasets`, I didn't see any feature that I could use for this type of data. So, for now I just have it being read in as a string, is this okay or should I be doing it differently?\r\n\r\n4) One of the Rico datasets is a bunch of animations (GIFs), is there a `datasets` feature that I can put this type of data into or should I just return the path as a string?\r\n\r\nI appreciate any and all help I can get for this PR, I think the Rico datasets will be an awesome addition to the library :nerd_face: !","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2486\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2485","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2485\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2485\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2485\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2485","id":919099218,"node_id":"MDU6SXNzdWU5MTkwOTkyMTg=","number":2485,"title":"Implement layered 
building","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1623437665000,"updated_at":1623437665000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"As discussed with @stas00 and @lhoestq (see also here https:\/\/github.com\/huggingface\/datasets\/issues\/2481#issuecomment-859712190):\r\n\r\n> My suggestion for this would be to have this enabled by default.\r\n> \r\n> Plus I don't know if there should be a dedicated issue to that is another functionality. But I propose layered building rather than all at once. That is:\r\n>\r\n> 1. uncompress a handful of files via a generator enough to generate one arrow file\r\n> 2. process arrow file 1\r\n> 3. delete all the files that went in and aren't needed anymore.\r\n>\r\n> rinse and repeat.\r\n> \r\n> 1. This way much less disc space will be required - e.g. on JZ we won't be running into inode limitation, also it'd help with the collaborative hub training project\r\n> 2. The user doesn't need to go and manually clean up all the huge files that were left after pre-processing\r\n> 3. 
It would already include deleting temp files this issue is talking about\r\n> \r\n> I wonder if the new streaming API would be of help, except here the streaming would be into arrow files as the destination, rather than dataloaders.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2485\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2484","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2484\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2484\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2484\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2484","id":919092635,"node_id":"MDU6SXNzdWU5MTkwOTI2MzU=","number":2484,"title":"Implement loading a dataset builder","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or 
request"}],"state":"closed","locked":false,"assignee":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"assignees":[{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["#self-assign"],"created_at":1623437242000,"updated_at":1625481957000,"closed_at":1625481957000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"As discussed with @stas00 and @lhoestq, this would allow things like:\r\n\r\n```python\r\nfrom datasets import load_dataset_builder\r\ndataset_name = \"openwebtext\"\r\nbuilder = load_dataset_builder(dataset_name)\r\nprint(builder.cache_dir)\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2484\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2483","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2483\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2483\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2483\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2483","id":918871712,"node_id":"MDExOlB1bGxSZXF1ZXN0NjY4MjU1Mjg1","number":2483,"title":"Use gc.collect only when needed to avoid slow 
downs","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I continue thinking that the origin of the issue has to do with tqdm (and not with Arrow): this issue only arises for version 4.50.0 (and later) of tqdm, not for previous versions of tqdm.\r\n\r\nMy guess is that tqdm made a change from version 4.50.0 that does not properly release the iterable. ","FR"],"created_at":1623424170000,"updated_at":1624044306000,"closed_at":1623425496000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2483","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2483","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2483.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2483.patch"},"body":"In https:\/\/github.com\/huggingface\/datasets\/commit\/42320a110d9d072703814e1f630a0d90d626a1e6 we added a call to gc.collect to resolve some issues on windows (see https:\/\/github.com\/huggingface\/datasets\/pull\/2482)\r\n\r\nHowever calling gc.collect too often causes significant slow downs (the CI run time doubled).\r\nSo I just moved the gc.collect call to the exact place where it's actually needed: when post-processing a dataset","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2483\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2482","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2482\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2482\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2482\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2482","id":918846027,"node_id":"MDExOlB1bGxSZXF1ZXN0NjY4MjMyMzI5","number":2482,"title":"Allow to use 
tqdm>=4.50.0","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1623422961000,"updated_at":1623424311000,"closed_at":1623424310000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2482","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2482","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2482.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2482.patch"},"body":"We used to have permission errors on windows whith the latest versions of tqdm (see [here](https:\/\/app.circleci.com\/pipelines\/github\/huggingface\/datasets\/6365\/workflows\/24f7c960-3176-43a5-9652-7830a23a981e\/jobs\/39232))\r\n\r\nThey were due to open arrow files not properly closed by pyarrow.\r\nSince https:\/\/github.com\/huggingface\/datasets\/commit\/42320a110d9d072703814e1f630a0d90d626a1e6 gc.collect is called each time we don't need an arrow file to make sure that the files are closed.\r\n\r\nclose https:\/\/github.com\/huggingface\/datasets\/issues\/2471\r\n\r\ncc @lewtun ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2482\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2481","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2481\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2481\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2481\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2481","id":918680168,"node_id":"MDU6SXNzdWU5MTg2ODAxNjg=","number":2481,"title":"Delete extracted files to save disk 
space","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6
","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/6","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6\/labels","id":6836458,"node_id":"MDk6TWlsZXN0b25lNjgzNjQ1OA==","number":6,"title":"1.10","description":"Next minor release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":29,"state":"closed","created_at":1623178113000,"updated_at":1626881809000,"due_on":1628146800000,"closed_at":1626881809000},"comments":["My suggestion for this would be to have this enabled by default.\r\n\r\nPlus I don't know if there should be a dedicated issue to that is another functionality. But I propose layered building rather than all at once. That is:\r\n\r\n1. uncompress a handful of files via a generator enough to generate one arrow file\r\n2. process arrow file 1\r\n3. delete all the files that went in and aren't needed anymore.\r\n\r\nrinse and repeat.\r\n\r\n1. This way much less disc space will be required - e.g. on JZ we won't be running into inode limitation, also it'd help with the collaborative hub training project\r\n2. The user doesn't need to go and manually clean up all the huge files that were left after pre-processing\r\n3. 
It would already include deleting temp files this issue is talking about\r\n\r\nI wonder if the new streaming API would be of help, except here the streaming would be into arrow files as the destination, rather than dataloaders."],"created_at":1623414112000,"updated_at":1626685698000,"closed_at":1626685698000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"As discussed with @stas00 and @lhoestq, allowing the deletion of extracted files would save a great amount of disk space to typical user.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2481\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2480","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2480\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2480\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2480\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2480","id":918678578,"node_id":"MDU6SXNzdWU5MTg2Nzg1Nzg=","number":2480,"title":"Set download\/extracted paths configurable","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or 
request"}],"state":"open","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["For example to be able to send uncompressed and temp build files to another volume\/partition, so that the user gets the minimal disk usage on their primary setup - and ends up with just the downloaded compressed data + arrow files, but outsourcing the huge files and building to another partition. e.g. 
on JZ there is a special partition for fast data, but it's also volatile, so only temp files should go there.\r\n\r\nThink of it as `TMPDIR` so we need the equivalent for `datasets`."],"created_at":1623414024000,"updated_at":1623767029000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"As discussed with @stas00 and @lhoestq, setting these paths configurable may allow to overcome disk space limitation on different partitions\/drives.\r\n\r\nTODO:\r\n- [x] Set configurable extracted datasets path: #2487\r\n- [x] Set configurable downloaded datasets path: #2488\r\n- [ ] Set configurable \"incomplete\" datasets path?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2480\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2479","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2479\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2479\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2479\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2479","id":918672431,"node_id":"MDExOlB1bGxSZXF1ZXN0NjY4MDc3NTI4","number":2479,"title":"\u274c load_datasets \u274c","user":{"login":"julien-c","id":326577,"node_id":"MDQ6VXNlcjMyNjU3Nw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/326577?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/julien-c","html_url":"https:\/\/github.com\/julien-c","followers_url":"https:\/\/api.github.com\/users\/julien-c\/followers","following_url":"https:\/\/api.github.com\/users\/julien-c\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/julien-c\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/julien-c\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/julien-c\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/julien-c\/orgs","repos_url":"https:\/\/api.github.com\/users\/julien-c\/repos","events_url":"https:\/\/api.github.com\/users\/julien-c\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/julien-c\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1623413676000,"updated_at":1623422785000,"closed_at":1623422785000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2479","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2479","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2479.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2479.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2479\/timeline","performed_via_github_app":null,"is_pull_request":true} 
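The disk-space thread above (#2484, #2481 and #2480) leaves the cache spread across several configurable paths. A hedged sketch of how the `load_dataset_builder` helper proposed in #2484 could be used to check where a dataset would be materialized, and roughly how much space it needs, before anything is downloaded (the size fields are only filled in when the dataset ships size metadata):

```python
from datasets import load_dataset_builder

# Inspect the builder without downloading or preparing anything.
builder = load_dataset_builder("openwebtext")

print(builder.cache_dir)            # directory where the Arrow files would be written
print(builder.info.download_size)   # bytes to download, if the metadata provides it
print(builder.info.dataset_size)    # size of the prepared Arrow data, if available
```

Checking these values first makes it easier to decide whether the download and extraction paths discussed in #2480 need to point at a different partition.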
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2478","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2478\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2478\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2478\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2478","id":918507510,"node_id":"MDU6SXNzdWU5MTg1MDc1MTA=","number":2478,"title":"Create release script","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or 
request"}],"state":"open","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1623404282000,"updated_at":1623404282000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"Create a script so that releases can be done automatically (as done in `transformers`).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2478\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2477","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2477\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2477\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2477\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2477","id":918334431,"node_id":"MDExOlB1bGxSZXF1ZXN0NjY3NzczMTY0","number":2477,"title":"Fix docs custom stable 
version","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/5","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/5","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/5\/labels","id":6808903,"node_id":"MDk6TWlsZXN0b25lNjgwODkwMw==","number":5,"title":"1.9","description":"Next minor release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":12,"state":"closed","created_at":1622477586000,"updated_at":1626099120000,"due_on":1625727600000,"closed_at":1625809807000},"comments":["I see that @lhoestq overlooked this PR with his commit 07e2b05. \ud83d\ude22 \r\n\r\nI'm adding a script so that this issue does not happen again.\r\n","For the moment, the script only includes `update_custom_js`, but in a follow-up PR I will include all the required steps to make a package release.","I think we just need to clarify the release process in setup.py instead of adding a script that does the replacement","@lhoestq I really think we should implement a script that performs the release (instead of doing it manually as it is done now), as it is already the case in `transformers`. 
I will do it in a next PR.\r\n\r\nFor the moment, this PR includes one of the steps of the release script."],"created_at":1623396363000,"updated_at":1623662060000,"closed_at":1623658818000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2477","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2477","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2477.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2477.patch"},"body":"Currently docs default version is 1.5.0. This PR fixes this and sets the latest version instead.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2477\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2476","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2476\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2476\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2476\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2476","id":917686662,"node_id":"MDExOlB1bGxSZXF1ZXN0NjY3MTg3OTk1","number":2476,"title":"Add TimeDial","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq,\r\nI've pushed the updated README and tags. Let me know if anything is missing\/needs some improvement!\r\n\r\n~PS. 
I don't know why it's not triggering the build~"],"created_at":1623349987000,"updated_at":1627649874000,"closed_at":1627649874000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2476","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2476","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2476.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2476.patch"},"body":"Dataset: https:\/\/github.com\/google-research-datasets\/TimeDial\r\n\r\nTo-Do: Update README.md and add YAML tags","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2476\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2475","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2475\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2475\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2475\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2475","id":917650882,"node_id":"MDU6SXNzdWU5MTc2NTA4ODI=","number":2475,"title":"Issue in timit_asr database","user":{"login":"hrahamim","id":85702107,"node_id":"MDQ6VXNlcjg1NzAyMTA3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/85702107?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hrahamim","html_url":"https:\/\/github.com\/hrahamim","followers_url":"https:\/\/api.github.com\/users\/hrahamim\/followers","following_url":"https:\/\/api.github.com\/users\/hrahamim\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hrahamim\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hrahamim\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hrahamim\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hrahamim\/orgs","repos_url":"https:\/\/api.github.com\/users\/hrahamim\/repos","events_url":"https:\/\/api.github.com\/users\/hrahamim\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hrahamim\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This bug was fixed in #1995. 
Upgrading datasets to version 1.6 fixes the issue!","Indeed was a fixed bug.\r\nWorks on version 1.8\r\nThanks "],"created_at":1623348329000,"updated_at":1623572030000,"closed_at":1623571993000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nI am trying to load the timit_asr dataset however only the first record is shown (duplicated over all the rows).\r\nI am using the next code line\r\ndataset = load_dataset(\u201ctimit_asr\u201d, split=\u201ctest\u201d).shuffle().select(range(10))\r\n\r\nThe above code result with the same sentence duplicated ten times.\r\nIt also happens when I use the dataset viewer at Streamlit .\r\n\r\n## Steps to reproduce the bug\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\u201ctimit_asr\u201d, split=\u201ctest\u201d).shuffle().select(range(10))\r\ndata = dataset.to_pandas()\r\n\r\n# Sample code to reproduce the bug\r\n```\r\n\r\n## Expected results\r\ntable with different row information\r\n\r\n## Actual results\r\nSpecify the actual results or traceback.\r\n\r\n## Environment info\r\n\r\n\r\n- `datasets` version: 1.4.1 (also occur in the latest version)\r\n- Platform: Linux-4.15.0-143-generic-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.6.9\r\n- PyTorch version (GPU?): 1.8.1+cu102 (False)\r\n- Tensorflow version (GPU?): 1.15.3 (False)\r\n- Using GPU in script?: No\r\n- Using distributed or parallel set-up in script?: No\r\n\r\n- `datasets` version:\r\n- Platform:\r\n- Python version:\r\n- PyArrow version:\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2475\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2474","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2474\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2474\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2474\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2474","id":917622055,"node_id":"MDU6SXNzdWU5MTc2MjIwNTU=","number":2474,"title":"cache_dir parameter for load_from_disk 
?","user":{"login":"TaskManager91","id":7063207,"node_id":"MDQ6VXNlcjcwNjMyMDc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7063207?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/TaskManager91","html_url":"https:\/\/github.com\/TaskManager91","followers_url":"https:\/\/api.github.com\/users\/TaskManager91\/followers","following_url":"https:\/\/api.github.com\/users\/TaskManager91\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/TaskManager91\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/TaskManager91\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/TaskManager91\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/TaskManager91\/orgs","repos_url":"https:\/\/api.github.com\/users\/TaskManager91\/repos","events_url":"https:\/\/api.github.com\/users\/TaskManager91\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/TaskManager91\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi ! `load_from_disk` doesn't move the data. 
If you specify a local path to your mounted drive, then the dataset is going to be loaded directly from the arrow file in this directory. The cache files that result from `map` operations are also stored in the same directory by default.\r\n\r\nHowever, note that writing data to your Google Drive actually fills the VM's disk (see https:\/\/github.com\/huggingface\/datasets\/issues\/643).\r\n\r\nGiven that, I don't think that changing the cache directory changes anything.\r\n\r\nLet me know what you think","Thanks for your answer! I am a little surprised since I just want to read the dataset.\r\n\r\nAfter debugging a bit, I noticed that the VM\u2019s disk fills up when the tables (generator) are converted to a list:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/5ba149773d23369617563d752aca922081277ec2\/src\/datasets\/table.py#L850\r\n\r\nIf I try to iterate through the table\u2019s generator, e.g.:\r\n\r\n`length = sum(1 for x in tables)`\r\n\r\nthe VM\u2019s disk fills up as well.\r\n\r\nI\u2019m running out of ideas \ud83d\ude04 ","Indeed, reading the data shouldn't increase the VM's disk usage. Not sure what Google Colab does under the hood for that to happen"],"created_at":1623346776000,"updated_at":1623660069000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"**Is your feature request related to a problem? Please describe.**\r\nWhen using Google Colab, big datasets can be an issue, as they won't fit on the VM's disk. Therefore, mounting Google Drive could be a possible solution. Unfortunately, when loading my own dataset using the _load_from_disk_ function, the data gets cached to the VM's disk:\r\n\r\n```python\r\nfrom datasets import load_from_disk\r\n\r\nmyPreprocessedData = load_from_disk(\"\/content\/gdrive\/MyDrive\/ASR_data\/myPreprocessedData\")\r\n```\r\nI know that caching on Google Drive could slow down learning. But at least it would run.\r\n\r\n**Describe the solution you'd like**\r\nAdd a cache_dir parameter to the load_from_disk function.\r\n\r\n**Describe alternatives you've considered**\r\nIt looks like you could write a custom loading script for the load_dataset function. But this seems to be much too complex for my use case. 
Is there perhaps a template here that uses the load_from_disk function?\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2474\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2473","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2473\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2473\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2473\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2473","id":917538629,"node_id":"MDExOlB1bGxSZXF1ZXN0NjY3MDU5MjI5","number":2473,"title":"Add Disfl-QA","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Sounds great! It'll make things easier for the user while accessing the dataset. I'll make some changes to the current file then.","I've updated with the suggested changes. 
Updated the README, YAML tags as well (not sure of Size category tag as I couldn't pass the path of `dataset_infos.json` for this dataset)\r\n"],"created_at":1623341880000,"updated_at":1627559779000,"closed_at":1627559778000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2473","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2473","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2473.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2473.patch"},"body":"Dataset: https:\/\/github.com\/google-research-datasets\/disfl-qa\r\n\r\nTo-Do: Update README.md and add YAML tags","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2473\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2472","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2472\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2472\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2472\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2472","id":917463821,"node_id":"MDU6SXNzdWU5MTc0NjM4MjE=","number":2472,"title":"Fix automatic generation of Zenodo DOI","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/5","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/5","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/5\/labels","id":6808903,"node_id":"MDk6TWlsZXN0b25lNjgwODkwMw==","number":5,"title":"1.9","description":"Next minor 
release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":12,"state":"closed","created_at":1622477586000,"updated_at":1626099120000,"due_on":1625727600000,"closed_at":1625809807000},"comments":["I have received a reply from Zenodo support:\r\n> We are currently investigating and fixing this issue related to GitHub releases. As soon as we have solved it we will reach back to you.","Other repo maintainers had the same problem with Zenodo. \r\n\r\nThere is an open issue on their GitHub repo: zenodo\/zenodo#2181","I have received the following request from Zenodo support:\r\n> Could you send us the link to the repository as well as the release tag?\r\n\r\nMy reply:\r\n> Sure, here it is:\r\n> - Link to the repository: https:\/\/github.com\/huggingface\/datasets\r\n> - Link to the repository at the release tag: https:\/\/github.com\/huggingface\/datasets\/releases\/tag\/1.8.0\r\n> - Release tag: 1.8.0","Zenodo issue has been fixed. 
The 1.8.0 release DOI can be found here: https:\/\/zenodo.org\/record\/4946100#.YMd6vKj7RPY"],"created_at":1623338146000,"updated_at":1623689382000,"closed_at":1623689382000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"After the last release of Datasets (1.8.0), the automatic generation of the Zenodo DOI failed: it appears in yellow as \"Received\", instead of in green as \"Published\".\r\n\r\nI have contacted Zenodo support to fix this issue.\r\n\r\nTODO:\r\n- [x] Check with Zenodo to fix the issue\r\n- [x] Check BibTeX entry is right","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2472\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2471","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2471\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2471\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2471\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2471","id":917067165,"node_id":"MDU6SXNzdWU5MTcwNjcxNjU=","number":2471,"title":"Fix PermissionError on Windows when using tqdm >=4.50.0","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/5","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/5","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/5\/labels","id":6808903,"node_id":"MDk6TWlsZXN0b25lNjgwODkwMw==","number":5,"title":"1.9","description":"Next minor 
release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":12,"state":"closed","created_at":1622477586000,"updated_at":1626099120000,"due_on":1625727600000,"closed_at":1625809807000},"comments":[],"created_at":1623313909000,"updated_at":1623424310000,"closed_at":1623424310000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"See: https:\/\/app.circleci.com\/pipelines\/github\/huggingface\/datasets\/235\/workflows\/cfb6a39f-68eb-4802-8b17-2cd5e8ea7369\/jobs\/1111\r\n\r\n```\r\nPermissionError: [WinError 32] The process cannot access the file because it is being used by another process\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2471\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2470","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2470\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2470\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2470\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2470","id":916724260,"node_id":"MDU6SXNzdWU5MTY3MjQyNjA=","number":2470,"title":"Crash when `num_proc` > dataset length for `map()` on a 
`datasets.Dataset`.","user":{"login":"mbforbes","id":1170062,"node_id":"MDQ6VXNlcjExNzAwNjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1170062?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mbforbes","html_url":"https:\/\/github.com\/mbforbes","followers_url":"https:\/\/api.github.com\/users\/mbforbes\/followers","following_url":"https:\/\/api.github.com\/users\/mbforbes\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mbforbes\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mbforbes\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mbforbes\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mbforbes\/orgs","repos_url":"https:\/\/api.github.com\/users\/mbforbes\/repos","events_url":"https:\/\/api.github.com\/users\/mbforbes\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mbforbes\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! It looks like the issue comes from pyarrow. What version of pyarrow are you using ? How did you install it ?","Thank you for the quick reply! I have `pyarrow==4.0.0`, and I am installing with `pip`. It's not one of my explicit dependencies, so I assume it came along with something else.","Could you trying reinstalling pyarrow with pip ?\r\nI'm not sure why it would check in your multicurtural-sc directory for source files.","Sure! I tried reinstalling to get latest. pip was mad because it looks like Datasets currently wants <4.0.0 (which is interesting, because apparently I ended up with 4.0.0 already?), but I gave it a shot anyway:\r\n\r\n```bash\r\n$ pip install --upgrade --force-reinstall pyarrow\r\nCollecting pyarrow\r\n Downloading pyarrow-4.0.1-cp39-cp39-manylinux2014_x86_64.whl (21.9 MB)\r\n |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 21.9 MB 23.8 MB\/s\r\nCollecting numpy>=1.16.6\r\n Using cached numpy-1.20.3-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.4 MB)\r\nInstalling collected packages: numpy, pyarrow\r\n Attempting uninstall: numpy\r\n Found existing installation: numpy 1.20.3\r\n Uninstalling numpy-1.20.3:\r\n Successfully uninstalled numpy-1.20.3\r\n Attempting uninstall: pyarrow\r\n Found existing installation: pyarrow 3.0.0\r\n Uninstalling pyarrow-3.0.0:\r\n Successfully uninstalled pyarrow-3.0.0\r\nERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\r\ndatasets 1.8.0 requires pyarrow<4.0.0,>=1.0.0, but you have pyarrow 4.0.1 which is incompatible.\r\nSuccessfully installed numpy-1.20.3 pyarrow-4.0.1\r\n```\r\n\r\nTrying it, the same issue:\r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/1170062\/121730226-3f470b80-caa4-11eb-85a5-684c44c816da.png)\r\n\r\nI tried installing `\"pyarrow<4.0.0\"`, which gave me 3.0.0. Running, still, same issue.\r\n\r\nI agree it's weird that pyarrow is checking the source code directory for its files. (There is no `pyarrow\/` directory there.) 
To me, that makes it seem like an issue with how pyarrow is called.\r\n\r\nOut of curiosity, I tried running this with fewer workers to see when the error arises:\r\n\r\n- 1: \u2705\r\n- 2: \u2705\r\n- 4: \u2705\r\n- 8: \u2705\r\n- 10: \u2705\r\n- 11: \u274c \ud83e\udd14\r\n- 12: \u274c\r\n- 16: \u274c\r\n- 32: \u274c\r\n\r\nchecking my datasets:\r\n\r\n```python\r\n>>> datasets\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['text'],\r\n num_rows: 389290\r\n })\r\n validation.sc: Dataset({\r\n features: ['text'],\r\n num_rows: 10 # \ud83e\udd14\r\n })\r\n validation.wvs: Dataset({\r\n features: ['text'],\r\n num_rows: 93928\r\n })\r\n})\r\n```\r\n\r\nNew hypothesis: crash if `num_proc` > length of a dataset? \ud83d\ude05\r\n\r\nIf so, this might be totally my fault, as the caller. Could be a docs fix, or maybe this library could do a check to limit `num_proc` for this case?","Good catch ! Not sure why it could raise such a weird issue from pyarrow though\r\nWe should definitely reduce num_proc to the length of the dataset if needed and log a warning.","This has been fixed in #2566, thanks @connor-mccarthy !\r\nWe'll make a new release soon that includes the fix ;)"],"created_at":1623278422000,"updated_at":1625132094000,"closed_at":1625130673000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nCrash if when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`.\r\n\r\nI believe I've had cases where `num_proc` > 1 works before, but now it seems either inconsistent, or depends on my data. I'm not sure whether the issue is on my end, because it's difficult for me to debug! Any tips greatly appreciated, I'm happy to provide more info if it would helps us diagnose.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n# this function will be applied with map()\r\ndef tokenize_function(examples):\r\n return tokenizer(\r\n examples[\"text\"],\r\n padding=PaddingStrategy.DO_NOT_PAD,\r\n truncation=True,\r\n )\r\n\r\n# data_files is a Dict[str, str] mapping name -> path\r\ndatasets = load_dataset(\"text\", data_files={...}) \r\n\r\n# this is where the error happens if num_proc = 16,\r\n# but is fine if num_proc = 1\r\ntokenized_datasets = datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n num_proc=num_workers,\r\n)\r\n```\r\n\r\n## Expected results\r\nThe `map()` function succeeds with `num_proc` > 1.\r\n\r\n## Actual results\r\n![image](https:\/\/user-images.githubusercontent.com\/1170062\/121404271-a6cc5200-c910-11eb-8e27-5c893bd04042.png)\r\n![image](https:\/\/user-images.githubusercontent.com\/1170062\/121404362-be0b3f80-c910-11eb-9117-658943029aef.png)\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.6.2\r\n- Platform: Linux-5.4.0-73-generic-x86_64-with-glibc2.31\r\n- Python version: 3.9.5\r\n- PyTorch version (GPU?): 1.8.1+cu111 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Using GPU in script?: Yes, but I think N\/A for this issue\r\n- Using distributed or parallel set-up in script?: Multi-GPU on one machine, but I think also N\/A for this issue\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2470\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2469","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2469\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2469\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2469\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2469","id":916440418,"node_id":"MDExOlB1bGxSZXF1ZXN0NjY2MTA1OTk1","number":2469,"title":"Bump tqdm version","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["i tried both the latest version of `tqdm` and the version required by `autonlp` - no luck with windows \ud83d\ude1e \r\n\r\nit's very weird that a progress bar would trigger these kind of errors, so i'll have a look to see if it's something unique to `datasets`","Closing since this is now fixed in #2482 "],"created_at":1623259480000,"updated_at":1623423822000,"closed_at":1623423816000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2469","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2469","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2469.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2469.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2469\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2468","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2468\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2468\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2468\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2468","id":916427320,"node_id":"MDExOlB1bGxSZXF1ZXN0NjY2MDk0ODI5","number":2468,"title":"Implement ClassLabel encoding in JSON 
loader","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/5","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/5","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/5\/labels","id":6808903,"node_id":"MDk6TWlsZXN0b25lNjgwODkwMw==","number":5,"title":"1.9","description":"Next minor release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":12,"state":"closed","created_at":1622477586000,"updated_at":1626099120000,"due_on":1625727600000,"closed_at":1625809807000},"comments":["No, nevermind @lhoestq. 
Thanks to you for your reviews!"],"created_at":1623258534000,"updated_at":1624894794000,"closed_at":1624892735000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2468","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2468","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2468.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2468.patch"},"body":"Close #2365.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2468\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2466","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2466\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2466\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2466\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2466","id":915914098,"node_id":"MDExOlB1bGxSZXF1ZXN0NjY1NjY1MjQy","number":2466,"title":"change udpos features structure","user":{"login":"jerryIsHere","id":50871412,"node_id":"MDQ6VXNlcjUwODcxNDEy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/50871412?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jerryIsHere","html_url":"https:\/\/github.com\/jerryIsHere","followers_url":"https:\/\/api.github.com\/users\/jerryIsHere\/followers","following_url":"https:\/\/api.github.com\/users\/jerryIsHere\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jerryIsHere\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jerryIsHere\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jerryIsHere\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jerryIsHere\/orgs","repos_url":"https:\/\/api.github.com\/users\/jerryIsHere\/repos","events_url":"https:\/\/api.github.com\/users\/jerryIsHere\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jerryIsHere\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Let's add the tags in another PR. 
Thanks again !","Close #2061 , close #2444."],"created_at":1623225811000,"updated_at":1624017309000,"closed_at":1623840097000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2466","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2466","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2466.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2466.patch"},"body":"The structure is change such that each example is a sentence\r\nThe change is done for issues:\r\n#2061 \r\n#2444 \r\n\r\nClose #2061 , close #2444.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2466\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2465","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2465\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2465\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2465\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2465","id":915525071,"node_id":"MDExOlB1bGxSZXF1ZXN0NjY1MzMxMDMz","number":2465,"title":"adding masahaner dataset","user":{"login":"dadelani","id":23586676,"node_id":"MDQ6VXNlcjIzNTg2Njc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23586676?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dadelani","html_url":"https:\/\/github.com\/dadelani","followers_url":"https:\/\/api.github.com\/users\/dadelani\/followers","following_url":"https:\/\/api.github.com\/users\/dadelani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dadelani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dadelani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dadelani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dadelani\/orgs","repos_url":"https:\/\/api.github.com\/users\/dadelani\/repos","events_url":"https:\/\/api.github.com\/users\/dadelani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dadelani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thank you for the review. ","Thanks a lot for the corrections and comments. \r\n\r\nI have resolved point 2. The make style still throws some errors, please see below\r\n\r\nblack --line-length 119 --target-version py36 tests src benchmarks datasets\/**\/*.py metrics\r\n\/bin\/sh: 1: black: not found\r\nMakefile:13: recipe for target 'style' failed\r\nmake: *** [style] Error 127\r\n\r\nCan you help to resolve this?","Thank you very much @lhoestq for the help. 
"],"created_at":1623187225000,"updated_at":1623682745000,"closed_at":1623682745000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2465","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2465","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2465.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2465.patch"},"body":"Adding Masakhane dataset https:\/\/github.com\/masakhane-io\/masakhane-ner \r\n\r\n@lhoestq , can you please review","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2465\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2464","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2464\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2464\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2464\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2464","id":915485601,"node_id":"MDExOlB1bGxSZXF1ZXN0NjY1Mjk1MDE5","number":2464,"title":"fix: adjusting indexing for the labels.","user":{"login":"drugilsberg","id":5406908,"node_id":"MDQ6VXNlcjU0MDY5MDg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5406908?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/drugilsberg","html_url":"https:\/\/github.com\/drugilsberg","followers_url":"https:\/\/api.github.com\/users\/drugilsberg\/followers","following_url":"https:\/\/api.github.com\/users\/drugilsberg\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/drugilsberg\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/drugilsberg\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/drugilsberg\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/drugilsberg\/orgs","repos_url":"https:\/\/api.github.com\/users\/drugilsberg\/repos","events_url":"https:\/\/api.github.com\/users\/drugilsberg\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/drugilsberg\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> Good catch ! Thanks for fixing it\r\n\r\nMy pleasure\ud83d\ude4f"],"created_at":1623185245000,"updated_at":1623233746000,"closed_at":1623229828000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2464","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2464","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2464.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2464.patch"},"body":"The labels index were mismatching the actual ones used in the dataset. 
Specifically `0` is used for `SUPPORTS` and `1` is used for `REFUTES`\r\nAfter this change, the `README.md` now reflects the content of `dataset_infos.json`.\r\n\r\nSigned-off-by: Matteo Manica ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2464\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2463","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2463\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2463\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2463\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2463","id":915454788,"node_id":"MDExOlB1bGxSZXF1ZXN0NjY1MjY3NTA2","number":2463,"title":"Fix proto_qa download link","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1623183796000,"updated_at":1623329396000,"closed_at":1623313870000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2463","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2463","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2463.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2463.patch"},"body":"Fixes #2459 \r\n\r\nInstead of updating the path, this PR fixes a commit hash as suggested by @lhoestq.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2463\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2462","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2462\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2462\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2462\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2462","id":915384613,"node_id":"MDU6SXNzdWU5MTUzODQ2MTM=","number":2462,"title":"Merge DatasetDict and 
Dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"},{"id":2067400324,"node_id":"MDU6TGFiZWwyMDY3NDAwMzI0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/generic%20discussion","name":"generic discussion","color":"c5def5","default":false,"description":"Generic discussion on the library"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/8","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/8","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/8\/labels","id":6968069,"node_id":"MI_kwDODunzps4AalMF","number":8,"title":"1.12","description":"Next minor release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":5,"closed_issues":1,"state":"open","created_at":1626881696000,"updated_at":1630565260000,"due_on":1630306800000,"closed_at":null},"comments":[],"created_at":1623180124000,"updated_at":1630560812000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"As discussed in #2424 and #2437 (please see there for detailed conversation):\r\n- It would be desirable to improve UX with respect the confusion between DatasetDict and Dataset.\r\n- The 
difference between Dataset and DatasetDict is an additional abstraction complexity that confuses \"typical\" end users. \r\n- A user expects a \"Dataset\" (whether it contains multiple splits or a single one), and maybe it could be interesting to try to simplify the user-facing API as much as possible to hide this complexity from the end user.\r\n\r\nHere is a proposal for discussion, to be refined (and potentially abandoned if it's not good enough):\r\n- let's consider that a DatasetDict is also a Dataset with the various splits concatenated one after the other\r\n- let's disallow the use of integers in split names (probably not a very big breaking change)\r\n- when you index with integers you access the examples progressively, one split after the other (in a deterministic order)\r\n- when you index with strings\/split names you have the same behavior as now (full backward compat)\r\n- let's then also have all the methods of a Dataset on the DatasetDict\r\n\r\nThe end goal would be to merge both the Dataset and DatasetDict objects into a single object that would be (pretty much totally) backward compatible with both.\r\n\r\n\r\nThere are a few things that we could discuss if we want to merge Dataset and DatasetDict:\r\n\r\n1. what happens if you index by a string? Does it return the column or the split? We could disallow conflicts between column names and split names to avoid ambiguities. It can be surprising to be able to get a column or a split using the same indexing feature\r\n ```\r\n from datasets import load_dataset\r\n\r\n dataset = load_dataset(...)\r\n dataset[\"train\"]\r\n dataset[\"input_ids\"]\r\n ```\r\n2. what happens when you iterate over the object? I guess it should iterate over the examples as a Dataset object, but a DatasetDict used to iterate over the splits as they are the dictionary keys. This is a breaking change that we can discuss.\r\n\r\nMoreover, regarding your points:\r\n\r\n- integers are not allowed as split names already\r\n- it's definitely doable to have all the methods. 
Maybe some of them like train_test_split that is currently only available for Dataset can be tweaked to work for a split dataset\r\n\r\n\r\ncc: @thomwolf @lhoestq ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2462\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2461","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2461\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2461\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2461\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2461","id":915286150,"node_id":"MDExOlB1bGxSZXF1ZXN0NjY1MTE3MTY4","number":2461,"title":"Support sliced list arrays in cast","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1623173927000,"updated_at":1623174984000,"closed_at":1623174983000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2461","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2461","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2461.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2461.patch"},"body":"There is this issue in pyarrow:\r\n```python\r\nimport pyarrow as pa\r\n\r\narr = pa.array([[i * 10] for i in range(4)])\r\narr.cast(pa.list_(pa.int32())) # works\r\n\r\narr = arr.slice(1)\r\narr.cast(pa.list_(pa.int32())) # fails\r\n# ArrowNotImplementedError(\"Casting sliced lists (non-zero offset) not yet implemented\")\r\n```\r\n\r\nHowever in `Dataset.cast` we slice tables to cast their types (it's memory intensive), so we have the same issue.\r\nBecause of this it is currently not possible to cast a Dataset with a Sequence feature type (unless the table is small enough to not be sliced).\r\n\r\nIn this PR I fixed this by resetting the offset of `pyarrow.ListArray` arrays to zero in the table before casting.\r\nI used `pyarrow.compute.subtract` function to update the offsets of the ListArray.\r\n\r\ncc @abhi1thakur @SBrandeis ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2461\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2460","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2460\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2460\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2460\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2460","id":915268536,"node_id":"MDExOlB1bGxSZXF1ZXN0NjY1MTAyMjA4","number":2460,"title":"Revert default in-memory for small datasets","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/4","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/4","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/4\/labels","id":6680642,"node_id":"MDk6TWlsZXN0b25lNjY4MDY0Mg==","number":4,"title":"1.8","description":"Next minor 
release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":2,"state":"closed","created_at":1618937356000,"updated_at":1623178297000,"due_on":1623135600000,"closed_at":1623178264000},"comments":["Thank you for this welcome change guys!"],"created_at":1623172463000,"updated_at":1623175454000,"closed_at":1623174943000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2460","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2460","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2460.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2460.patch"},"body":"Close #2458","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2460\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2459","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2459\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2459\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2459\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2459","id":915222015,"node_id":"MDU6SXNzdWU5MTUyMjIwMTU=","number":2459,"title":"`Proto_qa` hosting seems to be 
broken","user":{"login":"VictorSanh","id":16107619,"node_id":"MDQ6VXNlcjE2MTA3NjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16107619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/VictorSanh","html_url":"https:\/\/github.com\/VictorSanh","followers_url":"https:\/\/api.github.com\/users\/VictorSanh\/followers","following_url":"https:\/\/api.github.com\/users\/VictorSanh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/VictorSanh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/VictorSanh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/VictorSanh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/VictorSanh\/orgs","repos_url":"https:\/\/api.github.com\/users\/VictorSanh\/repos","events_url":"https:\/\/api.github.com\/users\/VictorSanh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/VictorSanh\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@VictorSanh , I think @mariosasko is already working on it. "],"created_at":1623168992000,"updated_at":1623313869000,"closed_at":1623313869000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nThe hosting (on Github) of the `proto_qa` dataset seems broken. I haven't investigated more yet, just flagging it for now. \r\n\r\n@zaidalyafeai if you want to dive into it, I think it's just a matter of changing the links in `proto_qa.py`\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"proto_qa\")\r\n```\r\n\r\n## Actual results\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/home\/hf\/dev\/promptsource\/.venv\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 751, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"\/home\/hf\/dev\/promptsource\/.venv\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 575, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/home\/hf\/dev\/promptsource\/.venv\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 630, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"\/home\/hf\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/proto_qa\/445346efaad5c5f200ecda4aa7f0fb50ff1b55edde3003be424a2112c3e8102e\/proto_qa.py\", line 131, in _split_generators\r\n train_fpath = dl_manager.download(_URLs[self.config.name][\"train\"])\r\n File \"\/home\/hf\/dev\/promptsource\/.venv\/lib\/python3.7\/site-packages\/datasets\/utils\/download_manager.py\", line 199, in download\r\n num_proc=download_config.num_proc,\r\n File \"\/home\/hf\/dev\/promptsource\/.venv\/lib\/python3.7\/site-packages\/datasets\/utils\/py_utils.py\", line 195, in map_nested\r\n return function(data_struct)\r\n File \"\/home\/hf\/dev\/promptsource\/.venv\/lib\/python3.7\/site-packages\/datasets\/utils\/download_manager.py\", line 218, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File 
\"\/home\/hf\/dev\/promptsource\/.venv\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 291, in cached_path\r\n use_auth_token=download_config.use_auth_token,\r\n File \"\/home\/hf\/dev\/promptsource\/.venv\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 621, in get_from_cache\r\n raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\nFileNotFoundError: Couldn't find file at https:\/\/raw.githubusercontent.com\/iesl\/protoqa-data\/master\/data\/train\/protoqa_train.jsonl\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2459\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2458","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2458\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2458\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2458\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2458","id":915199693,"node_id":"MDU6SXNzdWU5MTUxOTk2OTM=","number":2458,"title":"Revert default in-memory for small datasets","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or 
request"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/4","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/4","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/4\/labels","id":6680642,"node_id":"MDk6TWlsZXN0b25lNjY4MDY0Mg==","number":4,"title":"1.8","description":"Next minor 
release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":2,"state":"closed","created_at":1618937356000,"updated_at":1623178297000,"due_on":1623135600000,"closed_at":1623178264000},"comments":["cc: @krandiash (pinged in reverted PR)."],"created_at":1623167501000,"updated_at":1623178631000,"closed_at":1623174943000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"Users are reporting issues and confusion about setting default in-memory to True for small datasets.\r\n\r\nWe see 2 clear use cases of Datasets:\r\n- the \"canonical\" way, where you can work with very large datasets, as they are memory-mapped and cached (after every transformation)\r\n- some edge cases (speed benchmarks, interactive\/exploratory analysis,...), where default in-memory can explicitly be enabled, and no caching will be done\r\n\r\nAfter discussing with @lhoestq we have agreed to:\r\n- revert this feature (implemented in #2182)\r\n- explain in the docs how to optimize speed\/performance by setting default in-memory\r\n\r\ncc: @stas00 https:\/\/github.com\/huggingface\/datasets\/pull\/2409#issuecomment-856210552","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2458\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2457","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2457\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2457\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2457\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2457","id":915079441,"node_id":"MDExOlB1bGxSZXF1ZXN0NjY0OTQwMzQ0","number":2457,"title":"Add align_labels_with_mapping 
function","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq i think this is ready for another review \ud83d\ude42 ","@lhoestq thanks for the feedback - it's now integrated :) \r\n\r\ni also added a comment about sorting the input label IDs","Created the PR here: https:\/\/github.com\/huggingface\/datasets\/pull\/2510","> Thanks ! Looks all good now :)\r\n> \r\n> We will also need to have the `DatasetDict.align_labels_with_mapping` method. Let me quickly add it\r\n\r\nthanks a lot! i always forget about `DatasetDict` - will be happy when it's just one \"dataset\" object :)"],"created_at":1623160440000,"updated_at":1623925026000,"closed_at":1623923812000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2457","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2457","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2457.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2457.patch"},"body":"This PR adds a helper function to align the `label2id` mapping between a `datasets.Dataset` and a classifier (e.g. 
a transformer with a `PretrainedConfig.label2id` dict), with the alignment performed on the dataset itself.\r\n\r\nThis will help us with the Hub evaluation, where we won't know in advance whether a model that is fine-tuned on say MNLI has the same mappings as the MNLI dataset we load from `datasets`.\r\n\r\nAn example where this is needed is if we naively try to evaluate `microsoft\/deberta-base-mnli` on `mnli` because the model config has the following mappings:\r\n\r\n```python\r\n \"id2label\": {\r\n \"0\": \"CONTRADICTION\",\r\n \"1\": \"NEUTRAL\",\r\n \"2\": \"ENTAILMENT\"\r\n },\r\n \"label2id\": {\r\n \"CONTRADICTION\": 0,\r\n \"ENTAILMENT\": 2,\r\n \"NEUTRAL\": 1\r\n }\r\n```\r\n\r\nwhile the `mnli` dataset has the `contradiction` and `neutral` labels swapped:\r\n\r\n```python\r\nid2label = {0: 'entailment', 1: 'neutral', 2: 'contradiction'}\r\nlabel2id = {'contradiction': 2, 'entailment': 0, 'neutral': 1}\r\n```\r\n\r\nAs a result, we get a much lower accuracy during evaluation:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nfrom transformers.trainer_utils import EvalPrediction\r\nfrom transformers import AutoModelForSequenceClassification, Trainer\r\n\r\n# load dataset for evaluation\r\nmnli = load_dataset(\"glue\", \"mnli\", split=\"test\")\r\n# load model\r\nmodel_ckpt = \"microsoft\/deberta-base-mnli\"\r\nmodel = AutoModelForSequenceClassification.from_pretrained(model_ckpt)\r\n# preprocess, create trainer ...\r\nmnli_enc = ...\r\ntrainer = Trainer(model, args=args, tokenizer=tokenizer)\r\n# generate preds\r\npreds = trainer.predict(mnli_enc)\r\n# preds.label_ids misaligned with model.config => returns wrong accuracy (too low)!\r\ncompute_metrics(EvalPrediction(preds.predictions, preds.label_ids))\r\n```\r\n\r\nThe fix is to use the helper function before running the evaluation to make sure the label IDs are aligned:\r\n\r\n```python\r\nmnli_enc_aligned = mnli_enc.align_labels_with_mapping(label2id=model.config.label2id, label_column=\"label\")\r\n# preds now aligned and everyone is happy :)\r\npreds = trainer.predict(mnli_enc_aligned)\r\n```\r\n\r\ncc @thomwolf @lhoestq ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2457\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2456","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2456\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2456\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2456\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2456","id":914709293,"node_id":"MDExOlB1bGxSZXF1ZXN0NjY0NjAwOTk1","number":2456,"title":"Fix cross-reference typos in 
documentation","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1623145514000,"updated_at":1623174097000,"closed_at":1623174096000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2456","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2456","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2456.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2456.patch"},"body":"Fix some minor typos in docs that avoid the creation of cross-reference links.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2456\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2455","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2455\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2455\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2455\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2455","id":914177468,"node_id":"MDExOlB1bGxSZXF1ZXN0NjY0MTEzNjg2","number":2455,"title":"Update version in xor_tydi_qa.py","user":{"login":"cccntu","id":31893406,"node_id":"MDQ6VXNlcjMxODkzNDA2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/31893406?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cccntu","html_url":"https:\/\/github.com\/cccntu","followers_url":"https:\/\/api.github.com\/users\/cccntu\/followers","following_url":"https:\/\/api.github.com\/users\/cccntu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cccntu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cccntu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cccntu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cccntu\/orgs","repos_url":"https:\/\/api.github.com\/users\/cccntu\/repos","events_url":"https:\/\/api.github.com\/users\/cccntu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cccntu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! 
Thanks for updating the version\r\n\r\n> Should I revert to the old dummy\/1.0.0 or delete it and keep only dummy\/1.1.0?\r\n\r\nFeel free to delete the old dummy data files\r\n"],"created_at":1623119025000,"updated_at":1623684925000,"closed_at":1623684925000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2455","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2455","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2455.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2455.patch"},"body":"Fix #2449\r\n\r\n@lhoestq Should I revert to the old `dummy\/1.0.0` or delete it and keep only `dummy\/1.1.0`?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2455\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2454","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2454\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2454\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2454\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2454","id":913883631,"node_id":"MDExOlB1bGxSZXF1ZXN0NjYzODUyODU1","number":2454,"title":"Rename config and environment variable for in memory max size","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thank you for the rename, @albertvillanova!"],"created_at":1623093668000,"updated_at":1623098626000,"closed_at":1623098626000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2454","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2454","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2454.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2454.patch"},"body":"As discussed in #2409, both config and environment variable have been renamed.\r\n\r\ncc: @stas00, huggingface\/transformers#12056","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2454\/timeline","performed_via_github_app":null,"is_pull_request":true} 
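For context on PR 2454 above (and the related revert in issues 2458/2460 earlier in this dump), a minimal sketch of how the renamed in-memory threshold can be set after that change; the names `IN_MEMORY_MAX_SIZE` and `HF_DATASETS_IN_MEMORY_MAX_SIZE` are assumed from the post-rename `datasets` library, not quoted from the records above:

```python
# Sketch only: assumes the post-#2454 names IN_MEMORY_MAX_SIZE (config attribute)
# and HF_DATASETS_IN_MEMORY_MAX_SIZE (environment variable).
import os

# Option 1: environment variable, set before importing datasets
os.environ["HF_DATASETS_IN_MEMORY_MAX_SIZE"] = str(250 * 2**20)  # 250 MiB

import datasets
from datasets import load_dataset

# Option 2: set the config attribute directly at runtime
datasets.config.IN_MEMORY_MAX_SIZE = 250 * 2**20

# Datasets below the threshold are copied into memory instead of being
# memory-mapped; 0 (the default after the revert in #2460) disables this.
ds = load_dataset("squad", split="validation")
print(ds.cache_files)  # empty list when the dataset was loaded fully in memory
```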
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2453","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2453\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2453\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2453\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2453","id":913729258,"node_id":"MDExOlB1bGxSZXF1ZXN0NjYzNzE3NTk2","number":2453,"title":"Keep original features order","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/5","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/5","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/5\/labels","id":6808903,"node_id":"MDk6TWlsZXN0b25lNjgwODkwMw==","number":5,"title":"1.9","description":"Next minor release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":12,"state":"closed","created_at":1622477586000,"updated_at":1626099120000,"due_on":1625727600000,"closed_at":1625809807000},"comments":["The arrow writer was supposing that the columns were always in the sorted order. I just pushed a fix to reorder the arrays accordingly to the schema. 
It was failing for many datasets like squad","and obviously it broke everything","Feel free to revert my commit. I can investigate this in the coming days","@lhoestq I do not understand when you say:\r\n> It was failing for many datasets like squad\r\n\r\nAll the tests were green after my last commit.","> All the tests were green after my last commit.\r\n\r\nYes but loading the actual squad dataset was failing :\/\r\n"],"created_at":1623083198000,"updated_at":1623780336000,"closed_at":1623771828000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2453","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2453","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2453.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2453.patch"},"body":"When loading a Dataset from a JSON file whose column names are not sorted alphabetically, we should get the same column name order, whether we pass features (in the same order as in the file) or not.\r\n\r\nI found this issue while working on #2366.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2453\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2452","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2452\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2452\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2452\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2452","id":913603877,"node_id":"MDU6SXNzdWU5MTM2MDM4Nzc=","number":2452,"title":"MRPC test set differences between torch and tensorflow datasets","user":{"login":"FredericOdermatt","id":50372080,"node_id":"MDQ6VXNlcjUwMzcyMDgw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/50372080?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/FredericOdermatt","html_url":"https:\/\/github.com\/FredericOdermatt","followers_url":"https:\/\/api.github.com\/users\/FredericOdermatt\/followers","following_url":"https:\/\/api.github.com\/users\/FredericOdermatt\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/FredericOdermatt\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/FredericOdermatt\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/FredericOdermatt\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/FredericOdermatt\/orgs","repos_url":"https:\/\/api.github.com\/users\/FredericOdermatt\/repos","events_url":"https:\/\/api.github.com\/users\/FredericOdermatt\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/FredericOdermatt\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Realized that `tensorflow_datasets` is not provided by Huggingface and should therefore raise the issue 
there."],"created_at":1623075626000,"updated_at":1623076472000,"closed_at":1623076472000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nWhen using `load_dataset(\"glue\", \"mrpc\")` to load the MRPC dataset, the test set includes the labels. When using `tensorflow_datasets.load('glue\/{}'.format('mrpc'))` to load the dataset the test set does not contain the labels. There should be consistency between torch and tensorflow ways of importing the GLUE datasets.\r\n\r\n## Steps to reproduce the bug\r\n\r\nMinimal working code \r\n```python\r\nfrom datasets import load_dataset\r\nimport tensorflow as tf\r\nimport tensorflow_datasets\r\n\r\n# torch\r\ndataset = load_dataset(\"glue\", \"mrpc\")\r\n# tf\r\ndata = tensorflow_datasets.load('glue\/{}'.format('mrpc'))\r\ndata = list(data['test'].as_numpy_iterator())\r\nfor i in range(40,50):\r\n tf_sentence1 = data[i]['sentence1'].decode(\"utf-8\") \r\n tf_sentence2 = data[i]['sentence2'].decode(\"utf-8\") \r\n\r\n tf_label = data[i]['label']\r\n \r\n index = data[i]['idx']\r\n print('Index {}'.format(index))\r\n torch_sentence1 = dataset['test']['sentence1'][index]\r\n torch_sentence2 = dataset['test']['sentence2'][index]\r\n\r\n torch_label = dataset['test']['label'][index]\r\n print('Tensorflow: \\n\\tSentence1 {}\\n\\tSentence2 {}\\n\\tLabel {}'.format(tf_sentence1, tf_sentence2, tf_label))\r\n print('Torch: \\n\\tSentence1 {}\\n\\tSentence2 {}\\n\\tLabel {}'.format(torch_sentence1, torch_sentence2, torch_label))\r\n```\r\n\r\nSample output \r\n```\r\nIndex 954\r\nTensorflow: \r\n\tSentence1 Sabri Yakou , an Iraqi native who is a legal U.S. resident , appeared before a federal magistrate yesterday on charges of violating U.S. arms-control laws .\r\n\tSentence2 The elder Yakou , an Iraqi native who is a legal U.S. resident , appeared before a federal magistrate Wednesday on charges of violating U.S. arms control laws .\r\n\tLabel -1\r\nTorch: \r\n\tSentence1 Sabri Yakou , an Iraqi native who is a legal U.S. resident , appeared before a federal magistrate yesterday on charges of violating U.S. arms-control laws .\r\n\tSentence2 The elder Yakou , an Iraqi native who is a legal U.S. resident , appeared before a federal magistrate Wednesday on charges of violating U.S. arms control laws .\r\n\tLabel 1\r\nIndex 711\r\nTensorflow: \r\n\tSentence1 Others keep records sealed for as little as five years or as much as 30 .\r\n\tSentence2 Some states make them available immediately ; others keep them sealed for as much as 30 years .\r\n\tLabel -1\r\nTorch: \r\n\tSentence1 Others keep records sealed for as little as five years or as much as 30 .\r\n\tSentence2 Some states make them available immediately ; others keep them sealed for as much as 30 years .\r\n\tLabel 0\r\n```\r\n\r\n## Expected results\r\nI would expect the datasets to be independent of whether I am working with torch or tensorflow.\r\n\r\n## Actual results\r\nTest set labels are provided in the `datasets.load_datasets()` for MRPC. 
However MRPC is the only task where the test set labels are not -1.\r\n\r\n## Environment info\r\n- `datasets` version: 1.7.0\r\n- Platform: Linux-5.4.109+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.10\r\n- PyArrow version: 3.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2452\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2451","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2451\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2451\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2451\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2451","id":913263340,"node_id":"MDExOlB1bGxSZXF1ZXN0NjYzMzIwNDY1","number":2451,"title":"Mention that there are no answers in adversarial_qa test set","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1623053637000,"updated_at":1623054854000,"closed_at":1623054853000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2451","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2451","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2451.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2451.patch"},"body":"As mention in issue https:\/\/github.com\/huggingface\/datasets\/issues\/2447, there are no answers in the test set","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2451\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2450","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2450\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2450\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2450\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2450","id":912890291,"node_id":"MDU6SXNzdWU5MTI4OTAyOTE=","number":2450,"title":"BLUE file not 
found","user":{"login":"mirfan899","id":3822565,"node_id":"MDQ6VXNlcjM4MjI1NjU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3822565?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mirfan899","html_url":"https:\/\/github.com\/mirfan899","followers_url":"https:\/\/api.github.com\/users\/mirfan899\/followers","following_url":"https:\/\/api.github.com\/users\/mirfan899\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mirfan899\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mirfan899\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mirfan899\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mirfan899\/orgs","repos_url":"https:\/\/api.github.com\/users\/mirfan899\/repos","events_url":"https:\/\/api.github.com\/users\/mirfan899\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mirfan899\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! The `blue` metric doesn't exist, but the `bleu` metric does.\r\nYou can get the full list of metrics [here](https:\/\/github.com\/huggingface\/datasets\/tree\/master\/metrics) or by running\r\n```python\r\nfrom datasets import list_metrics\r\n\r\nprint(list_metrics())\r\n```","Ah, my mistake. Thanks for correcting"],"created_at":1622998914000,"updated_at":1623062775000,"closed_at":1623062775000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi, I'm having the following issue when I try to load the `blue` metric.\r\n\r\n```shell\r\nimport datasets\r\nmetric = datasets.load_metric('blue')\r\nTraceback (most recent call last):\r\n File \"\/home\/irfan\/environments\/Perplexity_Transformers\/lib\/python3.6\/site-packages\/datasets\/load.py\", line 320, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"\/home\/irfan\/environments\/Perplexity_Transformers\/lib\/python3.6\/site-packages\/datasets\/utils\/file_utils.py\", line 291, in cached_path\r\n use_auth_token=download_config.use_auth_token,\r\n File \"\/home\/irfan\/environments\/Perplexity_Transformers\/lib\/python3.6\/site-packages\/datasets\/utils\/file_utils.py\", line 621, in get_from_cache\r\n raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\nFileNotFoundError: Couldn't find file at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.7.0\/metrics\/blue\/blue.py\r\nDuring handling of the above exception, another exception occurred:\r\nTraceback (most recent call last):\r\n File \"\/home\/irfan\/environments\/Perplexity_Transformers\/lib\/python3.6\/site-packages\/datasets\/load.py\", line 332, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"\/home\/irfan\/environments\/Perplexity_Transformers\/lib\/python3.6\/site-packages\/datasets\/utils\/file_utils.py\", line 291, in cached_path\r\n use_auth_token=download_config.use_auth_token,\r\n File \"\/home\/irfan\/environments\/Perplexity_Transformers\/lib\/python3.6\/site-packages\/datasets\/utils\/file_utils.py\", line 621, in get_from_cache\r\n raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\nFileNotFoundError: Couldn't find file at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/master\/metrics\/blue\/blue.py\r\nDuring handling of the above exception, another exception occurred:\r\nTraceback (most recent call 
last):\r\n File \"\", line 1, in \r\n File \"\/home\/irfan\/environments\/Perplexity_Transformers\/lib\/python3.6\/site-packages\/datasets\/load.py\", line 605, in load_metric\r\n dataset=False,\r\n File \"\/home\/irfan\/environments\/Perplexity_Transformers\/lib\/python3.6\/site-packages\/datasets\/load.py\", line 343, in prepare_module\r\n combined_path, github_file_path\r\nFileNotFoundError: Couldn't find file locally at blue\/blue.py, or remotely at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.7.0\/metrics\/blue\/blue.py.\r\nThe file is also not present on the master branch on github.\r\n```\r\nHere is dataset installed version info\r\n```shell\r\npip freeze | grep datasets\r\ndatasets==1.7.0\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2450\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2449","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2449\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2449\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2449\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2449","id":912751752,"node_id":"MDExOlB1bGxSZXF1ZXN0NjYyODg1ODUz","number":2449,"title":"Update `xor_tydi_qa` url to v1.1","user":{"login":"cccntu","id":31893406,"node_id":"MDQ6VXNlcjMxODkzNDA2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/31893406?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cccntu","html_url":"https:\/\/github.com\/cccntu","followers_url":"https:\/\/api.github.com\/users\/cccntu\/followers","following_url":"https:\/\/api.github.com\/users\/cccntu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cccntu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cccntu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cccntu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cccntu\/orgs","repos_url":"https:\/\/api.github.com\/users\/cccntu\/repos","events_url":"https:\/\/api.github.com\/users\/cccntu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cccntu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Just noticed while \r\n```load_dataset('local_path\/datastes\/xor_tydi_qa')``` works,\r\n```load_dataset('xor_tydi_qa')``` \r\noutputs an error: \r\n`\r\nFileNotFoundError: Couldn't find file at https:\/\/nlp.cs.washington.edu\/xorqa\/XORQA_site\/data\/xor_dev_retrieve_eng_span.jsonl\r\n`\r\n(the old url)\r\n\r\nI tired clearing the cache `.cache\/huggingface\/modules` and `.cache\/huggingface\/datasets`, didn't work.\r\n\r\nAnyone know how to fix this? Thanks.","It seems like the error is not on your end. By default, the lib tries to download the version of the dataset script that matches the version of the lib, and that version of the script is, in your case, broken because the old URL no longer works. Once this PR gets merged, you can wait for the new release or set `script_version` to `\"master\"` in `load_dataset` to get the fixed version of the script.","@mariosasko Thanks! 
It works now.\r\n\r\nPasting the docstring here for reference.\r\n```\r\n script_version (:class:`~utils.Version` or :obj:`str`, optional): Version of the dataset script to load:\r\n\r\n - For canonical datasets in the `huggingface\/datasets` library like \"squad\", the default version of the module is the local version fo the lib.\r\n You can specify a different version from your local version of the lib (e.g. \"master\" or \"1.2.0\") but it might cause compatibility issues.\r\n - For community provided datasets like \"lhoestq\/squad\" that have their own git repository on the Datasets Hub, the default version \"main\" corresponds to the \"main\" branch.\r\n You can specify a different version that the default \"main\" by using a commit sha or a git tag of the dataset repository.\r\n```\r\nBranch name didn't work, but commit sha works.","Regarding the issue you mentioned about the `--ignore_verifications` flag, I think we should actually change the current behavior of the `--save_infos` flag to make it ignore the verifications as well, so that you don't need to specific `--ignore_verifications` in this case.","@lhoestq I realized I forgot to change this:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/fdbf5a97d3393f4a91e4cddcabe364029508f7ce\/datasets\/xor_tydi_qa\/xor_tydi_qa.py#L72-L73\r\n\r\nWhat should I do?","Oh indeed. Please open a PR to change this. This should be 1.1.0"],"created_at":1622972698000,"updated_at":1623078981000,"closed_at":1623054664000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2449","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2449","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2449.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2449.patch"},"body":"The dataset is updated and the old url no longer works. So I updated it.\r\n\r\nI faced a bug while trying to fix this. Documenting the solution here. 
Maybe we can add it to the doc (`CONTRIBUTING.md` and `ADD_NEW_DATASET.md`).\r\n> And to make the command work without the ExpectedMoreDownloadedFiles error, you just need to use the --ignore_verifications flag.\r\nhttps:\/\/github.com\/huggingface\/datasets\/issues\/2076#issuecomment-803904366","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2449\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2448","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2448\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2448\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2448\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2448","id":912360109,"node_id":"MDExOlB1bGxSZXF1ZXN0NjYyNTI2NjA3","number":2448,"title":"Fix flores download link","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1622914224000,"updated_at":1623182578000,"closed_at":1623053905000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2448","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2448","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2448.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2448.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2448\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2447","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2447\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2447\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2447\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2447","id":912299527,"node_id":"MDU6SXNzdWU5MTIyOTk1Mjc=","number":2447,"title":"dataset adversarial_qa has no answers in the \"test\" 
set","user":{"login":"bjascob","id":22728060,"node_id":"MDQ6VXNlcjIyNzI4MDYw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22728060?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bjascob","html_url":"https:\/\/github.com\/bjascob","followers_url":"https:\/\/api.github.com\/users\/bjascob\/followers","following_url":"https:\/\/api.github.com\/users\/bjascob\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bjascob\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bjascob\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bjascob\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bjascob\/orgs","repos_url":"https:\/\/api.github.com\/users\/bjascob\/repos","events_url":"https:\/\/api.github.com\/users\/bjascob\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bjascob\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! I'm pretty sure that the answers are not made available for the test set on purpose because it is part of the DynaBench benchmark, for which you can submit your predictions on the website.\r\nIn any case we should mention this in the dataset card of this dataset.","Makes sense, but not intuitive for someone searching through the datasets. Thanks for adding the note to clarify."],"created_at":1622905058000,"updated_at":1623064387000,"closed_at":1623064387000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nWhen loading the adversarial_qa dataset the 'test' portion has no answers. Only the 'train' and 'validation' portions do. This occurs with all four of the configs ('adversarialQA', 'dbidaf', 'dbert', 'droberta')\r\n\r\n## Steps to reproduce the bug\r\n```\r\nfrom datasets import load_dataset\r\nexamples = load_dataset('adversarial_qa', 'adversarialQA', script_version=\"master\")['test']\r\nprint('Loaded {:,} examples'.format(len(examples)))\r\nhas_answers = 0\r\nfor e in examples:\r\n if e['answers']['text']:\r\n has_answers += 1\r\nprint('{:,} have answers'.format(has_answers))\r\n>>> Loaded 3,000 examples\r\n>>> 0 have answers\r\n\r\nexamples = load_dataset('adversarial_qa', 'adversarialQA', script_version=\"master\")['validation']\r\n<...code above...>\r\n>>> Loaded 3,000 examples\r\n>>> 3,000 have answers\r\n```\r\n\r\n## Expected results\r\nIf 'test' is a valid dataset, it should have answers. 
Also note that all of the 'train' and 'validation' sets have answers, there are no \"no answer\" questions with this set (not sure if this is correct or not).\r\n\r\n## Environment info\r\n- `datasets` version: 1.7.0\r\n- Platform: Linux-5.8.0-53-generic-x86_64-with-glibc2.29\r\n- Python version: 3.8.5\r\n- PyArrow version: 1.0.0\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2447\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2446","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2446\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2446\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2446\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2446","id":911635399,"node_id":"MDU6SXNzdWU5MTE2MzUzOTk=","number":2446,"title":"`yelp_polarity` is broken","user":{"login":"JetRunner","id":22514219,"node_id":"MDQ6VXNlcjIyNTE0MjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22514219?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JetRunner","html_url":"https:\/\/github.com\/JetRunner","followers_url":"https:\/\/api.github.com\/users\/JetRunner\/followers","following_url":"https:\/\/api.github.com\/users\/JetRunner\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JetRunner\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JetRunner\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JetRunner\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JetRunner\/orgs","repos_url":"https:\/\/api.github.com\/users\/JetRunner\/repos","events_url":"https:\/\/api.github.com\/users\/JetRunner\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JetRunner\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["```\r\nFile \"\/home\/sasha\/.local\/share\/virtualenvs\/lib-ogGKnCK_\/lib\/python3.7\/site-packages\/streamlit\/script_runner.py\", line 332, in _run_script\r\n exec(code, module.__dict__)\r\nFile \"\/home\/sasha\/nlp-viewer\/run.py\", line 233, in \r\n configs = get_confs(option)\r\nFile \"\/home\/sasha\/.local\/share\/virtualenvs\/lib-ogGKnCK_\/lib\/python3.7\/site-packages\/streamlit\/caching.py\", line 604, in wrapped_func\r\n return get_or_create_cached_value()\r\nFile \"\/home\/sasha\/.local\/share\/virtualenvs\/lib-ogGKnCK_\/lib\/python3.7\/site-packages\/streamlit\/caching.py\", line 588, in get_or_create_cached_value\r\n return_value = func(*args, **kwargs)\r\nFile \"\/home\/sasha\/nlp-viewer\/run.py\", line 148, in get_confs\r\n builder_cls = nlp.load.import_main_class(module_path[0], dataset=True)\r\nFile \"\/home\/sasha\/.local\/share\/virtualenvs\/lib-ogGKnCK_\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 85, in import_main_class\r\n module = importlib.import_module(module_path)\r\nFile \"\/usr\/lib\/python3.7\/importlib\/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\nFile \"\", line 1006, in _gcd_import\r\nFile \"\", line 983, in _find_and_load\r\nFile \"\", line 967, in _find_and_load_unlocked\r\nFile \"\", line 677, in _load_unlocked\r\nFile 
\"\", line 728, in exec_module\r\nFile \"\", line 219, in _call_with_frames_removed\r\nFile \"\/home\/sasha\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/yelp_polarity\/a770787b2526bdcbfc29ac2d9beb8e820fbc15a03afd3ebc4fb9d8529de57544\/yelp_polarity.py\", line 36, in \r\n from datasets.tasks import TextClassification\r\n```","Solved by updating the `nlpviewer`"],"created_at":1622821469000,"updated_at":1622833007000,"closed_at":1622833007000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"![image](https:\/\/user-images.githubusercontent.com\/22514219\/120828150-c4a35b00-c58e-11eb-8083-a537cee4dbb3.png)\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2446\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2445","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2445\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2445\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2445\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2445","id":911577578,"node_id":"MDExOlB1bGxSZXF1ZXN0NjYxODMzMTky","number":2445,"title":"Fix broken URLs for bn_hate_speech and covid_tweets_japanese","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks ! 
To fix the CI you just have to rename the dummy data file in the dummy_data.zip files","thanks for the tip with the dummy data - all fixed now!"],"created_at":1622818415000,"updated_at":1622828386000,"closed_at":1622828385000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2445","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2445","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2445.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2445.patch"},"body":"Closes #2388 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2445\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2444","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2444\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2444\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2444\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2444","id":911297139,"node_id":"MDU6SXNzdWU5MTEyOTcxMzk=","number":2444,"title":"Sentence Boundaries missing in Dataset: xtreme \/ udpos","user":{"login":"jerryIsHere","id":50871412,"node_id":"MDQ6VXNlcjUwODcxNDEy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/50871412?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jerryIsHere","html_url":"https:\/\/github.com\/jerryIsHere","followers_url":"https:\/\/api.github.com\/users\/jerryIsHere\/followers","following_url":"https:\/\/api.github.com\/users\/jerryIsHere\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jerryIsHere\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jerryIsHere\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jerryIsHere\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jerryIsHere\/orgs","repos_url":"https:\/\/api.github.com\/users\/jerryIsHere\/repos","events_url":"https:\/\/api.github.com\/users\/jerryIsHere\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jerryIsHere\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi,\r\n\r\nThis is a known issue. More info on this issue can be found in #2061. If you are looking for an open-source contribution, there are step-by-step instructions in the linked issue that you can follow to fix it.","Closed by #2466."],"created_at":1622797826000,"updated_at":1624017223000,"closed_at":1624017223000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I was browsing through annotation guidelines, as suggested by the datasets introduction.\r\n\r\nThe guidlines saids \"There must be exactly one blank line after every sentence, including the last sentence in the file. 
Empty sentences are not allowed.\" in the [Sentence Boundaries and Comments section](https:\/\/universaldependencies.org\/format.html#sentence-boundaries-and-comments)\r\n\r\nBut the sentence boundaries do not seem to be well represented by the huggingface datasets features. I found that multiple sentences are concatenated together as a 1D array, without any delimiter.\r\n\r\nPAN-x, which is another token classification subset from xtreme, does represent the sentence boundary using a 2D array.\r\n\r\nYou may compare PAN-x.en and udpos.English in the explorer:\r\n https:\/\/huggingface.co\/datasets\/viewer\/?dataset=xtreme","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2444\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2443","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2443\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2443\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2443\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2443","id":909983574,"node_id":"MDU6SXNzdWU5MDk5ODM1NzQ=","number":2443,"title":"Some tests hang on Windows","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! That would be nice indeed to at least have a warning, since we don't handle the max path length limit.\r\nAlso if we could have an error instead of an infinite loop I'm sure windows users will appreciate that","Unfortunately, I know this problem very well... 
\ud83d\ude05 \r\n\r\nI remember having proposed to throw an error instead of hanging in an infinite loop #2220: 60c7d1b6b71469599a27147a08100f594e7a3f84, 8c8ab60018b00463edf1eca500e434ff061546fc \r\nbut @lhoestq told me:\r\n> Note that the filelock module comes from this project that hasn't changed in years - while still being used by ten of thousands of projects:\r\nhttps:\/\/github.com\/benediktschmitt\/py-filelock\r\n> \r\n> Unless we have proper tests for this, I wouldn't recommend to change it\r\n\r\nI opened an Issue requesting a warning\/error at startup for that case: #2224","@albertvillanova Thanks for additional info on this issue.\r\n\r\nYes, I think the best option is to throw an error instead of suppressing it in a loop. I've considered 2 more options, but I don't really like them:\r\n1. create a temporary file with a filename longer than 255 characters on import; if this fails, long paths are not enabled and raise a warning. I'm not sure about this approach because I don't like the idea of creating a temporary file on import for this purpose.\r\n2. check if long paths are enabled with [this code](https:\/\/stackoverflow.com\/a\/46546731\/14095927). As mentioned in the comment, this code relies on an undocumented function and Win10-specific."],"created_at":1622680050000,"updated_at":1624870059000,"closed_at":1624870059000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Currently, several tests hang on Windows if the max path limit of 260 characters is not disabled. This happens due to the changes introduced by #2223 that cause an infinite loop in `WindowsFileLock` described in #2220. This can be very tricky to debug, so I think now is a good time to address these issues\/PRs. IMO throwing an error is too harsh, but maybe we can emit a warning in the top-level `__init__.py ` on startup if long paths are not enabled.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2443\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2442","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2442\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2442\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2442\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2442","id":909677029,"node_id":"MDExOlB1bGxSZXF1ZXN0NjYwMjE1ODY1","number":2442,"title":"add english language tags for ~100 
datasets","user":{"login":"VictorSanh","id":16107619,"node_id":"MDQ6VXNlcjE2MTA3NjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16107619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/VictorSanh","html_url":"https:\/\/github.com\/VictorSanh","followers_url":"https:\/\/api.github.com\/users\/VictorSanh\/followers","following_url":"https:\/\/api.github.com\/users\/VictorSanh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/VictorSanh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/VictorSanh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/VictorSanh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/VictorSanh\/orgs","repos_url":"https:\/\/api.github.com\/users\/VictorSanh\/repos","events_url":"https:\/\/api.github.com\/users\/VictorSanh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/VictorSanh\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Fixing the tags of all the datasets is out of scope for this PR so I'm merging even though the CI fails because of the missing tags"],"created_at":1622651096000,"updated_at":1622800300000,"closed_at":1622800299000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2442","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2442","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2442.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2442.patch"},"body":"As discussed on Slack, I have manually checked for ~100 datasets that they have at least one subset in English. 
This information was missing, so I am adding it to the READMEs.\r\n\r\nNote that I didn't check all the subsets, so it's possible that some of the datasets have subsets in other languages than English...","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2442\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2441","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2441\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2441\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2441\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2441","id":908554713,"node_id":"MDU6SXNzdWU5MDg1NTQ3MTM=","number":2441,"title":"DuplicatedKeysError on personal dataset","user":{"login":"lucaguarro","id":22605313,"node_id":"MDQ6VXNlcjIyNjA1MzEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22605313?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lucaguarro","html_url":"https:\/\/github.com\/lucaguarro","followers_url":"https:\/\/api.github.com\/users\/lucaguarro\/followers","following_url":"https:\/\/api.github.com\/users\/lucaguarro\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lucaguarro\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lucaguarro\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lucaguarro\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lucaguarro\/orgs","repos_url":"https:\/\/api.github.com\/users\/lucaguarro\/repos","events_url":"https:\/\/api.github.com\/users\/lucaguarro\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lucaguarro\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! In your dataset script you must be yielding examples like\r\n```python\r\nfor line in file:\r\n ...\r\n yield key, {...}\r\n```\r\n\r\nSince `datasets` 1.7.0 we enforce the keys to be unique.\r\nHowever it looks like your examples generator creates duplicate keys: at least two examples have key 0.\r\n\r\nYou can fix that by making sure that your keys are unique.\r\n\r\nFor example, if you use a counter to define the key of each example, make sure that your counter is not reset to 0 during examples generation (for example, between two opened files).\r\n\r\nLet me know if you have other questions :)","Yup, I indeed was generating duplicate keys. 
Fixed it and now it's working."],"created_at":1622570381000,"updated_at":1622850603000,"closed_at":1622850603000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nEver since today, I have been getting a DuplicatedKeysError while trying to load my dataset from my own script.\r\nError returned when running this line: `dataset = load_dataset('\/content\/drive\/MyDrive\/Thesis\/Datasets\/book_preprocessing\/goodreads_maharjan_trimmed_and_nered\/goodreadsnered.py')`\r\nNote that my script was working fine with earlier versions of the Datasets library. Cannot say with 100% certainty if I have been doing something wrong with my dataset script this whole time or if this is simply a bug with the new version of datasets.\r\n\r\n## Steps to reproduce the bug\r\nI cannot provide code to reproduce the error as I am working with my own dataset. I can however provide my script if requested.\r\n\r\n## Expected results\r\nFor my data to be loaded.\r\n\r\n## Actual results\r\n**DuplicatedKeysError** exception is raised\r\n```\r\nDownloading and preparing dataset good_reads_practice_dataset\/main_domain (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/root\/.cache\/huggingface\/datasets\/good_reads_practice_dataset\/main_domain\/1.1.0\/64ff7c3fee2693afdddea75002eb6887d4fedc3d812ae3622128c8504ab21655...\r\n\r\n---------------------------------------------------------------------------\r\n\r\nDuplicatedKeysError Traceback (most recent call last)\r\n\r\n in ()\r\n----> 1 dataset = load_dataset('\/content\/drive\/MyDrive\/Thesis\/Datasets\/book_preprocessing\/goodreads_maharjan_trimmed_and_nered\/goodreadsnered.py')\r\n\r\n5 frames\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, **config_kwargs)\r\n 749 try_from_hf_gcs=try_from_hf_gcs,\r\n 750 base_path=base_path,\r\n--> 751 use_auth_token=use_auth_token,\r\n 752 )\r\n 753 \r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)\r\n 573 if not downloaded_from_gcs:\r\n 574 self._download_and_prepare(\r\n--> 575 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 576 )\r\n 577 # Sync info\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 650 try:\r\n 651 # Prepare split will record examples associated to the split\r\n--> 652 self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 653 except OSError as e:\r\n 654 raise OSError(\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/builder.py in _prepare_split(self, split_generator)\r\n 990 writer.write(example, key)\r\n 991 finally:\r\n--> 992 num_examples, num_bytes = writer.finalize()\r\n 993 \r\n 994 split_generator.split_info.num_examples = num_examples\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/arrow_writer.py in finalize(self, close_stream)\r\n 407 # In case current_examples < writer_batch_size, but user uses finalize()\r\n 408 if self._check_duplicates:\r\n--> 409 self.check_duplicate_keys()\r\n 410 # Re-intializing to empty 
list for next batch\r\n 411 self.hkey_record = []\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/arrow_writer.py in check_duplicate_keys(self)\r\n 347 for hash, key in self.hkey_record:\r\n 348 if hash in tmp_record:\r\n--> 349 raise DuplicatedKeysError(key)\r\n 350 else:\r\n 351 tmp_record.add(hash)\r\n\r\nDuplicatedKeysError: FAILURE TO GENERATE DATASET !\r\nFound duplicate Key: 0\r\nKeys should be unique and deterministic in nature\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.7.0\r\n- Platform: Windows-10-10.0.19041-SP0\r\n- Python version: 3.7.9\r\n- PyArrow version: 3.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2441\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2440","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2440\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2440\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2440\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2440","id":908521954,"node_id":"MDU6SXNzdWU5MDg1MjE5NTQ=","number":2440,"title":"Remove `extended` field from dataset tagger","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The tagger also doesn't insert the value for the `size_categories` field automatically, so this should be fixed too","Thanks for reporting. Indeed the `extended` tag doesn't exist. Not sure why we had that in the tagger.\r\nThe repo of the tagger is here if someone wants to give this a try: https:\/\/github.com\/huggingface\/datasets-tagging\r\nOtherwise I can probably fix it next week","I've opened a PR on `datasets-tagging` to fix the issue \ud83d\ude80 ","thanks ! this is fixed now"],"created_at":1622567922000,"updated_at":1623229591000,"closed_at":1623229590000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nWhile working on #2435 I used the [dataset tagger](https:\/\/huggingface.co\/datasets\/tagging\/) to generate the missing tags for the YAML metadata of each README.md file. 
However, it seems that our CI raises an error when the `extended` field is included:\r\n\r\n```\r\ndataset_name = 'arcd'\r\n\r\n @pytest.mark.parametrize(\"dataset_name\", get_changed_datasets(repo_path))\r\n def test_changed_dataset_card(dataset_name):\r\n card_path = repo_path \/ \"datasets\" \/ dataset_name \/ \"README.md\"\r\n assert card_path.exists()\r\n error_messages = []\r\n try:\r\n ReadMe.from_readme(card_path)\r\n except Exception as readme_error:\r\n error_messages.append(f\"The following issues have been found in the dataset cards:\\nREADME:\\n{readme_error}\")\r\n try:\r\n DatasetMetadata.from_readme(card_path)\r\n except Exception as metadata_error:\r\n error_messages.append(\r\n f\"The following issues have been found in the dataset cards:\\nYAML tags:\\n{metadata_error}\"\r\n )\r\n \r\n if error_messages:\r\n> raise ValueError(\"\\n\".join(error_messages))\r\nE ValueError: The following issues have been found in the dataset cards:\r\nE YAML tags:\r\nE __init__() got an unexpected keyword argument 'extended'\r\n\r\ntests\/test_dataset_cards.py:70: ValueError\r\n```\r\n\r\nConsider either removing this tag from the tagger or including it as part of the validation step in the CI.\r\n\r\ncc @yjernite ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2440\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2439","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2439\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2439\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2439\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2439","id":908511983,"node_id":"MDExOlB1bGxSZXF1ZXN0NjU5MTkzMDE3","number":2439,"title":"Better error message when trying to access elements of a DatasetDict without specifying the 
split","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1622567072000,"updated_at":1623773003000,"closed_at":1623056075000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2439","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2439","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2439.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2439.patch"},"body":"As mentioned in #2437 it'd be nice to to have an indication to the users when they try to access an element of a DatasetDict without specifying the split name.\r\n\r\ncc @thomwolf ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2439\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2438","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2438\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2438\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2438\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2438","id":908461914,"node_id":"MDExOlB1bGxSZXF1ZXN0NjU5MTQ5Njg0","number":2438,"title":"Fix NQ features loading: reorder fields of features to match nested fields order in arrow 
data","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1622563770000,"updated_at":1622797351000,"closed_at":1622797351000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2438","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2438","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2438.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2438.patch"},"body":"As mentioned in #2401, there is an issue when loading the features of `natural_questions` since the order of the nested fields in the features don't match. The order is important since it matters for the underlying arrow schema.\r\n\r\nTo fix that I re-order the features based on the arrow schema:\r\n\r\n```python\r\ninferred_features = Features.from_arrow_schema(arrow_table.schema)\r\nself.info.features = self.info.features.reorder_fields_as(inferred_features)\r\nassert self.info.features.type == inferred_features.type\r\n```\r\n\r\nThe re-ordering is a recursive function. 
It takes into account that the `Sequence` feature type is a struct of list and not a list of struct.\r\n\r\nNow it's possible to load `natural_questions` again :)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2438\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2437","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2437\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2437\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2437\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2437","id":908108882,"node_id":"MDExOlB1bGxSZXF1ZXN0NjU4ODUwNTkw","number":2437,"title":"Better error message when using the wrong load_from_disk","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["We also have other cases where people are lost between Dataset and DatasetDict, maybe let's gather and solve them all here?\r\n\r\nFor instance, I remember that some people thought they would request a single element of a split but are calling this on a DatasetDict. Maybe here also a better error message when the split requested in not in the dict? pointing to the list of split and the fact that this is a datasetdict containing several datasets?","Good idea, let me add a better error message for this case too","As a digression from the topic of this PR, IMHO I think that the difference between Dataset and DatasetDict is an additional abstraction complexity that confuses \"typical\" end users. I think a user expects a \"Dataset\" (whatever it contains multiple or a single split) and maybe it could be interesting to try to simplify the user-facing API as much as possible to hide this complexity from the end user.\r\n\r\nI don't know your opinion about this, but it might be worth discussing...\r\n\r\nFor example, I really like the line of the solution of using the function `load_from_disk`, which hides the previous mentioned complexity and handles under the hood whether Dataset\/DatasetDict instances should be created...","I totally agree, I just haven't found a solution that doesn't imply major breaking changes x)","Yes I would also like to find a better solution. Do we have any solution actually? 
(even implying breaking changes)\r\n\r\nHere is a proposal for discussion and refined (and potential abandon if it's not good enough):\r\n- let's consider that a DatasetDict is also a Dataset with the various split concatenated one after the other\r\n- let's disallow the use of integers in split names (probably not a very big breaking change)\r\n- when you index with integers you access the examples progressively in split after the other is finished (in a deterministic order)\r\n- when you index with strings\/split name you have the same behavior as now (full backward compat)\r\n- let's then also have all the methods of a Dataset on the DatasetDict","The end goal would be to merge both `Dataset` and `DatasetDict` object in a single object that would be (pretty much totally) backward compatible with both.","I like the direction :) I think it can make sense to concatenate them.\r\n\r\nThere are a few things that I we could discuss if we want to merge Dataset and DatasetDict:\r\n1. what happens if you index by a string ? Does it return the column or the split ? We could disallow conflicts between column names and split names to avoid ambiguities. It can be surprising to be able to get a column or a split using the same indexing feature\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(...)\r\ndataset[\"train\"]\r\ndataset[\"input_ids\"]\r\n```\r\n2. what happens when you iterate over the object ? I guess it should iterate over the examples as a Dataset object, but a DatasetDict used to iterate over the splits as they are the dictionary keys. This is a breaking change that we can discuss.\r\n\r\nMoreover regarding your points:\r\n- integers are not allowed as split names already\r\n- it's definitely doable to have all the methods. Maybe some of them like `train_test_split` that is currently only available for Dataset can be tweaked to work for a split dataset","Instead of suggesting the use of `Dataset.load_from_disk` and `DatasetDict.load_from_disk`, the error message now suggests to use `datasets.load_from_disk` directly","Merging the error message improvement, feel free to continue the discussion here or in a github issue"],"created_at":1622540602000,"updated_at":1623175430000,"closed_at":1623175430000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2437","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2437","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2437.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2437.patch"},"body":"As mentioned in #2424, the error message when one tries to use `Dataset.load_from_disk` to load a DatasetDict object (or _vice versa_) can be improved. 
I added a suggestion in the error message to let users know that they should use the other one.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2437\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2436","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2436\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2436\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2436\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2436","id":908100211,"node_id":"MDExOlB1bGxSZXF1ZXN0NjU4ODQzMzQy","number":2436,"title":"Update DatasetMetadata and ReadMe","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1622539957000,"updated_at":1623677007000,"closed_at":1623677006000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2436","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2436","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2436.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2436.patch"},"body":"This PR contains the changes discussed in #2395.\r\n\r\n**Edit**:\r\nIn addition to those changes, I'll be updating the `ReadMe` as follows:\r\n\r\nCurrently, `Section` has separate parsing and validation error lists. In `.validate()`, we add these lists to the final lists and throw errors.\r\n\r\nOne way to make `ReadMe` consistent with `DatasetMetadata` and add a separate `.validate()` method is to throw separate parsing and validation errors. \r\n\r\nThis way, we don't have to throw validation errors, but only parsing errors in `__init__ ()`. We can have an option in `__init__()` to suppress parsing errors so that an object is created for validation. Doing this will allow the user to get all the errors in one go.\r\n\r\nIn `test_dataset_cards` , we are already catching error messages and appending to a list. This can be done for `ReadMe()` for parsing errors, and `ReadMe(...,suppress_errors=True); readme.validate()` for validation, separately.\r\n\r\n**Edit 2**:\r\nThe only parsing issue we have as of now is multiple headings at the same level with the same name. 
I assume this will happen very rarely, but it is still better to throw an error than silently pick one of them. It should be okay to separate it this way. \r\n\r\nWdyt @lhoestq ?\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2436\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2435","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2435\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2435\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2435\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2435","id":907505531,"node_id":"MDExOlB1bGxSZXF1ZXN0NjU4MzQzNDE2","number":2435,"title":"Insert Extractive QA templates for SQuAD-like datasets","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["hi @lhoestq @SBrandeis i've now added the missing YAML tags, so this PR should be good to go :)","urgh, the windows tests are failing because of encoding issues \ud83d\ude22 \r\n\r\n```\r\ndataset_name = 'squad_kor_v1'\r\n\r\n @pytest.mark.parametrize(\"dataset_name\", get_changed_datasets(repo_path))\r\n def test_changed_dataset_card(dataset_name):\r\n card_path = repo_path \/ \"datasets\" \/ dataset_name \/ \"README.md\"\r\n assert card_path.exists()\r\n error_messages = []\r\n try:\r\n ReadMe.from_readme(card_path)\r\n except Exception as readme_error:\r\n error_messages.append(f\"The following issues have been found in the dataset cards:\\nREADME:\\n{readme_error}\")\r\n try:\r\n DatasetMetadata.from_readme(card_path)\r\n except Exception as metadata_error:\r\n error_messages.append(\r\n f\"The following issues have been found in the dataset cards:\\nYAML tags:\\n{metadata_error}\"\r\n )\r\n \r\n if error_messages:\r\n> raise ValueError(\"\\n\".join(error_messages))\r\nE ValueError: The following issues have been found in the dataset cards:\r\nE README:\r\nE 'charmap' codec can't decode byte 0x90 in position 2283: character maps to \r\nE The following issues have been found in the dataset cards:\r\nE YAML tags:\r\nE 'charmap' codec can't decode byte 0x90 in position 2283: character maps to \r\n```","Seems like the encoding issues on windows is also being tackled in #2418 - will see if this solves the problem in the current 
PR"],"created_at":1622470151000,"updated_at":1622730870000,"closed_at":1622730747000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2435","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2435","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2435.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2435.patch"},"body":"This PR adds task templates for 9 SQuAD-like templates with the following properties:\r\n\r\n* 1 config\r\n* A schema that matches the `squad` one (i.e. same column names, especially for the nested `answers` column because the current implementation does not support casting with mismatched columns. see #2434)\r\n* Less than 20GB (my laptop can't handle more right now)\r\n\r\nThe aim of this PR is to provide a few datasets to experiment with the task template integration in other libraries \/ services. \r\n\r\nPR #2429 should be merged before this one.\r\n\r\ncc @abhi1thakur ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2435\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2434","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2434\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2434\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2434\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2434","id":907503557,"node_id":"MDU6SXNzdWU5MDc1MDM1NTc=","number":2434,"title":"Extend QuestionAnsweringExtractive template to handle nested columns","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["this is also the case for the following datasets and configurations:\r\n\r\n* `mlqa` with config `mlqa-translate-train.ar`\r\n\r\n"],"created_at":1622470011000,"updated_at":1623918090000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"Currently the `QuestionAnsweringExtractive` task template and `preprare_for_task` only support \"flat\" features. 
We should extend the functionality to cover QA datasets like:\r\n\r\n* `iapp_wiki_qa_squad`\r\n* `parsinlu_reading_comprehension`\r\n\r\nwhere the nested features differ with those from `squad` and trigger an `ArrowNotImplementedError`:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nArrowNotImplementedError Traceback (most recent call last)\r\n in \r\n----> 1 ds.prepare_for_task(\"question-answering-extractive\")[0]\r\n\r\n~\/git\/datasets\/src\/datasets\/arrow_dataset.py in prepare_for_task(self, task)\r\n 1436 # We found a template so now flush `DatasetInfo` to skip the template update in `DatasetInfo.__post_init__`\r\n 1437 dataset.info.task_templates = None\r\n-> 1438 dataset = dataset.cast(features=template.features)\r\n 1439 return dataset\r\n 1440 \r\n\r\n~\/git\/datasets\/src\/datasets\/arrow_dataset.py in cast(self, features, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, num_proc)\r\n 977 format = self.format\r\n 978 dataset = self.with_format(\"arrow\")\r\n--> 979 dataset = dataset.map(\r\n 980 lambda t: t.cast(schema),\r\n 981 batched=True,\r\n\r\n~\/git\/datasets\/src\/datasets\/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)\r\n 1600 \r\n 1601 if num_proc is None or num_proc == 1:\r\n-> 1602 return self._map_single(\r\n 1603 function=function,\r\n 1604 with_indices=with_indices,\r\n\r\n~\/git\/datasets\/src\/datasets\/arrow_dataset.py in wrapper(*args, **kwargs)\r\n 176 }\r\n 177 # apply actual function\r\n--> 178 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n 179 datasets: List[\"Dataset\"] = list(out.values()) if isinstance(out, dict) else [out]\r\n 180 # re-apply format to the output\r\n\r\n~\/git\/datasets\/src\/datasets\/fingerprint.py in wrapper(*args, **kwargs)\r\n 395 # Call actual function\r\n 396 \r\n--> 397 out = func(self, *args, **kwargs)\r\n 398 \r\n 399 # Update fingerprint of in-place transforms + update in-place history of transforms\r\n\r\n~\/git\/datasets\/src\/datasets\/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, desc)\r\n 1940 ) # Something simpler?\r\n 1941 try:\r\n-> 1942 batch = apply_function_on_filtered_inputs(\r\n 1943 batch,\r\n 1944 indices,\r\n\r\n~\/git\/datasets\/src\/datasets\/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)\r\n 1836 effective_indices = [i + offset for i in indices] if isinstance(indices, list) else indices + offset\r\n 1837 processed_inputs = (\r\n-> 1838 function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)\r\n 1839 )\r\n 1840 if update_data is None:\r\n\r\n~\/git\/datasets\/src\/datasets\/arrow_dataset.py in (t)\r\n 978 dataset = self.with_format(\"arrow\")\r\n 979 dataset = dataset.map(\r\n--> 980 lambda t: t.cast(schema),\r\n 981 batched=True,\r\n 982 batch_size=batch_size,\r\n\r\n~\/miniconda3\/envs\/datasets\/lib\/python3.8\/site-packages\/pyarrow\/table.pxi in 
pyarrow.lib.Table.cast()\r\n\r\n~\/miniconda3\/envs\/datasets\/lib\/python3.8\/site-packages\/pyarrow\/table.pxi in pyarrow.lib.ChunkedArray.cast()\r\n\r\n~\/miniconda3\/envs\/datasets\/lib\/python3.8\/site-packages\/pyarrow\/compute.py in cast(arr, target_type, safe)\r\n 241 else:\r\n 242 options = CastOptions.unsafe(target_type)\r\n--> 243 return call_function(\"cast\", [arr], options)\r\n 244 \r\n 245 \r\n\r\n~\/miniconda3\/envs\/datasets\/lib\/python3.8\/site-packages\/pyarrow\/_compute.pyx in pyarrow._compute.call_function()\r\n\r\n~\/miniconda3\/envs\/datasets\/lib\/python3.8\/site-packages\/pyarrow\/_compute.pyx in pyarrow._compute.Function.call()\r\n\r\n~\/miniconda3\/envs\/datasets\/lib\/python3.8\/site-packages\/pyarrow\/error.pxi in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\n~\/miniconda3\/envs\/datasets\/lib\/python3.8\/site-packages\/pyarrow\/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowNotImplementedError: Unsupported cast from struct, answer_start: list, text: list> to struct using function cast_struct\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2434\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2433","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2433\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2433\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2433\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2433","id":907488711,"node_id":"MDExOlB1bGxSZXF1ZXN0NjU4MzI5MDQ4","number":2433,"title":"Fix DuplicatedKeysError in adversarial_qa","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1622468927000,"updated_at":1622537531000,"closed_at":1622537531000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2433","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2433","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2433.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2433.patch"},"body":"Fixes 
#2431","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2433\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2432","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2432\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2432\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2432\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2432","id":907462881,"node_id":"MDExOlB1bGxSZXF1ZXN0NjU4MzA3MTE1","number":2432,"title":"Fix CI six installation on linux","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1622466936000,"updated_at":1622467027000,"closed_at":1622467026000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2432","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2432","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2432.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2432.patch"},"body":"For some reason we end up with this error in the linux CI when running pip install .[tests]\r\n```\r\npip._vendor.resolvelib.resolvers.InconsistentCandidate: Provided candidate AlreadyInstalledCandidate(six 1.16.0 (\/usr\/local\/lib\/python3.6\/site-packages)) does not satisfy SpecifierRequirement('six>1.9'), SpecifierRequirement('six>1.9'), SpecifierRequirement('six>=1.11'), SpecifierRequirement('six~=1.15'), SpecifierRequirement('six'), SpecifierRequirement('six>=1.5.2'), SpecifierRequirement('six>=1.9.0'), SpecifierRequirement('six>=1.11.0'), SpecifierRequirement('six'), SpecifierRequirement('six>=1.6.1'), SpecifierRequirement('six>=1.9'), SpecifierRequirement('six>=1.5'), SpecifierRequirement('six<2.0'), SpecifierRequirement('six<2.0'), SpecifierRequirement('six'), SpecifierRequirement('six'), SpecifierRequirement('six~=1.15.0'), SpecifierRequirement('six'), SpecifierRequirement('six<2.0,>=1.6.1'), SpecifierRequirement('six'), SpecifierRequirement('six>=1.5.2'), SpecifierRequirement('six>=1.9.0')\r\n```\r\nexample CI failure here:\r\nhttps:\/\/app.circleci.com\/pipelines\/github\/huggingface\/datasets\/6200\/workflows\/b64fdec9-f9e6-431c-acd7-e9f2c440c568\/jobs\/38247\r\n\r\nThe main version requirement comes from 
tensorflow: `six~=1.15.0`\r\nSo I pinned the six version to this.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2432\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2431","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2431\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2431\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2431\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2431","id":907413691,"node_id":"MDU6SXNzdWU5MDc0MTM2OTE=","number":2431,"title":"DuplicatedKeysError when trying to load adversarial_qa","user":{"login":"hanss0n","id":21348833,"node_id":"MDQ6VXNlcjIxMzQ4ODMz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/21348833?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hanss0n","html_url":"https:\/\/github.com\/hanss0n","followers_url":"https:\/\/api.github.com\/users\/hanss0n\/followers","following_url":"https:\/\/api.github.com\/users\/hanss0n\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hanss0n\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hanss0n\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hanss0n\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hanss0n\/orgs","repos_url":"https:\/\/api.github.com\/users\/hanss0n\/repos","events_url":"https:\/\/api.github.com\/users\/hanss0n\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hanss0n\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting !\r\n#2433 fixed the issue, thanks @mariosasko :)\r\n\r\nWe'll do a patch release soon of the library.\r\nIn the meantime, you can use the fixed version of adversarial_qa by adding `script_version=\"master\"` in `load_dataset`"],"created_at":1622463079000,"updated_at":1622537643000,"closed_at":1622537531000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nA clear and concise description of what the bug is.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\ndataset = load_dataset('adversarial_qa', 'adversarialQA')\r\n```\r\n\r\n## Expected results\r\nThe dataset should be loaded into memory\r\n\r\n## Actual results\r\n\r\n>DuplicatedKeysError: FAILURE TO GENERATE DATASET !\r\n>Found duplicate Key: 4d3cb5677211ee32895ca9c66dad04d7152254d4\r\n>Keys should be unique and deterministic in nature\r\n>\r\n>\r\n>During handling of the above exception, another exception occurred:\r\n>\r\n>DuplicatedKeysError Traceback (most recent call last)\r\n>\r\n>\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/arrow_writer.py in check_duplicate_keys(self)\r\n> 347 for hash, key in self.hkey_record:\r\n> 348 if hash in tmp_record:\r\n>--> 349 raise DuplicatedKeysError(key)\r\n> 350 else:\r\n> 351 tmp_record.add(hash)\r\n>\r\n>DuplicatedKeysError: FAILURE TO GENERATE DATASET !\r\n>Found duplicate Key: 
4d3cb5677211ee32895ca9c66dad04d7152254d4\r\n>Keys should be unique and deterministic in nature\r\n\r\n## Environment info\r\n- `datasets` version: 1.7.0\r\n- Platform: Linux-5.4.109+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.10\r\n- PyArrow version: 3.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2431\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2430","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2430\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2430\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2430\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2430","id":907322595,"node_id":"MDExOlB1bGxSZXF1ZXN0NjU4MTg3Njkw","number":2430,"title":"Add version-specific BibTeX","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Maybe we should only keep one citation ?\r\ncc @thomwolf @yjernite ","For info:\r\n- The one automatically generated by Zenodo is version-specific, and a new one will be generated after each release.\r\n- Zenodo has also generated a project-specific DOI (they call it *Concept DOI* as opposed to *Version DOI*), but currently this only redirects to the DOI page of the latest version.\r\n- All the information automatically generated by Zenodo can be corrected\/customized if necessary.\r\n - If we decide to correct\/update metadata, take into account that there are the following fields (among others): Authors, Contributors, Title, Description, Keywords, Additional Notes, License,...\r\n\r\nAccording to Zenodo: https:\/\/help.zenodo.org\/#versioning\r\n> **Which DOI should I use in citations?**\r\n> \r\n> You should normally always use the DOI for the specific version of your record in citations. This is to ensure that other researchers can access the exact research artefact you used for reproducibility. By default, Zenodo uses the specific version to generate citations.\r\n> \r\n> You can use the Concept DOI representing all versions in citations when it is desirable to cite an evolving research artifact, without being specific about the version.","Thanks for the details ! 
As zenodo says we should probably just show the versioned DOI. And we can remove the old citation.","I have removed the old citation.\r\n\r\nWhat about the new one? Should we customize it? I have fixed some author names (replaced nickname with first and family names). Note that the list of authors is created automatically by Zenodo from this list: https:\/\/github.com\/huggingface\/datasets\/graphs\/contributors\r\nI do not know if this default automatic list of authors is what we want to show in the citation..."],"created_at":1622455542000,"updated_at":1623138802000,"closed_at":1623138802000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2430","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2430","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2430.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2430.patch"},"body":"As pointed out by @lhoestq in #2411, after the creation of the Zenodo DOI for Datasets, a new BibTeX entry is created with each release.\r\n\r\nThis PR adds a version-specific BibTeX entry, besides the existing one which is generic for the project.\r\n\r\nSee version-specific BibTeX entry here: https:\/\/zenodo.org\/record\/4817769\/export\/hx#.YLSyd6j7RPY","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2430\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2429","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2429\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2429\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2429\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2429","id":907321665,"node_id":"MDExOlB1bGxSZXF1ZXN0NjU4MTg2ODc0","number":2429,"title":"Rename QuestionAnswering template to QuestionAnsweringExtractive","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> I like having \"extractive\" in the name to make things explicit. 
However this creates an inconsistency with transformers.\r\n> \r\n> See\r\n> https:\/\/huggingface.co\/transformers\/task_summary.html#extractive-question-answering\r\n> \r\n> But this is minor IMO and I'm ok with this renaming\r\n\r\nyes i chose this convention because it allows us to match the `QuestionAnsweringXxx` naming and i think it's better to have `task_name-subtask_name` should auto-complete ever become part of the Hub :)"],"created_at":1622455482000,"updated_at":1622476646000,"closed_at":1622476644000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2429","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2429","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2429.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2429.patch"},"body":"Following the discussion with @thomwolf in #2255, this PR renames the QA template to distinguish extractive vs abstractive QA. The abstractive template will be added in a future PR.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2429\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2428","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2428\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2428\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2428\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2428","id":907169746,"node_id":"MDExOlB1bGxSZXF1ZXN0NjU4MDU2MjI3","number":2428,"title":"Add copyright info for wiki_lingua dataset","user":{"login":"PhilipMay","id":229382,"node_id":"MDQ6VXNlcjIyOTM4Mg==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/229382?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/PhilipMay","html_url":"https:\/\/github.com\/PhilipMay","followers_url":"https:\/\/api.github.com\/users\/PhilipMay\/followers","following_url":"https:\/\/api.github.com\/users\/PhilipMay\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/PhilipMay\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/PhilipMay\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/PhilipMay\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/PhilipMay\/orgs","repos_url":"https:\/\/api.github.com\/users\/PhilipMay\/repos","events_url":"https:\/\/api.github.com\/users\/PhilipMay\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/PhilipMay\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Build fails but this change should not be the reason...","rebased on master","rebased on 
master"],"created_at":1622445772000,"updated_at":1622802153000,"closed_at":1622802153000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2428","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2428","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2428.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2428.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2428\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2427","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2427\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2427\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2427\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2427","id":907162923,"node_id":"MDExOlB1bGxSZXF1ZXN0NjU4MDUwMjAx","number":2427,"title":"Add copyright info to MLSUM dataset","user":{"login":"PhilipMay","id":229382,"node_id":"MDQ6VXNlcjIyOTM4Mg==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/229382?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/PhilipMay","html_url":"https:\/\/github.com\/PhilipMay","followers_url":"https:\/\/api.github.com\/users\/PhilipMay\/followers","following_url":"https:\/\/api.github.com\/users\/PhilipMay\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/PhilipMay\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/PhilipMay\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/PhilipMay\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/PhilipMay\/orgs","repos_url":"https:\/\/api.github.com\/users\/PhilipMay\/repos","events_url":"https:\/\/api.github.com\/users\/PhilipMay\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/PhilipMay\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Build fails but this change should not be the reason...","rebased on master"],"created_at":1622445357000,"updated_at":1622800430000,"closed_at":1622800430000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2427","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2427","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2427.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2427.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2427\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2426","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2426\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2426\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2426\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2426","id":906473546,"node_id":"MDU6SXNzdWU5MDY0NzM1NDY=","number":2426,"title":"Saving Graph\/Structured Data in Datasets","user":{"login":"gsh199449","id":3295342,"node_id":"MDQ6VXNlcjMyOTUzNDI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3295342?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gsh199449","html_url":"https:\/\/github.com\/gsh199449","followers_url":"https:\/\/api.github.com\/users\/gsh199449\/followers","following_url":"https:\/\/api.github.com\/users\/gsh199449\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gsh199449\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gsh199449\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gsh199449\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gsh199449\/orgs","repos_url":"https:\/\/api.github.com\/users\/gsh199449\/repos","events_url":"https:\/\/api.github.com\/users\/gsh199449\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gsh199449\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["It should probably work out of the box to save structured data. If you want to show an example we can help you.","An example of a toy dataset is like:\r\n```json\r\n[\r\n {\r\n \"name\": \"mike\",\r\n \"friends\": [\r\n \"tom\",\r\n \"lily\"\r\n ],\r\n \"articles\": [\r\n {\r\n \"title\": \"aaaaa\",\r\n \"reader\": [\r\n \"tom\",\r\n \"lucy\"\r\n ]\r\n }\r\n ]\r\n },\r\n {\r\n \"name\": \"tom\",\r\n \"friends\": [\r\n \"mike\",\r\n \"bbb\"\r\n ],\r\n \"articles\": [\r\n {\r\n \"title\": \"xxxxx\",\r\n \"reader\": [\r\n \"tom\",\r\n \"qqqq\"\r\n ]\r\n }\r\n ]\r\n }\r\n]\r\n```\r\nWe can use the friendship relation to build a directional graph, and a user node can be represented using the articles written by himself. 
And the relationship between articles can be built when the article has read by the same user.\r\nThis dataset can be used to model the heterogeneous relationship between users and articles, and this graph can be used to build recommendation systems to recommend articles to the user, or potential friends to the user.","Hi,\r\n\r\nyou can do the following to load this data into a `Dataset`:\r\n```python\r\nfrom datasets import Dataset\r\nexamples = [\r\n {\r\n \"name\": \"mike\",\r\n \"friends\": [\r\n \"tom\",\r\n \"lily\"\r\n ],\r\n \"articles\": [\r\n {\r\n \"title\": \"aaaaa\",\r\n \"reader\": [\r\n \"tom\",\r\n \"lucy\"\r\n ]\r\n }\r\n ]\r\n },\r\n {\r\n \"name\": \"tom\",\r\n \"friends\": [\r\n \"mike\",\r\n \"bbb\"\r\n ],\r\n \"articles\": [\r\n {\r\n \"title\": \"xxxxx\",\r\n \"reader\": [\r\n \"tom\",\r\n \"qqqq\"\r\n ]\r\n }\r\n ]\r\n }\r\n]\r\n\r\nkeys = examples[0].keys()\r\nvalues = [ex.values() for ex in examples]\r\ndataset = Dataset.from_dict({k: list(v) for k, v in zip(keys, zip(*values))})\r\n```\r\n\r\nLet us know if this works for you.","Thank you so much, and that works! I also have a question that if the dataset is very large, that cannot be loaded into the memory. How to create the Dataset?","If your dataset doesn't fit in memory, store it in a local file and load it from there. Check out [this chapter](https:\/\/huggingface.co\/docs\/datasets\/master\/loading_datasets.html#from-local-files) in the docs for more info.","Nice! Thanks for your help."],"created_at":1622295321000,"updated_at":1622596863000,"closed_at":1622596863000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Thanks for this amazing library! And my question is I have structured data that is organized with a graph. For example, a dataset with users' friendship relations and user's articles. When I try to save a python dict in the dataset, an error occurred ``did not recognize Python value type when inferring an Arrow data type''.\r\nAlthough I also know that storing a python dict in pyarrow datasets is not the best practice, but I have no idea about how to save structured data in the Datasets. \r\n\r\nThank you very much for your help.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2426\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2425","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2425\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2425\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2425\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2425","id":906385457,"node_id":"MDExOlB1bGxSZXF1ZXN0NjU3NDAwMjM3","number":2425,"title":"Fix Docstring Mistake: dataset vs. 
metric","user":{"login":"PhilipMay","id":229382,"node_id":"MDQ6VXNlcjIyOTM4Mg==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/229382?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/PhilipMay","html_url":"https:\/\/github.com\/PhilipMay","followers_url":"https:\/\/api.github.com\/users\/PhilipMay\/followers","following_url":"https:\/\/api.github.com\/users\/PhilipMay\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/PhilipMay\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/PhilipMay\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/PhilipMay\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/PhilipMay\/orgs","repos_url":"https:\/\/api.github.com\/users\/PhilipMay\/repos","events_url":"https:\/\/api.github.com\/users\/PhilipMay\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/PhilipMay\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["IMO this PR is ready for review. I do not know why tests fail...","The CI fail is unrelated to this PR, and it has been fixed on master, merging :)","> I just have one comment: we use rouge, not rogue :p\r\n\r\nOops!","rebased on master"],"created_at":1622268593000,"updated_at":1622535484000,"closed_at":1622535484000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2425","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2425","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2425.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2425.patch"},"body":"PR to fix #2412","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2425\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2424","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2424\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2424\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2424\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2424","id":906193679,"node_id":"MDU6SXNzdWU5MDYxOTM2Nzk=","number":2424,"title":"load_from_disk and save_to_disk are not compatible with each 
other","user":{"login":"roholazandie","id":7584674,"node_id":"MDQ6VXNlcjc1ODQ2NzQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7584674?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/roholazandie","html_url":"https:\/\/github.com\/roholazandie","followers_url":"https:\/\/api.github.com\/users\/roholazandie\/followers","following_url":"https:\/\/api.github.com\/users\/roholazandie\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/roholazandie\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/roholazandie\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/roholazandie\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/roholazandie\/orgs","repos_url":"https:\/\/api.github.com\/users\/roholazandie\/repos","events_url":"https:\/\/api.github.com\/users\/roholazandie\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/roholazandie\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi,\r\n\r\n`load_dataset` returns an instance of `DatasetDict` if `split` is not specified, so instead of `Dataset.load_from_disk`, use `DatasetDict.load_from_disk` to load the dataset from disk.","Thanks it worked!","Though I see a stream of issues open by people lost between datasets and datasets dicts so maybe there is here something that could be better in terms of UX. Could be better error handling or something else smarter to even avoid said errors but maybe we should think about this. Reopening to use this issue as a discussion place but feel free to open a new open if you prefer @lhoestq @albertvillanova ","We should probably improve the error message indeed.\r\n\r\nAlso note that there exists a function `load_from_disk` that can load a Dataset or a DatasetDict. Under the hood it calls either `Dataset.load_from_disk` or `DatasetDict.load_from_disk`:\r\n\r\n\r\n```python\r\nfrom datasets import load_from_disk\r\n\r\ndataset_dict = load_from_disk(\"path\/to\/dataset\/dict\")\r\nsingle_dataset = load_from_disk(\"path\/to\/single\/dataset\")\r\n```","I just opened #2437 to improve the error message","Superseded by #2462 "],"created_at":1622243230000,"updated_at":1623180152000,"closed_at":1623180152000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nload_from_disk and save_to_disk are not compatible. When I use save_to_disk to save a dataset to disk it works perfectly but given the same directory load_from_disk throws an error that it can't find state.json. 
looks like the load_from_disk only works on one split\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"art\")\r\ndataset.save_to_disk(\"mydir\")\r\nd = Dataset.load_from_disk(\"mydir\")\r\n```\r\n\r\n## Expected results\r\nIt is expected that these two functions be the reverse of each other without more manipulation\r\n\r\n## Actual results\r\nFileNotFoundError: [Errno 2] No such file or directory: 'mydir\/art\/state.json'\r\n\r\n## Environment info\r\n- `datasets` version: 1.6.2\r\n- Platform: Linux-5.4.0-73-generic-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.10\r\n- PyTorch version (GPU?): 1.8.1+cu102 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Using GPU in script?: \r\n- Using distributed or parallel set-up in script?: \r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2424\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2423","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2423\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2423\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2423\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2423","id":905935753,"node_id":"MDExOlB1bGxSZXF1ZXN0NjU2OTc5MjA5","number":2423,"title":"add `desc` in `map` for `DatasetDict` object","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The CI error is unrelated to the PR, merging","@lhoestq, can we release this feature if you guys are planning for any patch release for Datasets. It'll slow down [#11927](https:\/\/github.com\/huggingface\/transformers\/pull\/11927) otherwise :\/ ","Sure definitely, having a discrepancy between Dataset.map and DatasetDict.map is an issue that we should fix and include in a patch release. 
Will do it in the coming days"],"created_at":1622230124000,"updated_at":1622472683000,"closed_at":1622466484000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2423","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2423","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2423.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2423.patch"},"body":"`desc` in `map` currently only works with `Dataset` objects. This PR adds support for `DatasetDict` objects as well","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2423\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2422","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2422\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2422\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2422\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2422","id":905568548,"node_id":"MDExOlB1bGxSZXF1ZXN0NjU2NjM3MzY1","number":2422,"title":"Fix save_to_disk nested features order in dataset_info.json","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1622214208000,"updated_at":1622215617000,"closed_at":1622215616000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2422","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2422","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2422.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2422.patch"},"body":"Fix issue https:\/\/github.com\/huggingface\/datasets\/issues\/2267\r\n\r\nThe order of the nested features matters (pyarrow limitation), but the save_to_disk method was saving the features types as JSON with `sort_keys=True`, which was breaking the order of the nested features.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2422\/timeline","performed_via_github_app":null,"is_pull_request":true} 
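The record above (PR 2422) attributes the bug to `save_to_disk` serializing the feature types as JSON with `sort_keys=True`, which breaks the order of nested features that pyarrow depends on. A minimal sketch of that failure mode, using only the standard-library `json` module and an illustrative feature dict (the field names here are assumptions for the example, not the actual `datasets` schema):

```python
import json

# Hypothetical nested feature spec in which key order is meaningful,
# analogous to the nested dataset features mentioned in PR 2422.
features = {"answers": {"text": "string", "answer_start": "int32"}}

# sort_keys=True silently reorders the nested keys alphabetically ...
print(json.dumps(features, sort_keys=True))
# {"answers": {"answer_start": "int32", "text": "string"}}

# ... whereas the default preserves insertion order (Python 3.7+ dicts).
print(json.dumps(features))
# {"answers": {"text": "string", "answer_start": "int32"}}
```

Because pyarrow treats the order of nested fields as part of the type, features reloaded from the alphabetically sorted JSON no longer match the original schema, which is the breakage the PR fixes by not sorting keys.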
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2421","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2421\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2421\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2421\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2421","id":905549756,"node_id":"MDExOlB1bGxSZXF1ZXN0NjU2NjIwMTM3","number":2421,"title":"doc: fix typo HF_MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES","user":{"login":"borisdayma","id":715491,"node_id":"MDQ6VXNlcjcxNTQ5MQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/715491?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/borisdayma","html_url":"https:\/\/github.com\/borisdayma","followers_url":"https:\/\/api.github.com\/users\/borisdayma\/followers","following_url":"https:\/\/api.github.com\/users\/borisdayma\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/borisdayma\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/borisdayma\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/borisdayma\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/borisdayma\/orgs","repos_url":"https:\/\/api.github.com\/users\/borisdayma\/repos","events_url":"https:\/\/api.github.com\/users\/borisdayma\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/borisdayma\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1622213530000,"updated_at":1622800365000,"closed_at":1622800365000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2421","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2421","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2421.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2421.patch"},"body":"MAX_MEMORY_DATASET_SIZE_IN_BYTES should be HF_MAX_MEMORY_DATASET_SIZE_IN_BYTES","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2421\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2420","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2420\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2420\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2420\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2420","id":904821772,"node_id":"MDExOlB1bGxSZXF1ZXN0NjU1OTQ1ODgw","number":2420,"title":"Updated Dataset 
Description","user":{"login":"binny-mathew","id":10741860,"node_id":"MDQ6VXNlcjEwNzQxODYw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10741860?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/binny-mathew","html_url":"https:\/\/github.com\/binny-mathew","followers_url":"https:\/\/api.github.com\/users\/binny-mathew\/followers","following_url":"https:\/\/api.github.com\/users\/binny-mathew\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/binny-mathew\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/binny-mathew\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/binny-mathew\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/binny-mathew\/orgs","repos_url":"https:\/\/api.github.com\/users\/binny-mathew\/repos","events_url":"https:\/\/api.github.com\/users\/binny-mathew\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/binny-mathew\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1622185851000,"updated_at":1623327095000,"closed_at":1623327095000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2420","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2420","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2420.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2420.patch"},"body":"Added Point of contact information and several other details about the dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2420\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2419","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2419\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2419\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2419\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2419","id":904347339,"node_id":"MDExOlB1bGxSZXF1ZXN0NjU1NTA1OTM1","number":2419,"title":"adds license information for 
DailyDialog.","user":{"login":"aditya2211","id":11574558,"node_id":"MDQ6VXNlcjExNTc0NTU4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11574558?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/aditya2211","html_url":"https:\/\/github.com\/aditya2211","followers_url":"https:\/\/api.github.com\/users\/aditya2211\/followers","following_url":"https:\/\/api.github.com\/users\/aditya2211\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/aditya2211\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/aditya2211\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/aditya2211\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/aditya2211\/orgs","repos_url":"https:\/\/api.github.com\/users\/aditya2211\/repos","events_url":"https:\/\/api.github.com\/users\/aditya2211\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/aditya2211\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks! Can you also add it as metadata in the YAML block at the top of the file?\r\n\r\nShould be in the form:\r\n\r\n```\r\nlicenses:\r\n- cc-by-sa-4.0\r\n```","seems like we need to add all the other tags ? \r\n``` \r\nif error_messages:\r\n> raise ValueError(\"\\n\".join(error_messages))\r\nE ValueError: The following issues have been found in the dataset cards:\r\nE YAML tags:\r\nE __init__() missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'languages', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n```","I'll let @lhoestq or @yjernite chime in (and maybe complete\/merge). Thanks!","Looks like CircleCI has an incident. 
Let's wait for it to be working again and make sure the CI is green","The remaining error is unrelated to this PR, merging"],"created_at":1622156622000,"updated_at":1622467012000,"closed_at":1622467012000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2419","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2419","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2419.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2419.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2419\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2418","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2418\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2418\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2418\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2418","id":904051497,"node_id":"MDExOlB1bGxSZXF1ZXN0NjU1MjM2OTEz","number":2418,"title":"add utf-8 while reading README","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Can you please add encoding to this line as well to fix the issue (and maybe replace `path.open(...)` with `open(path, ...)`)?\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/7bee4be44706a59b084b9b69c4cd00f73ee72f76\/src\/datasets\/utils\/metadata.py#L58","Sure, in fact even I was thinking of adding this in order to maintain the consistency!"],"created_at":1622139148000,"updated_at":1622800501000,"closed_at":1622800500000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2418","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2418","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2418.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2418.patch"},"body":"It was causing tests to fail in Windows (see #2416). 
In Windows, the default encoding is CP1252 which is unable to decode the character byte 0x9d ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2418\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2417","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2417\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2417\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2417\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2417","id":903956071,"node_id":"MDExOlB1bGxSZXF1ZXN0NjU1MTU3NTI4","number":2417,"title":"Make datasets PEP-561 compliant","user":{"login":"SBrandeis","id":33657802,"node_id":"MDQ6VXNlcjMzNjU3ODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33657802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SBrandeis","html_url":"https:\/\/github.com\/SBrandeis","followers_url":"https:\/\/api.github.com\/users\/SBrandeis\/followers","following_url":"https:\/\/api.github.com\/users\/SBrandeis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SBrandeis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SBrandeis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SBrandeis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SBrandeis\/orgs","repos_url":"https:\/\/api.github.com\/users\/SBrandeis\/repos","events_url":"https:\/\/api.github.com\/users\/SBrandeis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SBrandeis\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This is super cool, I love that \u2764\ufe0f "],"created_at":1622132177000,"updated_at":1622207410000,"closed_at":1622207356000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2417","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2417","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2417.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2417.patch"},"body":"Allows to type-check datasets with `mypy` when imported as a third-party library\r\n\r\nPEP-561: https:\/\/www.python.org\/dev\/peps\/pep-0561\r\nMyPy doc on the subject: https:\/\/mypy.readthedocs.io\/en\/stable\/installed_packages.html\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2417\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2416","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2416\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2416\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2416\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2416","id":903932299,"node_id":"MDExOlB1bGxSZXF1ZXN0NjU1MTM3NDUy","number":2416,"title":"Add KLUE 
dataset","user":{"login":"jungwhank","id":53588015,"node_id":"MDQ6VXNlcjUzNTg4MDE1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/53588015?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jungwhank","html_url":"https:\/\/github.com\/jungwhank","followers_url":"https:\/\/api.github.com\/users\/jungwhank\/followers","following_url":"https:\/\/api.github.com\/users\/jungwhank\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jungwhank\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jungwhank\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jungwhank\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jungwhank\/orgs","repos_url":"https:\/\/api.github.com\/users\/jungwhank\/repos","events_url":"https:\/\/api.github.com\/users\/jungwhank\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jungwhank\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I'm not sure why I got error like below when I auto-generate dummy data \"mrc\" \r\n```\r\ndatasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !\r\nFound duplicate Key: 0\r\nKeys should be unique and deterministic in nature\r\n```","> I'm not sure why I got error like below when I auto-generate dummy data \"mrc\"\r\n> \r\n> ```\r\n> datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !\r\n> Found duplicate Key: 0\r\n> Keys should be unique and deterministic in nature\r\n> ```\r\n\r\nPlease check out the suggestion below. I think it might be a cause.","> > I'm not sure why I got error like below when I auto-generate dummy data \"mrc\"\r\n> > ```\r\n> > datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !\r\n> > Found duplicate Key: 0\r\n> > Keys should be unique and deterministic in nature\r\n> > ```\r\n> \r\n> Please check out the suggestion below. I think it might be a cause.\r\n\r\nThe problem was `id_` in mrc when yield was not unique. (I used index in `enumerate(paragraphs)` by mistake)\r\nI fixed it and update all the things","To fix the CI you can just merge master into your branch and it should be all green hopefully :)","@lhoestq\r\nThanks for reviewing!\r\n\r\nIt's harder than I thought to add dataset card. \ud83d\ude05 \r\nI checked and updated your suggestion (script, readme details, dummy data). \r\n\r\ndummy data is little bit larger than expected because `ner` dataset is about 80 lines and `dp` dataset is about 25 lines to avoid 0 examples.\r\n\r\nI'm not sure why some CI keep fails, can u check for this?","Thanks ! That makes sense for ner and dp\r\n\r\nFor mrc on the other hand there are still too many examples, maybe you can generate the dummy data for 5 examples for all tasks except ner and dp ?","> Thanks ! 
That makes sense for ner and dp\r\n> \r\n> For mrc on the other hand there are still too many examples, maybe you can generate the dummy data for 5 examples for all tasks except ner and dp ?\r\n\r\nYes, I generate default lines in dataset-cli for other dataset except \"dp\" and \"ner\"\r\nI fixed mrc dataset, hope it's fine now :)\r\n\r\nthe reason CI failed was I forgot to merge master into my branch \ud83d\ude05 "],"created_at":1622130591000,"updated_at":1623250802000,"closed_at":1622828715000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2416","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2416","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2416.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2416.patch"},"body":"Add `KLUE (Korean Language Understanding Evaluation)` dataset released recently from [paper](https:\/\/arxiv.org\/abs\/2105.09680), [github](https:\/\/github.com\/KLUE-benchmark\/KLUE) and [webpage](https:\/\/klue-benchmark.com\/tasks).\r\nPlease let me know if there's anything missing in the code or README.\r\nThanks!\r\n\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2416\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2415","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2415\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2415\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2415\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2415","id":903923097,"node_id":"MDU6SXNzdWU5MDM5MjMwOTc=","number":2415,"title":"Cached dataset not loaded","user":{"login":"borisdayma","id":715491,"node_id":"MDQ6VXNlcjcxNTQ5MQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/715491?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/borisdayma","html_url":"https:\/\/github.com\/borisdayma","followers_url":"https:\/\/api.github.com\/users\/borisdayma\/followers","following_url":"https:\/\/api.github.com\/users\/borisdayma\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/borisdayma\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/borisdayma\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/borisdayma\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/borisdayma\/orgs","repos_url":"https:\/\/api.github.com\/users\/borisdayma\/repos","events_url":"https:\/\/api.github.com\/users\/borisdayma\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/borisdayma\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["It actually seems to happen all the time in above configuration:\r\n* the function `filter_by_duration` correctly loads cached processed dataset\r\n* the function `prepare_dataset` is always reexecuted\r\n\r\nI end up solving the issue by saving to disk my 
dataset at the end but I'm still wondering if it's a bug or limitation here.","Hi ! The hash used for caching `map` results is the fingerprint of the resulting dataset. It is computed using three things:\r\n- the old fingerprint of the dataset\r\n- the hash of the function\r\n- the hash of the other parameters passed to `map`\r\n\r\nYou can compute the hash of your function (or any python object) with\r\n```python\r\nfrom datasets.fingerprint import Hasher\r\n\r\nmy_func = lambda x: x + 1\r\nprint(Hasher.hash(my_func))\r\n```\r\n\r\nIf `prepare_dataset` is always executed, maybe this is because your `processor` has a different hash each time you want to execute it.","> If `prepare_dataset` is always executed, maybe this is because your `processor` has a different hash each time you want to execute it.\r\n\r\nYes I\u00a0think that was the issue.\r\n\r\nFor the hash of the function:\r\n* does it consider just the name or the actual code of the function\r\n* does it consider variables that are not passed explicitly as parameters to the functions (such as the processor here)","> does it consider just the name or the actual code of the function\r\n\r\nIt looks at the name and the actual code and all variables such as recursively. It uses `dill` to do so, which is based on `pickle`.\r\nBasically the hash is computed using the pickle bytes of your function (computed using `dill` to support most python objects).\r\n\r\n> does it consider variables that are not passed explicitly as parameters to the functions (such as the processor here)\r\n\r\nYes it does thanks to recursive pickling.","Thanks for these explanations. I'm closing the issue."],"created_at":1622130006000,"updated_at":1622639747000,"closed_at":1622639747000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nI have a large dataset (common_voice, english) where I use several map and filter functions.\r\nSometimes my cached datasets after specific functions are not loaded.\r\nI always use the same arguments, same functions, no seed\u2026\r\n\r\n## Steps to reproduce the bug\r\n```python\r\ndef filter_by_duration(batch):\r\n return (\r\n batch[\"duration\"] <= 10\r\n and batch[\"duration\"] >= 1\r\n and len(batch[\"target_text\"]) > 5\r\n )\r\n\r\ndef prepare_dataset(batch):\r\n batch[\"input_values\"] = processor(\r\n batch[\"speech\"], sampling_rate=batch[\"sampling_rate\"][0]\r\n ).input_values\r\n with processor.as_target_processor():\r\n batch[\"labels\"] = processor(batch[\"target_text\"]).input_ids\r\n return batch\r\n\r\ntrain_dataset = train_dataset.filter(\r\n filter_by_duration,\r\n remove_columns=[\"duration\"],\r\n num_proc=data_args.preprocessing_num_workers,\r\n)\r\n\r\n# PROBLEM HERE -> below function is reexecuted and cache is not loaded\r\ntrain_dataset = train_dataset.map(\r\n prepare_dataset,\r\n remove_columns=train_dataset.column_names,\r\n batch_size=training_args.per_device_train_batch_size,\r\n batched=True,\r\n num_proc=data_args.preprocessing_num_workers,\r\n)\r\n\r\n# Later in script\r\nset_caching_enabled(False)\r\n# apply map on trained model to eval\/test sets\r\n\r\n```\r\n\r\n## Expected results\r\nThe cached dataset should always be reloaded.\r\n\r\n## Actual results\r\nThe function is reexecuted.\r\n\r\nI have access to cached files `cache-xxxxx.arrow`.\r\nIs there a way I can somehow load manually 2 versions and see how the hash was created for debug purposes (to know if it's an issue with dataset or function)?\r\n\r\n## Environment 
info\r\n\r\n- `datasets` version: 1.6.2\r\n- Platform: Linux-5.8.0-45-generic-x86_64-with-glibc2.29\r\n- Python version: 3.8.5\r\n- PyTorch version (GPU?): 1.8.1+cu102 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Using GPU in script?: Yes\r\n- Using distributed or parallel set-up in script?: No","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2415\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2414","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2414\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2414\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2414\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2414","id":903877096,"node_id":"MDExOlB1bGxSZXF1ZXN0NjU1MDg5OTIw","number":2414,"title":"Update README.md","user":{"login":"cryoff","id":15029054,"node_id":"MDQ6VXNlcjE1MDI5MDU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15029054?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cryoff","html_url":"https:\/\/github.com\/cryoff","followers_url":"https:\/\/api.github.com\/users\/cryoff\/followers","following_url":"https:\/\/api.github.com\/users\/cryoff\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cryoff\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cryoff\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cryoff\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cryoff\/orgs","repos_url":"https:\/\/api.github.com\/users\/cryoff\/repos","events_url":"https:\/\/api.github.com\/users\/cryoff\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cryoff\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Merging since the CI error is unrelated to this PR and has been fixed on master","Thank you for taking a look at the CI error - I was a bit confused with that. 
Thanks!"],"created_at":1622127199000,"updated_at":1624887974000,"closed_at":1624885496000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2414","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2414","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2414.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2414.patch"},"body":"Provides description of data instances and dataset features\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2414\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2413","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2413\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2413\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2413\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2413","id":903777557,"node_id":"MDU6SXNzdWU5MDM3Nzc1NTc=","number":2413,"title":"AttributeError: 'DatasetInfo' object has no attribute 'task_templates'","user":{"login":"jungwhank","id":53588015,"node_id":"MDQ6VXNlcjUzNTg4MDE1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/53588015?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jungwhank","html_url":"https:\/\/github.com\/jungwhank","followers_url":"https:\/\/api.github.com\/users\/jungwhank\/followers","following_url":"https:\/\/api.github.com\/users\/jungwhank\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jungwhank\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jungwhank\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jungwhank\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jungwhank\/orgs","repos_url":"https:\/\/api.github.com\/users\/jungwhank\/repos","events_url":"https:\/\/api.github.com\/users\/jungwhank\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jungwhank\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Can you try using a more up-to-date version ? We added the task_templates in `datasets` 1.7.0.\r\n\r\nIdeally when you're working on new datasets, you should install and use the local version of your fork of `datasets`. 
Here I think you tried to run the 1.7.0 tests with the 1.6.2 code"],"created_at":1622123068000,"updated_at":1622509547000,"closed_at":1622509547000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nHello, \r\nI'm trying to add dataset and contribute, but test keep fail with below cli.\r\n` RUN_SLOW=1 pytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_`\r\n\r\n## Steps to reproduce the bug\r\nIt seems like a bug when I see an error with the existing dataset, not the dataset I'm trying to add.\r\n\r\n` RUN_SLOW=1 pytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_`\r\n\r\n\r\n## Expected results\r\nAll test passed\r\n\r\n## Actual results\r\n```\r\n # check that dataset is not empty\r\n self.parent.assertListEqual(sorted(dataset_builder.info.splits.keys()), sorted(dataset))\r\n for split in dataset_builder.info.splits.keys():\r\n # check that loaded datset is not empty\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\n \r\n # check that we can cast features for each task template\r\n> task_templates = dataset_builder.info.task_templates\r\nE AttributeError: 'DatasetInfo' object has no attribute 'task_templates'\r\n\r\ntests\/test_dataset_common.py:175: AttributeError\r\n```\r\n\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.6.2\r\n- Platform: Darwin-20.4.0-x86_64-i386-64bit\r\n- Python version: 3.7.7\r\n- PyTorch version (GPU?): 1.7.0 (False)\r\n- Tensorflow version (GPU?): 2.3.0 (False)\r\n- Using GPU in script?: No\r\n- Using distributed or parallel set-up in script?: No\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2413\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2412","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2412\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2412\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2412\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2412","id":903769151,"node_id":"MDU6SXNzdWU5MDM3NjkxNTE=","number":2412,"title":"Docstring mistake: dataset vs. 
metric","user":{"login":"PhilipMay","id":229382,"node_id":"MDQ6VXNlcjIyOTM4Mg==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/229382?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/PhilipMay","html_url":"https:\/\/github.com\/PhilipMay","followers_url":"https:\/\/api.github.com\/users\/PhilipMay\/followers","following_url":"https:\/\/api.github.com\/users\/PhilipMay\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/PhilipMay\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/PhilipMay\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/PhilipMay\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/PhilipMay\/orgs","repos_url":"https:\/\/api.github.com\/users\/PhilipMay\/repos","events_url":"https:\/\/api.github.com\/users\/PhilipMay\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/PhilipMay\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> I can provide a PR l8er...\r\n\r\nSee #2425 "],"created_at":1622122751000,"updated_at":1622535484000,"closed_at":1622535484000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"This:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/d95b95f8cf3cb0cff5f77a675139b584dcfcf719\/src\/datasets\/load.py#L582\r\n\r\nShould better be something like:\r\n\r\n`a metric identifier on HuggingFace AWS bucket (list all available metrics and ids with ``datasets.list_metrics()``)`\r\n\r\nI can provide a PR l8er...","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2412\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2411","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2411\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2411\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2411\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2411","id":903671778,"node_id":"MDExOlB1bGxSZXF1ZXN0NjU0OTAzNjg2","number":2411,"title":"Add DOI badge to 
README","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1622119007000,"updated_at":1622122974000,"closed_at":1622122974000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2411","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2411","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2411.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2411.patch"},"body":"Once published the latest release, the DOI badge has been automatically generated by Zenodo.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2411\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2410","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2410\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2410\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2410\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2410","id":903613676,"node_id":"MDExOlB1bGxSZXF1ZXN0NjU0ODUwMjY4","number":2410,"title":"fix #2391 add original answers in 
kilt-TriviaQA","user":{"login":"PaulLerner","id":25532159,"node_id":"MDQ6VXNlcjI1NTMyMTU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25532159?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/PaulLerner","html_url":"https:\/\/github.com\/PaulLerner","followers_url":"https:\/\/api.github.com\/users\/PaulLerner\/followers","following_url":"https:\/\/api.github.com\/users\/PaulLerner\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/PaulLerner\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/PaulLerner\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/PaulLerner\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/PaulLerner\/orgs","repos_url":"https:\/\/api.github.com\/users\/PaulLerner\/repos","events_url":"https:\/\/api.github.com\/users\/PaulLerner\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/PaulLerner\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["LGTM, but I'm not sure what's going on with the Unix tests @lhoestq ","The CI error is unrelated to this PR, it's been fixed now on master.","Thanks @PaulLerner !","> #- [ ] - Hey![image](https:\/\/user-images.githubusercontent.com\/71971234\/121969638-00030e00-cd75-11eb-9512-25d32ac08051.jpeg)@fr[fr_fr**fr~~fr `fr```\nFR\n````~~**_]()","Oh that was unexpected. I didn't know pokemons were into NLP"],"created_at":1622116469000,"updated_at":1623760557000,"closed_at":1623691750000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2410","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2410","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2410.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2410.patch"},"body":"cc @yjernite is it ok like this?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2410\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2409","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2409\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2409\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2409\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2409","id":903441398,"node_id":"MDExOlB1bGxSZXF1ZXN0NjU0Njk3NjA0","number":2409,"title":"Add HF_ prefix to env var 
MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I thought the renaming was suggested only for the env var, and not for the config variable... As you think is better! ;)","I think it's better if they match, so that users understand directly that they're directly connected","Well, if you're not concerned about back-compat here, perhaps it could be renamed and shortened too ;)\r\n\r\nI'd suggest one of:\r\n\r\n* `HF_DATASETS_IN_MEMORY_MAX_SIZE`\r\n* `HF_DATASETS_MAX_IN_MEMORY_SIZE`\r\n\r\nthe itention is to:\r\n1. make it consistent with all the other `datasets` env vars which all start with `HF_DATASETS_`, e.g.:\r\n```\r\nHF_DATASETS_CACHE\r\nHF_DATASETS_OFFLINE \r\n```\r\n2. allow to recode in the future to support 1M, 4K, 1T and not just bytes - bytes is not a great choice for this type of variable since it will be at least X Mbytes for most reasonable uses.\r\n\r\nAnd I agree with @albertvillanova that the config variable name shouldn't have the HF prefix - it's preaching to the choir - the user already knows it's a local variable. \r\n\r\nThe only reason we prefix env vars, is because they are used outside of the software.\r\n\r\nBut I do see a good point of you trying to make things consistent too. How about this:\r\n\r\n`config.IN_MEMORY_MAX_SIZE` (or whatever the final env var will be minus `HF_DATASETS_` prefix).\r\n\r\nThis is of course just my opinion.\r\n\r\n","Thanks for the comment :)\r\nI like both propositions, and I agree this would be better in order to allow support for 1M, 1T etc. \r\nRegarding the prefix of the variable in config.py I don't have a strong opinion. I just added it for consistency with the other variables that default to the env variables like HF_DATASETS_CACHE. However I agree this would be nice to have shorter names so I'm not against removing the prefix either. Since the feature is relatively new, I think we can still allow ourself to rename it","Awesome, \r\n\r\nLet's use then:\r\n\r\n- `HF_DATASETS_IN_MEMORY_MAX_SIZE` for the env var\r\n- `config.IN_MEMORY_MAX_SIZE` for config.\r\n\r\nand for now bytes will be documented as the only option and down the road add support for K\/M\/G.\r\n\r\n@albertvillanova, does that sound good to you?","Great!!! 
\ud83e\udd17 ","Did I miss a PR with this change?\r\n\r\nI want to make sure to add it to transformers tests to avoid the overheard of rebuilding the datasets.\r\n\r\nThank you!","@stas00 I'm taking on this now that I have finally finished the collaborative training experiment. Sorry for the delay.","Yes, of course! Thank you for taking care of it, @albertvillanova ","Actually, why is this feature on by default? \r\n\r\nUsers are very unlikely to understand what is going on or to know where to look. Should it at the very least emit a warning that this was done w\/o asking the user to do so and how to turn it off?\r\n\r\nIMHO, this feature should be enabled explicitly by those who want it and not be On by default. This is an optimization that benefits only select users and is a burden on the rest.\r\n\r\nIn my line of dev\/debug work (multiple short runs that have to be very fast) now I have to remember to disable this feature explicitly on every machine I work :(\r\n","Having the dataset in memory is nice to get the speed but I agree that the lack of caching for dataset in memory is an issue. By default we always had caching on.\r\nHere the issue is that in-memory datasets are still not able to use the cache - we should fix this asap IMO.\r\n\r\nHere is the PR that fixes this: https:\/\/github.com\/huggingface\/datasets\/pull\/2329","But why do they have to be datasets in memory in the first place? Why not just have the default that all datasets are normal and are cached which seems to be working solidly. And only enable in memory datasets explicitly if the user chooses to and then it doesn't matter if it's cached on not for the majority of the users who will not make this choice.\r\n\r\nI mean the definition of in-memory-datasets is very arbitrary - why 250MB and not 5GB? It's very likely that the user will want to set this threshold based on their RAM availability. So while doing that they can enable the in-memory-datasets. Unless I'm missing something here.\r\n\r\nThe intention here is that things work well in general out of the box, and further performance optimizations are available to those who know what they are doing.\r\n","This is just for speed improvements, especially for data exploration\/experiments in notebooks. Ideally it shouldn't have changed anything regarding caching behavior in the first place (i.e. have the caching enabled by default).\r\n\r\nThe 250MB limit has also been chosen to not create unexpected high memory usage on small laptops.","Won't it be more straight-forward to create a performance optimization doc and share all these optimizations there? That way the user will be in the knowing and will be able to get faster speeds if their RAM is large. \r\n\r\nIt is hard for me to tell the average size of a dataset an average user will have, but my gut feeling is that many NLP datasets are larger than 250MB. Please correct me if I'm wrong.\r\n\r\nBut at the same time what you're saying is that once https:\/\/github.com\/huggingface\/datasets\/pull\/2329 is completed and merged, the in-memory-datasets will be cached too. 
So if I wait long enough the whole issue will go away altogether, correct?"],"created_at":1622106420000,"updated_at":1623168055000,"closed_at":1622108021000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2409","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2409","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2409.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2409.patch"},"body":"As mentioned in https:\/\/github.com\/huggingface\/datasets\/pull\/2399 the env var should be prefixed by HF_","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2409\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2408","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2408\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2408\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2408\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2408","id":903422648,"node_id":"MDExOlB1bGxSZXF1ZXN0NjU0NjgxMzE4","number":2408,"title":"Fix head_qa keys","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1622105419000,"updated_at":1622106337000,"closed_at":1622106336000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2408","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2408","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2408.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2408.patch"},"body":"There were duplicate in the keys, as mentioned in #2382 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2408\/timeline","performed_via_github_app":null,"is_pull_request":true} 
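The #2399/#2409 discussion above settles on an env var named `HF_DATASETS_IN_MEMORY_MAX_SIZE`, mirrored by `config.IN_MEMORY_MAX_SIZE`, read at import time and expressed in bytes for now. As a rough illustration of that scheme (a minimal sketch assuming those names and the 250 MB default cited in the thread, not the actual `datasets.config` code):

```python
import os

# Sketch of an import-time, env-var-overridable default, following the naming
# agreed in the thread; the parsing below is an assumption, not the library's code.
_DEFAULT_IN_MEMORY_MAX_SIZE = 250 * 2**20  # 250 MB, the default mentioned in the discussion

IN_MEMORY_MAX_SIZE = int(
    os.environ.get("HF_DATASETS_IN_MEMORY_MAX_SIZE", _DEFAULT_IN_MEMORY_MAX_SIZE)
)
```

Under this scheme a value set in the environment wins over the built-in default, while assigning to the config attribute after import would win over both, which matches the precedence the maintainers describe in the thread.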
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2407","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2407\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2407\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2407\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2407","id":903111755,"node_id":"MDU6SXNzdWU5MDMxMTE3NTU=","number":2407,"title":".map() function got an unexpected keyword argument 'cache_file_name'","user":{"login":"cindyxinyiwang","id":7390482,"node_id":"MDQ6VXNlcjczOTA0ODI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7390482?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cindyxinyiwang","html_url":"https:\/\/github.com\/cindyxinyiwang","followers_url":"https:\/\/api.github.com\/users\/cindyxinyiwang\/followers","following_url":"https:\/\/api.github.com\/users\/cindyxinyiwang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cindyxinyiwang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cindyxinyiwang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cindyxinyiwang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cindyxinyiwang\/orgs","repos_url":"https:\/\/api.github.com\/users\/cindyxinyiwang\/repos","events_url":"https:\/\/api.github.com\/users\/cindyxinyiwang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cindyxinyiwang\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @cindyxinyiwang,\r\nDid you try adding `.arrow` after `cache_file_name` argument? Here I think they're expecting something like that only for a cache file:\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/e08362256fb157c0b3038437fc0d7a0bbb50de5c\/src\/datasets\/arrow_dataset.py#L1556-L1558","Hi ! `cache_file_name` is an argument of the `Dataset.map` method. Can you check that your `dataset` is indeed a `Dataset` object ?\r\n\r\nIf you loaded several splits, then it would actually be a `DatasetDict` (one dataset per split, in a dictionary).\r\nIn this case, since there are several datasets in the dict, the `DatasetDict.map` method requires a `cache_file_names` argument (with an 's'), so that you can provide one file name per split.","I think you are right. I used cache_file_names={data1: name1, data2: name2} and it works. Thank you!"],"created_at":1622080466000,"updated_at":1622123200000,"closed_at":1622123200000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\nI'm trying to save the result of datasets.map() to a specific file, so that I can easily share it among multiple computers without reprocessing the dataset. However, when I try to pass an argument 'cache_file_name' to the .map() function, it throws an error that \".map() function got an unexpected keyword argument 'cache_file_name'\". \r\n\r\nI believe I'm using the latest dataset 1.6.2. 
Also seems like the document and the actual code indicates there is an argument 'cache_file_name' for the .map() function.\r\n\r\nHere is the code I use\r\n## Steps to reproduce the bug\r\n```datasets = load_from_disk(dataset_path=my_path)\r\n\r\n[...]\r\n\r\ndef tokenize_function(examples):\r\n return tokenizer(examples[text_column_name])\r\n\r\nlogger.info(\"Mapping dataset to tokenized dataset.\")\r\ntokenized_datasets = datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n num_proc=preprocessing_num_workers,\r\n remove_columns=column_names,\r\n load_from_cache_file=True,\r\n cache_file_name=\"my_tokenized_file\"\r\n)\r\n```\r\n\r\n## Actual results\r\n tokenized_datasets = datasets.map(\r\nTypeError: map() got an unexpected keyword argument 'cache_file_name'\r\n\r\n## Environment info\r\n\r\n- `datasets` version:1.6.2\r\n- Platform:Linux-4.18.0-193.28.1.el8_2.x86_64-x86_64-with-glibc2.10\r\n- Python version:3.8.5\r\n- PyArrow version:3.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2407\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2406","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2406\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2406\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2406\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2406","id":902643844,"node_id":"MDU6SXNzdWU5MDI2NDM4NDQ=","number":2406,"title":"Add guide on using task templates to documentation","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or 
request"}],"state":"open","locked":false,"assignee":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1622046506000,"updated_at":1622046506000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"Once we have a stable API on the text classification and question answering task templates, add a guide on how to use them in the documentation.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2406\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2405","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2405\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2405\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2405\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2405","id":901227658,"node_id":"MDExOlB1bGxSZXF1ZXN0NjUyNzA2OTk1","number":2405,"title":"Add dataset 
tags","user":{"login":"OyvindTafjord","id":6453366,"node_id":"MDQ6VXNlcjY0NTMzNjY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6453366?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/OyvindTafjord","html_url":"https:\/\/github.com\/OyvindTafjord","followers_url":"https:\/\/api.github.com\/users\/OyvindTafjord\/followers","following_url":"https:\/\/api.github.com\/users\/OyvindTafjord\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/OyvindTafjord\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/OyvindTafjord\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/OyvindTafjord\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/OyvindTafjord\/orgs","repos_url":"https:\/\/api.github.com\/users\/OyvindTafjord\/repos","events_url":"https:\/\/api.github.com\/users\/OyvindTafjord\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/OyvindTafjord\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks!"],"created_at":1621969049000,"updated_at":1622048056000,"closed_at":1622047207000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2405","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2405","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2405.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2405.patch"},"body":"The dataset tags were provided by Peter Clark following the guide.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2405\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2404","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2404\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2404\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2404\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2404","id":901179832,"node_id":"MDExOlB1bGxSZXF1ZXN0NjUyNjYzOTcz","number":2404,"title":"Paperswithcode dataset mapping","user":{"login":"julien-c","id":326577,"node_id":"MDQ6VXNlcjMyNjU3Nw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/326577?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/julien-c","html_url":"https:\/\/github.com\/julien-c","followers_url":"https:\/\/api.github.com\/users\/julien-c\/followers","following_url":"https:\/\/api.github.com\/users\/julien-c\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/julien-c\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/julien-c\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/julien-c\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/julien-c\/orgs","repos_url":"https:\/\/api.github.com\/users\/julien-c\/repos","events_url":"https:\/\/api.github.com\/users\/julien-c\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/julien-c\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["messed up my 
branch, repushing","live mapping can be found at https:\/\/huggingface.co\/api\/pwc\/datasets-mapping and will be kept up to date going forward"],"created_at":1621966466000,"updated_at":1622028085000,"closed_at":1622027838000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2404","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2404","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2404.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2404.patch"},"body":"This is a continuation of https:\/\/github.com\/huggingface\/huggingface_hub\/pull\/43, encoded directly inside dataset cards.\r\n\r\nAs discussed:\r\n- `paperswithcode_id: null` when the dataset doesn't exist on paperswithcode's side.\r\n- I've added this new key at the end of the yaml instead of ordering all keys alphabetically as pyyaml's default. No strong opinion on that one though\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2404\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2403","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2403\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2403\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2403\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2403","id":900059014,"node_id":"MDExOlB1bGxSZXF1ZXN0NjUxNjcxMTMw","number":2403,"title":"Free datasets with cache file in temp dir on exit","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1621894511000,"updated_at":1622049919000,"closed_at":1622047169000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2403","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2403","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2403.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2403.patch"},"body":"This PR properly cleans up the memory-mapped tables that reference the cache files inside the temp dir.\r\nSince the built-in `_finalizer` of `TemporaryDirectory` can't be modified, this PR 
defines its own `TemporaryDirectory` class that accepts a custom clean-up function.\r\n\r\nFixes #2402","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2403\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2402","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2402\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2402\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2402\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2402","id":900025329,"node_id":"MDU6SXNzdWU5MDAwMjUzMjk=","number":2402,"title":"PermissionError on Windows when using temp dir for caching","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1621891379000,"updated_at":1622047169000,"closed_at":1622047169000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Currently, the following code raises a PermissionError on master if working on Windows:\r\n\r\n```python\r\n# run as a script or call exit() in REPL to initiate the temp dir cleanup\r\nfrom datasets import *\r\nd = load_dataset(\"sst\", split=\"train\", keep_in_memory=False)\r\nset_caching_enabled(False)\r\nd.map(lambda ex: ex)\r\n```\r\n\r\nError stack trace:\r\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Mario\\Anaconda3\\envs\\hf-datasets\\lib\\weakref.py\", line 624, in _exitfunc\r\n f()\r\n File \"C:\\Users\\Mario\\Anaconda3\\envs\\hf-datasets\\lib\\weakref.py\", line 548, in __call__\r\n return info.func(*info.args, **(info.kwargs or {}))\r\n File \"C:\\Users\\Mario\\Anaconda3\\envs\\hf-datasets\\lib\\tempfile.py\", line 799, in _cleanup\r\n _shutil.rmtree(name)\r\n File \"C:\\Users\\Mario\\Anaconda3\\envs\\hf-datasets\\lib\\shutil.py\", line 500, in rmtree\r\n return _rmtree_unsafe(path, onerror)\r\n File \"C:\\Users\\Mario\\Anaconda3\\envs\\hf-datasets\\lib\\shutil.py\", line 395, in _rmtree_unsafe\r\n onerror(os.unlink, fullname, sys.exc_info())\r\n File 
\"C:\\Users\\Mario\\Anaconda3\\envs\\hf-datasets\\lib\\shutil.py\", line 393, in _rmtree_unsafe\r\n os.unlink(fullname)\r\nPermissionError: [WinError 5] Access is denied: 'C:\\\\Users\\\\Mario\\\\AppData\\\\Local\\\\Temp\\\\tmp20epyhmq\\\\cache-87a87ffb5a956e68.arrow'\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2402\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2401","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2401\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2401\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2401\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2401","id":899910521,"node_id":"MDU6SXNzdWU4OTk5MTA1MjE=","number":2401,"title":"load_dataset('natural_questions') fails with \"ValueError: External features info don't match the dataset\"","user":{"login":"jonrbates","id":15602718,"node_id":"MDQ6VXNlcjE1NjAyNzE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15602718?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jonrbates","html_url":"https:\/\/github.com\/jonrbates","followers_url":"https:\/\/api.github.com\/users\/jonrbates\/followers","following_url":"https:\/\/api.github.com\/users\/jonrbates\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jonrbates\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jonrbates\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jonrbates\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jonrbates\/orgs","repos_url":"https:\/\/api.github.com\/users\/jonrbates\/repos","events_url":"https:\/\/api.github.com\/users\/jonrbates\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jonrbates\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["I faced the similar problem. Downgrading datasets to 1.5.0 fixed it.","Thanks for reporting, I'm looking into it","I just opened #2438 to fix this :)","Hi ! 
This has been fixed in the 1.8.0 release of `datasets`"],"created_at":1621881533000,"updated_at":1623229645000,"closed_at":1623229645000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nload_dataset('natural_questions') throws ValueError\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\ndatasets = load_dataset('natural_questions', split='validation[:10]')\r\n```\r\n\r\n## Expected results\r\nCall to load_dataset returns data.\r\n\r\n## Actual results\r\n```\r\nUsing custom data configuration default\r\nReusing dataset natural_questions (\/mnt\/d\/huggingface\/datasets\/natural_questions\/default\/0.0.2\/19bc04755018a3ad02ee74f7045cde4ba9b4162cb64450a87030ab786b123b76)\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n in \r\n----> 1 datasets = load_dataset('natural_questions', split='validation[:10]', cache_dir='\/mnt\/d\/huggingface\/datasets')\r\n\r\n~\/miniconda3\/lib\/python3.8\/site-packages\/datasets\/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs)\r\n 756 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)\r\n 757 )\r\n--> 758 ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)\r\n 759 if save_infos:\r\n 760 builder_instance._save_infos()\r\n\r\n~\/miniconda3\/lib\/python3.8\/site-packages\/datasets\/builder.py in as_dataset(self, split, run_post_process, ignore_verifications, in_memory)\r\n 735 \r\n 736 # Create a dataset for each of the given splits\r\n--> 737 datasets = utils.map_nested(\r\n 738 partial(\r\n 739 self._build_single_dataset,\r\n\r\n~\/miniconda3\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types)\r\n 193 # Singleton\r\n 194 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\r\n--> 195 return function(data_struct)\r\n 196 \r\n 197 disable_tqdm = bool(logger.getEffectiveLevel() > INFO)\r\n\r\n~\/miniconda3\/lib\/python3.8\/site-packages\/datasets\/builder.py in _build_single_dataset(self, split, run_post_process, ignore_verifications, in_memory)\r\n 762 \r\n 763 # Build base dataset\r\n--> 764 ds = self._as_dataset(\r\n 765 split=split,\r\n 766 in_memory=in_memory,\r\n\r\n~\/miniconda3\/lib\/python3.8\/site-packages\/datasets\/builder.py in _as_dataset(self, split, in_memory)\r\n 838 in_memory=in_memory,\r\n 839 )\r\n--> 840 return Dataset(**dataset_kwargs)\r\n 841 \r\n 842 def _post_process(self, dataset: Dataset, resources_paths: Dict[str, str]) -> Optional[Dataset]:\r\n\r\n~\/miniconda3\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py in __init__(self, arrow_table, info, split, indices_table, fingerprint)\r\n 271 assert self._fingerprint is not None, \"Fingerprint can't be None in a Dataset object\"\r\n 272 if self.info.features.type != inferred_features.type:\r\n--> 273 raise ValueError(\r\n 274 \"External features info don't match the dataset:\\nGot\\n{}\\nwith type\\n{}\\n\\nbut expected something like\\n{}\\nwith type\\n{}\".format(\r\n 275 self.info.features, self.info.features.type, inferred_features, inferred_features.type\r\n\r\nValueError: External features info 
don't match the dataset:\r\nGot\r\n{'id': Value(dtype='string', id=None), 'document': {'title': Value(dtype='string', id=None), 'url': Value(dtype='string', id=None), 'html': Value(dtype='string', id=None), 'tokens': Sequence(feature={'token': Value(dtype='string', id=None), 'is_html': Value(dtype='bool', id=None)}, length=-1, id=None)}, 'question': {'text': Value(dtype='string', id=None), 'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'annotations': Sequence(feature={'id': Value(dtype='string', id=None), 'long_answer': {'start_token': Value(dtype='int64', id=None), 'end_token': Value(dtype='int64', id=None), 'start_byte': Value(dtype='int64', id=None), 'end_byte': Value(dtype='int64', id=None)}, 'short_answers': Sequence(feature={'start_token': Value(dtype='int64', id=None), 'end_token': Value(dtype='int64', id=None), 'start_byte': Value(dtype='int64', id=None), 'end_byte': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None)}, length=-1, id=None), 'yes_no_answer': ClassLabel(num_classes=2, names=['NO', 'YES'], names_file=None, id=None)}, length=-1, id=None)}\r\nwith type\r\nstruct, long_answer: list>, short_answers: list, end_token: list, start_byte: list, start_token: list, text: list>>, yes_no_answer: list>, document: struct, token: list>>, id: string, question: struct>>\r\n\r\nbut expected something like\r\n{'id': Value(dtype='string', id=None), 'document': {'html': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'tokens': {'is_html': Sequence(feature=Value(dtype='bool', id=None), length=-1, id=None), 'token': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'url': Value(dtype='string', id=None)}, 'question': {'text': Value(dtype='string', id=None), 'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'annotations': {'id': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'long_answer': [{'end_byte': Value(dtype='int64', id=None), 'end_token': Value(dtype='int64', id=None), 'start_byte': Value(dtype='int64', id=None), 'start_token': Value(dtype='int64', id=None)}], 'short_answers': [{'end_byte': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'end_token': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'start_byte': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'start_token': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'text': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}], 'yes_no_answer': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)}}\r\nwith type\r\nstruct, long_answer: list>, short_answers: list, end_token: list, start_byte: list, start_token: list, text: list>>, yes_no_answer: list>, document: struct, token: list>, url: string>, id: string, question: struct>>\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.6.2\r\n- Platform: Linux-5.4.72-microsoft-standard-WSL2-x86_64-with-glibc2.10\r\n- Python version: 3.8.3\r\n- PyTorch version (GPU?): 1.6.0 (False)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Using GPU in script?: No\r\n- Using distributed or parallel set-up in script?: No\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2401\/timeline","performed_via_github_app":null,"is_pull_request":false} 
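Issue #2401 above ends with the maintainers pointing to the 1.8.0 release as the fix for the natural_questions "External features info don't match the dataset" error, with a downgrade to 1.5.0 reported as an interim workaround. A small sketch of that advice as a version guard, purely illustrative and not part of the library:

```python
from packaging import version

import datasets
from datasets import load_dataset

# Encode the thread's advice as a guard: the mismatch was reported on 1.6.2,
# fixed in the 1.8.0 release, with 1.5.0 reported as a temporary workaround.
if version.parse(datasets.__version__) < version.parse("1.8.0"):
    raise RuntimeError(
        f"datasets {datasets.__version__} may raise the features mismatch for "
        "natural_questions; upgrade to >=1.8.0 (or pin 1.5.0 as a stopgap)."
    )

nq_val = load_dataset("natural_questions", split="validation[:10]")
```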
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2400","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2400\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2400\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2400\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2400","id":899867212,"node_id":"MDU6SXNzdWU4OTk4NjcyMTI=","number":2400,"title":"Concatenate several datasets with removed columns is not working.","user":{"login":"philschmid","id":32632186,"node_id":"MDQ6VXNlcjMyNjMyMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32632186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/philschmid","html_url":"https:\/\/github.com\/philschmid","followers_url":"https:\/\/api.github.com\/users\/philschmid\/followers","following_url":"https:\/\/api.github.com\/users\/philschmid\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/philschmid\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/philschmid\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/philschmid\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/philschmid\/orgs","repos_url":"https:\/\/api.github.com\/users\/philschmid\/repos","events_url":"https:\/\/api.github.com\/users\/philschmid\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/philschmid\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi,\r\n\r\ndid you fill out the env info section manually or by copy-pasting the output of the `datasets-cli env` command?\r\n\r\nThis code should work without issues on 1.6.2 version (I'm working on master (1.6.2.dev0 version) and can't reproduce this error).","@mariosasko you are right I was still on `1.5.0`. 
"],"created_at":1621878015000,"updated_at":1621921921000,"closed_at":1621921919000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\nYou can't concatenate datasets when you removed columns before.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset, concatenate_datasets\r\n\r\nwikiann= load_dataset(\"wikiann\",\"en\")\r\n\r\nwikiann[\"train\"] = wikiann[\"train\"].remove_columns([\"langs\",\"spans\"])\r\nwikiann[\"test\"] = wikiann[\"test\"].remove_columns([\"langs\",\"spans\"])\r\n\r\nassert wikiann[\"train\"].features.type == wikiann[\"test\"].features.type\r\n\r\nconcate = concatenate_datasets([wikiann[\"train\"],wikiann[\"test\"]])\r\n```\r\n\r\n## Expected results\r\nMerged dataset \r\n\r\n\r\n## Actual results\r\n```python\r\nValueError: External features info don't match the dataset:\r\nGot\r\n{'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'ner_tags': Sequence(feature=ClassLabel(num_classes=7, names=['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC'], names_file=None, id=None), length=-1, id=None), 'langs': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'spans': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}\r\nwith type\r\nstruct, ner_tags: list, spans: list, tokens: list>\r\n\r\nbut expected something like\r\n{'ner_tags': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}\r\nwith type\r\nstruct, tokens: list>\r\n```\r\n## Environment info\r\n\r\n- `datasets` version: ~1.6.2~ 1.5.0\r\n- Platform: macos\r\n- Python version: 3.8.5\r\n- PyArrow version: 3.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2400\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2399","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2399\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2399\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2399\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2399","id":899853610,"node_id":"MDExOlB1bGxSZXF1ZXN0NjUxNDk0OTc2","number":2399,"title":"Add env variable for 
MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thank you for clarifying the precedence, @albertvillanova \r\n\r\nIsn't it typically the case where env vars have the highest precedence? \r\n\r\nIn my understanding the point of env vars is to be able to override software w\/o needing to touch the code. \r\n\r\nPlease correct me if this is not so in the general case.","Hi @stas00, \r\n\r\nWell, I'm not an expert on this topic, but the precedence hierarchy I have normally found is from higher to lower:\r\n- command line parameters\r\n- env vars\r\n- config files\r\nSo yes, normally env vars have precedence over configuration files.\r\n\r\nAnyway, for Datasets, there are no configuration files. The _in-memory_ config is set from default values or env vars (which have precedence over default values). But this is done at import.\r\n\r\nHowever, once the library is imported, the user can modify the in-memory config, and this will have precedence over the rest of mechanisms (which take place only at import).","In my limited experience env vars are typically above cmd line args, so that one can override scripts with cmd lines using env vars, but usually one then uses env vars inside cmd line args, so it's loud and clear.\r\n\r\nFor example specifying a specific gpu number on a command line will depend on `CUDA_VISIBLE_DEVICES` so gpu0 will be different if someone sets `CUDA_VISIBLE_DEVICES=2,3` vs `CUDA_VISIBLE_DEVICES=0,1`.\r\n\r\n> However, once the library is imported, the user can modify the in-memory config, and this will have precedence over the rest of mechanisms (which take place only at import).\r\n\r\nAnd this is exactly the problem we are trying to solve here. For a good reason HF examples don't want to use `keep_in_memory=False`, and they may choose to now set `datasets.config.MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES` and which means we again can't override it via env var.\r\n\r\n","oops, sorry, didn't think earlier - do we need to prefix this with `HF_DATASETS` or `HF_` like all the other env vars? 
or is it long enough already to be unique - it's just not telling the user in the config file what projet this variable is for...","You're right, I just opened https:\/\/github.com\/huggingface\/datasets\/pull\/2409"],"created_at":1621876755000,"updated_at":1622106435000,"closed_at":1622045274000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2399","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2399","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2399.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2399.patch"},"body":"Add env variable for `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES`.\r\n\r\nThis will allow to turn off default behavior: loading in memory (and not caching) small datasets.\r\n\r\nFix #2387.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2399\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2398","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2398\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2398\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2398\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2398","id":899511837,"node_id":"MDU6SXNzdWU4OTk1MTE4Mzc=","number":2398,"title":"News_commentary Dataset Translation Pairs are of Incorrect Language Specified Pairs","user":{"login":"anassalamah","id":8571003,"node_id":"MDQ6VXNlcjg1NzEwMDM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8571003?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/anassalamah","html_url":"https:\/\/github.com\/anassalamah","followers_url":"https:\/\/api.github.com\/users\/anassalamah\/followers","following_url":"https:\/\/api.github.com\/users\/anassalamah\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/anassalamah\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/anassalamah\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/anassalamah\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/anassalamah\/orgs","repos_url":"https:\/\/api.github.com\/users\/anassalamah\/repos","events_url":"https:\/\/api.github.com\/users\/anassalamah\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/anassalamah\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1621850614000,"updated_at":1621850614000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I used load_dataset to load the news_commentary dataset for \"ar-en\" translation pairs but found translations from Arabic to Hindi. 
\r\n\r\n```\r\ntrain_ds = load_dataset(\"news_commentary\", \"ar-en\", split='train[:98%]')\r\nval_ds = load_dataset(\"news_commentary\", \"ar-en\", split='train[98%:]')\r\n\r\n# filtering out examples that are not ar-en translations but ar-hi\r\nval_ds = val_ds.filter(lambda example, indice: indice not in chain(range(1312,1327) ,range(1384,1399), range(1030,1042)), with_indices=True)\r\n```\r\n\r\n* I'm fairly new to using datasets so I might be doing something wrong","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2398\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2397","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2397\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2397\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2397\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2397","id":899427378,"node_id":"MDExOlB1bGxSZXF1ZXN0NjUxMTMxMTY0","number":2397,"title":"Fix number of classes in indic_glue sna.bn dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq there are many things missing in the README.md file, but this correction is right despite not passing the validation tests...","Yes indeed. 
We run the validation in all modified readme because we think that it is the time when contributors are the most likely to fix a dataset card - or it will never be done"],"created_at":1621844335000,"updated_at":1621960336000,"closed_at":1621960336000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2397","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2397","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2397.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2397.patch"},"body":"As read in the [paper](https:\/\/www.aclweb.org\/anthology\/2020.findings-emnlp.445.pdf), Table 11.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2397\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2396","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2396\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2396\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2396\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2396","id":899016308,"node_id":"MDU6SXNzdWU4OTkwMTYzMDg=","number":2396,"title":"strange datasets from OSCAR corpus","user":{"login":"jerryIsHere","id":50871412,"node_id":"MDQ6VXNlcjUwODcxNDEy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/50871412?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jerryIsHere","html_url":"https:\/\/github.com\/jerryIsHere","followers_url":"https:\/\/api.github.com\/users\/jerryIsHere\/followers","following_url":"https:\/\/api.github.com\/users\/jerryIsHere\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jerryIsHere\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jerryIsHere\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jerryIsHere\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jerryIsHere\/orgs","repos_url":"https:\/\/api.github.com\/users\/jerryIsHere\/repos","events_url":"https:\/\/api.github.com\/users\/jerryIsHere\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jerryIsHere\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Thanks for reporting\r\ncc @pjox is this an issue from the data ?\r\n\r\nAnyway we should at least mention that OSCAR could contain such contents in the dataset card, you're totally right @jerryIsHere ","Hi @jerryIsHere , sorry for the late response! Sadly this is normal, the problem comes form fasttext's classifier which we used to create the original corpus. In general the classifier is not really capable of properly recognizing Yue Chineese so the file ends un being just noise from Common Crawl. Some of these problems with OSCAR were already discussed [here](https:\/\/arxiv.org\/pdf\/2103.12028.pdf) but we are working on explicitly documenting the problems by language on our website. 
In fact, could please you open an issue on [our repo](https:\/\/github.com\/oscar-corpus\/oscar-website\/issues) as well so that we can track it?"],"created_at":1621775162000,"updated_at":1623938077000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"![image](https:\/\/user-images.githubusercontent.com\/50871412\/119260850-4f876b80-bc07-11eb-8894-124302600643.png)\r\n![image](https:\/\/user-images.githubusercontent.com\/50871412\/119260875-675eef80-bc07-11eb-9da4-ee27567054ac.png)\r\nFrom the [official site ](https:\/\/oscar-corpus.com\/), the Yue Chinese dataset should have 2.2KB data.\r\n7 training instances is obviously not a right number.\r\nAs I can read Yue Chinese, I call tell the last instance is definitely not something that would appear on Common Crawl.\r\nAnd even if you don't read Yue Chinese, you can tell the first six instance are problematic.\r\n(It is embarrassing, as the 7 training instances look exactly like something from a pornographic novel or flitting messages in a chat of a dating app)\r\nIt might not be the problem of the huggingface\/datasets implementation, because when I tried to download the dataset from the official site, I found out that the zip file is corrupted.\r\nI will try to inform the host of OSCAR corpus later.\r\nAwy a remake about this dataset in huggingface\/datasets is needed, perhaps after the host of the dataset fixes the issue.\r\n\r\n> Hi @jerryIsHere , sorry for the late response! Sadly this is normal, the problem comes form fasttext's classifier which we used to create the original corpus. In general the classifier is not really capable of properly recognizing Yue Chineese so the file ends un being just noise from Common Crawl. Some of these problems with OSCAR were already discussed [here](https:\/\/arxiv.org\/pdf\/2103.12028.pdf) but we are working on explicitly documenting the problems by language on our website. 
In fact, could please you open an issue on [our repo](https:\/\/github.com\/oscar-corpus\/oscar-website\/issues) as well so that we can track it?\r\n\r\nThanks a lot, the new post is here:\r\nhttps:\/\/github.com\/oscar-corpus\/oscar-website\/issues\/11","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2396\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2395","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2395\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2395\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2395\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2395","id":898762730,"node_id":"MDExOlB1bGxSZXF1ZXN0NjUwNTk3NjI0","number":2395,"title":"`pretty_name` for dataset in YAML tags","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Initially I removed the ` - ` since there was only one `pretty_name` per config but turns out it was breaking here in `from_yaml_string`https:\/\/github.com\/huggingface\/datasets\/blob\/74751e3f98c74d22c48c6beb1fab0c13b5dfd075\/src\/datasets\/utils\/metadata.py#L197 in `\/utils\/metadata.py`","@lhoestq I guess this will also need some validation?","Looks like the parser doesn't allow things like\r\n```\r\npretty_name:\r\n config_name1: My awesome config number 1\r\n config_name2: My amazing config number 2\r\n```\r\ntherefore you had to use `-` and consider them as a list.\r\n\r\nI would be nice to add support for this case in the validator.\r\n\r\nThere's one thing though: the DatasetMetadata object currently corresponds to the yaml tags that are flattened: the config names are just ignored, and the lists are concatenated.\r\n\r\nTherefore I think we would potentially need to instantiate several `DatasetMetadata` objects: one per config. 
Otherwise we would end up with a list of several pretty_name while we actually need at most 1 per config.\r\n\r\nWhat do you think @gchhablani ?","I was thinking of returning `metada_dict` (on line 193) whenever `load_dataset_card` is called (we can pass an extra parameter to `from_readme` or `from_yaml_string` for that to happen).\r\n\r\nOne just needs config_name as key for the dictionary inside `pretty_name` dict and for single config, there would be only one value to print. We can do this for other fields as well like `size_categories`, `languages` etc. This will obviate the need to flatten the YAML tags so that don't have to instantiate several DatasetMetadata objects. What do you guys think @lhoestq @gchhablani? \r\n\r\nUpdate: I was thinking of returning the whole dictionary before flattening so that user can access whatever they want with specific configs. Let's say [this](https:\/\/pastebin.com\/eJ84314f) is my `metadata_dict` before flattening (the loaded YAML string), so instead of validating it and then returning the items individually we can return it just after loading the YAML string.","Hi @lhoestq @bhavitvyamalik \r\n\r\n@bhavitvyamalik, I'm not sure I understand your approach, can you please elaborate? The `metadata_dict` is flattened before instantiating the object, do you want to remove that? Still confused.\r\n\r\nFew things come to my mind after going through this PR. They might not be entirely relevant to the current task, but I'm just trying to think about possible cases and discuss them here.\r\n\r\n1. Instead of instantiating a new `DatasetMetadata` for each config with flattened tags, why can't we make it more flexible and validate only non-dict items? However, in that case, the types wouldn't be as strict for the class attributes. It would also not work for cases that are like `Dict[str,List[Dict[str,str]]`, but I guess that won't be needed anyway in the foreseeable future?\r\n\r\n Ideally, it would be something like - Check the metadata tag type (root), do a DFS, and find the non-dict objects (leaves), and validate them. Is this an overkill to handle the problem?\r\n2. For single-config datasets, there can be slightly different validation for `pretty_names`, than for multi-config. The user shouldn't need to provide a config name for single-config datasets, wdyt @bhavitvyamalik @lhoestq? Either way, for multi-config, the validation can use the dictionary keys in the path to that leaf node to verify `pretty_names: ... (config)` as well. This will check whether the config name is same as the key (might be unnecessary but prevents typos, so less work for the reviewer(s)). For future, however, it might be beneficial to have something like this.\r\n3. Should we have a default config name for single-config datasets? People use any string they feel like. I've seen `plain_text`, `default` and the dataset name. I've used `image` for a few datasets myself AFAIR. For smarter validation (again, a future case ;-;), it'd be easier for us to have a simple rule for naming configs in single-config datasets. Wdyt @lhoestq?","Btw, `pretty_names` can also be used to handle this during validation :P \r\n\r\n```\r\n-# Dataset Card for [Dataset Name]\r\n+# Dataset Card for Allegro Reviews\r\n```\r\n\r\nThis is where `DatasetMetadata` and `ReadMe` should be combined. 
But there are very few overlaps, I guess.\r\n\r\n\n@bhavitvyamalik @lhoestq What about adding a pretty name across all configs, and then config-specific names?\n\nLike\n\n```yaml\npretty_names:\n all_configs: X (dataset_name)\n config_1: X1 (config_1_name)\n config_2: X2 (config_2_name)\n```\nThen, using the `metadata_dict`, the ReadMe header can be validated against `X`.\n\nSorry if I'm throwing too many ideas at once.","@bhavitvyamalik\r\n\r\nNow, I think I better understand what you're saying. So you want to skip validation for the unflattened metadata and just return it? And let the validation run for the flattened version?","Exactly! Validation is important but once the YAML tags are validated I feel we shouldn't do that again while calling `load_dataset_card`. +1 for default config name for single-config datasets.","@bhavitvyamalik\r\nActually, I made the `ReadMe` validation similar to `DatasetMetadata` validation and the class was validating the metadata during the creation. \r\n\r\nMaybe we need to have a separate validation method instead of having it in `__post_init__`? Wdyt @lhoestq? \r\n\r\nI'm sensing too many things to look into. It'd be great to discuss these sometime. \r\n\r\nBut if this PR is urgent then @bhavitvyamalik's logic seems good to me. It doesn't need major modifications in validation.","> Maybe we need to have a separate validation method instead of having it in __post_init__? Wdyt @lhoestq?\r\n\r\nWe can definitely have a `is_valid()` method instead of doing it in the post init.\r\n\r\n> What about adding a pretty name across all configs, and then config-specific names?\r\n\r\nLet's keep things simple to starts with. If we can allow both single-config and multi-config cases it would already be great :)\r\n\r\nfor single-config:\r\n```yaml\r\npretty_name: Allegro Reviews\r\n```\r\n\r\nfor multi-config:\r\n```yaml\r\npretty_name:\r\n mrpc: Microsoft Research Paraphrase Corpus (MRPC)\r\n sst2: Stanford Sentiment Treebank\r\n ...\r\n```\r\n\r\nTo support the multi-config case I see two options:\r\n1. Don't allow DatasetMetadata to have dictionaries but instead have separate DatasetMetadata objects per config\r\n2. allow DatasetMetadata to have dictionaries. It implies to remove the flattening step. Then we could get metadata for a specific config this way for example:\r\n```python\r\nfrom datasets import load_dataset_card\r\n\r\nglue_dataset_card = load_dataset_card(\"glue\")\r\nprint(glue_dataset_card.metadata)\r\n# DatasetMetatada object with dictionaries since there are many configs\r\nprint(glue_dataset_card.metadata.get_metadata_for_config(\"mrpc\"))\r\n# DatasetMetatada object with no dictionaries since there are only the mrpc tags\r\n```\r\n\r\nLet me know what you think or if you have other ideas.","I think Option 2 is better.\n\nJust to clarify, will `get_metadata_for_config` also return common details, like language, say?","> Just to clarify, will get_metadata_for_config also return common details, like language, say?\r\n\r\nYes that would be more convenient IMO. 
For example a dataset card like this\r\n```yaml\r\nlanguages:\r\n- en\r\npretty_name:\r\n config1: Pretty Name for Config 1\r\n config3: Pretty Name for Config 2\r\n```\r\n\r\nthen `metadat.get_metadata_for_config(\"config1\")` would return something like\r\n```python\r\nDatasetMetadata(languages=[\"en\"], pretty_name=\"Pretty Name for Config 1\")\r\n```","@lhoestq, should we do this post-processing in `load_dataset_card` by returning unflattened dictionary from `DatasetMetadata` or send this from `DatasetMetadata`? Since there isn't much to do I feel once we have the unflattened dictionary","Not sure I understand the difference @bhavitvyamalik , could you elaborate please ?","I was talking about this unflattened dictionary:\r\n\r\n> I was thinking of returning the whole dictionary before flattening so that user can access whatever they want with specific configs. Let's say [this](https:\/\/pastebin.com\/eJ84314f) is my metadata_dict before flattening (the loaded YAML string), so instead of validating it and then returning the items individually we can return it just after loading the YAML string.\r\n\r\nPost-processing meant extracting config-specific fields from this dictionary and then return this `languages=[\"en\"], pretty_name=\"Pretty Name for Config 1\"`","I still don't understand what you mean by \"returning unflattened dictionary from DatasetMetadata or send this from DatasetMetadata\", sorry. Can you give an example or rephrase this ?\r\n\r\nIMO load_dataset_card can return a dataset card object with a metadata field. If the metadata isn't flat (i.e. it has several configs), you can get the flat metadata of 1 specific config with `get_metadata_for_config`. But of course if you have better ideas or suggestions, we can discuss this","@lhoestq, I think he is saying whatever `get_metadata_for_config` is doing can be done in `load_dataset_card` by taking the unflattened `metadata_dict` as input.\r\n\r\n@bhavitvyamalik, I think it'd be better to have this \"post-processing\" in `DatasetMetadata` instead of `load_dataset_card`, as @lhoestq has shown. I'll quickly get on that.\r\n\r\n---\r\nThree things that are to be changed in `DatasetMetadata`:\r\n1. Allow Non-flat elements and their validation.\r\n2. Create a method to get metadata by config name.\r\n3. Create a `validate()` method.\r\n\r\nOnce that is done, this PR can be updated and reviewed, wdys?","Thanks @gchhablani for the help ! Now that https:\/\/github.com\/huggingface\/datasets\/pull\/2436 is merged you can remove the `-` in the pretty_name @bhavitvyamalik :)"],"created_at":1621675485000,"updated_at":1624544051000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2395","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2395","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2395.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2395.patch"},"body":"I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10, please let me know if they're looking good.\r\n\r\nIf dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset` as config names were `plain_text`, `default`, `squad` etc (not so important in this case) whereas when dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let user know about the `config` here. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2395\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2392","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2392\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2392\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2392\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2392","id":898156795,"node_id":"MDExOlB1bGxSZXF1ZXN0NjUwMDYxOTE3","number":2392,"title":"Update text classification template labels in DatasetInfo __post_init__","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["If I'm not mistaken, one way to fix this would be to drop the task templates when copying the info by inserting `dataset.info.task_templates = None` before the `Dataset.cast` call in `Dataset.prepare_for_task`. Moreover, we should do this change independently of the KeyError being raised because currently the following is possible:\r\n```python\r\ndset = load_dataset(\"some_dataset\") # let's say 'some_dataset' supports text classification and question answering\r\ndset_tc = dset.prepare_for_task(\"text-classification\")\r\ndset_tc.preprare_for_task(\"question-answering\") # this should raise an error because the schema is no longer valid for this task; currently this fails on 'rename_columns'\r\n```\r\nI see 2 options:\r\n1. to drop the task templates after the first `Dataset.prepare_for_task` call\r\n2. to save only the tasks compatible with the new schema after Dataset.prepare_for_task` (but then we have to update the column names of the compatible tasks to make sure the column mapping is still valid) ","> If I'm not mistaken, one way to fix this would be to drop the task templates when copying the info by inserting `dataset.info.task_templates = None` before the `Dataset.cast` call in `Dataset.prepare_for_task`. 
Moreover, we should do this change independently of the KeyError being raised because currently the following is possible:\r\n> \r\n> ```python\r\n> dset = load_dataset(\"some_dataset\") # let's say 'some_dataset' supports text classification and question answering\r\n> dset_tc = dset.prepare_for_task(\"text-classification\")\r\n> dset_tc.preprare_for_task(\"question-answering\") # this should raise an error because the schema is no longer valid for this task; currently this fails on 'rename_columns'\r\n> ```\r\n> \r\n> I see 2 options:\r\n> \r\n> 1. to drop the task templates after the first `Dataset.prepare_for_task` call\r\n> 2. to save only the tasks compatible with the new schema after Dataset.prepare_for_task` (but then we have to update the column names of the compatible tasks to make sure the column mapping is still valid)\r\n\r\nthanks for the great idea @mariosasko and for spotting the problem with sequential task preparation! i am in favour of your option (1) since it is simple and saves us from having to keep track of the column mappings across multiple steps. \r\n\r\ni've implemented the change and refactored the tests to account for the new approach (including a new test that the templates are flushed after we call `prepare_for_task`). perhaps the slightly inelegant aspect here is that if we want to allow the user to set `labels` in the `TextClassification` template, then we have two places (`DatasetInfo.__post_init__` and `TextClassification.__post_init__`) where we need to update `label_schema`. \r\n\r\non the other hand, dropping `labels` from the `TextClassification` signature would have the nice effect that users only have to think about column names when defining their tasks.\r\n\r\nin any case, i think it would be a good idea to merge #2376 soon as the current PR is touching a lot of the same places in the codebase \ud83d\ude04 \r\n","cc @SBrandeis who might also be interested in this feature :)","Tests are failing only because the `emotion` dataset card doesn't pass our dataset card validator (tags are missing), you can ignore this since it's unrelated to this PR.","@lhoestq @SBrandeis i've fixed the tests and think this is now in a good state for another review :)","Maybe @SBrandeis you can also take a look to make sure you're fine with it ?"],"created_at":1621610981000,"updated_at":1622201855000,"closed_at":1622201852000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2392","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2392","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2392.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2392.patch"},"body":"This PR implements the idea discussed in #2389 to update the `labels` of the `TextClassification` template in the `DatasetInfo.__post_init__`. 
The main reason for doing so is so avoid duplicating the label definitions in both `DatasetInfo.features` and `DatasetInfo.task_templates`.\r\n\r\nTo avoid storing state in `DatasetInfo.__post_init__`, the current implementation flushes `DatasetInfo.task_templates` before the features are cast in `Dataset.prepare_for_task` (thanks to @mariosasko for this idea!).\r\n\r\nHere is an example of the current workflow:\r\n\r\n```python\r\nds1 = load_dataset(\".\/datasets\/emotion\/\")\r\n# cast features and flush templates\r\nds2 = ds1.prepare_for_task(\"text-classification\")\r\nassert ds2.info.task_templates is None\r\n```\r\n\r\nNote that if users want to pass a `TextClassification` template to `prepare_for_task`, we require them to set `TextClassification.labels` to match the dataset's features corresponding to `label_column`:\r\n\r\n```python\r\nds1 = load_dataset(\".\/datasets\/emotion\/\")\r\n# TextClassification.labels is None by default => invalid template\r\ntask = TextClassification(text_column=\"text\", label_column=\"label\")\r\n# Raises ValueError\r\nds1.prepare_for_task(task)\r\n# Specifying the labels => valid template\r\ntask = TextClassification(text_column=\"text\", label_column=\"label\", labels=['anger', 'fear', 'joy', 'love', 'sadness', 'surprise'])\r\nds1.prepare_for_task(task)\r\n```\r\n\r\nThis PR also adds:\r\n\r\n* New tests + fixed some old tests that weren't testing `assertRaises` properly\r\n* A decorator to share docstrings across common functions. This allows us to document `DatasetDict.prepare_for_task` and `Dataset.prepare_for_task` in one place.\r\n* Fixes to avoid side-effects from in-place replacements of `DatasetInfo.task_templates` in `DatasetInfo.__post_init__`. Thanks to @lhoestq for figuring this out!\r\n* Removal of `FeaturesWithLazyClassLabel` since we now create a new instance of `TextClassification` in `DatasetInfo.__post_init__` and avoid the side-effects first pointed out by @mariosasko \r\n\r\n### PR Description from original WIP \r\n\r\nHi @yjernite and @lhoestq, here's a first stab at the suggestion discussed in #2389 to update the `labels` of the `TextClassification` template in the `DatasetInfo.__post_init__`.\r\n\r\nOne problem I've spotted is that my current implementation introduces state into the `__post_init__`: \r\n\r\n* When we call `load_dataset`, `DatasetInfo.features` are the \"raw\" features without any casting so we can access the column names by the `label_column` specified in `TextClassification`\r\n* When we call `Dataset.prepare_for_task` we run into a problem because the `DatasetInfo.features` are first cast into the new schema which triggers a `KeyError` when we update the infos [here](https:\/\/github.com\/huggingface\/datasets\/blob\/8b2a78520828e0cc13c14a31f413a5395ef25110\/src\/datasets\/arrow_dataset.py#L1959).\r\n\r\nHere's an explicit example of what I mean with the stack trace appended below:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# this works \r\nds = load_dataset(\"emotion\")\r\n# we can verify the task template is correctly set\r\nds[\"train\"].info.task_templates # returns [TextClassification(labels=('sadness', 'joy', 'love', 'anger', 'fear', 'surprise'), text_column='text', label_column='label')]\r\n# but this fails because the _post_init__ is looking for the original column names\r\nds.prepare_for_task(\"text-classification\")\r\n```\r\n```\r\n---------------------------------------------------------------------------\r\nKeyError Traceback (most recent call last)\r\n in \r\n----> 1 
ds.prepare_for_task(\"text-classification\")\r\n\r\n~\/git\/datasets\/src\/datasets\/dataset_dict.py in prepare_for_task(self, task)\r\n 807 \"\"\"\r\n 808 self._check_values_type()\r\n--> 809 return DatasetDict({k: dataset.prepare_for_task(task=task) for k, dataset in self.items()})\r\n\r\n~\/git\/datasets\/src\/datasets\/dataset_dict.py in (.0)\r\n 807 \"\"\"\r\n 808 self._check_values_type()\r\n--> 809 return DatasetDict({k: dataset.prepare_for_task(task=task) for k, dataset in self.items()})\r\n\r\n~\/git\/datasets\/src\/datasets\/arrow_dataset.py in prepare_for_task(self, task)\r\n 1421 dataset = self.remove_columns(columns_to_drop)\r\n 1422 dataset = dataset.rename_columns(column_mapping)\r\n-> 1423 dataset = dataset.cast(features=template.features)\r\n 1424 return dataset\r\n 1425 \r\n\r\n~\/git\/datasets\/src\/datasets\/arrow_dataset.py in cast(self, features, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, num_proc)\r\n 970 format = self.format\r\n 971 dataset = self.with_format(\"arrow\")\r\n--> 972 dataset = dataset.map(\r\n 973 lambda t: t.cast(schema),\r\n 974 batched=True,\r\n\r\n~\/git\/datasets\/src\/datasets\/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)\r\n 1583 \r\n 1584 if num_proc is None or num_proc == 1:\r\n-> 1585 return self._map_single(\r\n 1586 function=function,\r\n 1587 with_indices=with_indices,\r\n\r\n~\/git\/datasets\/src\/datasets\/arrow_dataset.py in wrapper(*args, **kwargs)\r\n 173 }\r\n 174 # apply actual function\r\n--> 175 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n 176 datasets: List[\"Dataset\"] = list(out.values()) if isinstance(out, dict) else [out]\r\n 177 # re-apply format to the output\r\n\r\n~\/git\/datasets\/src\/datasets\/fingerprint.py in wrapper(*args, **kwargs)\r\n 338 # Call actual function\r\n 339 \r\n--> 340 out = func(self, *args, **kwargs)\r\n 341 \r\n 342 # Update fingerprint of in-place transforms + update in-place history of transforms\r\n\r\n~\/git\/datasets\/src\/datasets\/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset)\r\n 1959 if update_data:\r\n 1960 # Create new Dataset from buffer or file\r\n-> 1961 info = self.info.copy()\r\n 1962 info.features = writer._features\r\n 1963 if buf_writer is None:\r\n\r\n~\/git\/datasets\/src\/datasets\/info.py in copy(self)\r\n 274 \r\n 275 def copy(self) -> \"DatasetInfo\":\r\n--> 276 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})\r\n 277 \r\n 278 \r\n\r\n~\/git\/datasets\/src\/datasets\/info.py in __init__(self, description, citation, homepage, license, features, post_processed, supervised_keys, task_templates, builder_name, config_name, version, splits, download_checksums, download_size, post_processing_size, dataset_size, size_in_bytes)\r\n\r\n~\/git\/datasets\/src\/datasets\/info.py in __post_init__(self)\r\n 174 # The reason is that Dataset.prepare_for_task calls Dataset.cast which converts the\r\n 175 # DatasetInfo.features to the new schema and thus template.label_column is no longer a valid key\r\n--> 176 
object.__setattr__(template, \"labels\", tuple(self.features[template.label_column].names))\r\n 177 template.label_schema[\"labels\"] = ClassLabel(names=template.labels)\r\n 178 self.task_templates[idx] = template\r\n\r\nKeyError: 'label'\r\n```\r\n\r\nWhat do you think? I did this a bit quickly, so maybe I'm overlooking something obvious :) One thing would be to only update the labels of the task template on load, but this seems a bit hacky IMO","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2392\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2391","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2391\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2391\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2391\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2391","id":898128099,"node_id":"MDU6SXNzdWU4OTgxMjgwOTk=","number":2391,"title":"Missing original answers in kilt-TriviaQA","user":{"login":"PaulLerner","id":25532159,"node_id":"MDQ6VXNlcjI1NTMyMTU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25532159?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/PaulLerner","html_url":"https:\/\/github.com\/PaulLerner","followers_url":"https:\/\/api.github.com\/users\/PaulLerner\/followers","following_url":"https:\/\/api.github.com\/users\/PaulLerner\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/PaulLerner\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/PaulLerner\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/PaulLerner\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/PaulLerner\/orgs","repos_url":"https:\/\/api.github.com\/users\/PaulLerner\/repos","events_url":"https:\/\/api.github.com\/users\/PaulLerner\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/PaulLerner\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["That could be useful indeed! Feel free to open a PR on the dataset card if you already have some code that runs, otherwise we'll take care of it soon :) ","I can open a PR but there is 2 details to fix:\r\n- the name for the corresponding key (e.g. `original_answer`)\r\n- how to implement it: I\u2019m not sure what happens when you map `lambda x: {'input': ...}`\u00a0as it keeps the other keys (e.g. `output`) intact but here since we want to set a nested value (e.g. 
`x['output']['original_answer']`) I implemented it with a regular function (not lambda), see below\r\n\r\n```py\r\ndef add_original_answer(x, trivia_qa, triviaqa_map):\r\n i = triviaqa_map[x['id']]\r\n x['output']['original_answer'] = trivia_qa['validation'][i]['answer']['value']\r\n return x\r\n```"],"created_at":1621609027000,"updated_at":1623691751000,"closed_at":1623691751000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I previously opened an issue at https:\/\/github.com\/facebookresearch\/KILT\/issues\/42 but from the answer of @fabiopetroni it seems that the problem comes from HF-datasets\r\n\r\n## Describe the bug\r\nThe `answer` field in kilt-TriviaQA, e.g. `kilt_tasks['train_triviaqa'][0]['output']['answer']` contains a list of alternative answer which are accepted for the question. \r\nHowever it'd be nice to know the original answer to the question (the only fields in `output` are `'answer', 'meta', 'provenance'`)\r\n\r\n## How to fix\r\nIt can be fixed by retrieving the original answer from the original TriviaQA (e.g. `trivia_qa['train'][0]['answer']['value']`), perhaps at the same place as here where one retrieves the questions https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/kilt_tasks\/README.md#loading-the-kilt-knowledge-source-and-task-data\r\n\r\ncc @yjernite who previously answered to an issue about KILT and TriviaQA :)\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2391\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2390","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2390\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2390\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2390\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2390","id":897903642,"node_id":"MDExOlB1bGxSZXF1ZXN0NjQ5ODQ0NjQ2","number":2390,"title":"Add check for task templates on dataset load","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["LGTM now, thank you 
=)"],"created_at":1621592217000,"updated_at":1621612149000,"closed_at":1621612146000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2390","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2390","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2390.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2390.patch"},"body":"This PR adds a check that the features of a dataset match the schema of each compatible task template.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2390\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2389","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2389\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2389\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2389\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2389","id":897822270,"node_id":"MDExOlB1bGxSZXF1ZXN0NjQ5Nzc3MDMz","number":2389,"title":"Insert task templates for text classification","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Update: found a few datasets that slipped through the net. 
Adding them shortly!","You might have thought about this already, but would it make sense to use the `datasets.features.ClassLabel` values when possible instead of declaring the list once for the `feature` and once for the `template`?","> You might have thought about this already, but would it make sense to use the `datasets.features.ClassLabel` values when possible instead of declaring the list once for the `feature` and once for the `template`?\r\n\r\nhi @yjernite, these code insertions are auto-generated so could certainly be improved :) \r\n\r\njust so i understand, your idea is that instead of doing something like\r\n\r\n```python\r\nclass AGNews(datasets.GeneratorBasedBuilder):\r\n \"\"\"AG News topic classification dataset.\"\"\"\r\n\r\n def _info(self):\r\n return datasets.DatasetInfo(\r\n description=_DESCRIPTION,\r\n features=datasets.Features(\r\n {\r\n \"text\": datasets.Value(\"string\"),\r\n \"label\": datasets.features.ClassLabel(\r\n names=[\"World\", \"Sports\", \"Business\", \"Sci\/Tech\"]\r\n ),\r\n }\r\n ),\r\n homepage=\"http:\/\/groups.di.unipi.it\/~gulli\/AG_corpus_of_news_articles.html\",\r\n citation=_CITATION,\r\n task_templates=[\r\n TextClassification(\r\n labels=(\"Business\", \"Sci\/Tech\", \"Sports\", \"World\"),\r\n text_column=\"text\",\r\n label_column=\"label\",\r\n )\r\n ],\r\n )\r\n```\r\n\r\nwe could do the following:\r\n\r\n```python\r\nclass AGNews(datasets.GeneratorBasedBuilder):\r\n \"\"\"AG News topic classification dataset.\"\"\"\r\n\r\n def _info(self):\r\n info = datasets.DatasetInfo(\r\n description=_DESCRIPTION,\r\n features=datasets.Features(\r\n {\r\n \"text\": datasets.Value(\"string\"),\r\n \"label\": datasets.features.ClassLabel(\r\n names=[\"World\", \"Sports\", \"Business\", \"Sci\/Tech\"]\r\n ),\r\n }\r\n ),\r\n homepage=\"http:\/\/groups.di.unipi.it\/~gulli\/AG_corpus_of_news_articles.html\",\r\n citation=_CITATION,\r\n )\r\n\r\n info.task_templates = [\r\n TextClassification(\r\n labels=info.features.names,\r\n text_column=\"text\",\r\n label_column=\"label\",\r\n )\r\n ]\r\n return info\r\n```\r\n\r\n","Or we could simply not specify the labels and update the template in the DatasetInfo postinit to give it the labels ?","> Or we could simply not specify the labels and update the template in the DatasetInfo postinit to give it the labels ?\r\n\r\nOh yes, that would be great! It does mean enforcing that people use the right feature type (sometimes people still use a `string` feature still because they don't want to enumerate the classes, but I guess you've been catching most of those in reviews @lhoestq )\r\n\r\nThere might be reasons where there should be a legitimate difference, but I can't really think of nay right now, and we can always duplicate the feature","Let's ignore the CI fails since they are unrelated to your changes. 
They're about dataset cards issues"],"created_at":1621586186000,"updated_at":1622215738000,"closed_at":1622215588000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2389","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2389","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2389.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2389.patch"},"body":"This PR inserts text-classification templates for datasets with the following properties:\r\n\r\n* Only one config\r\n* At most two features of `(Value, ClassLabel)` type\r\n\r\nNote that this misses datasets like `sentiment140` which only has `Value` type features - these will be handled in a separate PR","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2389\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2388","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2388\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2388\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2388\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2388","id":897767470,"node_id":"MDU6SXNzdWU4OTc3Njc0NzA=","number":2388,"title":"Incorrect URLs for some datasets","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1621581755000,"updated_at":1622828385000,"closed_at":1622828385000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nIt seems that the URLs for the following datasets are invalid: \r\n\r\n- [ ] `bn_hate_speech` has been renamed: https:\/\/github.com\/rezacsedu\/Bengali-Hate-Speech-Dataset\/commit\/c67ecfc4184911e12814f6b36901f9828df8a63a\r\n- [ ] `covid_tweets_japanese` has been renamed: http:\/\/www.db.info.gifu-u.ac.jp\/covid-19-twitter-dataset\/\r\n\r\nAs a result we can no longer load these datasets using `load_dataset`. 
The simple fix is to rename the URL in the dataset script - will do this asap.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\n# pick one of the datasets from the list above\r\nds = load_dataset(\"bn_hate_speech\")\r\n```\r\n\r\n## Expected results\r\nDataset loads without error.\r\n\r\n## Actual results\r\n```\r\nDownloading: 3.36kB [00:00, 1.07MB\/s] \r\nDownloading: 2.03kB [00:00, 678kB\/s] \r\nUsing custom data configuration default\r\nDownloading and preparing dataset bn_hate_speech\/default (download: 951.48 KiB, generated: 949.84 KiB, post-processed: Unknown size, total: 1.86 MiB) to \/Users\/lewtun\/.cache\/huggingface\/datasets\/bn_hate_speech\/default\/0.0.0\/a2dc726e511a2177523301bcad196af05d4d8a2cff30d2769ba8aacc1f5fdb5c...\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/Users\/lewtun\/miniconda3\/envs\/hf-hub_eval\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 744, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/Users\/lewtun\/miniconda3\/envs\/hf-hub_eval\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 574, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/Users\/lewtun\/miniconda3\/envs\/hf-hub_eval\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 630, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"\/Users\/lewtun\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/bn_hate_speech\/a2dc726e511a2177523301bcad196af05d4d8a2cff30d2769ba8aacc1f5fdb5c\/bn_hate_speech.py\", line 76, in _split_generators\r\n train_path = dl_manager.download_and_extract(_URL)\r\n File \"\/Users\/lewtun\/miniconda3\/envs\/hf-hub_eval\/lib\/python3.8\/site-packages\/datasets\/utils\/download_manager.py\", line 287, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"\/Users\/lewtun\/miniconda3\/envs\/hf-hub_eval\/lib\/python3.8\/site-packages\/datasets\/utils\/download_manager.py\", line 195, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"\/Users\/lewtun\/miniconda3\/envs\/hf-hub_eval\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py\", line 195, in map_nested\r\n return function(data_struct)\r\n File \"\/Users\/lewtun\/miniconda3\/envs\/hf-hub_eval\/lib\/python3.8\/site-packages\/datasets\/utils\/download_manager.py\", line 218, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File \"\/Users\/lewtun\/miniconda3\/envs\/hf-hub_eval\/lib\/python3.8\/site-packages\/datasets\/utils\/file_utils.py\", line 281, in cached_path\r\n output_path = get_from_cache(\r\n File \"\/Users\/lewtun\/miniconda3\/envs\/hf-hub_eval\/lib\/python3.8\/site-packages\/datasets\/utils\/file_utils.py\", line 621, in get_from_cache\r\n raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\nFileNotFoundError: Couldn't find file at https:\/\/raw.githubusercontent.com\/rezacsedu\/Bengali-Hate-Speech-Dataset\/main\/Bengali_%20Hate_Speech_Dataset_Subset.csv\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.6.2.dev0\r\n- Platform: macOS-10.16-x86_64-i386-64bit\r\n- Python version: 3.8.8\r\n- PyArrow version: 3.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2388\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2387","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2387\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2387\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2387\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2387","id":897566666,"node_id":"MDU6SXNzdWU4OTc1NjY2NjY=","number":2387,"title":"datasets 1.6 ignores cache","user":{"login":"stas00","id":10676103,"node_id":"MDQ6VXNlcjEwNjc2MTAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10676103?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stas00","html_url":"https:\/\/github.com\/stas00","followers_url":"https:\/\/api.github.com\/users\/stas00\/followers","following_url":"https:\/\/api.github.com\/users\/stas00\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stas00\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stas00\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stas00\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stas00\/orgs","repos_url":"https:\/\/api.github.com\/users\/stas00\/repos","events_url":"https:\/\/api.github.com\/users\/stas00\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stas00\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Looks like there are multiple issues regarding this (#2386, #2322) and it's a WIP #2329. Currently these datasets are being loaded in-memory which is causing this issue. Quoting @mariosasko here for a quick fix:\r\n\r\n> set `keep_in_memory` to `False` when loading a dataset (`sst = load_dataset(\"sst\", keep_in_memory=False)`) to prevent it from loading in-memory. Currently, in-memory datasets fail to find cached files due to this check (always False for them)\r\n\r\n","Hi ! Since `datasets` 1.6.0 we no longer keep small datasets (<250MB) on disk and load them in RAM instead by default. This makes data processing and iterating on data faster. However datasets in RAM currently have no way to reload previous results from the cache (since nothing is written on disk). We are working on making the caching work for datasets in RAM.\r\n\r\nUntil then, I'd recommend passing `keep_in_memory=False` to the calls to `load_dataset` like here:\r\n\r\nhttps:\/\/github.com\/huggingface\/transformers\/blob\/223943872e8c9c3fc11db3c6e93da07f5177423f\/examples\/pytorch\/language-modeling\/run_clm.py#L233\r\n\r\nThis way you say explicitly that you want your dataset to stay on the disk, and it will be able to recover previously computed results from the cache.","gotcha! 
thanks Quentin","OK, It doesn't look like we can use the proposed workaround - see https:\/\/github.com\/huggingface\/transformers\/issues\/11801\r\n\r\nCould you please add an env var for us to be able to turn off this unwanted in our situation behavior? It is really problematic for dev work, when one needs to restart the training very often and needs a quick startup time. Manual editing of standard scripts is not a practical option when one uses examples.\r\n\r\nThis could also be a problem for tests, which will be slower because of lack of cache, albeit usually we use tiny datasets there. I think we want caching for tests.\r\n\r\nThank you.","Hi @stas00, \r\n\r\nYou are right: an env variable is needed to turn off this behavior. I am adding it.\r\n\r\nFor the moment there is a config parameter to turn off this behavior: `datasets.config.MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES = None`\r\n\r\nYou can find this info in the docs:\r\n- in the docstring of the parameter `keep_in_memory` of the function [`load_datasets`](https:\/\/huggingface.co\/docs\/datasets\/package_reference\/loading_methods.html#datasets.load_dataset):\r\n- in a Note in the docs about [Loading a Dataset](https:\/\/huggingface.co\/docs\/datasets\/loading_datasets.html#from-the-huggingface-hub)\r\n\r\n> The default in \ud83e\udd17Datasets is to memory-map the dataset on drive if its size is larger than datasets.config.MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES (default 250 MiB); otherwise, the dataset is copied in-memory. This behavior can be disabled by setting datasets.config.MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES = None, and in this case the dataset is not loaded in memory.","Yes, but this still requires one to edit the standard example scripts, so if I'm doing that already I just as well can add `keep_in_memory=False`.\r\n\r\nMay be the low hanging fruit is to add `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES` env var to match the config, and if the user sets it to 0, then it'll be the same as `keep_in_memory=False` or `datasets.config.MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES=0`?","@stas00, however, for the moment, setting the value to `0` is equivalent to the opposite, i.e. `keep_in_memory=True`. This means the max size until which I load in memory is 0 bytes.\r\n\r\nTell me if this is logical\/convenient, or I should change it.","In my PR, to turn off current default bahavior, you should set env variable to one of: `{\"\", \"OFF\", \"NO\", \"FALSE\"}`.\r\n\r\nFor example:\r\n```\r\nMAX_IN_MEMORY_DATASET_SIZE_IN_BYTES=\r\n```","IMHO, this behaviour is not very intuitive, as 0 is a normal quantity of bytes. So `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES=0` to me reads as don't cache ever.\r\n\r\nAlso \"SIZE_IN_BYTES\" that can take one of `{\"\", \"OFF\", \"NO\", \"FALSE\"}` is also quite odd.\r\n\r\nI think supporting a very simple `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES` that can accept any numerical value to match the name of the variable, requires minimal logic and is very straightforward. \r\n\r\nSo if you could adjust this logic - then `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES=0` is all that's needed to not do in-memory datasets.\r\n\r\nDoes it make sense?","I understand your point @stas00, as I am not very convinced with current implementation.\r\n\r\nMy concern is: which numerical value should then pass a user who wants `keep_in_memory=True` by default, independently of dataset size? 
Currently it is `0` for this case.","That's a good question, and again the normal bytes can be used for that:\r\n```\r\nMAX_IN_MEMORY_DATASET_SIZE_IN_BYTES=1e12 # (~2**40)\r\n```\r\nSince it's unlikely that anybody will have more than 1TB RAM.\r\n\r\nIt's also silly that it uses BYTES and not MBYTES - that level of refinement doesn't seem to be of a practical use in this context.\r\n\r\nNot sure when it was added and if there are back-compat issues here, but perhaps it could be renamed `MAX_IN_MEMORY_DATASET_SIZE` and support 1M, 1G, 1T, etc. \r\n\r\nBut scientific notation is quite intuitive too, as each 000 zeros is the next M, G, T multiplier. Minus the discrepancy of 1024 vs 1000, which adds up. And it is easy to write down `1e12`, as compared to `1099511627776` (2**40). (`1.1e12` is more exact).\r\n","Great! Thanks, @stas00.\r\n\r\nI am implementing your suggestion to turn off default value when set to `0`.\r\n\r\nFor the other suggestion (allowing different metric prefixes), I will discuss with @lhoestq to agree on its implementation.","Awesome! Thank you, @albertvillanova!!!\r\n\r\n"],"created_at":1621555978000,"updated_at":1622045274000,"closed_at":1622045274000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Moving from https:\/\/github.com\/huggingface\/transformers\/issues\/11801#issuecomment-845546612 \r\n\r\nQuoting @VictorSanh:\r\n\r\n> \r\n> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):\r\n> \r\n> > `{'train': [{'filename': '\/home\/victor\/.cache\/huggingface\/datasets\/openwebtext10k\/plain_text\/1.0.0\/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b\/cache-c6aefe81ca4e5152.arrow'}], 'validation': [{'filename': '\/home\/victor\/.cache\/huggingface\/datasets\/openwebtext10k\/plain_text\/1.0.0\/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b\/cache-97cf4c813e6469c6.arrow'}]}`\r\n> \r\n> while the same command with the latest version of datasets (actually starting at `1.6.0`) gives:\r\n> > `{'train': [], 'validation': []}`\r\n> \r\n\r\nI also confirm that downgrading to `datasets==1.5.0` makes things fast again - i.e. cache is used.\r\n\r\nto reproduce:\r\n```\r\nUSE_TF=0 python examples\/pytorch\/language-modeling\/run_clm.py \\\r\n --model_name_or_path gpt2 \\\r\n --dataset_name \"stas\/openwebtext-10k\" \\\r\n --output_dir output_dir \\\r\n --overwrite_output_dir \\\r\n --do_train \\\r\n --do_eval \\\r\n --max_train_samples 1000 \\\r\n --max_eval_samples 200 \\\r\n --per_device_train_batch_size 4 \\\r\n --per_device_eval_batch_size 4 \\\r\n --num_train_epochs 1 \\\r\n --warmup_steps 8 \\\r\n --block_size 64 \\\r\n --fp16 \\\r\n --report_to none\r\n```\r\n\r\nthe first time the startup is slow and some 5 tqdm bars. It shouldn't do it on consequent runs. 
but with `datasets>1.5.0` it rebuilds on every run.\r\n\r\n@lhoestq \r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2387\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2386","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2386\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2386\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2386\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2386","id":897560049,"node_id":"MDU6SXNzdWU4OTc1NjAwNDk=","number":2386,"title":"Accessing Arrow dataset cache_files","user":{"login":"Mehrad0711","id":28717374,"node_id":"MDQ6VXNlcjI4NzE3Mzc0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28717374?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Mehrad0711","html_url":"https:\/\/github.com\/Mehrad0711","followers_url":"https:\/\/api.github.com\/users\/Mehrad0711\/followers","following_url":"https:\/\/api.github.com\/users\/Mehrad0711\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Mehrad0711\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Mehrad0711\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Mehrad0711\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Mehrad0711\/orgs","repos_url":"https:\/\/api.github.com\/users\/Mehrad0711\/repos","events_url":"https:\/\/api.github.com\/users\/Mehrad0711\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Mehrad0711\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks @bhavitvyamalik for referencing the workaround. 
Setting `keep_in_memory=False` is working."],"created_at":1621555063000,"updated_at":1621624683000,"closed_at":1621624683000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nIn datasets 1.5.0 the following code snippet would have printed the cache_files:\r\n\r\n```\r\ntrain_data = load_dataset('conll2003', split='train', cache_dir='data')\r\nprint(train_data.cache_files[0]['filename'])\r\n\r\n```\r\n\r\nHowever, in the newest release (1.6.1), it prints an empty list.\r\n\r\nI also tried loading the dataset with `keep_in_memory=True` argument but still `cache_files` is empty.\r\n\r\nWas wondering if this is a bug or I need to pass additional arguments so I can access the cache_files.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2386\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2385","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2385\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2385\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2385\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2385","id":897206823,"node_id":"MDExOlB1bGxSZXF1ZXN0NjQ5MjM1Mjcy","number":2385,"title":"update citations","user":{"login":"adeepH","id":46108405,"node_id":"MDQ6VXNlcjQ2MTA4NDA1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/46108405?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/adeepH","html_url":"https:\/\/github.com\/adeepH","followers_url":"https:\/\/api.github.com\/users\/adeepH\/followers","following_url":"https:\/\/api.github.com\/users\/adeepH\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/adeepH\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/adeepH\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/adeepH\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/adeepH\/orgs","repos_url":"https:\/\/api.github.com\/users\/adeepH\/repos","events_url":"https:\/\/api.github.com\/users\/adeepH\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/adeepH\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1621533248000,"updated_at":1621600698000,"closed_at":1621600698000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2385","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2385","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2385.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2385.patch"},"body":"To update citations for [Offenseval_dravidiain](https:\/\/huggingface.co\/datasets\/offenseval_dravidian)\r\n ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2385\/timeline","performed_via_github_app":null,"is_pull_request":true} 
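Issues 2386 and 2387 above both come down to the same behaviour change in `datasets` 1.6: small datasets are loaded in memory by default, so no Arrow cache files are written or reused. A minimal sketch of the workaround quoted by the maintainers in those threads is shown below; it only restates the two options mentioned there (`keep_in_memory=False` per call, or the `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES` config value), and the chosen dataset follows the example in issue 2386.

```python
# Hedged sketch of the workaround discussed in issues 2386/2387:
# keep the dataset memory-mapped on disk so cache files are written and reused.
import datasets
from datasets import load_dataset

# Option 1: per-call flag, as quoted from the maintainers in the threads above.
train_data = load_dataset("conll2003", split="train", keep_in_memory=False)
print(train_data.cache_files)  # should list Arrow cache file paths again

# Option 2: config value mentioned in issue 2387; setting it to None disables
# in-memory loading for the whole session.
datasets.config.MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES = None
```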
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2384","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2384\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2384\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2384\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2384","id":896866461,"node_id":"MDExOlB1bGxSZXF1ZXN0NjQ4OTI4NTQ0","number":2384,"title":"Add args description to DatasetInfo","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for the suggestions! I've included them and made a few minor tweaks along the way","Please merge master into this branch to fix the CI, I just fixed metadata validation tests."],"created_at":1621518790000,"updated_at":1621675576000,"closed_at":1621675574000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2384","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2384","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2384.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2384.patch"},"body":"Closes #2354 \r\n\r\nI am not sure what `post_processed` and `post_processing_size` correspond to, so have left them empty for now. 
I also took a guess at some of the other fields like `dataset_size` vs `size_in_bytes`, so might have misunderstood their meaning.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2384\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2383","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2383\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2383\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2383\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2383","id":895779723,"node_id":"MDExOlB1bGxSZXF1ZXN0NjQ3OTU4MTQ0","number":2383,"title":"Improve example in rounding docs","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1621450763000,"updated_at":1621601602000,"closed_at":1621600589000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2383","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2383","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2383.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2383.patch"},"body":"Improves the example in the rounding subsection of the Split API docs. With this change, it should more clear what's the difference between the `closest` and the `pct1_dropremainder` rounding.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2383\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2382","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2382\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2382\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2382\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2382","id":895610216,"node_id":"MDU6SXNzdWU4OTU2MTAyMTY=","number":2382,"title":"DuplicatedKeysError: FAILURE TO GENERATE DATASET ! 
load_dataset('head_qa', 'en')","user":{"login":"helloworld123-lab","id":75953751,"node_id":"MDQ6VXNlcjc1OTUzNzUx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/75953751?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/helloworld123-lab","html_url":"https:\/\/github.com\/helloworld123-lab","followers_url":"https:\/\/api.github.com\/users\/helloworld123-lab\/followers","following_url":"https:\/\/api.github.com\/users\/helloworld123-lab\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/helloworld123-lab\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/helloworld123-lab\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/helloworld123-lab\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/helloworld123-lab\/orgs","repos_url":"https:\/\/api.github.com\/users\/helloworld123-lab\/repos","events_url":"https:\/\/api.github.com\/users\/helloworld123-lab\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/helloworld123-lab\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1621439388000,"updated_at":1622381176000,"closed_at":1622381176000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hello everyone,\r\n\r\nI try to use head_qa dataset in [https:\/\/huggingface.co\/datasets\/viewer\/?dataset=head_qa&config=en](url)\r\n\r\n```\r\n!pip install datasets\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\r\n 'head_qa', 'en')\r\n```\r\nWhen I write above load_dataset(.), it throws the following:\r\n\r\n```\r\nDuplicatedKeysError Traceback (most recent call last)\r\n\r\n in ()\r\n 2 from datasets import load_dataset\r\n 3 dataset = load_dataset(\r\n----> 4 'head_qa', 'en')\r\n\r\n5 frames\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/arrow_writer.py in check_duplicate_keys(self)\r\n 347 for hash, key in self.hkey_record:\r\n 348 if hash in tmp_record:\r\n--> 349 raise DuplicatedKeysError(key)\r\n 350 else:\r\n 351 tmp_record.add(hash)\r\n\r\nDuplicatedKeysError: FAILURE TO GENERATE DATASET !\r\nFound duplicate Key: 1\r\nKeys should be unique and deterministic in nature\r\n```\r\nHow can I fix the error? 
Thanks\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2382\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2381","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2381\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2381\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2381\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2381","id":895588844,"node_id":"MDExOlB1bGxSZXF1ZXN0NjQ3NzkyNDcw","number":2381,"title":"add dataset card title","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1621438203000,"updated_at":1621536700000,"closed_at":1621536700000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2381","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2381","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2381.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2381.patch"},"body":"few of them were missed by me earlier which I've added now","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2381\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2380","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2380\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2380\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2380\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2380","id":895367201,"node_id":"MDExOlB1bGxSZXF1ZXN0NjQ3NTk3NTc3","number":2380,"title":"maintain YAML structure reading from 
README","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1621426327000,"updated_at":1621429718000,"closed_at":1621429718000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2380","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2380","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2380.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2380.patch"},"body":"How YAML used be loaded earlier in the string (structure of YAML was affected because of this and YAML for datasets with multiple configs was not being loaded correctly):\r\n```\r\nannotations_creators:\r\nlabeled_final:\r\n- expert-generated\r\nlabeled_swap:\r\n- expert-generated\r\nunlabeled_final:\r\n- machine-generated\r\nlanguage_creators:\r\n- machine-generated\r\nlanguages:\r\n- en\r\nlicenses:\r\n- other\r\nmultilinguality:\r\n- monolingual\r\nsize_categories:\r\nlabeled_final:\r\n- 10K\r\n- `datasets` version: datasets-1.6.2\r\n- Platform: Linux\r\n- Python version: 3.7\r\n- PyArrow version: 0.17.1, also 2.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2377\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2376","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2376\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2376\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2376\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2376","id":894852264,"node_id":"MDExOlB1bGxSZXF1ZXN0NjQ3MTU1NDE4","number":2376,"title":"Improve task api code 
quality","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Looks good thanks, what do you think @lewtun ?","thanks for including the lazy `ClassLabel` class @mariosasko ! from my side this LGTM!"],"created_at":1621379620000,"updated_at":1622666397000,"closed_at":1621956654000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2376","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2376","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2376.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2376.patch"},"body":"Improves the code quality of the `TaskTemplate` dataclasses.\r\n\r\nChanges:\r\n* replaces `return NotImplemented` with raise `NotImplementedError` \r\n* replaces `sorted` with `len` in the uniqueness check \r\n* defines `label2id` and `id2label` in the `TextClassification` template as properties\r\n* replaces the `object.__setattr__(self, attr, value)` syntax with (IMO nicer) `self.__dict__[attr] = value`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2376\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2375","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2375\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2375\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2375\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2375","id":894655157,"node_id":"MDExOlB1bGxSZXF1ZXN0NjQ2OTg2NTcw","number":2375,"title":"Dataset 
Streaming","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1621362000000,"updated_at":1624466102000,"closed_at":1624466101000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2375","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2375","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2375.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2375.patch"},"body":"# Dataset Streaming\r\n\r\n## API\r\n\r\nCurrent API is\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# Load an IterableDataset without downloading data\r\nsnli = load_dataset(\"snli\", streaming=True)\r\n\r\n# Access examples by streaming data\r\nprint(next(iter(snli[\"train\"]))) \r\n# {'premise': 'A person on a horse jumps over a broken down airplane.',\r\n# 'hypothesis': 'A person is training his horse for a competition.',\r\n# 'label': 1}\r\n```\r\n\r\nI already implemented a few methods:\r\n- IterableDataset.map: apply transforms on-the-fly to the examples\r\n- IterableDataset.shuffle: shuffle the data _a la_ TFDS, i.e. with a shuffling buffer\r\n- IterableDataset.with_format: set the format to `\"torch\"` to get a `torch.utils.data.IterableDataset`\r\n- merge_datasets: merge two iterable datasets by alternating one or the other (you can specify the probabilities)\r\n\r\nI would love to have your opinion on the API design :)\r\n\r\n## Implementation details\r\n\r\n### Streaming\r\n\r\nData streaming is done using `fsspec` which has nice caching features.\r\n\r\nTo make dataset streaming work I extend the `open` function of dataset scripts to support opening remote files without downloading them entirely. It also works with remote compressed archives (currently only zip is supported):\r\n\r\n```python\r\n# Get a file-like object by streaming data from a remote file\r\nopen(\"https:\/\/github.com\/davidsbatista\/NER-datasets\/raw\/master\/CONLL2003\/train.txt\")\r\n\r\n# Get a file-like object by streaming data from a remote compressed archive by using the hop separator \"::\"\r\nopen(\"zip:\/\/snli_1.0_train.txt::https:\/\/nlp.stanford.edu\/projects\/snli\/snli_1.0.zip\")\r\n```\r\n\r\nI also extend the `os.path.join` function to support navigation in remote compressed archives, since it has to deal with the `\"::\"` separator. 
This separator is used by `fsspec`.\r\n\r\nFinally I also added a retry mechanism in case the connection fails during data streaming.\r\n\r\n### Transforms\r\n\r\nAn IterableDataset wraps an ExamplesIterable instance. There are different subclasses depending on the transforms we want to apply:\r\n- ExamplesIterable: the basic one\r\n- MappedExamplesIterable: an iterable with a `map` function applied on the fly\r\n- BufferShuffledExamplesIterable: an iterable with a shuffling buffer\r\n- CyclingMultiSourcesExamplesIterable: alternates between several ExamplesIterable\r\n- RandomlyCyclingMultiSourcesExamplesIterable: randomly alternates between several ExamplesIterable\r\n\r\n### DatasetBuilder\r\n\r\nI use the same builders as usual. I just added a new method `_get_examples_iterable_for_split` to get an ExamplesIterable for a given split. Currently only the GeneratorBasedBuilder and the ArrowBasedBuilder implement it.\r\n\r\nThe BeamBasedBuilder doesn't implement it yet.\r\nIt means that datasets like wikipedia and natural_questions can't be loaded as IterableDataset for now.\r\n\r\n## Other details\r\n\r\nI may have to do some changes in many dataset script to use `download` instead of `download_and_extract` when extraction is not needed. This will avoid errors for streaming.<\/s>\r\n\r\nEDIT: Actually I just check for the extension of the file to do extraction only if needed.\r\n\r\nEDIT2: It's not possible to stream from .tar.gz files without downloading the file completely. For now I raise an error if one want to get a streaming dataset based on .tar.gz files.\r\n\r\n## TODO\r\n\r\nusual stuff:\r\n\r\n- [x] make streaming dependency \"aiohttp\" optional: `pip install datasets[streaming]`\r\n- [x] tests\r\n- [x] docs","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2375\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2374","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2374\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2374\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2374\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2374","id":894579364,"node_id":"MDExOlB1bGxSZXF1ZXN0NjQ2OTIyMjkw","number":2374,"title":"add `desc` to `tqdm` in 
`Dataset.map()`","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Once this is merged, let's update `transformers` examples to use this new code. As currently all those tqdm bars are who knows what they are....\r\n\r\nhttps:\/\/github.com\/huggingface\/transformers\/issues\/11797","Sure @stas00! Once this is merged let's discuss what all changes can be done on `transformers` side","@bhavitvyamalik, as it has been merged would you like to tackle https:\/\/github.com\/huggingface\/transformers\/issues\/11797?\r\n","Definitely @stas00. From what I could gather, you guys want more meaningful `.map` calls for all examples [here](https:\/\/github.com\/huggingface\/transformers\/tree\/master\/examples\/pytorch)?","That's exactly right, @bhavitvyamalik \r\n\r\nPerhaps the best approach is to do one example, see that other maintainers agree on it. and then replicate to other."],"created_at":1621356269000,"updated_at":1622130244000,"closed_at":1622041161000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2374","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2374","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2374.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2374.patch"},"body":"Fixes #2330. 
Please let me know if anything is also required in this ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2374\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2373","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2373\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2373\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2373\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2373","id":894499909,"node_id":"MDU6SXNzdWU4OTQ0OTk5MDk=","number":2373,"title":"Loading dataset from local path","user":{"login":"kolakows","id":34172905,"node_id":"MDQ6VXNlcjM0MTcyOTA1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/34172905?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/kolakows","html_url":"https:\/\/github.com\/kolakows","followers_url":"https:\/\/api.github.com\/users\/kolakows\/followers","following_url":"https:\/\/api.github.com\/users\/kolakows\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/kolakows\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/kolakows\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/kolakows\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/kolakows\/orgs","repos_url":"https:\/\/api.github.com\/users\/kolakows\/repos","events_url":"https:\/\/api.github.com\/users\/kolakows\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/kolakows\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Version below works, checked again in the docs, and data_files should be a path.\r\n```\r\nds = datasets.load_dataset('my_script.py', \r\n data_files='\/data\/dir\/corpus.txt', \r\n cache_dir='.')\r\n```"],"created_at":1621351250000,"updated_at":1621352196000,"closed_at":1621352195000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I'm trying to load a local dataset with the code below\r\n\r\n```\r\nds = datasets.load_dataset('my_script.py', \r\n data_files='corpus.txt', \r\n data_dir='\/data\/dir', \r\n cache_dir='.')\r\n```\r\nBut internally a BuilderConfig is created, which tries to use getmtime on the data_files string, without using data_dir. 
Is this a bug or am I not using the load_dataset correctly?\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/bc61954083f74e6460688202e9f77dde2475319c\/src\/datasets\/builder.py#L153","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2373\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2372","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2372\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2372\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2372\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2372","id":894496064,"node_id":"MDExOlB1bGxSZXF1ZXN0NjQ2ODUxODc2","number":2372,"title":"ConvQuestions benchmark added","user":{"login":"PhilippChr","id":24608689,"node_id":"MDQ6VXNlcjI0NjA4Njg5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24608689?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/PhilippChr","html_url":"https:\/\/github.com\/PhilippChr","followers_url":"https:\/\/api.github.com\/users\/PhilippChr\/followers","following_url":"https:\/\/api.github.com\/users\/PhilippChr\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/PhilippChr\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/PhilippChr\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/PhilippChr\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/PhilippChr\/orgs","repos_url":"https:\/\/api.github.com\/users\/PhilippChr\/repos","events_url":"https:\/\/api.github.com\/users\/PhilippChr\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/PhilippChr\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for your helpful comments and suggestions! :)\r\nI integrated the additional fields, and extended some of the README\/dataset card.\r\nAnd I actually realized that we had the cc-by-4.0 for the dataset, so this was also changed.","I added the answers to the test set actually :)","Oh great ! Let me revert my change then"],"created_at":1621351010000,"updated_at":1622025105000,"closed_at":1622025105000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2372","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2372","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2372.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2372.patch"},"body":"Hello,\r\nI would like to integrate our dataset on conversational QA. The answers are grounded in the KG.\r\nThe work was published in CIKM 2019 (https:\/\/dl.acm.org\/doi\/10.1145\/3357384.3358016).\r\nWe hope for further research on how to deal with the challenges of factoid conversational QA.\r\nThanks! 
:)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2372\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2371","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2371\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2371\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2371\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2371","id":894193403,"node_id":"MDU6SXNzdWU4OTQxOTM0MDM=","number":2371,"title":"Align question answering tasks with sub-domains","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or 
request"}],"state":"open","locked":false,"assignee":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1621331279000,"updated_at":1621331362000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"As pointed out by @thomwolf in #2255 we should consider breaking with the pipeline taxonomy of `transformers` to account for the various types of question-answering domains:\r\n\r\n> `question-answering` exists in two forms: abstractive and extractive question answering.\r\n> \r\n> we can keep a generic `question-answering` but then it will probably mean diferrent schema of input\/output for both (abstractive will have text for both while extractive can use spans indication as well as text).\r\n> \r\n> Or we can also propose to use `abstractive-question-answering` and `extractive-question-answering` for instance.\r\n> Maybe we could have `question-answering-abstractive` and `question-answering-extractive` if somehow we can use a for a completion or search in the future (detail).\r\n> Actually I see that people are more organizing in terms of general and sub-tasks, for instance on paperwithcode: https:\/\/paperswithcode.com\/area\/natural-language-processing and on nlpprogress: https:\/\/github.com\/sebastianruder\/NLP-progress\/blob\/master\/english\/question_answering.md#squad\r\n> \r\n> Probably the best is to align with one of these in terms of denomination, PaperWithCode is probably the most active and maintained and we work with them as well.\r\n> Maybe you want to check with a few QA datasets that this schema make sense. 
Typically NaturalQuestions, TriviaQA and can be good second datasets to compare to and be sure of the generality of the schema.\r\n> \r\n> A good recent list of QA datasets to compare the schemas among, is for instance in the UnitedQA paper: https:\/\/arxiv.org\/abs\/2101.00178\r\n\r\nInvestigate which grouping of QA is best suited for `datasets` and adapt \/ extend the QA task template accordingly.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2371\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2370","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2370\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2370\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2370\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2370","id":893606432,"node_id":"MDExOlB1bGxSZXF1ZXN0NjQ2MDkyNDQy","number":2370,"title":"Adding HendrycksTest dataset","user":{"login":"andyzoujm","id":43451571,"node_id":"MDQ6VXNlcjQzNDUxNTcx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/43451571?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/andyzoujm","html_url":"https:\/\/github.com\/andyzoujm","followers_url":"https:\/\/api.github.com\/users\/andyzoujm\/followers","following_url":"https:\/\/api.github.com\/users\/andyzoujm\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/andyzoujm\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/andyzoujm\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/andyzoujm\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/andyzoujm\/orgs","repos_url":"https:\/\/api.github.com\/users\/andyzoujm\/repos","events_url":"https:\/\/api.github.com\/users\/andyzoujm\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/andyzoujm\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq Thank you for the review. I've made the suggested changes. There still might be some problems with dummy data though due to some csv loading issues (which I haven't found the cause to).","I took a look at the dummy data and some csv lines were cropped. I fixed them :)"],"created_at":1621277585000,"updated_at":1622479033000,"closed_at":1622479033000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2370","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2370","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2370.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2370.patch"},"body":"Adding Hendrycks test from https:\/\/arxiv.org\/abs\/2009.03300.\r\nI'm having a bit of trouble with dummy data creation because some lines in the csv files aren't being loaded properly (only the first entry loaded in a row of length 6). The dataset is loading just fine. 
Hope you can kindly help!\r\nThank you!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2370\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2369","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2369\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2369\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2369\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2369","id":893554153,"node_id":"MDExOlB1bGxSZXF1ZXN0NjQ2MDQ5NDM1","number":2369,"title":"correct labels of conll2003","user":{"login":"philschmid","id":32632186,"node_id":"MDQ6VXNlcjMyNjMyMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32632186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/philschmid","html_url":"https:\/\/github.com\/philschmid","followers_url":"https:\/\/api.github.com\/users\/philschmid\/followers","following_url":"https:\/\/api.github.com\/users\/philschmid\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/philschmid\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/philschmid\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/philschmid\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/philschmid\/orgs","repos_url":"https:\/\/api.github.com\/users\/philschmid\/repos","events_url":"https:\/\/api.github.com\/users\/philschmid\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/philschmid\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1621273074000,"updated_at":1621326462000,"closed_at":1621326462000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2369","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2369","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2369.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2369.patch"},"body":"# What does this PR\r\n\r\nIt fixes\/extends the `ner_tags` for conll2003 to include all. 
\r\nPaper reference https:\/\/arxiv.org\/pdf\/cs\/0306050v1.pdf\r\nModel reference https:\/\/huggingface.co\/elastic\/distilbert-base-cased-finetuned-conll03-english\/blob\/main\/config.json \r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2369\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2368","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2368\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2368\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2368\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2368","id":893411076,"node_id":"MDExOlB1bGxSZXF1ZXN0NjQ1OTI5NzM0","number":2368,"title":"Allow \"other-X\" in licenses","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1621262874000,"updated_at":1621269387000,"closed_at":1621269387000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2368","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2368","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2368.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2368.patch"},"body":"This PR allows \"other-X\" licenses during metadata validation.\r\n\r\n@lhoestq ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2368\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2367","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2367\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2367\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2367\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2367","id":893317427,"node_id":"MDExOlB1bGxSZXF1ZXN0NjQ1ODUxNTE0","number":2367,"title":"Remove getchildren from hyperpartisan news 
detection","user":{"login":"ghomasHudson","id":13795113,"node_id":"MDQ6VXNlcjEzNzk1MTEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13795113?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ghomasHudson","html_url":"https:\/\/github.com\/ghomasHudson","followers_url":"https:\/\/api.github.com\/users\/ghomasHudson\/followers","following_url":"https:\/\/api.github.com\/users\/ghomasHudson\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ghomasHudson\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ghomasHudson\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ghomasHudson\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ghomasHudson\/orgs","repos_url":"https:\/\/api.github.com\/users\/ghomasHudson\/repos","events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1621257037000,"updated_at":1621260433000,"closed_at":1621260433000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2367","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2367","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2367.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2367.patch"},"body":"`Element.getchildren()` is now deprecated in the ElementTree library (I think in python 3.9, so it still passes the automated tests which are using 3.6. But for those of us on bleeding-edge distros it now fails).\r\n\r\nhttps:\/\/bugs.python.org\/issue29209","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2367\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2366","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2366\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2366\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2366\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2366","id":893185266,"node_id":"MDU6SXNzdWU4OTMxODUyNjY=","number":2366,"title":"Json loader fails if user-specified features don't match the json data fields 
order","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1621247168000,"updated_at":1623840469000,"closed_at":1623840469000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"If 
you do\r\n```python\r\ndataset = load_dataset(\"json\", data_files=data_files, features=features)\r\n```\r\nThen depending on the order of the features in the json data field it fails:\r\n```python\r\n[...]\r\n~\/Desktop\/hf\/datasets\/src\/datasets\/packaged_modules\/json\/json.py in _generate_tables(self, files)\r\n 94 if self.config.schema:\r\n 95 # Cast allows str <-> int\/float, while parse_option explicit_schema does NOT\r\n---> 96 pa_table = pa_table.cast(self.config.schema)\r\n 97 yield i, pa_table\r\n[...]\r\nValueError: Target schema's field names are not matching the table's field names: ['tokens', 'ner_tags'], ['ner_tags', 'tokens']\r\n```\r\n\r\nThis is because one must first re-order the columns of the table to match the `self.config.schema` before calling cast.\r\n\r\nOne way to fix the `cast` would be to replace it with:\r\n```python\r\n# reorder the arrays if necessary + cast to schema\r\n# we can't simply use .cast here because we may need to change the order of the columns\r\npa_table = pa.Table.from_arrays([pa_table[name] for name in schema.names], schema=schema)\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2366\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2365","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2365\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2365\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2365\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2365","id":893179697,"node_id":"MDU6SXNzdWU4OTMxNzk2OTc=","number":2365,"title":"Missing ClassLabel encoding in Json loader","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/5","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/5","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/5\/labels","id":6808903,"node_id":"MDk6TWlsZXN0b25lNjgwODkwMw==","number":5,"title":"1.9","description":"Next minor 
release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":12,"state":"closed","created_at":1622477586000,"updated_at":1626099120000,"due_on":1625727600000,"closed_at":1625809807000},"comments":[],"created_at":1621246750000,"updated_at":1624892734000,"closed_at":1624892734000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"Currently if you want to load a json dataset this way\r\n```python\r\ndataset = load_dataset(\"json\", data_files=data_files, features=features)\r\n```\r\nThen if your features has ClassLabel types and if your json data needs class label encoding (i.e. if the labels in the json files are strings and not integers), then it would fail:\r\n```python\r\n[...]\r\n~\/Desktop\/hf\/datasets\/src\/datasets\/packaged_modules\/json\/json.py in _generate_tables(self, files)\r\n 94 if self.config.schema:\r\n 95 # Cast allows str <-> int\/float, while parse_option explicit_schema does NOT\r\n---> 96 pa_table = pa_table.cast(self.config.schema)\r\n 97 yield i, pa_table\r\n[...]\r\nArrowInvalid: Failed to parse string: 'O' as a scalar of type int64\r\n```\r\n\r\nThis is because it just tries to cast the string data to integers, without applying the mapping str->int first\r\n\r\nThe current workaround is to do instead\r\n```python\r\ndataset = load_dataset(\"json\", data_files=data_files)\r\ndataset = dataset.map(features.encode_example, features=features)\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2365\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2364","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2364\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2364\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2364\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2364","id":892420500,"node_id":"MDExOlB1bGxSZXF1ZXN0NjQ1MTI4MDYx","number":2364,"title":"README updated for SNLI, 
MNLI","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Regarding the license issue, I think we should allow it since it starts with `other-`. Cc @gchhablani what do you think ?","@lhoestq I agree, I'll look into it."],"created_at":1621078679000,"updated_at":1621260867000,"closed_at":1621258459000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2364","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2364","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2364.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2364.patch"},"body":"Closes #2275. Mentioned about -1 labels in MNLI, SNLI and how they should be removed before training. 
@lhoestq `check_code_quality` test might fail for MNLI as the license name `other-Open Portion of the American National Corpus` is not a registered tag for 'licenses'","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2364\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2363","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2363\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2363\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2363\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2363","id":892391232,"node_id":"MDU6SXNzdWU4OTIzOTEyMzI=","number":2363,"title":"Trying to use metric.compute but get OSError","user":{"login":"hyusterr","id":52968111,"node_id":"MDQ6VXNlcjUyOTY4MTEx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/52968111?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hyusterr","html_url":"https:\/\/github.com\/hyusterr","followers_url":"https:\/\/api.github.com\/users\/hyusterr\/followers","following_url":"https:\/\/api.github.com\/users\/hyusterr\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hyusterr\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hyusterr\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hyusterr\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hyusterr\/orgs","repos_url":"https:\/\/api.github.com\/users\/hyusterr\/repos","events_url":"https:\/\/api.github.com\/users\/hyusterr\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hyusterr\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["also, I test the function on some little data , get the same message:\r\n\r\n```\r\nPython 3.8.5 (default, Jan 27 2021, 15:41:15)\r\n[GCC 9.3.0] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> from datasets import load_metric\r\n>>> metric = load_metric('accuracy')\r\n>>> metric.add_batch(predictions=[1, 1, 1, 1], references=[1, 1, 0, 0])\r\n2021-05-15 16:39:17.240991: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0\r\n>>> metric.compute()\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/home\/yshuang\/.local\/lib\/python3.8\/site-packages\/datasets\/metric.py\", line 391, in compute\r\n self._finalize()\r\n File \"\/home\/yshuang\/.local\/lib\/python3.8\/site-packages\/datasets\/metric.py\", line 342, in _finalize\r\n self.writer.finalize()\r\n File \"\/home\/yshuang\/.local\/lib\/python3.8\/site-packages\/datasets\/arrow_writer.py\", line 370, in finalize\r\n self.stream.close()\r\n File \"pyarrow\/io.pxi\", line 132, in pyarrow.lib.NativeFile.close\r\n File \"pyarrow\/error.pxi\", line 112, in pyarrow.lib.check_status\r\nOSError: error closing file\r\n```","Hi @hyusterr,\r\nIf you look at the example provided in `metrics\/accuracy.py`, it only does `metric.compute()` to calculate the accuracy. 
Here's an example:\r\n```\r\nfrom datasets import load_metric\r\nmetric = load_metric('accuracy')\r\noutput = metric.compute(predictions=[1, 1, 1, 1], references=[1, 1, 0, 0])\r\nprint(output['accuracy']) # 0.5\r\n```\r\n","I thought I can use Metric to collect predictions and references, this follows the step from huggingface's sample colab.\r\nBTW, I fix the problem by setting other cache_dir in load_metric, but I'm still wondering about the mechanism.","I tried this code on a colab notebook and it worked fine (with gpu enabled):\r\n```\r\nfrom datasets import load_metric\r\nmetric = load_metric('accuracy')\r\noutput = metric.add_batch(predictions=[1, 1, 1, 1], references=[1, 1, 0, 0])\r\nfinal_score = metric.compute()\r\nprint(final_score) # 0.5\r\n```\r\nAlso, in `load_metric`, I saw `cache_dir` is optional and it defaults to `~\/.datasets\/`","Hi ! By default it caches the predictions and references used to compute the metric in `~\/.cache\/huggingface\/datasets\/metrics` (not `~\/.datasets\/`). Let me update the documentation @bhavitvyamalik .\r\n\r\nThe cache is used to store all the predictions and references passed to `add_batch` for example in order to compute the metric later when `compute` is called.\r\n\r\nI think the issue might come from the cache directory that is used by default. Can you check that you have the right permissions ? Otherwise feel free to set `cache_dir` to another location."],"created_at":1621067946000,"updated_at":1630936866000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I want to use metric.compute from load_metric('accuracy') to get training accuracy, but receive OSError. I am wondering what is the mechanism behind the metric calculation, why would it report an OSError?\r\n\r\n```python\r\n195 for epoch in range(num_train_epochs):\r\n196 model.train()\r\n197 for step, batch in enumerate(train_loader):\r\n198 # print(batch['input_ids'].shape)\r\n199 outputs = model(**batch)\r\n200\r\n201 loss = outputs.loss\r\n202 loss \/= gradient_accumulation_steps\r\n203 accelerator.backward(loss)\r\n204\r\n205 predictions = outputs.logits.argmax(dim=-1)\r\n206 metric.add_batch(\r\n207 predictions=accelerator.gather(predictions),\r\n208 references=accelerator.gather(batch['labels'])\r\n209 )\r\n210 progress_bar.set_postfix({'loss': loss.item(), 'train batch acc.': train_metrics})\r\n211\r\n212 if (step + 1) % 50 == 0 or step == len(train_loader) - 1:\r\n213 train_metrics = metric.compute()\r\n```\r\n\r\nthe error message is as below:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"run_multi.py\", line 273, in \r\n main()\r\n File \"\/home\/yshuang\/.local\/lib\/python3.8\/site-packages\/click\/core.py\", line 829, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"\/home\/yshuang\/.local\/lib\/python3.8\/site-packages\/click\/core.py\", line 782, in main\r\n rv = self.invoke(ctx)\r\n File \"\/home\/yshuang\/.local\/lib\/python3.8\/site-packages\/click\/core.py\", line 1066, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"\/home\/yshuang\/.local\/lib\/python3.8\/site-packages\/click\/core.py\", line 610, in invoke\r\n return callback(*args, **kwargs)\r\n File \"run_multi.py\", line 213, in main\r\n train_metrics = metric.compute()\r\n File \"\/home\/yshuang\/.local\/lib\/python3.8\/site-packages\/datasets\/metric.py\", line 391, in compute\r\n self._finalize()\r\n File \"\/home\/yshuang\/.local\/lib\/python3.8\/site-packages\/datasets\/metric.py\", line 342, in 
_finalize\r\n self.writer.finalize()\r\n File \"\/home\/yshuang\/.local\/lib\/python3.8\/site-packages\/datasets\/arrow_writer.py\", line 370, in finalize\r\n self.stream.close()\r\n File \"pyarrow\/io.pxi\", line 132, in pyarrow.lib.NativeFile.close\r\n File \"pyarrow\/error.pxi\", line 99, in pyarrow.lib.check_status\r\nOSError: error closing file\r\n```\r\n\r\n## Environment info\r\n\r\n- `datasets` version: 1.6.1\r\n- Platform: Linux NAME=\"Ubuntu\" VERSION=\"20.04.1 LTS (Focal Fossa)\"\r\n- Python version: python3.8.5\r\n- PyArrow version: 4.0.0\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2363\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2362","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2362\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2362\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2362\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2362","id":892100749,"node_id":"MDExOlB1bGxSZXF1ZXN0NjQ0ODYzOTQw","number":2362,"title":"Fix web_nlg metadata","user":{"login":"julien-c","id":326577,"node_id":"MDQ6VXNlcjMyNjU3Nw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/326577?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/julien-c","html_url":"https:\/\/github.com\/julien-c","followers_url":"https:\/\/api.github.com\/users\/julien-c\/followers","following_url":"https:\/\/api.github.com\/users\/julien-c\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/julien-c\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/julien-c\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/julien-c\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/julien-c\/orgs","repos_url":"https:\/\/api.github.com\/users\/julien-c\/repos","events_url":"https:\/\/api.github.com\/users\/julien-c\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/julien-c\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! `release_v2.1` and the others are dataset configuration names.\r\n\r\nThe configuration names are used to show the right code snippet in the UI to load the dataset.\r\nFor example if the parsing of the web_nlg tags worked correctly we would have:\r\n![image](https:\/\/user-images.githubusercontent.com\/42851186\/118475444-8d1e5d00-b70c-11eb-98e9-844d4daf6139.png)\r\n\r\nTherefore I don't think it's a good idea to rename the configurations from `release_v2.1` to `release_v2_1` as the code snippet would be wrong in this case.\r\n\r\nMoreover we can't really disallow dots in configuration names and rename the configurations since it would be a big breaking change. It's commonly used, especially with multilingual datasets. 
For example `load_dataset(\"indic_glue\", \"sna.bn\")`.\r\n\r\nIs this something that can be fixed on the moonlanding side instead ?","> Is this something that can be fixed on the moonlanding side instead ?\r\n\r\nNot really unless we change database:)\r\n\r\nWe'll maybe try to find another workaround, but super low-prio given that it's the only dataset that has those dotted keys in the YAML metadata","Ok, should we close this PR then ?"],"created_at":1621012507000,"updated_at":1621259057000,"closed_at":1621258948000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2362","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2362","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2362.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2362.patch"},"body":"Our metadata storage system does not support `.` inside keys. cc @Pierrci \r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2362\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2361","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2361\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2361\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2361\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2361","id":891982808,"node_id":"MDExOlB1bGxSZXF1ZXN0NjQ0NzYzNTU4","number":2361,"title":"Preserve dtype for numpy\/torch\/tf\/jax arrays","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq, \r\nIt turns out that pyarrow `ListArray` are not recognized as list-like when we get output from `numpy_to_pyarrow_listarray`. This might cause tests to fail. If possible can we convert that `ListArray` output to list inorder for tests to pass? Under the hood it'll maintain the dtype as that of numpy array passed during input only","Brought down the failing tests from 7 to 4. Let me know if that part looks good. Failing tests are looking quite similar. 
In `test_map_torch` https:\/\/github.com\/huggingface\/datasets\/blob\/3d46bc384f811435e59e3916faa3aa20a1cf87bc\/tests\/test_arrow_dataset.py#L1039 and `test_map_tf`https:\/\/github.com\/huggingface\/datasets\/blob\/3d46bc384f811435e59e3916faa3aa20a1cf87bc\/tests\/test_arrow_dataset.py#L1056 \r\nthey're expecting `float64`. Shouldn't that be `float32` now?","It's normal: pytorch and tensorflow use `float32` by default, unlike numpy which uses `float64`.\r\n\r\nI think that we should always keep the precision of the original tensor (torch\/tf\/numpy).\r\nIt means that as it is in this PR it's fine (the precision is conserved when doing the torch\/tf -> numpy conversion).\r\n\r\nThis is a breaking change but in my opinion the fact that we had Value(\"float64\") for torch.float32 tensors was an issue already.\r\n\r\nLet me know what you think. Cc @albertvillanova if you have an opinion on this\r\n\r\nIf we agree on doing this breaking change, we can just change the test. ","Hi @lhoestq, \r\nMerged master into this branch. Only changing the test is left for now (mentioned below) after which all tests should pass.\r\n\r\n> Brought down the failing tests from 7 to 4. Let me know if that part looks good. Failing tests are looking quite similar. In `test_map_torch`\r\n> \r\n> https:\/\/github.com\/huggingface\/datasets\/blob\/3d46bc384f811435e59e3916faa3aa20a1cf87bc\/tests\/test_arrow_dataset.py#L1039\r\n> \r\n> and `test_map_tf`\r\n> https:\/\/github.com\/huggingface\/datasets\/blob\/3d46bc384f811435e59e3916faa3aa20a1cf87bc\/tests\/test_arrow_dataset.py#L1056\r\n> \r\n> \r\n> they're expecting `float64`. Shouldn't that be `float32` now?\r\n\r\n","> they're expecting float64. Shouldn't that be float32 now?\r\n\r\nYes feel free to update those tests :)\r\n\r\nIt would be nice to have the same test for JAX as well","Added same test for for JAX too. Also, I saw that I missed changing `test_cast_to_python_objects_jax` like I did for TF and PyTorch. Finished that as well"],"created_at":1621003523000,"updated_at":1629189004000,"closed_at":1629189004000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2361","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2361","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2361.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2361.patch"},"body":"Fixes #625. This lets the user preserve the dtype of numpy array to pyarrow array which was getting lost due to conversion of numpy array -> list -> pyarrow array. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2361\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2360","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2360\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2360\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2360\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2360","id":891965964,"node_id":"MDU6SXNzdWU4OTE5NjU5NjQ=","number":2360,"title":"Automatically detect datasets with compatible task schemas","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or 
request"}],"state":"open","locked":false,"assignee":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1621002220000,"updated_at":1621002220000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"See description of #2255 for details.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2360\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2359","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2359\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2359\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2359\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2359","id":891946017,"node_id":"MDU6SXNzdWU4OTE5NDYwMTc=","number":2359,"title":"Allow model labels to be passed during task 
preparation","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1621000708000,"updated_at":1621000708000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"Models have a config with label2id. And we have the same for datasets with the ClassLabel feature type. At one point either the model or the dataset must sync with the other. It would be great to do that on the dataset side.\r\n\r\nFor example for sentiment classification on amazon reviews with you could have these labels:\r\n- \"1 star\", \"2 stars\", \"3 stars\", \"4 stars\", \"5 stars\"\r\n- \"1\", \"2\", \"3\", \"4\", \"5\"\r\n\r\nSome models may use the first set, while other models use the second set.\r\n\r\nHere in the `TextClassification` class, the user can only specify one set of labels, while many models could actually be compatible but have different sets of labels. Should we allow users to pass a list of compatible labels sets ?\r\n\r\nThen in terms of API, users could use `dataset.prepare_for_task(\"text-classification\", labels=model.labels)` or something like that.\r\n\r\nThe label set could also be the same but not in the same order. For NLI for example, some models use `[\"neutral\", \"entailment\", \"contradiction\"]` and some others use `[\"neutral\", \"contradiction\", \"entailment\"]`, so we should take care of updating the order of the labels in the dataset to match the labels order of the model.\r\n\r\nLet me know what you think ! 
This can be done in a future PR\r\n\r\n_Originally posted by @lhoestq in https:\/\/github.com\/huggingface\/datasets\/pull\/2255#discussion_r632412792_","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2359\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2358","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2358\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2358\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2358\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2358","id":891269577,"node_id":"MDExOlB1bGxSZXF1ZXN0NjQ0MTYyOTY2","number":2358,"title":"Roman Urdu Stopwords List","user":{"login":"devzohaib","id":58664161,"node_id":"MDQ6VXNlcjU4NjY0MTYx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/58664161?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/devzohaib","html_url":"https:\/\/github.com\/devzohaib","followers_url":"https:\/\/api.github.com\/users\/devzohaib\/followers","following_url":"https:\/\/api.github.com\/users\/devzohaib\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/devzohaib\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/devzohaib\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/devzohaib\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/devzohaib\/orgs","repos_url":"https:\/\/api.github.com\/users\/devzohaib\/repos","events_url":"https:\/\/api.github.com\/users\/devzohaib\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/devzohaib\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! 
Thanks for sharing :)\r\nI think the best place to share this is probably the `Languages at Hugging Face` section of the forum:\r\nhttps:\/\/discuss.huggingface.co\/c\/languages-at-hugging-face\/15\r\n\r\nSince this is not a dataset, I'm closing this PR if you don't mind","Thank you I will look into the link that you have shared with me.\n\n\n\n\nOn Mon, May 17, 2021 at 7:05 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> Closed #2358 .\n>\n> \u2014\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> , or\n> unsubscribe\n> \n> .\n>\n"],"created_at":1620930567000,"updated_at":1621414243000,"closed_at":1621260310000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2358","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2358","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2358.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2358.patch"},"body":"A list of most frequently used Roman Urdu words with different spellings and usages.\r\nThis is a very basic effort to collect some basic stopwords for Roman Urdu to help efforts of analyzing text data in roman Urdu which makes up a huge part of daily internet interaction of Roman-Urdu users.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2358\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2357","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2357\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2357\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2357\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2357","id":890595693,"node_id":"MDExOlB1bGxSZXF1ZXN0NjQzNTk0NDcz","number":2357,"title":"Adding Microsoft CodeXGlue Datasets","user":{"login":"ncoop57","id":7613470,"node_id":"MDQ6VXNlcjc2MTM0NzA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7613470?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ncoop57","html_url":"https:\/\/github.com\/ncoop57","followers_url":"https:\/\/api.github.com\/users\/ncoop57\/followers","following_url":"https:\/\/api.github.com\/users\/ncoop57\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ncoop57\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ncoop57\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ncoop57\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ncoop57\/orgs","repos_url":"https:\/\/api.github.com\/users\/ncoop57\/repos","events_url":"https:\/\/api.github.com\/users\/ncoop57\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ncoop57\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Oh one other thing. Mentioned in the PR was that I would need to regenerate the dataset_infos.json once the camel casing was done. 
However, I am unsure why this is the case since there is no reference to any object names in the dataset_infos.json file.\r\n\r\nIf it needs to be reran, I can try it do it on my own machine, but I've had a memory issues with a previous dataset due to my compute constraints so I'd prefer to hopefully avoid it all together if not necessary to regenerate.","Was just reviewing the `builder_name`s of each dataset and it seems like it is already following this format:\r\n\r\n`CodeXGlueCcCloneDetectionBigCloneBenchMain -> code_x_glue_cc_clone_detection_big_clone_bench_main` Is there a location I am missing?","> Was just reviewing the `builder_name`s of each dataset and it seems like it is already following this format:\r\n> \r\n> `CodeXGlueCcCloneDetectionBigCloneBenchMain -> code_x_glue_cc_clone_detection_big_clone_bench_main` Is there a location I am missing?\r\n\r\nIf it's already in this format then it's fine thanks ! It's all good then\r\n\r\nTo fix the CI you just need to add the `encoding=` parameters to the `open()` calls","@lhoestq I think everything should be good to go besides the code styling, which seem to be due to missing or unsupported metadata tags for the READMEs, is this something I should worry about since all the other datasets seem to be failing as well?","Awesome! Just committed your changes and I will begin on adding the TOCs and filling in the content for the new sections\/subsections.\r\n\r\nAlso, I see that we are having to only use the `code` tag instead of individual langs and I get that is required for indexing or showing available tags on the datasets hub. However, as a future feature, it might be good to add tags for individual programming languages to make it easier to search.","> Also, I see that we are having to only use the code tag instead of individual langs and I get that is required for indexing or showing available tags on the datasets hub. However, as a future feature, it might be good to add tags for individual programming languages to make it easier to search.\r\n\r\nYes I agree. We'll be able to reuse the tags per programming language from this PR when we allow this feature\r\n\r\ncc @yjernite what do you think about extending our languages taxonomy to programming languages ?","Hey @lhoestq, just finalizing the READMEs and testing them against the automated test. For the non, WIN tests, it seems like there is some dependency issue that doesn't have to do with the new datasets. For the WIN tests, it looks like some of the headings are mislabeled such as \"Supported Tasks and Leaderboards\" -> \"Supported Tasks\" in the TOC you posted. Should I base my TOC on the one you posted or on the one that the test script is using? Also, it throws errors for some of the fields being empty, such as \"Source Data\" in the `code_x_glue_tt_text_to_text` dataset. However, I am not familiar with this dataset, so I put the `[More Information Needed]` stub, similar to the other sections I couldn't easily answer. 
For some of the sections like \"Source Data\", is this info required?","Yes you're right, it is `Supported Tasks and Leaderboards` that we need to use, sorry about that\r\n\r\nI also noticed the same for the splits section: we have to use `Data Splits` (not Data Splits Sample Size)\r\n","Some subsections are also missing: `Initial Data Collection and Normalization`, `Who are the source language producers?`.\r\nIf you are interested you can fill those sections as well, or leave them empty for now.\r\nThis will also fix the error regarding \"Source Data\"\r\n\r\nYou can see the template of the readme here:\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/9d8bf36fdb861d9b2922d7c782fb58f9f542997c\/templates\/README.md","> > Also, I see that we are having to only use the code tag instead of individual langs and I get that is required for indexing or showing available tags on the datasets hub. However, as a future feature, it might be good to add tags for individual programming languages to make it easier to search.\r\n> \r\n> Yes I agree. We'll be able to reuse the tags per programming language from this PR when we allow this feature\r\n> \r\n> cc @yjernite what do you think about extending our languages taxonomy to programming languages ?\r\n\r\nSounds good, as long as they all share a prefix! maybe `code_cpp`, `code_java`, etc. ? \r\n\r\nI don't think we currently have `_` in language codes\/names, but also don't see what it would break *a priori*","We don't use `_` but there are some languages that use `-` though like `en-US`. Let's use `-` maybe, to match the same hierarchy pattern ?","Hi guys, I just started working on https:\/\/github.com\/huggingface\/datasets\/pull\/997 this morning and I just realized that you were finishing it... You may want to get the dataset cards from https:\/\/github.com\/madlag\/datasets, and maybe some code too, as I did a few things like moving _CITATION and _DESCRIPTION to globals.\r\n\r\n","I am renaming the main classes to match the dataset names, for example : CodeXGlueTcTextToCodeMain -> CodeXGlueTcTextToCode . And I am regenerating the dataset_infos.json accordingly.","Thanks for renaming the classes and updating the dataset_infos.json ! This looks all clean now :)\r\n\r\nThis PR looks all good to me :) One just needs to merge master into this branch to make sure the CI is green with the latest changes. It should also fix the current CI issues that are not related to this PR","Woot woot :rocket:! All green, looks like it is ready for showtime. Thank you both @lhoestq and especially @madlag, I think these datasets are going to be a great new addition to :hugs: datasets and I can't wait to use them in my research :nerd_face:.","Thanks @ncoop57 for you contribution! It will be really cool to see those datasets used as soon as they are released !"],"created_at":1620866581000,"updated_at":1623144597000,"closed_at":1623144597000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2357","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2357","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2357.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2357.patch"},"body":"Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. 
However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:. \r\n\r\nI believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag \"code\" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the \"code\" tag @lhoestq.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2357\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2356","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2356\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2356\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2356\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2356","id":890511019,"node_id":"MDU6SXNzdWU4OTA1MTEwMTk=","number":2356,"title":"How to Add New Metrics Guide","user":{"login":"ncoop57","id":7613470,"node_id":"MDQ6VXNlcjc2MTM0NzA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7613470?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ncoop57","html_url":"https:\/\/github.com\/ncoop57","followers_url":"https:\/\/api.github.com\/users\/ncoop57\/followers","following_url":"https:\/\/api.github.com\/users\/ncoop57\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ncoop57\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ncoop57\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ncoop57\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ncoop57\/orgs","repos_url":"https:\/\/api.github.com\/users\/ncoop57\/repos","events_url":"https:\/\/api.github.com\/users\/ncoop57\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ncoop57\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! sorry for the late response \r\n\r\nIt would be fantastic to have a guide for adding metrics as well ! Currently we only have this template here:\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/master\/templates\/new_metric_script.py\r\n\r\nWe can also include test utilities for metrics in the guide.\r\n\r\nWe have a pytest suite with commands that you can use to make sure your metric works as expected.\r\nIt has two useful commands:\r\n\r\n1. 
This commands tests the code in the `Examples:` desction of the docstring of the metric:\r\n```\r\npytest tests\/test_metric_common.py::LocalMetricTest::test_load_metric_\r\n```\r\nThis will run this code for example:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/e0787aa2a781cc15a80f7597f56d1f12e23df4c9\/metrics\/accuracy\/accuracy.py#L40-L45\r\n\r\nMoreover this test is meant to be fast so users are free to add patches to the metric to avoid intensive computations.\r\nAnd example of intensive call patch can be found here:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/e0787aa2a781cc15a80f7597f56d1f12e23df4c9\/tests\/test_metric_common.py#L138-L151\r\n\r\n2. This test runs the same thing as 1. except that it doesn't use patches (the real metric is used):\r\n```\r\nRUN_SLOW=1 pytest tests\/test_metric_common.py::LocalMetricTest::test_load_metric_\r\n```\r\n\r\nFinally additional metric-specific tests can be added to `test_metric_common.py`.\r\n\r\nVoila :) Feel free to ping me if you have any question or if I can help\r\n"],"created_at":1620855726000,"updated_at":1622486975000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"**Is your feature request related to a problem? Please describe.**\r\nCurrently there is an absolutely fantastic guide for how to contribute a new dataset to the library. However, there isn't one for adding new metrics.\r\n\r\n**Describe the solution you'd like**\r\nI'd like for a guide in a similar style to the dataset guide for adding metrics. I believe many of the content in the dataset guide such as setup can be easily copied over with minimal changes. Also, from what I've seen with existing metrics, it shouldn't be as complicated, especially in documentation of the metric, mainly just citation and usage. The most complicated part I see would be in automated tests that run the new metrics, but y'all's test suite seem pretty comprehensive, so it might not be that hard.\r\n\r\n**Describe alternatives you've considered**\r\nOne alternative would be just not having the metrics be community generated and so would not need a step by step guide. New metrics would just be proposed as issues and the internal team would take care of them. However, I think it makes more sense to have a step by step guide for contributors to follow.\r\n\r\n**Additional context**\r\nI'd be happy to help with creating this guide as I am very interested in adding software engineering metrics to the library :nerd_face:, the part I would need guidance on would be testing.\r\n\r\nP.S. Love the library and community y'all have built! 
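To make the first command above concrete, here is a sketch of the kind of snippet that lives under the `Examples:` heading of a metric docstring and that the doctest-style test executes (values are illustrative):

```python
import datasets

# Sketch of an "Examples:" snippet for the accuracy metric; the fast test
# runs this with patches, while the RUN_SLOW variant runs the real computation.
accuracy_metric = datasets.load_metric("accuracy")
results = accuracy_metric.compute(references=[0, 1], predictions=[0, 1])
print(results)  # {'accuracy': 1.0}
```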
:hugs: \r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2356\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2355","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2355\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2355\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2355\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2355","id":890484408,"node_id":"MDExOlB1bGxSZXF1ZXN0NjQzNDk5NTIz","number":2355,"title":"normalized TOCs and titles in data cards","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Oh right! I'd be in favor of still having the same TOC across the board, we can either leave it as is or add a `[More Info Needed]` `Contributions` Section wherever it's currently missing, wdyt?","(I thought those were programmatically updated based on git history :D )","Merging for now to avoid conflict since there are so many changes but let's figure out the contributions section next ;) "],"created_at":1620853199000,"updated_at":1620998592000,"closed_at":1620998592000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2355","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2355","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2355.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2355.patch"},"body":"I started fixing some of the READMEs that were failing the tests introduced by @gchhablani but then realized that there were some consistent differences between earlier and newer versions of some of the titles (e.g. Data Splits vs Data Splits Sample Size, Supported Tasks vs Supported Tasks and Leaderboards). 
We also had different versions of the Table of Content\r\n\r\nThis PR normalizes all of them to the newer version","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2355\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2354","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2354\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2354\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2354\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2354","id":890439523,"node_id":"MDU6SXNzdWU4OTA0Mzk1MjM=","number":2354,"title":"Document DatasetInfo attributes","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or 
request"}],"state":"closed","locked":false,"assignee":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1620849689000,"updated_at":1621675574000,"closed_at":1621675574000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"**Is your feature request related to a problem? Please describe.**\r\nAs noted in PR #2255, the attributes of `DatasetInfo` are not documented in the [docs](https:\/\/huggingface.co\/docs\/datasets\/package_reference\/main_classes.html?highlight=datasetinfo#datasetinfo). 
It would be nice to do so :)\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2354\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2353","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2353\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2353\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2353\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2353","id":890296262,"node_id":"MDExOlB1bGxSZXF1ZXN0NjQzMzM4MDcz","number":2353,"title":"Update README vallidation rules","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1620838646000,"updated_at":1620982566000,"closed_at":1620982566000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2353","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2353","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2353.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2353.patch"},"body":"This PR allows unexpected subsections under third-level headings. 
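As a side note on the `DatasetInfo` request above: the attributes are already reachable at runtime even while undocumented; a small sketch (the dataset id is illustrative):

```python
from datasets import load_dataset

# Minimal sketch of inspecting DatasetInfo attributes on a loaded dataset.
ds = load_dataset("squad", split="train")
info = ds.info
print(info.description)  # free-text description
print(info.citation)     # BibTeX citation
print(info.features)     # column schema
print(info.splits)       # split names and sizes
```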
All except `Contributions`.\r\n\r\n@lhoestq ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2353\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2352","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2352\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2352\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2352\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2352","id":889810100,"node_id":"MDExOlB1bGxSZXF1ZXN0NjQyOTI4NTgz","number":2352,"title":"Set to_json default to JSON lines","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This is perfect, @albertvillanova - thank you! Tested it to work.\r\n\r\nMight it be a good idea to document the args to `to_json`?\r\n\r\nand also even a very basic progress bar? 
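For context on the `to_json` arguments discussed in this thread, a small sketch of how `lines` and `orient` are used (paths and dataset id are illustrative):

```python
from datasets import load_dataset

# Sketch of Dataset.to_json with the two arguments under discussion.
ds = load_dataset("squad", split="train[:100]")

# JSON Lines output (one object per line) -- the new default after this PR.
ds.to_json("squad_sample.jsonl", lines=True, orient="records")

# A single JSON array instead, when a plain .json file is preferred.
ds.to_json("squad_sample.json", lines=False, orient="records")
```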
took 10min for 8M large records for `openwebtext` so perhaps some indication of it's being alive every min or so?","@lhoestq I added tests for both `lines` and `orient`."],"created_at":1620807565000,"updated_at":1621587674000,"closed_at":1621587673000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2352","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2352","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2352.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2352.patch"},"body":"With this PR, the method `Dataset.to_json`:\r\n- is added to the docs\r\n- defaults to JSON lines","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2352\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2351","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2351\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2351\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2351\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2351","id":889584953,"node_id":"MDExOlB1bGxSZXF1ZXN0NjQyNzI5NDIz","number":2351,"title":"simpllify faiss index save","user":{"login":"Guitaricet","id":2821124,"node_id":"MDQ6VXNlcjI4MjExMjQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2821124?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Guitaricet","html_url":"https:\/\/github.com\/Guitaricet","followers_url":"https:\/\/api.github.com\/users\/Guitaricet\/followers","following_url":"https:\/\/api.github.com\/users\/Guitaricet\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Guitaricet\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Guitaricet\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Guitaricet\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Guitaricet\/orgs","repos_url":"https:\/\/api.github.com\/users\/Guitaricet\/repos","events_url":"https:\/\/api.github.com\/users\/Guitaricet\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Guitaricet\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1620791650000,"updated_at":1621258901000,"closed_at":1621258901000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2351","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2351","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2351.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2351.patch"},"body":"Fixes #2350\r\n\r\nIn some cases, Faiss GPU index objects do not have neither \"device\" nor \"getDevice\". Possibly this happens when some part of the index is computed on CPU.\r\n\r\nIn particular, this would happen with the index `OPQ16_128,IVF512,PQ32` (issue #2350). 
I did check it, but it is likely that `OPQ` or `PQ` transforms cause it.\r\n\r\nI propose, instead of using the index object to get the device, to infer it form the `FaissIndex.device` field as it is done in `.add_vectors`. Here we assume that `.device` always corresponds to the index placement and it seems reasonable. ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2351\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2350","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2350\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2350\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2350\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2350","id":889580247,"node_id":"MDU6SXNzdWU4ODk1ODAyNDc=","number":2350,"title":"`FaissIndex.save` throws error on GPU","user":{"login":"Guitaricet","id":2821124,"node_id":"MDQ6VXNlcjI4MjExMjQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2821124?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Guitaricet","html_url":"https:\/\/github.com\/Guitaricet","followers_url":"https:\/\/api.github.com\/users\/Guitaricet\/followers","following_url":"https:\/\/api.github.com\/users\/Guitaricet\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Guitaricet\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Guitaricet\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Guitaricet\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Guitaricet\/orgs","repos_url":"https:\/\/api.github.com\/users\/Guitaricet\/repos","events_url":"https:\/\/api.github.com\/users\/Guitaricet\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Guitaricet\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Just in case, this is a workaround that I use in my code and it seems to do the job.\r\n\r\n```python\r\nif use_gpu_index:\r\n data[\"train\"]._indexes[\"text_emb\"].faiss_index = faiss.index_gpu_to_cpu(data[\"train\"]._indexes[\"text_emb\"].faiss_index)\r\n```"],"created_at":1620790916000,"updated_at":1621258901000,"closed_at":1621258901000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\nAfter training an index with a factory string `OPQ16_128,IVF512,PQ32` on GPU, `.save_faiss_index` throws this error.\r\n\r\n```\r\n File \"index_wikipedia.py\", line 119, in \r\n data[\"train\"].save_faiss_index(\"text_emb\", index_save_path)\r\n File \"\/home\/vlialin\/miniconda3\/envs\/cat\/lib\/python3.8\/site-packages\/datasets\/search.py\", line 470, in save_faiss_index\r\n index.save(file)\r\n File \"\/home\/vlialin\/miniconda3\/envs\/cat\/lib\/python3.8\/site-packages\/datasets\/search.py\", line 334, in save\r\n faiss.write_index(index, str(file))\r\n File \"\/home\/vlialin\/miniconda3\/envs\/cat\/lib\/python3.8\/site-packages\/faiss\/swigfaiss_avx2.py\", line 
5654, in write_index\r\n return _swigfaiss.write_index(*args)\r\nRuntimeError: Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at \/root\/miniconda3\/conda-bld\/faiss-pkg_1613235005464\/work\/faiss\/impl\/index_write.cpp:453: don't know how to serialize this type of index\r\n```\r\n\r\n## Steps to reproduce the bug\r\n\r\nAny dataset will do, I just selected a familiar one.\r\n\r\n```python\r\nimport numpy as np\r\nimport datasets\r\nINDEX_STR = \"OPQ16_128,IVF512,PQ32\"\r\nINDEX_SAVE_PATH = \"will_not_save.faiss\"\r\n\r\ndata = datasets.load_dataset(\"Fraser\/news-category-dataset\", split=f\"train[:10000]\")\r\n\r\ndef encode(item):\r\n return {\"text_emb\": np.random.randn(768).astype(np.float32)}\r\n\r\ndata = data.map(encode)\r\n\r\ndata.add_faiss_index(column=\"text_emb\", string_factory=INDEX_STR, train_size=10_000, device=0)\r\ndata.save_faiss_index(\"text_emb\", INDEX_SAVE_PATH)\r\n```\r\n\r\n## Expected results\r\nSaving the index\r\n\r\n## Actual results\r\nError in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) ... don't know how to serialize this type of index\r\n\r\n## Environment info\r\n- `datasets` version: 1.6.2\r\n- Platform: Linux-4.15.0-142-generic-x86_64-with-glibc2.10\r\n- Python version: 3.8.8\r\n- PyTorch version (GPU?): 1.8.1+cu111 (True)\r\n- Tensorflow version (GPU?): 2.2.0 (False)\r\n- Using GPU in script?: Yes\r\n- Using distributed or parallel set-up in script?: No\r\n\r\n\r\nI will be proposing a fix in a couple of minutes","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2350\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2349","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2349\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2349\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2349\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2349","id":888586018,"node_id":"MDExOlB1bGxSZXF1ZXN0NjQxNzYzNzg3","number":2349,"title":"Update task_ids for Ascent 
KB","user":{"login":"phongnt570","id":6749421,"node_id":"MDQ6VXNlcjY3NDk0MjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6749421?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/phongnt570","html_url":"https:\/\/github.com\/phongnt570","followers_url":"https:\/\/api.github.com\/users\/phongnt570\/followers","following_url":"https:\/\/api.github.com\/users\/phongnt570\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/phongnt570\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/phongnt570\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/phongnt570\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/phongnt570\/orgs","repos_url":"https:\/\/api.github.com\/users\/phongnt570\/repos","events_url":"https:\/\/api.github.com\/users\/phongnt570\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/phongnt570\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1620765873000,"updated_at":1621248794000,"closed_at":1621248514000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2349","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2349","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2349.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2349.patch"},"body":"This \"other-other-knowledge-base\" task is better suited for the dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2349\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2348","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2348\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2348\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2348\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2348","id":887927737,"node_id":"MDExOlB1bGxSZXF1ZXN0NjQxMTMwOTM4","number":2348,"title":"Add tests for dataset cards","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq\r\n\r\nShould I remove 
the scripts? or atleast remove running them from the CircleCI config?\r\n\r\nAlso, I hope it is okay that the combined method (metadata+content) is only a slow test, and for the Circle CI, I assume only non-slow tests are run? If yes, this would mean separate tests for content and metadata.","Also feel free to remove the scripts from the CI and also remove the scripts files :)"],"created_at":1620753267000,"updated_at":1621599047000,"closed_at":1621599047000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2348","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2348","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2348.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2348.patch"},"body":"Adding tests for dataset cards\r\n\r\nThis PR will potentially remove the scripts being used for dataset tags and readme validation.\r\n\r\nAdditionally, this will allow testing dataset readmes by providing the name as follows:\r\n\r\n```bash\r\npytest tests\/test_dataset_cards.py::test_dataset_tags[fashion_mnist]\r\n```\r\nand\r\n\r\n```bash\r\npytest tests\/test_dataset_cards.py::test_readme_content[fashion_mnist]\r\n```\r\nor a combined test as:\r\n\r\n```bash\r\npytest tests\/test_dataset_cards.py::test_dataset_card[fashion_mnist]\r\n```\r\n@lhoestq ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2348\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2347","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2347\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2347\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2347\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2347","id":887404868,"node_id":"MDU6SXNzdWU4ODc0MDQ4Njg=","number":2347,"title":"Add an API to access the language and pretty name of a dataset","user":{"login":"sgugger","id":35901082,"node_id":"MDQ6VXNlcjM1OTAxMDgy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35901082?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sgugger","html_url":"https:\/\/github.com\/sgugger","followers_url":"https:\/\/api.github.com\/users\/sgugger\/followers","following_url":"https:\/\/api.github.com\/users\/sgugger\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sgugger\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sgugger\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sgugger\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sgugger\/orgs","repos_url":"https:\/\/api.github.com\/users\/sgugger\/repos","events_url":"https:\/\/api.github.com\/users\/sgugger\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sgugger\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! 
With @bhavitvyamalik we discussed about having something like\r\n```python\r\nfrom datasets import load_dataset_card\r\n\r\ndataset_card = load_dataset_card(\"squad\")\r\nprint(dataset_card.metadata.pretty_name)\r\n# Stanford Question Answering Dataset (SQuAD)\r\nprint(dataset_card.metadata.languages)\r\n# [\"en\"]\r\n\r\n```\r\nWhat do you think ?\r\n\r\nI don't know if you already have a way to load the model tags in `transformers` but we can agree on the API to have something consistent.\r\n\r\nAlso note that the pretty name would only be used to show users something prettier than a dataset id, but in the end the source of truth will stay the dataset id (here `squad`).","That works for me!","maybe use the hub-backed dataset_info method? (so there's only one parser of README.md metadata)?","What dataset_info method are you talking about @julien-c ? In `huggingface_hub` I can only see `model_info`.","hmm the equivalent method in `datasets` (which could go into `huggingface_hub` at some point)"],"created_at":1620742208000,"updated_at":1621589206000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"It would be super nice to have an API to get some metadata of the dataset from the name and args passed to `load_dataset`. This way we could programmatically infer the language and the name of a dataset when creating model cards automatically in the Transformers examples scripts.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2347\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2346","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2346\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2346\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2346\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2346","id":886632114,"node_id":"MDExOlB1bGxSZXF1ZXN0NjM5OTAzMjk3","number":2346,"title":"Add Qasper Dataset","user":{"login":"cceyda","id":15624271,"node_id":"MDQ6VXNlcjE1NjI0Mjcx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15624271?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cceyda","html_url":"https:\/\/github.com\/cceyda","followers_url":"https:\/\/api.github.com\/users\/cceyda\/followers","following_url":"https:\/\/api.github.com\/users\/cceyda\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cceyda\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cceyda\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cceyda\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cceyda\/orgs","repos_url":"https:\/\/api.github.com\/users\/cceyda\/repos","events_url":"https:\/\/api.github.com\/users\/cceyda\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cceyda\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I saw that the README [template](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/templates\/README.md) changed while I was working on this \ud83d\ude05 Some TOC titles may be different but I filled it to the best of my knowledge & readme quality check passes now.\r\nready 
for review @lhoestq "],"created_at":1620725144000,"updated_at":1621340908000,"closed_at":1621340908000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2346","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2346","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2346.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2346.patch"},"body":"[Question Answering on Scientific Research Papers](https:\/\/allenai.org\/project\/qasper\/home)\r\n\r\nDoing NLP on NLP papers to do NLP \u267b\ufe0f I had to add it~\r\n\r\n- [x] Add README (just gotta fill out some more )\r\n- [x] Dataloader code\r\n- [x] Make dummy dataset\r\n- [x] generate dataset infos\r\n- [x] Tests\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2346\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2345","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2345\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2345\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2345\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2345","id":886586872,"node_id":"MDU6SXNzdWU4ODY1ODY4NzI=","number":2345,"title":"[Question] How to move and reuse preprocessed dataset? ","user":{"login":"AtmaHou","id":15045402,"node_id":"MDQ6VXNlcjE1MDQ1NDAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15045402?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/AtmaHou","html_url":"https:\/\/github.com\/AtmaHou","followers_url":"https:\/\/api.github.com\/users\/AtmaHou\/followers","following_url":"https:\/\/api.github.com\/users\/AtmaHou\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/AtmaHou\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/AtmaHou\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/AtmaHou\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/AtmaHou\/orgs","repos_url":"https:\/\/api.github.com\/users\/AtmaHou\/repos","events_url":"https:\/\/api.github.com\/users\/AtmaHou\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/AtmaHou\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq @LysandreJik","Hi :) Can you share with us the code you used ?<\/s>\r\n\r\nEDIT: from https:\/\/github.com\/huggingface\/transformers\/issues\/11665#issuecomment-838348291 I understand you're using the run_clm.py script. Can you share your logs ?\r\n","Also note that for the caching to work, you must reuse the exact same parameters as in the first run. Did you change any parameter ? The `preprocessing_num_workers` should also stay the same","> Also note that for the caching to work, you must reuse the exact same parameters as in the first run. Did you change any parameter ? 
The `preprocessing_num_workers` should also stay the same\r\n\r\nI only changed the `preprocessing_num_workers` maybe it is the problem~ I will try again~"],"created_at":1620724157000,"updated_at":1623386351000,"closed_at":1623386351000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi, I am training a gpt-2 from scratch using run_clm.py.\r\n\r\nI want to move and reuse the preprocessed dataset (It take 2 hour to preprocess),\r\n\r\nI tried to :\r\n\r\ncopy path_to_cache_dir\/datasets to new_cache_dir\/datasets\r\nset export HF_DATASETS_CACHE=\"new_cache_dir\/\"\r\nbut the program still re-preprocess the whole dataset without loading cache.\r\n\r\nI also tried to torch.save(lm_datasets, fw), but the saved file is only 14M.\r\n\r\nWhat is the proper way to do this?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2345\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2344","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2344\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2344\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2344\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2344","id":885331505,"node_id":"MDU6SXNzdWU4ODUzMzE1MDU=","number":2344,"title":"Is there a way to join multiple datasets in one?","user":{"login":"alexvaca0","id":35173563,"node_id":"MDQ6VXNlcjM1MTczNTYz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35173563?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/alexvaca0","html_url":"https:\/\/github.com\/alexvaca0","followers_url":"https:\/\/api.github.com\/users\/alexvaca0\/followers","following_url":"https:\/\/api.github.com\/users\/alexvaca0\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/alexvaca0\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/alexvaca0\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/alexvaca0\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/alexvaca0\/orgs","repos_url":"https:\/\/api.github.com\/users\/alexvaca0\/repos","events_url":"https:\/\/api.github.com\/users\/alexvaca0\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/alexvaca0\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! We don't have `join`\/`merge` on a certain column as in pandas.\r\nMaybe you can just use the [concatenate_datasets](https:\/\/huggingface.co\/docs\/datasets\/package_reference\/main_classes.html?highlight=concatenate#datasets.concatenate_datasets) function.\r\n"],"created_at":1620688570000,"updated_at":1620721488000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"**Is your feature request related to a problem? Please describe.**\nI need to join 2 datasets, one that is in the hub and another I've created from my files. Is there an easy way to join these 2? 
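For reference, a minimal sketch of the `concatenate_datasets` suggestion from the comment above; note it assumes the two datasets share the same features, since there is no SQL-style join on a key column:

```python
from datasets import Dataset, concatenate_datasets

# Minimal sketch: both datasets must have identical column names and types.
ds_a = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1]})
ds_b = Dataset.from_dict({"text": ["c"], "label": [1]})

combined = concatenate_datasets([ds_a, ds_b])
print(len(combined))  # 3
print(combined[2])    # {'text': 'c', 'label': 1}
```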
\n\n**Describe the solution you'd like**\nId like to join them with a merge or join method, just like pandas dataframes. \n\n**Additional context**\nIf you want to extend an existing dataset with more data, for example for training a language model, you need that functionality. I've not found it in the documentation.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2344\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2343","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2343\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2343\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2343\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2343","id":883208539,"node_id":"MDU6SXNzdWU4ODMyMDg1Mzk=","number":2343,"title":"Columns are removed before or after map function applied?","user":{"login":"taghizad3h","id":8199406,"node_id":"MDQ6VXNlcjgxOTk0MDY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8199406?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/taghizad3h","html_url":"https:\/\/github.com\/taghizad3h","followers_url":"https:\/\/api.github.com\/users\/taghizad3h\/followers","following_url":"https:\/\/api.github.com\/users\/taghizad3h\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/taghizad3h\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/taghizad3h\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/taghizad3h\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/taghizad3h\/orgs","repos_url":"https:\/\/api.github.com\/users\/taghizad3h\/repos","events_url":"https:\/\/api.github.com\/users\/taghizad3h\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/taghizad3h\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1620614180000,"updated_at":1620614180000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nAccording to the documentation when applying map function the [remove_columns ](https:\/\/huggingface.co\/docs\/datasets\/processing.html#removing-columns) will be removed after they are passed to the function, but in the [source code](https:\/\/huggingface.co\/docs\/datasets\/package_reference\/main_classes.html#datasets.Dataset.map) it's documented that they are removed before applying function. 
I thinks the source code doc is more accurate, right?\r\n\r\n\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2343\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2342","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2342\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2342\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2342\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2342","id":882981420,"node_id":"MDExOlB1bGxSZXF1ZXN0NjM2NDg0MzM3","number":2342,"title":"Docs - CER above 1","user":{"login":"borisdayma","id":715491,"node_id":"MDQ6VXNlcjcxNTQ5MQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/715491?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/borisdayma","html_url":"https:\/\/github.com\/borisdayma","followers_url":"https:\/\/api.github.com\/users\/borisdayma\/followers","following_url":"https:\/\/api.github.com\/users\/borisdayma\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/borisdayma\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/borisdayma\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/borisdayma\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/borisdayma\/orgs","repos_url":"https:\/\/api.github.com\/users\/borisdayma\/repos","events_url":"https:\/\/api.github.com\/users\/borisdayma\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/borisdayma\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1620603660000,"updated_at":1620653640000,"closed_at":1620653640000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2342","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2342","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2342.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2342.patch"},"body":"CER can actually be greater than 1.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2342\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2341","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2341\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2341\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2341\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2341","id":882370933,"node_id":"MDExOlB1bGxSZXF1ZXN0NjM1OTExODI2","number":2341,"title":"Added the Ascent 
KB","user":{"login":"phongnt570","id":6749421,"node_id":"MDQ6VXNlcjY3NDk0MjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6749421?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/phongnt570","html_url":"https:\/\/github.com\/phongnt570","followers_url":"https:\/\/api.github.com\/users\/phongnt570\/followers","following_url":"https:\/\/api.github.com\/users\/phongnt570\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/phongnt570\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/phongnt570\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/phongnt570\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/phongnt570\/orgs","repos_url":"https:\/\/api.github.com\/users\/phongnt570\/repos","events_url":"https:\/\/api.github.com\/users\/phongnt570\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/phongnt570\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for approving it!"],"created_at":1620569859000,"updated_at":1620724619000,"closed_at":1620724619000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2341","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2341","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2341.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2341.patch"},"body":"Added the Ascent Commonsense KB of 8.9M assertions.\r\n\r\n- Paper: [Advanced Semantics for Commonsense Knowledge Extraction (WWW'21)](https:\/\/arxiv.org\/abs\/2011.00905)\r\n- Website: https:\/\/ascent.mpi-inf.mpg.de\/\r\n\r\n(I am the author of the dataset)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2341\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2340","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2340\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2340\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2340\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2340","id":882370824,"node_id":"MDExOlB1bGxSZXF1ZXN0NjM1OTExNzIx","number":2340,"title":"More consistent copy 
logic","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1620569853000,"updated_at":1620723513000,"closed_at":1620723513000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2340","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2340","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2340.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2340.patch"},"body":"Use `info.copy()` instead of `copy.deepcopy(info)`.\r\n`Features.copy` now creates a deep copy.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2340\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2338","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2338\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2338\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2338\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2338","id":882046077,"node_id":"MDExOlB1bGxSZXF1ZXN0NjM1NjA3NzQx","number":2338,"title":"fixed download link for 
web_science","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1620551540000,"updated_at":1620653753000,"closed_at":1620653753000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2338","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2338","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2338.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2338.patch"},"body":"Fixes #2337. Should work with:\r\n`dataset = load_dataset(\"web_of_science\", \"WOS11967\", ignore_verifications=True)`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2338\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2337","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2337\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2337\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2337\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2337","id":881610567,"node_id":"MDU6SXNzdWU4ODE2MTA1Njc=","number":2337,"title":"NonMatchingChecksumError for web_of_science 
dataset","user":{"login":"nbroad1881","id":24982805,"node_id":"MDQ6VXNlcjI0OTgyODA1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24982805?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nbroad1881","html_url":"https:\/\/github.com\/nbroad1881","followers_url":"https:\/\/api.github.com\/users\/nbroad1881\/followers","following_url":"https:\/\/api.github.com\/users\/nbroad1881\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nbroad1881\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nbroad1881\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nbroad1881\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nbroad1881\/orgs","repos_url":"https:\/\/api.github.com\/users\/nbroad1881\/repos","events_url":"https:\/\/api.github.com\/users\/nbroad1881\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nbroad1881\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I've raised a PR for this. Should work with `dataset = load_dataset(\"web_of_science\", \"WOS11967\", ignore_verifications=True)`once it gets merged into the main branch. Thanks for reporting this! "],"created_at":1620525722000,"updated_at":1620653753000,"closed_at":1620653753000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"NonMatchingChecksumError when trying to download the web_of_science dataset. \r\n\r\n>NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https:\/\/data.mendeley.com\/datasets\/9rw3vkcfy4\/6\/files\/c9ea673d-5542-44c0-ab7b-f1311f7d61df\/WebOfScience.zip?dl=1']\r\n\r\nSetting `ignore_verfications=True` results in OSError.\r\n\r\n>OSError: Cannot find data file. \r\nOriginal error:\r\n[Errno 20] Not a directory: '\/root\/.cache\/huggingface\/datasets\/downloads\/37ab2c42f50d553c1d0ea432baca3e9e11fedea4aeec63a81e6b7e25dd10d4e7\/WOS5736\/X.txt'\r\n\r\n```python\r\ndataset = load_dataset('web_of_science', 'WOS5736')\r\n```\r\nThere are 3 data instances and they all don't work. 
'WOS5736', 'WOS11967', 'WOS46985'\r\n\r\ndatasets 1.6.2\r\npython 3.7.10\r\nUbuntu 18.04.5 LTS","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2337\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2336","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2336\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2336\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2336\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2336","id":881298783,"node_id":"MDExOlB1bGxSZXF1ZXN0NjM0ODk1OTU5","number":2336,"title":"Fix overflow issue in interpolation search","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["~~Seems like the CI failure is unrelated to this PR~~ (fixed with the merge). \r\n\r\n@lhoestq Can you please verify that everything is OK in terms of speed? Another solution is to change the offsets array dtype to np.int64 (but this doesn't scale in theory compared to Python integer which is unbound). I'm not sure why on my 64-bit machine the default numpy dtype is np.int32 tho.","Hi ! Thanks for the fix.\r\nUnfortunately in terms of speed this is not acceptable :\/\r\nThe `get_batch_of_1024_random_rows` metric or the `benchmark_getitem_100B ` benchmark is almost at 1sec instead of a few milliseconds.\r\n\r\nWould it be possible to avoid the overflow by simply passing `dtype=np.int64` to `np.cumsum` ?\r\nOn windows machines the default is int32 unfortunately so we have to force the dtype to be int64\r\n\r\n","Yes, casting the array to np.int64 should work as well. Another option would be to cast the array elements (`arr[i], arr[j]`) in interpolation search to Python integers (bound only with memory) before multiplication (the error stems from this part: `(j - i) * (x - arr[i])`) when working with big values. 
But for now, the first option is OK for the sake of simplicity."],"created_at":1620507096000,"updated_at":1620653347000,"closed_at":1620653172000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2336","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2336","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2336.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2336.patch"},"body":"Fixes #2335 \r\n\r\nMore info about this error can be found [here](https:\/\/stackoverflow.com\/questions\/53239890\/why-do-i-keep-getting-this-error-runtimewarning-overflow-encountered-in-int-sc\/53240100). ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2336\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2335","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2335\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2335\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2335\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2335","id":881291887,"node_id":"MDU6SXNzdWU4ODEyOTE4ODc=","number":2335,"title":"Index error in Dataset.map","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1620506697000,"updated_at":1620653172000,"closed_at":1620653172000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"The following code, if executed on master, raises an IndexError (due to overflow):\r\n```python\r\n>>> from datasets import *\r\n>>> d = load_dataset(\"bookcorpus\", split=\"train\")\r\nReusing dataset bookcorpus (C:\\Users\\Mario\\.cache\\huggingface\\datasets\\bookcorpus\\plain_text\\1.0.0\\44662c4a114441c35200992bea923b170e6f13f2f0beb7c14e43759cec498700)\r\n2021-05-08 21:23:46.859818: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll\r\n>>> d.map(lambda ex: ex)\r\n 0%|\u258e | 
289430\/74004228 [00:13<58:41, 20935.33ex\/s]c:\\users\\mario\\desktop\\projects\\datasets-1\\src\\datasets\\table.py:84: RuntimeWarning: overflow encountered in int_scalars\r\n k = i + ((j - i) * (x - arr[i]) \/\/ (arr[j] - arr[i]))\r\n 0%|\u258e | 290162\/74004228 [00:13<59:11, 20757.23ex\/s]\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"c:\\users\\mario\\desktop\\projects\\datasets-1\\src\\datasets\\arrow_dataset.py\", line 1498, in map\r\n new_fingerprint=new_fingerprint,\r\n File \"c:\\users\\mario\\desktop\\projects\\datasets-1\\src\\datasets\\arrow_dataset.py\", line 174, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"c:\\users\\mario\\desktop\\projects\\datasets-1\\src\\datasets\\fingerprint.py\", line 340, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"c:\\users\\mario\\desktop\\projects\\datasets-1\\src\\datasets\\arrow_dataset.py\", line 1799, in _map_single\r\n for i, example in enumerate(pbar):\r\n File \"C:\\Users\\Mario\\Anaconda3\\envs\\hf-datasets\\lib\\site-packages\\tqdm\\std.py\", line 1133, in __iter__\r\n for obj in iterable:\r\n File \"c:\\users\\mario\\desktop\\projects\\datasets-1\\src\\datasets\\arrow_dataset.py\", line 1145, in __iter__\r\n format_kwargs=format_kwargs,\r\n File \"c:\\users\\mario\\desktop\\projects\\datasets-1\\src\\datasets\\arrow_dataset.py\", line 1337, in _getitem\r\n pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)\r\n File \"c:\\users\\mario\\desktop\\projects\\datasets-1\\src\\datasets\\formatting\\formatting.py\", line 368, in query_table\r\n pa_subtable = _query_table(table, key)\r\n File \"c:\\users\\mario\\desktop\\projects\\datasets-1\\src\\datasets\\formatting\\formatting.py\", line 79, in _query_table\r\n return table.fast_slice(key % table.num_rows, 1)\r\n File \"c:\\users\\mario\\desktop\\projects\\datasets-1\\src\\datasets\\table.py\", line 128, in fast_slice\r\n i = _interpolation_search(self._offsets, offset)\r\n File \"c:\\users\\mario\\desktop\\projects\\datasets-1\\src\\datasets\\table.py\", line 91, in _interpolation_search\r\n raise IndexError(f\"Invalid query '{x}' for size {arr[-1] if len(arr) else 'none'}.\")\r\nIndexError: Invalid query '290162' for size 74004228.\r\n```\r\nTested on Windows, can run on Linux if needed.\r\n\r\nEDIT:\r\nIt seems like for this to happen, the default NumPy dtype has to be np.int32.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2335\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2334","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2334\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2334\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2334\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2334","id":879810107,"node_id":"MDExOlB1bGxSZXF1ZXN0NjMzNTAzNTEw","number":2334,"title":"Updating the DART file checksums in 
GEM","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@sebastianGehrmann "],"created_at":1620424424000,"updated_at":1620425890000,"closed_at":1620425890000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2334","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2334","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2334.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2334.patch"},"body":"The DART files were just updated on the source GitHub\r\n\r\nhttps:\/\/github.com\/Yale-LILY\/dart\/commit\/34b3c872da4811523e334f1631e54ca8105dffab","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2334\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2333","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2333\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2333\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2333\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2333","id":879214067,"node_id":"MDExOlB1bGxSZXF1ZXN0NjMyOTUwNzIy","number":2333,"title":"Fix duplicate keys","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["- @jplu 
"],"created_at":1620401288000,"updated_at":1620510451000,"closed_at":1620403028000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2333","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2333","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2333.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2333.patch"},"body":"As noticed in https:\/\/github.com\/huggingface\/datasets\/pull\/2245, many datasets yield duplicate keys.\r\nMost of the time it was because the counter used for ids were reset at each new data file.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2333\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2332","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2332\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2332\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2332\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2332","id":879041608,"node_id":"MDExOlB1bGxSZXF1ZXN0NjMyNzk1NDE4","number":2332,"title":"Add note about indices mapping in save_to_disk docstring","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1620395382000,"updated_at":1620408048000,"closed_at":1620408048000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2332","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2332","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2332.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2332.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2332\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2331","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2331\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2331\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2331\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2331","id":879031427,"node_id":"MDU6SXNzdWU4NzkwMzE0Mjc=","number":2331,"title":"Add Topical-Chat","user":{"login":"ktangri","id":22266659,"node_id":"MDQ6VXNlcjIyMjY2NjU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22266659?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ktangri","html_url":"https:\/\/github.com\/ktangri","followers_url":"https:\/\/api.github.com\/users\/ktangri\/followers","following_url":"https:\/\/api.github.com\/users\/ktangri\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ktangri\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ktangri\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ktangri\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ktangri\/orgs","repos_url":"https:\/\/api.github.com\/users\/ktangri\/repos","events_url":"https:\/\/api.github.com\/users\/ktangri\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ktangri\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1620395039000,"updated_at":1620395039000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** Topical-Chat\r\n- **Description:** a knowledge-grounded human-human conversation dataset where the underlying knowledge spans 8 broad topics and conversation partners don\u2019t have explicitly defined roles\r\n- **Paper:** https:\/\/www.isca-speech.org\/archive\/Interspeech_2019\/pdfs\/3079.pdf\r\n- **Data:** https:\/\/github.com\/alexa\/Topical-Chat\r\n- **Motivation:** Good quality, knowledge-grounded dataset that spans a broad range of topics\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2331\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2330","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2330\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2330\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2330\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2330","id":878490927,"node_id":"MDU6SXNzdWU4Nzg0OTA5Mjc=","number":2330,"title":"Allow passing `desc` to `tqdm` in 
`Dataset.map()`","user":{"login":"cccntu","id":31893406,"node_id":"MDQ6VXNlcjMxODkzNDA2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/31893406?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cccntu","html_url":"https:\/\/github.com\/cccntu","followers_url":"https:\/\/api.github.com\/users\/cccntu\/followers","following_url":"https:\/\/api.github.com\/users\/cccntu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cccntu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cccntu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cccntu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cccntu\/orgs","repos_url":"https:\/\/api.github.com\/users\/cccntu\/repos","events_url":"https:\/\/api.github.com\/users\/cccntu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cccntu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"},{"id":1935892877,"node_id":"MDU6TGFiZWwxOTM1ODkyODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/good%20first%20issue","name":"good first issue","color":"7057ff","default":true,"description":"Good for newcomers"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq,\r\nShould we change `desc` in [pbar](https:\/\/github.com\/huggingface\/datasets\/blob\/81fcf88172ed5e3026ef68aed4c0ec6980372333\/src\/datasets\/arrow_dataset.py#L1860) to something meaningful?","I think the user could pass the `desc` parameter to `map` so that it can be displayed in the tqdm progress bar, as suggested by @cccntu.\r\n\r\nWhen there's no multiprocessing, the `desc` of the progress bar could be the `desc` passed by the user.\r\nIn multiprocessing, we were already using a `desc` equal to `\"#\" + str(rank)`.\r\nWe can change it to be `(desc or \"\") + \"#\" + str(rank)` instead.\r\n\r\nIn the end, since both `desc` and `rank` could be None, we can have:\r\n```python\r\npbar_desc = (desc or \"\") + \"#\" + str(rank) if rank is not None else desc\r\n```\r\n\r\nFinally let's remember that if we add `desc` as a new parameter to `map`, we should add it to the `ignore_kwargs` list of the `@fingerprint_transform` decorator of `Dataset._map_single` since we don't want this parameter to affect the fingerprint of the resulting dataset."],"created_at":1620366774000,"updated_at":1622041161000,"closed_at":1622041161000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"It's normal to have many `map()` calls, and some of them can take a few minutes,\r\nit would be nice to have a description on the progress bar.\r\n\r\nAlternative solution:\r\nPrint the description before\/after the `map()` call.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2330\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2329","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2329\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2329\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2329\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2329","id":877924198,"node_id":"MDExOlB1bGxSZXF1ZXN0NjMxODA3MTk0","number":2329,"title":"Add cache dir for in-memory datasets","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Yes, having `cache_dir` as an attribute looks cleaner.\r\n\r\n\r\n\r\n","Good job! Looking forward to this new feature! \ud83e\udd42","@lhoestq Sorry for the late reply. Yes, I'll start working on tests. Thanks for the detailed explanation of the current issues with caching (like the idea of adding the `use_caching` parameter to `load_dataset`) ","@lhoestq Sure. I'm aware this is a high-priority issue to some extent, so feel free to take over.\r\n\r\nFew suggestions I have:\r\n* there is a slight difference between setting `use_caching` to `False` in `load_dataset` and disabling caching globally with `set_caching_enabled(False)` because the former will never execute the following code (`self._cache_dir` is always `False`): \r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/c231abdb174987419bbde3360b5b9d6a4672c736\/src\/datasets\/arrow_dataset.py#L1807-L1824\r\n, so I'm just checking whether this is intended (if yes, maybe the docs should mention this) or not?\r\n* think we should add the `use_caching` parameter to every method that has the `keep_in_memory` (and `in_memory` \ud83d\ude03) parameter in its signature for better consistency, but I say let's address this in a separate PR. IMO we need one more PR that will deal exclusively with consistency in the caching logic.","Hi @mariosasko \r\nWe discussed internally and we think that this feature might not be the direction we're doing to take for these reasons:\r\n\r\n- it goes against our simple definition of caching: on-disk == uses file cache, and in-memory == nothing is written to disk. 
I think it adds too much complexity just for a minimal flexibility addition\r\n- there are a few edge cases which are really confusing:\r\n - map on an in memory dataset with a cache_file_name specified by the user -> should the result be in memory or from disk ?\r\n - it would require a special cache directory just for in memory datasets, since they don\u2019t have a preferred directory for caching\r\n- it would break a lot of stuff and would require to rewrite significant parts of the core code and the tests\r\n\r\n\r\nSo in the end we're probably going to close this PR.\r\nLet me know what you think, and thank you anyway for your help on this !","Hi,\r\n\r\nI'm fine with that. I agree this adds too much complexity. Btw, I like the idea of reverting default in-memory for small datasets that led to this PR.","Superseded by #2460 (to close issue #2458)."],"created_at":1620329732000,"updated_at":1623181608000,"closed_at":1623179206000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2329","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2329","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2329.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2329.patch"},"body":"Adds the cache dir attribute to DatasetInfo as suggested by @lhoestq.\r\n\r\nShould fix #2322 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2329\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2328","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2328\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2328\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2328\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2328","id":877673896,"node_id":"MDExOlB1bGxSZXF1ZXN0NjMxNTg2MzU2","number":2328,"title":"Add Matthews\/Pearson\/Spearman correlation 
metrics","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1620317367000,"updated_at":1620320290000,"closed_at":1620320290000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2328","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2328","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2328.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2328.patch"},"body":"Added three metrics:\r\n- The Matthews correlation coefficient (from sklearn)\r\n- The Pearson correlation coefficient (from scipy)\r\n- The Spearman correlation coefficient (from scipy)\r\n\r\ncc @sgugger ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2328\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2327","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2327\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2327\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2327\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2327","id":877565831,"node_id":"MDU6SXNzdWU4Nzc1NjU4MzE=","number":2327,"title":"A syntax error in 
example","user":{"login":"mymusise","id":6883957,"node_id":"MDQ6VXNlcjY4ODM5NTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6883957?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mymusise","html_url":"https:\/\/github.com\/mymusise","followers_url":"https:\/\/api.github.com\/users\/mymusise\/followers","following_url":"https:\/\/api.github.com\/users\/mymusise\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mymusise\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mymusise\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mymusise\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mymusise\/orgs","repos_url":"https:\/\/api.github.com\/users\/mymusise\/repos","events_url":"https:\/\/api.github.com\/users\/mymusise\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mymusise\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["cc @beurkinger but I think this has been fixed internally and will soon be updated right ?","This issue has been fixed."],"created_at":1620311684000,"updated_at":1621479859000,"closed_at":1621479859000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"![image](https:\/\/user-images.githubusercontent.com\/6883957\/117315905-b47a5c00-aeba-11eb-91eb-b2a4a0212a56.png)\r\n\r\nSorry to report with an image, I can't find the template source code of this snippet.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2327\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2326","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2326\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2326\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2326\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2326","id":876829254,"node_id":"MDExOlB1bGxSZXF1ZXN0NjMwODk3MjI4","number":2326,"title":"Enable auto-download for PAN-X \/ Wikiann domain in 
XTREME","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1620248318000,"updated_at":1620376870000,"closed_at":1620376870000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2326","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2326","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2326.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2326.patch"},"body":"This PR replaces the manual download of the `PAN-X.lang` domains with an auto-download from a Dropbox link provided by the Wikiann author. We also add the relevant dummy data for these domains.\r\n\r\nWhile re-generating `dataset_infos.json` I ran into a `KeyError` in the `udpos.Arabic` domain so have included a fix for this as well.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2326\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2325","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2325\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2325\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2325\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2325","id":876653121,"node_id":"MDExOlB1bGxSZXF1ZXN0NjMwNzU1MzIx","number":2325,"title":"Added the HLGD 
dataset","user":{"login":"tingofurro","id":2609265,"node_id":"MDQ6VXNlcjI2MDkyNjU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2609265?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tingofurro","html_url":"https:\/\/github.com\/tingofurro","followers_url":"https:\/\/api.github.com\/users\/tingofurro\/followers","following_url":"https:\/\/api.github.com\/users\/tingofurro\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tingofurro\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tingofurro\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tingofurro\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tingofurro\/orgs","repos_url":"https:\/\/api.github.com\/users\/tingofurro\/repos","events_url":"https:\/\/api.github.com\/users\/tingofurro\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tingofurro\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Is there anything else needed from my end?","Thanks Bhavitvya and Quentin, this was very streamlined!"],"created_at":1620233609000,"updated_at":1620831313000,"closed_at":1620828998000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2325","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2325","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2325.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2325.patch"},"body":"Added the Headline Grouping Dataset (HLGD), from the NAACL2021 paper: News Headline Grouping as a Challenging NLU Task\r\nDataset Link: https:\/\/github.com\/tingofurro\/headline_grouping\r\nPaper link: https:\/\/people.eecs.berkeley.edu\/~phillab\/pdfs\/NAACL2021_HLG.pdf","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2325\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2324","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2324\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2324\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2324\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2324","id":876602064,"node_id":"MDExOlB1bGxSZXF1ZXN0NjMwNzE1NTQz","number":2324,"title":"Create Audio 
feature","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/8","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/8","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/8\/labels","id":6968069,"node_id":"MI_kwDODunzps4AalMF","number":8,"title":"1.12","description":"Next minor release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":5,"closed_issues":1,"state":"open","created_at":1626881696000,"updated_at":1630565260000,"due_on":1630306800000,"closed_at":null},"comments":["For optimal storage, it would be better to:\r\n- store only the audio file path in the cache Arrow file\r\n- perform decoding of the audio file (into audio array and sample rate) on the fly, while loading the dataset from cache (or by adding a convenient `load_audio` function)","Thanks a lot @lhoestq for your helpful insights! 
\ud83e\udd17 ","Just one step before having a first running example to benchmark.\r\n\r\nDecision to make: how to call the function `dataset.features.decode_example`:\r\n- The usual approach until now in speech applications: call it in a subsequent `.map` function\r\n - Pros: multiprocessing can be used out of the box\r\n - Cons: large disk storage required for caching decoded audio files, although having it cached will enhance speed for further usage\r\n- Approach suggested by @lhoestq (see above: https:\/\/github.com\/huggingface\/datasets\/pull\/2324#discussion_r660758683): doing it in formatting\r\n - Pros: no large disk storage required, as it will be done on the fly while iterating on the dataset\r\n - Cons: it is not cached; need to implement multiprocessing for this case\r\n- Other pros\/cons for the previous options?\r\n- Other options?\r\n\r\ncc: @lhoestq @patrickvonplaten @anton-l ","@albertvillanova I'm in two minds about this, to be honest. For example, if we consider CommonVoice, which is encoded in lossy `mp3`:\n\n- If we decompress `mp3` into raw `wav` arrays, loading a batch will speed up about 40x.\n- However, a 60gb English mp3 dataset will blow up to about 600gb raw (iirc), which is why loading on-the-fly (optionally?) could be very beneficial as well.","Users can do the conversion from mp3 to wav by themselves if they want to using `map`.\r\n\r\nIMO it's better if we can keep the decoding part with the minimal features to be both easy to understand and flexible, i.e. just having the on-the-fly decoding of the audio data (with the sampling rate parameter)\r\n\r\nDecompressing from mp3 to wav sounds like an optimization that depends on the problem that the user wants to solve, the constrains from its environment (disk space, IO speed), and other parameters (optimal training speed for example). Therefore I would leave this to the user to decide whether it has to do it or not.\r\n\r\nLet me know what you think about this","@albertvillanova, In my opinion the pros strongly outweigh the cons in the @lhoestq's suggestion which is why I think we should go forward with it. \r\n\r\nThe cons:\r\n- \"the operation won't be cached\" is not to important as the user will most likely access just a couple of audio array to see how it looks like and then for the \"full\" feature extraction she\/he will make use of `.map(...)` anyways which means that the result will be cached. \r\n- Regarding the multi-processing - if I understand correctly it'll follow the same logic here -> the user will only access some audio arrays for testing playing around with the model but use `.map(...)` for larger operations where multi-processing would still work as before.\r\n\r\nThe advantages mostly solve the main poinpoints being:\r\n- exploding disk space\r\n- bad user experience since the audio is not loaded on the go\r\n\r\n=> So I'm very much in favor of the \"direct-access\" feature","Update: I've retaken this issue.\r\n\r\nIf the decoding logic is implemented when \"examples are accessed\", then if afterwards we use the `.map`, it tries to apply the decoding twice (as maps iterates over the examples, thus \"accessing them\", before trying to apply the map function)...\r\n\r\nI'm thinking on some other approach...","I have reimplemented the previous approach, so that we can discuss about it: examples are decoded when accessed.","What about creating a new specific formatting, just for decoding? 
This would be only active within a context manager.","Hi @lhoestq, as we discussed, I've followed your suggestion of implementing the decoding step within the formatting logic: extract-decode-format. Feel free to tell me what you think.\r\n\r\n@patrickvonplaten and @anton-l, could you have a look at the use case in the test (https:\/\/github.com\/huggingface\/datasets\/pull\/2324\/files#diff-58e348f6e4deaa5f3119e420a5d48ebb82875a78c28628831748fb54f59b2c78R34-R50) and tell me if this is aligned with your needs? Thanks.","Hi @lhoestq, if you validate this approach, we could merge the Audio feature this (or early next) week.","Sure it looks nice this way :) Feel free to continue !"],"created_at":1620230122000,"updated_at":1632295573000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2324","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2324","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2324.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2324.patch"},"body":"Create `Audio` feature to handle raw audio files.\r\n\r\nSome decisions to be further discussed:\r\n- I have chosen `soundfile` as the audio library; another interesting library is `librosa`, but this requires `soundfile` (see [here](https:\/\/github.com\/librosa\/librosa\/blob\/main\/setup.cfg#L53)). If we require some more advanced functionalities, we could eventually switch the library.\r\n- I have implemented the audio feature as an extra: `pip install datasets[audio]`. For the moment, the typical datasets user uses only text datasets, and there is no need for them for additional package requirements for audio\/image if they do not need them.\r\n- For tests, I require audio dependencies (so that all audio functionalities are checked with our CI test suite); I exclude Linux platforms, which require an additional library to be installed with the distribution package manager\r\n - I also require `pytest-datadir`, which allow to have (audio) data files for tests\r\n- The audio data contain: array and sample_rate.\r\n- The array is reshaped as 1D array (expected input for `Wav2Vec2`).\r\n\r\nNote that to install `soundfile` on Linux, you need to install `libsndfile` using your distribution\u2019s package manager, for example `sudo apt-get install libsndfile1`.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2324\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2323","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2323\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2323\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2323\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2323","id":876438507,"node_id":"MDU6SXNzdWU4NzY0Mzg1MDc=","number":2323,"title":"load_dataset(\"timit_asr\") gives back duplicates of just one sample 
text","user":{"login":"ekeleshian","id":33647474,"node_id":"MDQ6VXNlcjMzNjQ3NDc0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33647474?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ekeleshian","html_url":"https:\/\/github.com\/ekeleshian","followers_url":"https:\/\/api.github.com\/users\/ekeleshian\/followers","following_url":"https:\/\/api.github.com\/users\/ekeleshian\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ekeleshian\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ekeleshian\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ekeleshian\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ekeleshian\/orgs","repos_url":"https:\/\/api.github.com\/users\/ekeleshian\/repos","events_url":"https:\/\/api.github.com\/users\/ekeleshian\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ekeleshian\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Upgrading datasets to version 1.6 fixes the issue","This bug was fixed in #1995. Upgrading the `datasets` should work! ","Thanks @ekeleshian for having reported.\r\n\r\nI am closing this issue once that you updated `datasets`. Feel free to reopen it if the problem persists."],"created_at":1620220488000,"updated_at":1620383550000,"closed_at":1620383550000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nWhen you look up on key [\"train\"] and then ['text'], you get back a list with just one sentence duplicated 4620 times. Namely, the sentence \"Would such an act of refusal be useful?\". Similarly when you look up ['test'] and then ['text'], the list is one sentence repeated \"The bungalow was pleasantly situated near the shore.\" 1680 times. \r\n\r\nI tried to work around the issue by downgrading to datasets version 1.3.0, inspired by [this post](https:\/\/www.gitmemory.com\/issue\/huggingface\/datasets\/2052\/798904836) and removing the entire huggingface directory from ~\/.cache, but I still get the same issue. 
\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\ntimit = load_dataset(\"timit_asr\")\r\nprint(timit['train']['text'])\r\nprint(timit['test']['text'])\r\n```\r\n\r\n## Expected Result\r\nRows of diverse text, like how it is shown in the [wav2vec2.0 tutorial](https:\/\/colab.research.google.com\/github\/patrickvonplaten\/notebooks\/blob\/master\/Fine_tuning_Wav2Vec2_for_English_ASR.ipynb)\r\n\"Screen\r\n\r\n\r\n## Actual results\r\nRows of repeated text.\r\n\"Screen\r\n\r\n\r\n## Versions\r\n- Datasets: 1.3.0\r\n- Python: 3.9.1\r\n- Platform: macOS-11.2.1-x86_64-i386-64bit}\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2323\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2322","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2322\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2322\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2322\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2322","id":876383853,"node_id":"MDU6SXNzdWU4NzYzODM4NTM=","number":2322,"title":"Calls to map are not cached.","user":{"login":"villmow","id":2743060,"node_id":"MDQ6VXNlcjI3NDMwNjA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2743060?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/villmow","html_url":"https:\/\/github.com\/villmow","followers_url":"https:\/\/api.github.com\/users\/villmow\/followers","following_url":"https:\/\/api.github.com\/users\/villmow\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/villmow\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/villmow\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/villmow\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/villmow\/orgs","repos_url":"https:\/\/api.github.com\/users\/villmow\/repos","events_url":"https:\/\/api.github.com\/users\/villmow\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/villmow\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I tried upgrading to `datasets==1.6.2` and downgrading to `1.6.0`. Both versions produce the same output.\r\n\r\nDowngrading to `1.5.0` works and produces the following output for me:\r\n\r\n```bash\r\nDownloading: 9.20kB [00:00, 3.94MB\/s] \r\nDownloading: 5.99kB [00:00, 3.29MB\/s] \r\nNo config specified, defaulting to: sst\/default\r\nDownloading and preparing dataset sst\/default (download: 6.83 MiB, generated: 3.73 MiB, post-processed: Unknown size, total: 10.56 MiB) to \/home\/johannes\/.cache\/huggingface\/datasets\/sst\/default\/1.0.0\/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b...\r\n Dataset sst downloaded and prepared to \/home\/johannes\/.cache\/huggingface\/datasets\/sst\/default\/1.0.0\/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b. 
Subsequent calls will reuse this data.\r\nexecuted [0, 1]\r\n#0: 0%| | 0\/5 [00:00>> from datasets import load_dataset\r\n>>> dataset = load_dataset(\"oscar\", \"unshuffled_deduplicated_af\")\r\nDownloading: 14.7kB [00:00, 4.91MB\/s]\r\nDownloading: 3.07MB [00:00, 32.6MB\/s]\r\nDownloading and preparing dataset oscar\/unshuffled_deduplicated_af (download: 62.93 MiB, generated: 163.38 MiB, post-processed: Unknown size, total: 226.32 MiB) to C:\\Users\\sgraaf\\.cache\\huggingface\\datasets\\oscar\\unshuffled_deduplicated_af\\1.0.0\\bd4f96df5b4512007ef9fd17bbc1ecde459fa53d2fc0049cf99392ba2efcc464...\r\nDownloading: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 81.0\/81.0 [00:00<00:00, 40.5kB\/s]\r\nDownloading: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 66.0M\/66.0M [00:18<00:00, 3.50MB\/s]\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"C:\\Users\\sgraaf\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\datasets\\load.py\", line 745, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"C:\\Users\\sgraaf\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\datasets\\builder.py\", line 574, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"C:\\Users\\sgraaf\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\datasets\\builder.py\", line 652, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"C:\\Users\\sgraaf\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\datasets\\builder.py\", line 979, in _prepare_split\r\n for key, record in utils.tqdm(\r\n File \"C:\\Users\\sgraaf\\AppData\\Local\\Programs\\Python\\Python39\\lib\\site-packages\\tqdm\\std.py\", line 1133, in __iter__\r\n for obj in 
iterable:\r\n File \"C:\\Users\\sgraaf\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\oscar\\bd4f96df5b4512007ef9fd17bbc1ecde459fa53d2fc0049cf99392ba2efcc464\\oscar.py\", line 359, in _generate_examples\r\n for line in f:\r\n File \"C:\\Users\\sgraaf\\AppData\\Local\\Programs\\Python\\Python39\\lib\\encodings\\cp1252.py\", line 23, in decode\r\n return codecs.charmap_decode(input,self.errors,decoding_table)[0]\r\nUnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 7454: character maps to \r\n```\r\n\r\n## Versions\r\nPaste the output of the following code:\r\n```python\r\nimport datasets\r\nimport sys\r\nimport platform\r\n\r\nprint(f\"\"\"\r\n- Datasets: {datasets.__version__}\r\n- Python: {sys.version}\r\n- Platform: {platform.platform()}\r\n\"\"\")\r\n```\r\n- Datasets: 1.6.2\r\n- Python: 3.9.4 (tags\/v3.9.4:1f2e308, Apr 6 2021, 13:40:21) [MSC v.1928 64 bit (AMD64)]\r\n- Platform: Windows-10-10.0.19041-SP0","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2319\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2318","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2318\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2318\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2318\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2318","id":876212460,"node_id":"MDU6SXNzdWU4NzYyMTI0NjA=","number":2318,"title":"[api request] API to obtain \"dataset_module\" dynamic path?","user":{"login":"richardliaw","id":4529381,"node_id":"MDQ6VXNlcjQ1MjkzODE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4529381?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/richardliaw","html_url":"https:\/\/github.com\/richardliaw","followers_url":"https:\/\/api.github.com\/users\/richardliaw\/followers","following_url":"https:\/\/api.github.com\/users\/richardliaw\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/richardliaw\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/richardliaw\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/richardliaw\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/richardliaw\/orgs","repos_url":"https:\/\/api.github.com\/users\/richardliaw\/repos","events_url":"https:\/\/api.github.com\/users\/richardliaw\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/richardliaw\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or 
request"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @richardliaw, \r\n\r\nFirst, thanks for the compliments.\r\n\r\nIn relation with your request, currently, the dynamic modules path is obtained this way:\r\n```python\r\nfrom datasets.load import init_dynamic_modules, MODULE_NAME_FOR_DYNAMIC_MODULES\r\n\r\ndynamic_modules_path = init_dynamic_modules(MODULE_NAME_FOR_DYNAMIC_MODULES)\r\n```\r\n\r\nLet me know if it is OK for you this way. \r\n\r\nI could set `MODULE_NAME_FOR_DYNAMIC_MODULES` as default value, so that you could instead obtain the path with:\r\n```\r\ndynamic_modules_path = datasets.load.init_dynamic_modules()\r\n```","Hi @albertvillanova, the default value proposal seems great :) Looking forward to this!","I like the idea as well ! thanks @albertvillanova ","Hi @richardliaw, the feature is on the master branch and will be included in the next release in a couple of weeks.","awesome work @albertvillanova !"],"created_at":1620204048000,"updated_at":1620290745000,"closed_at":1620287874000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"**Is your feature request related to a problem? Please describe.**\r\nA clear and concise description of what the problem is.\r\n\r\nThis is an awesome library. 
\r\n\r\nIt seems like the dynamic module path in this library has broken some of hyperparameter tuning functionality: https:\/\/discuss.huggingface.co\/t\/using-hyperparameter-search-in-trainer\/785\/34\r\n\r\nThis is because Ray will spawn new processes, and each process will load modules by path. However, we need to explicitly inform Ray to load the right modules, or else it will error upon import. \r\n\r\nI'd like an API to obtain the dynamic paths. This will allow us to support this functionality in this awesome library while being future proof.\r\n\r\n**Describe the solution you'd like**\r\nA clear and concise description of what you want to happen.\r\n\r\n`datasets.get_dynamic_paths -> List[str]` will be sufficient for my use case.\r\n\r\nBy offering this API, we will be able to address the following issues (by patching the ray integration sufficiently):\r\n\r\nhttps:\/\/github.com\/huggingface\/blog\/issues\/106\r\nhttps:\/\/github.com\/huggingface\/transformers\/issues\/11565\r\nhttps:\/\/discuss.huggingface.co\/t\/using-hyperparameter-search-in-trainer\/785\/34\r\nhttps:\/\/discuss.huggingface.co\/t\/using-hyperparameter-search-in-trainer\/785\/35\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2318\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2317","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2317\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2317\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2317\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2317","id":875767318,"node_id":"MDExOlB1bGxSZXF1ZXN0NjMwMDQxNzc4","number":2317,"title":"Fix incorrect version specification for the pyarrow 
package","user":{"login":"cemilcengiz","id":32267027,"node_id":"MDQ6VXNlcjMyMjY3MDI3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32267027?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cemilcengiz","html_url":"https:\/\/github.com\/cemilcengiz","followers_url":"https:\/\/api.github.com\/users\/cemilcengiz\/followers","following_url":"https:\/\/api.github.com\/users\/cemilcengiz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cemilcengiz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cemilcengiz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cemilcengiz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cemilcengiz\/orgs","repos_url":"https:\/\/api.github.com\/users\/cemilcengiz\/repos","events_url":"https:\/\/api.github.com\/users\/cemilcengiz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cemilcengiz\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1620156620000,"updated_at":1620209356000,"closed_at":1620206518000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2317","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2317","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2317.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2317.patch"},"body":"This PR addresses the bug in the pyarrow version specification, which is detailed in #2316 .\r\nSimply, I put a comma between the version bounds.\r\n\r\nFix #2316.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2317\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2316","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2316\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2316\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2316\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2316","id":875756353,"node_id":"MDU6SXNzdWU4NzU3NTYzNTM=","number":2316,"title":"Incorrect version specification for 
pyarrow","user":{"login":"cemilcengiz","id":32267027,"node_id":"MDQ6VXNlcjMyMjY3MDI3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32267027?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cemilcengiz","html_url":"https:\/\/github.com\/cemilcengiz","followers_url":"https:\/\/api.github.com\/users\/cemilcengiz\/followers","following_url":"https:\/\/api.github.com\/users\/cemilcengiz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cemilcengiz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cemilcengiz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cemilcengiz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cemilcengiz\/orgs","repos_url":"https:\/\/api.github.com\/users\/cemilcengiz\/repos","events_url":"https:\/\/api.github.com\/users\/cemilcengiz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cemilcengiz\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Fixed by #2317."],"created_at":1620155711000,"updated_at":1620209403000,"closed_at":1620209403000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nThe pyarrow dependency is incorrectly specified in setup.py file, in [this line](https:\/\/github.com\/huggingface\/datasets\/blob\/3a3e5a4da20bfcd75f8b6a6869b240af8feccc12\/setup.py#L77).\r\nAlso as a snippet:\r\n```python\r\n \"pyarrow>=1.0.0<4.0.0\",\r\n```\r\n\r\n## Steps to reproduce the bug\r\n```bash\r\n pip install \"pyarrow>=1.0.0<4.0.0\"\r\n```\r\n\r\n## Expected results\r\nIt is expected to get a pyarrow version between 1.0.0 (inclusive) and 4.0.0 (exclusive).\r\n\r\n## Actual results\r\npip ignores the specified versions since there is a missing comma between the lower and upper limits. Therefore, pip installs the latest pyarrow version from PYPI, which is 4.0.0.\r\nThis is especially problematic since \"conda env export\" fails due to incorrect version specification. 
Here is the conda error as well:\r\n```bash\r\nconda env export\r\nInvalidVersionSpec: Invalid version '1.0.0<4.0.0': invalid character(s)\r\n```\r\n\r\n\r\n## Fix suggestion\r\nPut a comma between the version limits which means replacing the line in setup.py file with the following:\r\n```python\r\n \"pyarrow>=1.0.0,<4.0.0\",\r\n```\r\n\r\n## Versions\r\nPaste the output of the following code:\r\n```python\r\n- Datasets: 1.6.2\r\n- Python: 3.7.10 (default, Feb 26 2021, 18:47:35) \r\n[GCC 7.3.0]\r\n- Platform: Linux-5.4.0-42-generic-x86_64-with-debian-buster-sid\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2316\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2315","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2315\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2315\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2315\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2315","id":875742200,"node_id":"MDExOlB1bGxSZXF1ZXN0NjMwMDIyMDYy","number":2315,"title":"Datasets cli improvements","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Additionally, I've deleted the points that are not very relevant for this repo (I guess the deleted points originate from the transformers repo). With this change, running `datasets-cli` is identical to copy-pasting the code from `bug_report.md`, but is more elegant because it doesn't require launching the REPL and copy-pasting the code. 
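A small verification sketch for the pyarrow specifier fix described in issue 2316 above. It uses the third-party `packaging` library (not mentioned in the issue) to show that the comma-separated form pins the intended range.

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# Corrected specifier from the fix suggestion: a comma between the two bounds.
spec = SpecifierSet(">=1.0.0,<4.0.0")

print(Version("1.0.0") in spec)  # True  (lower bound is inclusive)
print(Version("3.0.0") in spec)  # True
print(Version("4.0.0") in spec)  # False (upper bound is exclusive)
```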
"],"created_at":1620154511000,"updated_at":1620664611000,"closed_at":1620664610000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2315","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2315","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2315.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2315.patch"},"body":"This PR:\r\n* replaces the code from the `bug_report.md` that was used to get relevant system info with a dedicated command (a more elegant approach than copy-pasting the code IMO)\r\n* removes the `download` command (copied from the transformers repo?)\r\n* adds missing help messages to the cli commands\r\n\r\n\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2315\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2314","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2314\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2314\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2314\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2314","id":875729271,"node_id":"MDExOlB1bGxSZXF1ZXN0NjMwMDExODc4","number":2314,"title":"Minor refactor prepare_module","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq this is the PR that I mentioned to you, which can be considered as a first step in refactoring `prepare_module`."],"created_at":1620153446000,"updated_at":1626418799000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2314","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2314","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2314.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2314.patch"},"body":"Start to refactor `prepare_module` to try to decouple functionality.\r\n\r\nThis PR does:\r\n- extract function `_initialize_dynamic_modules_namespace_package`\r\n- extract function `_find_module_in_github_or_s3`\r\n- some renaming of variables\r\n- use of 
f-strings","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2314\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2313","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2313\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2313\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2313\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2313","id":875475367,"node_id":"MDExOlB1bGxSZXF1ZXN0NjI5ODEwNTc4","number":2313,"title":"Remove unused head_hf_s3 function","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1620135726000,"updated_at":1620379902000,"closed_at":1620379902000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2313","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2313","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2313.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2313.patch"},"body":"Currently, the function `head_hf_s3` is not used:\r\n- neither its returned result is used\r\n- nor it raises any exception, as exceptions are catched and returned (not raised)\r\n\r\nThis PR removes it.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2313\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2312","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2312\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2312\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2312\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2312","id":875435726,"node_id":"MDExOlB1bGxSZXF1ZXN0NjI5Nzc4NjUz","number":2312,"title":"Add rename_columnS 
method","user":{"login":"SBrandeis","id":33657802,"node_id":"MDQ6VXNlcjMzNjU3ODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33657802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SBrandeis","html_url":"https:\/\/github.com\/SBrandeis","followers_url":"https:\/\/api.github.com\/users\/SBrandeis\/followers","following_url":"https:\/\/api.github.com\/users\/SBrandeis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SBrandeis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SBrandeis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SBrandeis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SBrandeis\/orgs","repos_url":"https:\/\/api.github.com\/users\/SBrandeis\/repos","events_url":"https:\/\/api.github.com\/users\/SBrandeis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SBrandeis\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Merging then \ud83d\ude04 "],"created_at":1620133073000,"updated_at":1620135793000,"closed_at":1620135792000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2312","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2312","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2312.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2312.patch"},"body":"Cherry-picked from #2255 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2312\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2311","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2311\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2311\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2311\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2311","id":875262208,"node_id":"MDExOlB1bGxSZXF1ZXN0NjI5NjQwNTMx","number":2311,"title":"Add SLR52, SLR53 and SLR54 to OpenSLR","user":{"login":"cahya-wirawan","id":7669893,"node_id":"MDQ6VXNlcjc2Njk4OTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7669893?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cahya-wirawan","html_url":"https:\/\/github.com\/cahya-wirawan","followers_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/followers","following_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/orgs","repos_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/repos","events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq , 
I am not sure about the error message:\r\n```\r\n#!\/bin\/bash -eo pipefail\r\n.\/scripts\/datasets_metadata_validator.py\r\nWARNING:root:\u274c Failed to validate 'datasets\/openslr\/README.md':\r\n__init__() got an unexpected keyword argument 'SLR32'\r\nINFO:root:\u274c Failed on 1 files.\r\n\r\nExited with code exit status 1\r\nCircleCI received exit code 1 \r\n```\r\nCould you have a look please? Thanks.","Hi ! The error is unrelated to your PR and has been fixed on master\r\nNext time feel free to merge master into your branch to fix the CI error ;)"],"created_at":1620119283000,"updated_at":1620381055000,"closed_at":1620381055000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2311","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2311","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2311.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2311.patch"},"body":"Add large speech datasets for Sinhala, Bengali and Nepali.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2311\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2310","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2310\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2310\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2310\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2310","id":875096051,"node_id":"MDExOlB1bGxSZXF1ZXN0NjI5NTEwNTg5","number":2310,"title":"Update README.md","user":{"login":"cryoff","id":15029054,"node_id":"MDQ6VXNlcjE1MDI5MDU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15029054?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cryoff","html_url":"https:\/\/github.com\/cryoff","followers_url":"https:\/\/api.github.com\/users\/cryoff\/followers","following_url":"https:\/\/api.github.com\/users\/cryoff\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cryoff\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cryoff\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cryoff\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cryoff\/orgs","repos_url":"https:\/\/api.github.com\/users\/cryoff\/repos","events_url":"https:\/\/api.github.com\/users\/cryoff\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cryoff\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @cryoff, thanks for completing the dataset card.\r\n\r\nNow there is an automatic validation tool to assure that all dataset cards contain all the relevant information. 
This is the cause of the non-passing test on your Pull Request:\r\n```\r\nFound fields that are not non-empty list of strings: {'annotations_creators': [], 'language_creators': []}\r\n```"],"created_at":1620103081000,"updated_at":1620110159000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2310","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2310","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2310.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2310.patch"},"body":"Provides description of data instances and dataset features","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2310\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2309","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2309\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2309\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2309\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2309","id":874644990,"node_id":"MDExOlB1bGxSZXF1ZXN0NjI5MTU4NjQx","number":2309,"title":"Fix conda release","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1620053579000,"updated_at":1620057677000,"closed_at":1620057677000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2309","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2309","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2309.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2309.patch"},"body":"There were a few issues with conda releases (they've been failing for a while now).\r\nTo fix this I had to:\r\n- add the --single-version-externally-managed tag to the build stage (suggestion from [here](https:\/\/stackoverflow.com\/a\/64825075))\r\n- set the python version of the conda build stage to 3.8 since 3.9 isn't supported\r\n- sync the evrsion requirement of `huggingface_hub`\r\n\r\nWith these changes I'm working on uploading all missing versions until 1.6.2 to conda\r\n\r\nEDIT: I managed to build and upload all missing versions until 1.6.2 to 
conda :)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2309\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2308","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2308\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2308\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2308\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2308","id":874559846,"node_id":"MDU6SXNzdWU4NzQ1NTk4NDY=","number":2308,"title":"Add COCO evaluation metrics","user":{"login":"NielsRogge","id":48327001,"node_id":"MDQ6VXNlcjQ4MzI3MDAx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/48327001?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/NielsRogge","html_url":"https:\/\/github.com\/NielsRogge","followers_url":"https:\/\/api.github.com\/users\/NielsRogge\/followers","following_url":"https:\/\/api.github.com\/users\/NielsRogge\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/NielsRogge\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/NielsRogge\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/NielsRogge\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/NielsRogge\/orgs","repos_url":"https:\/\/api.github.com\/users\/NielsRogge\/repos","events_url":"https:\/\/api.github.com\/users\/NielsRogge\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/NielsRogge\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @NielsRogge, \r\nI'd like to contribute these metrics to datasets. Let's start with `CocoEvaluator` first? Currently how are are you sending the ground truths and predictions in coco_evaluator?\r\n","Great!\r\n\r\nHere's a notebook that illustrates how I'm using `CocoEvaluator`: https:\/\/drive.google.com\/file\/d\/1VV92IlaUiuPOORXULIuAdtNbBWCTCnaj\/view?usp=sharing\r\n\r\nThe evaluation is near the end of the notebook.\r\n\r\n","I went through the code you've [mentioned](https:\/\/github.com\/facebookresearch\/detr\/blob\/a54b77800eb8e64e3ad0d8237789fcbf2f8350c5\/datasets\/coco_eval.py) and I think there are 2 options on how we can go ahead:\r\n\r\n1) Implement how DETR people have done this (they're relying very heavily on the official implementation and they're focussing on torch dataset here. I feel ours should be something generic instead of pytorch specific.\r\n2) Do this [implementation](https:\/\/github.com\/cocodataset\/cocoapi\/blob\/ed842bffd41f6ff38707c4f0968d2cfd91088688\/PythonAPI\/pycocoEvalDemo.ipynb) where user can convert its output and ground truth annotation to pre-defined format and then feed it into our function to calculate metrics (looks very similar to you wanted above)\r\n\r\nIn my opinion, 2nd option looks very clean but I'm still figuring out how's it transforming the box co-ordinates of `coco_gt` which you've passed to `CocoEvaluator` (ground truth for evaluation). 
Since your model output was already converted to COCO api, I faced little problems there.","Ok, thanks for the update.\r\n\r\nIndeed, the metrics API of Datasets is framework agnostic, so we can't rely on a PyTorch-only implementation.\r\n\r\n[This file](https:\/\/github.com\/cocodataset\/cocoapi\/blob\/ed842bffd41f6ff38707c4f0968d2cfd91088688\/PythonAPI\/pycocotools\/cocoeval.py) is probably want we need to implement.\r\n\r\n"],"created_at":1620047285000,"updated_at":1622790687000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I'm currently working on adding Facebook AI's DETR model (end-to-end object detection with Transformers) to HuggingFace Transformers. The model is working fine, but regarding evaluation, I'm currently relying on external `CocoEvaluator` and `PanopticEvaluator` objects which are defined in the original repository ([here](https:\/\/github.com\/facebookresearch\/detr\/blob\/a54b77800eb8e64e3ad0d8237789fcbf2f8350c5\/datasets\/coco_eval.py#L22) and [here](https:\/\/github.com\/facebookresearch\/detr\/blob\/a54b77800eb8e64e3ad0d8237789fcbf2f8350c5\/datasets\/panoptic_eval.py#L13) respectively). \r\n\r\nRunning these in a notebook gives you nice summaries like this:\r\n![image](https:\/\/user-images.githubusercontent.com\/48327001\/116878842-326f0680-ac20-11eb-9061-d6da02193694.png)\r\n\r\nIt would be great if we could import these metrics from the Datasets library, something like this:\r\n\r\n```\r\nimport datasets\r\n\r\nmetric = datasets.load_metric('coco')\r\n\r\nfor model_input, gold_references in evaluation_dataset:\r\n model_predictions = model(model_inputs)\r\n metric.add_batch(predictions=model_predictions, references=gold_references)\r\n\r\nfinal_score = metric.compute()\r\n```\r\n\r\nI think this would be great for object detection and semantic\/panoptic segmentation in general, not just for DETR. Reproducing results of object detection papers would be way easier.\r\n\r\nHowever, object detection and panoptic segmentation evaluation is a bit more complex than accuracy (it's more like a summary of metrics at different thresholds rather than a single one). 
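For reference alongside issue 2308, here is a minimal, framework-agnostic sketch of the `pycocotools` evaluation that the comments above point to. The annotation and prediction file names are placeholders, not paths from the issue.

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Ground-truth annotations and model predictions, both in COCO JSON format.
coco_gt = COCO("instances_val2017.json")       # placeholder path
coco_dt = coco_gt.loadRes("predictions.json")  # placeholder path

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints AP/AR averaged over the standard IoU thresholds
```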
I'm not sure how to proceed here, but happy to help making this possible.\r\n\r\n\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2308\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2302","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2302\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2302\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2302\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2302","id":873961435,"node_id":"MDExOlB1bGxSZXF1ZXN0NjI4NjIzMDQ3","number":2302,"title":"Add SubjQA dataset","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I'm not sure why the windows test fails, but looking at the logs it looks like some caching issue on one of the metrics ... maybe re-run and \ud83e\udd1e ?","Hi @lewtun, thanks for adding this dataset!\r\n\r\nIf the dataset is going to be referenced heavily, I think it's worth spending some time to make the dataset card really great :) To start, the information that is currently in the `Data collection` paragraph should probably be organized in the `Dataset Creation` section.\r\n\r\nHere's a link to the [relevant section of the guide](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/templates\/README_guide.md#dataset-creation), let me know if you have any questions!","> If the dataset is going to be referenced heavily, I think it's worth spending some time to make the dataset card really great :) To start, the information that is currently in the `Data collection` paragraph should probably be organized in the `Dataset Creation` section.\r\n\r\ngreat idea @yjernite! 
i've added some extra information \/ moved things as you suggest and will wrap up the rest tomorrow :)","hi @yjernite and @lhoestq, i've fleshed out the dataset card and think this is now ready for another round of review!"],"created_at":1619967080000,"updated_at":1620638479000,"closed_at":1620638479000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2302","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2302","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2302.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2302.patch"},"body":"Hello datasetters \ud83d\ude42!\r\n\r\nHere's an interesting dataset about extractive question-answering on _subjective_ product \/ restaurant reviews. It's quite challenging for models fine-tuned on SQuAD and provides a nice example of domain adaptation (i.e. fine-tuning a SQuAD model on this domain gives better performance).\r\n\r\nI found a bug in the start\/end indices that I've proposed a fix for here: https:\/\/github.com\/megagonlabs\/SubjQA\/pull\/2\r\n\r\nUnfortunately, the dataset creators are unresponsive, so for now I am using my fork as the source. Will update the URL if\/when the creators respond.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2302\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2301","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2301\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2301\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2301\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2301","id":873941266,"node_id":"MDU6SXNzdWU4NzM5NDEyNjY=","number":2301,"title":"Unable to setup dev env on Windows","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @gchhablani, \r\n\r\nThere are some 3rd-party dependencies that require to build code in C. In this case, it is the library `python-Levenshtein`.\r\n\r\nOn Windows, in order to be able to build C code, you need to install at least `Microsoft C++ Build Tools` version 14. 
You can find more info here: https:\/\/visualstudio.microsoft.com\/visual-cpp-build-tools\/","Hi @albertvillanova \r\n\r\nSorry for such a trivial issue ;-; \r\n\r\nThanks a lot."],"created_at":1619961642000,"updated_at":1620055081000,"closed_at":1620055054000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi\r\n\r\nI tried installing the `\".[dev]\"` version on Windows 10 after cloning.\r\n\r\nHere is the error I'm facing:\r\n\r\n```bat\r\n(env) C:\\testing\\datasets>pip install -e \".[dev]\"\r\nObtaining file:\/\/\/C:\/testing\/datasets\r\nRequirement already satisfied: numpy>=1.17 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from datasets==1.5.0.dev0) (1.19.5)\r\nCollecting pyarrow>=0.17.1\r\n Using cached pyarrow-4.0.0-cp37-cp37m-win_amd64.whl (13.3 MB)\r\nRequirement already satisfied: dill in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from datasets==1.5.0.dev0) (0.3.1.1)\r\nCollecting pandas\r\n Using cached pandas-1.2.4-cp37-cp37m-win_amd64.whl (9.1 MB)\r\nRequirement already satisfied: requests>=2.19.0 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from datasets==1.5.0.dev0) (2.25.1)\r\nRequirement already satisfied: tqdm<4.50.0,>=4.27 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from datasets==1.5.0.dev0) (4.49.0)\r\nRequirement already satisfied: xxhash in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from datasets==1.5.0.dev0) (2.0.2)\r\nCollecting multiprocess\r\n Using cached multiprocess-0.70.11.1-py37-none-any.whl (108 kB)\r\nRequirement already satisfied: fsspec in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from datasets==1.5.0.dev0) (2021.4.0)\r\nCollecting huggingface_hub<0.1.0\r\n Using cached huggingface_hub-0.0.8-py3-none-any.whl (34 kB)\r\nRequirement already satisfied: importlib_metadata in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from datasets==1.5.0.dev0) (4.0.1)\r\nRequirement already satisfied: absl-py in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from datasets==1.5.0.dev0) (0.12.0)\r\nRequirement already satisfied: pytest in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from datasets==1.5.0.dev0) (6.2.3)\r\nCollecting pytest-xdist\r\n Using cached pytest_xdist-2.2.1-py3-none-any.whl (37 kB)\r\nCollecting apache-beam>=2.24.0\r\n Using cached apache_beam-2.29.0-cp37-cp37m-win_amd64.whl (3.7 MB)\r\nCollecting elasticsearch\r\n Using cached elasticsearch-7.12.1-py2.py3-none-any.whl (339 kB)\r\nRequirement already satisfied: boto3==1.16.43 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from datasets==1.5.0.dev0) (1.16.43)\r\nRequirement already satisfied: botocore==1.19.43 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from datasets==1.5.0.dev0) (1.19.43)\r\nCollecting moto[s3]==1.3.16\r\n Using cached moto-1.3.16-py2.py3-none-any.whl (879 kB)\r\nCollecting rarfile>=4.0\r\n Using cached rarfile-4.0-py3-none-any.whl (28 kB)\r\nCollecting tensorflow>=2.3\r\n Using cached tensorflow-2.4.1-cp37-cp37m-win_amd64.whl (370.7 MB)\r\nRequirement already satisfied: torch in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from datasets==1.5.0.dev0) (1.8.1)\r\nRequirement already satisfied: transformers in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from datasets==1.5.0.dev0) (4.5.1)\r\nCollecting bs4\r\n Using cached bs4-0.0.1-py3-none-any.whl\r\nCollecting conllu\r\n Using cached conllu-4.4-py2.py3-none-any.whl (15 kB)\r\nCollecting 
langdetect\r\n Using cached langdetect-1.0.8-py3-none-any.whl\r\nCollecting lxml\r\n Using cached lxml-4.6.3-cp37-cp37m-win_amd64.whl (3.5 MB)\r\nCollecting mwparserfromhell\r\n Using cached mwparserfromhell-0.6-cp37-cp37m-win_amd64.whl (101 kB)\r\nCollecting nltk\r\n Using cached nltk-3.6.2-py3-none-any.whl (1.5 MB)\r\nCollecting openpyxl\r\n Using cached openpyxl-3.0.7-py2.py3-none-any.whl (243 kB)\r\nCollecting py7zr\r\n Using cached py7zr-0.15.2-py3-none-any.whl (66 kB)\r\nCollecting tldextract\r\n Using cached tldextract-3.1.0-py2.py3-none-any.whl (87 kB)\r\nCollecting zstandard\r\n Using cached zstandard-0.15.2-cp37-cp37m-win_amd64.whl (582 kB)\r\nCollecting bert_score>=0.3.6\r\n Using cached bert_score-0.3.9-py3-none-any.whl (59 kB)\r\nCollecting rouge_score\r\n Using cached rouge_score-0.0.4-py2.py3-none-any.whl (22 kB)\r\nCollecting sacrebleu\r\n Using cached sacrebleu-1.5.1-py3-none-any.whl (54 kB)\r\nRequirement already satisfied: scipy in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from datasets==1.5.0.dev0) (1.6.3)\r\nCollecting seqeval\r\n Using cached seqeval-1.2.2-py3-none-any.whl\r\nCollecting sklearn\r\n Using cached sklearn-0.0-py2.py3-none-any.whl\r\nCollecting jiwer\r\n Using cached jiwer-2.2.0-py3-none-any.whl (13 kB)\r\nRequirement already satisfied: toml>=0.10.1 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from datasets==1.5.0.dev0) (0.10.2)\r\nRequirement already satisfied: requests_file>=1.5.1 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from datasets==1.5.0.dev0) (1.5.1)\r\nRequirement already satisfied: texttable>=1.6.3 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from datasets==1.5.0.dev0) (1.6.3)\r\nRequirement already satisfied: s3fs>=0.4.2 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from datasets==1.5.0.dev0) (0.4.2)\r\nRequirement already satisfied: Werkzeug>=1.0.1 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from datasets==1.5.0.dev0) (1.0.1)\r\nCollecting black\r\n Using cached black-21.4b2-py3-none-any.whl (130 kB)\r\nCollecting isort\r\n Using cached isort-5.8.0-py3-none-any.whl (103 kB)\r\nCollecting flake8==3.7.9\r\n Using cached flake8-3.7.9-py2.py3-none-any.whl (69 kB)\r\nRequirement already satisfied: jmespath<1.0.0,>=0.7.1 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from boto3==1.16.43->datasets==1.5.0.dev0) (0.10.0)\r\nRequirement already satisfied: s3transfer<0.4.0,>=0.3.0 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from boto3==1.16.43->datasets==1.5.0.dev0) (0.3.7)\r\nRequirement already satisfied: urllib3<1.27,>=1.25.4 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from botocore==1.19.43->datasets==1.5.0.dev0) (1.26.4)\r\nRequirement already satisfied: python-dateutil<3.0.0,>=2.1 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from botocore==1.19.43->datasets==1.5.0.dev0) (2.8.1)\r\nCollecting entrypoints<0.4.0,>=0.3.0\r\n Using cached entrypoints-0.3-py2.py3-none-any.whl (11 kB)\r\nCollecting pyflakes<2.2.0,>=2.1.0\r\n Using cached pyflakes-2.1.1-py2.py3-none-any.whl (59 kB)\r\nCollecting pycodestyle<2.6.0,>=2.5.0\r\n Using cached pycodestyle-2.5.0-py2.py3-none-any.whl (51 kB)\r\nCollecting mccabe<0.7.0,>=0.6.0\r\n Using cached mccabe-0.6.1-py2.py3-none-any.whl (8.6 kB)\r\nRequirement already satisfied: jsondiff>=1.1.2 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.3.0)\r\nRequirement already satisfied: pytz in 
c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2021.1)\r\nRequirement already satisfied: mock in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (4.0.3)\r\nRequirement already satisfied: MarkupSafe<2.0 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.1.1)\r\nRequirement already satisfied: python-jose[cryptography]<4.0.0,>=3.1.0 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.2.0)\r\nRequirement already satisfied: aws-xray-sdk!=0.96,>=0.93 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.8.0)\r\nRequirement already satisfied: cryptography>=2.3.0 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.4.7)\r\nRequirement already satisfied: more-itertools in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (8.7.0)\r\nRequirement already satisfied: PyYAML>=5.1 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (5.4.1)\r\nRequirement already satisfied: boto>=2.36.0 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.49.0)\r\nRequirement already satisfied: idna<3,>=2.5 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.10)\r\nRequirement already satisfied: sshpubkeys>=3.1.0 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.3.1)\r\nRequirement already satisfied: responses>=0.9.0 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.13.3)\r\nRequirement already satisfied: xmltodict in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.12.0)\r\nRequirement already satisfied: setuptools in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (52.0.0.post20210125)\r\nRequirement already satisfied: Jinja2>=2.10.1 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.11.3)\r\nRequirement already satisfied: zipp in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.4.1)\r\nRequirement already satisfied: six>1.9 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.15.0)\r\nRequirement already satisfied: ecdsa<0.15 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.14.1)\r\nRequirement already satisfied: docker>=2.5.1 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (5.0.0)\r\nRequirement already satisfied: cfn-lint>=0.4.0 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.49.0)\r\nRequirement already satisfied: grpcio<2,>=1.29.0 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (1.32.0)\r\nCollecting hdfs<3.0.0,>=2.1.0\r\n Using cached hdfs-2.6.0-py3-none-any.whl (33 kB)\r\nCollecting pyarrow>=0.17.1\r\n Using cached 
pyarrow-3.0.0-cp37-cp37m-win_amd64.whl (12.6 MB)\r\nCollecting fastavro<2,>=0.21.4\r\n Using cached fastavro-1.4.0-cp37-cp37m-win_amd64.whl (394 kB)\r\nRequirement already satisfied: httplib2<0.18.0,>=0.8 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.17.4)\r\nCollecting pymongo<4.0.0,>=3.8.0\r\n Using cached pymongo-3.11.3-cp37-cp37m-win_amd64.whl (382 kB)\r\nCollecting crcmod<2.0,>=1.7\r\n Using cached crcmod-1.7-py3-none-any.whl\r\nCollecting avro-python3!=1.9.2,<1.10.0,>=1.8.1\r\n Using cached avro_python3-1.9.2.1-py3-none-any.whl\r\nRequirement already satisfied: typing-extensions<3.8.0,>=3.7.0 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (3.7.4.3)\r\nRequirement already satisfied: future<1.0.0,>=0.18.2 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.18.2)\r\nCollecting oauth2client<5,>=2.0.1\r\n Using cached oauth2client-4.1.3-py2.py3-none-any.whl (98 kB)\r\nCollecting pydot<2,>=1.2.0\r\n Using cached pydot-1.4.2-py2.py3-none-any.whl (21 kB)\r\nRequirement already satisfied: protobuf<4,>=3.12.2 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (3.15.8)\r\nRequirement already satisfied: wrapt in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from aws-xray-sdk!=0.96,>=0.93->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.12.1)\r\nCollecting matplotlib\r\n Using cached matplotlib-3.4.1-cp37-cp37m-win_amd64.whl (7.1 MB)\r\nRequirement already satisfied: junit-xml~=1.9 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.9)\r\nRequirement already satisfied: jsonpatch in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.32)\r\nRequirement already satisfied: jsonschema~=3.0 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.2.0)\r\nRequirement already satisfied: networkx~=2.4 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.5.1)\r\nRequirement already satisfied: aws-sam-translator>=1.35.0 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.35.0)\r\nRequirement already satisfied: cffi>=1.12 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from cryptography>=2.3.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.14.5)\r\nRequirement already satisfied: pycparser in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from cffi>=1.12->cryptography>=2.3.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.20)\r\nRequirement already satisfied: pywin32==227 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from docker>=2.5.1->moto[s3]==1.3.16->datasets==1.5.0.dev0) (227)\r\nRequirement already satisfied: websocket-client>=0.32.0 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from docker>=2.5.1->moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.58.0)\r\nRequirement already satisfied: docopt in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from hdfs<3.0.0,>=2.1.0->apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.6.2)\r\nRequirement already satisfied: filelock in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from 
huggingface_hub<0.1.0->datasets==1.5.0.dev0) (3.0.12)\r\nRequirement already satisfied: pyrsistent>=0.14.0 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from jsonschema~=3.0->cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.17.3)\r\nRequirement already satisfied: attrs>=17.4.0 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from jsonschema~=3.0->cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (20.3.0)\r\nRequirement already satisfied: decorator<5,>=4.3 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from networkx~=2.4->cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (4.4.2)\r\nRequirement already satisfied: rsa>=3.1.4 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from oauth2client<5,>=2.0.1->apache-beam>=2.24.0->datasets==1.5.0.dev0) (4.7.2)\r\nRequirement already satisfied: pyasn1-modules>=0.0.5 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from oauth2client<5,>=2.0.1->apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.2.8)\r\nRequirement already satisfied: pyasn1>=0.1.7 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from oauth2client<5,>=2.0.1->apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.4.8)\r\nRequirement already satisfied: pyparsing>=2.1.4 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from pydot<2,>=1.2.0->apache-beam>=2.24.0->datasets==1.5.0.dev0) (2.4.7)\r\nRequirement already satisfied: certifi>=2017.4.17 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from requests>=2.19.0->datasets==1.5.0.dev0) (2020.12.5)\r\nRequirement already satisfied: chardet<5,>=3.0.2 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from requests>=2.19.0->datasets==1.5.0.dev0) (4.0.0)\r\nCollecting keras-preprocessing~=1.1.2\r\n Using cached Keras_Preprocessing-1.1.2-py2.py3-none-any.whl (42 kB)\r\nRequirement already satisfied: termcolor~=1.1.0 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from tensorflow>=2.3->datasets==1.5.0.dev0) (1.1.0)\r\nRequirement already satisfied: tensorboard~=2.4 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from tensorflow>=2.3->datasets==1.5.0.dev0) (2.5.0)\r\nRequirement already satisfied: wheel~=0.35 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from tensorflow>=2.3->datasets==1.5.0.dev0) (0.36.2)\r\nCollecting opt-einsum~=3.3.0\r\n Using cached opt_einsum-3.3.0-py3-none-any.whl (65 kB)\r\nCollecting gast==0.3.3\r\n Using cached gast-0.3.3-py2.py3-none-any.whl (9.7 kB)\r\nCollecting google-pasta~=0.2\r\n Using cached google_pasta-0.2.0-py3-none-any.whl (57 kB)\r\nRequirement already satisfied: tensorflow-estimator<2.5.0,>=2.4.0 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from tensorflow>=2.3->datasets==1.5.0.dev0) (2.4.0)\r\nCollecting astunparse~=1.6.3\r\n Using cached astunparse-1.6.3-py2.py3-none-any.whl (12 kB)\r\nCollecting flatbuffers~=1.12.0\r\n Using cached flatbuffers-1.12-py2.py3-none-any.whl (15 kB)\r\nCollecting h5py~=2.10.0\r\n Using cached h5py-2.10.0-cp37-cp37m-win_amd64.whl (2.5 MB)\r\nRequirement already satisfied: markdown>=2.6.8 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (3.3.4)\r\nRequirement already satisfied: tensorboard-plugin-wit>=1.6.0 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (1.8.0)\r\nRequirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in 
c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (0.4.4)\r\nRequirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (0.6.0)\r\nRequirement already satisfied: google-auth<2,>=1.6.3 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (1.30.0)\r\nRequirement already satisfied: cachetools<5.0,>=2.0.0 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from google-auth<2,>=1.6.3->tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (4.2.2)\r\nRequirement already satisfied: requests-oauthlib>=0.7.0 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (1.3.0)\r\nRequirement already satisfied: oauthlib>=3.0.0 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (3.1.0)\r\nRequirement already satisfied: regex!=2019.12.17 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from transformers->datasets==1.5.0.dev0) (2021.4.4)\r\nRequirement already satisfied: tokenizers<0.11,>=0.10.1 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from transformers->datasets==1.5.0.dev0) (0.10.2)\r\nRequirement already satisfied: sacremoses in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from transformers->datasets==1.5.0.dev0) (0.0.45)\r\nRequirement already satisfied: packaging in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from transformers->datasets==1.5.0.dev0) (20.9)\r\nCollecting pathspec<1,>=0.8.1\r\n Using cached pathspec-0.8.1-py2.py3-none-any.whl (28 kB)\r\nRequirement already satisfied: click>=7.1.2 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from black->datasets==1.5.0.dev0) (7.1.2)\r\nCollecting appdirs\r\n Using cached appdirs-1.4.4-py2.py3-none-any.whl (9.6 kB)\r\nCollecting mypy-extensions>=0.4.3\r\n Using cached mypy_extensions-0.4.3-py2.py3-none-any.whl (4.5 kB)\r\nRequirement already satisfied: typed-ast>=1.4.2 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from black->datasets==1.5.0.dev0) (1.4.3)\r\nCollecting beautifulsoup4\r\n Using cached beautifulsoup4-4.9.3-py3-none-any.whl (115 kB)\r\nRequirement already satisfied: soupsieve>1.2 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from beautifulsoup4->bs4->datasets==1.5.0.dev0) (2.2.1)\r\nCollecting python-Levenshtein\r\n Using cached python-Levenshtein-0.12.2.tar.gz (50 kB)\r\nRequirement already satisfied: jsonpointer>=1.9 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from jsonpatch->cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.1)\r\nRequirement already satisfied: pillow>=6.2.0 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from matplotlib->bert_score>=0.3.6->datasets==1.5.0.dev0) (8.2.0)\r\nRequirement already satisfied: cycler>=0.10 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from matplotlib->bert_score>=0.3.6->datasets==1.5.0.dev0) (0.10.0)\r\nRequirement already satisfied: kiwisolver>=1.0.1 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from matplotlib->bert_score>=0.3.6->datasets==1.5.0.dev0) (1.3.1)\r\nCollecting multiprocess\r\n Using 
cached multiprocess-0.70.11-py3-none-any.whl (98 kB)\r\n Using cached multiprocess-0.70.10.zip (2.4 MB)\r\n Using cached multiprocess-0.70.9-py3-none-any.whl\r\nRequirement already satisfied: joblib in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from nltk->datasets==1.5.0.dev0) (1.0.1)\r\nCollecting et-xmlfile\r\n Using cached et_xmlfile-1.1.0-py3-none-any.whl (4.7 kB)\r\nRequirement already satisfied: pyzstd<0.15.0,>=0.14.4 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from py7zr->datasets==1.5.0.dev0) (0.14.4)\r\nCollecting pyppmd<0.13.0,>=0.12.1\r\n Using cached pyppmd-0.12.1-cp37-cp37m-win_amd64.whl (32 kB)\r\nCollecting pycryptodome>=3.6.6\r\n Using cached pycryptodome-3.10.1-cp35-abi3-win_amd64.whl (1.6 MB)\r\nCollecting bcj-cffi<0.6.0,>=0.5.1\r\n Using cached bcj_cffi-0.5.1-cp37-cp37m-win_amd64.whl (21 kB)\r\nCollecting multivolumefile<0.3.0,>=0.2.0\r\n Using cached multivolumefile-0.2.3-py3-none-any.whl (17 kB)\r\nRequirement already satisfied: iniconfig in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from pytest->datasets==1.5.0.dev0) (1.1.1)\r\nRequirement already satisfied: py>=1.8.2 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from pytest->datasets==1.5.0.dev0) (1.10.0)\r\nRequirement already satisfied: pluggy<1.0.0a1,>=0.12 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from pytest->datasets==1.5.0.dev0) (0.13.1)\r\nRequirement already satisfied: atomicwrites>=1.0 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from pytest->datasets==1.5.0.dev0) (1.4.0)\r\nRequirement already satisfied: colorama in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from pytest->datasets==1.5.0.dev0) (0.4.4)\r\nCollecting pytest-forked\r\n Using cached pytest_forked-1.3.0-py2.py3-none-any.whl (4.7 kB)\r\nCollecting execnet>=1.1\r\n Using cached execnet-1.8.0-py2.py3-none-any.whl (39 kB)\r\nRequirement already satisfied: apipkg>=1.4 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from execnet>=1.1->pytest-xdist->datasets==1.5.0.dev0) (1.5)\r\nCollecting portalocker==2.0.0\r\n Using cached portalocker-2.0.0-py2.py3-none-any.whl (11 kB)\r\nRequirement already satisfied: scikit-learn>=0.21.3 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from seqeval->datasets==1.5.0.dev0) (0.24.2)\r\nRequirement already satisfied: threadpoolctl>=2.0.0 in c:\\programdata\\anaconda3\\envs\\env\\lib\\site-packages (from scikit-learn>=0.21.3->seqeval->datasets==1.5.0.dev0) (2.1.0)\r\nBuilding wheels for collected packages: python-Levenshtein\r\n Building wheel for python-Levenshtein (setup.py) ... 
error\r\n ERROR: Command errored out with exit status 1:\r\n command: 'C:\\ProgramData\\Anaconda3\\envs\\env\\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '\"'\"'C:\\\\Users\\\\VKC~1\\\\AppData\\\\Local\\\\Temp\\\\pip-install-ynt_dbm4\\\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\\\setup.py'\"'\"'; __file__='\"'\"'C:\\\\Users\\\\VKC~1\\\\AppData\\\\Local\\\\Temp\\\\pip-install-ynt_dbm4\\\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\\\setup.py'\"'\"';f=getattr(tokenize, '\"'\"'open'\"'\"', open)(__file__);code=f.read().replace('\"'\"'\\r\\n'\"'\"', '\"'\"'\\n'\"'\"');f.close();exec(compile(code, __file__, '\"'\"'exec'\"'\"'))' bdist_wheel -d 'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-wheel-8jh7fm18'\r\n cwd: C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\\r\n Complete output (27 lines):\r\n running bdist_wheel\r\n running build\r\n running build_py\r\n creating build\r\n creating build\\lib.win-amd64-3.7\r\n creating build\\lib.win-amd64-3.7\\Levenshtein\r\n copying Levenshtein\\StringMatcher.py -> build\\lib.win-amd64-3.7\\Levenshtein\r\n copying Levenshtein\\__init__.py -> build\\lib.win-amd64-3.7\\Levenshtein\r\n running egg_info\r\n writing python_Levenshtein.egg-info\\PKG-INFO\r\n writing dependency_links to python_Levenshtein.egg-info\\dependency_links.txt\r\n writing entry points to python_Levenshtein.egg-info\\entry_points.txt\r\n writing namespace_packages to python_Levenshtein.egg-info\\namespace_packages.txt\r\n writing requirements to python_Levenshtein.egg-info\\requires.txt\r\n writing top-level names to python_Levenshtein.egg-info\\top_level.txt\r\n reading manifest file 'python_Levenshtein.egg-info\\SOURCES.txt'\r\n reading manifest template 'MANIFEST.in'\r\n warning: no previously-included files matching '*pyc' found anywhere in distribution\r\n warning: no previously-included files matching '*so' found anywhere in distribution\r\n warning: no previously-included files matching '.project' found anywhere in distribution\r\n warning: no previously-included files matching '.pydevproject' found anywhere in distribution\r\n writing manifest file 'python_Levenshtein.egg-info\\SOURCES.txt'\r\n copying Levenshtein\\_levenshtein.c -> build\\lib.win-amd64-3.7\\Levenshtein\r\n copying Levenshtein\\_levenshtein.h -> build\\lib.win-amd64-3.7\\Levenshtein\r\n running build_ext\r\n building 'Levenshtein._levenshtein' extension\r\n error: Microsoft Visual C++ 14.0 or greater is required. 
Get it with \"Microsoft C++ Build Tools\": https:\/\/visualstudio.microsoft.com\/visual-cpp-build-tools\/\r\n ----------------------------------------\r\n ERROR: Failed building wheel for python-Levenshtein\r\n Running setup.py clean for python-Levenshtein\r\nFailed to build python-Levenshtein\r\nInstalling collected packages: python-Levenshtein, pytest-forked, pyppmd, pymongo, pyflakes, pydot, pycryptodome, pycodestyle, pyarrow, portalocker, pathspec, pandas, opt-einsum, oauth2client, nltk, mypy-extensions, multivolumefile, multiprocess, moto, mccabe, matplotlib, keras-preprocessing, huggingface-hub, hdfs, h5py, google-pasta, gast, flatbuffers, fastavro, execnet, et-xmlfile, entrypoints, crcmod, beautifulsoup4, bcj-cffi, avro-python3, astunparse, appdirs, zstandard, tldextract, tensorflow, sklearn, seqeval, sacrebleu, rouge-score, rarfile, pytest-xdist, py7zr, openpyxl, mwparserfromhell, lxml, langdetect, jiwer, isort, flake8, elasticsearch, datasets, conllu, bs4, black, bert-score, apache-beam\r\n Running setup.py install for python-Levenshtein ... error\r\n ERROR: Command errored out with exit status 1:\r\n command: 'C:\\ProgramData\\Anaconda3\\envs\\env\\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '\"'\"'C:\\\\Users\\\\VKC~1\\\\AppData\\\\Local\\\\Temp\\\\pip-install-ynt_dbm4\\\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\\\setup.py'\"'\"'; __file__='\"'\"'C:\\\\Users\\\\VKC~1\\\\AppData\\\\Local\\\\Temp\\\\pip-install-ynt_dbm4\\\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\\\setup.py'\"'\"';f=getattr(tokenize, '\"'\"'open'\"'\"', open)(__file__);code=f.read().replace('\"'\"'\\r\\n'\"'\"', '\"'\"'\\n'\"'\"');f.close();exec(compile(code, __file__, '\"'\"'exec'\"'\"'))' install --record 'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-record-v7l7zitb\\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\\ProgramData\\Anaconda3\\envs\\env\\Include\\python-Levenshtein'\r\n cwd: C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\\r\n Complete output (27 lines):\r\n running install\r\n running build\r\n running build_py\r\n creating build\r\n creating build\\lib.win-amd64-3.7\r\n creating build\\lib.win-amd64-3.7\\Levenshtein\r\n copying Levenshtein\\StringMatcher.py -> build\\lib.win-amd64-3.7\\Levenshtein\r\n copying Levenshtein\\__init__.py -> build\\lib.win-amd64-3.7\\Levenshtein\r\n running egg_info\r\n writing python_Levenshtein.egg-info\\PKG-INFO\r\n writing dependency_links to python_Levenshtein.egg-info\\dependency_links.txt\r\n writing entry points to python_Levenshtein.egg-info\\entry_points.txt\r\n writing namespace_packages to python_Levenshtein.egg-info\\namespace_packages.txt\r\n writing requirements to python_Levenshtein.egg-info\\requires.txt\r\n writing top-level names to python_Levenshtein.egg-info\\top_level.txt\r\n reading manifest file 'python_Levenshtein.egg-info\\SOURCES.txt'\r\n reading manifest template 'MANIFEST.in'\r\n warning: no previously-included files matching '*pyc' found anywhere in distribution\r\n warning: no previously-included files matching '*so' found anywhere in distribution\r\n warning: no previously-included files matching '.project' found anywhere in distribution\r\n warning: no previously-included files matching '.pydevproject' found anywhere in distribution\r\n writing manifest file 'python_Levenshtein.egg-info\\SOURCES.txt'\r\n copying Levenshtein\\_levenshtein.c -> build\\lib.win-amd64-3.7\\Levenshtein\r\n 
copying Levenshtein\\_levenshtein.h -> build\\lib.win-amd64-3.7\\Levenshtein\r\n running build_ext\r\n building 'Levenshtein._levenshtein' extension\r\n error: Microsoft Visual C++ 14.0 or greater is required. Get it with \"Microsoft C++ Build Tools\": https:\/\/visualstudio.microsoft.com\/visual-cpp-build-tools\/\r\n ----------------------------------------\r\nERROR: Command errored out with exit status 1: 'C:\\ProgramData\\Anaconda3\\envs\\env\\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '\"'\"'C:\\\\Users\\\\VKC~1\\\\AppData\\\\Local\\\\Temp\\\\pip-install-ynt_dbm4\\\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\\\setup.py'\"'\"'; __file__='\"'\"'C:\\\\Users\\\\VKC~1\\\\AppData\\\\Local\\\\Temp\\\\pip-install-ynt_dbm4\\\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\\\setup.py'\"'\"';f=getattr(tokenize, '\"'\"'open'\"'\"', open)(__file__);code=f.read().replace('\"'\"'\\r\\n'\"'\"', '\"'\"'\\n'\"'\"');f.close();exec(compile(code, __file__, '\"'\"'exec'\"'\"'))' install --record 'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-record-v7l7zitb\\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\\ProgramData\\Anaconda3\\envs\\env\\Include\\python-Levenshtein' Check the logs for full command output.\r\n```\r\n\r\nHere are conda and python versions:\r\n\r\n```bat\r\n(env) C:\\testing\\datasets>conda --version\r\nconda 4.9.2\r\n\r\n(env) C:\\testing\\datasets>python --version\r\nPython 3.7.10\r\n```\r\n\r\nPlease help me out. Thanks.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2301\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2300","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2300\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2300\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2300\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2300","id":873928169,"node_id":"MDU6SXNzdWU4NzM5MjgxNjk=","number":2300,"title":"Add VoxPopuli","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset 
request","color":"e99695","default":false,"description":"Requesting to add a new dataset"},{"id":2725241052,"node_id":"MDU6TGFiZWwyNzI1MjQxMDUy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/speech","name":"speech","color":"d93f0b","default":false,"description":""}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I'm happy to take this on:) One question: The original unlabelled data is stored unsegmented (see e.g. https:\/\/github.com\/facebookresearch\/voxpopuli\/blob\/main\/voxpopuli\/get_unlabelled_data.py#L30), but segmenting the audio in the dataset would require a dependency on something like soundfile or torchaudio. An alternative could be to provide the segments start and end times as a Sequence and then it's up to the user to perform the segmentation on-the-fly if they wish?","Hey @jfainberg,\r\n\r\nThis sounds great! I think adding a dependency would not be a big problem, however automatically segmenting the data probably means that it would take a very long time to do:\r\n\r\n```python\r\ndataset = load_dataset(\"voxpopuli\", \"french\")\r\n```\r\n\r\n=> so as a start I think your option 2 is the way to go!"],"created_at":1619957860000,"updated_at":1620901912000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** Voxpopuli\r\n- **Description:** VoxPopuli is raw data is collected from 2009-2020 European Parliament event recordings\r\n- **Paper:** https:\/\/arxiv.org\/abs\/2101.00390\r\n- **Data:** https:\/\/github.com\/facebookresearch\/voxpopuli\r\n- **Motivation:** biggest unlabeled speech dataset\r\n\r\n**Note**: Since the dataset is so huge, we should only add the config `10k` in the beginning.\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2300\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2299","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2299\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2299\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2299\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2299","id":873914717,"node_id":"MDU6SXNzdWU4NzM5MTQ3MTc=","number":2299,"title":"My 
iPhone","user":{"login":"Jasonbuchanan1983","id":82856229,"node_id":"MDQ6VXNlcjgyODU2MjI5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/82856229?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Jasonbuchanan1983","html_url":"https:\/\/github.com\/Jasonbuchanan1983","followers_url":"https:\/\/api.github.com\/users\/Jasonbuchanan1983\/followers","following_url":"https:\/\/api.github.com\/users\/Jasonbuchanan1983\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Jasonbuchanan1983\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Jasonbuchanan1983\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Jasonbuchanan1983\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Jasonbuchanan1983\/orgs","repos_url":"https:\/\/api.github.com\/users\/Jasonbuchanan1983\/repos","events_url":"https:\/\/api.github.com\/users\/Jasonbuchanan1983\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Jasonbuchanan1983\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1619953871000,"updated_at":1627032256000,"closed_at":1620029858000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\n- **Name:** *name of the dataset*\n- **Description:** *short description of the dataset (or link to social media or blog post)*\n- **Paper:** *link to the dataset paper if available*\n- **Data:** *link to the Github repository or current dataset location*\n- **Motivation:** *what are some good reasons to have this dataset*\n\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2299\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2298","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2298\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2298\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2298\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2298","id":873771942,"node_id":"MDExOlB1bGxSZXF1ZXN0NjI4NDk2NjM2","number":2298,"title":"Mapping in the distributed 
setting","user":{"login":"TevenLeScao","id":26709476,"node_id":"MDQ6VXNlcjI2NzA5NDc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26709476?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/TevenLeScao","html_url":"https:\/\/github.com\/TevenLeScao","followers_url":"https:\/\/api.github.com\/users\/TevenLeScao\/followers","following_url":"https:\/\/api.github.com\/users\/TevenLeScao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/TevenLeScao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/TevenLeScao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/TevenLeScao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/TevenLeScao\/orgs","repos_url":"https:\/\/api.github.com\/users\/TevenLeScao\/repos","events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1619904185000,"updated_at":1620050093000,"closed_at":1620050093000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2298","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2298","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2298.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2298.patch"},"body":"The barrier trick for distributed mapping as discussed on Thursday with @lhoestq","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2298\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2296","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2296\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2296\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2296\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2296","id":872974907,"node_id":"MDU6SXNzdWU4NzI5NzQ5MDc=","number":2296,"title":"1","user":{"login":"zinnyi","id":82880142,"node_id":"MDQ6VXNlcjgyODgwMTQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/82880142?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/zinnyi","html_url":"https:\/\/github.com\/zinnyi","followers_url":"https:\/\/api.github.com\/users\/zinnyi\/followers","following_url":"https:\/\/api.github.com\/users\/zinnyi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/zinnyi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/zinnyi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/zinnyi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/zinnyi\/orgs","repos_url":"https:\/\/api.github.com\/users\/zinnyi\/repos","events_url":"https:\/\/api.github.com\/users\/zinnyi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/zinnyi\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset 
request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1619805229000,"updated_at":1620029851000,"closed_at":1620029851000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\n- **Name:** *name of the dataset*\n- **Description:** *short description of the dataset (or link to social media or blog post)*\n- **Paper:** *link to the dataset paper if available*\n- **Data:** *link to the Github repository or current dataset location*\n- **Motivation:** *what are some good reasons to have this dataset*\n\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2296\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2295","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2295\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2295\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2295\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2295","id":872902867,"node_id":"MDExOlB1bGxSZXF1ZXN0NjI3NzY0NDk3","number":2295,"title":"Create ExtractManager","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":2851292821,"node_id":"MDU6TGFiZWwyODUxMjkyODIx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/refactoring","name":"refactoring","color":"B67A40","default":false,"description":"Restructuring existing code without changing its external behavior"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/6","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/6\/labels","id":6836458,"node_id":"MDk6TWlsZXN0b25lNjgzNjQ1OA==","number":6,"title":"1.10","description":"Next minor 
release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":29,"state":"closed","created_at":1623178113000,"updated_at":1626881809000,"due_on":1628146800000,"closed_at":1626881809000},"comments":["Hi @lhoestq,\r\n\r\nOnce that #2578 has been merged, I would like to ask you to have a look at this PR: it implements the same logic as the one in #2578 but for all the other file compression formats.\r\n\r\nThanks.","I think all is done @lhoestq ;)"],"created_at":1619802814000,"updated_at":1626099123000,"closed_at":1625731909000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2295","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2295","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2295.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2295.patch"},"body":"Perform refactoring to decouple extract functionality.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2295\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2294","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2294\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2294\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2294\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2294","id":872136075,"node_id":"MDU6SXNzdWU4NzIxMzYwNzU=","number":2294,"title":"Slow #0 when using map to 
tokenize.","user":{"login":"VerdureChen","id":31714566,"node_id":"MDQ6VXNlcjMxNzE0NTY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/31714566?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/VerdureChen","html_url":"https:\/\/github.com\/VerdureChen","followers_url":"https:\/\/api.github.com\/users\/VerdureChen\/followers","following_url":"https:\/\/api.github.com\/users\/VerdureChen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/VerdureChen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/VerdureChen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/VerdureChen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/VerdureChen\/orgs","repos_url":"https:\/\/api.github.com\/users\/VerdureChen\/repos","events_url":"https:\/\/api.github.com\/users\/VerdureChen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/VerdureChen\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Have you tried other values for `preprocessing_num_workers` ? Is it always process 0 that is slower ?\r\nThere are no difference between process 0 and the others except that it processes the first shard of the dataset.","Hi, I have found the reason of it. Before using the map function to tokenize the data, I concatenate the wikipedia and bookcorpus first, like this:\r\n```if args.dataset_name1 is not None:\r\n dataset1 = load_dataset(args.dataset_name1, args.dataset_config_name1, split=\"train\")\r\n dataset1 = dataset1.remove_columns('title')\r\n if args.dataset_name2 is not None:\r\n dataset2 = load_dataset(args.dataset_name2, args.dataset_config_name2,split=\"train\")\r\n assert dataset1.features.type == dataset2.features.type, str(dataset1.features.type)+';'+str(dataset2.features.type)\r\n datasets12 = concatenate_datasets([dataset1, dataset2], split='train')\r\n```\r\nWhen I just use one datasets, e.g. wikipedia, the problem seems no longer exist:\r\n![image](https:\/\/user-images.githubusercontent.com\/31714566\/116967059-13d24380-ace4-11eb-8d14-b7b9c9a275cc.png)\r\n\r\nBookcorpus has more row numbers than Wikipedia, however, it takes much more time to process each batch of wiki than that of bookcorpus. When we first concatenate two datasets and then use _map_ to process the concatenated datasets, e.g. `num_proc=5`, process 0 has to process all of the wikipedia data, causing the problem that #0 takes a longer time to finish the job. \r\n\r\nThe problem is caused by the different characteristic of different datasets. One solution might be using _map_ first to process two datasets seperately, then concatenate the tokenized and processed datasets before input to the `Dataloader`.\r\n\r\n","That makes sense ! You can indeed use `map` on both datasets separately and then concatenate.\r\nAnother option is to concatenate, then shuffle, and then `map`."],"created_at":1619769633000,"updated_at":1620126011000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi, _datasets_ is really amazing! I am following [run_mlm_no_trainer.py](url) to pre-train BERT, and it uses `tokenized_datasets = raw_datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n num_proc=args.preprocessing_num_workers,\r\n remove_columns=column_names,\r\n load_from_cache_file=not args.overwrite_cache,\r\n )` to tokenize by multiprocessing. 
However, I have found that when `num_proc`>1\uff0cthe process _#0_ is much slower than others.\r\nIt looks like this:\r\n![image](https:\/\/user-images.githubusercontent.com\/31714566\/116665555-81246280-a9cc-11eb-8a37-6e608ab310d0.png)\r\nIt takes more than 12 hours for #0, while others just about half an hour. Could anyone tell me it is normal or not, and is there any methods to speed up it?\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2294\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2293","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2293\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2293\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2293\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2293","id":872079385,"node_id":"MDExOlB1bGxSZXF1ZXN0NjI3MDQzNzQ3","number":2293,"title":"imdb dataset from Don't Stop Pretraining Paper","user":{"login":"BobbyManion","id":52530809,"node_id":"MDQ6VXNlcjUyNTMwODA5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/52530809?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/BobbyManion","html_url":"https:\/\/github.com\/BobbyManion","followers_url":"https:\/\/api.github.com\/users\/BobbyManion\/followers","following_url":"https:\/\/api.github.com\/users\/BobbyManion\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/BobbyManion\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/BobbyManion\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/BobbyManion\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/BobbyManion\/orgs","repos_url":"https:\/\/api.github.com\/users\/BobbyManion\/repos","events_url":"https:\/\/api.github.com\/users\/BobbyManion\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/BobbyManion\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1619764848000,"updated_at":1619765665000,"closed_at":1619765665000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2293","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2293","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2293.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2293.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2293\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2292","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2292\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2292\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2292\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2292","id":871230183,"node_id":"MDExOlB1bGxSZXF1ZXN0NjI2MjgzNTYy","number":2292,"title":"Fixed typo 
seperate->separate","user":{"login":"laksh9950","id":32505743,"node_id":"MDQ6VXNlcjMyNTA1NzQz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32505743?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/laksh9950","html_url":"https:\/\/github.com\/laksh9950","followers_url":"https:\/\/api.github.com\/users\/laksh9950\/followers","following_url":"https:\/\/api.github.com\/users\/laksh9950\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/laksh9950\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/laksh9950\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/laksh9950\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/laksh9950\/orgs","repos_url":"https:\/\/api.github.com\/users\/laksh9950\/repos","events_url":"https:\/\/api.github.com\/users\/laksh9950\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/laksh9950\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1619714453000,"updated_at":1619789358000,"closed_at":1619787792000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2292","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2292","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2292.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2292.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2292\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2291","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2291\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2291\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2291\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2291","id":871216757,"node_id":"MDExOlB1bGxSZXF1ZXN0NjI2MjcyNzE5","number":2291,"title":"Don't copy recordbatches in memory during a table 
deepcopy","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1619713565000,"updated_at":1619714075000,"closed_at":1619714074000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2291","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2291","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2291.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2291.patch"},"body":"Fix issue #2276 and hopefully #2134\r\n\r\nThe recordbatches of the `IndexedTableMixin` used to speed up queries to the table were copied in memory during a table deepcopy.\r\nThis resulted in `concatenate_datasets`, `load_from_disk` and other methods to always bring the data in memory.\r\n\r\nI fixed the copy similarly to #2287 and updated the test to make sure it doesn't happen again (added a test for deepcopy + make sure that the immutable arrow objects are passed to the copied table without being copied).\r\n\r\nThe issue was not caught by our tests because the total allocated bytes value in PyArrow isn't updated when deepcopying recordbatches: the copy in memory wasn't detected. 
This behavior looks like a bug in PyArrow, I'll open a ticket on JIRA.\r\n\r\nThanks @samsontmr , @TaskManager91 and @mariosasko for the help\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2291\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2290","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2290\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2290\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2290\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2290","id":871145817,"node_id":"MDExOlB1bGxSZXF1ZXN0NjI2MjEyNTIz","number":2290,"title":"Bbaw egyptian","user":{"login":"phiwi","id":54144149,"node_id":"MDQ6VXNlcjU0MTQ0MTQ5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/54144149?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/phiwi","html_url":"https:\/\/github.com\/phiwi","followers_url":"https:\/\/api.github.com\/users\/phiwi\/followers","following_url":"https:\/\/api.github.com\/users\/phiwi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/phiwi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/phiwi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/phiwi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/phiwi\/orgs","repos_url":"https:\/\/api.github.com\/users\/phiwi\/repos","events_url":"https:\/\/api.github.com\/users\/phiwi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/phiwi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @phiwi,\r\n\r\nThanks for contributing this nice dataset. If you have any blocking problem or question, do not hesitate to ask here. We are pleased to help you.\r\n\r\nCould you please first synchronize with our master branch? From your branch `bbaw_egyptian`, type:\r\n```\r\ngit fetch upstream master\r\ngit merge upstream\/master\r\n```","Thanks ! Can you check that you have `black==21.4b0` and run `make style` again ? This should fix the \"check_code_quality\" CI issue","Reformatted with black.","Hi @phiwi, there are still some minor problems in relation with the tags you used in the dataset card (README.md).\r\n\r\nHere you can find the output of the metadata validator:\r\n```\r\nWARNING:root:\u274c Failed to validate 'datasets\/bbaw_egyptian\/README.md':\r\nCould not validate the metada, found the following errors:\r\n* field 'size_categories':\r\n\t['100K 0 and not line.isspace()]\r\nreturn tokenizer(\r\n examples[\"text\"],\r\n truncation=True,\r\n max_length=max_seq_length,\r\n)\r\n\r\ntokenized_dataset = dataset.map(\r\ntokenize_function,\r\nbatched=True,\r\nnum_proc=num_proc,\r\nremove_columns=[\"text\"],\r\n)\r\n```\r\n\r\nThough the TextDataset was doing a different processing by concatenating all the texts and building blocks of size 512. 
If you need this behavior, then you must apply an additional map function after the tokenization:\r\n\r\n```\r\n# Main data processing function that will concatenate all texts from\r\n# our dataset and generate chunks of max_seq_length.\r\ndef group_texts(examples):\r\n# Concatenate all texts.\r\nconcatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}\r\ntotal_length = len(concatenated_examples[list(examples.keys())[0]])\r\n# We drop the small remainder, we could add padding if the model supported it instead of this drop,\r\n# you can customize this part to your needs.\r\ntotal_length = (total_length \/\/ max_seq_length) * max_seq_length\r\n# Split by chunks of max_len.\r\nresult = {\r\n k: [t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)]\r\n for k, t in concatenated_examples.items()\r\n}\r\nreturn result\r\n\r\n# Note that with `batched=True`, this map processes 1,000 texts together,\r\n# so group_texts throws away a remainder for each of those groups of 1,000 texts.\r\n# You can adjust that batch_size here but a higher value might be slower to preprocess.\r\n\r\ntokenized_dataset = tokenized_dataset.map(\r\ngroup_texts,\r\nbatched=True,\r\nnum_proc=num_proc,\r\n)\r\n```\r\n\r\nThis code comes from the processing of the run_mlm.py example script of transformers\r\n\r\n","Resolved"],"created_at":1619702205000,"updated_at":1621408965000,"closed_at":1621408959000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hello,\r\n\r\nI am trying to load a custom dataset that I will then use for language modeling. The dataset consists of a text file that has a whole document in each line, meaning that each line overpasses the normal 512 tokens limit of most tokenizers.\r\n\r\nI would like to understand what is the process to build a text dataset that tokenizes each line, having previously split the documents in the dataset into lines of a \"tokenizable\" size, as the old TextDataset class would do, where you only had to do the following, and a tokenized dataset without text loss would be available to pass to a DataCollator:\r\n\r\n```\r\nmodel_checkpoint = 'distilbert-base-uncased'\r\n\r\nfrom transformers import AutoTokenizer\r\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint)\r\n\r\nfrom transformers import TextDataset\r\n\r\ndataset = TextDataset(\r\n tokenizer=tokenizer,\r\n file_path=\"path\/to\/text_file.txt\",\r\n block_size=512,\r\n)\r\n```\r\n\r\nFor now, what I have is the following, which, of course, throws an error because each line is longer than the maximum block size in the tokenizer:\r\n\r\n```\r\nimport datasets\r\ndataset = datasets.load_dataset('path\/to\/text_file.txt')\r\n\r\nmodel_checkpoint = 'distilbert-base-uncased'\r\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint)\r\n\r\ndef tokenize_function(examples):\r\n return tokenizer(examples[\"text\"])\r\n\r\ntokenized_datasets = dataset.map(tokenize_function, batched=True, num_proc=4, remove_columns=[\"text\"])\r\n\r\ntokenized_datasets\r\n```\r\n\r\nSo what would be the \"standard\" way of creating a dataset in the way it was done before?\r\n\r\nThank you very much for the help :))","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2285\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2284","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2284\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2284\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2284\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2284","id":870932710,"node_id":"MDExOlB1bGxSZXF1ZXN0NjI2MDM5MDc5","number":2284,"title":"Initialize Imdb dataset as used in Don't Stop Pretraining Paper","user":{"login":"BobbyManion","id":52530809,"node_id":"MDQ6VXNlcjUyNTMwODA5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/52530809?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/BobbyManion","html_url":"https:\/\/github.com\/BobbyManion","followers_url":"https:\/\/api.github.com\/users\/BobbyManion\/followers","following_url":"https:\/\/api.github.com\/users\/BobbyManion\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/BobbyManion\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/BobbyManion\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/BobbyManion\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/BobbyManion\/orgs","repos_url":"https:\/\/api.github.com\/users\/BobbyManion\/repos","events_url":"https:\/\/api.github.com\/users\/BobbyManion\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/BobbyManion\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1619697158000,"updated_at":1619700874000,"closed_at":1619700874000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2284","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2284","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2284.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2284.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2284\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2283","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2283\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2283\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2283\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2283","id":870926475,"node_id":"MDExOlB1bGxSZXF1ZXN0NjI2MDM0MDk5","number":2283,"title":"Initialize imdb dataset from don't stop pretraining 
paper","user":{"login":"BobbyManion","id":52530809,"node_id":"MDQ6VXNlcjUyNTMwODA5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/52530809?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/BobbyManion","html_url":"https:\/\/github.com\/BobbyManion","followers_url":"https:\/\/api.github.com\/users\/BobbyManion\/followers","following_url":"https:\/\/api.github.com\/users\/BobbyManion\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/BobbyManion\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/BobbyManion\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/BobbyManion\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/BobbyManion\/orgs","repos_url":"https:\/\/api.github.com\/users\/BobbyManion\/repos","events_url":"https:\/\/api.github.com\/users\/BobbyManion\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/BobbyManion\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1619696694000,"updated_at":1619697024000,"closed_at":1619697024000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2283","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2283","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2283.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2283.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2283\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2282","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2282\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2282\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2282\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2282","id":870900332,"node_id":"MDExOlB1bGxSZXF1ZXN0NjI2MDEyMzM3","number":2282,"title":"Initialize imdb dataset from don't stop pretraining 
paper","user":{"login":"BobbyManion","id":52530809,"node_id":"MDQ6VXNlcjUyNTMwODA5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/52530809?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/BobbyManion","html_url":"https:\/\/github.com\/BobbyManion","followers_url":"https:\/\/api.github.com\/users\/BobbyManion\/followers","following_url":"https:\/\/api.github.com\/users\/BobbyManion\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/BobbyManion\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/BobbyManion\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/BobbyManion\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/BobbyManion\/orgs","repos_url":"https:\/\/api.github.com\/users\/BobbyManion\/repos","events_url":"https:\/\/api.github.com\/users\/BobbyManion\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/BobbyManion\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1619695076000,"updated_at":1619696631000,"closed_at":1619696631000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2282","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2282","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2282.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2282.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2282\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2281","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2281\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2281\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2281\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2281","id":870792784,"node_id":"MDExOlB1bGxSZXF1ZXN0NjI1OTI2MjAw","number":2281,"title":"Update multi_woz_v22 
checksum","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1619687351000,"updated_at":1619703695000,"closed_at":1619703694000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2281","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2281","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2281.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2281.patch"},"body":"Fix issue https:\/\/github.com\/huggingface\/datasets\/issues\/1876\r\nThe files were changed in https:\/\/github.com\/budzianowski\/multiwoz\/pull\/72","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2281\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2280","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2280\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2280\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2280\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2280","id":870780431,"node_id":"MDExOlB1bGxSZXF1ZXN0NjI1OTE2Mzcy","number":2280,"title":"Fixed typo 
seperate->separate","user":{"login":"laksh9950","id":32505743,"node_id":"MDQ6VXNlcjMyNTA1NzQz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32505743?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/laksh9950","html_url":"https:\/\/github.com\/laksh9950","followers_url":"https:\/\/api.github.com\/users\/laksh9950\/followers","following_url":"https:\/\/api.github.com\/users\/laksh9950\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/laksh9950\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/laksh9950\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/laksh9950\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/laksh9950\/orgs","repos_url":"https:\/\/api.github.com\/users\/laksh9950\/repos","events_url":"https:\/\/api.github.com\/users\/laksh9950\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/laksh9950\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Thanks for the fix :)\r\nThe CI fail isn't related to your PR. I opened a PR #2286 to fix the CI.\r\nWe'll wait for #2286 to be merged to master first if you don't mind","The PR has been merged ! Feel free to merge master into your branch to fix the CI"],"created_at":1619686546000,"updated_at":1619714482000,"closed_at":1619714476000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2280","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2280","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2280.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2280.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2280\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2279","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2279\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2279\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2279\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2279","id":870431662,"node_id":"MDU6SXNzdWU4NzA0MzE2NjI=","number":2279,"title":"Compatibility with Ubuntu 18 and GLIBC 
2.27?","user":{"login":"tginart","id":11379648,"node_id":"MDQ6VXNlcjExMzc5NjQ4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11379648?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tginart","html_url":"https:\/\/github.com\/tginart","followers_url":"https:\/\/api.github.com\/users\/tginart\/followers","following_url":"https:\/\/api.github.com\/users\/tginart\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tginart\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tginart\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tginart\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tginart\/orgs","repos_url":"https:\/\/api.github.com\/users\/tginart\/repos","events_url":"https:\/\/api.github.com\/users\/tginart\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tginart\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["From the trace this seems like an error in the tokenizer library instead.\r\n\r\nDo you mind opening an issue at https:\/\/github.com\/huggingface\/tokenizers instead?","Hi @tginart, thanks for reporting.\r\n\r\nI think this issue is already open at `tokenizers` library: https:\/\/github.com\/huggingface\/tokenizers\/issues\/685"],"created_at":1619647687000,"updated_at":1619682162000,"closed_at":1619682162000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nFor use on Ubuntu systems, it seems that datasets requires GLIBC 2.29. However, Ubuntu 18 runs with GLIBC 2.27 and it seems [non-trivial to upgrade GLIBC to 2.29 for Ubuntu 18 users](https:\/\/www.digitalocean.com\/community\/questions\/how-install-glibc-2-29-or-higher-in-ubuntu-18-04). \r\n\r\nI'm not sure if there is anything that can be done about this, but I'd like to confirm that using huggingface\/datasets requires either an upgrade to Ubuntu 19\/20 or a hand-rolled install of a higher version of GLIBC.\r\n\r\n## Steps to reproduce the bug\r\n1. clone the transformers repo\r\n2. move to examples\/pytorch\/language-modeling\r\n3. 
run example command:\r\n```python run_clm.py --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir \/tmp\/test-clm```\r\n\r\n\r\n## Expected results\r\nAs described in the transformers repo.\r\n\r\n## Actual results\r\n```Traceback (most recent call last):\r\n File \"run_clm.py\", line 34, in \r\n from transformers import (\r\n File \"\/home\/tginart\/anaconda3\/envs\/huggingface\/lib\/python3.7\/site-packages\/transformers\/__init__.py\", line 2487, in __getattr__\r\n return super().__getattr__(name)\r\n File \"\/home\/tginart\/anaconda3\/envs\/huggingface\/lib\/python3.7\/site-packages\/transformers\/file_utils.py\", line 1699, in __getattr__\r\n module = self._get_module(self._class_to_module[name])\r\n File \"\/home\/tginart\/anaconda3\/envs\/huggingface\/lib\/python3.7\/site-packages\/transformers\/__init__.py\", line 2481, in _get_module\r\n return importlib.import_module(\".\" + module_name, self.__name__)\r\n File \"\/home\/tginart\/anaconda3\/envs\/huggingface\/lib\/python3.7\/importlib\/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"\/home\/tginart\/anaconda3\/envs\/huggingface\/lib\/python3.7\/site-packages\/transformers\/models\/__init__.py\", line 19, in \r\n from . import (\r\n File \"\/home\/tginart\/anaconda3\/envs\/huggingface\/lib\/python3.7\/site-packages\/transformers\/models\/layoutlm\/__init__.py\", line 23, in \r\n from .tokenization_layoutlm import LayoutLMTokenizer\r\n File \"\/home\/tginart\/anaconda3\/envs\/huggingface\/lib\/python3.7\/site-packages\/transformers\/models\/layoutlm\/tokenization_layoutlm.py\", line 19, in \r\n from ..bert.tokenization_bert import BertTokenizer\r\n File \"\/home\/tginart\/anaconda3\/envs\/huggingface\/lib\/python3.7\/site-packages\/transformers\/models\/bert\/tokenization_bert.py\", line 23, in \r\n from ...tokenization_utils import PreTrainedTokenizer, _is_control, _is_punctuation, _is_whitespace\r\n File \"\/home\/tginart\/anaconda3\/envs\/huggingface\/lib\/python3.7\/site-packages\/transformers\/tokenization_utils.py\", line 26, in \r\n from .tokenization_utils_base import (\r\n File \"\/home\/tginart\/anaconda3\/envs\/huggingface\/lib\/python3.7\/site-packages\/transformers\/tokenization_utils_base.py\", line 68, in \r\n from tokenizers import AddedToken\r\n File \"\/home\/tginart\/anaconda3\/envs\/huggingface\/lib\/python3.7\/site-packages\/tokenizers\/__init__.py\", line 79, in \r\n from .tokenizers import (\r\nImportError: \/lib\/x86_64-linux-gnu\/libm.so.6: version `GLIBC_2.29' not found (required by \/home\/tginart\/anaconda3\/envs\/huggingface\/lib\/python3.7\/site-packages\/tokenizers\/tokenizers.cpython-37m-x86_64-linux-gnu.so)\r\n```\r\n\r\n## Versions\r\nPaste the output of the following code:\r\n```\r\n- Datasets: 1.6.1\r\n- Python: 3.7.10 (default, Feb 26 2021, 18:47:35) \r\n[GCC 7.3.0]\r\n- Platform: Linux-4.15.0-128-generic-x86_64-with-debian-buster-sid\r\n\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2279\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2278","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2278\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2278\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2278\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2278","id":870088059,"node_id":"MDU6SXNzdWU4NzAwODgwNTk=","number":2278,"title":"Loss result inGptNeoForCasual","user":{"login":"Yossillamm","id":51174606,"node_id":"MDQ6VXNlcjUxMTc0NjA2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/51174606?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Yossillamm","html_url":"https:\/\/github.com\/Yossillamm","followers_url":"https:\/\/api.github.com\/users\/Yossillamm\/followers","following_url":"https:\/\/api.github.com\/users\/Yossillamm\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Yossillamm\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Yossillamm\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Yossillamm\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Yossillamm\/orgs","repos_url":"https:\/\/api.github.com\/users\/Yossillamm\/repos","events_url":"https:\/\/api.github.com\/users\/Yossillamm\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Yossillamm\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! I think you might have to ask on the `transformers` repo on or the forum at https:\/\/discuss.huggingface.co\/\r\n\r\nClosing since it's not related to this library"],"created_at":1619624392000,"updated_at":1620317663000,"closed_at":1620317663000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Is there any way you give the \" loss\" and \"logits\" results in the gpt neo api? 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2278\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2277","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2277\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2277\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2277\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2277","id":870071994,"node_id":"MDExOlB1bGxSZXF1ZXN0NjI1MzI5NjIz","number":2277,"title":"Create CacheManager","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":2851292821,"node_id":"MDU6TGFiZWwyODUxMjkyODIx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/refactoring","name":"refactoring","color":"B67A40","default":false,"description":"Restructuring existing code without changing its external behavior"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/8","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/8","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/8\/labels","id":6968069,"node_id":"MI_kwDODunzps4AalMF","number":8,"title":"1.12","description":"Next minor 
release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":5,"closed_issues":1,"state":"open","created_at":1626881696000,"updated_at":1630565260000,"due_on":1630306800000,"closed_at":null},"comments":[],"created_at":1619623422000,"updated_at":1630560811000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2277","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2277","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2277.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2277.patch"},"body":"Perform refactoring to decouple cache functionality (method `as_dataset`).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2277\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2276","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2276\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2276\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2276\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2276","id":870010511,"node_id":"MDU6SXNzdWU4NzAwMTA1MTE=","number":2276,"title":"concatenate_datasets loads all the data into 
memory","user":{"login":"TaskManager91","id":7063207,"node_id":"MDQ6VXNlcjcwNjMyMDc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7063207?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/TaskManager91","html_url":"https:\/\/github.com\/TaskManager91","followers_url":"https:\/\/api.github.com\/users\/TaskManager91\/followers","following_url":"https:\/\/api.github.com\/users\/TaskManager91\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/TaskManager91\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/TaskManager91\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/TaskManager91\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/TaskManager91\/orgs","repos_url":"https:\/\/api.github.com\/users\/TaskManager91\/repos","events_url":"https:\/\/api.github.com\/users\/TaskManager91\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/TaskManager91\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Therefore, when I try to concatenate larger datasets (5x 35GB data sets) I also get an out of memory error, since over 90GB of swap space was used at the time of the crash:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nMemoryError 
Traceback (most recent call last)\r\n in \r\n 20 print(file_name)\r\n 21 cv_batch = load_from_disk(file_name)\r\n---> 22 cv_sampled_train = concatenate_datasets([cv_sampled_train, cv_batch])\r\n 23 \r\n 24 print(\"Saving to disk!\")\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\arrow_dataset.py in concatenate_datasets(dsets, info, split, axis)\r\n 2891 \r\n 2892 # Concatenate tables\r\n-> 2893 table = concat_tables([dset._data for dset in dsets if len(dset._data) > 0], axis=axis)\r\n 2894 table = update_metadata_with_features(table, None)\r\n 2895 \r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\table.py in concat_tables(tables, axis)\r\n 837 if len(tables) == 1:\r\n 838 return tables[0]\r\n--> 839 return ConcatenationTable.from_tables(tables, axis=axis)\r\n 840 \r\n 841 \r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\table.py in from_tables(cls, tables, axis)\r\n 697 return result\r\n 698 \r\n--> 699 blocks = to_blocks(tables[0])\r\n 700 for table in tables[1:]:\r\n 701 table_blocks = to_blocks(table)\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\table.py in to_blocks(table)\r\n 669 return [[InMemoryTable(table)]]\r\n 670 elif isinstance(table, ConcatenationTable):\r\n--> 671 return copy.deepcopy(table.blocks)\r\n 672 else:\r\n 673 return [[table]]\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 151 copier = getattr(x, \"__deepcopy__\", None)\r\n 152 if copier is not None:\r\n--> 153 y = copier(memo)\r\n 154 else:\r\n 155 reductor = dispatch_table.get(cls)\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\table.py in __deepcopy__(self, memo)\r\n 143 # by adding it to the memo, self.table won't be copied\r\n 144 memo[id(self.table)] = self.table\r\n--> 145 return _deepcopy(self, memo)\r\n 146 \r\n 147 def __getstate__(self):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\datasets\\table.py in _deepcopy(x, memo)\r\n 62 memo[id(x)] = result\r\n 63 for k, v in x.__dict__.items():\r\n---> 64 setattr(result, k, copy.deepcopy(v, memo))\r\n 65 return result\r\n 66 \r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 170 y = 
x\r\n 171 else:\r\n--> 172 y = _reconstruct(x, memo, *rv)\r\n 173 \r\n 174 # If is its own copy, don't memoize.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)\r\n 262 if deep and args:\r\n 263 args = (deepcopy(arg, memo) for arg in args)\r\n--> 264 y = func(*args)\r\n 265 if deep:\r\n 266 memo[id(x)] = y\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in (.0)\r\n 261 deep = memo is not None\r\n 262 if deep and args:\r\n--> 263 args = (deepcopy(arg, memo) for arg in args)\r\n 264 y = func(*args)\r\n 265 if deep:\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 170 y = x\r\n 171 else:\r\n--> 172 y = _reconstruct(x, memo, *rv)\r\n 173 \r\n 174 # If is its own copy, don't memoize.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy)\r\n 262 if deep and args:\r\n 263 args = (deepcopy(arg, memo) for arg in args)\r\n--> 264 y = func(*args)\r\n 265 if deep:\r\n 266 memo[id(x)] = y\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in (.0)\r\n 261 deep = memo is not None\r\n 262 if deep and args:\r\n--> 263 args = (deepcopy(arg, memo) for arg in args)\r\n 264 y = func(*args)\r\n 265 if deep:\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_tuple(x, memo, deepcopy)\r\n 208 \r\n 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):\r\n--> 210 y = [deepcopy(a, memo) for a in x]\r\n 211 # We're not going to put the tuple in the memo, but it's still important we\r\n 212 # check for it, in case the tuple contains recursive mutable structures.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in (.0)\r\n 208 \r\n 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):\r\n--> 210 y = [deepcopy(a, memo) for a in x]\r\n 211 # We're not going to put the tuple in the memo, but it's still important we\r\n 212 # check for it, in case the tuple contains recursive mutable structures.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_tuple(x, memo, deepcopy)\r\n 208 \r\n 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):\r\n--> 210 y = [deepcopy(a, memo) for a in x]\r\n 211 # We're not going to put 
the tuple in the memo, but it's still important we\r\n 212 # check for it, in case the tuple contains recursive mutable structures.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in (.0)\r\n 208 \r\n 209 def _deepcopy_tuple(x, memo, deepcopy=deepcopy):\r\n--> 210 y = [deepcopy(a, memo) for a in x]\r\n 211 # We're not going to put the tuple in the memo, but it's still important we\r\n 212 # check for it, in case the tuple contains recursive mutable structures.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 144 copier = _deepcopy_dispatch.get(cls)\r\n 145 if copier is not None:\r\n--> 146 y = copier(x, memo)\r\n 147 else:\r\n 148 if issubclass(cls, type):\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in _deepcopy_list(x, memo, deepcopy)\r\n 203 append = y.append\r\n 204 for a in x:\r\n--> 205 append(deepcopy(a, memo))\r\n 206 return y\r\n 207 d[list] = _deepcopy_list\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\copy.py in deepcopy(x, memo, _nil)\r\n 159 reductor = getattr(x, \"__reduce_ex__\", None)\r\n 160 if reductor is not None:\r\n--> 161 rv = reductor(4)\r\n 162 else:\r\n 163 reductor = getattr(x, \"__reduce__\", None)\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\pyarrow\\io.pxi in pyarrow.lib.Buffer.__reduce_ex__()\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\pyarrow\\io.pxi in pyarrow.lib.Buffer.to_pybytes()\r\n\r\nMemoryError: \r\n\r\n```","Hi ! this looks like an important issue. Let me try to reproduce this.\r\nCc @samsontmr this might be related to the memory issue you have in #2134 ","@lhoestq Just went to open a similar issue.\r\n\r\nIt seems like deep copying (tested on master) the dataset object writes the table's record batches (`dset._data._batches`) into RAM.\r\n\r\nTo find the bug, I modified the `_deepcopy` function in `table.py` as follows:\r\n```python\r\ndef _deepcopy(x, memo: dict):\r\n \"\"\"deepcopy a regular class instance\"\"\"\r\n import psutil # pip install this package\r\n import time\r\n cls = x.__class__\r\n result = cls.__new__(cls)\r\n memo[id(x)] = result\r\n for k, v in x.__dict__.items():\r\n print(\"=\"* 50)\r\n print(\"Current memory:\", psutil.virtual_memory().percent)\r\n print(f\"Saving object {k} with value {v}\")\r\n setattr(result, k, copy.deepcopy(v, memo))\r\n time.sleep(5)\r\n print(\"Memory after copy:\", psutil.virtual_memory().percent)\r\n return result\r\n```\r\nTest script:\r\n```python\r\nimport copy\r\nfrom datasets import load_dataset\r\nbk = load_dataset(\"bookcorpus\", split=\"train\")\r\nbk_copy = copy.deepcopy(bk)\r\n```","Thanks for the insights @mariosasko ! I'm working on a fix.\r\nSince this is a big issue I'll make a patch release as soon as this is fixed","Hi @samsontmr @TaskManager91 the fix is on the master branch, feel free to install `datasets` from source and let us know if you still have issues","We just released `datasets` 1.6.2 that includes the fix :)","thanks it works like a charm! 
:)"],"created_at":1619620041000,"updated_at":1620031315000,"closed_at":1620031315000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nWhen I try to concatenate 2 datasets (10GB each) , the entire data is loaded into memory instead of being written directly to disk.\r\n\r\nInterestingly, this happens when trying to save the new dataset to disk or concatenating it again.\r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/7063207\/116420321-2b21b480-a83e-11eb-9006-8f6ca729fb6f.png)\r\n\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import concatenate_datasets, load_from_disk\r\n\r\ntest_sampled_pro = load_from_disk(\"test_sampled_pro\")\r\nval_sampled_pro = load_from_disk(\"val_sampled_pro\")\r\n\r\nbig_set = concatenate_datasets([test_sampled_pro, val_sampled_pro])\r\n\r\n# Loaded to memory\r\nbig_set.save_to_disk(\"big_set\")\r\n\r\n# Loaded to memory\r\nbig_set = concatenate_datasets([big_set, val_sampled_pro])\r\n```\r\n\r\n## Expected results\r\nThe data should be loaded into memory in batches and then saved directly to disk.\r\n\r\n## Actual results\r\nThe entire data set is loaded into the memory and then saved to the hard disk.\r\n\r\n## Versions\r\nPaste the output of the following code:\r\n```python\r\n- Datasets: 1.6.1\r\n- Python: 3.8.8 (default, Apr 13 2021, 19:58:26) \r\n[GCC 7.3.0]\r\n- Platform: Linux-5.4.72-microsoft-standard-WSL2-x86_64-with-glibc2.10\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2276\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2275","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2275\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2275\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2275\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2275","id":869378311,"node_id":"MDU6SXNzdWU4NjkzNzgzMTE=","number":2275,"title":"SNLI dataset has labels of -1 ","user":{"login":"puzzler10","id":17426779,"node_id":"MDQ6VXNlcjE3NDI2Nzc5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17426779?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/puzzler10","html_url":"https:\/\/github.com\/puzzler10","followers_url":"https:\/\/api.github.com\/users\/puzzler10\/followers","following_url":"https:\/\/api.github.com\/users\/puzzler10\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/puzzler10\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/puzzler10\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/puzzler10\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/puzzler10\/orgs","repos_url":"https:\/\/api.github.com\/users\/puzzler10\/repos","events_url":"https:\/\/api.github.com\/users\/puzzler10\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/puzzler10\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @puzzler10, \r\nThose examples where `gold_label` field was empty, -1 label was alloted to it. In order to remove it you can filter the samples from train\/val\/test splits. 
Here's how you can drop those rows from the dataset:\r\n`dataset = load_dataset(\"snli\")`\r\n`dataset_test_filter = dataset['test'].filter(lambda example: example['label'] != -1)`\r\n\r\nI agree it should have been mentioned in the documentation. I'll raise a PR regarding the same. Thanks for pointing out!"],"created_at":1619569945000,"updated_at":1621258458000,"closed_at":1621258458000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"There are a number of rows with a label of -1 in the SNLI dataset. The dataset descriptions [here](https:\/\/nlp.stanford.edu\/projects\/snli\/) and [here](https:\/\/github.com\/huggingface\/datasets\/tree\/master\/datasets\/snli) don't list -1 as a label possibility, and neither does the dataset viewer. As examples, see index 107 or 124 of the test set.\r\n\r\nIt isn't clear what these labels mean. I found a [line of code](https:\/\/github.com\/huggingface\/datasets\/blob\/80e59ef178d3bb2090d091bc32315c655eb0633d\/datasets\/snli\/snli.py#L94) that seems to put them in but it seems still unclear why they are there. The current workaround is to just drop the rows from any model being trained. \r\n\r\nPerhaps the documentation should be updated.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2275\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2274","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2274\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2274\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2274\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2274","id":869186276,"node_id":"MDExOlB1bGxSZXF1ZXN0NjI0NTkyMjQx","number":2274,"title":"Always update metadata in arrow schema","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1619551317000,"updated_at":1619690271000,"closed_at":1619690270000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2274","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2274","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2274.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2274.patch"},"body":"We store a 
redundant copy of the features in the metadata of the schema of the arrow table. This is used to recover the features when doing `Dataset.from_file`. These metadata are updated after each transfor, that changes the feature types.\r\n\r\nFor each function that transforms the feature types of the dataset, I added a step in the tests to make sure the metadata in the arrow schema are up to date.\r\n\r\nI also added a line to update the metadata directly in the Dataset.__init__ method.\r\nThis way even a dataset instantiated with __init__ will have a table with the right metadata.\r\n\r\ncc @mariosasko ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2274\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2273","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2273\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2273\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2273\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2273","id":869046290,"node_id":"MDExOlB1bGxSZXF1ZXN0NjI0NDcxODc1","number":2273,"title":"Added CUAD metrics","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1619542152000,"updated_at":1619704787000,"closed_at":1619704787000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2273","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2273","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2273.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2273.patch"},"body":"`EM`, `F1`, `AUPR`, `Precision@80%Recall`, and `Precision@90%Recall` metrics supported for CUAD","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2273\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2272","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2272\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2272\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2272\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2272","id":869017977,"node_id":"MDU6SXNzdWU4NjkwMTc5Nzc=","number":2272,"title":"Bug in Dataset.class_encode_column","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This has been fixed in this commit: https:\/\/github.com\/huggingface\/datasets\/pull\/2254\/commits\/88676c930216cd4cc31741b99827b477d2b46cb6\r\n\r\nIt was introduced in #2246 : using map with `input_columns` doesn't return the other columns anymore"],"created_at":1619539998000,"updated_at":1619787267000,"closed_at":1619787267000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\nAll the rest of the columns except the one passed to `Dataset.class_encode_column` are discarded.\r\n\r\n## Expected results\r\n\r\nAll the original columns should be kept.\r\n\r\nThis needs regression tests.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2272\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2271","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2271\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2271\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2271\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2271","id":869002141,"node_id":"MDU6SXNzdWU4NjkwMDIxNDE=","number":2271,"title":"Synchronize table metadata with 
features","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["See PR #2274 "],"created_at":1619538913000,"updated_at":1619614105000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"**Is your feature request related to a problem? Please describe.**\r\n\r\nAs pointed out in this [comment](https:\/\/github.com\/huggingface\/datasets\/pull\/2145#discussion_r621326767):\r\n> Metadata stored in the schema is just a redundant information regarding the feature types.\r\nIt is used when calling Dataset.from_file to know which feature types to use.\r\nThese metadata are stored in the schema of the pyarrow table by using `update_metadata_with_features`.\r\nHowever this something that's almost never tested properly.\r\n\r\n**Describe the solution you'd like**\r\n\r\nWe should find a way to always make sure that the metadata (in `self.data.schema.metadata`) are synced with the actual feature types (in `self.info.features`).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2271\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2270","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2270\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2270\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2270\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2270","id":868913660,"node_id":"MDExOlB1bGxSZXF1ZXN0NjI0MzU5Njky","number":2270,"title":"Fix iterable interface expected by 
numpy","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["It's been fixed in this commit: https:\/\/github.com\/huggingface\/datasets\/commit\/549110e08238b3716a5904667095fb003acda54e\r\n\r\nBasically #2246 broke querying an index with a simple iterable.\r\nWith the fix, it's again possible to use iterables and we can keep RandIter as it is.\r\n\r\nClosing since the fix is already on master"],"created_at":1619534156000,"updated_at":1619631567000,"closed_at":1619631567000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2270","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2270","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2270.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2270.patch"},"body":"Numpy expects the old iterable interface with `__getitem__` instead of `__iter__`.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2270\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2269","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2269\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2269\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2269\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2269","id":868878468,"node_id":"MDExOlB1bGxSZXF1ZXN0NjI0MzMwNDA3","number":2269,"title":"Fix query table with 
iterable","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1619531978000,"updated_at":1619533317000,"closed_at":1619533316000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2269","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2269","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2269.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2269.patch"},"body":"The benchmark runs are failing on master because it tries to use an iterable to query the dataset.\r\nHowever there's currently an issue caused by the use of `np.array` instead of `np.fromiter` on the iterable.\r\nThis PR fixes it","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2269\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2268","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2268\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2268\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2268\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2268","id":868773380,"node_id":"MDExOlB1bGxSZXF1ZXN0NjI0MjQyODg1","number":2268,"title":"Don't use pyarrow 4.0.0 since it segfaults when casting a sliced ListArray of 
integers","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq note that the segfault also occurs on Linux.","Created the ticket at\r\nhttps:\/\/issues.apache.org\/jira\/browse\/ARROW-12568","@lhoestq the ticket you mentioned is now in state resolved. Pyarrow supports AArch64 after version 4.0.0. Because of this restriction `datasets` is not installing in AArch64 systems."],"created_at":1619524708000,"updated_at":1623501889000,"closed_at":1619531000000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2268","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2268","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2268.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2268.patch"},"body":"This test `tests\/test_table.py::test_concatenation_table_cast` segfaults with the latest update of pyarrow 4.0.0.\r\nSetting `pyarrow<4.0.0` for now. 
I'll open an issue on JIRA once I know more about the origin of the issue","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2268\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2267","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2267\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2267\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2267\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2267","id":868291129,"node_id":"MDU6SXNzdWU4NjgyOTExMjk=","number":2267,"title":"DatasetDict save load Failing test in 1.6 not in 1.5","user":{"login":"timothyjlaurent","id":2000204,"node_id":"MDQ6VXNlcjIwMDAyMDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2000204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/timothyjlaurent","html_url":"https:\/\/github.com\/timothyjlaurent","followers_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/followers","following_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/orgs","repos_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/repos","events_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting ! We're looking into it","I'm not able to reproduce this, do you think you can provide a code that creates a DatasetDict that has this issue when saving and reloading ?","Hi, I just ran into a similar error. 
Here is the minimal code to reproduce:\r\n```python\r\nfrom datasets import load_dataset, DatasetDict\r\nds = load_dataset('super_glue', 'multirc')\r\n\r\nds.save_to_disk('tempds')\r\n\r\nds = DatasetDict.load_from_disk('tempds')\r\n\r\n```\r\n\r\n```bash\r\nReusing dataset super_glue (\/home\/idahl\/.cache\/huggingface\/datasets\/super_glue\/multirc\/1.0.2\/2fb163bca9085c1deb906aff20f00c242227ff704a4e8c9cfdfe820be3abfc83)\r\nTraceback (most recent call last):\r\n File \"\/home\/idahl\/eval-util-expl\/multirc\/tmp.py\", line 7, in \r\n ds = DatasetDict.load_from_disk('tempds')\r\n File \"\/home\/idahl\/miniconda3\/envs\/eval-util-expl\/lib\/python3.9\/site-packages\/datasets\/dataset_dict.py\", line 710, in load_from_disk\r\n dataset_dict[k] = Dataset.load_from_disk(dataset_dict_split_path, fs, keep_in_memory=keep_in_memory)\r\n File \"\/home\/idahl\/miniconda3\/envs\/eval-util-expl\/lib\/python3.9\/site-packages\/datasets\/arrow_dataset.py\", line 687, in load_from_disk\r\n return Dataset(\r\n File \"\/home\/idahl\/miniconda3\/envs\/eval-util-expl\/lib\/python3.9\/site-packages\/datasets\/arrow_dataset.py\", line 274, in __init__\r\n raise ValueError(\r\nValueError: External features info don't match the dataset:\r\nGot\r\n{'answer': Value(dtype='string', id=None), 'idx': {'answer': Value(dtype='int32', id=None), 'paragraph': Value(dtype='int32', id=None), 'question': Value(dtype='int32', id=None)}, 'label': ClassLabel(num_classes=2, names=['False', 'True'], names_file=None, id=None), 'paragraph': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None)}\r\nwith type\r\nstruct, label: int64, paragraph: string, question: string>\r\n\r\nbut expected something like\r\n{'answer': Value(dtype='string', id=None), 'idx': {'paragraph': Value(dtype='int32', id=None), 'question': Value(dtype='int32', id=None), 'answer': Value(dtype='int32', id=None)}, 'label': Value(dtype='int64', id=None), 'paragraph': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None)}\r\nwith type\r\nstruct, label: int64, paragraph: string, question: string>\r\n\r\n```\r\n\r\nThe non-matching part seems to be\r\n`'label': ClassLabel(num_classes=2, names=['False', 'True'], names_file=None, id=None),`\r\nvs \r\n`'label': Value(dtype='int64', id=None),`\r\n\r\nAnd the order in the `1.6` -- fixes the problem.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\n\r\n### Load a dataset dict from jsonl \r\n\r\npath = '\/test\/foo'\r\n\r\nds_dict.save_to_disk(path)\r\n\r\nds_from_disk = DatasetDict.load_from_disk(path). ## <-- this is where I see the error on 1.6\r\n```\r\n\r\n## Expected results\r\n\r\nUpgrading to 1.6 shouldn't break that test. 
We should be able to serialize to and from disk.\r\n\r\n## Actual results\r\n```\r\n # Infer features if None\r\n inferred_features = Features.from_arrow_schema(arrow_table.schema)\r\n if self.info.features is None:\r\n self.info.features = inferred_features\r\n \r\n # Infer fingerprint if None\r\n \r\n if self._fingerprint is None:\r\n self._fingerprint = generate_fingerprint(self)\r\n \r\n # Sanity checks\r\n \r\n assert self.features is not None, \"Features can't be None in a Dataset object\"\r\n assert self._fingerprint is not None, \"Fingerprint can't be None in a Dataset object\"\r\n if self.info.features.type != inferred_features.type:\r\n> raise ValueError(\r\n \"External features info don't match the dataset:\\nGot\\n{}\\nwith type\\n{}\\n\\nbut expected something like\\n{}\\nwith type\\n{}\".format(\r\n self.info.features, self.info.features.type, inferred_features, inferred_features.type\r\n )\r\n )\r\nE ValueError: External features info don't match the dataset:\r\nE Got\r\nE {'_input_hash': Value(dtype='int64', id=None), '_task_hash': Value(dtype='int64', id=None), '_view_id': Value(dtype='string', id=None), 'answer': Value(dtype='string', id=None), 'encoding__ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'encoding__offsets': Sequence(feature=Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), length=-1, id=None), 'encoding__overflowing': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None), 'encoding__tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'encoding__words': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'ner_ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'ner_labels': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'relations': [{'child': Value(dtype='int64', id=None), 'child_span': {'end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None)}, 'color': Value(dtype='string', id=None), 'head': Value(dtype='int64', id=None), 'head_span': {'end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None)}, 'label': Value(dtype='string', id=None)}], 'spans': [{'end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None), 'token_end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None), 'type': Value(dtype='string', id=None)}], 'text': Value(dtype='string', id=None), 'tokens': [{'disabled': Value(dtype='bool', id=None), 'end': Value(dtype='int64', id=None), 'id': Value(dtype='int64', id=None), 'start': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None), 'ws': Value(dtype='bool', id=None)}]}\r\nE with type\r\nE struct<_input_hash: int64, _task_hash: int64, _view_id: string, answer: string, encoding__ids: list, encoding__offsets: list>, encoding__overflowing: list, encoding__tokens: list, encoding__words: list, ner_ids: list, ner_labels: list, relations: list, color: string, head: int64, head_span: struct, label: string>>, spans: list>, text: string, tokens: list>>\r\nE \r\nE but expected something like\r\nE {'_input_hash': Value(dtype='int64', id=None), '_task_hash': Value(dtype='int64', 
id=None), '_view_id': Value(dtype='string', id=None), 'answer': Value(dtype='string', id=None), 'encoding__ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'encoding__offsets': Sequence(feature=Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), length=-1, id=None), 'encoding__overflowing': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None), 'encoding__tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'encoding__words': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'ner_ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'ner_labels': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'relations': [{'head': Value(dtype='int64', id=None), 'child': Value(dtype='int64', id=None), 'head_span': {'start': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None)}, 'child_span': {'start': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'label': Value(dtype='string', id=None)}, 'color': Value(dtype='string', id=None), 'label': Value(dtype='string', id=None)}], 'spans': [{'text': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'token_start': Value(dtype='int64', id=None), 'token_end': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'type': Value(dtype='string', id=None), 'label': Value(dtype='string', id=None)}], 'text': Value(dtype='string', id=None), 'tokens': [{'text': Value(dtype='string', id=None), 'start': Value(dtype='int64', id=None), 'end': Value(dtype='int64', id=None), 'id': Value(dtype='int64', id=None), 'ws': Value(dtype='bool', id=None), 'disabled': Value(dtype='bool', id=None)}]}\r\nE with type\r\nE struct<_input_hash: int64, _task_hash: int64, _view_id: string, answer: string, encoding__ids: list, encoding__offsets: list>, encoding__overflowing: list, encoding__tokens: list, encoding__words: list, ner_ids: list, ner_labels: list, relations: list, child_span: struct, color: string, label: string>>, spans: list>, text: string, tokens: list>>\r\n\r\n..\/..\/..\/..\/..\/.virtualenvs\/tf_ner_rel_lib\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py:274: ValueError\r\n```\r\n## Versions\r\n- Datasets: 1.6.1\r\n- Python: 3.8.5 (default, Jan 26 2021, 10:01:04) \r\n[Clang 12.0.0 (clang-1200.0.32.2)]\r\n- Platform: macOS-10.15.7-x86_64-i386-64bit\r\n\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2267\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2266","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2266\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2266\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2266\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2266","id":867864353,"node_id":"MDExOlB1bGxSZXF1ZXN0NjIzNDY1OTI5","number":2266,"title":"Make tests run 
faster","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["LOL, I was also working on something similar \ud83d\ude05. I'm gonna have a look!!!","Sorry I didn't know you were also working on it ^^'\r\nAnd yes I 100% agree with you on the points you mentioned. We should definitely improve the coverage. It would be nice to have a clearer separation to know which tests in the suite are unit tests and which ones are integration tests\r\n","Never mind: we both noticed tests can be improved. More PRs to come... \ud83d\ude09 \r\n\r\nAccording to the literature, unit tests are those that test a behavior unit, isolated from the other components and must be very fast: for me, this last requirement implies that they must be performed completely _in memory_.\r\n\r\nAs opposed, integration tests are those which also test interactions with _external_ components, like web services, databases, file system, etc.\r\n\r\nThe problem I see is that our code is still too coupled and it is difficult to isolate components for testing. 
Therefore, I would suggest acting iteratively, by refactoring to decouple components and then implement unit tests for each component in isolation."],"created_at":1619452540000,"updated_at":1619690413000,"closed_at":1619690404000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2266","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2266","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2266.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2266.patch"},"body":"From 7min to 2min to run pytest.\r\nIdeally we should keep the whole CI run time below 10min.\r\n\r\nIn this PR I removed the remote tests that were never used.\r\nI also replaced nested parametrized tests with unit tests.\r\nThis makes me think that we could still add more high level tests to check for a few combinations of parameters (but not all of them since there are too many of them).\r\nLet me know what you think\r\n\r\nFinally in another PR we can also separate in two circleci jobs:\r\n- the tests of the code code of the lib\r\n- the tests of the all the dataset\/metric scripts.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2266\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2265","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2265\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2265\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2265\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2265","id":867490646,"node_id":"MDExOlB1bGxSZXF1ZXN0NjIzMTUyOTg5","number":2265,"title":"Update black","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1619429709000,"updated_at":1619430468000,"closed_at":1619430467000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2265","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2265","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2265.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2265.patch"},"body":"Latest black version 21.4b0 requires to reformat most dataset scripts 
and also the core code of the lib.\r\nThis makes the CI currently fail on master","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2265\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2264","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2264\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2264\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2264\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2264","id":867476228,"node_id":"MDExOlB1bGxSZXF1ZXN0NjIzMTQwODA1","number":2264,"title":"Fix memory issue in multiprocessing: Don't pickle table index","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The code quality check is going to be fixed by #2265 ","The memory issue didn't come from `self.__dict__.copy()` but from the fact that this dict contains `_batches` which has all the batches of the table in it.\r\nTherefore for a MemoryMappedTable all the data in `_batches` were copied in memory when pickling and this is the issue.","I'm still investigating why we didn't catch this issue in the tests.\r\nThis test should have caught it but didn't:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/3db67f5ff6cbf807b129d2b4d1107af27623b608\/tests\/test_table.py#L350-L353","I'll focus on the patch release and fix the test in another PR after the release","Yes, I think it is better that way..."],"created_at":1619428895000,"updated_at":1619433028000,"closed_at":1619431694000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2264","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2264","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2264.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2264.patch"},"body":"The table index is currently being pickled when doing multiprocessing, which brings all the record batches of the dataset in memory.\r\n\r\nI fixed that by not pickling the index attributes. 
Therefore each process has to rebuild the index when unpickling the table.\r\n\r\nFix issue #2256\r\n\r\nWe'll do a patch release asap !","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2264\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2263","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2263\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2263\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2263\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2263","id":867420912,"node_id":"MDExOlB1bGxSZXF1ZXN0NjIzMDk0NTcy","number":2263,"title":"test data added, dataset_infos updated","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1619425638000,"updated_at":1619688621000,"closed_at":1619688620000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2263","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2263","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2263.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2263.patch"},"body":"Fixes #2262. 
Thanks for pointing out issue with dataset @jinmang2!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2263\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2262","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2262\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2262\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2262\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2262","id":867325351,"node_id":"MDU6SXNzdWU4NjczMjUzNTE=","number":2262,"title":"NewsPH NLI dataset script fails to access test data.","user":{"login":"jinmang2","id":37775784,"node_id":"MDQ6VXNlcjM3Nzc1Nzg0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/37775784?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jinmang2","html_url":"https:\/\/github.com\/jinmang2","followers_url":"https:\/\/api.github.com\/users\/jinmang2\/followers","following_url":"https:\/\/api.github.com\/users\/jinmang2\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jinmang2\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jinmang2\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jinmang2\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jinmang2\/orgs","repos_url":"https:\/\/api.github.com\/users\/jinmang2\/repos","events_url":"https:\/\/api.github.com\/users\/jinmang2\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jinmang2\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks @bhavitvyamalik for the fix !\r\nThe fix will be available in the next release.\r\nIt's already available on the `master` branch. For now you can either install `datasets` from source or use `script_version=\"master\"` in `load_dataset` to use the fixed version of this dataset."],"created_at":1619419481000,"updated_at":1619688723000,"closed_at":1619688620000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"In Newsph-NLI Dataset (#1192), it fails to access test data.\r\n\r\nAccording to the script below, the download manager will download the train data when trying to download the test data. 
\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/2a2dd6316af2cc7fdf24e4779312e8ee0c7ed98b\/datasets\/newsph_nli\/newsph_nli.py#L71\r\n\r\nIf you download it according to the script above, you can see that train and test receive the same data as shown below.\r\n```python\r\n>>> from datasets import load_dataset\r\n>>> newsph_nli = load_dataset(path=\".\/datasets\/newsph_nli.py\")\r\n>>> newsph_nli\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['premise', 'hypothesis', 'label'],\r\n num_rows: 420000\r\n })\r\n test: Dataset({\r\n features: ['premise', 'hypothesis', 'label'],\r\n num_rows: 420000\r\n })\r\n validation: Dataset({\r\n features: ['premise', 'hypothesis', 'label'],\r\n num_rows: 90000\r\n })\r\n})\r\n>>> newsph_nli[\"train\"][0]\r\n{'hypothesis': 'Ito ang dineklara ni Atty. Romulo Macalintal, abogado ni Robredo, kaugnay ng pagsisimula ng preliminary conference ngayong hapon sa Presidential Electoral Tribunal (PET).',\r\n 'label': 1,\r\n 'premise': '\"Hindi ko ugali ang mamulitika; mas gusto kong tahimik na magtrabaho. Pero sasabihin ko ito ngayon: ang tapang, lakas, at diskarte, hindi nadadaan sa mapanirang salita. Ang kailangan ng taumbayan ay tapang sa gawa,\" ayon kay Robredo sa inilabas nitong statement.'}\r\n>>> newsph_nli[\"test\"][0]\r\n{'hypothesis': 'Ito ang dineklara ni Atty. Romulo Macalintal, abogado ni Robredo, kaugnay ng pagsisimula ng preliminary conference ngayong hapon sa Presidential Electoral Tribunal (PET).',\r\n 'label': 1,\r\n 'premise': '\"Hindi ko ugali ang mamulitika; mas gusto kong tahimik na magtrabaho. Pero sasabihin ko ito ngayon: ang tapang, lakas, at diskarte, hindi nadadaan sa mapanirang salita. Ang kailangan ng taumbayan ay tapang sa gawa,\" ayon kay Robredo sa inilabas nitong statement.'}\r\n```\r\n\r\nIn local, I modified the code of the source as below and got the correct result.\r\n```python\r\n71 test_path = os.path.join(download_path, \"test.csv\") \r\n```\r\n```python\r\n>>> from datasets import load_dataset\r\n>>> newsph_nli = load_dataset(path=\".\/datasets\/newsph_nli.py\")\r\n>>> newsph_nli\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['premise', 'hypothesis', 'label'],\r\n num_rows: 420000\r\n })\r\n test: Dataset({\r\n features: ['premise', 'hypothesis', 'label'],\r\n num_rows: 9000\r\n })\r\n validation: Dataset({\r\n features: ['premise', 'hypothesis', 'label'],\r\n num_rows: 90000\r\n })\r\n})\r\n>>> newsph_nli[\"train\"][0]\r\n{'hypothesis': 'Ito ang dineklara ni Atty. Romulo Macalintal, abogado ni Robredo, kaugnay ng pagsisimula ng preliminary conference ngayong hapon sa Presidential Electoral Tribunal (PET).',\r\n 'label': 1,\r\n 'premise': '\"Hindi ko ugali ang mamulitika; mas gusto kong tahimik na magtrabaho. Pero sasabihin ko ito ngayon: ang tapang, lakas, at diskarte, hindi nadadaan sa mapanirang salita. 
Ang kailangan ng taumbayan ay tapang sa gawa,\" ayon kay Robredo sa inilabas nitong statement.'}\r\n>>> newsph_nli[\"test\"][0]\r\n{'hypothesis': '-- JAI (@JaiPaller) September 13, 2019',\r\n 'label': 1,\r\n 'premise': 'Pinag-iingat ng Konsulado ng Pilipinas sa Dubai ang publiko, partikular ang mga donor, laban sa mga scam na gumagamit ng mga charitable organization.'}\r\n```\r\n\r\nI don't have experience with open source pull requests, so I suggest that you reflect them in the source.\r\n\r\nThank you for reading :)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2262\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2261","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2261\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2261\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2261\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2261","id":867088818,"node_id":"MDExOlB1bGxSZXF1ZXN0NjIyODIxNzQw","number":2261,"title":"Improve ReadInstruction logic and update docs","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Ready for the final review"],"created_at":1619377646000,"updated_at":1621275884000,"closed_at":1621270137000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2261","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2261","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2261.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2261.patch"},"body":"Improve ReadInstruction logic and docs.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2261\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2260","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2260\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2260\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2260\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2260","id":866961697,"node_id":"MDExOlB1bGxSZXF1ZXN0NjIyNzMwODYx","number":2260,"title":"GooAQ dataset added","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for adding this one !\r\nThe download manager does support downloading files on git lfs via their github url. No need for a manual download option ;)"],"created_at":1619342808000,"updated_at":1620376577000,"closed_at":1620376577000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2260","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2260","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2260.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2260.patch"},"body":"@lhoestq here the dataset is stored with Git LFS. 
Should I add option for manual downloading of dataset using `git lfs pull` post repo cloning or can we accommodate this in the current `download_and_extract`?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2260\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2259","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2259\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2259\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2259\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2259","id":866880092,"node_id":"MDExOlB1bGxSZXF1ZXN0NjIyNjc2ODA0","number":2259,"title":"Add support for Split.ALL","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Honestly, I think we should fix some other issues in Split API before this change. E. g. 
currently the following will not work, even though it should:\r\n```python\r\nimport datasets\r\ndatasets.load_dataset(\"sst\", split=datasets.Split.TRAIN+datasets.Split.TEST) # AssertionError\r\n```\r\n\r\nEDIT:\r\nActually, think it's OK to merge this PR because the fix will not touch this PR's code."],"created_at":1619315142000,"updated_at":1624868487000,"closed_at":1624868487000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2259","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2259","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2259.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2259.patch"},"body":"The title says it all.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2259\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2258","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2258\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2258\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2258\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2258","id":866870588,"node_id":"MDExOlB1bGxSZXF1ZXN0NjIyNjcxNTQy","number":2258,"title":"Fix incorrect update_metadata_with_features calls in ArrowDataset","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq Maybe a test that runs the functions that call `update_metadata_with_features` and checks if metadata was updated would be nice to prevent this from happening in the future."],"created_at":1619311718000,"updated_at":1619457390000,"closed_at":1619456044000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2258","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2258","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2258.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2258.patch"},"body":"Fixes bugs in the `unpdate_metadata_with_features` calls (caused by changes in 
#2151)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2258\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2257","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2257\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2257\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2257\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2257","id":866755203,"node_id":"MDExOlB1bGxSZXF1ZXN0NjIyNTkwMDQw","number":2257,"title":"added metrics for CUAD","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> For now I've added F1, AUPR, Precision at 80% recall, and Precision at 90%. Last 3 metrics were reported in the [paper](https:\/\/arxiv.org\/pdf\/2103.06268.pdf). Please let me know if we require `exact_match` metric too here\r\n\r\n@bhavitvyamalik I guess the mentioned metrics are enough but it would be better if exact match is also added since the standard SQUAD dataset also has it.","I would like to quote it from the website that I am following to learn\nthese things.\nExact Match:\nThis metric is as simple as it sounds. For each question+answer pair, if\nthe characters of the model's prediction exactly match the characters of\n*(one\nof) the True Answer(s)*, EM = 1, otherwise EM = 0. This is a strict\nall-or-nothing metric; being off by a single character results in a score\nof 0. When assessing against a negative example, if the model predicts any\ntext at all, it automatically receives a 0 for that example.\n\nSo, I guess you need to ensure at least 1 predicted answer matches for EM\nto be 1.\nSource:\nhttps:\/\/qa.fastforwardlabs.com\/no%20answer\/null%20threshold\/bert\/distilbert\/exact%20match\/f1\/robust%20predictions\/2020\/06\/09\/Evaluating_BERT_on_SQuAD.html\n\nYou can go to their homepage and read the other links. They have detailed\nexplanations on evaluation metrics. You can also have a look at the\nsquad_v2 metric file for further clarification.\n\nRegards,\nMohammed Rakib\n\nOn Sun, 25 Apr 2021 at 15:20, Bhavitvya Malik ***@***.***>\nwrote:\n\n> I'm a little confused when it comes to 2 ground truths which can be a\n> possible answer. 
Like here for eg.\n>\n> predictions = [{'prediction_text': ['The seller:', 'The buyer\/End-User:\n> Shenzhen LOHAS Supply Chain Management Co., Ltd.'], 'id':\n> 'LohaCompanyltd_20191209_F-1_EX-10.16_11917878_EX-10.16_Supply\n> Agreement__Parties'}]\n>\n> references = [{'answers': {'answer_start': [143, 49], 'text': ['The\n> seller:', 'The buyer\/End-User: Shenzhen LOHAS Supply Chain Management Co.,\n> Ltd.']}, 'id':\n> 'LohaCompanyltd_20191209_F-1_EX-10.16_11917878_EX-10.16_Supply\n> Agreement__Parties'}]\n>\n> Should I ensure at least 1 predicted answer matches or both predicted\n> answers should match (like in this case) for EM to be 1?\n>\n> \u2014\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> ,\n> or unsubscribe\n> \n> .\n>\n","Updated the same @MohammedRakib! Even if a single answer matches I'm returning 1 in that case for EM (not traversing all predictions once we have one `exact_match` from prediction)"],"created_at":1619273394000,"updated_at":1619690018000,"closed_at":1619540192000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2257","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2257","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2257.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2257.patch"},"body":"For now I've added F1, AUPR, Precision at 80% recall, and Precision at 90%. Last 3 metrics were reported in the [paper](https:\/\/arxiv.org\/pdf\/2103.06268.pdf). Please let me know if we require `exact_match` metric too here","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2257\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2256","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2256\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2256\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2256\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2256","id":866708609,"node_id":"MDU6SXNzdWU4NjY3MDg2MDk=","number":2256,"title":"Running `datase.map` with `num_proc > 1` uses a lot of 
memory","user":{"login":"roskoN","id":8143425,"node_id":"MDQ6VXNlcjgxNDM0MjU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8143425?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/roskoN","html_url":"https:\/\/github.com\/roskoN","followers_url":"https:\/\/api.github.com\/users\/roskoN\/followers","following_url":"https:\/\/api.github.com\/users\/roskoN\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/roskoN\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/roskoN\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/roskoN\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/roskoN\/orgs","repos_url":"https:\/\/api.github.com\/users\/roskoN\/repos","events_url":"https:\/\/api.github.com\/users\/roskoN\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/roskoN\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting ! 
We are working on this and we'll do a patch release very soon.","We did a patch release to fix this issue.\r\nIt should be fixed in the new version 1.6.1\r\n\r\nThanks again for reporting and for the details :)"],"created_at":1619258180000,"updated_at":1619457135000,"closed_at":1619457135000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\nRunning `datase.map` with `num_proc > 1` leads to a tremendous memory usage that requires swapping on disk and it becomes very slow.\r\n\r\n## Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndstc8_datset = load_dataset(\"roskoN\/dstc8-reddit-corpus\", keep_in_memory=False)\r\n\r\n\r\ndef _prepare_sample(batch):\r\n return {\"input_ids\": list(), \"attention_mask\": list()}\r\n\r\n\r\nfor split_name, dataset_split in list(dstc8_datset.items()):\r\n print(f\"Processing {split_name}\")\r\n encoded_dataset_split = dataset_split.map(\r\n function=_prepare_sample,\r\n batched=True,\r\n num_proc=4,\r\n remove_columns=dataset_split.column_names,\r\n batch_size=10,\r\n writer_batch_size=10,\r\n keep_in_memory=False,\r\n )\r\n print(encoded_dataset_split)\r\n\r\n path = f\".\/data\/encoded_{split_name}\"\r\n\r\n encoded_dataset_split.save_to_disk(path)\r\n```\r\n\r\n## Expected results\r\nMemory usage should stay within reasonable boundaries.\r\n\r\n\r\n## Actual results\r\nThis is htop-output from running the provided script.\r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/8143425\/115954836-66954980-a4f3-11eb-8340-0153bdc3a475.png)\r\n\r\n## Versions\r\n```\r\n- Datasets: 1.6.0\r\n- Python: 3.8.8 (default, Apr 13 2021, 19:58:26)\r\n[GCC 7.3.0]\r\n- Platform: Linux-4.19.128-microsoft-standard-x86_64-with-glibc2.10\r\n```\r\nRunning on WSL2\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2256\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2255","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2255\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2255\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2255\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2255","id":866242892,"node_id":"MDExOlB1bGxSZXF1ZXN0NjIyMTc0Njg4","number":2255,"title":"Task casting for text classification & question 
answering","user":{"login":"SBrandeis","id":33657802,"node_id":"MDQ6VXNlcjMzNjU3ODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33657802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SBrandeis","html_url":"https:\/\/github.com\/SBrandeis","followers_url":"https:\/\/api.github.com\/users\/SBrandeis\/followers","following_url":"https:\/\/api.github.com\/users\/SBrandeis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SBrandeis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SBrandeis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SBrandeis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SBrandeis\/orgs","repos_url":"https:\/\/api.github.com\/users\/SBrandeis\/repos","events_url":"https:\/\/api.github.com\/users\/SBrandeis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SBrandeis\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["cc @abhi1thakur ","Looks really nice so far, thanks !\r\nMaybe if a dataset doesn't have a template for a specific task we could try the default template of this task ?","hey @SBrandeis @lhoestq,\r\n\r\ni now have a better idea about what you guys are trying to achieve with the task templates and have a few follow-up questions:\r\n\r\n1. how did you envision using `DatasetInfo` for running evaluation? my understanding is that all `dataset_infos.json` files are stored in the `datasets` repo (unlike `transformers` where each model's weights etc are stored in a dedicated repo). \r\nthis suggests the following workflow:\r\n\r\n```\r\n- git clone datasets\r\n- load target dataset to evaluate\r\n- load `dataset_infos.json` for target dataset\r\n- run eval for each task template in `task_templates`\r\n- store metrics as evaluation cards (similar to what is done in `autonlp`)\r\n```\r\n2. assuming the above workflow, i see that the current `TaskTemplate` attributes of `task`, `input_schema`, and `label_schema` still require some wrangling from `dataset_infos.json` to reproduce additional mappings like `label2id` that we'd need for e.g. text classification. an alternative would be to instantiate the task template class directly from the JSON with something like\r\n```python\r\nfrom datasets.tasks import TextClassification\r\nfrom transformers import AutoModelForSequenceClassification, AutoConfig\r\n\r\ntc = TextClassification.from_json(\"path\/to\/dataset_infos.json\")\r\n# load a model with the desired config\r\nmodel_ckpt = ...\r\nconfig = AutoConfig.from_pretrained(model_ckpt, label2id=tc.label2id, id2label=tc.id2label)\r\nmodel = AutoModelForSequenceClassification.from_pretrained(model_ckpt, config=config)\r\n# run eval ...\r\n```\r\nperhaps this is what @SBrandeis had in mind with the `TaskTemplate.from_dict` method?\r\n\r\n3. i personally prefer using `task_templates` over `supervised_keys` because it encourages the contributor to think in terms of 1 or more tasks. my question here is do we currently use `supervised_keys` for anything important in the `datasets` library?","1. 
How do you envision using DatasetInfo for running evaluation?\r\n\r\nThe initial idea was to be able to do something like this:\r\n```python\r\nfrom datasets import load_dataset\r\ndset = load_dataset(\"name\", task=\"binary_classification\")\r\n# OR\r\ndset = load_dataset(\"name\")\r\ndset = dset.prepare_for_task(\"binary_classification\")\r\n```\r\n\r\n2. I don't think that's needed if we proceed as mentioned above\r\n\r\n3. `supervised_keys` are mostly a legacy compatibility thing with TF datasets, not sure it's used for anything right now. I'll let @lhoestq give more details on that\r\n\r\n[Edit 1] Typo","> The initial idea was to be able to do something like this:\r\n> \r\n> ```python\r\n> from datasets import load_dataset\r\n> dset = load_dataset(\"name\", task=\"binary_classification\")\r\n> # OR\r\n> dset = load_dataset(\"name\")\r\n> dset = dset.prepare_for_task(\"binary_classification\")\r\n> ```\r\n\r\nah that's very elegant! just so i've completely understood, the result would be that the relevant column names of `dset` would be mapped to e.g. `text` and `label` and thus we'd have a uniform schema for the evaluation of all `binary_classification` tasks?","That's correct! Also, the features need to be appropriately casted\r\nFor a classification task for example, we would need to cast the datasets features to something like this:\r\n```python\r\ndatasets.Features({\r\n \"text\": datasets.Value(\"string\"),\r\n \"label\": datasets.ClassLabel(names=[...]),\r\n})\r\n```\r\n","3. We can ignore `supervised_keys` (it came from TFDS and we're not using it) and use `task_templates`","great, thanks a lot for your answers! now it's much clearer what i need to do next \ud83d\ude03 ","hey @lhoestq @SBrandeis, \r\n\r\ni've made some small tweaks to @SBrandeis's code so that `Dataset.prepare_for_task` is called in `DatasetBuilder`. using the `emotion` dataset as a test case, the following now works:\r\n\r\n ```python\r\n# DatasetDict with default columns\r\nds = load_dataset(\".\/datasets\/emotion\/\")\r\n# DatasetDict({\r\n# train: Dataset({\r\n# features: ['tweet', 'emotion'],\r\n# num_rows: 16000\r\n# })\r\n# validation: Dataset({\r\n# features: ['tweet', 'emotion'],\r\n# num_rows: 2000\r\n# })\r\n# test: Dataset({\r\n# features: ['tweet', 'emotion'],\r\n# num_rows: 2000\r\n# })\r\n# })\r\n\r\n# DatasetDict with remapped columns\r\nds = load_dataset(\".\/datasets\/emotion\/\", task=\"text_classification\")\r\nDatasetDict({\r\n# train: Dataset({\r\n# features: ['text', 'label'],\r\n# num_rows: 16000\r\n# })\r\n# validation: Dataset({\r\n# features: ['text', 'label'],\r\n# num_rows: 2000\r\n# })\r\n# test: Dataset({\r\n# features: ['text', 'label'],\r\n# num_rows: 2000\r\n# })\r\n# })\r\n\r\n# Dataset with default columns\r\nds = load_dataset(\".\/datasets\/emotion\/\", split=\"train\")\r\n# Map\/cast features\r\nds = ds.prepare_for_task(\"text_classification\")\r\n# Dataset({\r\n# features: ['text', 'label'],\r\n# num_rows: 16000\r\n# })\r\n```\r\n\r\ni have a few follow-up questions \/ remarks:\r\n\r\n1. i'm working under the assumption that contributors \/ users only provide a unique set of task types. 
in particular, the current implementation does not support something like:\r\n```python\r\ntask_templates=[TextClassification(labels=class_names, text_column=\"tweet\", label_column=\"emotion\"), TextClassification(labels=class_names, text_column=\"some_other_column\", label_column=\"some_other_column\")]\r\n```\r\nsince we use `TaskTemplate.task` and the filter for compatible templates in `Dataset.prepare_for_task`. should we support these scenarios? my hunch is that this is rare in practice, but please correct me if i'm wrong.\r\n\r\n2. when we eventually run evaluation for `transformers` models, i expect we'll be using the `Trainer` for which we can pass the standard label names to `TrainingArguments.label_names`. if that's the case, it might be prudent to heed the warning from the [docs](https:\/\/huggingface.co\/transformers\/main_classes\/trainer.html?highlight=trainer#trainer) and use `labels` instead of `label` in the schema:\r\n> your model can accept multiple label arguments (use the label_names in your TrainingArguments to indicate their name to the Trainer) but none of them should be named \"label\".\r\n\r\n3. i plan to forge ahead on the rest of the pipeline taxonomy. please let me know if you'd prefer smaller, self-contained pull requests (e.g. one per task)","hey @lhoestq @SBrandeis, i think this is ready for another review \ud83d\ude03 \r\n\r\nin addition to a few comments \/ questions i've left in the pr, here's a few remarks:\r\n\r\n1. after some experimentation, i decided against allowing the user to specify nested column names for question-answering. i couldn't find a simple solution with the current api and suspect that i'd have to touch many areas of `datasets` to \"unflatten\" columns in a generic fashion.\r\n2. in the current implementation, the user can specify the outer column name for question-answering, but is expected to follow the inner schema for e.g. `answers.text` and `answers.answer_start`. we can decide later how much flexibility we want to give users\r\n3. i added a few unit tests\r\n4. as discussed, let's keep this pr focused on text classification \/ question answering and i'll add the other tasks in separate prs\r\n5. i renamed the tasks e.g. `text_classification` -> `text-classification` for consistency with the `Trainer` model cards [here](https:\/\/github.com\/huggingface\/transformers\/pull\/11599#pullrequestreview-656371007).","i'm not sure why the benchmarks are getting cancelled - is this expected?","> i'm not sure why the benchmarks are getting cancelled - is this expected?\r\n\r\nHmm I don't know. It's certainly unrelated to this PR though. Maybe github has some issues","Something is happening with actions: https:\/\/www.githubstatus.com\/","hey @lhoestq and @SBrandeis, i've: \r\n\r\n* extended the `prepare_for_task` API along the lines that @lhoestq suggested. i wasn't entirely sure what the `datasets` convention is for docstrings with mixed types, so please see if my proposal makes sense\r\n* added a few new tests to check that we trigger the value errors on incorrect input\r\n\r\ni think this is ready for another review :)","> Looks all good thank you :)\r\n> \r\n> Can you also add `prepare_for_task` in the `main_classes.rst` file of the documentation ?\r\n\r\nDone! 
I also remembered that I needed to do the same for `DatasetDict`, so included this as well :)"],"created_at":1619193641000,"updated_at":1621344696000,"closed_at":1621344695000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2255","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2255","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2255.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2255.patch"},"body":"This PR implements task preparation for a given task, in the continuation of #2143 \r\n\r\nTask taxonomy follows \ud83e\udd17 Transformers's pipelines taxonomy: https:\/\/github.com\/huggingface\/transformers\/tree\/master\/src\/transformers\/pipelines\r\n\r\nEdit by @lewtun:\r\n\r\nThis PR implements support for the following tasks:\r\n\r\n* `text-classification`\r\n* `question-answering`\r\n\r\nThe intended usage is as follows:\r\n\r\n```python\r\n# Load a dataset with default column names \/ features\r\nds = load_dataset(\"dataset_name\")\r\n# Cast column names \/ features to schema. Casting is defined in the dataset's `DatasetInfo`\r\nds = ds.prepare_for_task(task=\"text-classification\")\r\n# Casting can also be realised during load\r\nds = load_dataset(\"dataset_name\", task=\"text-classification\")\r\n# We can also combine shared tasks across dataset concatenation\r\nds1 = load_dataset(\"dataset_name_1\", task=\"text-classification\")\r\nds2 = load_dataset(\"dataset_name_2\", task=\"text-classification\")\r\n# If the tasks have the same schema, so will `ds_concat`\r\nds_concat = concatenate_datasets([ds1, ds2])\r\n```\r\n\r\nNote that the current implementation assumes that `DatasetInfo.task_templates` has been pre-defined by the user \/ contributor when overriding the `MyDataset(GeneratorBasedBuilder)._info` function.\r\n\r\nAs pointed out by @SBrandeis, for evaluation we'll need a way to detect which datasets are already have a compatible schema so we don't have to edit hundreds of dataset scripts. 
One possibility is to check if the schema features are a subset of the dataset ones, e.g.\r\n\r\n```python\r\nsquad = load_dataset(\".\/datasets\/squad\", split=\"train\")\r\nqa = QuestionAnswering()\r\nschema = Features({**qa.input_schema, **qa.label_schema})\r\nassert all(item in squad.features.items() for item in schema.items())\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2255\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2254","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2254\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2254\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2254\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2254","id":866169312,"node_id":"MDExOlB1bGxSZXF1ZXN0NjIyMTE1NDI0","number":2254,"title":"Update format, fingerprint and indices after add_item","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I renamed the variable, added a test for dataset._indices and fixed an issue with class_encode_column"],"created_at":1619188309000,"updated_at":1619541049000,"closed_at":1619541048000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2254","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2254","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2254.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2254.patch"},"body":"Added fingerprint and format update wrappers + update the indices by adding the index of the newly added item in the table.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2254\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2253","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2253\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2253\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2253\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2253","id":866034321,"node_id":"MDExOlB1bGxSZXF1ZXN0NjIyMDA2Njg3","number":2253,"title":"Perform minor refactoring: use config","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":2851292821,"node_id":"MDU6TGFiZWwyODUxMjkyODIx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/refactoring","name":"refactoring","color":"B67A40","default":false,"description":"Restructuring existing code without changing its external behavior"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq is there a problem in the master branch? I got a segmentation fault...\r\n```\r\ntests\/test_table.py::test_concatenation_table_cast[in_memory] Fatal Python error: Segmentation fault\r\n```","Oh wow. Let me re-run the CI just to make sure","Hmm interesting, the segfault is still there. 
I'm investigating this issue on my windows machine","Feel free to merge master into this branch to fix the CI :)"],"created_at":1619178347000,"updated_at":1622106765000,"closed_at":1619535779000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2253","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2253","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2253.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2253.patch"},"body":"Perform minor refactoring related to `config`.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2253\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2252","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2252\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2252\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2252\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2252","id":865870710,"node_id":"MDU6SXNzdWU4NjU4NzA3MTA=","number":2252,"title":"Slow dataloading with big datasets issue persists","user":{"login":"hwijeen","id":29157715,"node_id":"MDQ6VXNlcjI5MTU3NzE1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29157715?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hwijeen","html_url":"https:\/\/github.com\/hwijeen","followers_url":"https:\/\/api.github.com\/users\/hwijeen\/followers","following_url":"https:\/\/api.github.com\/users\/hwijeen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hwijeen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hwijeen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hwijeen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hwijeen\/orgs","repos_url":"https:\/\/api.github.com\/users\/hwijeen\/repos","events_url":"https:\/\/api.github.com\/users\/hwijeen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hwijeen\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url"
:"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi ! Sorry to hear that. This may come from another issue then.\r\n\r\nFirst can we check if this latency comes from the dataset itself ?\r\nYou can try to load your dataset and benchmark the speed of querying random examples inside it ?\r\n```python\r\nimport time\r\nimport numpy as np\r\n\r\nfrom datasets import load_from_disk\r\n\r\ndataset = load_from_disk(...) # or from load_dataset...\r\n\r\n_start = time.time()\r\nn = 100\r\nfor i in np.random.default_rng(42).integers(0, len(dataset), size=n):\r\n _ = dataset[i]\r\nprint(time.time() - _start)\r\n```\r\n\r\nIf we see a significant speed difference between your two datasets then it would mean that there's an issue somewhere","Hi @lhoestq, here is the result. I additionally measured time to `load_from_disk`:\r\n* 60GB\r\n```\r\nloading took: 22.618776321411133\r\nramdom indexing 100 times took: 0.10214924812316895\r\n```\r\n\r\n* 600GB\r\n```\r\nloading took: 1176.1764674186707\r\nramdom indexing 100 times took: 2.853600025177002\r\n```\r\n\r\nHmm.. I double checked that it's version 1.6.0. The difference seems quite big, could it be related to the running environment? \r\n","I'm surprised by the speed change. Can you give more details about your dataset ?\r\nThe speed depends on the number of batches in the arrow tables and the distribution of the lengths of the batches.\r\nYou can access the batches by doing `dataset.data.to_batches()` (use only for debugging) (it doesn't bring data in memory).\r\n\r\nAlso can you explain what parameters you used if you used `map` calls ?\r\nAlso if you have some code that reproduces the issue I'd be happy to investigate it.","Also if you could give us more info about your env like your OS, version of pyarrow and if you're using an HDD or a SSD","Here are some details of my 600GB dataset. This is a dataset AFTER the `map` function and once I load this dataset, I do not use `map` anymore in the training. Regarding the distribution of the lengths, it is almost uniform (90% is 512 tokens, and 10% is randomly shorter than that -- typical setting for language modeling).\r\n```\r\nlen(batches):\r\n492763\r\n\r\nbatches[0]: \r\npyarrow.RecordBatch\r\nattention_mask: list\r\n child 0, item: uint8\r\ninput_ids: list\r\n child 0, item: int16\r\nspecial_tokens_mask: list\r\n child 0, item: uint8\r\ntoken_type_ids: list\r\n child 0, item: uint8\r\n```\r\n\r\nHere the some parameters to `map` function just in case it is relevant:\r\n```\r\nnum_proc=1 # as multi processing is slower in my case\r\nload_from_cache_file=False\r\n```\r\n","Regarding the environment, I am running the code on a cloud server. 
Here are some info:\r\n```\r\nUbuntu 18.04.5 LTS # cat \/etc\/issue\r\npyarrow 3.0.0 # pip list | grep pyarrow\r\n```\r\nThe data is stored in SSD and it is mounted to the machine via Network File System.\r\n\r\nIf you could point me to some of the commands to check the details of the environment, I would be happy to provide relevant information @lhoestq !","I am not sure how I could provide you with the reproducible code, since the problem only arises when the data is big. For the moment, I would share the part that I think is relevant. Feel free to ask me for more info.\r\n\r\n```python\r\nclass MyModel(pytorch_lightning.LightningModule)\r\n def setup(self, stage):\r\n self.dataset = datasets.load_from_disk(path)\r\n self.dataset.set_format(\"torch\")\r\n\r\n def train_dataloader(self):\r\n collate_fn = transformers.DataCollatorForLanguageModeling(\r\n tokenizer=transformers.ElectraTokenizerFast.from_pretrained(tok_path)\r\n )\r\n dataloader = torch.utils.DataLoader(\r\n self.dataset,\r\n batch_size=32,\r\n collate_fn=collate_fn,\r\n num_workers=8,\r\n pin_memory=True,\r\n )\r\n```","Hi ! Sorry for the delay I haven't had a chance to take a look at this yet. Are you still experiencing this issue ?\r\nI'm asking because the latest patch release 1.6.2 fixed a few memory issues that could have lead to slow downs","Hi! I just ran the same code with different datasets (one is 60 GB and another 600 GB), and the latter runs much slower. ETA differs by 10x.","@lhoestq and @hwijeen\r\n\r\nDespite upgrading to datasets 1.6.2, still experiencing extremely slow (2h00) loading for a 300Gb local dataset shard size 1.1Gb on local HDD (40Mb\/s read speed). This corresponds almost exactly to total data divided by reading speed implying that it reads the entire dataset at each load.\r\n\r\nStack details:\r\n=========\r\n\r\n> GCC version: Could not collect\r\n> Clang version: Could not collect\r\n> CMake version: Could not collect\r\n> \r\n> Python version: 3.7 (64-bit runtime)\r\n> Is CUDA available: True\r\n> CUDA runtime version: 10.2.89\r\n> GPU models and configuration: GPU 0: GeForce GTX 1050\r\n> Nvidia driver version: 457.63\r\n> cuDNN version: C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v10.2\\bin\\cudnn64_7.dll\r\n> HIP runtime version: N\/A\r\n> MIOpen runtime version: N\/A\r\n> \r\n> Versions of relevant libraries:\r\n> [pip3] datasets==1.6.2\r\n> [pip3] transformers==4.5.1\r\n> [pip3] numpy==1.19.1\r\n> [pip3] numpydoc==1.1.0\r\n> [pip3] pytorch-metric-learning==0.9.98\r\n> [pip3] torch==1.8.1\r\n> [pip3] torchaudio==0.8.1\r\n> [pip3] torchvision==0.2.2\r\n> [conda] blas 2.16 mkl conda-forge\r\n> [conda] cudatoolkit 10.2.89 hb195166_8 conda-forge\r\n> [conda] libblas 3.8.0 16_mkl conda-forge\r\n> [conda] libcblas 3.8.0 16_mkl conda-forge\r\n> [conda] liblapack 3.8.0 16_mkl conda-forge\r\n> [conda] liblapacke 3.8.0 16_mkl conda-forge\r\n> [conda] mkl 2020.1 216\r\n> [conda] numpy 1.19.1 py37hae9e721_0 conda-forge\r\n> [conda] numpydoc 1.1.0 py_1 conda-forge\r\n> [conda] pytorch 1.8.1 py3.7_cuda10.2_cudnn7_0 pytorch\r\n> [conda] pytorch-metric-learning 0.9.98 pyh39e3cac_0 metric-learning\r\n> [conda] torchaudio 0.8.1 py37 pytorch\r\n> [conda] torchvision 0.2.2 py_3 pytorch","Hi @BenoitDalFerro how do your load your dataset ?","Hi @lhoestq thanks for the quick turn-around, actually the plain vanilla way, without an particular knack or fashion, I tried to look into the documentation for some alternative but couldn't find any\r\n\r\n> dataset = 
load_from_disk(dataset_path=os.path.join(datasets_dir,dataset_dir))","I\u2019m facing the same issue when loading a 900GB dataset (stored via `save_to_disk`): `load_from_disk(path_to_dir)` takes 1.5 hours and htop consistently shows high IO rates > 120 M\/s.","@tsproisl same here, smells like ~~teen spirit~~ intended generator inadvertently ending up iterator\r\n\r\n@lhoestq perhaps solution to detect bug location in code is to track its signature via HD read usage monitoring, option is to add tracking decorator on top each function and sequentially close all hatches from top to bottom, suggest PySmart https:\/\/pypi.org\/project\/pySMART\/ a Smartmontools implementation","I wasn't able to reproduce this on a toy dataset of around 300GB:\r\n\r\n```python\r\nimport datasets as ds\r\n\r\ns = ds.load_dataset(\"squad\", split=\"train\")\r\ns4000 = ds.concatenate_datasets([s] * 4000)\r\nprint(ds.utils.size_str(s4000.data.nbytes)) # '295.48 GiB'\r\n\r\ns4000.save_to_disk(\"tmp\/squad_4000\")\r\n```\r\n\r\n```python\r\nimport psutil\r\nimport time\r\nfrom datasets import load_from_disk\r\n\r\ndisk = \"disk0\" # You may have to change your disk here\r\niocnt1 = psutil.disk_io_counters(perdisk=True)[disk]\r\ntime1 = time.time()\r\n\r\ns4000_reloaded = load_from_disk(\"tmp\/squad_4000\")\r\n\r\ntime2 = time.time()\r\niocnt2 = psutil.disk_io_counters(perdisk=True)[disk]\r\n\r\nprint(f\"Blocks read {iocnt2.read_count - iocnt1.read_count}\") # Blocks read 18\r\nprint(f\"Elapsed time: {time2 - time1:.02f}s\") # Elapsed time: 14.60s\r\n```\r\n\r\nCould you run this on your side and tell me if how much time it takes ? Please run this when your machine is idle so that other processes don't interfere.\r\n\r\nI got these results on my macbook pro on datasets 1.6.2","@lhoestq thanks, test running as we speak, bear with me","Just tried on google colab and got ~1min for a 15GB dataset (only 200 times SQuAD), while it should be instantaneous. The time is spent reading the Apache Arrow table from the memory mapped file. This might come a virtual disk management issue. I'm trying to see if I can still speed it up on colab.","@lhoestq what is Google Colab's HD read speed, is it possible to introspect incl. make like SSD or HDD ?","@lhoestq Thank you! The issue is getting more interesting. The second script is still running, but it's definitely taking much longer than 15 seconds.","Okay, here\u2019s the ouput:\r\nBlocks read 158396\r\nElapsed time: 529.10s\r\n\r\nAlso using datasets 1.6.2. Do you have any ideas, how to pinpoint the problem?","@lhoestq, @tsproisl mmmh still writing on my side about 1h to go, thinking on it are your large datasets all monoblock unsharded ? mine is 335 times 1.18Gb shards.","The 529.10s was a bit too optimistic. 
I cancelled the reading process once before running it completely, therefore the harddrive cache probably did its work.\r\n\r\nHere are three consecutive runs\r\nFirst run (freshly written to disk):\r\nBlocks read 309702\r\nElapsed time: 1267.74s\r\nSecond run (immediately after):\r\nBlocks read 113944\r\nElapsed time: 417.55s\r\nThird run (immediately after):\r\nBlocks read 42518\r\nElapsed time: 199.19s\r\n","@lhoestq \r\nFirst test\r\n> elapsed time: 11219.05s\r\n\r\nSecond test running bear with me, for Windows users slight trick to modify original \"disk0\" string:\r\n\r\nFirst find physical unit relevant key in dictionnary\r\n```\r\nimport psutil\r\npsutil.disk_io_counters(perdisk=True)\r\n```\r\n\r\n> {'PhysicalDrive0': sdiskio(read_count=18453286, write_count=4075333, read_bytes=479546467840, write_bytes=161590275072, read_time=20659, write_time=2464),\r\n> 'PhysicalDrive1': sdiskio(read_count=1495778, write_count=388781, read_bytes=548628622336, write_bytes=318234849280, read_time=426066, write_time=19085)}\r\n\r\nIn my case it's _PhysicalDrive1_\r\n\r\nThen insert relevant key's string as _disk_ variable\r\n\r\n```\r\npsutil.disk_io_counters()\r\ndisk = 'PhysicalDrive1' # You may have to change your disk here\r\niocnt1 = psutil.disk_io_counters(perdisk=True)[disk]\r\ntime1 = time.time()\r\ns4000_reloaded = load_from_disk(\"your path here\")\r\ntime2 = time.time()\r\niocnt2 = psutil.disk_io_counters(perdisk=True)[disk]\r\nprint(f\"Blocks read {iocnt2.read_count - iocnt1.read_count}\") # Blocks read 18\r\nprint(f\"Elapsed time: {time2 - time1:.02f}s\") # Elapsed time: 14.60s\r\n```","@lhoestq\r\nSecond test\r\n\r\n> Blocks read 1265609\r\n> Elapsed time: 11216.55s","@lhoestq any luck ?","Unfortunately no. Thanks for running the benchmark though, it shows that you machine does a lot of read operations. This is not expected: in other machines it does almost no read operations which enables a very fast loading.\r\n\r\nI did some tests on google colab and have the same issue. The first time the dataset arrow file is memory mapped takes always a lot of time (time seems linear with respect to the dataset size). Reloading the dataset is then instantaneous since the arrow file has already been memory mapped.\r\n\r\nI also tried using the Arrow IPC file format (see #1933) instead of the current streaming format that we use but it didn't help.\r\n\r\nMemory mapping is handled by the OS and depends on the disk you're using, so I'm not sure we can do much about it. I'll continue to investigate anyway, because I still don't know why in some cases it would go through the entire file (high `Blocks read ` as in your tests) and in other cases it would do almost no reading.","@lhoestq thanks for the effort, let's stay in touch","Just want to say that I am seeing the same issue. Dataset size if 268GB and it takes **3 hours** to load `load_from_disk`, using dataset version `1.9.0`. Filesystem underneath is `Lustre` ","Hi @lhoestq, confirmed Windows issue, exact same code running on Linux OS total loading time about 3 minutes.","Hmm that's different from what I got. I was on Ubuntu when reporting the initial issue."],"created_at":1619165900000,"updated_at":1631204972000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\n\r\nI reported too slow data fetching when data is large(#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).\r\nHowever, the problem seems to persist. 
Here is the profiled results:\r\n\r\n\r\n1) Running with 60GB\r\n```\r\nAction \t| Mean duration (s)\t|Num calls \t| Total time (s) \t| Percentage % \t|\r\n------------------------------------------------------------------------------------------------------------------------------------\r\nTotal \t| - \t|_ \t| 517.96 \t| 100 % \t|\r\n------------------------------------------------------------------------------------------------------------------------------------\r\nmodel_backward \t| 0.26144 \t|100 \t| 26.144 \t| 5.0475 \t|\r\nmodel_forward \t| 0.11123 \t|100 \t| 11.123 \t| 2.1474 \t|\r\nget_train_batch \t| 0.097121 \t|100 \t| 9.7121 \t| 1.8751 \t|\r\n```\r\n\r\n\r\n3) Running with 600GB, datasets==1.6.0\r\n```\r\nAction \t| Mean duration (s)\t|Num calls \t| Total time (s) \t| Percentage % \t|\r\n------------------------------------------------------------------------------------------------------------------------------------\r\nTotal \t| - \t|_ \t| 4563.2 \t| 100 % \t|\r\n------------------------------------------------------------------------------------------------------------------------------------\r\nget_train_batch \t| 5.1279 \t|100 \t| 512.79 \t| 11.237 \t|\r\nmodel_backward \t| 4.8394 \t|100 \t| 483.94 \t| 10.605 \t|\r\nmodel_forward \t| 0.12162 \t|100 \t| 12.162 \t| 0.26652 \t|\r\n```\r\n\r\nI see that `get_train_batch` lags when data is large. Could this be related to different issues?\r\nI would be happy to provide necessary information to investigate.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2252\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2251","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2251\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2251\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2251\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2251","id":865848705,"node_id":"MDU6SXNzdWU4NjU4NDg3MDU=","number":2251,"title":"while running run_qa.py, ran into a value error","user":{"login":"nlee0212","id":44570724,"node_id":"MDQ6VXNlcjQ0NTcwNzI0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/44570724?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nlee0212","html_url":"https:\/\/github.com\/nlee0212","followers_url":"https:\/\/api.github.com\/users\/nlee0212\/followers","following_url":"https:\/\/api.github.com\/users\/nlee0212\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nlee0212\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nlee0212\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nlee0212\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nlee0212\/orgs","repos_url":"https:\/\/api.github.com\/users\/nlee0212\/repos","events_url":"https:\/\/api.github.com\/users\/nlee0212\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nlee0212\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1619164263000,"updated_at":1619164263000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"command:\r\n\r\npython3 run_qa.py 
--model_name_or_path hyunwoongko\/kobart --dataset_name squad_kor_v2 --do_train --do_eval --per_device_train_batch_size 8 --learning_rate 3e-5 --num_train_epochs 3 --max_seq_length 512 --doc_stride 128 --output_dir \/tmp\/debug_squad\/\r\n\r\nerror: \r\n\r\nValueError: External features info don't match the dataset:\r\nGot\r\n{'id': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'context': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'answer': {'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None), 'html_answer_start': Value(dtype='int32', id=None)}, 'url': Value(dtype='string', id=None), 'raw_html': Value(dtype='string', id=None)}\r\nwith type\r\nstruct, context: string, id: string, question: string, raw_html: string, title: string, url: string>\r\n\r\nbut expected something like\r\n{'answer': {'answer_start': Value(dtype='int32', id=None), 'html_answer_start': Value(dtype='int32', id=None), 'text': Value(dtype='string', id=None)}, 'context': Value(dtype='string', id=None), 'id': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'raw_html': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'url': Value(dtype='string', id=None)}\r\nwith type\r\nstruct, context: string, id: string, question: string, raw_html: string, title: string, url: string>\r\n\r\nI didn't encounter this error 4 hours ago. any solutions for this kind of issue?\r\nlooks like gained dataset format refers to 'Data Fields', while expected refers to 'Data Instances'.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2251\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2250","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2250\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2250\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2250\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2250","id":865402449,"node_id":"MDU6SXNzdWU4NjU0MDI0NDk=","number":2250,"title":"some issue in loading local txt file as Dataset for run_mlm.py","user":{"login":"alighofrani95","id":14968123,"node_id":"MDQ6VXNlcjE0OTY4MTIz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14968123?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/alighofrani95","html_url":"https:\/\/github.com\/alighofrani95","followers_url":"https:\/\/api.github.com\/users\/alighofrani95\/followers","following_url":"https:\/\/api.github.com\/users\/alighofrani95\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/alighofrani95\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/alighofrani95\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/alighofrani95\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/alighofrani95\/orgs","repos_url":"https:\/\/api.github.com\/users\/alighofrani95\/repos","events_url":"https:\/\/api.github.com\/users\/alighofrani95\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/alighofrani95\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi,\r\n\r\n1. 
try\r\n ```python\r\n dataset = load_dataset(\"text\", data_files={\"train\": [\"a1.txt\", \"b1.txt\"], \"test\": [\"c1.txt\"]})\r\n ```\r\n instead.\r\n\r\n Sadly, I can't reproduce the error on my machine. If the above code doesn't resolve the issue, try to update the library to the \r\n newest version (`pip install datasets --upgrade`).\r\n\r\n2. https:\/\/github.com\/huggingface\/transformers\/blob\/3ed5e97ba04ce9b24b4a7161ea74572598a4c480\/examples\/pytorch\/language-modeling\/run_mlm.py#L258-L259\r\nThis is the original code. You'll have to modify the example source to work with multiple train files. To make it easier, let's say \"|\" will act as a delimiter between files:\r\n ```python\r\n if data_args.train_file is not None:\r\n data_files[\"train\"] = data_args.train_file.split(\"|\") # + .split(\"|\")\r\n ```\r\n Then call the script as follows (**dataset_name must be None**):\r\n ```bash\r\n python run_mlm.py [... other args] --train_file a1.txt|b1.txt\r\n ```","i meet the same error with datasets 1.11.0, is there any insight about this?"],"created_at":1619120353000,"updated_at":1629258552000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"![image](https:\/\/user-images.githubusercontent.com\/14968123\/115773877-18cef300-a3c6-11eb-8e58-a9cbfd1001ec.png)\r\n\r\nfirst of all, I tried to load 3 .txt files as a dataset (sure that the directory and permission is OK.), I face with the below error.\r\n\r\n> FileNotFoundError: [Errno 2] No such file or directory: 'c'\r\n\r\nby removing one of the training .txt files It's fixed and although if I put all file as training it's ok\r\n![image](https:\/\/user-images.githubusercontent.com\/14968123\/115774207-867b1f00-a3c6-11eb-953b-905cfb112d25.png)\r\n![image](https:\/\/user-images.githubusercontent.com\/14968123\/115774264-9b57b280-a3c6-11eb-9f36-7b109f0e5a31.png)\r\n\r\n\r\nafter this, my question is how could I use this defined Dataset for run_mlm.py for from scratch pretraining.\r\nby using --train_file path_to_train_file just can use one .txt , .csv or, .json file. 
I tried to set my defined Dataset as --dataset_name but the below issue occurs.\r\n\r\n\r\n> Traceback (most recent call last):\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/load.py\", line 336, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/utils\/file_utils.py\", line 291, in cached_path\r\n use_auth_token=download_config.use_auth_token,\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/utils\/file_utils.py\", line 621, in get_from_cache\r\n raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\nFileNotFoundError: Couldn't find file at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/master\/datasets\/dataset\/dataset.py\r\n\r\n> During handling of the above exception, another exception occurred:\r\n\r\n> Traceback (most recent call last):\r\n File \"run_mlm.py\", line 486, in \r\n main()\r\n File \"run_mlm.py\", line 242, in main\r\n datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name, cache_dir=model_args.cache_dir)\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/load.py\", line 719, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/load.py\", line 347, in prepare_module\r\n combined_path, github_file_path\r\nFileNotFoundError: Couldn't find file locally at dataset\/dataset.py, or remotely at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.6.0\/datasets\/dataset\/dataset.py.\r\nThe file is also not present on the master branch on github.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2250\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2249","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2249\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2249\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2249\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2249","id":865257826,"node_id":"MDExOlB1bGxSZXF1ZXN0NjIxMzU1MzE3","number":2249,"title":"Allow downloading\/processing\/caching only specific 
splits","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/8","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/8","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/8\/labels","id":6968069,"node_id":"MI_kwDODunzps4AalMF","number":8,"title":"1.12","description":"Next minor release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":5,"closed_issues":1,"state":"open","created_at":1626881696000,"updated_at":1630565260000,"due_on":1630306800000,"closed_at":null},"comments":["> If you pass a dictionary like this:\r\n> \r\n> ```\r\n> {\"main_metadata\": url_to_main_data,\r\n> \"secondary_metadata\": url_to_sec_data,\r\n> \"train\": url_train_data,\r\n> \"test\": url_test_data}\r\n> ```\r\n> \r\n> then only the train or test keys will be kept, which I feel not intuitive.\r\n> \r\n> For example if the users asks to load the \"train\" split, then the main and secondary metadata won't be downloaded.\r\n> You can fix that by keeping all the keys except the splits to ignore\r\n\r\nHi @lhoestq, I have been thinking about this and I think it is worth that we discuss about 
it.\r\n\r\nWhen I created this PR, my first idea was to create a \"hack\" inside the download manager that will be able to filter some split(s) without touching any dataset script. Of course, the download manager does not know about splits logic, and thus this trick would only work for some very specific datasets: only the ones that pass a dict to the download manager containing only the keys \"train\", \"validation\", \"test\" (or the one passed by the user for advanced users knowing they can do it), e.g. the `natural_questions` dataset (which was one of the targets).\r\n\r\nThe big inconvenience of this approach is that it is not applicable to many datasets (or worse, it should be constantly tweaked to cope with exceptional cases). One exceptional case is the one you pointed out. But I see others:\r\n- the split keys can be different: train, test, dev, val, validation, eval,...\r\n- in `hope_edi` dataset, the split keys are: TRAIN_DOWNLOAD_URL, VALIDATION_DOWNLOAD_URL\r\n- in `few_rel` dataset, the split keys are: train_wiki, val_nyt, val_pubmed,..., pid2name\r\n- in `curiosity_dialogs`, the split keys are: train, val, test, test_zero; this means that for every split we pass, we will also get test_zero\r\n- in `deal_or_no_dialog`, each of the split URLs is passed separately to the download manager, so all splits would always be downloaded\r\n- etc.\r\n\r\nThen after discussing, another idea emerged: pass a `split` parameter to `_split_generators`, which knows about the splits logic, so that it can handle which splits are passed to the download manager. This approach is more accurate and can be tweaked so that it works with all the datasets we want. The only inconvenience is that then for every target dataset, we must modify its corresponding `_split_generators` script method.\r\n\r\nMy point is that I don't think it is a good idea to implement both approaches. They could even interfere with each other! 
\r\n\r\nIf you agree, I would implement ONLY the second one, which is simpler, more consistent and stable and will avoid future problems.","Hi @albertvillanova !\r\nYup I agree with you, implementing the 2nd approach seems to be the right solution"],"created_at":1619113904000,"updated_at":1630560811000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2249","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2249","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2249.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2249.patch"},"body":"Allow downloading\/processing\/caching only specific splits without downloading\/processing\/caching the other splits.\r\n\r\nThis PR implements two steps to handle only specific splits:\r\n- it allows processing\/caching only specific splits into Arrow files\r\n- for some simple cases, it allows downloading only specific splits (which is more intricate as it depends on the user-defined method `_split_generators`)\r\n\r\nThis PR makes several assumptions:\r\n- `DownloadConfig` contains the configuration settings for downloading\r\n- the parameter `split` passed to `load_dataset` is just a parameter for loading (from cache), not for downloading","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2249\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2248","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2248\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2248\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2248\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2248","id":864853447,"node_id":"MDExOlB1bGxSZXF1ZXN0NjIxMDEyNzg5","number":2248,"title":"Implement Dataset to JSON","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or 
request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/3","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/3","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/3\/labels","id":6644287,"node_id":"MDk6TWlsZXN0b25lNjY0NDI4Nw==","number":3,"title":"1.7","description":"Next minor release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":3,"state":"closed","created_at":1617974191000,"updated_at":1622478053000,"due_on":1620975600000,"closed_at":1622478053000},"comments":[],"created_at":1619092011000,"updated_at":1619537361000,"closed_at":1619537360000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2248","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2248","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2248.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2248.patch"},"body":"Implement `Dataset.to_json`.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2248\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2247","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2247\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2247\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2247\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2247","id":864817520,"node_id":"MDExOlB1bGxSZXF1ZXN0NjIwOTgzNzY3","number":2247,"title":"Implement Dataset from 
Parquet","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/7","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/7","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/7\/labels","id":6931350,"node_id":"MDk6TWlsZXN0b25lNjkzMTM1MA==","number":7,"title":"1.11","description":"Next minor release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":2,"state":"closed","created_at":1625809740000,"updated_at":1630560843000,"due_on":1627628400000,"closed_at":1630560843000},"comments":["Hi @albertvillanova , I'll implement the parquet builder as an ArrowBasedBuilder if you don't mind","closing in favor of #2537 that is already 
merged"],"created_at":1619089298000,"updated_at":1627306132000,"closed_at":1627306131000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2247","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2247","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2247.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2247.patch"},"body":"Implement instantiation of Dataset from Parquet file.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2247\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2246","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2246\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2246\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2246\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2246","id":864220031,"node_id":"MDExOlB1bGxSZXF1ZXN0NjIwNDg3OTUw","number":2246,"title":"Faster map w\/ input_columns & faster slicing w\/ Iterable keys","user":{"login":"norabelrose","id":39116809,"node_id":"MDQ6VXNlcjM5MTE2ODA5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/39116809?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/norabelrose","html_url":"https:\/\/github.com\/norabelrose","followers_url":"https:\/\/api.github.com\/users\/norabelrose\/followers","following_url":"https:\/\/api.github.com\/users\/norabelrose\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/norabelrose\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/norabelrose\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/norabelrose\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/norabelrose\/orgs","repos_url":"https:\/\/api.github.com\/users\/norabelrose\/repos","events_url":"https:\/\/api.github.com\/users\/norabelrose\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/norabelrose\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq Just fixed the code style issues\u2014 I think it should be good to merge now :)"],"created_at":1619034547000,"updated_at":1619453639000,"closed_at":1619453639000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2246","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2246","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2246.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2246.patch"},"body":"@lhoestq Fixes #2193 \r\n\r\n- `map` now uses `with_format` to only load needed columns in memory when `input_columns` is set\r\n- Slicing datasets with Iterables of indices now uses a new `Table.fast_gather` method, implemented with `np.searchsorted`, to find the appropriate batch indices all at once. 
`pa.concat_tables` is no longer used for this; we just call `pa.Table.from_batches` with a list of all the batch slices.\r\n\r\nTogether these changes have sped up batched `map()` calls over subsets of columns quite considerably in my initial testing.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2246\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2245","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2245\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2245\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2245\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2245","id":863191655,"node_id":"MDExOlB1bGxSZXF1ZXN0NjE5NjQzMjQ3","number":2245,"title":"Add `key` type and duplicates verification with hashing","user":{"login":"NikhilBartwal","id":42388668,"node_id":"MDQ6VXNlcjQyMzg4NjY4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42388668?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/NikhilBartwal","html_url":"https:\/\/github.com\/NikhilBartwal","followers_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/followers","following_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/orgs","repos_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/repos","events_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq The tests for key type and duplicate keys have been added and verified successfully.\r\nAfter generating with an intentionally faulty `mnist` script, when there is an incompatible key type, it shows:\r\n\r\n```\r\nDownloading and preparing dataset mnist\/mnist (download: 11.06 MiB, generated: 60.62 MiB, post-processed: Unknown size, total: 71.67 MiB) to C:\\Users\\nikhil\\.cache\\huggingface\\datasets\\mnist\\mnist\\1.0.0\\5064c25e57a1678f700d2dc798ffe8a6d519405cca7d33670fffda477857a994...\r\n0 examples [00:00, ? examples\/s]2021-04-26 02:50:03.703836: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll\r\n\r\nFAILURE TO GENERATE DATASET: Invalid key type detected\r\nFound Key [0, 0] of type \r\nKeys should be either str, int or bytes type\r\n```\r\n\r\nIn the case of duplicate keys, it now gives:\r\n\r\n```\r\nDownloading and preparing dataset mnist\/mnist (download: 11.06 MiB, generated: 60.62 MiB, post-processed: Unknown size, total: 71.67 MiB) to C:\\Users\\nikhil\\.cache\\huggingface\\datasets\\mnist\\mnist\\1.0.0\\5064c25e57a1678f700d2dc798ffe8a6d519405cca7d33670fffda477857a994...\r\n0 examples [00:00, ? 
examples\/s]2021-04-26 02:53:13.498579: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"f:\\datasets\\datasets-1\\src\\datasets\\load.py\", line 746, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"f:\\datasets\\datasets-1\\src\\datasets\\builder.py\", line 587, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"f:\\datasets\\datasets-1\\src\\datasets\\builder.py\", line 665, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"f:\\datasets\\datasets-1\\src\\datasets\\builder.py\", line 1002, in _prepare_split\r\n writer.write(example, key)\r\n File \"f:\\datasets\\datasets-1\\src\\datasets\\arrow_writer.py\", line 321, in write\r\n self.check_duplicates()\r\n File \"f:\\datasets\\datasets-1\\src\\datasets\\arrow_writer.py\", line 331, in check_duplicates\r\n raise DuplicatedKeysError(key)\r\ndatasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !\r\nFound duplicate Key: 234467\r\nKeys should be unique and deterministic in nature\r\n```\r\nPlease let me know if this is what we wanted to implement. Thanks!","This looks pretty cool !\r\nWe can make focus on the GeneratorBasedBuilder for now yes.\r\n\r\nDo you think we could make the ArrowWriter not look for duplicates by default ?\r\nThis way we can just enable duplicate detections when instantiating the writer in the GeneratorBasedBuilder for now.","Thank you @lhoestq\r\n\r\n\r\n\r\n> Do you think we could make the ArrowWriter not look for duplicates by default ?\r\n\r\nWe can definitely do that by including a `check_duplicates` argument while instantiating `ArrowWriter()`. \r\n\r\nHowever, since only `GeneratorBasedBuilder` uses the `write()` function (which includes the detection code) and the others like `ArrowBasedBuilder` use `write_table()` which remains as it was (without duplicate detection). I don't think it would be necessary.\r\n\r\nNonetheless, doing this would require just some small changes. Please let me know your thoughts on this. Thanks!","I like the idea of having the duplicate detection optional for other uses of the ArrowWriter.\r\nThis class is the main tool to write python data in arrow format so I'd expect it to be flexible.\r\nThat's why I think by default it shouldn't require users to provide keys or do any duplicates detection.\r\n\r\nAn alternative would be to subclass the writer to include duplicates detection in another class.\r\n\r\nBoth options are fine for me, let me know what you think !","> This class is the main tool to write python data in arrow format so I'd expect it to be flexible.\r\n> That's why I think by default it shouldn't require users to provide keys or do any duplicates detection.\r\n\r\nWell, that makes sense as the writer can indeed be used for other purposes as well.\r\n\r\n> We can definitely do that by including a `check_duplicates` argument while instantiating `ArrowWriter()`.\r\n\r\nI think that this would be the simplest and the more efficient option for achieving this as subclassing the writer only for this would lead to unnecessary complexity and code duplication (in case of `writer()`). \r\n\r\nI will be adding the changes soon. Thanks for the feedback @lhoestq!","@lhoestq I have pushed the final changes just now. 
\r\nNow, the keys and duplicate checking will be necessary only when the `ArrowWriter` is initialized with `check_duplicates=True` specifically (in this case, for `GeneratorBasedBuilders`)\r\n\r\nLet me know if this is what was required. Thanks!","@lhoestq Thanks for the feedback! I will be adding the tests for the same very soon. \r\n\r\nHowever, I'm not sure as to what exactly is causing the `segmentation fault` in the failing CI tests. It seems to be something from `test_concatenation_table_cast` from `test_table.py`, but I'm not sure as to what exactly. Would be great if you could help. Thanks!","You can merge master into your branch to fix this issue.\r\nBasically pyarrow 4.0.0 has a segfault issue (which has now been resolved on the master branch of pyarrow).\r\nSo until 4.0.1 comes out we changed to using `pyarrow<4.0.0` recently.","@lhoestq Thanks for the help with the CI failures. Apologies for the multiple merge commits. My local repo got messy while merging which led to this.\r\nWill be pushing the commit for the tests soon!","Hey @lhoestq, I've just added the required tests for checking key duplicates and invalid key data types.\r\nI think we have caught a nice little issue as 27 datasets are currently using non-unique keys (hence, the failing tests: All these datasets are giving `DuplicateKeysError` during testing). \r\nThese datasets were not detected earlier as there was no key checking when `num_examples < writer_batch_size` due to which they passed the dummy data generation test. This bug was fixed by adding the test to `writer.finalize()` method as well for checking any leftover examples from batches. \r\n\r\nI'd like to make changes to the faulty datasets' scripts. However, I was wondering if I should do that in this PR itself or open a new PR as this might get messy in the same PR. Let me know your thoughts on this. Thanks!","Hi ! Once https:\/\/github.com\/huggingface\/datasets\/pull\/2333 is merged, feel free to merge master into your branch to fix the CI :)","Thanks a lot for the help @lhoestq. Besides merging the new changes, I guess this PR is completed for now :)","I just merged the PR, feel free to merge `master` into your branch. It should fix most most of the CI issues. If there are some left we can fix them in this PR :)","@lhoestq Looks like the PR is completed now. Thanks for helping me out so much in this :)","Hey @lhoestq, I've added the test and corrected the Cl errors as well. Do let me know if this requires any change. Thanks!","Merging. I'll update the comment on the master branch (for some reason I can edit files on this branch)","@lhoestq Thank you for the help and feedback. 
Feels great to contribute!"],"created_at":1618948999000,"updated_at":1620669877000,"closed_at":1620667882000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2245","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2245","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2245.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2245.patch"},"body":"Closes #2230 \r\nThere is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.\r\nThis PR is currently a work in progress with the following goals:\r\n\r\n- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hash\r\n- [x] Add `key` arrtibute to `ArrowWriter.write()` for hashing\r\n- [x] Add a hashing class which takes an input key of certain type (`str`\/`int`\/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`\r\n- [x] Creating a function giving a custom error message when non-unique keys are found \r\n **[This will take care of type-checking for keys]**\r\n- [x] Checking for duplicate keys in `writer.write()` for each batch\r\n\r\n[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in future for `ArrowBasedBuilder`]\r\n\r\n@lhoestq Thank you for the feedback. It would be great to have your guidance on this!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2245\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2244","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2244\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2244\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2244\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2244","id":863029946,"node_id":"MDExOlB1bGxSZXF1ZXN0NjE5NTAyODc0","number":2244,"title":"Set specific cache directories per test function 
call","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/8","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/8","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/8\/labels","id":6968069,"node_id":"MI_kwDODunzps4AalMF","number":8,"title":"1.12","description":"Next minor release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":5,"closed_issues":1,"state":"open","created_at":1626881696000,"updated_at":1630565260000,"due_on":1630306800000,"closed_at":null},"comments":["@lhoestq, I think this reaches some memory limit on Linux instances... (?)","It looks like the `comet` metric test fails because it tries to load a model in memory.\r\nIn the tests I think we have `patch_comet` that mocks the model download + inference. Not sure why it didn't work though.\r\nI can take a look tomorrow (this afternoon is the pytorch ecosystem day)","@lhoestq thanks for the hint: I'm going to have a look at that mock... ;)","@lhoestq finally I did not find out why the mock is not used... 
If you can give me some other hint tomorrow..."],"created_at":1618938382000,"updated_at":1630560811000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2244","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2244","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2244.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2244.patch"},"body":"Implement specific cache directories (datasets, metrics and modules) per test function call.\r\n\r\nCurrently, the cache directories are set within the temporary test directory, but they are shared across all test function calls.\r\n\r\nThis PR implements specific cache directories for each test function call, so that tests are atomic and there are no side effects.\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2244\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2243","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2243\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2243\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2243\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2243","id":862909389,"node_id":"MDU6SXNzdWU4NjI5MDkzODk=","number":2243,"title":"Map is slow and processes batches one after another","user":{"login":"villmow","id":2743060,"node_id":"MDQ6VXNlcjI3NDMwNjA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2743060?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/villmow","html_url":"https:\/\/github.com\/villmow","followers_url":"https:\/\/api.github.com\/users\/villmow\/followers","following_url":"https:\/\/api.github.com\/users\/villmow\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/villmow\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/villmow\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/villmow\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/villmow\/orgs","repos_url":"https:\/\/api.github.com\/users\/villmow\/repos","events_url":"https:\/\/api.github.com\/users\/villmow\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/villmow\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @villmow, thanks for reporting.\r\n\r\nCould you please try with the Datasets version 1.6? We released it yesterday and it fixes some issues about the processing speed. You can see the fix implemented by @lhoestq here: #2122.\r\n\r\nOnce you update Datasets, please confirm if the problem persists.","Hi @albertvillanova, thanks for the reply. I just tried the new version and the problem still persists. \r\n\r\nDo I need to rebuild the saved dataset (which I load from disk) with the 1.6.0 version of datasets? My script loads this dataset and creates new datasets from it. 
I tried it without rebuilding.\r\n\r\nSee this short video of what happens. It does not create all processes at the same time:\r\n\r\nhttps:\/\/user-images.githubusercontent.com\/2743060\/115720139-0da3a500-a37d-11eb-833a-9bbacc70868d.mp4\r\n\r\n","There can be a bit of delay between the creations of the processes but this delay should be the same for both your `map` calls. We should look into this.\r\nAlso if you hav some code that reproduces this issue on google colab that'd be really useful !\r\n\r\nRegarding the speed differences:\r\nThis looks like a similar issue as https:\/\/github.com\/huggingface\/datasets\/issues\/1992 who is experiencing the same speed differences between processes.\r\nThis is a known bug that we are investigating. As of now I've never managed to reproduce it on my machine so it's pretty hard for me to find where this issue comes from.\r\n","Upgrade to 1.6.1 solved my problem somehow. I did not change any of my code, but now it starts all processes around the same time.","Nice ! I'm glad this works now.\r\nClosing for now, but feel free to re-open if you experience this issue again."],"created_at":1618930700000,"updated_at":1620064473000,"closed_at":1620064472000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\nI have a somewhat unclear bug to me, where I can't figure out what the problem is. The code works as expected on a small subset of my dataset (2000 samples) on my local machine, but when I execute the same code with a larger dataset (1.4 million samples) this problem occurs. Thats why I can't give exact steps to reproduce, I'm sorry. \r\n\r\nI process a large dataset in a two step process. I first call map on a dataset I load from disk and create a new dataset from it. This works like expected and `map` uses all workers I started it with. Then I process the dataset created by the first step, again with `map`, which is really slow and starting only one or two process at a time. Number of processes is the same for both steps.\r\n\r\npseudo code:\r\n```python\r\nds = datasets.load_from_disk(\"path\")\r\nnew_dataset = ds.map(work, batched=True, ...) # fast uses all processes\r\nfinal_dataset = new_dataset.map(work2, batched=True, ...) # slow starts one process after another\r\n```\r\n\r\n## Expected results\r\nSecond stage should be as fast as the first stage.\r\n\r\n## Versions\r\nPaste the output of the following code:\r\n- Datasets: 1.5.0\r\n- Python: 3.8.8 (default, Feb 24 2021, 21:46:12)\r\n- Platform: Linux-5.4.0-60-generic-x86_64-with-glibc2.10 \r\n\r\nDo you guys have any idea? 
Thanks a lot!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2243\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2242","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2242\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2242\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2242\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2242","id":862870205,"node_id":"MDU6SXNzdWU4NjI4NzAyMDU=","number":2242,"title":"Link to datasets viwer on Quick Tour page returns \"502 Bad Gateway\"","user":{"login":"martavillegas","id":6735707,"node_id":"MDQ6VXNlcjY3MzU3MDc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6735707?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/martavillegas","html_url":"https:\/\/github.com\/martavillegas","followers_url":"https:\/\/api.github.com\/users\/martavillegas\/followers","following_url":"https:\/\/api.github.com\/users\/martavillegas\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/martavillegas\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/martavillegas\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/martavillegas\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/martavillegas\/orgs","repos_url":"https:\/\/api.github.com\/users\/martavillegas\/repos","events_url":"https:\/\/api.github.com\/users\/martavillegas\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/martavillegas\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This should be fixed now!\r\n\r\ncc @srush "],"created_at":1618928391000,"updated_at":1618930965000,"closed_at":1618930965000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Link to datasets viwer (https:\/\/huggingface.co\/datasets\/viewer\/) on Quick Tour page (https:\/\/huggingface.co\/docs\/datasets\/quicktour.html) returns \"502 Bad Gateway\"\r\n\r\nThe same error with https:\/\/huggingface.co\/datasets\/viewer\/?dataset=glue&config=mrpc ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2242\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2241","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2241\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2241\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2241\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2241","id":862696460,"node_id":"MDExOlB1bGxSZXF1ZXN0NjE5MjI0MzIw","number":2241,"title":"Add SLR32 to 
OpenSLR","user":{"login":"cahya-wirawan","id":7669893,"node_id":"MDQ6VXNlcjc2Njk4OTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7669893?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cahya-wirawan","html_url":"https:\/\/github.com\/cahya-wirawan","followers_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/followers","following_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/orgs","repos_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/repos","events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> And yet another one ! Thanks a lot :)\r\n\r\nI just hope you don\u2019t get fed up with openslr PR \ud83d\ude0a there are still few other datasets created by google in openslr that is not in hf dataset yet\r\n"],"created_at":1618916565000,"updated_at":1619194884000,"closed_at":1619192175000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2241","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2241","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2241.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2241.patch"},"body":"I would like to add SLR32 to OpenSLR. 
It contains four South African languages: Afrikaans, Sesotho, Setswana and isiXhosa","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2241\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2240","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2240\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2240\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2240\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2240","id":862537856,"node_id":"MDExOlB1bGxSZXF1ZXN0NjE5MDkyODc5","number":2240,"title":"Clarify how to load wikihow","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1618905778000,"updated_at":1618998897000,"closed_at":1618998897000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2240","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2240","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2240.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2240.patch"},"body":"Explain clearer how to load the dataset in the manual download instructions.\r\n\r\nEn relation with #2239.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2240\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2239","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2239\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2239\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2239\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2239","id":861904306,"node_id":"MDU6SXNzdWU4NjE5MDQzMDY=","number":2239,"title":"Error loading wikihow 
dataset","user":{"login":"odellus","id":4686956,"node_id":"MDQ6VXNlcjQ2ODY5NTY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4686956?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/odellus","html_url":"https:\/\/github.com\/odellus","followers_url":"https:\/\/api.github.com\/users\/odellus\/followers","following_url":"https:\/\/api.github.com\/users\/odellus\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/odellus\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/odellus\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/odellus\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/odellus\/orgs","repos_url":"https:\/\/api.github.com\/users\/odellus\/repos","events_url":"https:\/\/api.github.com\/users\/odellus\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/odellus\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @odellus, thanks for reporting.\r\n\r\nThe `wikihow` dataset has 2 versions:\r\n- `all`: Consisting of the concatenation of all paragraphs as the articles and the bold lines as the reference summaries.\r\n- `sep`: Consisting of each paragraph and its summary.\r\n\r\nTherefore, in order to load it, you have to specify which version you would like, for example:\r\n```python\r\ndataset = load_dataset('wikihow', 'all')\r\n```\r\n\r\nPlease, tell me if this solves your problem.","Good call out. I did try that and that's when it told me to download the\ndataset. Don't believe I have tried it with local files. Will try first\nthing in the morning and get back to you.\n\nOn Mon, Apr 19, 2021, 11:17 PM Albert Villanova del Moral <\n***@***.***> wrote:\n\n> Hi @odellus , thanks for reporting.\n>\n> The wikihow dataset has 2 versions:\n>\n> - all: Consisting of the concatenation of all paragraphs as the\n> articles and the bold lines as the reference summaries.\n> - sep: Consisting of each paragraph and its summary.\n>\n> Therefore, in order to load it, you have to specify which version you\n> would like, for example:\n>\n> dataset = load_dataset('wikihow', 'all')\n>\n> Please, tell me if this solves your problem.\n>\n> \u2014\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> ,\n> or unsubscribe\n> \n> .\n>\n","Hi @odellus, yes you are right.\r\n\r\nDue to the server where the `wikihow` dataset is hosted, the dataset can't be downloaded automatically by `huggingface` and you have to download it manually as you did.\r\n\r\nNevertheless, you have to specify which dataset version you would like to load anyway:\r\n```python\r\ndataset = load_dataset('wikihow', 'all', data_dir='.\/wikihow')\r\n```\r\nor\r\n```python\r\ndataset = load_dataset('wikihow', 'sep', data_dir='.\/wikihow')\r\n```\r\nI find that the instructions given by `huggingface` are not clear enough: I am going to fix this.\r\nPlease tell me if this eventually works for you.","That was it. 
Thank you Albert!"],"created_at":1618866151000,"updated_at":1618936391000,"closed_at":1618936391000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Describe the bug\r\n\r\nWhen attempting to load wikihow into a dataset with\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('wikihow', data_dir='.\/wikihow')\r\n```\r\nI get the message:\r\n```\r\nAttributeError: 'BuilderConfig' object has no attribute 'filename'\r\n```\r\nat the end of a [full stack trace](https:\/\/gist.github.com\/odellus\/602c3b2de52f541d353b1022f320ffc2).\r\n\r\n## Steps to reproduce the bug\r\n\r\nI have followed the instructions for creating a wikihow dataset. The [wikihow dataset site](https:\/\/huggingface.co\/datasets\/wikihow) says to use \r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('wikihow')\r\n```\r\nto load the dataset. I do so and I get the message\r\n```\r\nAssertionError: The dataset wikihow with config all requires manual data.\r\n Please follow the manual download instructions: You need to manually download two wikihow files. An overview of which files to download can be seen at https:\/\/github.com\/mahnazkoupaee\/WikiHow-Dataset.\r\n You need to download the following two files manually:\r\n 1) https:\/\/ucsb.app.box.com\/s\/ap23l8gafpezf4tq3wapr6u8241zz358 and save the file under \/wikihowAll.csv\r\n 2) https:\/\/ucsb.app.box.com\/s\/7yq601ijl1lzvlfu4rjdbbxforzd2oag and save the file under \/wikihowSep.csv\r\n\r\n The can e.g. be \"~\/manual_wikihow_data\".\r\n\r\n Wikihow can then be loaded using the following command `datasets.load_dataset(\"wikihow\", data_dir=\"\")`.\r\n .\r\n Manual data can be loaded with `datasets.load_dataset(wikihow, data_dir='')\r\n```\r\n\r\nSo I create a directory `.\/wikihow` and download `wikihowAll.csv` and `wikihowSep.csv` into the new directory.\r\n\r\nThen I run \r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('wikihow', data_dir='.\/wikihow')\r\n```\r\n\r\nthat's when I get the [stack trace](https:\/\/gist.github.com\/odellus\/602c3b2de52f541d353b1022f320ffc2)\r\n\r\n## Expected results\r\nI expected it to load the downloaded files into a dataset.\r\n\r\n## Actual results\r\n```python\r\nUsing custom data configuration default-data_dir=.%2Fwikihow\r\nDownloading and preparing dataset wikihow\/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/home\/azureuser\/.cache\/huggingface\/datasets\/wikihow\/default-data_dir=.%2Fwikihow\/0.0.0\/58f42f8f0e4d459811a0f69aaab35870093830ccd58006769e7e1eb3e0e686c2... 
---------------------------------------------------------------------------\r\nAttributeError\r\nTraceback (most recent call last)\r\n in \r\n----> 1 dataset = load_dataset('wikihow',data_dir='.\/wikihow')\r\n~\/.local\/lib\/python3.6\/site-packages\/datasets\/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs)\r\n745 try_from_hf_gcs=try_from_hf_gcs,\r\n746 base_path=base_path,--> \r\n747 use_auth_token=use_auth_token,\r\n748 )\r\n749 \r\n~\/.local\/lib\/python3.6\/site-packages\/datasets\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)\r\n577 if not downloaded_from_gcs:\r\n578 self._download_and_prepare( -->\r\n579 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs \r\n580 ) \r\n581 # Sync info\r\n~\/.local\/lib\/python3.6\/site-packages\/datasets\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n632 split_dict = SplitDict(dataset_name=self.name)\r\n633 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) -->\r\n634 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) \r\n635 \r\n636 # Checksums verification\r\n~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/wikihow\/58f42f8f0e4d459811a0f69aaab35870093830ccd58006769e7e1eb3e0e686c2\/wikihow.py in _split_generators(self, dl_manager)\r\n132\r\n133 path_to_manual_file = os.path.join(\r\n--> 134 os.path.abspath(os.path.expanduser(dl_manager.manual_dir)), self.config.filename \r\n135 ) \r\n136\r\nAttributeError: 'BuilderConfig' object has no attribute 'filename'\r\n```\r\n## Versions\r\nPaste the output of the following code:\r\n```python\r\nimport datasets\r\nimport sys\r\nimport platform\r\n\r\nprint(f\"\"\"\r\n- Datasets: {datasets.__version__}\r\n- Python: {sys.version}\r\n- Platform: {platform.platform()}\r\n\"\"\")\r\n```\r\n```\r\n- Datasets: 1.5.0\r\n- Python: 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]\r\n- Platform: Linux-5.4.0-1046-azure-x86_64-with-Ubuntu-18.04-bionic\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2239\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2238","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2238\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2238\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2238\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2238","id":861518291,"node_id":"MDExOlB1bGxSZXF1ZXN0NjE4MTY5NzM5","number":2238,"title":"NLU evaluation 
data","user":{"login":"dkajtoch","id":32985207,"node_id":"MDQ6VXNlcjMyOTg1MjA3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32985207?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dkajtoch","html_url":"https:\/\/github.com\/dkajtoch","followers_url":"https:\/\/api.github.com\/users\/dkajtoch\/followers","following_url":"https:\/\/api.github.com\/users\/dkajtoch\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dkajtoch\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dkajtoch\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dkajtoch\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dkajtoch\/orgs","repos_url":"https:\/\/api.github.com\/users\/dkajtoch\/repos","events_url":"https:\/\/api.github.com\/users\/dkajtoch\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dkajtoch\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1618850840000,"updated_at":1619191925000,"closed_at":1619191925000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2238","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2238","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2238.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2238.patch"},"body":"New intent classification dataset from https:\/\/github.com\/xliuhw\/NLU-Evaluation-Data","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2238\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2237","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2237\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2237\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2237\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2237","id":861427439,"node_id":"MDU6SXNzdWU4NjE0Mjc0Mzk=","number":2237,"title":"Update Dataset.dataset_size after transformed with 
map","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@albertvillanova I would like to take this up. It would be great if you could point me as to how the dataset size is calculated in HF. Thanks!"],"created_at":1618845578000,"updated_at":1618928525000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"After loading a dataset, if we transform it by using `.map` its `dataset_size` attirbute is not updated.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2237\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2236","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2236\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2236\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2236\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2236","id":861388145,"node_id":"MDU6SXNzdWU4NjEzODgxNDU=","number":2236,"title":"Request to add StrategyQA 
dataset","user":{"login":"sarahwie","id":8027676,"node_id":"MDQ6VXNlcjgwMjc2NzY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8027676?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sarahwie","html_url":"https:\/\/github.com\/sarahwie","followers_url":"https:\/\/api.github.com\/users\/sarahwie\/followers","following_url":"https:\/\/api.github.com\/users\/sarahwie\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sarahwie\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sarahwie\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sarahwie\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sarahwie\/orgs","repos_url":"https:\/\/api.github.com\/users\/sarahwie\/repos","events_url":"https:\/\/api.github.com\/users\/sarahwie\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sarahwie\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1618843586000,"updated_at":1618843586000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Request to add StrategyQA dataset\r\n- **Name:** StrategyQA\r\n- **Description:** open-domain QA [(project page)](https:\/\/allenai.org\/data\/strategyqa)\r\n- **Paper:** [url](https:\/\/arxiv.org\/pdf\/2101.02235.pdf)\r\n- **Data:** [here](https:\/\/allenai.org\/data\/strategyqa)\r\n- **Motivation:** uniquely-formulated dataset that also includes a question-decomposition breakdown and associated Wikipedia annotations for each step. 
Good for multi-hop reasoning modeling.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2236\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2235","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2235\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2235\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2235\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2235","id":861040716,"node_id":"MDExOlB1bGxSZXF1ZXN0NjE3Nzc0NDUw","number":2235,"title":"Update README.md","user":{"login":"PierreColombo","id":22492839,"node_id":"MDQ6VXNlcjIyNDkyODM5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22492839?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/PierreColombo","html_url":"https:\/\/github.com\/PierreColombo","followers_url":"https:\/\/api.github.com\/users\/PierreColombo\/followers","following_url":"https:\/\/api.github.com\/users\/PierreColombo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/PierreColombo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/PierreColombo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/PierreColombo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/PierreColombo\/orgs","repos_url":"https:\/\/api.github.com\/users\/PierreColombo\/repos","events_url":"https:\/\/api.github.com\/users\/PierreColombo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/PierreColombo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1618820462000,"updated_at":1618836559000,"closed_at":1618836559000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2235","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2235","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2235.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2235.patch"},"body":"Adding relevant citations (paper accepted at AAAI 2020 & EMNLP 2020) to the benchmark","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2235\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2234","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2234\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2234\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2234\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2234","id":860442246,"node_id":"MDExOlB1bGxSZXF1ZXN0NjE3MzI4NDU3","number":2234,"title":"Fix bash snippet formatting in 
ADD_NEW_DATASET.md","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1618675268000,"updated_at":1618829851000,"closed_at":1618818696000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2234","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2234","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2234.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2234.patch"},"body":"This PR indents the paragraphs around the bash snippets in ADD_NEW_DATASET.md to fix formatting.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2234\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2233","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2233\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2233\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2233\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2233","id":860097084,"node_id":"MDExOlB1bGxSZXF1ZXN0NjE3MDYwMTkw","number":2233,"title":"Fix `xnli` dataset tuple 
key","user":{"login":"NikhilBartwal","id":42388668,"node_id":"MDQ6VXNlcjQyMzg4NjY4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42388668?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/NikhilBartwal","html_url":"https:\/\/github.com\/NikhilBartwal","followers_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/followers","following_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/orgs","repos_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/repos","events_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1618600362000,"updated_at":1618822602000,"closed_at":1618822602000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2233","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2233","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2233.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2233.patch"},"body":"Closes #2229 \r\nThe `xnli` dataset yields a tuple key in case of `ar` which is inconsistant with the acceptable key types (str\/int).\r\nThe key was thus ported to `str` keeping the original information intact.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2233\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2232","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2232\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2232\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2232\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2232","id":860075931,"node_id":"MDExOlB1bGxSZXF1ZXN0NjE3MDQyNTI4","number":2232,"title":"Start filling GLUE dataset 
card","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I replaced all the \"we\" and applied your suggestion","Merging this for now, we can continue improving this card in other PRs :)"],"created_at":1618598257000,"updated_at":1618997589000,"closed_at":1618997588000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2232","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2232","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2232.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2232.patch"},"body":"The dataset card was pretty much empty.\r\n\r\nI added the descriptions (mainly from TFDS since the script is the same), and I also added the tasks tags as well as examples for a subset of the tasks.\r\n\r\ncc @sgugger ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2232\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2231","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2231\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2231\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2231\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2231","id":859850488,"node_id":"MDExOlB1bGxSZXF1ZXN0NjE2ODYyNTEx","number":2231,"title":"Fix map when removing columns on a formatted 
dataset","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1618582135000,"updated_at":1618585805000,"closed_at":1618585804000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2231","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2231","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2231.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2231.patch"},"body":"This should fix issue #2226\r\n\r\nThe `remove_columns` argument was ignored on formatted datasets","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2231\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2230","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2230\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2230\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2230\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2230","id":859817159,"node_id":"MDU6SXNzdWU4NTk4MTcxNTk=","number":2230,"title":"Keys yielded while generating dataset are not being 
checked","user":{"login":"NikhilBartwal","id":42388668,"node_id":"MDQ6VXNlcjQyMzg4NjY4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42388668?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/NikhilBartwal","html_url":"https:\/\/github.com\/NikhilBartwal","followers_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/followers","following_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/orgs","repos_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/repos","events_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Indeed there's no verification on the uniqueness nor the types of the keys.\r\nDo you already have some ideas of what you would like to implement and how ?","Hey @lhoestq, thank you so much for the opportunity.\r\nAlthough I haven't had much experience with the HF Datasets code, after a careful look at how the `ArrowWriter` functions, I think we can implement this as follows:\r\n\r\n1. First, we would have to update the `ArrowWriter.write()` function here:\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/fcd3c3c8e3b1d9a2f3686a496082e21f06591380\/src\/datasets\/arrow_writer.py#L296\r\nso that it accepts an additional argument `key` which would be appended along with the example here after hashing.\r\n\r\n2. Then, we would need to create a `Hasher` class which will take the key as its input and return a hash for it (We might need to use some hash salt which can be passed to the ArrowWriter.writer() with value equal to the `split_name` for differentiating between same keys of different splits)\r\n\r\n We can use the `hashlib.md5` function for hashing which will conert each key to its byte code before hashing (depending on the data type of the key) **Thus, the `key` type will be verified here**.\r\n\r\n3. Now, we would have to edit this\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/fcd3c3c8e3b1d9a2f3686a496082e21f06591380\/src\/datasets\/arrow_writer.py#L257\r\n so that it iterates over each `(hash, example)` pair (sorted according to hash). We can then simply **check whether each hash is different from the previous hash** (since they will be sorted)\r\n\r\nHowever, since I'm not very familiar with how the data is being written on disk in the form of a table, I might need some guidance for Step 3. \r\nPlease let me know your thought on this. Thanks!","Interesting !\r\nWe keep the dataset sorted in the order examples are generated by the builder (we expect the dataset builders to generate examples in deterministic order). Therefore I don't think we should shuffle the examples with the hashing. 
Let me know what you think.\r\nOther than that, I really like the idea of checking for key duplicates in `write_examples_on_file` :)\r\n\r\nThis looks like a great plan ! Feel free to open a PR and ping me if you have questions or if I can help\r\n","@lhoestq I'm glad you liked the idea!\r\nI think that since the keys will be unique and deterministic in nature themselves, even if we shuffle the examples according to the hash, a deterministic order would still be maintained (as the keys will always have the same hash, whenever the dataset is generated). \r\nAnd since we are not dealing with time-series data (which would require the data to be in original order), I don't think the order of examples would matter much, as long as the order is deterministic and constant for all users.\r\n\r\nI think that this is also what was originally envisioned as mentioned in the documentation here:\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/6775661b19d2ec339784f3d84553a3996a1d86c3\/src\/datasets\/builder.py#L973\r\n\r\nAlso, if we avoid this, we would need to keep track of all the hashed keys in some place and compare each individual key with all others. This can cause some major overhead as each dataset consists of tens of thousands of examples.\r\nLet me know your thoughts on it! I would be opening a PR soon :)","When users load their own data, they expect the order to stay the same. I think that shuffling the data can make things inconvenient.\r\n\r\n> I think that this is also what was originally envisioned as mentioned in the documentation here:\r\n\r\nThis part was originally developed by tensorflow datasets, and tensorflow datasets indeed does the shuffling. However in this library this is probably not what we want in the general case. But @albertvillanova and @thomwolf, if you have opinions on this please let us know.\r\n\r\n> Also, if we avoid this, we would need to keep track of all the hashed keys in some place and compare each individual key with all others. This can cause some major overhead as each dataset consists of tens of thousands of examples.\r\n\r\nMaybe we can simply keep track of the hashes of each batch being written ? The size of the batch when the data are saved in Arrow is 10 000 examples. This would only ensure that we don't have duplicates in each batch, but there might still be duplicates across batches. For 10 000 examples the hashes can just be stored as a python `set`.\r\n\r\nOtherwise if we want full deduplication, we need an extra tool that allows us to temporarily save and query hashes, which may need to use disk space rather than memory.","Yes I think we want to keep the original order by default and only shuffle when the user asks for it (for instance by calling `dataset.shuffle()`). That\u2019s how I had it in mind originally.","Hey @lhoestq, I just had a more in-depth look at the original TFDS code about why the keys and hash were used in the first place.\r\n\r\nIn my opinion, the only use that the `hash(key)` serves is that it allows us to shuffle the examples in a deterministic order (as each example will always yield the same key and thus, the same hash on every system) so that the same dataset is generated for each user, irrespective of the order the examples are yielded by the dataset builder on different user systems.\r\n\r\nOtherwise, if we are not shuffling, then while yielding and writing the data, after getting the key and hashing it for an example, I can't quite see the use of the hash or the key. 
The hash will simply be generated for each example but not actually used anywhere?\r\n\r\n@lhoestq @thomwolf It would be great if you could explain a bit more about the usage of keys. Thanks!\r\n","In `datasets` the keys are currently ignored.\r\nFor shuffling we don't use the keys. Instead we shuffle an array of indices. Since both the original order of the dataset and the indices shuffling are deterministic, then `dataset.shuffle` is deterministic as well.\r\nWe can use it to:\r\n1. detect duplicates\r\n2. verify that the generation order is indeed deterministic\r\n3. maybe more ?","Thanks a lot @lhoestq. I think I understand what we need to do now. The keys can indeed be used for detecting duplicates in generated examples as well as ensuring the order.\r\n\r\n> Maybe we cam simply keep track of the hashes of of each batch being written ? The size of the batch when the data are save in arrow is 10 000 examples. This would only ensure that we don't have duplicates in each batch,\r\n\r\nI think that checking for duplicates in every batch independently would be sufficient as the probability of collisions using something like `MD5` is very low. I would be opening a draft PR soon. It would be great to have your guidance. Thanks!"],"created_at":1618579787000,"updated_at":1620667881000,"closed_at":1620667881000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.\r\nCurrently, the keys are not being checked for any of these, as evident from `xnli' dataset generation:\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/56346791aed417306d054d89bd693d6b7eab17f7\/datasets\/xnli\/xnli.py#L196\r\nEven after having a tuple as key, the dataset is generated without any warning.\r\n\r\nAlso, as tested in the case of `anli` dataset (I tweeked the dataset script to use `1` as a key for every example):\r\n```\r\n>>> import datasets\r\n>>> nik = datasets.load_dataset('anli')\r\nDownloading and preparing dataset anli\/plain_text (download: 17.76 MiB, generated: 73.55 MiB, post-processed: Unknown size, total: 91.31 MiB) to C:\\Users\\nikhil\\.cache\\huggingface\\datasets\\anli\\plain_text\\0.1.0\\43fa2c99c10bf8478f1fa0860f7b122c6b277c4c41306255b7641257cf4e3299...\r\n0 examples [00:00, ? examples\/s]1 {'uid': '0fd0abfb-659e-4453-b196-c3a64d2d8267', 'premise': 'The Parma trolleybus system (Italian: \"Rete filoviaria di Parma\" ) forms part of the public transport network of the city and \"comune\" of Parma, in the region of Emilia-Romagna, northern Italy. In operation since 1953, the system presently comprises four urban routes.', 'hypothesis': 'The trolleybus system has over 2 urban routes', 'label': 'entailment', 'reason': ''}\r\n2021-04-16 12:38:14.483968: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll\r\n1 examples [00:01, 1.87s\/ examples]1 {'uid': '7ed72ff4-40b7-4f8a-b1b9-6c612aa62c84', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 \u2013 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage\/science fiction adventure series \"The Champions\". She has been cited as a sex symbol of the 1960s and 1970s. 
Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': \"Sharron Macready was a popular character through the 1980's.\", 'label': 'neutral', 'reason': ''}\r\n1 {'uid': '5d2930a3-62ac-485d-94d7-4e36cbbcd7b5', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 \u2013 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage\/science fiction adventure series \"The Champions\". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': \"Bastedo didn't keep any pets because of her views on animal rights.\", 'label': 'neutral', 'reason': ''}\r\n1 {'uid': '324db753-ddc9-4a85-a825-f09e2e5aebdd', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 \u2013 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage\/science fiction adventure series \"The Champions\". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': 'Alexandra Bastedo was named by her mother.', 'label': 'neutral', 'reason': ''}\r\n1 {'uid': '4874f429-da0e-406a-90c7-22240ff3ddf8', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 \u2013 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage\/science fiction adventure series \"The Champions\". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': 'Bastedo cared for all the animals that inhabit the earth.', 'label': 'neutral', 'reason': ''}\r\n```\r\nHere also, the dataset was generated successfully even though it had the same keys, without any warning.\r\n\r\nThe reason appears to stem from here:\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/56346791aed417306d054d89bd693d6b7eab17f7\/src\/datasets\/builder.py#L988\r\nHere, although it has access to every key, it is not being checked and the example is written directly:\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/56346791aed417306d054d89bd693d6b7eab17f7\/src\/datasets\/builder.py#L992\r\n\r\nI would like to take this issue if you allow me. 
Thank You!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2230\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2229","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2229\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2229\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2229\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2229","id":859810602,"node_id":"MDU6SXNzdWU4NTk4MTA2MDI=","number":2229,"title":"`xnli` dataset creating a tuple key while yielding instead of `str` or `int`","user":{"login":"NikhilBartwal","id":42388668,"node_id":"MDQ6VXNlcjQyMzg4NjY4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42388668?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/NikhilBartwal","html_url":"https:\/\/github.com\/NikhilBartwal","followers_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/followers","following_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/orgs","repos_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/repos","events_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/NikhilBartwal\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Sure sounds good. Also if you find other datasets that use tuples instead of str\/int, you can also fix them !\r\nthanks :)","@lhoestq I have sent a PR for fixing the issue. Would be great if you could have a look! Thanks!"],"created_at":1618579313000,"updated_at":1618822602000,"closed_at":1618822602000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"When using `ds = datasets.load_dataset('xnli', 'ar')`, the dataset generation script uses the following section of code in the egging, which yields a tuple key instead of the specified `str` or `int` key:\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/56346791aed417306d054d89bd693d6b7eab17f7\/datasets\/xnli\/xnli.py#L196\r\n\r\nSince, community datasets in Tensorflow Datasets also use HF datasets, this causes a Tuple key error while loading HF's `xnli` dataset. 
\r\nI'm up for sending a fix for this, I think we can simply use `file_idx + \"_\" + row_idx` as a unique key instead of a tuple.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2229\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2228","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2228\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2228\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2228\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2228","id":859795563,"node_id":"MDExOlB1bGxSZXF1ZXN0NjE2ODE2MTQz","number":2228,"title":"[WIP] Add ArrayXD support for fixed size list.","user":{"login":"jblemoine","id":22685854,"node_id":"MDQ6VXNlcjIyNjg1ODU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22685854?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jblemoine","html_url":"https:\/\/github.com\/jblemoine","followers_url":"https:\/\/api.github.com\/users\/jblemoine\/followers","following_url":"https:\/\/api.github.com\/users\/jblemoine\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jblemoine\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jblemoine\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jblemoine\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jblemoine\/orgs","repos_url":"https:\/\/api.github.com\/users\/jblemoine\/repos","events_url":"https:\/\/api.github.com\/users\/jblemoine\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jblemoine\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Awesome thanks ! To fix the CI you just need to merge master into your branch.\r\nThe error is unrelated to your PR"],"created_at":1618578248000,"updated_at":1618837338000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2228","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2228","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2228.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2228.patch"},"body":"Add support for fixed size list for ArrayXD when shape is known . See https:\/\/github.com\/huggingface\/datasets\/issues\/2146\r\nSince offset are not stored anymore, the file size is now roughly equal to the actual data size. 
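\r\n\r\nFor context, this targets feature definitions like the following (illustrative snippet only, the column name is made up):\r\n```python\r\nfrom datasets import Array2D, Dataset, Features\r\n\r\n# a fixed-shape 2D array column\r\nfeatures = Features({\"matrix\": Array2D(shape=(28, 28), dtype=\"float32\")})\r\ndataset = Dataset.from_dict({\"matrix\": [[[0.0] * 28] * 28]}, features=features)\r\n```\r\n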
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2228\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2227","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2227\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2227\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2227\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2227","id":859771526,"node_id":"MDExOlB1bGxSZXF1ZXN0NjE2Nzk1NjMx","number":2227,"title":"Use update_metadata_with_features decorator in class_encode_column method","user":{"login":"SBrandeis","id":33657802,"node_id":"MDQ6VXNlcjMzNjU3ODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33657802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SBrandeis","html_url":"https:\/\/github.com\/SBrandeis","followers_url":"https:\/\/api.github.com\/users\/SBrandeis\/followers","following_url":"https:\/\/api.github.com\/users\/SBrandeis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SBrandeis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SBrandeis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SBrandeis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SBrandeis\/orgs","repos_url":"https:\/\/api.github.com\/users\/SBrandeis\/repos","events_url":"https:\/\/api.github.com\/users\/SBrandeis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SBrandeis\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1618576301000,"updated_at":1618580980000,"closed_at":1618580979000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2227","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2227","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2227.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2227.patch"},"body":"Following @mariosasko 's comment","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2227\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2226","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2226\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2226\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2226\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2226","id":859720302,"node_id":"MDU6SXNzdWU4NTk3MjAzMDI=","number":2226,"title":"Batched map fails when removing all 
columns","user":{"login":"villmow","id":2743060,"node_id":"MDQ6VXNlcjI3NDMwNjA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2743060?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/villmow","html_url":"https:\/\/github.com\/villmow","followers_url":"https:\/\/api.github.com\/users\/villmow\/followers","following_url":"https:\/\/api.github.com\/users\/villmow\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/villmow\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/villmow\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/villmow\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/villmow\/orgs","repos_url":"https:\/\/api.github.com\/users\/villmow\/repos","events_url":"https:\/\/api.github.com\/users\/villmow\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/villmow\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["I found the problem. I called `set_format` on some columns before. This makes it crash. 
Here is a complete example to reproduce:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nsst = load_dataset(\"sst\")\r\nsst.set_format(\"torch\", columns=[\"label\"], output_all_columns=True)\r\nds = sst[\"train\"]\r\n\r\n# crashes\r\nds.map(\r\n lambda x: {\"a\": list(range(20))},\r\n remove_columns=ds.column_names,\r\n load_from_cache_file=False,\r\n num_proc=1,\r\n batched=True,\r\n)\r\n```","Thanks for reporting and for providing this code to reproduce the issue, this is really helpful !","I merged a fix, it should work on `master` now :)\r\nWe'll do a new release soon !"],"created_at":1618571821000,"updated_at":1618585841000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi @lhoestq ,\r\n\r\nI'm hijacking this issue, because I'm currently trying to do the approach you recommend:\r\n\r\n> Currently the optimal setup for single-column computations is probably to do something like\r\n> \r\n> ```python\r\n> result = dataset.map(f, input_columns=\"my_col\", remove_columns=dataset.column_names)\r\n> ```\r\n\r\nHere is my code (see the edit below, in which I added a simplified version).\r\n\r\nThis is the error:\r\n```bash\r\npyarrow.lib.ArrowInvalid: Column 1 named tokens expected length 8964 but got length 1000\r\n```\r\nI wonder why this error occurs when I delete every column. Can you give me a hint?\r\n\r\n### Edit:\r\nI preprocessed my dataset before (using map with the features argument) and saved it to disk. Might this be part of the error? I can iterate over the\r\ncomplete dataset and print every sample before calling map. There seems to be no other problem with the dataset.\r\n\r\nI tried to simplify the code that crashes:\r\n\r\n```python\r\n# works\r\nlog.debug(dataset.column_names)\r\nlog.debug(dataset)\r\nfor i, sample in enumerate(dataset):\r\n log.debug(i, sample)\r\n\r\n# crashes\r\ncounted_dataset = dataset.map(\r\n lambda x: {\"a\": list(range(20))},\r\n input_columns=column,\r\n remove_columns=dataset.column_names,\r\n load_from_cache_file=False,\r\n num_proc=num_workers,\r\n batched=True,\r\n)\r\n```\r\n\r\n```\r\npyarrow.lib.ArrowInvalid: Column 1 named tokens expected length 20 but got length 1000\r\n```\r\n\r\nEdit2: \r\n\r\nMight this be a problem with a schema I set when preprocessing the dataset before? 
I tried to add the `features` argument to the function and then I get a new error:\r\n\r\n```python\r\n# crashes\r\ncounted_dataset = dataset.map(\r\n lambda x: {\"a\": list(range(20))},\r\n input_columns=column,\r\n remove_columns=dataset.column_names,\r\n load_from_cache_file=False,\r\n num_proc=num_workers,\r\n batched=True,\r\n features=datasets.Features(\r\n {\r\n \"a\": datasets.Sequence(datasets.Value(\"int32\"))\r\n }\r\n )\r\n)\r\n```\r\n\r\n```\r\n File \"env\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 1704, in _map_single\r\n writer.write_batch(batch)\r\n File \"env\/lib\/python3.8\/site-packages\/datasets\/arrow_writer.py\", line 312, in write_batch\r\n col_type = schema.field(col).type if schema is not None else None\r\n File \"pyarrow\/types.pxi\", line 1341, in pyarrow.lib.Schema.field\r\nKeyError: 'Column tokens does not exist in schema'\r\n```\r\n\r\n_Originally posted by @villmow in https:\/\/github.com\/huggingface\/datasets\/issues\/2193#issuecomment-820230874_","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2226\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2225","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2225\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2225\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2225\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2225","id":858469561,"node_id":"MDExOlB1bGxSZXF1ZXN0NjE1NzAzMTY4","number":2225,"title":"fixed one instance of 'train' to 'test'","user":{"login":"alexwdong","id":46733535,"node_id":"MDQ6VXNlcjQ2NzMzNTM1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/46733535?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/alexwdong","html_url":"https:\/\/github.com\/alexwdong","followers_url":"https:\/\/api.github.com\/users\/alexwdong\/followers","following_url":"https:\/\/api.github.com\/users\/alexwdong\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/alexwdong\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/alexwdong\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/alexwdong\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/alexwdong\/orgs","repos_url":"https:\/\/api.github.com\/users\/alexwdong\/repos","events_url":"https:\/\/api.github.com\/users\/alexwdong\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/alexwdong\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks ! 
good catch\r\n\r\nCould you also update the metadata of this dataset ?\r\nYou can do so by running\r\n```\r\ndatasets-cli test .\/datasets\/newsgroup --all_configs --save_infos --ignore_verifications\r\n```\r\nThis should update the dataset_infos.json file that contains the size of all the splits for example.","Hi,\r\n`dataset_infos.json` should be updated now.\r\n"],"created_at":1618460800000,"updated_at":1618524590000,"closed_at":1618521549000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2225","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2225","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2225.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2225.patch"},"body":"I believe this should be 'test' instead of 'train'","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2225\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2224","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2224\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2224\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2224\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2224","id":857983361,"node_id":"MDU6SXNzdWU4NTc5ODMzNjE=","number":2224,"title":"Raise error if Windows max path length is not disabled","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1618412240000,"updated_at":1618412353000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"On startup, raise an error if Windows max path length is not disabled; ask the user to disable it.\r\n\r\nLinked to discussion in #2220.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2224\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2223","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2223\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2223\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2223\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2223","id":857870800,"node_id":"MDExOlB1bGxSZXF1ZXN0NjE1MjE4MDIz","number":2223,"title":"Set test cache config","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> why a cache dir per test function does not work?\r\n\r\nProbably because we end up with multiple `datasets_module` in the python path. 
This breaks the import of all the datasets\/metrics modules.\r\nIf you want to use one modules cache per test, you may need remove the `datasets_module` that was added to the python path during the test.\r\nIndeed if the module cache hasn't been initialized, then it's added to the python path by calling `init_dynamic_modules`:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/ba76012a19193a35053b9e20243ff40c2b4204ab\/src\/datasets\/load.py#L291-L291","@lhoestq, for the moment, this PR avoids populating the `~\/.cache` dir during training, which is already an improvement, isn't it?","Yes we can merge it this way if you're fine with it !\r\nThis is a good improvement","I will eventually try to implement a `cache_dir` per test function in another PR, but I think I should first fix some side effects in tests: each test function should be atomic and able to have its own `cache_dir` without being affected by the `cache_dir` set in other test functions.","Yes this would be ideal !"],"created_at":1618404924000,"updated_at":1618513885000,"closed_at":1618513885000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2223","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2223","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2223.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2223.patch"},"body":"Currently, running the tests populates the default cache directory `\"~\/.cache\"`.\r\n\r\nThis PR monkey-patches the config to set the cache directory within the temporary test directory, avoiding side effects.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2223\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2222","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2222\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2222\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2222\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2222","id":857847231,"node_id":"MDExOlB1bGxSZXF1ZXN0NjE1MTk5MTM5","number":2222,"title":"Fix too long WindowsFileLock 
name","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892913,"node_id":"MDU6TGFiZWwxOTM1ODkyOTEz","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/wontfix","name":"wontfix","color":"ffffff","default":true,"description":"This will not be worked on"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Windows users should disable the max path length limit. It's a nightmare to handle it.\r\nAlso the lock path must not be changed in a random way. Otherwise from another process the lock path might not be the same and the locking mechanism won't work.","Do you agree with handling the case where MAX_PATH is not disabled? If not, we can close this PR.\r\n\r\nIf so, would it work a deterministic lock path instead of random?","I'd rather not handle this at all, since there will be other places in the code where the limit will break things"],"created_at":1618403212000,"updated_at":1618412425000,"closed_at":1618411579000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2222","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2222","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2222.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2222.patch"},"body":"Fix WindowsFileLock name longer than allowed MAX_PATH by shortening the basename.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2222\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2221","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2221\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2221\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2221\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2221","id":857833770,"node_id":"MDExOlB1bGxSZXF1ZXN0NjE1MTg4MTE5","number":2221,"title":"Add SLR70 - SLR80 and SLR86 to OpenSLR 
dataset","user":{"login":"cahya-wirawan","id":7669893,"node_id":"MDQ6VXNlcjc2Njk4OTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7669893?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cahya-wirawan","html_url":"https:\/\/github.com\/cahya-wirawan","followers_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/followers","following_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/orgs","repos_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/repos","events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1618402158000,"updated_at":1618408219000,"closed_at":1618408219000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2221","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2221","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2221.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2221.patch"},"body":"I would like to add SLR70, SLR71, SLR72, SLR73, SLR74, SLR75, SLR76, SLR77, SLR78, SLR79, SLR80 and SLR86 to OpenSLR dataset. The languages are:\r\nNigerian English, Chilean Spanish, Columbian Spanish, Peruvian Spanish, Puerto Rico Spanish, Venezuelan Spanish, Basque, Galician, Gujarati and Kannada.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2221\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2220","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2220\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2220\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2220\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2220","id":857774626,"node_id":"MDExOlB1bGxSZXF1ZXN0NjE1MTM4NDQz","number":2220,"title":"Fix infinite loop in 
WindowsFileLock","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892913,"node_id":"MDU6TGFiZWwxOTM1ODkyOTEz","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/wontfix","name":"wontfix","color":"ffffff","default":true,"description":"This will not be worked on"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["How is it possible to get an infinite loop ? Can you add more details ?","Yes, in Windows, if the filename is too long, a `FileNotFoundError` is raised. The exception should be raised in this case. Otherwise, we get into an infinite loop.\r\n\r\nIf other process has the file locked, then `PermissionError` is raised. In this case, `pass` is OK.","Note that the filelock module comes from this project that hasn't changed in years - while still being used by ten of thousands of projects:\r\nhttps:\/\/github.com\/benediktschmitt\/py-filelock\r\n\r\nUnless we have proper tests for this, I wouldn't recommend to change it","I'm pretty sure many things from the library could break for windows users that haven't disabled the max path length limit.\r\nMaybe it would be simpler to simply raise an error on startup. 
For example, for Windows users the error could ask them to disable the limit if it hasn't been disabled yet ?"],"created_at":1618397398000,"updated_at":1618412390000,"closed_at":1618412374000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2220","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2220","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2220.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2220.patch"},"body":"Raise exception to avoid infinite loop.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2220\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2219","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2219\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2219\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2219\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2219","id":857321242,"node_id":"MDExOlB1bGxSZXF1ZXN0NjE0NzYxMzA3","number":2219,"title":"Added CUAD dataset","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["1) Changed the language in a few places apart from those you mentioned in README\r\n2) Reduced the size of dummy data folder by removing all other entries except the first\r\n3) Updated YAML tags by using the past version of the `datasets-tagging` app. Will update the quick fix on that repository too in a while","@bhavitvyamalik Thanks for adding the dataset on Hugging Face! Can you please also add a metric for the dataset using the squad_v2 metric file? 
","@MohammedRakib you can check [#2257](https:\/\/github.com\/huggingface\/datasets\/pull\/2257)"],"created_at":1618347903000,"updated_at":1619274351000,"closed_at":1618563044000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2219","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2219","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2219.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2219.patch"},"body":"Dataset link : https:\/\/github.com\/TheAtticusProject\/cuad\/\r\n\r\nWorking on README.md currently.\r\n\r\nCloses #2084 and [#1](https:\/\/github.com\/TheAtticusProject\/cuad\/issues\/1). ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2219\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2218","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2218\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2218\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2218\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2218","id":857238435,"node_id":"MDU6SXNzdWU4NTcyMzg0MzU=","number":2218,"title":"Duplicates in the LAMA dataset","user":{"login":"amarasovic","id":7276193,"node_id":"MDQ6VXNlcjcyNzYxOTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7276193?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/amarasovic","html_url":"https:\/\/github.com\/amarasovic","followers_url":"https:\/\/api.github.com\/users\/amarasovic\/followers","following_url":"https:\/\/api.github.com\/users\/amarasovic\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/amarasovic\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/amarasovic\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/amarasovic\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/amarasovic\/orgs","repos_url":"https:\/\/api.github.com\/users\/amarasovic\/repos","events_url":"https:\/\/api.github.com\/users\/amarasovic\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/amarasovic\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi,\r\n\r\ncurrently the datasets API doesn't have a dedicated function to remove duplicate rows, but since the LAMA dataset is not too big (it fits in RAM), we can leverage pandas to help us remove duplicates:\r\n```python\r\n>>> from datasets import load_dataset, Dataset\r\n>>> dataset = load_dataset('lama', split='train')\r\n>>> dataset = Dataset.from_pandas(dataset.to_pandas().drop_duplicates(subset=...)) # specify a subset of the columns to consider in a list or use all of the columns if None\r\n```\r\n\r\nNote that the same can be achieved with the `Dataset.filter` method but this would requrie some extra work (filter function, speed?).","Oh, seems like my question wasn't specified well. I'm _not_ asking how to remove duplicates, but whether duplicates should be removed if I want to do the evaluation on the LAMA dataset as it was proposed in the original paper\/repository? 
In other words, will I get the same result if evaluate on the de-duplicated dataset loaded from HF's `datasets` as the results I'd get if I use the original data format and data processing script in https:\/\/github.com\/facebookresearch\/LAMA? ","So it looks like the person who added LAMA to the library chose to have one item per piece of evidence rather than one per relation - and in this case, there are duplicate pieces of evidence for the target relation\r\n\r\nIf I understand correctly, to reproduce reported results, you would have to aggregate predictions for the several pieces of evidence provided for each relation (each unique `uuid`), but the original authors will know better \r\n\r\ncc @fabiopetroni "],"created_at":1618340389000,"updated_at":1618436547000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I observed duplicates in the LAMA probing dataset, see a minimal code below. \r\n\r\n```\r\n>>> import datasets\r\n>>> dataset = datasets.load_dataset('lama')\r\nNo config specified, defaulting to: lama\/trex\r\nReusing dataset lama (\/home\/anam\/.cache\/huggingface\/datasets\/lama\/trex\/1.1.0\/97deffae13eca0a18e77dfb3960bb31741e973586f5c1fe1ec0d6b5eece7bddc)\r\n>>> train_dataset = dataset['train']\r\n>>> train_dataset[0]\r\n{'description': 'language or languages a person has learned from early childhood', 'label': 'native language', 'masked_sentence': 'Louis Jules Trochu ([lwi \u0292yl t\u0281\u0254\u0283y]; 12 March 1815 \u2013 7 October 1896) was a [MASK] military leader and politician.', 'obj_label': 'French', 'obj_surface': 'French', 'obj_uri': 'Q150', 'predicate_id': 'P103', 'sub_label': 'Louis Jules Trochu', 'sub_surface': 'Louis Jules Trochu', 'sub_uri': 'Q441235', 'template': 'The native language of [X] is [Y] .', 'template_negated': '[X] is not owned by [Y] .', 'type': 'N-1', 'uuid': '40b2ed1c-0961-482e-844e-32596b6117c8'}\r\n>>> train_dataset[1]\r\n{'description': 'language or languages a person has learned from early childhood', 'label': 'native language', 'masked_sentence': 'Louis Jules Trochu ([lwi \u0292yl t\u0281\u0254\u0283y]; 12 March 1815 \u2013 7 October 1896) was a [MASK] military leader and politician.', 'obj_label': 'French', 'obj_surface': 'French', 'obj_uri': 'Q150', 'predicate_id': 'P103', 'sub_label': 'Louis Jules Trochu', 'sub_surface': 'Louis Jules Trochu', 'sub_uri': 'Q441235', 'template': 'The native language of [X] is [Y] .', 'template_negated': '[X] is not owned by [Y] .', 'type': 'N-1', 'uuid': '40b2ed1c-0961-482e-844e-32596b6117c8'}\r\n```\r\n\r\nI checked the original data available at https:\/\/dl.fbaipublicfiles.com\/LAMA\/data.zip. This particular duplicated comes from:\r\n```\r\n{\"uuid\": \"40b2ed1c-0961-482e-844e-32596b6117c8\", \"obj_uri\": \"Q150\", \"obj_label\": \"French\", \"sub_uri\": \"Q441235\", \"sub_label\": \"Louis Jules Trochu\", \"predicate_id\": \"P103\", \"evidences\": [{\"sub_surface\": \"Louis Jules Trochu\", \"obj_surface\": \"French\", \"masked_sentence\": \"Louis Jules Trochu ([lwi \\u0292yl t\\u0281\\u0254\\u0283y]; 12 March 1815 \\u2013 7 October 1896) was a [MASK] military leader and politician.\"}, {\"sub_surface\": \"Louis Jules Trochu\", \"obj_surface\": \"French\", \"masked_sentence\": \"Louis Jules Trochu ([lwi \\u0292yl t\\u0281\\u0254\\u0283y]; 12 March 1815 \\u2013 7 October 1896) was a [MASK] military leader and politician.\"}]}\r\n``` \r\n\r\nWhat is the best way to deal with these duplicates if I want to use `datasets` to probe with LAMA? 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2218\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2217","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2217\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2217\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2217\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2217","id":857011314,"node_id":"MDExOlB1bGxSZXF1ZXN0NjE0NTAxNjIz","number":2217,"title":"Revert breaking change in cache_files property","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1618323604000,"updated_at":1618410264000,"closed_at":1618410263000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2217","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2217","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2217.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2217.patch"},"body":"#2025 changed the format of `Dataset.cache_files`.\r\nBefore it was formatted like\r\n```python\r\n[{\"filename\": \"path\/to\/file.arrow\", \"start\": 0, \"end\": 1337}]\r\n```\r\nand it was changed to\r\n```python\r\n[\"path\/to\/file.arrow\"]\r\n```\r\nsince there's no start\/end offsets available anymore.\r\n\r\nTo make this less breaking, I'm setting the format back to a list of dicts:\r\n```python\r\n[{\"filename\": \"path\/to\/file.arrow\"}]\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2217\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2216","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2216\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2216\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2216\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2216","id":856955534,"node_id":"MDExOlB1bGxSZXF1ZXN0NjE0NDU0MjE1","number":2216,"title":"added real label for 
glue\/mrpc to test set","user":{"login":"philschmid","id":32632186,"node_id":"MDQ6VXNlcjMyNjMyMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32632186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/philschmid","html_url":"https:\/\/github.com\/philschmid","followers_url":"https:\/\/api.github.com\/users\/philschmid\/followers","following_url":"https:\/\/api.github.com\/users\/philschmid\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/philschmid\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/philschmid\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/philschmid\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/philschmid\/orgs","repos_url":"https:\/\/api.github.com\/users\/philschmid\/repos","events_url":"https:\/\/api.github.com\/users\/philschmid\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/philschmid\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1618320020000,"updated_at":1618322000000,"closed_at":1618321999000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2216","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2216","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2216.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2216.patch"},"body":"Added real label to `glue.py` `mrpc` task for test split.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2216\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2215","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2215\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2215\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2215\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2215","id":856716791,"node_id":"MDExOlB1bGxSZXF1ZXN0NjE0MjUyNTEy","number":2215,"title":"Add datasets SLR35 and SLR36 to OpenSLR 
","user":{"login":"cahya-wirawan","id":7669893,"node_id":"MDQ6VXNlcjc2Njk4OTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7669893?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cahya-wirawan","html_url":"https:\/\/github.com\/cahya-wirawan","followers_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/followers","following_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/orgs","repos_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/repos","events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq,\r\nCould you please help me, I got this error message in all \"ci\/circleci: run_dataset_script_tests_pyarrow*\" tests:\r\n```\r\n...\r\n \"\"\"Wrapper classes for various types of tokenization.\"\"\"\r\n \r\n from bleurt.lib import bert_tokenization\r\n import tensorflow.compat.v1 as tf\r\n> import sentencepiece as spm\r\nE ModuleNotFoundError: No module named 'sentencepiece'\r\n...\r\n```\r\nI am not sure why I do get it. Thanks.\r\n","Hi ! This issue appeared on master since the last update of `BLEURT`.\r\nI'm working on a fix. You can ignore this issue for this PR","> Hi ! This issue appeared on master since the last update of `BLEURT`.\r\n> I'm working on a fix. 
You can ignore this issue for this PR\r\n\r\nThanks for the info","Merging since the CI is fixed on master"],"created_at":1618302247000,"updated_at":1618322714000,"closed_at":1618322714000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2215","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2215","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2215.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2215.patch"},"body":"I would like to add [SLR35](https:\/\/openslr.org\/35\/) (18GB) and [SLR36](https:\/\/openslr.org\/36\/) (22GB) which are Large Javanese and Sundanese ASR training data set collected by Google in collaboration with Reykjavik University and Universitas Gadjah Mada in Indonesia.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2215\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2214","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2214\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2214\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2214\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2214","id":856333657,"node_id":"MDU6SXNzdWU4NTYzMzM2NTc=","number":2214,"title":"load_metric error: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings'","user":{"login":"nsaphra","id":414788,"node_id":"MDQ6VXNlcjQxNDc4OA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/414788?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nsaphra","html_url":"https:\/\/github.com\/nsaphra","followers_url":"https:\/\/api.github.com\/users\/nsaphra\/followers","following_url":"https:\/\/api.github.com\/users\/nsaphra\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nsaphra\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nsaphra\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nsaphra\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nsaphra\/orgs","repos_url":"https:\/\/api.github.com\/users\/nsaphra\/repos","events_url":"https:\/\/api.github.com\/users\/nsaphra\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nsaphra\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @nsaphra, thanks for reporting.\r\n\r\nThis issue was fixed in `datasets` version 1.3.0. Could you please update `datasets` and tell me if the problem persists?\r\n```shell\r\npip install -U datasets\r\n```","There might be a bug in the conda version of `datasets` 1.2.1 where the datasets\/metric scripts are downloaded from `master` instead of the `1.2.1` repo.\r\n\r\nYou can try setting the env var `HF_SCRIPTS_VERSION=\"1.2.1\"` as a workaround. Let me know if that helps.","I just faced the same issue. 
I was using 1.2.1 from conda and received the same AttributeError complaining about 'add_start_docstrings'. Uninstalling the conda installed datasets and then installing the latest datasets (version 1.5.0) using pip install solved the issue for me. I don't like mixing up conda and pip installs in the same environments but this will have to do for now, until 1.5.0 is made available through conda.","Yep, seems to have fixed things! The conda package could really do with an update. Thanks!"],"created_at":1618259161000,"updated_at":1619191202000,"closed_at":1619191202000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I'm having the same problem as [Notebooks issue 10](https:\/\/github.com\/huggingface\/notebooks\/issues\/10) on datasets 1.2.1, and it seems to be an issue with the datasets package.\r\n\r\n```python\r\n>>> from datasets import load_metric\r\n>>> metric = load_metric(\"glue\", \"sst2\")\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/ext3\/miniconda3\/lib\/python3.8\/site-packages\/datasets-1.2.1-py3.8.egg\/datasets\/load.py\", line 502, in load_metric\r\n File \"\/ext3\/miniconda3\/lib\/python3.8\/site-packages\/datasets-1.2.1-py3.8.egg\/datasets\/load.py\", line 66, in import_main_class\r\n File \"\/ext3\/miniconda3\/lib\/python3.8\/importlib\/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"\", line 1014, in _gcd_import\r\n File \"\", line 991, in _find_and_load\r\n File \"\", line 975, in _find_and_load_unlocked\r\n File \"\", line 671, in _load_unlocked\r\n File \"\", line 783, in exec_module\r\n File \"\", line 219, in _call_with_frames_removed\r\n File \"\/home\/ns4008\/.cache\/huggingface\/modules\/datasets_modules\/metrics\/glue\/e4606ab9804a36bcd5a9cebb2cb65bb14b6ac78ee9e6d5981fa679a495dd55de\/glue.py\", line 105, in \r\n @datasets.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)\r\nAttributeError: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings'\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2214\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2213","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2213\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2213\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2213\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2213","id":856025320,"node_id":"MDExOlB1bGxSZXF1ZXN0NjEzNjcwODk2","number":2213,"title":"Fix lc_quad download 
checksum","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1618237019000,"updated_at":1618437894000,"closed_at":1618407745000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2213","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2213","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2213.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2213.patch"},"body":"Fixes #2211 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2213\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2212","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2212\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2212\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2212\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2212","id":855999133,"node_id":"MDU6SXNzdWU4NTU5OTkxMzM=","number":2212,"title":"Can't reach \"https:\/\/storage.googleapis.com\/illuin\/fquad\/train.json.zip\" when trying to load fquad dataset","user":{"login":"hanss0n","id":21348833,"node_id":"MDQ6VXNlcjIxMzQ4ODMz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/21348833?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hanss0n","html_url":"https:\/\/github.com\/hanss0n","followers_url":"https:\/\/api.github.com\/users\/hanss0n\/followers","following_url":"https:\/\/api.github.com\/users\/hanss0n\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hanss0n\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hanss0n\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hanss0n\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hanss0n\/orgs","repos_url":"https:\/\/api.github.com\/users\/hanss0n\/repos","events_url":"https:\/\/api.github.com\/users\/hanss0n\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hanss0n\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! 
Apparently the data are not available from this url anymore. We'll replace it with the new url when it's available","I saw this on their website when we request to download the dataset:\r\n![image](https:\/\/user-images.githubusercontent.com\/19718818\/114879600-fa458680-9e1e-11eb-9e05-f0963d68ff0f.png)\r\n\r\nCan we still request them link for the dataset and make a PR? @lhoestq @yjernite ","I've contacted Martin (first author of the fquad paper) regarding a possible new url. Hopefully we can get one soon !","They now made a website to force people who want to use the dataset for commercial purposes to seek a commercial license from them ..."],"created_at":1618235396000,"updated_at":1621289826000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I'm trying to load the [fquad dataset](https:\/\/huggingface.co\/datasets\/fquad) by running: \r\n\r\n```Python\r\nfquad = load_dataset(\"fquad\")\r\n```\r\n\r\nwhich produces the following error:\r\n\r\n```\r\nUsing custom data configuration default\r\n\r\nDownloading and preparing dataset fquad\/default (download: 3.14 MiB, generated: 6.62 MiB, post-processed: Unknown size, total: 9.76 MiB) to \/root\/.cache\/huggingface\/datasets\/fquad\/default\/0.1.0\/778dc2c85813d05ddd0c17087294d5f8f24820752340958070876b677af9f061...\r\n\r\n---------------------------------------------------------------------------\r\n\r\nConnectionError Traceback (most recent call last)\r\n\r\n in ()\r\n----> 1 fquad = load_dataset(\"fquad\")\r\n\r\n11 frames\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/utils\/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)\r\n 614 raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\n 615 _raise_if_offline_mode_is_enabled(f\"Tried to reach {url}\")\r\n--> 616 raise ConnectionError(\"Couldn't reach {}\".format(url))\r\n 617 \r\n 618 # Try a second time\r\n\r\nConnectionError: Couldn't reach https:\/\/storage.googleapis.com\/illuin\/fquad\/train.json.zip\r\n```\r\n\r\nDoes anyone know why that is and how to fix it? 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2212\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2211","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2211\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2211\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2211\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2211","id":855988410,"node_id":"MDU6SXNzdWU4NTU5ODg0MTA=","number":2211,"title":"Getting checksum error when trying to load lc_quad dataset","user":{"login":"hanss0n","id":21348833,"node_id":"MDQ6VXNlcjIxMzQ4ODMz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/21348833?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hanss0n","html_url":"https:\/\/github.com\/hanss0n","followers_url":"https:\/\/api.github.com\/users\/hanss0n\/followers","following_url":"https:\/\/api.github.com\/users\/hanss0n\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hanss0n\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hanss0n\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hanss0n\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hanss0n\/orgs","repos_url":"https:\/\/api.github.com\/users\/hanss0n\/repos","events_url":"https:\/\/api.github.com\/users\/hanss0n\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hanss0n\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi,\r\n\r\nI've already opened a PR with the fix. If you are in a hurry, just build the project from source and run:\r\n```bash\r\ndatasets-cli test datasets\/lc_quad --save_infos --all_configs --ignore_verifications\r\n```\r\n\r\n","Ah sorry, I tried searching but couldn't find any related PR. \r\n\r\nThank you! 
"],"created_at":1618234738000,"updated_at":1618407745000,"closed_at":1618407745000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I'm having issues loading the [lc_quad](https:\/\/huggingface.co\/datasets\/fquad) dataset by running:\r\n\r\n```Python\r\nlc_quad = load_dataset(\"lc_quad\")\r\n```\r\n\r\nwhich is giving me the following error:\r\n\r\n``` \r\nUsing custom data configuration default\r\n\r\nDownloading and preparing dataset lc_quad\/default (download: 3.69 MiB, generated: 19.77 MiB, post-processed: Unknown size, total: 23.46 MiB) to \/root\/.cache\/huggingface\/datasets\/lc_quad\/default\/2.0.0\/5a98fe174603f5dec6df07edf1c2b4d2317210d2ad61f5a393839bca4d64e5a7...\r\n\r\n---------------------------------------------------------------------------\r\n\r\nNonMatchingChecksumError Traceback (most recent call last)\r\n\r\n in ()\r\n----> 1 lc_quad = load_dataset(\"lc_quad\")\r\n\r\n3 frames\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/utils\/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)\r\n 37 if len(bad_urls) > 0:\r\n 38 error_msg = \"Checksums didn't match\" + for_verification_name + \":\\n\"\r\n---> 39 raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\n 40 logger.info(\"All the checksums matched successfully\" + for_verification_name)\r\n 41 \r\n\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https:\/\/github.com\/AskNowQA\/LC-QuAD2.0\/archive\/master.zip']\r\n```\r\n\r\nDoes anyone know why this could be and how I fix it? ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2211\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2210","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2210\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2210\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2210\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2210","id":855709400,"node_id":"MDU6SXNzdWU4NTU3MDk0MDA=","number":2210,"title":"dataloading slow when using HUGE dataset","user":{"login":"hwijeen","id":29157715,"node_id":"MDQ6VXNlcjI5MTU3NzE1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29157715?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hwijeen","html_url":"https:\/\/github.com\/hwijeen","followers_url":"https:\/\/api.github.com\/users\/hwijeen\/followers","following_url":"https:\/\/api.github.com\/users\/hwijeen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hwijeen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hwijeen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hwijeen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hwijeen\/orgs","repos_url":"https:\/\/api.github.com\/users\/hwijeen\/repos","events_url":"https:\/\/api.github.com\/users\/hwijeen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hwijeen\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! 
Yes this is an issue with `datasets<=1.5.0`\r\nThis issue has been fixed by #2122 , we'll do a new release soon :)\r\nFor now you can test it on the `master` branch.","Hi, thank you for your answer. I did not realize that my issue stems from the same problem. "],"created_at":1618216382000,"updated_at":1618279385000,"closed_at":1618279385000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\n\r\nWhen I use datasets with 600GB data, the dataloading speed increases significantly. \r\nI am experimenting with two datasets, and one is about 60GB and the other 600GB.\r\nSimply speaking, my code uses `datasets.set_format(\"torch\")` function and let pytorch-lightning handle ddp training.\r\nWhen looking at the pytorch-lightning supported profile of two different runs, I see that fetching a batch(`get_train_batch`) consumes an unreasonable amount of time when data is large. What could be the cause?\r\n\r\n* 60GB data\r\n```\r\nAction \t| Mean duration (s)\t|Num calls \t| Total time (s) \t| Percentage % \t|\r\n------------------------------------------------------------------------------------------------------------------------------------\r\nTotal \t| - \t|_ \t| 200.33 \t| 100 % \t|\r\n------------------------------------------------------------------------------------------------------------------------------------\r\nrun_training_epoch \t| 71.994 \t|1 \t| 71.994 \t| 35.937 \t|\r\nrun_training_batch \t| 0.64373 \t|100 \t| 64.373 \t| 32.133 \t|\r\noptimizer_step_and_closure_0 \t| 0.64322 \t|100 \t| 64.322 \t| 32.108 \t|\r\ntraining_step_and_backward \t| 0.61004 \t|100 \t| 61.004 \t| 30.452 \t|\r\nmodel_backward \t| 0.37552 \t|100 \t| 37.552 \t| 18.745 \t|\r\nmodel_forward \t| 0.22813 \t|100 \t| 22.813 \t| 11.387 \t|\r\ntraining_step \t| 0.22759 \t|100 \t| 22.759 \t| 11.361 \t|\r\nget_train_batch \t| 0.066385 \t|100 \t| 6.6385 \t| 3.3138 \t|\r\n```\r\n\r\n* 600GB data\r\n```\r\nAction \t| Mean duration (s)\t|Num calls \t| Total time (s) \t| Percentage % \t|\r\n------------------------------------------------------------------------------------------------------------------------------------\r\nTotal \t| - \t|_ \t| 3285.6 \t| 100 % \t|\r\n------------------------------------------------------------------------------------------------------------------------------------\r\nrun_training_epoch \t| 1397.9 \t|1 \t| 1397.9 \t| 42.546 \t|\r\nrun_training_batch \t| 7.2596 \t|100 \t| 725.96 \t| 22.095 \t|\r\noptimizer_step_and_closure_0 \t| 7.2589 \t|100 \t| 725.89 \t| 22.093 \t|\r\ntraining_step_and_backward \t| 7.223 \t|100 \t| 722.3 \t| 21.984 \t|\r\nmodel_backward \t| 6.9662 \t|100 \t| 696.62 \t| 21.202 \t|\r\nget_train_batch \t| 6.322 \t|100 \t| 632.2 \t| 19.241 \t|\r\nmodel_forward \t| 0.24902 \t|100 \t| 24.902 \t| 0.75789 \t|\r\ntraining_step \t| 0.2485 \t|100 \t| 24.85 \t| 0.75633 \t|\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2210\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2209","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2209\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2209\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2209\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2209","id":855638232,"node_id":"MDExOlB1bGxSZXF1ZXN0NjEzMzQwMTI2","number":2209,"title":"Add code of conduct to the project","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1618211774000,"updated_at":1618250152000,"closed_at":1618250152000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2209","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2209","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2209.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2209.patch"},"body":"Add code of conduct to the project and link it from README and CONTRIBUTING.\r\n\r\nThis was already done in `transformers`.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2209\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2208","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2208\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2208\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2208\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2208","id":855343835,"node_id":"MDExOlB1bGxSZXF1ZXN0NjEzMTAxMzMw","number":2208,"title":"Remove Python2 
leftovers","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on master"],"created_at":1618157283000,"updated_at":1618437936000,"closed_at":1618407651000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2208","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2208","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2208.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2208.patch"},"body":"This PR removes Python2 leftovers since this project aims for Python3.6+ (and as of 2020 Python2 is no longer officially supported)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2208\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2207","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2207\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2207\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2207\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2207","id":855267383,"node_id":"MDU6SXNzdWU4NTUyNjczODM=","number":2207,"title":"making labels consistent across the 
datasets","user":{"login":"dorost1234","id":79165106,"node_id":"MDQ6VXNlcjc5MTY1MTA2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/79165106?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dorost1234","html_url":"https:\/\/github.com\/dorost1234","followers_url":"https:\/\/api.github.com\/users\/dorost1234\/followers","following_url":"https:\/\/api.github.com\/users\/dorost1234\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dorost1234\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dorost1234\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dorost1234\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dorost1234\/orgs","repos_url":"https:\/\/api.github.com\/users\/dorost1234\/repos","events_url":"https:\/\/api.github.com\/users\/dorost1234\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dorost1234\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! The ClassLabel feature type encodes the labels as integers.\r\nThe integer corresponds to the index of the label name in the `names` list of the ClassLabel.\r\nHere that means that the labels are 'entailment' (0), 'neutral' (1), 'contradiction' (2).\r\n\r\nYou can get the label names back by using `a.features['label'].int2str(i)`.\r\n"],"created_at":1618135436000,"updated_at":1618408920000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nFor accessing the labels one can type \r\n```\r\n>>> a.features['label']\r\nClassLabel(num_classes=3, names=['entailment', 'neutral', 'contradiction'], names_file=None, id=None)\r\n```\r\nThe labels however are not consistent with the actual labels sometimes, for instance in case of XNLI, the actual labels are 0,1,2, but if one try to access as above they are entailment, neutral,contradiction,\r\nit would be great to have the labels consistent.\r\n\r\nthanks \r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2207\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2206","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2206\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2206\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2206\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2206","id":855252415,"node_id":"MDU6SXNzdWU4NTUyNTI0MTU=","number":2206,"title":"Got pyarrow error when loading a dataset while adding special tokens into the 
tokenizer","user":{"login":"yana-xuyan","id":38536635,"node_id":"MDQ6VXNlcjM4NTM2NjM1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38536635?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yana-xuyan","html_url":"https:\/\/github.com\/yana-xuyan","followers_url":"https:\/\/api.github.com\/users\/yana-xuyan\/followers","following_url":"https:\/\/api.github.com\/users\/yana-xuyan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yana-xuyan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yana-xuyan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yana-xuyan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yana-xuyan\/orgs","repos_url":"https:\/\/api.github.com\/users\/yana-xuyan\/repos","events_url":"https:\/\/api.github.com\/users\/yana-xuyan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yana-xuyan\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi,\r\n\r\nthe output of the tokenizers is treated specially in the lib to optimize the dataset size (see the code [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/arrow_writer.py#L138-L141)). It looks like that one of the values in a dictionary returned by the tokenizer is out of the assumed range.\r\nCan you please provide a minimal reproducible example for more help?","Hi @yana-xuyan, thanks for reporting.\r\n\r\nAs clearly @mariosasko explained, `datasets` performs some optimizations in order to reduce the size of the dataset cache files. And one of them is storing the field `special_tokens_mask` as `int8`, which means that this field can only contain integers between `-128` to `127`. As your message error states, one of the values of this field is `50259`, and therefore it cannot be stored as an `int8`.\r\n\r\nMaybe we could implement a way to disable this optimization and allow using any integer value; although the size of the cache files would be much larger."],"created_at":1618130409000,"updated_at":1618380386000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I added five more special tokens into the GPT2 tokenizer. 
But after that, when I try to pre-process the data using my previous code, I got an error shown below:\r\n\r\nTraceback (most recent call last):\r\n File \"\/home\/xuyan\/anaconda3\/envs\/convqa\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 1687, in _map_single\r\n writer.write(example)\r\n File \"\/home\/xuyan\/anaconda3\/envs\/convqa\/lib\/python3.7\/site-packages\/datasets\/arrow_writer.py\", line 296, in write\r\n self.write_on_file()\r\n File \"\/home\/xuyan\/anaconda3\/envs\/convqa\/lib\/python3.7\/site-packages\/datasets\/arrow_writer.py\", line 270, in write_on_file\r\n pa_array = pa.array(typed_sequence)\r\n File \"pyarrow\/array.pxi\", line 222, in pyarrow.lib.array\r\n File \"pyarrow\/array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n File \"\/home\/xuyan\/anaconda3\/envs\/convqa\/lib\/python3.7\/site-packages\/datasets\/arrow_writer.py\", line 108, in __arrow_array__\r\n out = out.cast(pa.list_(self.optimized_int_type))\r\n File \"pyarrow\/array.pxi\", line 810, in pyarrow.lib.Array.cast\r\n File \"\/home\/xuyan\/anaconda3\/envs\/convqa\/lib\/python3.7\/site-packages\/pyarrow\/compute.py\", line 281, in cast\r\n return call_function(\"cast\", [arr], options)\r\n File \"pyarrow\/_compute.pyx\", line 465, in pyarrow._compute.call_function\r\n File \"pyarrow\/_compute.pyx\", line 294, in pyarrow._compute.Function.call\r\n File \"pyarrow\/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow\/error.pxi\", line 84, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Integer value 50259 not in range: -128 to 127\r\n\r\nDo you have any idea about it?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2206\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2205","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2205\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2205\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2205\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2205","id":855207605,"node_id":"MDExOlB1bGxSZXF1ZXN0NjEzMDAwMzYw","number":2205,"title":"Updating citation information on LinCE 
readme","user":{"login":"gaguilar","id":5833357,"node_id":"MDQ6VXNlcjU4MzMzNTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5833357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gaguilar","html_url":"https:\/\/github.com\/gaguilar","followers_url":"https:\/\/api.github.com\/users\/gaguilar\/followers","following_url":"https:\/\/api.github.com\/users\/gaguilar\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gaguilar\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gaguilar\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gaguilar\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gaguilar\/orgs","repos_url":"https:\/\/api.github.com\/users\/gaguilar\/repos","events_url":"https:\/\/api.github.com\/users\/gaguilar\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gaguilar\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1618111085000,"updated_at":1618250014000,"closed_at":1618250014000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2205","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2205","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2205.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2205.patch"},"body":"Hi!\r\n\r\nI just updated the citation information in this PR. It had an additional bibtex from one of the datasets used in LinCE and then the LinCE bibtex. I removed the former and added a link that shows the full list of citations for each dataset. 
\r\n\r\nThanks!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2205\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2204","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2204\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2204\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2204\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2204","id":855144431,"node_id":"MDExOlB1bGxSZXF1ZXN0NjEyOTU1MzM2","number":2204,"title":"Add configurable options to `seqeval` metric","user":{"login":"marrodion","id":44571847,"node_id":"MDQ6VXNlcjQ0NTcxODQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/44571847?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/marrodion","html_url":"https:\/\/github.com\/marrodion","followers_url":"https:\/\/api.github.com\/users\/marrodion\/followers","following_url":"https:\/\/api.github.com\/users\/marrodion\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/marrodion\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/marrodion\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/marrodion\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/marrodion\/orgs","repos_url":"https:\/\/api.github.com\/users\/marrodion\/repos","events_url":"https:\/\/api.github.com\/users\/marrodion\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/marrodion\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1618084699000,"updated_at":1618494586000,"closed_at":1618494586000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2204","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2204","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2204.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2204.patch"},"body":"Fixes #2148\r\n\r\nAdds options to use strict mode, different schemes of evaluation, sample weight and adjust zero_division behavior, if encountered.\r\n\r\n`seqeval` provides schemes as objects, hence dynamic import from string, to avoid making the user do the import (thanks to @albertvillanova for the `importlib` idea).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2204\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2203","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2203\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2203\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2203\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2203","id":855053595,"node_id":"MDExOlB1bGxSZXF1ZXN0NjEyODg4MzA5","number":2203,"title":"updated banking77 train and test 
data","user":{"login":"hsali","id":6765330,"node_id":"MDQ6VXNlcjY3NjUzMzA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6765330?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hsali","html_url":"https:\/\/github.com\/hsali","followers_url":"https:\/\/api.github.com\/users\/hsali\/followers","following_url":"https:\/\/api.github.com\/users\/hsali\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hsali\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hsali\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hsali\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hsali\/orgs","repos_url":"https:\/\/api.github.com\/users\/hsali\/repos","events_url":"https:\/\/api.github.com\/users\/hsali\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hsali\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Can you add a description regarding this PR ? Why do you think we need to update the dummy data used to test the `banking77` dataset loading script ?","Closing for inactivity. Feel free to re-open if you want to push this change"],"created_at":1618056610000,"updated_at":1619188419000,"closed_at":1619188419000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2203","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2203","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2203.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2203.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2203\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2202","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2202\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2202\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2202\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2202","id":854501109,"node_id":"MDExOlB1bGxSZXF1ZXN0NjEyNDM2ODMx","number":2202,"title":"Add classes GenerateMode, DownloadConfig and Version to the 
documentation","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1617973099000,"updated_at":1618250280000,"closed_at":1618250279000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2202","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2202","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2202.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2202.patch"},"body":"Add documentation for classes `GenerateMode`, `DownloadConfig` and `Version`.\r\n\r\nUpdate the docstring of `load_dataset` to create cross-reference links to the classes.\r\n\r\nRelated to #2187.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2202\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2201","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2201\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2201\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2201\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2201","id":854499563,"node_id":"MDExOlB1bGxSZXF1ZXN0NjEyNDM1NTE3","number":2201,"title":"Fix ArrowWriter overwriting features in 
ArrowBasedBuilder","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1617972979000,"updated_at":1618234337000,"closed_at":1618234336000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2201","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2201","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2201.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2201.patch"},"body":"This should fix the issues with CSV loading experienced in #2153 and #2200.\r\n\r\nThe CSV builder is an ArrowBasedBuilder that had an issue with its ArrowWriter used to write the arrow file from the csv data.\r\nThe writer wasn't initialized with the features passed by the user. 
Therefore the writer was inferring the features from the arrow data, discarding the features passed by the user.\r\n\r\nI fixed that and I updated the tests","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2201\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2200","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2200\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2200\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2200\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2200","id":854449656,"node_id":"MDU6SXNzdWU4NTQ0NDk2NTY=","number":2200,"title":"_prepare_split will overwrite DatasetBuilder.info.features","user":{"login":"Gforky","id":4157614,"node_id":"MDQ6VXNlcjQxNTc2MTQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4157614?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Gforky","html_url":"https:\/\/github.com\/Gforky","followers_url":"https:\/\/api.github.com\/users\/Gforky\/followers","following_url":"https:\/\/api.github.com\/users\/Gforky\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Gforky\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Gforky\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Gforky\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Gforky\/orgs","repos_url":"https:\/\/api.github.com\/users\/Gforky\/repos","events_url":"https:\/\/api.github.com\/users\/Gforky\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Gforky\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subs
criptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi ! This might be related to #2153 \r\n\r\nYou're right the ArrowWriter should be initialized with `features=self.info.features` ! Good catch\r\nI'm opening a PR to fix this and also to figure out how it was not caught in the tests\r\n\r\nEDIT: opened #2201","> Hi ! This might be related to #2153\r\n> \r\n> You're right the ArrowWriter should be initialized with `features=self.info.features` ! Good catch\r\n> I'm opening a PR to fix this and also to figure out how it was not caught in the tests\r\n> \r\n> EDIT: opened #2201\r\n\r\nGlad to hear that! Thank you for your fix, I'm new to huggingface, it's a fantastic project \ud83d\ude01"],"created_at":1617968833000,"updated_at":1622803055000,"closed_at":1622803055000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi, here is my issue:\r\nI initialized a Csv datasetbuilder with specific features:\r\n```\r\ndef get_dataset_features(data_args):\r\n features = {}\r\n if data_args.text_features:\r\n features.update({text_feature: hf_features.Value(\"string\") for text_feature in data_args.text_features.strip().split(\",\")})\r\n if data_args.num_features:\r\n features.update({text_feature: hf_features.Value(\"float32\") for text_feature in data_args.num_features.strip().split(\",\")})\r\n if data_args.label_classes:\r\n features[\"label\"] = hf_features.ClassLabel(names=data_args.label_classes.strip().split(\",\"))\r\n else:\r\n features[\"label\"] = hf_features.Value(\"float32\")\r\n return hf_features.Features(features)\r\n\r\ndatasets = load_dataset(extension,\r\n data_files=data_files,\r\n sep=data_args.delimiter,\r\n header=data_args.header,\r\n column_names=data_args.column_names.split(\",\") if data_args.column_names else None,\r\n features=get_dataset_features(data_args=data_args))\r\n```\r\nThe `features` is printout as below before `builder_instance.as_dataset` is called:\r\n```\r\n{'label': ClassLabel(num_classes=2, names=['unacceptable', 'acceptable'], names_file=None, id=None), 'notated': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'src_code': Value(dtype='string', id=None)}\r\n````\r\n\r\nBut after the `builder_instance.as_dataset` is called for Csv dataset builder, the `features` is changed to:\r\n```\r\n{'label': Value(dtype='int64', id=None), 'notated': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'src_code': Value(dtype='string', id=None)}\r\n```\r\n\r\nAfter digged into the code, I releazed that in `ArrowBasedBuilder._prepare_split`, the DatasetBuilder's info's features will be overwrited by `ArrowWriter`'s `_features`. 
\r\nBut `ArrowWriter` is initailized without passing `features`.\r\nSo my concern is:\r\nIt's this overwrite must be done, or, should it be an option to pass features in `_prepare_split` function?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2200\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2199","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2199\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2199\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2199\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2199","id":854417318,"node_id":"MDExOlB1bGxSZXF1ZXN0NjEyMzY0ODU3","number":2199,"title":"Fix backward compatibility in Dataset.load_from_disk","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq, could you please check if this makes sense? Thanks.","What about using `_indices_data_files` field in save_to_disk instead of `_indices_files` ?\r\nThis way future datasets can also be reloaded from older versions of the lib\r\n\r\n`_indices_files` was introduced in a recent PR and was not released","Yes, I have seen it is not released yet...\r\n\r\nYou are right! It was your awesome PR on Tables which renamed this. If there is no particular reason for this renaming, yes, we could switch it back to the previous `_indices_data_files`. 
;)"],"created_at":1617966070000,"updated_at":1617983825000,"closed_at":1617983825000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2199","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2199","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2199.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2199.patch"},"body":"Fix backward compatibility when loading from disk an old dataset saved to disk with indices using key \"_indices_data_files\".\r\n\r\nRelated to #2195.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2199\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2198","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2198\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2198\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2198\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2198","id":854357481,"node_id":"MDExOlB1bGxSZXF1ZXN0NjEyMzE0MTIz","number":2198,"title":"added file_permission in load_dataset","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["From offline discussions: we want to make the permissions handling consistent with `transformers`. However from discussion in https:\/\/github.com\/huggingface\/transformers\/pull\/11119 it looks like it might not be a good solution to provide this argument. Users should use umask for now, and we'll see how things evolve.\r\n\r\n@bhavitvyamalik I'm closing the PR for now if you don't mind"],"created_at":1617961146000,"updated_at":1618582306000,"closed_at":1618582306000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2198","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2198","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2198.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2198.patch"},"body":"As discussed in #2065 I've added `file_permission` argument in `load_dataset`. 
\r\n\r\nAdded mainly 2 things here:\r\n1) Permission of downloaded datasets when converted to .arrow files can be changed with argument `file_permission` argument in `load_dataset` (default is 0o644 only)\r\n2) Incase the user uses `map` later on to generate another cache file of dataset, it ensures the permissions of newly generated file are similar to that of` *-train.arrow` file inside cache_dir for that dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2198\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2197","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2197\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2197\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2197\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2197","id":854356559,"node_id":"MDExOlB1bGxSZXF1ZXN0NjEyMzEzMzQw","number":2197,"title":"fix missing indices_files in load_form_disk","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1617961077000,"updated_at":1617962080000,"closed_at":1617962079000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2197","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2197","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2197.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2197.patch"},"body":"This should fix #2195\r\n\r\n`load_from_disk` was failing if there was no \"_indices_files\" field in state.json. 
This can happen if the dataset has no indices mapping","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2197\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2196","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2196\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2196\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2196\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2196","id":854126114,"node_id":"MDU6SXNzdWU4NTQxMjYxMTQ=","number":2196,"title":"`load_dataset` caches two arrow files?","user":{"login":"hwijeen","id":29157715,"node_id":"MDQ6VXNlcjI5MTU3NzE1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29157715?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hwijeen","html_url":"https:\/\/github.com\/hwijeen","followers_url":"https:\/\/api.github.com\/users\/hwijeen\/followers","following_url":"https:\/\/api.github.com\/users\/hwijeen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hwijeen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hwijeen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hwijeen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hwijeen\/orgs","repos_url":"https:\/\/api.github.com\/users\/hwijeen\/repos","events_url":"https:\/\/api.github.com\/users\/hwijeen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hwijeen\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892912,"node_id":"MDU6TGFiZWwxOTM1ODkyOTEy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/question","name":"question","color":"d876e3","default":true,"description":"Further information is requested"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Files that starts with `cache-*` are cached computation files, i.e. they are the cached results of map\/filter\/cast\/etc. operations. For example if you used `map` on your dataset to transform it, then the resulting dataset is going to be stored and cached in a `cache-*` file. These files are used to avoid having to load the dataset in RAM, even after many transforms","Thanks @lhoestq! Hmm.. that's strange because I specifically turned off auto caching, and saved mapped result, using `save_to_disk`, to another location. At this location, the following file is created:`355G\tcache-ed205e500a7dc44c.arrow`\r\n\r\nTo my observation, both `load_dataset` and `map` creates `cache-*` files, and I wonder what the `cache-*` file from `load_dataset` is for (as I believe the same information is stored in `json-train.arrow`.","This is a wrong report -- `cache-*` files are created only my `map`, not by `load_dataset`. 
"],"created_at":1617940159000,"updated_at":1618205129000,"closed_at":1618205129000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\n\r\nI am using datasets to load large json file of 587G.\r\nI checked the cached folder and found that there are two arrow files created:\r\n* `cache-ed205e500a7dc44c.arrow` - 355G\r\n* `json-train.arrow` - 582G\r\n\r\nWhy is the first file created?\r\nIf I delete it, would I still be able to `load_from_disk`?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2196\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2195","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2195\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2195\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2195\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2195","id":854070194,"node_id":"MDU6SXNzdWU4NTQwNzAxOTQ=","number":2195,"title":"KeyError: '_indices_files' in `arrow_dataset.py`","user":{"login":"samsontmr","id":15007950,"node_id":"MDQ6VXNlcjE1MDA3OTUw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15007950?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/samsontmr","html_url":"https:\/\/github.com\/samsontmr","followers_url":"https:\/\/api.github.com\/users\/samsontmr\/followers","following_url":"https:\/\/api.github.com\/users\/samsontmr\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/samsontmr\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/samsontmr\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/samsontmr\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/samsontmr\/orgs","repos_url":"https:\/\/api.github.com\/users\/samsontmr\/repos","events_url":"https:\/\/api.github.com\/users\/samsontmr\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/samsontmr\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting @samsontmr.\r\n\r\nIt seems a backward compatibility issue...","Thanks @samsontmr this should be fixed on master now\r\n\r\nFeel free to reopen if you're still having issues"],"created_at":1617932232000,"updated_at":1617962109000,"closed_at":1617962079000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"After pulling the latest master, I'm getting a crash when `load_from_disk` tries to load my local dataset.\r\n\r\nTrace:\r\n```\r\nTraceback (most recent call last):\r\n File \"load_data.py\", line 11, in \r\n dataset = load_from_disk(SRC)\r\n File \"\/opt\/conda\/envs\/py38\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 784, in load_from_disk\r\n return DatasetDict.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)\r\n File \"\/opt\/conda\/envs\/py38\/lib\/python3.8\/site-packages\/datasets\/dataset_dict.py\", line 692, in load_from_disk\r\n dataset_dict[k] = 
Dataset.load_from_disk(dataset_dict_split_path, fs, keep_in_memory=keep_in_memory)\r\n File \"\/opt\/conda\/envs\/py38\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 634, in load_from_disk\r\n if state[\"_indices_files\"]:\r\nKeyError: '_indices_files'\r\n```\r\n\r\nI believe this is the line causing the error since there may not be a `_indices_files` key in the older versions:\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/b70141e3c5149430951773aaa0155555c5fb3e76\/src\/datasets\/arrow_dataset.py#L634\r\n\r\nMay I suggest using `state.get()` instead of directly indexing the dictionary?\r\n\r\n@lhoestq ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2195\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2194","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2194\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2194\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2194\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2194","id":853909452,"node_id":"MDU6SXNzdWU4NTM5MDk0NTI=","number":2194,"title":"py3.7: TypeError: can't pickle _LazyModule objects","user":{"login":"stas00","id":10676103,"node_id":"MDQ6VXNlcjEwNjc2MTAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10676103?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stas00","html_url":"https:\/\/github.com\/stas00","followers_url":"https:\/\/api.github.com\/users\/stas00\/followers","following_url":"https:\/\/api.github.com\/users\/stas00\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stas00\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stas00\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stas00\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stas00\/orgs","repos_url":"https:\/\/api.github.com\/users\/stas00\/repos","events_url":"https:\/\/api.github.com\/users\/stas00\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stas00\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["\r\nThis wasn't a `datasets` problem, but `transformers`' and it was solved here https:\/\/github.com\/huggingface\/transformers\/pull\/11168\r\n"],"created_at":1617915768000,"updated_at":1617987410000,"closed_at":1617933177000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"While this works fine with py3.8, under py3.7, with a totally new conda env and transformers install:\r\n\r\n```\r\ngit clone https:\/\/github.com\/huggingface\/transformers\r\ncd transformers\r\npip install -e .[testing]\r\n\r\nexport BS=1; rm -rf \/tmp\/test-clm; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python \\\r\nexamples\/language-modeling\/run_clm.py --model_name_or_path distilgpt2 --dataset_name wikitext \\\r\n--dataset_config_name wikitext-2-raw-v1 --do_train --max_train_samples 1 \\\r\n--per_device_train_batch_size $BS --output_dir \/tmp\/test-clm --block_size 128 --logging_steps 1 \\\r\n--fp16\r\n```\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"examples\/language-modeling\/run_clm.py\", line 453, in \r\n 
main()\r\n File \"examples\/language-modeling\/run_clm.py\", line 336, in main\r\n load_from_cache_file=not data_args.overwrite_cache,\r\n File \"\/home\/stas\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/dataset_dict.py\", line 303, in map\r\n for k, dataset in self.items()\r\n File \"\/home\/stas\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/dataset_dict.py\", line 303, in \r\n for k, dataset in self.items()\r\n File \"\/home\/stas\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 1259, in map\r\n update_data=update_data,\r\n File \"\/home\/stas\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 157, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"\/home\/stas\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/fingerprint.py\", line 158, in wrapper\r\n self._fingerprint, transform, kwargs_for_fingerprint\r\n File \"\/home\/stas\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/fingerprint.py\", line 105, in update_fingerprint\r\n hasher.update(transform_args[key])\r\n File \"\/home\/stas\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/fingerprint.py\", line 57, in update\r\n self.m.update(self.hash(value).encode(\"utf-8\"))\r\n File \"\/home\/stas\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/fingerprint.py\", line 53, in hash\r\n return cls.hash_default(value)\r\n File \"\/home\/stas\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/fingerprint.py\", line 46, in hash_default\r\n return cls.hash_bytes(dumps(value))\r\n File \"\/home\/stas\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/utils\/py_utils.py\", line 389, in dumps\r\n dump(obj, file)\r\n File \"\/home\/stas\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/utils\/py_utils.py\", line 361, in dump\r\n Pickler(file, recurse=True).dump(obj)\r\n File \"\/home\/stas\/anaconda3\/lib\/python3.7\/site-packages\/dill\/_dill.py\", line 454, in dump\r\n StockPickler.dump(self, obj)\r\n File \"\/home\/stas\/anaconda3\/lib\/python3.7\/pickle.py\", line 437, in dump\r\n self.save(obj)\r\n File \"\/home\/stas\/anaconda3\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/home\/stas\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/utils\/py_utils.py\", line 556, in save_function\r\n obj=obj,\r\n File \"\/home\/stas\/anaconda3\/lib\/python3.7\/pickle.py\", line 638, in save_reduce\r\n save(args)\r\n File \"\/home\/stas\/anaconda3\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/home\/stas\/anaconda3\/lib\/python3.7\/pickle.py\", line 789, in save_tuple\r\n save(element)\r\n File \"\/home\/stas\/anaconda3\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/home\/stas\/anaconda3\/lib\/python3.7\/site-packages\/dill\/_dill.py\", line 941, in save_module_dict\r\n StockPickler.save_dict(pickler, obj)\r\n File \"\/home\/stas\/anaconda3\/lib\/python3.7\/pickle.py\", line 859, in save_dict\r\n self._batch_setitems(obj.items())\r\n File \"\/home\/stas\/anaconda3\/lib\/python3.7\/pickle.py\", line 885, in _batch_setitems\r\n save(v)\r\n File \"\/home\/stas\/anaconda3\/lib\/python3.7\/pickle.py\", line 524, in save\r\n rv = reduce(self.proto)\r\nTypeError: can't pickle _LazyModule objects\r\n```\r\n```\r\n$ python --version\r\nPython 3.7.4\r\n\r\n$ python -m torch.utils.collect_env\r\nCollecting environment 
information...\r\nPyTorch version: 1.8.0.dev20210110+cu110\r\nIs debug build: False\r\nCUDA used to build PyTorch: 11.0\r\nROCM used to build PyTorch: N\/A\r\n\r\nOS: Ubuntu 20.04.2 LTS (x86_64)\r\nGCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0\r\nClang version: 10.0.0-4ubuntu1 \r\nCMake version: version 3.16.3\r\n```\r\n\r\nThanks.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2194\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2193","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2193\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2193\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2193\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2193","id":853725707,"node_id":"MDU6SXNzdWU4NTM3MjU3MDc=","number":2193,"title":"Filtering\/mapping on one column is very slow","user":{"login":"norabelrose","id":39116809,"node_id":"MDQ6VXNlcjM5MTE2ODA5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/39116809?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/norabelrose","html_url":"https:\/\/github.com\/norabelrose","followers_url":"https:\/\/api.github.com\/users\/norabelrose\/followers","following_url":"https:\/\/api.github.com\/users\/norabelrose\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/norabelrose\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/norabelrose\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/norabelrose\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/norabelrose\/orgs","repos_url":"https:\/\/api.github.com\/users\/norabelrose\/repos","events_url":"https:\/\/api.github.com\/users\/norabelrose\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/norabelrose\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892912,"node_id":"MDU6TGFiZWwxOTM1ODkyOTEy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/question","name":"question","color":"d876e3","default":true,"description":"Further information is requested"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Yes we are working on making `filter` significantly faster. You can look at related PRs here: #2060 #2178 \r\n\r\nI think you can expect to have the fast version of `filter` available next week.\r\n\r\nWe'll make it only select one column, and we'll also make the overall filtering operation way faster by avoiding many arrow<->python conversions especially during writing.\r\n\r\nI'll let you know how it goes !","@lhoestq Thanks for the response\u2014 it's great to hear that we'll be getting a much faster `filter` method soon. However, my use case does also involve using `map` over a single column in order to pre-compute roughly uniformly sized batches, and right now that is also very slow. 
Is there any plan to make `map` faster for single column operations?\r\n\r\nIf that's not a priority for the maintainers right now, I could try my hand at adding the feature, but I can't guarantee I would do a good job given my lack of familiarity with pyarrow.","Currently the optimal setup for single-column computations is probably to do something like\r\n```python\r\nresult = dataset.map(f, input_columns=\"my_col\", remove_columns=dataset.column_names)\r\n```\r\nThis has two advantages:\r\n- input_columns=\"my_col\" allows to only read the column \"my_col\"\r\n- remove_columns=dataset.column_names makes `map` only keep the output of your function `f`, and it drops the other columns of the dataset instead of keeping them.\r\n\r\nLet me know if it improves speed on your side.\r\n\r\nYou can also get more speed by using `batched=True` and setting `num_proc=` for multiprocessing","Hi @lhoestq ,\r\n\r\nI'm hijacking this issue, because I'm currently trying to do the approach you recommend:\r\n\r\n> Currently the optimal setup for single-column computations is probably to do something like\r\n> \r\n> ```python\r\n> result = dataset.map(f, input_columns=\"my_col\", remove_columns=dataset.column_names)\r\n> ```\r\n\r\nHere is my code: (see edit, in which I added a simplified version\r\n\r\n```\r\nThis is the error:\r\n```bash\r\npyarrow.lib.ArrowInvalid: Column 1 named tokens expected length 8964 but got length 1000\r\n```\r\nI wonder why this error occurs, when I delete every column? Can you give me a hint?\r\n\r\n### Edit:\r\nI preprocessed my dataset before (using map with the features argument) and saved it to disk. May this be part of the error? I can iterate over the\r\ncomplete dataset and print every sample before calling map. There seems to be no other problem with the dataset.\r\n\r\nI tried to simplify the code that crashes:\r\n\r\n```python\r\n# works\r\nlog.debug(dataset.column_names)\r\nlog.debug(dataset)\r\nfor i, sample in enumerate(dataset):\r\n log.debug(i, sample)\r\n\r\n# crashes\r\ncounted_dataset = dataset.map(\r\n lambda x: {\"a\": list(range(20))},\r\n input_columns=column,\r\n remove_columns=dataset.column_names,\r\n load_from_cache_file=False,\r\n num_proc=num_workers,\r\n batched=True,\r\n)\r\n```\r\n\r\n```\r\npyarrow.lib.ArrowInvalid: Column 1 named tokens expected length 20 but got length 1000\r\n```\r\n\r\nEdit2: \r\n\r\nMay this be a problem with a schema I set when preprocessing the dataset before? I tried to add the `features` argument to the function and then I get a new error:\r\n\r\n```python\r\n# crashes\r\ncounted_dataset = dataset.map(\r\n lambda x: {\"a\": list(range(20))},\r\n input_columns=column,\r\n remove_columns=dataset.column_names,\r\n load_from_cache_file=False,\r\n num_proc=num_workers,\r\n batched=True,\r\n features=datasets.Features(\r\n {\r\n \"a\": datasets.Sequence(datasets.Value(\"int32\"))\r\n }\r\n )\r\n)\r\n```\r\n\r\n```\r\n File \"env\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 1704, in _map_single\r\n writer.write_batch(batch)\r\n File \"env\/lib\/python3.8\/site-packages\/datasets\/arrow_writer.py\", line 312, in write_batch\r\n col_type = schema.field(col).type if schema is not None else None\r\n File \"pyarrow\/types.pxi\", line 1341, in pyarrow.lib.Schema.field\r\nKeyError: 'Column tokens does not exist in schema'\r\n```","Hi ! 
Can you open a separate issue for that ?\r\nAlso if you could provide a google colab or a sample code to reproduce this issue that would be helpful.\r\nOn my side I was not able to reproduce this error.","@lhoestq Sorry I'm just responding now. I'm currently using your recommendation for the map on a single column, and I've gotten it to be fast enough to sort of work for my use case by just setting `num_proc=10`, although it's still quite slow. It's clear that it is still loading the entirety of each row into memory and then discarding everything except the selected column, instead of exploiting the columnar data format to only load the selected column.\r\n\r\nMy code is like this:\r\n```\r\n self.dataset = self.dataset.sort('num_tokens')\r\n batch_dataset = self.dataset.map(\r\n\tcompute_uniform_sized_batches,\r\n\tbatched=True, batch_size=10_000, num_proc=10, input_columns=['num_tokens'],\r\n\tremove_columns=get_columns_all_equal(self.dataset),\r\n\twith_indices=True,\r\n\tfn_kwargs=dict(max_size=tokens_per_batch)\r\n)\r\nself.batches = {\r\n\tname: list(zip(split['start'], split['length']))\r\n\tfor name, split in batch_dataset.items()\r\n}\r\n```\r\nI find that the processes with higher IDs take significantly longer to complete, presumably because the dataset is sorted by article length and they're loading the entire article text into memory, instead of just the 'num_tokens' column.\r\n\r\nI should note that my batching procedure would work best if I just used `batch_size=None` and loaded the whole column into memory at once, but I found that this was intolerably slow and gave me no progress information, so I'm using the less than ideal `batch_size=10_000`.","Hi @norabelrose ! I'm glad you managed to make this work on your side.\r\nRegarding memory usage, you can try to drop the columns that you don't want to use for your `map` for now.\r\n\r\nIn the future we'll try to find a way to not load unnecessary columns in memory in `map`. Currently the way it works is that it gets the batch as a python dict, then it updates it using the output of your mapping function, and finally it removes columns from `remove_columns`. Therefore for a moment some columns are loaded in memory even if you remove them or don't use them for your mapping function.\r\n\r\nIt would be nice to have a way to optimize memory for cases such as yours !","@lhoestq After looking through the source code, it looks like the following solution has at least some chance of working:\r\n- refactor `Dataset.map()` so that the `input_columns` parameter is implemented by using the `self.formatted_as()` context manager with `columns=input_columns`\r\n- change `Dataset._getitem()` so that it passes `self._data.drop(drop_columns)` to the `query_table()` function whenever `format_columns` is non-None and `output_all_columns` is False, instead of `self._data` itself","Looks like a great direction :)\r\nNote that `query_table` doesn't bring data into memory. 
Only `format_table` does.\r\nAlso the dataset may already have a format with `columns=` already defined so we would need to define the formatted `input_dataset` like:\r\n```python\r\n# before the `map` main for loop\r\ninput_columns = input_columns if input_columns is not None else self.column_names\r\nif not self._output_all_columns:\r\n columns = [col for col in input_columns if self._format_columns is None or col in self._format_columns]\r\n input_dataset = self.with_format(\r\n type=self._format_type,\r\n columns=columns\r\n )\r\nelse:\r\n # in this case we could find a way to filter both format_columns and unformatted columns eventually\r\n input_dataset = self\r\n# then input_dataset can be used in the main for loop of `map`\r\n```\r\n\r\nEDIT: oh and regarding streaming format versus file format for arrow, we plan to start using the file format #1933 at one point (though I'm not sure if it would improve performance)","Good to know about `query_table` not bringing anything into memory. I was under the impression that it did because a while back I looked at my `map` operation in pdb and it looked like it was spending forever in line 93 of formatting.py, `return pa.concat_tables(....)`, although that was before the `fast_slice` interpolation search was implemented, so it may have had more to do with the slow ChunkedArray slice implementation than anything else.\r\n\r\nIf `query_table` is I\/O free then the fix may be as simple as just adding this to line 1779 of arrow_dataset.py:\r\n```python\r\n# Only load the columns we actually need\r\nif input_columns:\r\n stack.enter_context(self.formatted_as(\r\n self._format_type,\r\n columns=input_columns,\r\n output_all_columns=False,\r\n **self._format_kwargs\r\n ))\r\n```\r\nIt's not clear to me why the `[col for col in input_columns if self._format_columns is None or col in self._format_columns]` check would be necessary\u2014 it seems like either `input_columns` should simply temporarily override the `_format_columns` within the `map` operation, or we should throw an error if there are any conflicts. Currently it doesn't look like this case is checked for at all within `map`, but maybe I'm just missing it.","`query_table` simply slices\/concatenates parts of the table. The actual data inside the table is not brought in memory.\r\nAlso I'm more in favor of declaring `input_dataset = self.with_format(...)` since `formatted_as` may update the dataset fingerprint of `self`, which is not expected when someone runs `map`.\r\n\r\n> It's not clear to me why the [col for col in input_columns if self._format_columns is None or col in self._format_columns] check would be necessary\u2014 it seems like either input_columns should simply temporarily override the _format_columns within the map operation, or we should throw an error if there are any conflicts. Currently it doesn't look like this case is checked for at all within map, but maybe I'm just missing it.\r\n\r\nActually yes we can just use input_columns. And we do need to add a check to make sure there are not conflicts or this could lead to confusing errors.","That sounds good to me! I just submitted a PR (#2246) implementing your approach. I also changed how `_query_table` handles Iterable keys since it still seemed like `pa.concat_tables` was taking a long time to create the table for each batch. 
Now my whole `map()` operation takes 1 min 46 seconds where it used to take somewhere on the order of 10 minutes."],"created_at":1617905774000,"updated_at":1619453639000,"closed_at":1619453639000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I'm currently using the `wikipedia` dataset\u2014 I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.\r\n\r\nI want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_columns=['num_tokens']`, it seems that the entirety of each row is loaded into memory, which makes the operation take much longer than it should. Indeed, `filter` currently just calls `map`, and I found that in `_map_single` on lines 1690-1704 of `arrow_dataset.py`, the method is just grabbing slices of _all the rows_ of the dataset and then passing only the specified columns to the map function. It seems that, when the user passes a value for `input_columns`, the `map` function should create a temporary pyarrow table by selecting just those columns, and then get slices from that table. Or something like that\u2014 I'm not very familiar with the pyarrow API.\r\n\r\nI know that in the meantime I can sort of get around this by simply only returning the rows that match my filter criterion from the tokenizing function I pass to `map()`, but I actually _also_ want to map on just the `num_tokens` column in order to compute batches with a roughly uniform number of tokens per batch. I would also ideally like to be able to change my minimum and maximum article lengths without having to re-tokenize the entire dataset.\r\n\r\nPS: This is definitely not a \"dataset request.\" I'm realizing that I don't actually know how to remove labels from my own issues on other people's repos, if that is even possible.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2193\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2192","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2192\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2192\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2192\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2192","id":853547910,"node_id":"MDExOlB1bGxSZXF1ZXN0NjExNjE5NTY0","number":2192,"title":"Fix typo in huggingface 
hub","user":{"login":"LysandreJik","id":30755778,"node_id":"MDQ6VXNlcjMwNzU1Nzc4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/30755778?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/LysandreJik","html_url":"https:\/\/github.com\/LysandreJik","followers_url":"https:\/\/api.github.com\/users\/LysandreJik\/followers","following_url":"https:\/\/api.github.com\/users\/LysandreJik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/LysandreJik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/LysandreJik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/LysandreJik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/LysandreJik\/orgs","repos_url":"https:\/\/api.github.com\/users\/LysandreJik\/repos","events_url":"https:\/\/api.github.com\/users\/LysandreJik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/LysandreJik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1617892944000,"updated_at":1617896861000,"closed_at":1617896860000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2192","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2192","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2192.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2192.patch"},"body":"pip knows how to resolve to `huggingface_hub`, but conda doesn't!\r\n\r\nThe `packaging` dependency is also required for the build to complete.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2192\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2191","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2191\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2191\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2191\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2191","id":853364204,"node_id":"MDExOlB1bGxSZXF1ZXN0NjExNDY1Nzc0","number":2191,"title":"Refactorize tests to use Dataset as context 
manager","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":2851292821,"node_id":"MDU6TGFiZWwyODUxMjkyODIx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/refactoring","name":"refactoring","color":"B67A40","default":false,"description":"Restructuring existing code without changing its external behavior"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/1","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/1","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/1\/labels","id":6644198,"node_id":"MDk6TWlsZXN0b25lNjY0NDE5OA==","number":1,"title":"1.6","description":"Next minor release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":4,"state":"closed","created_at":1617973671000,"updated_at":1618937446000,"due_on":1618556400000,"closed_at":1618937446000},"comments":["I find very interesting that idea of using a fixture instead!\r\n\r\nLet me rework a little bit this PR, @lhoestq.","@lhoestq, as this is a big refactoring, I had many problems to solve the conflicts with the master branch...\r\n\r\nTherefore, I think it is better to merge this as it is, and then to make other PRs with additional refactorings, before I get conflicts again with the master branch...","There are still some conflicts that prevent merging.\r\nMoreover I noticed that you added one fixture per method of the Dataset object 
to be mocked. The code of all these fixtures is pretty much the same, feel free to factorize them into one fixture.\r\n\r\nAlso feel free to create another branch from `master` if you don't want to fix the conflicts of this branch.\r\nLet me know if I can help you on this","@lhoestq, yes, the new conflicts appeared after today merge commits on master...\r\n\r\nI am definitely going to split this PR into smaller ones in order to avoid having to resolve many conflicts after each commit on master. There are lots of conflicts and these are painful to resolve."],"created_at":1617880864000,"updated_at":1618818791000,"closed_at":1618818790000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2191","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2191","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2191.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2191.patch"},"body":"Refactorize Dataset tests to use Dataset as context manager.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2191\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2190","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2190\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2190\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2190\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2190","id":853181564,"node_id":"MDU6SXNzdWU4NTMxODE1NjQ=","number":2190,"title":"News_commentary Dataset Translation Pairs are of Incorrect Language Specified Pairs","user":{"login":"anassalamah","id":8571003,"node_id":"MDQ6VXNlcjg1NzEwMDM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8571003?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/anassalamah","html_url":"https:\/\/github.com\/anassalamah","followers_url":"https:\/\/api.github.com\/users\/anassalamah\/followers","following_url":"https:\/\/api.github.com\/users\/anassalamah\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/anassalamah\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/anassalamah\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/anassalamah\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/anassalamah\/orgs","repos_url":"https:\/\/api.github.com\/users\/anassalamah\/repos","events_url":"https:\/\/api.github.com\/users\/anassalamah\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/anassalamah\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @anassalamah,\r\n\r\nCould you please try with this:\r\n```python\r\ntrain_ds = load_dataset(\"news_commentary\", lang1=\"ar\", lang2=\"en\", split='train[:98%]')\r\nval_ds = load_dataset(\"news_commentary\", lang1=\"ar\", lang2=\"en\", split='train[98%:]')\r\n```","Hello @albertvillanova, \r\n\r\nThanks for the suggestion. I didn't know you could do that. 
however, it didn't resolve the issue\r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/8571003\/114169966-ec819400-993a-11eb-8a67-930f9a9b2290.png)\r\n"],"created_at":1617868423000,"updated_at":1621850635000,"closed_at":1621850635000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I used load_dataset to load the news_commentary dataset for \"ar-en\" translation pairs but found translations from Arabic to Hindi. \r\n\r\n```\r\ntrain_ds = load_dataset(\"news_commentary\", \"ar-en\", split='train[:98%]')\r\nval_ds = load_dataset(\"news_commentary\", \"ar-en\", split='train[98%:]')\r\n\r\n# filtering out examples that are not ar-en translations but ar-hi\r\nval_ds = val_ds.filter(lambda example, indice: indice not in chain(range(1312,1327) ,range(1384,1399), range(1030,1042)), with_indices=True)\r\n```\r\n\r\n* I'm fairly new to using datasets so I might be doing something wrong","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2190\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2189","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2189\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2189\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2189\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2189","id":853052891,"node_id":"MDU6SXNzdWU4NTMwNTI4OTE=","number":2189,"title":"save_to_disk doesn't work when we use concatenate_datasets function before creating the final dataset_object.","user":{"login":"shamanez","id":16892570,"node_id":"MDQ6VXNlcjE2ODkyNTcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16892570?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/shamanez","html_url":"https:\/\/github.com\/shamanez","followers_url":"https:\/\/api.github.com\/users\/shamanez\/followers","following_url":"https:\/\/api.github.com\/users\/shamanez\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/shamanez\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/shamanez\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/shamanez\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/shamanez\/orgs","repos_url":"https:\/\/api.github.com\/users\/shamanez\/repos","events_url":"https:\/\/api.github.com\/users\/shamanez\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/shamanez\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! 
We refactored save_to_disk in #2025 so this doesn't happen.\r\nFeel free to try it on master for now\r\nWe'll do a new release soon"],"created_at":1617856973000,"updated_at":1618408625000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"As you can see, it saves the entire dataset.\r\n\r\n@lhoestq \r\n\r\nYou can check by going through the following example,\r\n\r\n```\r\nfrom datasets import load_from_disk,concatenate_datasets\r\n\r\nloaded_data=load_from_disk('\/home\/gsir059\/HNSW-ori\/my_knowledge_dataset')\r\nn=20\r\nkb_list=[loaded_data.shard(n, i, contiguous=True) for i in range(n)]\r\nfinal_dataset=concatenate_datasets([kb_list[1],kb_list[2]])\r\nfinal_dataset.save_to_disk('\/home\/gsir059\/haha\/k.arrow')\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2189\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2188","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2188\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2188\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2188\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2188","id":853044166,"node_id":"MDU6SXNzdWU4NTMwNDQxNjY=","number":2188,"title":"Duplicate data in Timit dataset","user":{"login":"BHM-RB","id":78190188,"node_id":"MDQ6VXNlcjc4MTkwMTg4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/78190188?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/BHM-RB","html_url":"https:\/\/github.com\/BHM-RB","followers_url":"https:\/\/api.github.com\/users\/BHM-RB\/followers","following_url":"https:\/\/api.github.com\/users\/BHM-RB\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/BHM-RB\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/BHM-RB\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/BHM-RB\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/BHM-RB\/orgs","repos_url":"https:\/\/api.github.com\/users\/BHM-RB\/repos","events_url":"https:\/\/api.github.com\/users\/BHM-RB\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/BHM-RB\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Thanks for reporting\r\nIf I recall correctly this has been recently fixed #1995\r\nCan you try to upgrade your local version of `datasets` ?\r\n```\r\npip install --upgrade datasets\r\n```","Hi Ihoestq,\r\n\r\nThank you. 
It works after upgrading the datasets\r\n"],"created_at":1617855714000,"updated_at":1617883999000,"closed_at":1617883999000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I ran a simple code to list all texts in Timit dataset and the texts were all the same.\r\nIs this dataset corrupted?\r\n**Code:**\r\ntimit = load_dataset(\"timit_asr\")\r\nprint(*timit['train']['text'], sep='\\n')\r\n**Result:**\r\nWould such an act of refusal be useful?\r\nWould such an act of refusal be useful?\r\nWould such an act of refusal be useful?\r\nWould such an act of refusal be useful?\r\n...\r\n...\r\nWould such an act of refusal be useful?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2188\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2187","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2187\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2187\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2187\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2187","id":852939736,"node_id":"MDU6SXNzdWU4NTI5Mzk3MzY=","number":2187,"title":"Question (potential issue?) related to datasets caching","user":{"login":"ioana-blue","id":17202292,"node_id":"MDQ6VXNlcjE3MjAyMjky","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17202292?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ioana-blue","html_url":"https:\/\/github.com\/ioana-blue","followers_url":"https:\/\/api.github.com\/users\/ioana-blue\/followers","following_url":"https:\/\/api.github.com\/users\/ioana-blue\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ioana-blue\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ioana-blue\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ioana-blue\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ioana-blue\/orgs","repos_url":"https:\/\/api.github.com\/users\/ioana-blue\/repos","events_url":"https:\/\/api.github.com\/users\/ioana-blue\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ioana-blue\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892912,"node_id":"MDU6TGFiZWwxOTM1ODkyOTEy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/question","name":"question","color":"d876e3","default":true,"description":"Further information is requested"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["An educated guess: does this refer to the fact that depending on the custom column names in the dataset files (csv in this case), there is a dataset loader being created? and this dataset loader - using the \"custom data configuration\" is used among all jobs running using this particular csv files? (thinking out loud here...)\r\n\r\nIf this is the case, it may be ok for my use case (have to think about it more), still a bit surprising given that datasets caching is disabled (or so I hope) by the lines I pasted above. ","Hi ! Currently disabling the caching means that all the dataset transform like `map`, `filter` etc. 
ignore the cache: it doesn't write nor read processed cache files.\r\nHowever `load_dataset` reuses datasets that have already been prepared: it does reload prepared dataset files.\r\n\r\nIndeed from the documentation:\r\n> datasets.set_caching_enabled(boolean: bool)\r\n\r\n> When applying transforms on a dataset, the data are stored in cache files. The caching mechanism allows to reload an existing cache file if it\u2019s already been computed.\r\n> Reloading a dataset is possible since the cache files are named using the dataset fingerprint, which is updated after each transform.\r\n> If disabled, the library will no longer reload cached datasets files when applying transforms to the datasets. More precisely, if the caching is disabled:\r\n> - cache files are always recreated\r\n> - cache files are written to a temporary directory that is deleted when session closes\r\n> - cache files are named using a random hash instead of the dataset fingerprint - use datasets.Dataset.save_to_disk() to save a transformed dataset or it will be deleted when session closes\r\n> - caching doesn\u2019t affect datasets.load_dataset(). If you want to regenerate a dataset from scratch you should use the download_mode parameter in datasets.load_dataset().","Thank you for the clarification. \r\n\r\nThis is a bit confusing. On one hand, it says that cache files are always recreated and written to a temporary directory that is removed; on the other hand the last bullet point makes me think that since the default according to the docs for `download_mode (Optional datasets.GenerateMode) \u2013 select the download\/generate mode - Default to REUSE_DATASET_IF_EXISTS` => it almost sounds that it could reload prepared dataset files. Where are these files stored? I guess not in the temporary directory that is removed... \r\n\r\nI find this type of api design error-prone. When I see as a programmer `datasets.set_caching_enabled(False)` I expect no reuse of anything in the cache. ","It would be nice if the documentation elaborated on all the possible values for `download_mode` and\/or a link to `datasets.GenerateMode`. \r\nThis info here:\r\n```\r\n \"\"\"`Enum` for how to treat pre-existing downloads and data.\r\n The default mode is `REUSE_DATASET_IF_EXISTS`, which will reuse both\r\n raw downloads and the prepared dataset if they exist.\r\n The generations modes:\r\n | | Downloads | Dataset |\r\n | -----------------------------------|-----------|---------|\r\n | `REUSE_DATASET_IF_EXISTS` (default)| Reuse | Reuse |\r\n | `REUSE_CACHE_IF_EXISTS` | Reuse | Fresh |\r\n | `FORCE_REDOWNLOAD` | Fresh | Fresh |\r\n```","I have another question. Assuming that I understood correctly and there is reuse of datasets files when caching is disabled (!), I'm guessing there is a directory that is created based on some information on the dataset file. I'm interested in the situation where I'm loading a (custom) dataset from local disk. What information is used to create the directory\/filenames where the files are stored?\r\n\r\nI'm concerned about the following scenario: if I have a file, let's say `train.csv` at path `the_path`, run once, the dataset is prepared, some models are run, etc. Now let's say there is an issue and I recreate `train.csv` at the same path `the_path`. Is there enough information in the temporary name\/hash to *not* reload the *old* prepared dataset (e.g., timestamp of the file)? Or is it going to reload the *old* prepared file? 
","Thanks for the feedback, we'll work in improving this aspect of the documentation.\r\n\r\n> Where are these files stored? I guess not in the temporary directory that is removed...\r\n\r\nWe're using the Arrow file format to load datasets. Therefore each time you load a dataset, it is prepared as an arrow file on your disk. By default the file is located in the ~\/.cache\/huggingface\/datasets\/\/\/ directory.\r\n\r\n> What information is used to create the directory\/filenames where the files are stored?\r\n\r\nThe config_id contains a hash that takes into account:\r\n- the dataset loader used and its source code (e.g. the \"csv\" loader)\r\n- the arguments passed to the loader (e.g. the csv delimiter)\r\n- metadata of the local data files if any (e.g. their timestamps)\r\n\r\n> I'm concerned about the following scenario: if I have a file, let's say train.csv at path the_path, run once, the dataset is prepared, some models are run, etc. Now let's say there is an issue and I recreate train.csv at the same path the_path. Is there enough information in the temporary name\/hash to not reload the old prepared dataset (e.g., timestamp of the file)? Or is it going to reload the old prepared file?\r\n\r\nYes the timestamp of the local csv file is taken into account. If you edit your csv file, the config_id will change and loading the dataset will create a new arrow file.","Thank you for all your clarifications, really helpful! \r\n\r\nIf you have the bandwidth, please do revisit the api wrt cache disabling. Anywhere in the computer stack (hardware included) where you disable the cache, one assumes there is no caching that happens. ","That makes total sense indeed !\r\nI think we can do the change","I have another question about caching, this time in the case where FORCE_REDOWNLOAD is used to load the dataset, the datasets cache is one directory as defined by HF_HOME and there are multiple concurrent jobs running in a cluster using the same local dataset (i.e., same local files in the cluster). Does anything in the naming convention and\/or file access\/locking that you're using prevent race conditions between the concurrent jobs on the caching of the local dataset they all use?\r\n\r\nI noticed some errors (can provide more details if helpful) in load_dataset\/prepare_split that lead to my question above. \r\n\r\nLet me know if my question is clear, I can elaborate more if needed @lhoestq Thank you!","I got another error that convinces me there is a race condition (one of the test files had zero samples at prediction time). I think it comes down to the fact that the `config_id` above (used in the naming for the cache) has no information on who's touching the data. If I have 2 concurrent jobs, both loading the same dataset and forcing redownload, they may step on each other foot\/caching of the dataset. ","We're using a locking mechanism to prevent two processes from writing at the same time. The locking is based on the `filelock` module.\r\nAlso directories that are being written use a suffix \".incomplete\" so that reading is not possible on a dataset being written.\r\n\r\nDo you think you could provide a simple code to reproduce the race condition you experienced ?","I can provide details about the code I'm running (it's really-really close to some official samples from the huggingface transformers examples, I can point to the exact sample file, I kept a record of that). 
I can also describe in which conditions this race occurs (I'm convinced it has to do with forcing the redownloading of the dataset, I've been running hundreds of experiments before and didn't have a problem before I forced the redownload). I also can provide samples of the different stack errors I get and some details about the level of concurrency of jobs I was running. I can also try to imagine how the race manifests (I'm fairly sure that it's a combo of one job cleaning up and another job being in the middle of the run).\r\n\r\nHowever, I have to cleanup all this to make sure I'm no spilling any info I shouldn't be spilling. I'll try to do it by the end of the week, if you think all this is helpful. \r\n\r\nFor now, I have a workaround. Don't use forcing redownloading. And to be ultra careful (although I don't think this is a problem), I run a series of jobs that will prepare the datasets and I know there is no concurrency wrt the dataset. Once that's done (and I believe even having multiple jobs loading the datasets at the same time doesn't create problems, as long as REUSE_DATASET_IF_EXISTS is the policy for loading the dataset, so the filelock mechanism you're using is working in that scenario), the prepared datasets will be reused, no race possible in any way. \r\n\r\nThanks for all the details you provided, it helped me understand the underlying implementation and coming up with workarounds when I ran into issues. "],"created_at":1617840988000,"updated_at":1618412158000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I thought I had disabled datasets caching in my code, as follows:\r\n```\r\nfrom datasets import set_caching_enabled\r\n...\r\ndef main():\r\n\r\n # disable caching in datasets\r\n set_caching_enabled(False)\r\n```\r\nHowever, in my log files I see messages like the following:\r\n\r\n```\r\n04\/07\/2021 18:34:42 - WARNING - datasets.builder - Using custom data configuration default-888a87931cbc5877\r\n04\/07\/2021 18:34:42 - WARNING - datasets.builder - Reusing dataset csv (xxxx\/cache-transformers\/datasets\/csv\/default-888a87931cbc5877\/0.0.0\/965b6429be0fc05f975b608ce64e1fa941cc8fb4f30629b523d2390f3c0e1a93\r\n```\r\nCan you please let me know what this reusing dataset csv means? I wouldn't expect any reusing with the datasets caching disabled. 
Thank you!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2187\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2186","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2186\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2186\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2186\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2186","id":852840819,"node_id":"MDExOlB1bGxSZXF1ZXN0NjExMDMxNzE0","number":2186,"title":"GEM: new challenge sets","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["cc @sebastiangehrmann"],"created_at":1617831547000,"updated_at":1617832595000,"closed_at":1617832595000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2186","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2186","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2186.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2186.patch"},"body":"This PR updates the GEM dataset to:\r\n- remove extraneous fields in WikiAuto after https:\/\/github.com\/huggingface\/datasets\/pull\/2171 fixed the source\r\n- add context and services to Schema Guided Dialog\r\n- Add new or update challenge sets for MLSUM ES and DE, XSUM, and SGD","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2186\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2185","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2185\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2185\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2185\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2185","id":852684395,"node_id":"MDU6SXNzdWU4NTI2ODQzOTU=","number":2185,"title":".map() and distributed 
training","user":{"login":"VictorSanh","id":16107619,"node_id":"MDQ6VXNlcjE2MTA3NjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16107619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/VictorSanh","html_url":"https:\/\/github.com\/VictorSanh","followers_url":"https:\/\/api.github.com\/users\/VictorSanh\/followers","following_url":"https:\/\/api.github.com\/users\/VictorSanh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/VictorSanh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/VictorSanh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/VictorSanh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/VictorSanh\/orgs","repos_url":"https:\/\/api.github.com\/users\/VictorSanh\/repos","events_url":"https:\/\/api.github.com\/users\/VictorSanh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/VictorSanh\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi, one workaround would be to save the mapped(tokenized in your case) file using `save_to_disk`, and having each process load this file using `load_from_disk`. This is what I am doing, and in this case, I turn off the ability to automatically load from the cache.\r\n\r\nAlso, multiprocessing the map function seems to be slower at the moment (#1992), hope this helps you.","Thanks @hwijeen for the workaround, feels a bit prototypical but it works! (it seems files are written twice then though)\r\n\r\n(I haven't observed slowness using multiprocessed map function but I could be wrong)","To my understanding, files are written twice anyhow(one after load_dataset, another aftet map). It's just that you now have it at the location where you can see, whereas it was secretlely saved at caching folder(.cache\/huggingface\/datasets by default)! Correct me if I'm wrong!","Slowness in multiprocessing has been observed in certain environments but not others. We're investigating ;)","So to answer my initial question, I was just doing something stupid as I was not re-giving the `preprocessing_num_workers` arguments when launching the distributed training (and it was then set to `None`). I initially thought the hash was computed only with the `tokenize_function` but it's all arguments. 
Thanks @lhoestq for clarifying!"],"created_at":1617819734000,"updated_at":1617982711000,"closed_at":1617982711000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\nI have a question regarding distributed training and the `.map` call on a dataset.\r\n\r\nI have a local dataset \"my_custom_dataset\" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`.\r\n`dataset` is then tokenized:\r\n```python\r\ndatasets = load_from_disk(dataset_path=my_path)\r\n\r\n[...]\r\n\r\ndef tokenize_function(examples):\r\n return tokenizer(examples[text_column_name])\r\n\r\nlogger.info(\"Mapping dataset to tokenized dataset.\")\r\ntokenized_datasets = datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n num_proc=preprocessing_num_workers,\r\n remove_columns=column_names,\r\n load_from_cache_file=True,\r\n)\r\n```\r\nI am using 31 workers (`preprocessing_num_workers=31`) and thus it creates 31 `cache*.arrow` files in `my_path\/train` (there is only a train split).\r\nWhen I relaunch the script, the map is tokenization is skipped in favor of loading the 31 previously cached files, and that's perfect.\r\n\r\nEverything so far was done by launching a **single process script**.\r\nI now launch the same training script in **distributed mode** (`pytorch -m torch.distributed.launch --nproc_per_node 2`). However, once it reaches the map call, it re-does the tokenization... instead of loading the 31 cached files. \r\n\r\nI tried adding the `cache_file_name` argument: `cache_file_name={\"train\": my_path\/one_of_the_arrow_file}`, but I can't give the 31 cached files, so it probably isn't the right way to do it.\r\n\r\n**My question: what is the best way to load cached files if they were pre-processed and dumped in multiple arrow files?** It seems automatically handled for single processes but fails on distributed training.\r\n\r\n- I am following the same structure as the examples of transformers (more specifically [run_clm.py](https:\/\/github.com\/huggingface\/transformers\/blob\/master\/examples\/language-modeling\/run_clm.py) in my case)\r\n- I am using 1.5.0 version of datasets if that matters.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2185\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2184","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2184\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2184\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2184\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2184","id":852597258,"node_id":"MDExOlB1bGxSZXF1ZXN0NjEwODIxMTc0","number":2184,"title":"Implementation of 
class_encode_column","user":{"login":"SBrandeis","id":33657802,"node_id":"MDQ6VXNlcjMzNjU3ODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33657802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SBrandeis","html_url":"https:\/\/github.com\/SBrandeis","followers_url":"https:\/\/api.github.com\/users\/SBrandeis\/followers","following_url":"https:\/\/api.github.com\/users\/SBrandeis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SBrandeis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SBrandeis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SBrandeis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SBrandeis\/orgs","repos_url":"https:\/\/api.github.com\/users\/SBrandeis\/repos","events_url":"https:\/\/api.github.com\/users\/SBrandeis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SBrandeis\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Made the required changes @lhoestq , sorry it took so much time!"],"created_at":1617814063000,"updated_at":1618573477000,"closed_at":1618572419000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2184","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2184","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2184.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2184.patch"},"body":"Addresses #2176 \r\n\r\nI'm happy to discuss the API and internals!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2184\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2183","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2183\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2183\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2183\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2183","id":852518411,"node_id":"MDExOlB1bGxSZXF1ZXN0NjEwNzU3MjUz","number":2183,"title":"Fix s3fs tests for py36 and 
py37+","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1617808631000,"updated_at":1617872085000,"closed_at":1617872084000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2183","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2183","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2183.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2183.patch"},"body":"Recently several changes happened:\r\n1. latest versions of `fsspec` require python>3.7 for async features\r\n2. `s3fs` added a dependency on `aiobotocore`, which is not compatible with the `moto` s3 mock context manager\r\n\r\nThis PR fixes both issues, by pinning `fsspec` and `s3fs` for python 3.6, and by using `moto` in server mode to support running the tests on python>=3.7 with the latest version of `fsspec` and `s3fs`.\r\n\r\ncc @philschmid ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2183\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2182","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2182\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2182\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2182\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2182","id":852384872,"node_id":"MDExOlB1bGxSZXF1ZXN0NjEwNjQ2MDIy","number":2182,"title":"Set default in-memory value depending on the dataset 
size","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/1","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/1","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/1\/labels","id":6644198,"node_id":"MDk6TWlsZXN0b25lNjY0NDE5OA==","number":1,"title":"1.6","description":"Next minor release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":4,"state":"closed","created_at":1617973671000,"updated_at":1618937446000,"due_on":1618556400000,"closed_at":1618937446000},"comments":["I ping @krandiash to keep him up to date.","TODO:\r\n- [x] Add a section in the docs about this.\r\n- ~Add a warning if someone tries to specify `cache_file_name=` in `map`, `filter` etc. on a dataset that is in memory, since the computation is not going to be cached in this case.~","@lhoestq I have a question, regarding:\r\n> Also maybe we should add a warning if someone tries to specify cache_file_name= in map, filter etc. 
on a dataset that is in memory, since the computation is not going to be cached in this case.\r\n\r\n- It might be the case that the user has an in-memory dataset and might want to use `map` and cache it, by passing `cache_file_name=`\r\n- This is indeed allowed by the library and works as expected: the dataset is cached.\r\n\r\nWhy adding a warning?","Yes right, I meant if `load_from_cache_file` is set to True and `cache_file_name ` is None. my bad :p"],"created_at":1617800418000,"updated_at":1618928412000,"closed_at":1618913044000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2182","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2182","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2182.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2182.patch"},"body":"Set a default value for `in_memory` depending on the size of the dataset to be loaded.\r\n\r\nClose #2179.\r\n\r\nTODO:\r\n- [x] Add a section in the docs about this.\r\n- ~Add a warning if someone tries to specify `cache_file_name=` in `map`, `filter` etc. on a dataset that is in memory, since the computation is not going to be cached in this case.~","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2182\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2181","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2181\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2181\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2181\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2181","id":852261607,"node_id":"MDU6SXNzdWU4NTIyNjE2MDc=","number":2181,"title":"Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries)","user":{"login":"hwijeen","id":29157715,"node_id":"MDQ6VXNlcjI5MTU3NzE1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29157715?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hwijeen","html_url":"https:\/\/github.com\/hwijeen","followers_url":"https:\/\/api.github.com\/users\/hwijeen\/followers","following_url":"https:\/\/api.github.com\/users\/hwijeen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hwijeen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hwijeen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hwijeen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hwijeen\/orgs","repos_url":"https:\/\/api.github.com\/users\/hwijeen\/repos","events_url":"https:\/\/api.github.com\/users\/hwijeen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hwijeen\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Can you try to increase the block size ? 
For example\r\n```python\r\nblock_size_10MB = 10<<20\r\nload_dataset(\"json\", ..., block_size=block_size_10MB)\r\n```\r\nThe block size corresponds to how much bytes to process at a time from the input stream.\r\nThis will determine multi-threading granularity as well as the size of individual chunks in the dataset.\r\n\r\nYou can also try with bigger block sizes if needed","Hi @lhoestq! Thank you for your prompt reply.\r\nI have experimented with (10<<20, 10<<28, 10<<30, 10<<33, 10<<34), since my machine has 192G of memory, but it's either the above-mentioned error or processed killed because of OOM.\r\n\r\nCould you give me a bit of background on why block size needs to be exactly calibrated?\r\nTo my understanding, small block sized should run just fine despite its slowness..\r\n\r\n\r\n","We're using the JSON loader of pyarrow. It parses the file chunk by chunk to load the dataset.\r\nThis issue happens when there's no delimiter in one chunk of data. For json line, the delimiter is the end of line.\r\nSo with a big value for chunk_size this should have worked unless you have one extremely long line in your file.\r\n\r\nAlso what version of pyarrow are you using ?\r\n\r\nFInally I wonder if it could be an issue on pyarrow's side when using big json files. (I haven't tested big json files like yours)","I'm using `pyarrow==3.0.0` with `datasets==1.5.0`.\r\n\r\nYour point totally makes sense. I will check if my jsonl file contains an extremely long file and let you know. \r\n\r\nHere are some different error messages that I got when tweaking `block_size`. I also suspect that this is related to the pyarrow... but I guess it would be wonderful if datasesets could give a clear guide on how to play with large datasets! (I am suddenly experiencing various issue when working with large datasets.. e.g. #1992 )\r\n```python\r\n return paj.ReadOptions(use_threads=self.use_threads, block_size=self.block_size)\r\n File \"pyarrow\/_json.pyx\", line 56, in pyarrow._json.ReadOptions.__init__\r\n File \"pyarrow\/_json.pyx\", line 81, in pyarrow._json.ReadOptions.block_size.__set__\r\nOverflowError: value too large to convert to int32_t\r\n```\r\n\r\n```python\r\n\r\nline 83, in _generate_tables\r\n parse_options=self.config.pa_parse_options,\r\n File \"pyarrow\/_json.pyx\", line 247, in pyarrow._json.read_json\r\n File \"pyarrow\/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow\/error.pxi\", line 84, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Exceeded maximum rows\r\n```","I am getting the same error. When I tweak the block_size, I also find:\r\n`OverflowError: value too large to convert to int32_t`\r\nand \r\n`pyarrow.lib.ArrowInvalid: Exceeded maximum rows`\r\n","I made more tests. I used a smaller dataset and I was getting the same error, which means that it was not necessarily linked to the dataset size. To make both my smaller and larger datasets work, I got rid of lists with the json file. 
I had the following data format:\r\n```python\r\n[\r\n {'key': \"a\", 'value': ['one', 'two', 'three']},\r\n {'key': \"b\", 'value': ['four', 'five', 'six']}\r\n]\r\n```\r\nI changed to:\r\n\r\n```python\r\n {'key': \"a\", 'value': 'one\\ntwo\\nthree'},\r\n {'key': \"b\", 'value': 'four\\nfive\\nsix']}\r\n```\r\nand that worked!\r\n\r\nI used the following to reformat my json file:\r\n```python\r\nwith open(file_name, \"w\", encoding=\"utf-8\") as f:\r\n for item in list_:\r\n f.write(json.dumps(item) + \"\\n\")\r\n```\r\nThis works with `block_size_10MB = 10 << 20` or without specifying `block_size`.","Thanks @hwijeen for reporting and thanks @jpilaul for pointing this out.\r\n\r\nIndeed, those are different JSON-like formats:\r\n- the first one is the **standard JSON** format: all the file content is JSON-valid, thus all content is either a JSON object (between curly brackets `{...}`) or a JSON array (between square brackets `[...]`)\r\n- the second one is called **JSON Lines**: the entire file content is not JSON-valid, but only every line (newline-delimited) is JSON-valid\r\n\r\nCurrently PyArrow only supports **JSON Lines** format: \r\n- https:\/\/arrow.apache.org\/docs\/python\/generated\/pyarrow.json.read_json.html\r\n > Currently only the line-delimited JSON format is supported.\r\n- https:\/\/arrow.apache.org\/docs\/python\/json.html\r\n > Arrow supports reading columnar data from line-delimited JSON files.","Thanks @albertvillanova for your explanation, it is helpful to know (maybe add to docs?)!\r\nHowever, the problem I described above happened when I was dealing with jsonl files \ud83d\ude3f\r\nAlthough I did not thoroughly inspect, I suspect the cause was the one extremely long document in my case.","I see... I guess there is another problem going one then, related to the size."],"created_at":1617791206000,"updated_at":1618211755000,"closed_at":1618211755000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi, thanks for the great library. 
I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.\r\nWhen loading a huge json file of 500GB, pyarrow complains as follows:\r\n```\r\nTraceback (most recent call last):\r\n File \"\/home\/user\/.pyenv\/versions\/3.7.9\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 531, in incomplete_dir\r\n yield tmp_dir\r\n File \"\/home\/user\/.pyenv\/versions\/3.7.9\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 573, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/home\/user\/.pyenv\/versions\/3.7.9\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 650, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/home\/user\/.pyenv\/versions\/3.7.9\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 1027, in _prepare_split\r\n for key, table in utils.tqdm(generator, unit=\" tables\", leave=False, disable=not_verbose):\r\n File \"\/home\/user\/.pyenv\/versions\/3.7.9\/lib\/python3.7\/site-packages\/tqdm\/std.py\", line 1133, in __iter__\r\n for obj in iterable:\r\n File \"\/app\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/json\/9498524fd296a6cca99c66d6c5be507d1c0991f5a814e535b507f4a66096a641\/json.py\", line 83, in _generate_tables\r\n parse_options=self.config.pa_parse_options,\r\n File \"pyarrow\/_json.pyx\", line 247, in pyarrow._json.read_json\r\n File \"pyarrow\/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow\/error.pxi\", line 84, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)\r\n```\r\nWhen using only a small portion of the sample file, say first 100 lines, it works perfectly well..\r\n\r\nI see that it is the error from pyarrow, but could you give me a hint or possible solutions?\r\n#369 describes the same error and #372 claims to have fixed the issue, but I have no clue why I am still getting this one. 
Thanks in advance!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2181\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2180","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2180\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2180\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2180\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2180","id":852258635,"node_id":"MDExOlB1bGxSZXF1ZXN0NjEwNTQxOTA2","number":2180,"title":"Add tel to xtreme tatoeba","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1617790995000,"updated_at":1617810635000,"closed_at":1617810634000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2180","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2180","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2180.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2180.patch"},"body":"This should fix issue #2149 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2180\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2179","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2179\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2179\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2179\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2179","id":852237957,"node_id":"MDU6SXNzdWU4NTIyMzc5NTc=","number":2179,"title":"Load small datasets in-memory instead of using memory 
map","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"},{"id":2067400324,"node_id":"MDU6TGFiZWwyMDY3NDAwMzI0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/generic%20discussion","name":"generic discussion","color":"c5def5","default":false,"description":"Generic discussion on the library"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/us
ers\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1617789496000,"updated_at":1618913044000,"closed_at":1618913043000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"Currently all datasets are loaded using memory mapping by default in `load_dataset`.\r\nHowever this might not be necessary for small datasets. If a dataset is small enough, then it can be loaded in-memory and:\r\n- its memory footprint would be small so it's ok\r\n- in-memory computations\/queries would be faster\r\n- the caching on-disk would be disabled, making computations even faster (no I\/O bound because of the disk)\r\n- but running the same computation a second time would recompute everything since there would be no cached results on-disk. But this is probably fine since computations would be fast anyway + users should be able to provide a cache filename if needed.\r\n\r\nTherefore, maybe the default behavior of `load_dataset` should be to load small datasets in-memory and big datasets using memory mapping.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2179\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2178","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2178\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2178\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2178\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2178","id":852215058,"node_id":"MDExOlB1bGxSZXF1ZXN0NjEwNTA1Mjg1","number":2178,"title":"Fix cast memory usage by using map on subtables","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/1","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/1","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/1\/labels","id":6644198,"node_id":"MDk6TWlsZXN0b25lNjY0NDE5OA==","number":1,"title":"1.6","description":"Next minor 
release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":4,"state":"closed","created_at":1617973671000,"updated_at":1618937446000,"due_on":1618556400000,"closed_at":1618937446000},"comments":["I addressed your comments about the docstrings and the output validation :)","I updated the bleurt mocking method and bleurt test is passing now.\r\nI also ran the slow tests and they are passing for bleurt.","Thanks @lhoestq and @albertvillanova !"],"created_at":1617787850000,"updated_at":1618928444000,"closed_at":1618306096000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2178","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2178","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2178.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2178.patch"},"body":"The `cast` operation on a pyarrow Table may create new arrays in memory.\r\nThis is an issue since users expect memory mapped datasets to not fill up the RAM.\r\n\r\nTo fix that I used `map` to write a new arrow file on disk when cast is used.\r\nTo make things more convenient I introduced the `arrow` formatting of a dataset, to make it return pyarrow tables instead of python dicts. 
This way one can use pyarrow transforms directly when using `map`.\r\n\r\nedit: we'll use the same mechanism for `filter`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2178\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2177","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2177\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2177\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2177\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2177","id":852065307,"node_id":"MDExOlB1bGxSZXF1ZXN0NjEwMzc5MDYx","number":2177,"title":"add social thumbnial","user":{"login":"philschmid","id":32632186,"node_id":"MDQ6VXNlcjMyNjMyMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32632186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/philschmid","html_url":"https:\/\/github.com\/philschmid","followers_url":"https:\/\/api.github.com\/users\/philschmid\/followers","following_url":"https:\/\/api.github.com\/users\/philschmid\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/philschmid\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/philschmid\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/philschmid\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/philschmid\/orgs","repos_url":"https:\/\/api.github.com\/users\/philschmid\/repos","events_url":"https:\/\/api.github.com\/users\/philschmid\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/philschmid\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1617777606000,"updated_at":1617783361000,"closed_at":1617783361000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2177","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2177","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2177.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2177.patch"},"body":"# What does this PR do?\r\n\r\nI added OpenGraph\/ Twitter Card support to the docs to create nice social thumbnails.\r\n\r\n![Bildschirmfoto 2021-04-07 um 08 36 50](https:\/\/user-images.githubusercontent.com\/32632186\/113821698-bac2ce80-977c-11eb-81aa-d8f16355857e.png)\r\n\r\nTo be able to add these I needed to install `sphinxext-opengraph`. I came across this [issue](https:\/\/github.com\/readthedocs\/readthedocs.org\/issues\/1758) on the readthedocs repo saying that since someone has built this plugin they are not integrating and providing documentation to it. That's why I added it for creating the documentation. The repository can be found [here](https:\/\/github.com\/wpilibsuite\/sphinxext-opengraph\/tree\/main).\r\n\r\nP.S. It seemed that `make style` never ran for `docs\/` i hope the changes are okay otherwise I'll revert it. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2177\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2176","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2176\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2176\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2176\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2176","id":851865795,"node_id":"MDU6SXNzdWU4NTE4NjU3OTU=","number":2176,"title":"Converting a Value to a ClassLabel","user":{"login":"nelson-liu","id":7272031,"node_id":"MDQ6VXNlcjcyNzIwMzE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7272031?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nelson-liu","html_url":"https:\/\/github.com\/nelson-liu","followers_url":"https:\/\/api.github.com\/users\/nelson-liu\/followers","following_url":"https:\/\/api.github.com\/users\/nelson-liu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nelson-liu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nelson-liu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nelson-liu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nelson-liu\/orgs","repos_url":"https:\/\/api.github.com\/users\/nelson-liu\/repos","events_url":"https:\/\/api.github.com\/users\/nelson-liu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nelson-liu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @nelson-liu!\r\nHere is what I do to convert a string to class label:\r\n\r\n```python\r\nfrom datasets import load_dataset, features\r\n\r\n\r\ndset = load_dataset(...)\r\ncol_name = \"the string column name\"\r\n\r\nclass_names = dset.unique(col_name)\r\nclass_feature = features.ClassLabel(names=sorted(class_names))\r\ndset = dset.map(lambda str_value: {col_name: class_feature.str2int(str_value)}, input_columns=col_name)\r\n\r\ndset = dset.cast(features.Features({\r\n ...\r\n col_name: class_feature\r\n})\r\n```\r\n"],"created_at":1617749656000,"updated_at":1618827034000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi!\r\n\r\nIn the docs for `cast`, it's noted that `For non-trivial conversion, e.g. string <-> ClassLabel you should use map() to update the Dataset.`\r\n\r\nWould it be possible to have an example that demonstrates such a string <-> ClassLabel conversion using `map`? 
Thanks!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2176\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2175","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2175\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2175\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2175\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2175","id":851836096,"node_id":"MDU6SXNzdWU4NTE4MzYwOTY=","number":2175,"title":"dataset.search_batch() function outputs all -1 indices sometime.","user":{"login":"shamanez","id":16892570,"node_id":"MDQ6VXNlcjE2ODkyNTcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16892570?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/shamanez","html_url":"https:\/\/github.com\/shamanez","followers_url":"https:\/\/api.github.com\/users\/shamanez\/followers","following_url":"https:\/\/api.github.com\/users\/shamanez\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/shamanez\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/shamanez\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/shamanez\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/shamanez\/orgs","repos_url":"https:\/\/api.github.com\/users\/shamanez\/repos","events_url":"https:\/\/api.github.com\/users\/shamanez\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/shamanez\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Actually, I found the answer [here](https:\/\/github.com\/facebookresearch\/faiss\/wiki\/FAQ#what-does-it-mean-when-a-search-returns--1-ids). \r\n\r\nSo we have to do some modifications to the code for instances where the index doesn't retrieve any IDs.","@lhoestq @patrickvonplaten \r\n\r\nI also found another short bug in the retrieval part. Especially, when retrieving documents. If Faiss returns the -1 as the index, the retriever will always use the last element in the dataset.\r\n\r\nplease check [def get_doc_dicts function](https:\/\/github.com\/huggingface\/transformers\/blob\/master\/src\/transformers\/models\/rag\/retrieval_rag.py#L222)\r\n\r\n\r\nDoes the use of the HNSW guarantee to retrieve valid indexes always? \r\n\r\n","Hi !\r\nNo it happens sometimes to return -1, especially if your dataset is small.\r\nIf your dataset is big enough it shouldn't happen in my experience.\r\n\r\nIdeally we should ignore all the -1 that are returned. It should be possible to change that in RAG's code ","I also checked with some indexes it returns more -1s. Specially with IVF\nwhen nprobr is very low. It doesn't happen when using HNSW though. But at\nthe moment if it happens, dataset will always return the last element.\nMaybe we should change it to repeat the most last valid retrieved doc id.\nWhat do you think?\n\nOn Wed, Apr 7, 2021, 21:09 Quentin Lhoest ***@***.***> wrote:\n\n> Hi !\n> No it happens sometimes to return -1, especially if your dataset is small.\n> If your dataset is big enough it shouldn't happen.\n>\n> Ideally we should ignore all the -1 that are returned. 
It should be\n> possible to change that in RAG's code\n>\n> \u2014\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> ,\n> or unsubscribe\n> \n> .\n>\n","That would be an easy way to workaround this issue. Feel free to open a PR on `transformers` and ping me ! :)","Sure. Will push everything together with RAG end to end. :) thanks a lot.\n\nOn Wed, Apr 7, 2021, 21:16 Quentin Lhoest ***@***.***> wrote:\n\n> That would be an easy way to workaround this issue. Feel free to open a PR\n> on transformers and ping me ! :)\n>\n> \u2014\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> ,\n> or unsubscribe\n> \n> .\n>\n"],"created_at":1617745849000,"updated_at":1618575676000,"closed_at":1618575675000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, \"IVF65536_HNSW32,Flat\")**.\r\n\r\nDuring the retrieval phase exactly in [this line of retrieval_rag.py](https:\/\/github.com\/huggingface\/transformers\/blob\/master\/src\/transformers\/models\/rag\/retrieval_rag.py#L231) an error issue when all retrieved indices are -1. Please refer to the screenshot of a PID worker. \r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/16892570\/113782387-37a67600-9786-11eb-9c29-acad661a9648.png)\r\n\r\n\r\nHere, my retrieve batch size is 2 and n_docs is 5. I can solve this by working around np. stack, but I want to ask, why we get an output index of -1. Do you have any idea :) ?\r\n\r\nIs this a problem of the index, where the faiss can't find any similar vector?\r\nIs there documentation on the output index being -1?\r\n\r\n@lhoestq \r\n ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2175\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2174","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2174\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2174\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2174\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2174","id":851383675,"node_id":"MDExOlB1bGxSZXF1ZXN0NjA5ODE2OTQ2","number":2174,"title":"Pin docutils for better 
doc","user":{"login":"sgugger","id":35901082,"node_id":"MDQ6VXNlcjM1OTAxMDgy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35901082?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sgugger","html_url":"https:\/\/github.com\/sgugger","followers_url":"https:\/\/api.github.com\/users\/sgugger\/followers","following_url":"https:\/\/api.github.com\/users\/sgugger\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sgugger\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sgugger\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sgugger\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sgugger\/orgs","repos_url":"https:\/\/api.github.com\/users\/sgugger\/repos","events_url":"https:\/\/api.github.com\/users\/sgugger\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sgugger\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1617712820000,"updated_at":1617713753000,"closed_at":1617713753000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2174","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2174","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2174.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2174.patch"},"body":"The latest release of docutils make the navbar in the documentation weird and the Markdown wrongly interpreted:\r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/35901082\/113711773-5be55280-96b3-11eb-9b3b-9794f17709aa.png)\r\n\r\nWe had the same problem in Transformers and solved it by pinning docutils (a dep of sphinx).\r\n\r\nYou can see the version after the change [here](https:\/\/32769-250213286-gh.circle-artifacts.com\/0\/docs\/_build\/html\/index.html).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2174\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2173","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2173\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2173\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2173\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2173","id":851359284,"node_id":"MDExOlB1bGxSZXF1ZXN0NjA5Nzk2NzI2","number":2173,"title":"Add OpenSLR 
dataset","user":{"login":"cahya-wirawan","id":7669893,"node_id":"MDQ6VXNlcjc2Njk4OTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7669893?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cahya-wirawan","html_url":"https:\/\/github.com\/cahya-wirawan","followers_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/followers","following_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/orgs","repos_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/repos","events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1617710914000,"updated_at":1618246486000,"closed_at":1618246486000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2173","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2173","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2173.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2173.patch"},"body":"OpenSLR (https:\/\/openslr.org\/) is a site devoted to hosting speech and language resources, such as training corpora for speech recognition, and software related to speech recognition. There are around 80 speech datasets listed in OpenSLR, currently this PR includes only 9 speech datasets SLR41, SLR42, SLR43, SLR44, SLR63, SLR64, SLR65, SLR66 and SLR69 (Javanese, Khmer, Nepali and Sundanese, Malayalam, Marathi, Tamil, Telugu and Catalan). 
I can add other speech datasets gradually next time.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2173\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2172","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2172\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2172\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2172\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2172","id":851229399,"node_id":"MDExOlB1bGxSZXF1ZXN0NjA5Njg4ODgx","number":2172,"title":"Pin fsspec lower than 0.9.0","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1617700749000,"updated_at":1617702567000,"closed_at":1617702566000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2172","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2172","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2172.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2172.patch"},"body":"Today's release of `fsspec` 0.9.0 implied a new release of `s3fs` 0.6.0 but this version breaks the CI (see [here](https:\/\/app.circleci.com\/pipelines\/github\/huggingface\/datasets\/5312\/workflows\/490f3240-cd1c-4dd1-bb60-b416771c5584\/jobs\/32734) for example)\r\n\r\nI'm pinning `fsspec` until this has been resolved","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2172\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2171","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2171\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2171\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2171\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2171","id":851090662,"node_id":"MDExOlB1bGxSZXF1ZXN0NjA5NTY4MDcw","number":2171,"title":"Fixed the link to wikiauto training 
data.","user":{"login":"mounicam","id":11708999,"node_id":"MDQ6VXNlcjExNzA4OTk5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11708999?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mounicam","html_url":"https:\/\/github.com\/mounicam","followers_url":"https:\/\/api.github.com\/users\/mounicam\/followers","following_url":"https:\/\/api.github.com\/users\/mounicam\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mounicam\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mounicam\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mounicam\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mounicam\/orgs","repos_url":"https:\/\/api.github.com\/users\/mounicam\/repos","events_url":"https:\/\/api.github.com\/users\/mounicam\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mounicam\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Also you can ignore the CI failing on `docs`, this has been fixed on master :)","@lhoestq I need to update other stuff on GEM later today too, so will merge this one and remove columns in the next PR!","Ok !"],"created_at":1617693191000,"updated_at":1617725142000,"closed_at":1617725109000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2171","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2171","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2171.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2171.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2171\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2170","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2170\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2170\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2170\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2170","id":850913228,"node_id":"MDU6SXNzdWU4NTA5MTMyMjg=","number":2170,"title":"Wikipedia historic dumps are deleted but hf\/datasets hardcodes dump 
date","user":{"login":"leezu","id":946903,"node_id":"MDQ6VXNlcjk0NjkwMw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/946903?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/leezu","html_url":"https:\/\/github.com\/leezu","followers_url":"https:\/\/api.github.com\/users\/leezu\/followers","following_url":"https:\/\/api.github.com\/users\/leezu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/leezu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/leezu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/leezu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/leezu\/orgs","repos_url":"https:\/\/api.github.com\/users\/leezu\/repos","events_url":"https:\/\/api.github.com\/users\/leezu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/leezu\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["It seems that this can be fixed from user's end by including a `date` argument, like this:\r\n\r\n`dataset = datasets.load_dataset('wikipedia', '20200501.en', date='20210420')`\r\n\r\nYou can get available dates from [here](https:\/\/dumps.wikimedia.org\/enwiki\/).\r\n\r\nThis is not a proper fix however as all the files will still have '20200501' in their file names."],"created_at":1617678798000,"updated_at":1623805850000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Wikimedia does not keep all historical dumps. For example, as of today https:\/\/dumps.wikimedia.org\/kowiki\/ only provides\r\n\r\n```\r\n20201220\/ 02-Feb-2021 01:36 -\r\n20210101\/ 21-Feb-2021 01:26 -\r\n20210120\/ 02-Mar-2021 01:25 -\r\n20210201\/ 21-Mar-2021 01:26 -\r\n20210220\/ 02-Apr-2021 01:26 -\r\n20210301\/ 03-Mar-2021 08:10 -\r\n20210320\/ 21-Mar-2021 18:13 -\r\n20210401\/ 03-Apr-2021 10:08 -\r\nlatest\/ 03-Apr-2021 10:08 -\r\n```\r\n\r\nHowever, the wikipedia dataset provided in the library, only supports the following configs, none of which are applicable anymore when disregarding the cached datasets:\r\n\r\n```\r\nValueError: BuilderConfig 20210401.ko not found. 
Available: ['20200501.aa', '20200501.ab', '20200501.ace', '20200501.ady', '20200501.af', '20200501.ak', '20200501.als', '20200501.am', '20200501.an', '20200501.ang', '20200501.ar', '20200501.arc', '20200501.arz', '20200501.as', '20200501.ast', '20200501.atj', '20200501.av', '20200501.ay', '20200501.az', '20200501.azb', '20200501.ba', '20200501.bar', '20200501.bat-smg', '20200501.bcl', '20200501.be', '20200501.be-x-old', '20200501.bg', '20200501.bh', '20200501.bi', '20200501.bjn', '20200501.bm', '20200501.bn', '20200501.bo', '20200501.bpy', '20200501.br', '20200501.bs', '20200501.bug', '20200501.bxr', '20200501.ca', '20200501.cbk-zam', '20200501.cdo', '20200501.ce', '20200501.ceb', '20200501.ch', '20200501.cho', '20200501.chr', '20200501.chy', '20200501.ckb', '20200501.co', '20200501.cr', '20200501.crh', '20200501.cs', '20200501.csb', '20200501.cu', '20200501.cv', '20200501.cy', '20200501.da', '20200501.de', '20200501.din', '20200501.diq', '20200501.dsb', '20200501.dty', '20200501.dv', '20200501.dz', '20200501.ee', '20200501.el', '20200501.eml', '20200501.en', '20200501.eo', '20200501.es', '20200501.et', '20200501.eu', '20200501.ext', '20200501.fa', '20200501.ff', '20200501.fi', '20200501.fiu-vro', '20200501.fj', '20200501.fo', '20200501.fr', '20200501.frp', '20200501.frr', '20200501.fur', '20200501.fy', '20200501.ga', '20200501.gag', '20200501.gan', '20200501.gd', '20200501.gl', '20200501.glk', '20200501.gn', '20200501.gom', '20200501.gor', '20200501.got', '20200501.gu', '20200501.gv', '20200501.ha', '20200501.hak', '20200501.haw', '20200501.he', '20200501.hi', '20200501.hif', '20200501.ho', '20200501.hr', '20200501.hsb', '20200501.ht', '20200501.hu', '20200501.hy', '20200501.ia', '20200501.id', '20200501.ie', '20200501.ig', '20200501.ii', '20200501.ik', '20200501.ilo', '20200501.inh', '20200501.io', '20200501.is', '20200501.it', '20200501.iu', '20200501.ja', '20200501.jam', '20200501.jbo', '20200501.jv', '20200501.ka', '20200501.kaa', '20200501.kab', '20200501.kbd', '20200501.kbp', '20200501.kg', '20200501.ki', '20200501.kj', '20200501.kk', '20200501.kl', '20200501.km', '20200501.kn', '20200501.ko', '20200501.koi', '20200501.krc', '20200501.ks', '20200501.ksh', '20200501.ku', '20200501.kv', '20200501.kw', '20200501.ky', '20200501.la', '20200501.lad', '20200501.lb', '20200501.lbe', '20200501.lez', '20200501.lfn', '20200501.lg', '20200501.li', '20200501.lij', '20200501.lmo', '20200501.ln', '20200501.lo', '20200501.lrc', '20200501.lt', '20200501.ltg', '20200501.lv', '20200501.mai', '20200501.map-bms', '20200501.mdf', '20200501.mg', '20200501.mh', '20200501.mhr', '20200501.mi', '20200501.min', '20200501.mk', '20200501.ml', '20200501.mn', '20200501.mr', '20200501.mrj', '20200501.ms', '20200501.mt', '20200501.mus', '20200501.mwl', '20200501.my', '20200501.myv', '20200501.mzn', '20200501.na', '20200501.nah', '20200501.nap', '20200501.nds', '20200501.nds-nl', '20200501.ne', '20200501.new', '20200501.ng', '20200501.nl', '20200501.nn', '20200501.no', '20200501.nov', '20200501.nrm', '20200501.nso', '20200501.nv', '20200501.ny', '20200501.oc', '20200501.olo', '20200501.om', '20200501.or', '20200501.os', '20200501.pa', '20200501.pag', '20200501.pam', '20200501.pap', '20200501.pcd', '20200501.pdc', '20200501.pfl', '20200501.pi', '20200501.pih', '20200501.pl', '20200501.pms', '20200501.pnb', '20200501.pnt', '20200501.ps', '20200501.pt', '20200501.qu', '20200501.rm', '20200501.rmy', '20200501.rn', '20200501.ro', '20200501.roa-rup', '20200501.roa-tara', '20200501.ru', '20200501.rue', '20200501.rw', 
'20200501.sa', '20200501.sah', '20200501.sat', '20200501.sc', '20200501.scn', '20200501.sco', '20200501.sd', '20200501.se', '20200501.sg', '20200501.sh', '20200501.si', '20200501.simple', '20200501.sk', '20200501.sl', '20200501.sm', '20200501.sn', '20200501.so', '20200501.sq', '20200501.sr', '20200501.srn', '20200501.ss', '20200501.st', '20200501.stq', '20200501.su', '20200501.sv', '20200501.sw', '20200501.szl', '20200501.ta', '20200501.tcy', '20200501.te', '20200501.tet', '20200501.tg', '20200501.th', '20200501.ti', '20200501.tk', '20200501.tl', '20200501.tn', '20200501.to', '20200501.tpi', '20200501.tr', '20200501.ts', '20200501.tt', '20200501.tum', '20200501.tw', '20200501.ty', '20200501.tyv', '20200501.udm', '20200501.ug', '20200501.uk', '20200501.ur', '20200501.uz', '20200501.ve', '20200501.vec', '20200501.vep', '20200501.vi', '20200501.vls', '20200501.vo', '20200501.wa', '20200501.war', '20200501.wo', '20200501.wuu', '20200501.xal', '20200501.xh', '20200501.xmf', '20200501.yi', '20200501.yo', '20200501.za', '20200501.zea', '20200501.zh', '20200501.zh-classical', '20200501.zh-min-nan', '20200501.zh-yue', '20200501.zu']\r\n```\r\n\r\nThe cached datasets:\r\n\r\n```\r\n% aws s3 --no-sign-request --endpoint-url https:\/\/storage.googleapis.com ls s3:\/\/huggingface-nlp\/cache\/datasets\/wikipedia\/\r\n PRE 20200501.de\/\r\n PRE 20200501.en\/\r\n PRE 20200501.fr\/\r\n PRE 20200501.frr\/\r\n PRE 20200501.it\/\r\n PRE 20200501.simple\/\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2170\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2169","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2169\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2169\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2169\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2169","id":850456180,"node_id":"MDExOlB1bGxSZXF1ZXN0NjA5MDI2ODUz","number":2169,"title":"Updated WER metric implementation to avoid memory issues","user":{"login":"diego-fustes","id":5707233,"node_id":"MDQ6VXNlcjU3MDcyMzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5707233?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/diego-fustes","html_url":"https:\/\/github.com\/diego-fustes","followers_url":"https:\/\/api.github.com\/users\/diego-fustes\/followers","following_url":"https:\/\/api.github.com\/users\/diego-fustes\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/diego-fustes\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/diego-fustes\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/diego-fustes\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/diego-fustes\/orgs","repos_url":"https:\/\/api.github.com\/users\/diego-fustes\/repos","events_url":"https:\/\/api.github.com\/users\/diego-fustes\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/diego-fustes\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! 
Thanks for suggesting this fix \r\nUnfortunately it looks like it's already been fixed by #2111 \r\n\r\nFeel free to share your thoughts about this PR !\r\n\r\nI'm closing this one if you don't mind."],"created_at":1617637400000,"updated_at":1617721378000,"closed_at":1617721378000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2169","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2169","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2169.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2169.patch"},"body":"This is in order to fix this issue:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/issues\/2078\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2169\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2168","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2168\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2168\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2168\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2168","id":849957941,"node_id":"MDExOlB1bGxSZXF1ZXN0NjA4NjA4Nzg5","number":2168,"title":"Preserve split type when realoding dataset","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for diving into this !\r\n\r\nBefore going further, I just want to make sure if using `eval` is the right solution\r\nPersonally I'm not a big fan of `eval` since it has many security concerns. Also storing string representations of python objects in the json files is not ideal either IMO, so maybe it's possible to change this aspect instead.\r\n\r\nMaybe it would be better to convert the `_RelativeInstruction` to a string (or \"specs\") ?\r\nIt looks like `ReadInstruction.from_spec` already exists, but not the other way around.\r\nThe specs are the string representation of instructions. For example: `train+validation[:50%]`.\r\n\r\nLet me know what you think ! And thanks again, this issue has been here for a while now ^^","@lhoestq Yes, before going with `eval`, I thought about this approach with the \"spec\". 
The only issue with this approach is that we have to come up with a represenation for the `rounding` arg.\r\n\r\nWhat do you think about this (maybe too verbose)?\r\n```python\r\n>>> print(ReadInstruction(\"train\", rounding=\"pct1_dropremainder\", from_=10, to=30).to_spec())\r\ntrain[10:30](pct1_dropremainder)","Good idea !\r\n\r\nFirst we must note that the rounding is only used for percentage instructions.\r\nFor absolute instructions there's no rounding ambiguity.\r\n\r\nBy default the rounding is set to `closest`. For example if you have a train set of 999 examples and if you provide an instruction spec `\"train[:1%]\"`, you're going to get the first ten examples (while the `pct1_dropremainder ` rounding would return 9 examples).\r\n\r\nCurrently there's no way to get an instruction with a `pct1_dropremainder` rounding strategy from an instruction spec.\r\nSo we can either drop the support of `pct1_dropremainder` or define a way to use this strategy from a spec.\r\nI don't think dropping `pct1_dropremainder` would be a good idea since it allows to load each percent to all have the same number of examples (even the last one). Therefore I think your suggestion makes total sense and we should add a representation of this rounding strategy.\r\n\r\nI like what you suggested `train[10%:30%](pct1_dropremainder)` is fine, and it seems compatible with the regex that parses the instructions specs.","@lhoestq I've made the changes as you suggested. Ready for the review.","@lhoestq I've added a test and addressed the comments.\r\n\r\nAdditionally, `ReadInstruction` is converted to its spec form in `builder.py` to avoid a circular import that would happen if this logic was in `arrow_reader.py`. If you think it's better to have this logic in `arrow_reader.py`, the import can be delayed by putting it inside a function."],"created_at":1617569181000,"updated_at":1618829825000,"closed_at":1618823335000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2168","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2168","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2168.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2168.patch"},"body":"Fixes #2167 \r\n\r\nUsing `eval` is not ideal for security reasons (in web apps I assume), but without it the code would be much more complex IMO.\r\n\r\nIn terms of style, instead of explicitly importing a private member (`_RelativeInstruction`), we can add these imports at the top of the module:\r\n```python\r\nfrom . import arrow_reader # gives us access to ReadInstruction and _RelativeInstruction\r\nfrom . 
import splits # gives us access to NamedSplit\r\n```\r\n\r\nand then define the `eval` globals as follows:\r\n```python\r\n{**arrow_reader.__dict__, **splits.__dict__}\r\n```\r\n\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2168\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2167","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2167\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2167\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2167\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2167","id":849944891,"node_id":"MDU6SXNzdWU4NDk5NDQ4OTE=","number":2167,"title":" Split type not preserved when reloading the dataset","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1617564594000,"updated_at":1618823335000,"closed_at":1618823335000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"A minimal reproducible example:\r\n```python\r\n>>> from datasets import load_dataset, Dataset\r\n>>> dset = load_dataset(\"sst\", split=\"train\")\r\n>>> dset.save_to_disk(\"sst\")\r\n>>> type(dset.split)\r\n\r\n>>> dset = Dataset.load_from_disk(\"sst\")\r\n>>> type(dset.split) # NamedSplit expected\r\n\r\n```\r\n\r\nIt seems like this bug was introduced in #2025.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2167\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2166","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2166\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2166\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2166\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2166","id":849778545,"node_id":"MDU6SXNzdWU4NDk3Nzg1NDU=","number":2166,"title":"Regarding Test Sets for the GEM 
datasets","user":{"login":"vyraun","id":17217068,"node_id":"MDQ6VXNlcjE3MjE3MDY4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17217068?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vyraun","html_url":"https:\/\/github.com\/vyraun","followers_url":"https:\/\/api.github.com\/users\/vyraun\/followers","following_url":"https:\/\/api.github.com\/users\/vyraun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vyraun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vyraun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vyraun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vyraun\/orgs","repos_url":"https:\/\/api.github.com\/users\/vyraun\/repos","events_url":"https:\/\/api.github.com\/users\/vyraun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vyraun\/received_events","type":"User","site_admin":false},"labels":[{"id":2067401494,"node_id":"MDU6TGFiZWwyMDY3NDAxNDk0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/Dataset%20discussion","name":"Dataset discussion","color":"72f99f","default":false,"description":"Discussions on the datasets"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @vyraun ! The test references for CommonGen are not publicly available: you can reach out to the original dataset authors if you would like to ask for them, but we will not be releasing them as part of GEM (March 31st was the release date for the test set inputs, references are incidentally released for some of the test sets but shouldn't really be used for benchmark submissions)\r\n\r\ncc @sebastiangehrmann","Oh okay, thanks @yjernite ! "],"created_at":1617501765000,"updated_at":1617696792000,"closed_at":1617696792000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"@yjernite Hi, are the test sets for the GEM datasets scheduled to be [added soon](https:\/\/gem-benchmark.com\/shared_task)? 
\r\n\r\ne.g.\r\n\r\n```\r\nfrom datasets import load_dataset\r\nDATASET_NAME=\"common_gen\"\r\ndata = load_dataset(\"gem\", DATASET_NAME)\r\n```\r\n\r\nThe test set doesn't have the target or references.\r\n\r\n```\r\ndata['test'][0]\r\n{'concept_set_id': 0, 'concepts': ['drill', 'field', 'run', 'team'], 'gem_id': 'common_gen-test-0', 'gem_parent_id': 'common_gen-test-0', 'references': [], 'target': ''}\r\n```\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2166\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2165","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2165\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2165\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2165\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2165","id":849771665,"node_id":"MDU6SXNzdWU4NDk3NzE2NjU=","number":2165,"title":"How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset","user":{"login":"y-rokutan","id":24562381,"node_id":"MDQ6VXNlcjI0NTYyMzgx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24562381?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/y-rokutan","html_url":"https:\/\/github.com\/y-rokutan","followers_url":"https:\/\/api.github.com\/users\/y-rokutan\/followers","following_url":"https:\/\/api.github.com\/users\/y-rokutan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/y-rokutan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/y-rokutan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/y-rokutan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/y-rokutan\/orgs","repos_url":"https:\/\/api.github.com\/users\/y-rokutan\/repos","events_url":"https:\/\/api.github.com\/users\/y-rokutan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/y-rokutan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi,\r\n\r\na HF dataset can be converted to a Torch Dataset with a simple wrapper as follows:\r\n```python\r\nfrom torch.utils.data import Dataset\r\n \r\nclass HFDataset(Dataset):\r\n def __init__(self, dset):\r\n self.dset = dset\r\n\r\n def __getitem__(self, idx):\r\n return self.dset[idx]\r\n\r\n def __len__(self):\r\n return len(self.dset)\r\n\r\ntrain_ds = HFDataset(train_ds)\r\n```\r\n@lhoestq Since the Arrow Dataset already provides `__getitem__` and `__len__`, I think we could use the [virtual subclass](https:\/\/docs.python.org\/3\/library\/abc.html#abc.ABCMeta.register) mechanism from the `abc` module to elegantly solve this issue. This mechanism would allow the Arrow Dataset to be used in place of the Torch Dataset because the `isinstance(instance of Arrow Dataset, TorchDataset)` check would return True (DeepSpeed has this check [here](https:\/\/github.com\/microsoft\/DeepSpeed\/blob\/ab5534fc4c0f8ca21ada321f9730d723aa31288b\/deepspeed\/runtime\/engine.py#L823)).\r\n\r\nAnd it requires a minimal change in the `arrow_dataset.py` file:\r\n```python\r\nif config.TORCH_AVAILABLE:\r\n from torch.utils.data import Dataset as TorchDataset\r\n TorchDataset.register(Dataset)\r\n```","Interesting ! 
Thanks for sharing this @mariosasko . I like the idea\r\nThis looks like something we should add IMO","@mariosasko \r\nThx for your code!\r\nIt perfectly works with a small modification for HF NLP dataset:\r\n```\r\noriginal_ds = nlp.load_dataset('scientific_papers', 'arxiv')\r\ntrain_ds = HFDataset(train_ds['train']) # needs splitting\r\n```","@lhoestq Sadly, from Python 3.7 onwards `torch.utils.data.Dataset` doesn't support the virtual subclass mechanism due to `typing.Generic` type no longer having `abc.ABCMeta` as its metaclass.\r\n\r\nWith that in mind, another option is to remove a direct type check (`isinstance(dataset, torch.utils.data.Dataset)`) in `deepspeed.initalize` and to rewrite the checks in a manner similar to `torch.utils.data.DataLoader` ([link](https:\/\/github.com\/pytorch\/pytorch\/blob\/b80c6f863f2327c712c478f67c248b94d66b65ac\/torch\/utils\/data\/dataloader.py#L197-L239)). This is exactly why the `DataLoader` works with arbitrary objects that provide `__getitem__` and `__len__` (and in our case, the `ArrowDataset`). By doing so, their code wouldn't be any stricter in comparison to the `DataLoader`.\r\n\r\nSo if you agree, I can open an issue in their repo and fix this if they like the idea.","That makes sense ! Feel free to open an issue on their repo and discuss this idea","@y-rokutan Hi, now if you install `deepspeed` from master (this feature will be available in the next official release), the code should work without subclassing. Let us know if you still have any issues.","Worth mentioning that any function that expects a `torch..Dataset` (like `torch..DataLoader`) will fail a mypy-esque typecheck if a `datasets.Dataset` is passed, even though it implements the interface correctly (I think). The virtual subclass idea was a good one- I wonder if there's another workaround given the Generic issue. What we're really talking about is something similar to the structural subtyping semantics that `typing.Protocol` defines. If `torch..DataLoader` accepted anything that supports `__getitem__` and `__len__` methods this would be much easier. Not sure if there's a way to do this without the wrapper from the perspective of `datasets`."],"created_at":1617498108000,"updated_at":1629820535000,"closed_at":1617807964000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\n\r\nI'm trying to pretraine deep-speed model using HF arxiv dataset like:\r\n```\r\ntrain_ds = nlp.load_dataset('scientific_papers', 'arxiv')\r\ntrain_ds.set_format(\r\n type=\"torch\",\r\n columns=[\"input_ids\", \"attention_mask\", \"global_attention_mask\", \"labels\"],\r\n )\r\nengine, _, _, _ = deepspeed.initialize(\r\n args=args,\r\n model=model,\r\n model_parameters=[p for p in model.parameters() if p.requires_grad],\r\n training_data=train_ds)\r\n```\r\nbut deepspeed.initialize accepts torch.utils.data.Dataset only. 
How can I convert HF-style dataset to torch-style dataset?\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2165\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2164","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2164\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2164\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2164\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2164","id":849739759,"node_id":"MDExOlB1bGxSZXF1ZXN0NjA4NDQ0MTE3","number":2164,"title":"Replace assertTrue(isinstance with assertIsInstance in tests","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1617484022000,"updated_at":1617720069000,"closed_at":1617720068000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2164","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2164","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2164.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2164.patch"},"body":"Replaces all the occurrences of the `assertTrue(isinstance(` pattern with `assertIsInstance`.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2164\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2163","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2163\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2163\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2163\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2163","id":849669366,"node_id":"MDExOlB1bGxSZXF1ZXN0NjA4Mzk0NDMz","number":2163,"title":"Concat only unique fields in 
DatasetInfo.from_merge","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @mariosasko,\r\nJust came across this PR and I was wondering if we can use\r\n`description = \"\\n\\n\".join(OrderedDict.fromkeys([info.description for info in dataset_infos]))`\r\n\r\nThis will obviate the need for `unique` and is almost as fast as `set`. We could have used `dict` inplace of `OrderedDict` but it's available 3.7+ onwards","Hi,\r\n\r\nlet's see what @lhoestq thinks. Although my approach adds more code, it's more readable IMO.","Yeah, that's true. Your approach is more readable."],"created_at":1617460290000,"updated_at":1617720000000,"closed_at":1617719999000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2163","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2163","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2163.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2163.patch"},"body":"I thought someone from the community with less experience would be interested in fixing this issue, but that wasn't the case.\r\n\r\nFixes #2103 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2163\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2162","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2162\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2162\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2162\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2162","id":849129201,"node_id":"MDU6SXNzdWU4NDkxMjkyMDE=","number":2162,"title":"visualization for cc100 is broken 
","user":{"login":"dorost1234","id":79165106,"node_id":"MDQ6VXNlcjc5MTY1MTA2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/79165106?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dorost1234","html_url":"https:\/\/github.com\/dorost1234","followers_url":"https:\/\/api.github.com\/users\/dorost1234\/followers","following_url":"https:\/\/api.github.com\/users\/dorost1234\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dorost1234\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dorost1234\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dorost1234\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dorost1234\/orgs","repos_url":"https:\/\/api.github.com\/users\/dorost1234\/repos","events_url":"https:\/\/api.github.com\/users\/dorost1234\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dorost1234\/received_events","type":"User","site_admin":false},"labels":[{"id":2107841032,"node_id":"MDU6TGFiZWwyMTA3ODQxMDMy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/nlp-viewer","name":"nlp-viewer","color":"94203D","default":false,"description":""}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This looks like an issue with the cc100 dataset itself but not sure\r\nDid you try loading cc100 on your machine ?","Hi\nloading works fine, but the viewer only is broken\nthanks\n\nOn Wed, Apr 7, 2021 at 12:17 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> This looks like an issue with the cc100 dataset itself but not sure\n> Did you try loading cc100 on your machine ?\n>\n> \u2014\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> ,\n> or unsubscribe\n> \n> .\n>\n"],"created_at":1617358273000,"updated_at":1617800467000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nvisualization through dataset viewer for cc100 is broken\r\nhttps:\/\/huggingface.co\/datasets\/viewer\/\r\n\r\nthanks a lot\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2162\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2161","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2161\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2161\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2161\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2161","id":849127041,"node_id":"MDU6SXNzdWU4NDkxMjcwNDE=","number":2161,"title":"any possibility to download part of large datasets 
only?","user":{"login":"dorost1234","id":79165106,"node_id":"MDQ6VXNlcjc5MTY1MTA2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/79165106?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dorost1234","html_url":"https:\/\/github.com\/dorost1234","followers_url":"https:\/\/api.github.com\/users\/dorost1234\/followers","following_url":"https:\/\/api.github.com\/users\/dorost1234\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dorost1234\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dorost1234\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dorost1234\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dorost1234\/orgs","repos_url":"https:\/\/api.github.com\/users\/dorost1234\/repos","events_url":"https:\/\/api.github.com\/users\/dorost1234\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dorost1234\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Not yet but it\u2019s on the short\/mid-term roadmap (requested by many indeed).","oh, great, really awesome feature to have, thank you very much for the great, fabulous work","We'll work on dataset streaming soon. This should allow you to only load the examples you need ;)","thanks a lot Quentin, this would be really really a great feature to have\n\nOn Wed, Apr 7, 2021 at 12:14 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> We'll work on dataset streaming soon. This should allow you to only load\n> the examples you need ;)\n>\n> \u2014\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> ,\n> or unsubscribe\n> \n> .\n>\n","Is streaming completed? On the 1.8.0 docs it is mentioned (https:\/\/huggingface.co\/docs\/datasets\/dataset_streaming.html), but when following the example I get the following error:\r\n\r\n```\r\n>>> dataset2 = load_dataset(\"amazon_us_reviews\", \"Pet_Products_v1_00\", split='train', streaming=True)\r\n\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n in ()\r\n----> 1 en_dataset = load_dataset('oscar', \"unshuffled_deduplicated_en\", split='train', streaming=True)\r\n\r\n3 frames\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs)\r\n 339 if value is not None:\r\n 340 if not hasattr(builder_config, key):\r\n--> 341 raise ValueError(f\"BuilderConfig {builder_config} doesn't have a '{key}' key.\")\r\n 342 setattr(builder_config, key, value)\r\n 343 \r\n\r\nValueError: BuilderConfig OscarConfig(name='unshuffled_deduplicated_en', version=1.0.0, data_dir=None, data_files=None, description='Unshuffled and deduplicated, English OSCAR dataset') doesn't have a 'streaming' key.\r\n```\r\n\r\nUPDATE: Managed to get streaming working by building from source and installing the additional `datasets[streaming]` package:\r\n\r\n```\r\n!pip install git+https:\/\/github.com\/huggingface\/datasets.git\r\n!pip install datasets[streaming]\r\n```","Hi ! Streaming is available on `master` only right now. 
We'll make a new release 1.9.0 on Monday :)"],"created_at":1617358006000,"updated_at":1625239169000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nSome of the datasets I need like cc100 are very large, and then I wonder if I can download first X samples of the shuffled\/unshuffled data without going through first downloading the whole data then sampling? thanks","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2161\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2160","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2160\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2160\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2160\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2160","id":849052921,"node_id":"MDU6SXNzdWU4NDkwNTI5MjE=","number":2160,"title":"data_args.preprocessing_num_workers almost freezes ","user":{"login":"dorost1234","id":79165106,"node_id":"MDQ6VXNlcjc5MTY1MTA2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/79165106?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dorost1234","html_url":"https:\/\/github.com\/dorost1234","followers_url":"https:\/\/api.github.com\/users\/dorost1234\/followers","following_url":"https:\/\/api.github.com\/users\/dorost1234\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dorost1234\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dorost1234\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dorost1234\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dorost1234\/orgs","repos_url":"https:\/\/api.github.com\/users\/dorost1234\/repos","events_url":"https:\/\/api.github.com\/users\/dorost1234\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dorost1234\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi.\r\nI cannot always reproduce this issue, and on later runs I did not see it so far. 
Sometimes also I set 8 processes but I see less being showed, is this normal, here only 5 are shown for 8 being set, thanks\r\n\r\n```\r\n#3: 11%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258a | 172\/1583 [00:46<06:21, 3.70ba\/s]\r\n#4: 9%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258f | 143\/1583 [00:46<07:46, 3.09ba\/s]\r\n#7: 6%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588 | 98\/1583 [00:45<11:34, 2.14ba\/s]\r\n#5: 8%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258d | 124\/1583 [00:46<09:03, 2.68ba\/s]\r\n#6: 7%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258f \r\n```","closing since I cannot reproduce it again, thanks "],"created_at":1617350173000,"updated_at":1617358472000,"closed_at":1617358471000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi @lhoestq \r\n\r\nI am running this code from huggingface transformers https:\/\/github.com\/huggingface\/transformers\/blob\/master\/examples\/language-modeling\/run_mlm.py \r\n\r\nto speed up tokenization, since I am running on multiple datasets, I am using data_args.preprocessing_num_workers = 4 with opus100 corpus but this moves on till a point and then this freezes almost for sometime during tokenization steps and then this is back again, overall to me taking more time than normal case, I appreciate your advice on how I can use this option properly to speed up.\r\n\r\nthanks","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2160\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2159","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2159\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2159\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2159\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2159","id":848851962,"node_id":"MDU6SXNzdWU4NDg4NTE5NjI=","number":2159,"title":"adding ccnet dataset","user":{"login":"dorost1234","id":79165106,"node_id":"MDQ6VXNlcjc5MTY1MTA2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/79165106?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dorost1234","html_url":"https:\/\/github.com\/dorost1234","followers_url":"https:\/\/api.github.com\/users\/dorost1234\/followers","following_url":"https:\/\/api.github.com\/users\/dorost1234\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dorost1234\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dorost1234\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dorost1234\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dorost1234\/orgs","repos_url":"https:\/\/api.github.com\/users\/dorost1234\/repos","events_url":"https:\/\/api.github.com\/users\/dorost1234\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dorost1234\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new 
dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["closing since I think this is cc100, just the name has been changed. thanks "],"created_at":1617319716000,"updated_at":1617357919000,"closed_at":1617357919000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** ccnet\r\n\r\n- **Description:** \r\nCommon Crawl\r\n\r\n- **Paper:** \r\nhttps:\/\/arxiv.org\/abs\/1911.00359\r\n\r\n- **Data:** \r\nhttps:\/\/github.com\/facebookresearch\/cc_net\r\n\r\n- **Motivation:**\r\nthis is one of the most comprehensive clean monolingual datasets across a variety of languages. Quite important for cross-lingual reseach\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n\r\nthanks","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2159\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2158","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2158\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2158\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2158\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2158","id":848506746,"node_id":"MDU6SXNzdWU4NDg1MDY3NDY=","number":2158,"title":"viewer \"fake_news_english\" error","user":{"login":"emanuelevivoli","id":9447991,"node_id":"MDQ6VXNlcjk0NDc5OTE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9447991?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/emanuelevivoli","html_url":"https:\/\/github.com\/emanuelevivoli","followers_url":"https:\/\/api.github.com\/users\/emanuelevivoli\/followers","following_url":"https:\/\/api.github.com\/users\/emanuelevivoli\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/emanuelevivoli\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/emanuelevivoli\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/emanuelevivoli\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/emanuelevivoli\/orgs","repos_url":"https:\/\/api.github.com\/users\/emanuelevivoli\/repos","events_url":"https:\/\/api.github.com\/users\/emanuelevivoli\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/emanuelevivoli\/received_events","type":"User","site_admin":false},"labels":[{"id":2107841032,"node_id":"MDU6TGFiZWwyMTA3ODQxMDMy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/nlp-viewer","name":"nlp-viewer","color":"94203D","default":false,"description":""}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting !\r\nThe viewer doesn't have all the dependencies of the datasets. 
We may add openpyxl to be able to show this dataset properly"],"created_at":1617286400000,"updated_at":1617791169000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"When I visit the [Huggingface - viewer](https:\/\/huggingface.co\/datasets\/viewer\/) web site, under the dataset \"fake_news_english\" I've got this error:\r\n\r\n> ImportError: To be able to use this dataset, you need to install the following dependencies['openpyxl'] using 'pip install # noqa: requires this pandas optional dependency for reading xlsx files' for instance'\r\n\r\nas well as the error Traceback.\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2158\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2157","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2157\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2157\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2157\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2157","id":847205239,"node_id":"MDExOlB1bGxSZXF1ZXN0NjA2MjM1NjUx","number":2157,"title":"updated user permissions based on umask","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1617219509000,"updated_at":1617693559000,"closed_at":1617693559000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2157","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2157","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2157.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2157.patch"},"body":"Updated user permissions based on running user's umask (#2065). 
Let me know if `0o666` is looking good or should I change it to `~umask` only (to give execute permissions as well) ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2157\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2156","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2156\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2156\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2156\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2156","id":847198295,"node_id":"MDExOlB1bGxSZXF1ZXN0NjA2MjI5MTky","number":2156,"title":"User permissions","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1617219228000,"updated_at":1617219264000,"closed_at":1617219264000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2156","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2156","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2156.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2156.patch"},"body":"Updated user permissions based on running user's umask. 
Let me know if `0o666` is looking good or should I change it to `~umask` only (to give execute permissions as well)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2156\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2155","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2155\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2155\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2155\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2155","id":846786897,"node_id":"MDExOlB1bGxSZXF1ZXN0NjA1ODU3MTU4","number":2155,"title":"Add table classes to the documentation","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Just note that docstrings injected from PyArrow do not follow the same convention for formatting types in `Args` or `Returns` as we do... Not a big problem, anyway! 
\ud83d\ude04 "],"created_at":1617201370000,"updated_at":1617295590000,"closed_at":1617205328000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2155","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2155","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2155.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2155.patch"},"body":"Following #2025 , I added the table classes to the documentation\r\n\r\ncc @albertvillanova ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2155\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2154","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2154\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2154\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2154\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2154","id":846763960,"node_id":"MDExOlB1bGxSZXF1ZXN0NjA1ODM2Mjc1","number":2154,"title":"Adding the NorNE dataset for Norwegian POS and NER","user":{"login":"versae","id":173537,"node_id":"MDQ6VXNlcjE3MzUzNw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/173537?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/versae","html_url":"https:\/\/github.com\/versae","followers_url":"https:\/\/api.github.com\/users\/versae\/followers","following_url":"https:\/\/api.github.com\/users\/versae\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/versae\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/versae\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/versae\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/versae\/orgs","repos_url":"https:\/\/api.github.com\/users\/versae\/repos","events_url":"https:\/\/api.github.com\/users\/versae\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/versae\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Awesome!"],"created_at":1617200570000,"updated_at":1617269220000,"closed_at":1617268568000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2154","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2154","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2154.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2154.patch"},"body":"NorNE is a manually annotated corpus of named entities which extends the annotation of the existing Norwegian Dependency Treebank. 
Comprising both of the official standards of written Norwegian (Bokm\u00e5l and Nynorsk), the corpus contains around 600,000 tokens and annotates a rich set of entity types including persons, organizations, locations, geo-political entities, products, and events, in addition to a class corresponding to nominals derived from names.\r\n\r\nSee #1720.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2154\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2153","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2153\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2153\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2153\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2153","id":846181502,"node_id":"MDU6SXNzdWU4NDYxODE1MDI=","number":2153,"title":"load_dataset ignoring features","user":{"login":"GuillemGSubies","id":37592763,"node_id":"MDQ6VXNlcjM3NTkyNzYz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/37592763?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/GuillemGSubies","html_url":"https:\/\/github.com\/GuillemGSubies","followers_url":"https:\/\/api.github.com\/users\/GuillemGSubies\/followers","following_url":"https:\/\/api.github.com\/users\/GuillemGSubies\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/GuillemGSubies\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/GuillemGSubies\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/GuillemGSubies\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/GuillemGSubies\/orgs","repos_url":"https:\/\/api.github.com\/users\/GuillemGSubies\/repos","events_url":"https:\/\/api.github.com\/users\/GuillemGSubies\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/GuillemGSubies\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"open","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi ! Thanks for reporting. I opened a PR to fix this issue: #2201","Nice question which helped me a lot! I have wasted a lot of time to the `DatasetDict` creation from a csv file. Hope the document of this module add some simple examples.","Hi :) We're indeed working on tutorials that we will add to the docs !"],"created_at":1617179409000,"updated_at":1630077838000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"First of all, I'm sorry if it is a repeated issue or the changes are already in master, I searched and I didn't find anything. \r\n\r\nI'm using datasets 1.5.0\r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/37592763\/113114369-8f376580-920b-11eb-900d-94365b59f04b.png)\r\n\r\nAs you can see, when I load the dataset, the ClassLabels are ignored, I have to cast the dataset in order to make it work.\r\n\r\nCode to reproduce:\r\n\r\n```python\r\nimport datasets\r\ndata_location = \"\/data\/prueba_multiclase\"\r\nfeatures = datasets.Features(\r\n {\"texto\": datasets.Value(\"string\"), \"label\": datasets.features.ClassLabel(names=[\"false\", \"true\"])}\r\n )\r\ndataset = datasets.load_dataset(\r\n \"csv\", data_files=data_location, delimiter=\"\\t\", features=features\r\n )\r\n```\r\n\r\nDataset I used:\r\n\r\n\r\n[prueba_multiclase.zip](https:\/\/github.com\/huggingface\/datasets\/files\/6235022\/prueba_multiclase.zip) (it has to be unzipped)\r\n\r\n\r\nThank you! 
\u2764\ufe0f \r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2153\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2152","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2152\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2152\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2152\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2152","id":845751273,"node_id":"MDExOlB1bGxSZXF1ZXN0NjA0ODk0MDkz","number":2152,"title":"Update README.md","user":{"login":"JieyuZhao","id":22306304,"node_id":"MDQ6VXNlcjIyMzA2MzA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22306304?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JieyuZhao","html_url":"https:\/\/github.com\/JieyuZhao","followers_url":"https:\/\/api.github.com\/users\/JieyuZhao\/followers","following_url":"https:\/\/api.github.com\/users\/JieyuZhao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JieyuZhao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JieyuZhao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JieyuZhao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JieyuZhao\/orgs","repos_url":"https:\/\/api.github.com\/users\/JieyuZhao\/repos","events_url":"https:\/\/api.github.com\/users\/JieyuZhao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JieyuZhao\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1617160879000,"updated_at":1617272437000,"closed_at":1617272436000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2152","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2152","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2152.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2152.patch"},"body":"Updated some descriptions of Wino_Bias dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2152\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2151","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2151\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2151\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2151\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2151","id":844886081,"node_id":"MDExOlB1bGxSZXF1ZXN0NjA0MDg5MDMw","number":2151,"title":"Add support for axis in concatenate 
datasets","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/1","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/1","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/1\/labels","id":6644198,"node_id":"MDk6TWlsZXN0b25lNjY0NDE5OA==","number":1,"title":"1.6","description":"Next minor release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":4,"state":"closed","created_at":1617973671000,"updated_at":1618937446000,"due_on":1618556400000,"closed_at":1618937446000},"comments":["@lhoestq I am going to implement the consolidation step you mentioned in #1870.","@lhoestq I was thinking that the order of the TableBlocks is not relevant, isn't it?\r\n\r\nI mean, in order to consolidate _consecutive_ in-memory table blocks, in this case:\r\n```\r\nblocks = [in_memory_1, memory_mapped, in_memory_2]\r\n```\r\nI could reorder the list:\r\n```\r\nblocks = [in_memory_1, in_memory_2, memory_mapped]\r\n```\r\nso that the first 2 can be consolidated into a single one:\r\n```\r\nblocks = [in_memory_3, memory_mapped]\r\n```","I think the order is important, users 
won't expect the dataset to be \"shuffled\" when they add a new item","> I think the order is important, users won't expect the dataset to be \"shuffled\" when they add a new item\r\n\r\nOK, therefore I leave `_consolidate_blocks` as it is, which currently keeps the order of the blocks (no shuffling).","Thank you guys for implementing this. Minor thing I noticed in the [documentation](https:\/\/huggingface.co\/docs\/datasets\/package_reference\/main_classes.html#datasets.concatenate_datasets): it says \"Converts a list of Dataset with **the same schema** into a single Dataset\". With the addition of the axis parameter, perhaps this should be reworded, no?"],"created_at":1617123524000,"updated_at":1624470062000,"closed_at":1618848438000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2151","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2151","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2151.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2151.patch"},"body":"Add support for `axis` (0 or 1) in `concatenate_datasets`.\r\n\r\nClose #853.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2151\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2150","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2150\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2150\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2150\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2150","id":844776448,"node_id":"MDExOlB1bGxSZXF1ZXN0NjAzOTg3OTcx","number":2150,"title":"Allow pickling of big in-memory tables","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1617119516000,"updated_at":1617187035000,"closed_at":1617187034000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2150","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2150","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2150.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2150.patch"},"body":"This should fix issue #2134 
\r\n\r\nPickling is limited to <4GiB objects, it's not possible to pickle a big arrow table (for multiprocessing for example).\r\nFor big tables, we have to write them on disk and only pickle the path to the table.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2150\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2149","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2149\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2149\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2149\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2149","id":844734076,"node_id":"MDU6SXNzdWU4NDQ3MzQwNzY=","number":2149,"title":"Telugu subset missing for xtreme tatoeba dataset","user":{"login":"jerryIsHere","id":50871412,"node_id":"MDQ6VXNlcjUwODcxNDEy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/50871412?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jerryIsHere","html_url":"https:\/\/github.com\/jerryIsHere","followers_url":"https:\/\/api.github.com\/users\/jerryIsHere\/followers","following_url":"https:\/\/api.github.com\/users\/jerryIsHere\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jerryIsHere\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jerryIsHere\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jerryIsHere\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jerryIsHere\/orgs","repos_url":"https:\/\/api.github.com\/users\/jerryIsHere\/repos","events_url":"https:\/\/api.github.com\/users\/jerryIsHere\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jerryIsHere\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Good catch ! 
Thanks for reporting\r\n\r\nI just opened #2180 to fix this"],"created_at":1617117994000,"updated_at":1617791015000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"from nlp import load_dataset\r\ntrain_dataset = load_dataset('xtreme', 'tatoeba.tel')['validation']\r\nValueError: BuilderConfig tatoeba.tel not found.\r\n\r\nbut language tel is actually included in xtreme:\r\nhttps:\/\/github.com\/google-research\/xtreme\/blob\/master\/utils_preprocess.py\r\ndef tatoeba_preprocess(args):\r\n lang3_dict = {\r\n 'afr':'af', 'ara':'ar', 'bul':'bg', 'ben':'bn',\r\n 'deu':'de', 'ell':'el', 'spa':'es', 'est':'et',\r\n 'eus':'eu', 'pes':'fa', 'fin':'fi', 'fra':'fr',\r\n 'heb':'he', 'hin':'hi', 'hun':'hu', 'ind':'id',\r\n 'ita':'it', 'jpn':'ja', 'jav':'jv', 'kat':'ka',\r\n 'kaz':'kk', 'kor':'ko', 'mal':'ml', 'mar':'mr',\r\n 'nld':'nl', 'por':'pt', 'rus':'ru', 'swh':'sw',\r\n 'tam':'ta', **_'tel':'te'_**, 'tha':'th', 'tgl':'tl', <----here\r\n 'tur':'tr', 'urd':'ur', 'vie':'vi', 'cmn':'zh',\r\n 'eng':'en',\r\n }","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2149\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2148","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2148\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2148\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2148\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2148","id":844700910,"node_id":"MDU6SXNzdWU4NDQ3MDA5MTA=","number":2148,"title":"Add configurable options to `seqeval` metric","user":{"login":"marrodion","id":44571847,"node_id":"MDQ6VXNlcjQ0NTcxODQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/44571847?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/marrodion","html_url":"https:\/\/github.com\/marrodion","followers_url":"https:\/\/api.github.com\/users\/marrodion\/followers","following_url":"https:\/\/api.github.com\/users\/marrodion\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/marrodion\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/marrodion\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/marrodion\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/marrodion\/orgs","repos_url":"https:\/\/api.github.com\/users\/marrodion\/repos","events_url":"https:\/\/api.github.com\/users\/marrodion\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/marrodion\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @marrodion. \r\n\r\nThanks for pointing this out. 
It would be great to incorporate this metric-specific enhancement.\r\n\r\nAnother possibility would be to require the user to input the scheme as a string `mode=\"strict\", scheme=\"IOB2\"` and then dynamically import the corresponding module using Python `importlib`:\r\n```python\r\nif scheme:\r\n scheme = importlib.import_module(f\"seqeval.scheme.{scheme}\")\r\n```\r\n\r\nFeel free to create a Pull Request to make this contribution."],"created_at":1617116646000,"updated_at":1618494586000,"closed_at":1618494586000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Right now `load_metric(\"seqeval\")` only works in the default mode of evaluation (equivalent to conll evaluation).\r\n\r\nHowever, seqeval library [supports](https:\/\/github.com\/chakki-works\/seqeval#support-features) different evaluation schemes (IOB1, IOB2, etc.), which can be plugged in just by supporting additional kwargs in `Seqeval._compute`\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/85cf7ff920c90ca2e12bedca12b36d2a043c3da2\/metrics\/seqeval\/seqeval.py#L109\r\n\r\nThings that would be relevant are, for example, supporting `mode=\"strict\", scheme=IOB2` to count only full entity match as a true positive and omit partial matches.\r\n\r\nThe only problem I see is that the spirit of `metrics` seems to not require additional imports from user. `seqeval` only supports schemes as objects, without any string aliases. \r\n\r\nIt can be solved naively with mapping like `{\"IOB2\": seqeval.scheme.IOB2}`. Or just left as is and require user to explicitly import scheme from `seqeval` if he wants to configure it past the default implementation.\r\n\r\nIf that makes sense, I am happy to implement the change.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2148\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2147","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2147\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2147\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2147\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2147","id":844687831,"node_id":"MDExOlB1bGxSZXF1ZXN0NjAzOTA3NjM4","number":2147,"title":"Render docstring return type as 
inline","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1617116143000,"updated_at":1617196265000,"closed_at":1617196265000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2147","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2147","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2147.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2147.patch"},"body":"This documentation setting will avoid having the return type in a separate line under `Return type`. \r\n\r\nSee e.g. 
current docs for `Dataset.to_csv`.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2147\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2146","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2146\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2146\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2146\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2146","id":844673244,"node_id":"MDU6SXNzdWU4NDQ2NzMyNDQ=","number":2146,"title":"Dataset file size on disk is very large with 3D Array","user":{"login":"jblemoine","id":22685854,"node_id":"MDQ6VXNlcjIyNjg1ODU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22685854?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jblemoine","html_url":"https:\/\/github.com\/jblemoine","followers_url":"https:\/\/api.github.com\/users\/jblemoine\/followers","following_url":"https:\/\/api.github.com\/users\/jblemoine\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jblemoine\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jblemoine\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jblemoine\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jblemoine\/orgs","repos_url":"https:\/\/api.github.com\/users\/jblemoine\/repos","events_url":"https:\/\/api.github.com\/users\/jblemoine\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jblemoine\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! In the arrow file we store all the integers as uint8.\r\nSo your arrow file should weigh around `height x width x n_channels x n_images` bytes.\r\n\r\nWhat feature type do your TFDS dataset have ?\r\n\r\nIf it uses a `tfds.features.Image` type, then what is stored is the encoded data (as png or jpg for example). Since these encodings are made for compression, the resulting tfrecord is smaller that the arrow file.\r\n\r\nWe are working on adding a similar feature in `datasets`: the ability to store the encoded data instead of the raw integers for images, but also for audio data. This way, arrow files will have similar sizes as tfrecords for images.","Thanks for the prompt response. You're right about the encoding, I have the `tfds.features.Image` feature type you mentioned.\r\nHowever, as described in the `dataset_info.json`, my dataset is made of 1479 (224x224x3) images. 1479 x 224 x 224 x 3 = 222630912 bytes which is far from the actual size 520803408 bytes. \r\n\r\nAnyway I look forward to the Image feature type in `datasets`. ","@lhoestq I changed the data structure so I have a 2D Array feature type instead of a 3D Array by grouping the two last dimensions ( a 224x672 2D Array instead of a 224x224x3 3D Array). The file size is now 223973964 bytes, nearly half the previous size! Which is around of what I would expect.\r\nI found similar behavior in existing `datasets` collection, when comparing black and white vs color image, for example MNIST vs CIFAR. 
","Interesting !\r\nThis may be because of the offsets that are stored with the array data.\r\n\r\nCurrently the offsets are stored even if the `shape` of the arrays is fixed. This was needed because of some issues with pyarrow a few months ago. I think these issues have been addressed now, so we can probably try to remove them to make the file lighter.\r\n\r\nIdeally in your case the floats data should be 220 MB for both Array2D and Array3D","Yeah for sure, can you be a bit more specific about where the offset is stored in the code base ? And any reference to pyarrow issues if you have some. I would be very interested in contributing to `datasets` by trying to fix this issue. ","Pyarrow has two types of lists: variable length lists and fixed size lists.\r\nCurrently we store the ArrayXD data as variable length lists. They take more disk space because they must store both actual data and offsets.\r\nIn the `datasets` code this is done here:\r\n\r\nhttps:\/\/github.com\/huggingface\/nlp\/blob\/dbac87c8a083f806467f5afc4ec9b401a7e4c15c\/src\/datasets\/features.py#L346-L352\r\n\r\nTo use a fixed length list, one should use the `list_size` argument of `pyarrow.list_()`.\r\nI believe this would work directly modulo some changes in the numpy conversion here:\r\n\r\nhttps:\/\/github.com\/huggingface\/nlp\/blob\/dbac87c8a083f806467f5afc4ec9b401a7e4c15c\/src\/datasets\/features.py#L381-L395"],"created_at":1617115569000,"updated_at":1618578422000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi, \r\n\r\nI have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D Array with dtype=uint8. \r\n\r\nThe actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.json`. \r\n\r\n`{\r\n \"description\": \"\",\r\n \"citation\": \"\",\r\n \"homepage\": \"\",\r\n \"license\": \"\",\r\n \"features\": {\r\n \"image\": {\r\n \"shape\": [224, 224, 3],\r\n \"dtype\": \"uint8\",\r\n \"id\": null,\r\n \"_type\": \"Array3D\",\r\n }\r\n },\r\n \"post_processed\": null,\r\n \"supervised_keys\": null,\r\n \"builder_name\": \"shot_type_image_dataset\",\r\n \"config_name\": \"default\",\r\n \"version\": {\r\n \"version_str\": \"0.0.0\",\r\n \"description\": null,\r\n \"major\": 0,\r\n \"minor\": 0,\r\n \"patch\": 0,\r\n },\r\n \"splits\": {\r\n \"train\": {\r\n \"name\": \"train\",\r\n \"num_bytes\": 520803408,\r\n \"num_examples\": 1479,\r\n \"dataset_name\": \"shot_type_image_dataset\",\r\n }\r\n },\r\n \"download_checksums\": {\r\n \"\": {\r\n \"num_bytes\": 16940447118,\r\n \"checksum\": \"5854035705efe08b0ed8f3cf3da7b4d29cba9055c2d2d702c79785350d72ee03\",\r\n }\r\n },\r\n \"download_size\": 16940447118,\r\n \"post_processing_size\": null,\r\n \"dataset_size\": 520803408,\r\n \"size_in_bytes\": 17461250526,\r\n}`\r\n\r\nI have created the same dataset with tensorflow_dataset and it takes only 125MB on disk.\r\n\r\nI am wondering, is it normal behavior ? I understand `Datasets` uses Arrow for serialization wheres tf uses TF Records.\r\n\r\nThis might be a problem for large dataset. \r\n\r\nThanks for your help. 
\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2146\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2145","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2145\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2145\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2145\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2145","id":844603518,"node_id":"MDExOlB1bGxSZXF1ZXN0NjAzODMxOTE2","number":2145,"title":"Implement Dataset add_column","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/3","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/3","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/3\/labels","id":6644287,"node_id":"MDk6TWlsZXN0b25lNjY0NDI4Nw==","number":3,"title":"1.7","description":"Next minor 
release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":3,"state":"closed","created_at":1617974191000,"updated_at":1622478053000,"due_on":1620975600000,"closed_at":1622478053000},"comments":["#2274 has been merged. You can now merge master into this branch and use `assert_arrow_metadata_are_synced_with_dataset_features(dset)` to make sure that the metadata are good :)"],"created_at":1617112934000,"updated_at":1619707844000,"closed_at":1619707843000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2145","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2145","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2145.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2145.patch"},"body":"Implement `Dataset.add_column`.\r\n\r\nClose #1954.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2145\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2144","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2144\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2144\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2144\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2144","id":844352067,"node_id":"MDU6SXNzdWU4NDQzNTIwNjc=","number":2144,"title":"Loading wikipedia 20200501.en throws pyarrow related 
error","user":{"login":"TomPyonsuke","id":26637405,"node_id":"MDQ6VXNlcjI2NjM3NDA1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26637405?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/TomPyonsuke","html_url":"https:\/\/github.com\/TomPyonsuke","followers_url":"https:\/\/api.github.com\/users\/TomPyonsuke\/followers","following_url":"https:\/\/api.github.com\/users\/TomPyonsuke\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/TomPyonsuke\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/TomPyonsuke\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/TomPyonsuke\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/TomPyonsuke\/orgs","repos_url":"https:\/\/api.github.com\/users\/TomPyonsuke\/repos","events_url":"https:\/\/api.github.com\/users\/TomPyonsuke\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/TomPyonsuke\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["That's how I loaded the dataset\r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset('wikipedia', '20200501.en', cache_dir='\/usr\/local\/workspace\/NAS_NLP\/cache')\r\n```","Hi ! It looks like the arrow file in the folder\r\n`\/usr\/local\/workspace\/NAS_NLP\/cache\/wikipedia\/20200501.en\/1.0.0\/50aa706aa417bb77d910ad61211cc672c0ef3e0f224225a5e0a18277ade8b931` is corrupted.\r\n\r\nCan you take a look and check that it's 18.3GB ?\r\n\r\nIf not, then maybe you need to redownload it:\r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset('wikipedia', '20200501.en', cache_dir='\/usr\/local\/workspace\/NAS_NLP\/cache', download_mode=\"force_redownload\")\r\n```","> Hi ! It looks like the arrow file in the folder\r\n> `\/usr\/local\/workspace\/NAS_NLP\/cache\/wikipedia\/20200501.en\/1.0.0\/50aa706aa417bb77d910ad61211cc672c0ef3e0f224225a5e0a18277ade8b931` is corrupted.\r\n> \r\n> Can you take a look and check that it's 18.3GB ?\r\n> \r\n> If not, then maybe you need to redownload it:\r\n> \r\n> ```python\r\n> from datasets import load_dataset\r\n> ds = load_dataset('wikipedia', '20200501.en', cache_dir='\/usr\/local\/workspace\/NAS_NLP\/cache', download_mode=\"force_redownload\")\r\n> ```\r\n\r\nHi Ihoestq, thanks for the reply! Actually i think my issue is i couldn't download the dataset beyond 10.7G. It feels like the whole dataset is split into different volumes and after the first one was downloaded it crashed before proceeding to the next one. I did try 'force_redownload' mode but still got the same issue.","I just tried on my side and got no issues.\r\nWhen downloading the dataset again, did it crash at 10.7GB as well ?","> I just tried on my side and got no issues.\r\n> When downloading the dataset again, did it crash at 10.7GB as well ?\r\n\r\nYes i have tried it multiple times on different machines. 
I am wondering if you could share the screenshot of your dependency versions and i will try to make them the same as yours?","I tried using `datasets` from `master` on macos with python 3.7.2\r\nI also have `requests==2.23.0` and `tqdm==4.45.0`."],"created_at":1617100711000,"updated_at":1617268877000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"**Problem description**\r\nI am getting the following error when trying to load wikipedia\/20200501.en dataset.\r\n\r\n**Error log**\r\nDownloading and preparing dataset wikipedia\/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, total: 34.06 GiB) to \/usr\/local\/workspace\/NAS_NLP\/cache\/wikipedia\/20200501.en\/1.0.0\/50aa706aa417bb77d910ad61211cc672c0ef3e0f224225a5e0a18277ade8b931...\r\nDownloading: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 14.6k\/14.6k [00:00<00:00, 5.41MB\/s]\r\nDownloading: 59%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258a | 10.7G\/18.3G [11:30<08:08, 15.5MB\/s]\r\nDataset wikipedia downloaded and prepared to \/usr\/local\/workspace\/NAS_NLP\/cache\/wikipedia\/20200501.en\/1.0.0\/50aa706aa417bb77d910ad61211cc672c0ef3e0f224225a5e0a18277ade8b931. 
Subsequent calls will reuse this data.\r\nTraceback (most recent call last):\r\n File \"load_wiki.py\", line 2, in \r\n ds = load_dataset('wikipedia', '20200501.en', cache_dir='\/usr\/local\/workspace\/NAS_NLP\/cache')\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/load.py\", line 751, in load_dataset\r\n ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/builder.py\", line 746, in as_dataset\r\n map_tuple=True,\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/utils\/py_utils.py\", line 204, in map_nested\r\n _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/utils\/py_utils.py\", line 204, in \r\n _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/utils\/py_utils.py\", line 142, in _single_map_nested\r\n return function(data_struct)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/builder.py\", line 763, in _build_single_dataset\r\n in_memory=in_memory,\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/builder.py\", line 835, in _as_dataset\r\n in_memory=in_memory,\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/arrow_reader.py\", line 215, in read\r\n return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/arrow_reader.py\", line 236, in read_files\r\n pa_table = self._read_files(files, in_memory=in_memory)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/arrow_reader.py\", line 171, in _read_files\r\n pa_table: pa.Table = self._get_dataset_from_filename(f_dict, in_memory=in_memory)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/arrow_reader.py\", line 302, in _get_dataset_from_filename\r\n pa_table = ArrowReader.read_table(filename, in_memory=in_memory)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/arrow_reader.py\", line 324, in read_table\r\n pa_table = f.read_all()\r\n File \"pyarrow\/ipc.pxi\", line 544, in pyarrow.lib.RecordBatchReader.read_all\r\n File \"pyarrow\/error.pxi\", line 99, in pyarrow.lib.check_status\r\nOSError: Expected to be able to read 9176784 bytes for message body, got 4918712\r\n\r\n**Detailed version info**\r\ndatasets==1.5.0\r\n - dataclasses [required: Any, installed: 0.8]\r\n - dill [required: Any, installed: 0.3.3]\r\n - fsspec [required: Any, installed: 0.8.7]\r\n - importlib-metadata [required: Any, installed: 1.7.0]\r\n - zipp [required: >=0.5, installed: 3.1.0]\r\n - huggingface-hub [required: <0.1.0, installed: 0.0.7]\r\n - filelock [required: Any, installed: 3.0.12]\r\n - importlib-metadata [required: Any, installed: 1.7.0]\r\n - zipp [required: >=0.5, installed: 3.1.0]\r\n - requests [required: Any, installed: 2.24.0]\r\n - certifi [required: >=2017.4.17, installed: 2020.6.20]\r\n - chardet [required: >=3.0.2,<4, installed: 3.0.4]\r\n - idna [required: >=2.5,<3, installed: 2.6]\r\n - urllib3 [required: >=1.21.1,<1.26,!=1.25.1,!=1.25.0, installed: 1.25.10]\r\n - tqdm [required: Any, installed: 4.49.0]\r\n - importlib-metadata [required: Any, installed: 1.7.0]\r\n - zipp [required: >=0.5, installed: 3.1.0]\r\n - multiprocess [required: Any, installed: 
0.70.11.1]\r\n - dill [required: >=0.3.3, installed: 0.3.3]\r\n - numpy [required: >=1.17, installed: 1.17.0]\r\n - pandas [required: Any, installed: 1.1.5]\r\n - numpy [required: >=1.15.4, installed: 1.17.0]\r\n - python-dateutil [required: >=2.7.3, installed: 2.8.0]\r\n - six [required: >=1.5, installed: 1.15.0]\r\n - pytz [required: >=2017.2, installed: 2020.1]\r\n - pyarrow [required: >=0.17.1, installed: 3.0.0]\r\n - numpy [required: >=1.16.6, installed: 1.17.0]\r\n - requests [required: >=2.19.0, installed: 2.24.0]\r\n - certifi [required: >=2017.4.17, installed: 2020.6.20]\r\n - chardet [required: >=3.0.2,<4, installed: 3.0.4]\r\n - idna [required: >=2.5,<3, installed: 2.6]\r\n - urllib3 [required: >=1.21.1,<1.26,!=1.25.1,!=1.25.0, installed: 1.25.10]\r\n - tqdm [required: >=4.27,<4.50.0, installed: 4.49.0]\r\n - xxhash [required: Any, installed: 2.0.0]\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2144\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2143","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2143\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2143\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2143\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2143","id":844313228,"node_id":"MDExOlB1bGxSZXF1ZXN0NjAzNTc0NjI0","number":2143,"title":"task casting via load_dataset","user":{"login":"theo-m","id":17948980,"node_id":"MDQ6VXNlcjE3OTQ4OTgw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17948980?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/theo-m","html_url":"https:\/\/github.com\/theo-m","followers_url":"https:\/\/api.github.com\/users\/theo-m\/followers","following_url":"https:\/\/api.github.com\/users\/theo-m\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/theo-m\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/theo-m\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/theo-m\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/theo-m\/orgs","repos_url":"https:\/\/api.github.com\/users\/theo-m\/repos","events_url":"https:\/\/api.github.com\/users\/theo-m\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/theo-m\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"theo-m","id":17948980,"node_id":"MDQ6VXNlcjE3OTQ4OTgw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17948980?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/theo-m","html_url":"https:\/\/github.com\/theo-m","followers_url":"https:\/\/api.github.com\/users\/theo-m\/followers","following_url":"https:\/\/api.github.com\/users\/theo-m\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/theo-m\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/theo-m\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/theo-m\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/theo-m\/orgs","repos_url":"https:\/\/api.github.com\/users\/theo-m\/repos","events_url":"https:\/\/api.github.com\/users\/theo-m\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/theo-m\/
received_events","type":"User","site_admin":false},"assignees":[{"login":"theo-m","id":17948980,"node_id":"MDQ6VXNlcjE3OTQ4OTgw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17948980?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/theo-m","html_url":"https:\/\/github.com\/theo-m","followers_url":"https:\/\/api.github.com\/users\/theo-m\/followers","following_url":"https:\/\/api.github.com\/users\/theo-m\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/theo-m\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/theo-m\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/theo-m\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/theo-m\/orgs","repos_url":"https:\/\/api.github.com\/users\/theo-m\/repos","events_url":"https:\/\/api.github.com\/users\/theo-m\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/theo-m\/received_events","type":"User","site_admin":false},{"login":"SBrandeis","id":33657802,"node_id":"MDQ6VXNlcjMzNjU3ODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33657802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SBrandeis","html_url":"https:\/\/github.com\/SBrandeis","followers_url":"https:\/\/api.github.com\/users\/SBrandeis\/followers","following_url":"https:\/\/api.github.com\/users\/SBrandeis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SBrandeis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SBrandeis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SBrandeis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SBrandeis\/orgs","repos_url":"https:\/\/api.github.com\/users\/SBrandeis\/repos","events_url":"https:\/\/api.github.com\/users\/SBrandeis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SBrandeis\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1617098442000,"updated_at":1623417641000,"closed_at":1623417636000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2143","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2143","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2143.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2143.patch"},"body":"wip\r\nnot satisfied with the API, it means as a dataset implementer I need to write a function with boilerplate and write classes for each `` \"facet\".","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2143\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2142","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2142\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2142\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2142\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2142","id":843919420,"node_id":"MDExOlB1bGxSZXF1ZXN0NjAzMjQwMzUy","number":2142,"title":"Gem 
V1.1","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1617061622000,"updated_at":1617063002000,"closed_at":1617063002000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2142","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2142","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2142.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2142.patch"},"body":"This branch updates the GEM benchmark to its 1.1 version which includes:\r\n- challenge sets for most tasks\r\n- detokenized TurkCorpus to match the rest of the text simplification subtasks\r\n- fixed inputs for TurkCorpus and ASSET test sets\r\n- 18 languages in WikiLingua\r\n\r\ncc @sebastianGehrmann","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2142\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2141","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2141\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2141\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2141\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2141","id":843914790,"node_id":"MDExOlB1bGxSZXF1ZXN0NjAzMjM2MjUw","number":2141,"title":"added spans field for the wikiann 
datasets","user":{"login":"rabeehk","id":6278280,"node_id":"MDQ6VXNlcjYyNzgyODA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6278280?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rabeehk","html_url":"https:\/\/github.com\/rabeehk","followers_url":"https:\/\/api.github.com\/users\/rabeehk\/followers","following_url":"https:\/\/api.github.com\/users\/rabeehk\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rabeehk\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rabeehk\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rabeehk\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rabeehk\/orgs","repos_url":"https:\/\/api.github.com\/users\/rabeehk\/repos","events_url":"https:\/\/api.github.com\/users\/rabeehk\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rabeehk\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq \r\nThanks a lot for taking time checking it. I update \"dataset_infos.json\", I added description to the function of _generate_samples in wikiann.py but I was not sure about the format to write in README. thanks. ","Thanks !\r\n\r\nFor the fields description in the dataset card, something like this does the job:\r\n```\r\n- `tokens`: a `list` of `string` features.\r\n- `langs`: a `list` of `string` features that correspond to the language of each token.\r\n- `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-PER` (1), `I-PER` (2), `B-ORG` (3), `I-ORG` (4), `B-LOC` (5), `I-LOC` (6).\r\n- `spans`: a `list` of `string` features, that is the list of named entities in the input text formatted as ``: ``\r\n```\r\n\r\nAlso for information, I think the trailer of rick and morty season 5 is out now :)","Hi @lhoestq \r\nthank you! This is updated now, please feel free to let me know if I need to modify something :) thanks "],"created_at":1617061106000,"updated_at":1617197270000,"closed_at":1617197270000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2141","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2141","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2141.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2141.patch"},"body":"Hi @lhoestq \r\nI tried to add spans to the wikiann datasets.\r\nThanks a lot for kindly having a look.\r\nThis addresses https:\/\/github.com\/huggingface\/datasets\/issues\/2130. 
\r\nBest regards\r\nRabeeh ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2141\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2140","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2140\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2140\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2140\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2140","id":843830451,"node_id":"MDExOlB1bGxSZXF1ZXN0NjAzMTYxMjYx","number":2140,"title":"add banking77 dataset","user":{"login":"dkajtoch","id":32985207,"node_id":"MDQ6VXNlcjMyOTg1MjA3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32985207?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dkajtoch","html_url":"https:\/\/github.com\/dkajtoch","followers_url":"https:\/\/api.github.com\/users\/dkajtoch\/followers","following_url":"https:\/\/api.github.com\/users\/dkajtoch\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dkajtoch\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dkajtoch\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dkajtoch\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dkajtoch\/orgs","repos_url":"https:\/\/api.github.com\/users\/dkajtoch\/repos","events_url":"https:\/\/api.github.com\/users\/dkajtoch\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dkajtoch\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq I updated files"],"created_at":1617053543000,"updated_at":1617960738000,"closed_at":1617960738000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2140","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2140","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2140.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2140.patch"},"body":"Intent classification\/detection dataset from banking category with 77 unique intents.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2140\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2139","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2139\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2139\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2139\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2139","id":843662613,"node_id":"MDU6SXNzdWU4NDM2NjI2MTM=","number":2139,"title":"TypeError when using save_to_disk in a dataset loaded with ReadInstruction 
split","user":{"login":"PedroMLF","id":22480495,"node_id":"MDQ6VXNlcjIyNDgwNDk1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22480495?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/PedroMLF","html_url":"https:\/\/github.com\/PedroMLF","followers_url":"https:\/\/api.github.com\/users\/PedroMLF\/followers","following_url":"https:\/\/api.github.com\/users\/PedroMLF\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/PedroMLF\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/PedroMLF\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/PedroMLF\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/PedroMLF\/orgs","repos_url":"https:\/\/api.github.com\/users\/PedroMLF\/repos","events_url":"https:\/\/api.github.com\/users\/PedroMLF\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/PedroMLF\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi !\r\nI think this has been fixed recently on `master`.\r\nCan you try again by installing `datasets` from `master` ?\r\n```\r\npip install git+https:\/\/github.com\/huggingface\/datasets.git\r\n```","Hi!\r\n\r\nUsing that version of the code solves the issue. Thanks!"],"created_at":1617042234000,"updated_at":1617095573000,"closed_at":1617095573000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\n\r\nLoading a dataset with `load_dataset` using a split defined via `ReadInstruction` and then saving it to disk results in the following error: `TypeError: Object of type ReadInstruction is not JSON serializable`.\r\n\r\nHere is the minimal reproducible example:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nfrom datasets import ReadInstruction\r\n\r\ndata_1 = load_dataset(\r\n \"wikiann\",\r\n \"en\",\r\n split=\"validation\",\r\n)\r\n\r\ndata_1.save_to_disk(\"temporary_path_1\")\r\n\r\nprint(\"Save with regular split works.\")\r\n\r\ndata_2 = load_dataset(\r\n \"wikiann\",\r\n \"en\",\r\n split=ReadInstruction(\"validation\", to=50, unit=\"%\"),\r\n)\r\n\r\ndata_2.save_to_disk(\"temporary_path_2\")\r\n```\r\n\r\nand the corresponding output:\r\n\r\n```\r\nReusing dataset wikiann (\/xxxxx\/.cache\/huggingface\/datasets\/wikiann\/en\/1.1.0\/0b11a6fb31eea02f38ca17610657bfba3206100685283014daceb8da291c3be9)\r\nSave with regular split works.\r\nReusing dataset wikiann (\/xxxxx\/.cache\/huggingface\/datasets\/wikiann\/en\/1.1.0\/0b11a6fb31eea02f38ca17610657bfba3206100685283014daceb8da291c3be9)\r\nTraceback (most recent call last):\r\n File \"bug.py\", line 20, in \r\n data_2.save_to_disk(\"temporary_path_2\")\r\n File \"\/xxxxx\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 645, in save_to_disk\r\n json.dump(state, state_file, indent=2, sort_keys=True)\r\n File \"\/usr\/lib\/python3.7\/json\/__init__.py\", line 179, in dump\r\n for chunk in iterable:\r\n File \"\/usr\/lib\/python3.7\/json\/encoder.py\", line 431, in _iterencode\r\n yield from _iterencode_dict(o, _current_indent_level)\r\n File \"\/usr\/lib\/python3.7\/json\/encoder.py\", line 405, in _iterencode_dict\r\n yield from chunks\r\n File \"\/usr\/lib\/python3.7\/json\/encoder.py\", line 438, in _iterencode\r\n o = _default(o)\r\n File \"\/usr\/lib\/python3.7\/json\/encoder.py\", line 179, in default\r\n raise TypeError(f'Object of type {o.__class__.__name__} '\r\nTypeError: Object of type 
ReadInstruction is not JSON serializable\r\n```\r\n\r\nLet me know if there is some misuse from my end.\r\n\r\nThanks in advance.\r\n ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2139\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2138","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2138\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2138\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2138\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2138","id":843508402,"node_id":"MDExOlB1bGxSZXF1ZXN0NjAyODc4NzU2","number":2138,"title":"Add CER metric","user":{"login":"chutaklee","id":6931004,"node_id":"MDQ6VXNlcjY5MzEwMDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6931004?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/chutaklee","html_url":"https:\/\/github.com\/chutaklee","followers_url":"https:\/\/api.github.com\/users\/chutaklee\/followers","following_url":"https:\/\/api.github.com\/users\/chutaklee\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/chutaklee\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/chutaklee\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/chutaklee\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/chutaklee\/orgs","repos_url":"https:\/\/api.github.com\/users\/chutaklee\/repos","events_url":"https:\/\/api.github.com\/users\/chutaklee\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/chutaklee\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1617033147000,"updated_at":1617725771000,"closed_at":1617693278000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2138","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2138","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2138.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2138.patch"},"body":"Add Character Error Rate (CER) metric that is used in evaluation in ASR. 
I also have written unittests (hopefully thorough enough) but I'm not sure how to integrate them into the existed codebase.\r\n\r\n```python\r\nfrom cer import CER\r\n\r\ncer = CER()\r\n\r\nclass TestCER(unittest.TestCase):\r\n def test_cer_case_senstive(self):\r\n refs = ['White House']\r\n preds = ['white house']\r\n # S = 2, D = 0, I = 0, N = 11, CER = 2 \/ 11\r\n char_error_rate = cer.compute(predictions=preds, references=refs)\r\n self.assertTrue(abs(char_error_rate - 0.1818181818) < 1e-6)\r\n\r\n def test_cer_whitespace(self):\r\n refs = ['were wolf']\r\n preds = ['werewolf']\r\n # S = 0, D = 0, I = 1, N = 9, CER = 1 \/ 9\r\n char_error_rate = cer.compute(predictions=preds, references=refs)\r\n self.assertTrue(abs(char_error_rate - 0.1111111) < 1e-6)\r\n\r\n refs = ['werewolf']\r\n preds = ['weae wolf']\r\n # S = 1, D = 1, I = 0, N = 8, CER = 0.25\r\n char_error_rate = cer.compute(predictions=preds, references=refs)\r\n self.assertTrue(abs(char_error_rate - 0.25) < 1e-6)\r\n\r\n # consecutive whitespaces case 1\r\n refs = ['were wolf']\r\n preds = ['were wolf']\r\n # S = 0, D = 0, I = 0, N = 9, CER = 0\r\n char_error_rate = cer.compute(predictions=preds, references=refs)\r\n self.assertTrue(abs(char_error_rate - 0.0) < 1e-6)\r\n\r\n # consecutive whitespaces case 2\r\n refs = ['were wolf']\r\n preds = ['were wolf']\r\n # S = 0, D = 0, I = 0, N = 9, CER = 0\r\n char_error_rate = cer.compute(predictions=preds, references=refs)\r\n self.assertTrue(abs(char_error_rate - 0.0) < 1e-6)\r\n\r\n def test_cer_sub(self):\r\n refs = ['werewolf']\r\n preds = ['weaewolf']\r\n # S = 1, D = 0, I = 0, N = 8, CER = 0.125\r\n char_error_rate = cer.compute(predictions=preds, references=refs)\r\n self.assertTrue(abs(char_error_rate - 0.125) < 1e-6)\r\n\r\n def test_cer_del(self):\r\n refs = ['werewolf']\r\n preds = ['wereawolf']\r\n # S = 0, D = 1, I = 0, N = 8, CER = 0.125\r\n char_error_rate = cer.compute(predictions=preds, references=refs)\r\n self.assertTrue(abs(char_error_rate - 0.125) < 1e-6)\r\n\r\n def test_cer_insert(self):\r\n refs = ['werewolf']\r\n preds = ['wereolf']\r\n # S = 0, D = 0, I = 1, N = 8, CER = 0.125\r\n char_error_rate = cer.compute(predictions=preds, references=refs)\r\n self.assertTrue(abs(char_error_rate - 0.125) < 1e-6)\r\n\r\n def test_cer_equal(self):\r\n refs = ['werewolf']\r\n char_error_rate = cer.compute(predictions=refs, references=refs)\r\n self.assertEqual(char_error_rate, 0.0)\r\n\r\n def test_cer_list_of_seqs(self):\r\n refs = ['werewolf', 'I am your father']\r\n char_error_rate = cer.compute(predictions=refs, references=refs)\r\n self.assertEqual(char_error_rate, 0.0)\r\n\r\n refs = ['werewolf', 'I am your father', 'doge']\r\n preds = ['werxwolf', 'I am your father', 'doge']\r\n # S = 1, D = 0, I = 0, N = 28, CER = 1 \/ 28\r\n char_error_rate = cer.compute(predictions=preds, references=refs)\r\n self.assertTrue(abs(char_error_rate - 0.03571428) < 1e-6)\r\n\r\n def test_cer_unicode(self):\r\n ref = [u'\u6211\u80fd\u541e\u4e0b\u73bb\u7483\u800c\u4e0d\u4f24\u8eab\u4f53']\r\n pred = [u' \u80fd\u541e\u867e\u73bb\u7483\u800c \u4e0d\u971c\u8eab\u4f53\u5566']\r\n # S = 3, D = 2, I = 0, N = 11\r\n # CER = 5 \/ 11\r\n char_error_rate = cer.compute(predictions=pred, references=ref)\r\n self.assertTrue(abs(char_error_rate - 0.4545454545) < 1e-6)\r\n\r\n ref = [u'\u6211\u80fd\u541e', u'\u4e0b\u73bb\u7483\u800c\u4e0d\u4f24\u8eab\u4f53']\r\n pred = [u'\u6211 \u80fd \u541e \u4e0b \u73bb \u7483', u'\u800c\u4e0d\u4f24\u8eab\u4f53']\r\n # S = 0, D = 5, I = 0, N = 11\r\n # CER = 5 
\/ 11\r\n char_error_rate = cer.compute(predictions=pred, references=ref)\r\n self.assertTrue(abs(char_error_rate - 0.454545454545) < 1e-6)\r\n\r\n ref = [u'\u6211\u80fd\u541e\u4e0b\u73bb\u7483\u800c\u4e0d\u4f24\u8eab\u4f53']\r\n char_error_rate = cer.compute(predictions=ref, references=ref)\r\n self.assertFalse(char_error_rate, 0.0)\r\n\r\n def test_cer_empty(self):\r\n ref = ''\r\n pred = 'Hypothesis'\r\n with self.assertRaises(ValueError):\r\n char_error_rate = cer.compute(predictions=pred, references=ref)\r\n\r\nif __name__ == '__main__':\r\n unittest.main()\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2138\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2137","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2137\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2137\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2137\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2137","id":843502835,"node_id":"MDExOlB1bGxSZXF1ZXN0NjAyODc0MDYw","number":2137,"title":"Fix missing infos from concurrent dataset loading","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1617032772000,"updated_at":1617186956000,"closed_at":1617186955000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2137","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2137","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2137.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2137.patch"},"body":"This should fix issue #2131 \r\n\r\nWhen calling `load_dataset` at the same time from 2 workers, one of the worker could have missing split infos when reloading the dataset from the cache.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2137\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2136","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2136\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2136\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2136\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2136","id":843492015,"node_id":"MDExOlB1bGxSZXF1ZXN0NjAyODY0ODY5","number":2136,"title":"fix dialogue action slot name and value","user":{"login":"adamlin120","id":31605305,"node_id":"MDQ6VXNlcjMxNjA1MzA1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/31605305?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/adamlin120","html_url":"https:\/\/github.com\/adamlin120","followers_url":"https:\/\/api.github.com\/users\/adamlin120\/followers","following_url":"https:\/\/api.github.com\/users\/adamlin120\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/adamlin120\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/adamlin120\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/adamlin120\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/adamlin120\/orgs","repos_url":"https:\/\/api.github.com\/users\/adamlin120\/repos","events_url":"https:\/\/api.github.com\/users\/adamlin120\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/adamlin120\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1617032053000,"updated_at":1617194882000,"closed_at":1617194881000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2136","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2136","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2136.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2136.patch"},"body":"fix #2128","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2136\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2135","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2135\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2135\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2135\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2135","id":843246344,"node_id":"MDU6SXNzdWU4NDMyNDYzNDQ=","number":2135,"title":"en language data from MLQA dataset is 
missing","user":{"login":"rabeehk","id":6278280,"node_id":"MDQ6VXNlcjYyNzgyODA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6278280?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rabeehk","html_url":"https:\/\/github.com\/rabeehk","followers_url":"https:\/\/api.github.com\/users\/rabeehk\/followers","following_url":"https:\/\/api.github.com\/users\/rabeehk\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rabeehk\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rabeehk\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rabeehk\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rabeehk\/orgs","repos_url":"https:\/\/api.github.com\/users\/rabeehk\/repos","events_url":"https:\/\/api.github.com\/users\/rabeehk\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rabeehk\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Indeed only the languages of the `translate-train` data are included...\r\nI can't find a link to download the english train set on https:\/\/github.com\/facebookresearch\/MLQA though, do you know where we can download it ?","Hi @lhoestq \r\nthank you very much for coming back to me, now I see, you are right, in the link you sent I see split of {split}-context-{context_language}-question-{question_language}.json with context_language=question_language=en, TFDS most probably has extracted english ones from these files as en language files, but translate-train\/test do not have en indeed. thanks a lot for the great explanations","I close the ticket, since I do not see any en existing, they have trained on \"SQuAD V1.1\" instead. Thanks. "],"created_at":1617014870000,"updated_at":1617099623000,"closed_at":1617099623000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI need mlqa-translate-train.en dataset, but it is missing from the MLQA dataset. could you have a look please? @lhoestq thank you for your help to fix this issue. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2135\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2134","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2134\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2134\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2134\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2134","id":843242849,"node_id":"MDU6SXNzdWU4NDMyNDI4NDk=","number":2134,"title":"Saving large in-memory datasets with save_to_disk crashes because of pickling","user":{"login":"prokopCerny","id":5815801,"node_id":"MDQ6VXNlcjU4MTU4MDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5815801?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/prokopCerny","html_url":"https:\/\/github.com\/prokopCerny","followers_url":"https:\/\/api.github.com\/users\/prokopCerny\/followers","following_url":"https:\/\/api.github.com\/users\/prokopCerny\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/prokopCerny\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/prokopCerny\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/prokopCerny\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/prokopCerny\/orgs","repos_url":"https:\/\/api.github.com\/users\/prokopCerny\/repos","events_url":"https:\/\/api.github.com\/users\/prokopCerny\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/prokopCerny\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi !\r\nIndeed `save_to_disk` doesn't call pickle anymore. Though the `OverflowError` can still appear for in-memory datasets bigger than 4GB. This happens when doing this for example:\r\n```python\r\nimport pyarrow as pa\r\nimport pickle\r\n\r\narr = pa.array([0] * ((4 * 8 << 30) \/\/ 64))\r\ntable = pa.Table.from_arrays([a], names=[\"foo\"])\r\npickle.dumps(table) # fails with an OverflowError\r\npickle.dumps(table, 4) # works !\r\n```\r\nWe'll do the change to use `protocol=4`.\r\n\r\nMoreover I've also seen other users complain about this error\r\n```\r\nstruct.error: 'I' format requires 0 <= number <= 4294967295\r\n```\r\n\r\nIt looks like something related to the 4GB limit as well but I'm not able to reproduce on my side.\r\nDo you think you can provide a script that reproduces the issue ?\r\nHow big is your dataset ? 
(number of bytes, number of rows)\r\n\r\n","Hi!\r\nSo I've managed to created a minimum working (well technically crashing) example for the multiprocessing case, I create a huge list of zeros, like in your example, and then I try to .map(None, num_proc=2) over it, which then crashes, here's the code:\r\n\r\n```python\r\nfrom datasets import Dataset\r\n\r\nif __name__ == '__main__':\r\n ton_of_zeroes = [0] * ((12 * 8 << 30) \/\/ 64)\r\n large_dataset = Dataset.from_dict({'col': ton_of_zeroes})\r\n print(\"Start\")\r\n large_dataset.map(function=None, num_proc=2)\r\n print(\"Done - should not print\")\r\n```\r\n\r\nThe amount of zeros could probably be reduced, I haven't tried to minimize it to find the breaking point, I just increased it from your code (which by quick glance I assumed tried to allocate over 4 GiB)\r\n\r\nRunning this results in the following traceback:\r\n\r\n```\r\nParameter 'indices'=[ 0 1 2 ... 805306365 805306366 805306367] of the transform datasets.arrow_dataset.Dataset.select couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.\r\nTraceback (most recent call last):\r\n File \".\/crash_multiproc_pickle.py\", line 7, in \r\n large_dataset.map(function=None, num_proc=2)\r\n File \"\/home\/cernypro\/dev\/envs\/huggingface_gpu\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 1485, in map\r\n transformed_shards = [r.get() for r in results]\r\n File \"\/home\/cernypro\/dev\/envs\/huggingface_gpu\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 1485, in \r\n transformed_shards = [r.get() for r in results]\r\n File \"\/home\/cernypro\/dev\/envs\/huggingface_gpu\/lib\/python3.7\/site-packages\/multiprocess\/pool.py\", line 657, in get\r\n raise self._value\r\n File \"\/home\/cernypro\/dev\/envs\/huggingface_gpu\/lib\/python3.7\/site-packages\/multiprocess\/pool.py\", line 431, in _handle_tasks\r\n put(task)\r\n File \"\/home\/cernypro\/dev\/envs\/huggingface_gpu\/lib\/python3.7\/site-packages\/multiprocess\/connection.py\", line 209, in send\r\n self._send_bytes(_ForkingPickler.dumps(obj))\r\n File \"\/home\/cernypro\/dev\/envs\/huggingface_gpu\/lib\/python3.7\/site-packages\/multiprocess\/reduction.py\", line 54, in dumps\r\n cls(buf, protocol, *args, **kwds).dump(obj)\r\n File \"\/home\/cernypro\/dev\/envs\/huggingface_gpu\/lib\/python3.7\/site-packages\/dill\/_dill.py\", line 454, in dump\r\n StockPickler.dump(self, obj)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 437, in dump\r\n self.save(obj)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 789, in save_tuple\r\n save(element)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/home\/cernypro\/dev\/envs\/huggingface_gpu\/lib\/python3.7\/site-packages\/dill\/_dill.py\", line 941, in save_module_dict\r\n StockPickler.save_dict(pickler, obj)\r\n File 
\"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 859, in save_dict\r\n self._batch_setitems(obj.items())\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 885, in _batch_setitems\r\n save(v)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 549, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 662, in save_reduce\r\n save(state)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/home\/cernypro\/dev\/envs\/huggingface_gpu\/lib\/python3.7\/site-packages\/dill\/_dill.py\", line 941, in save_module_dict\r\n StockPickler.save_dict(pickler, obj)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 859, in save_dict\r\n self._batch_setitems(obj.items())\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 885, in _batch_setitems\r\n save(v)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 549, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 638, in save_reduce\r\n save(args)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 774, in save_tuple\r\n save(element)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 819, in save_list\r\n self._batch_appends(obj)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 846, in _batch_appends\r\n save(tmp[0])\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 549, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 638, in save_reduce\r\n save(args)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 774, in save_tuple\r\n save(element)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 819, in save_list\r\n self._batch_appends(obj)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 846, in _batch_appends\r\n save(tmp[0])\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 549, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 638, in save_reduce\r\n save(args)\r\n File 
\"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 774, in save_tuple\r\n save(element)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 789, in save_tuple\r\n save(element)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 819, in save_list\r\n self._batch_appends(obj)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 843, in _batch_appends\r\n save(x)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 549, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 638, in save_reduce\r\n save(args)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 774, in save_tuple\r\n save(element)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 732, in save_bytes\r\n self._write_large_bytes(BINBYTES + pack(\"\r\n main()\r\n File \".\/tokenize_and_chunkify_in_memory.py\", line 75, in main\r\n tokenize_and_chunkify(config)\r\n File \".\/tokenize_and_chunkify_in_memory.py\", line 60, in tokenize_and_chunkify\r\n contexts_dataset.save_to_disk(chunked_path)\r\n File \"\/home\/cernypro\/dev\/envs\/huggingface_gpu\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 457, in save_to_disk\r\n self = pickle.loads(pickle.dumps(self))\r\nOverflowError: cannot serialize a bytes object larger than 4 GiB\r\n```\r\nFrom what I've seen this issue may be possibly fixed, as the line `self = pickle.loads(pickle.dumps(self))` does not appear to be present in the current state of the repository.\r\n\r\nTo save these datasets to disk, I've resorted to calling .map() over them with `function=None` and specifying the .arrow cache file, and then creating a new dataset using the .from_file() method, which I can then safely save to disk.\r\n\r\nAdditional issue when working with these large in-memory datasets is when using multiprocessing, is again to do with pickling. I've tried to speed up the mapping with function=None by specifying num_proc to the available cpu count, and I again get issues with transferring the dataset, with the following traceback. 
I am not sure if I should open a separate issue for that.\r\n```\r\nTraceback (most recent call last):\r\n File \".\/tokenize_and_chunkify_in_memory.py\", line 94, in \r\n main()\r\n File \".\/tokenize_and_chunkify_in_memory.py\", line 89, in main\r\n tokenize_and_chunkify(config)\r\n File \".\/tokenize_and_chunkify_in_memory.py\", line 67, in tokenize_and_chunkify\r\n contexts_dataset.map(function=None, cache_file_name=str(output_dir_path \/ \"tmp.arrow\"), writer_batch_size=50000, num_proc=config.threads)\r\n File \"\/home\/cernypro\/dev\/envs\/huggingface_gpu\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 1485, in map\r\n transformed_shards = [r.get() for r in results]\r\n File \"\/home\/cernypro\/dev\/envs\/huggingface_gpu\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 1485, in \r\n transformed_shards = [r.get() for r in results]\r\n File \"\/home\/cernypro\/dev\/envs\/huggingface_gpu\/lib\/python3.7\/site-packages\/multiprocess\/pool.py\", line 657, in get\r\n raise self._value\r\n File \"\/home\/cernypro\/dev\/envs\/huggingface_gpu\/lib\/python3.7\/site-packages\/multiprocess\/pool.py\", line 431, in _handle_tasks\r\n put(task)\r\n File \"\/home\/cernypro\/dev\/envs\/huggingface_gpu\/lib\/python3.7\/site-packages\/multiprocess\/connection.py\", line 209, in send\r\n self._send_bytes(_ForkingPickler.dumps(obj))\r\n File \"\/home\/cernypro\/dev\/envs\/huggingface_gpu\/lib\/python3.7\/site-packages\/multiprocess\/reduction.py\", line 54, in dumps\r\n cls(buf, protocol, *args, **kwds).dump(obj)\r\n File \"\/home\/cernypro\/dev\/envs\/huggingface_gpu\/lib\/python3.7\/site-packages\/dill\/_dill.py\", line 454, in dump\r\n StockPickler.dump(self, obj)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 437, in dump\r\n self.save(obj)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 789, in save_tuple\r\n save(element)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/home\/cernypro\/dev\/envs\/huggingface_gpu\/lib\/python3.7\/site-packages\/dill\/_dill.py\", line 941, in save_module_dict\r\n StockPickler.save_dict(pickler, obj)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 859, in save_dict\r\n self._batch_setitems(obj.items())\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 885, in _batch_setitems\r\n save(v)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 549, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 662, in save_reduce\r\n save(state)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/home\/cernypro\/dev\/envs\/huggingface_gpu\/lib\/python3.7\/site-packages\/dill\/_dill.py\", line 941, in save_module_dict\r\n StockPickler.save_dict(pickler, obj)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 859, in save_dict\r\n 
self._batch_setitems(obj.items())\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 885, in _batch_setitems\r\n save(v)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 549, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 638, in save_reduce\r\n save(args)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 774, in save_tuple\r\n save(element)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 819, in save_list\r\n self._batch_appends(obj)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 843, in _batch_appends\r\n save(x)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 549, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 638, in save_reduce\r\n save(args)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 774, in save_tuple\r\n save(element)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 819, in save_list\r\n self._batch_appends(obj)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 846, in _batch_appends\r\n save(tmp[0])\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 549, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 638, in save_reduce\r\n save(args)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 774, in save_tuple\r\n save(element)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 789, in save_tuple\r\n save(element)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 819, in save_list\r\n self._batch_appends(obj)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 846, in _batch_appends\r\n save(tmp[0])\r\n 
File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 789, in save_tuple\r\n save(element)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 819, in save_list\r\n self._batch_appends(obj)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 846, in _batch_appends\r\n save(tmp[0])\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 789, in save_tuple\r\n save(element)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 819, in save_list\r\n self._batch_appends(obj)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 843, in _batch_appends\r\n save(x)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 549, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 638, in save_reduce\r\n save(args)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 774, in save_tuple\r\n save(element)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 732, in save_bytes\r\n self._write_large_bytes(BINBYTES + pack(\"\r\n main()\r\n File \".\/tokenize_and_chunkify_in_memory.py\", line 89, in main\r\n tokenize_and_chunkify(config)\r\n File \".\/tokenize_and_chunkify_in_memory.py\", line 67, in tokenize_and_chunkify\r\n contexts_dataset.map(function=None, cache_file_name=str(output_dir_path \/ \"tmp.arrow\"), writer_batch_size=50000, num_proc=config.threads)\r\n File \"\/home\/cernypro\/dev\/envs\/huggingface_gpu\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 1485, in map\r\n transformed_shards = [r.get() for r in results]\r\n File \"\/home\/cernypro\/dev\/envs\/huggingface_gpu\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 1485, in \r\n transformed_shards = [r.get() for r in results]\r\n File \"\/home\/cernypro\/dev\/envs\/huggingface_gpu\/lib\/python3.7\/site-packages\/multiprocess\/pool.py\", line 657, in get\r\n raise self._value\r\n File \"\/home\/cernypro\/dev\/envs\/huggingface_gpu\/lib\/python3.7\/site-packages\/multiprocess\/pool.py\", line 431, in _handle_tasks\r\n put(task)\r\n File \"\/home\/cernypro\/dev\/envs\/huggingface_gpu\/lib\/python3.7\/site-packages\/multiprocess\/connection.py\", line 
209, in send\r\n self._send_bytes(_ForkingPickler.dumps(obj))\r\n File \"\/home\/cernypro\/dev\/envs\/huggingface_gpu\/lib\/python3.7\/site-packages\/multiprocess\/reduction.py\", line 54, in dumps\r\n cls(buf, protocol, *args, **kwds).dump(obj)\r\n File \"\/home\/cernypro\/dev\/envs\/huggingface_gpu\/lib\/python3.7\/site-packages\/dill\/_dill.py\", line 454, in dump\r\n StockPickler.dump(self, obj)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 437, in dump\r\n self.save(obj)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 789, in save_tuple\r\n save(element)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/home\/cernypro\/dev\/envs\/huggingface_gpu\/lib\/python3.7\/site-packages\/dill\/_dill.py\", line 941, in save_module_dict\r\n StockPickler.save_dict(pickler, obj)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 859, in save_dict\r\n self._batch_setitems(obj.items())\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 885, in _batch_setitems\r\n save(v)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 549, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 662, in save_reduce\r\n save(state)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/home\/cernypro\/dev\/envs\/huggingface_gpu\/lib\/python3.7\/site-packages\/dill\/_dill.py\", line 941, in save_module_dict\r\n StockPickler.save_dict(pickler, obj)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 859, in save_dict\r\n self._batch_setitems(obj.items())\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 885, in _batch_setitems\r\n save(v)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 549, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 638, in save_reduce\r\n save(args)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 774, in save_tuple\r\n save(element)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 819, in save_list\r\n self._batch_appends(obj)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 843, in _batch_appends\r\n save(x)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 549, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File 
\"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 638, in save_reduce\r\n save(args)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 774, in save_tuple\r\n save(element)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 819, in save_list\r\n self._batch_appends(obj)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 846, in _batch_appends\r\n save(tmp[0])\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 549, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 638, in save_reduce\r\n save(args)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 774, in save_tuple\r\n save(element)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 789, in save_tuple\r\n save(element)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 819, in save_list\r\n self._batch_appends(obj)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 846, in _batch_appends\r\n save(tmp[0])\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 789, in save_tuple\r\n save(element)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 819, in save_list\r\n self._batch_appends(obj)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 846, in _batch_appends\r\n save(tmp[0])\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 789, in save_tuple\r\n save(element)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 819, in save_list\r\n 
self._batch_appends(obj)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 843, in _batch_appends\r\n save(x)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 549, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 638, in save_reduce\r\n save(args)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 774, in save_tuple\r\n save(element)\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/mnt\/appl\/software\/Python\/3.7.4-GCCcore-8.3.0\/lib\/python3.7\/pickle.py\", line 732, in save_bytes\r\n self._write_large_bytes(BINBYTES + pack(\">> questions = [\r\n... \"\\u0645\\u062a\\u0649 \\u0628\\u062f\\u0627\\u062a \\u0627\\u0644\\u0645\\u062c\\u0644\\u0629 \\u0627\\u0644\\u0645\\u062f\\u0631\\u0633\\u064a\\u0629 \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u0645 \\u0628\\u0627\\u0644\\u0646\\u0634\\u0631?\",\r\n... \"\\u0643\\u0645 \\u0645\\u0631\\u0629 \\u064a\\u062a\\u0645 \\u0646\\u0634\\u0631\\u0647\\u0627 \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u0645?\",\r\n... \"\\u0645\\u0627 \\u0647\\u064a \\u0627\\u0644\\u0648\\u0631\\u0642\\u0629 \\u0627\\u0644\\u064a\\u0648\\u0645\\u064a\\u0629 \\u0644\\u0644\\u0637\\u0644\\u0627\\u0628 \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u0645?\",\r\n... \"\\u0643\\u0645 \\u0639\\u062f\\u062f \\u0627\\u0644\\u0627\\u0648\\u0631\\u0627\\u0642 \\u0627\\u0644\\u0627\\u062e\\u0628\\u0627\\u0631\\u064a\\u0629 \\u0644\\u0644\\u0637\\u0644\\u0627\\u0628 \\u0627\\u0644\\u062a\\u064a \\u0648\\u062c\\u062f\\u062a \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u0645?\",\r\n... \"\\u0641\\u064a \\u0627\\u064a \\u0633\\u0646\\u0629 \\u0628\\u062f\\u0627\\u062a \\u0648\\u0631\\u0642\\u0629 \\u0627\\u0644\\u0637\\u0627\\u0644\\u0628 \\u0627\\u0644\\u062d\\u0633 \\u0627\\u0644\\u0633\\u0644\\u064a\\u0645 \\u0628\\u0627\\u0644\\u0646\\u0634\\u0631 \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u0645?\"\r\n... 
]\r\n>>> print(questions)\r\n['\u0645\u062a\u0649 \u0628\u062f\u0627\u062a \u0627\u0644\u0645\u062c\u0644\u0629 \u0627\u0644\u0645\u062f\u0631\u0633\u064a\u0629 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645 \u0628\u0627\u0644\u0646\u0634\u0631?', '\u0643\u0645 \u0645\u0631\u0629 \u064a\u062a\u0645 \u0646\u0634\u0631\u0647\u0627 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?', '\u0645\u0627 \u0647\u064a \u0627\u0644\u0648\u0631\u0642\u0629 \u0627\u0644\u064a\u0648\u0645\u064a\u0629 \u0644\u0644\u0637\u0644\u0627\u0628 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?', '\u0643\u0645 \u0639\u062f\u062f \u0627\u0644\u0627\u0648\u0631\u0627\u0642 \u0627\u0644\u0627\u062e\u0628\u0627\u0631\u064a\u0629 \u0644\u0644\u0637\u0644\u0627\u0628 \u0627\u0644\u062a\u064a \u0648\u062c\u062f\u062a \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?', '\u0641\u064a \u0627\u064a \u0633\u0646\u0629 \u0628\u062f\u0627\u062a \u0648\u0631\u0642\u0629 \u0627\u0644\u0637\u0627\u0644\u0628 \u0627\u0644\u062d\u0633 \u0627\u0644\u0633\u0644\u064a\u0645 \u0628\u0627\u0644\u0646\u0634\u0631 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?']\r\n```\r\nI don't think we can change this","Hi @dorost1234.\r\n\r\nIn Python 3, strings are sequences of Unicode _code points_. Unicode is a specification that maps all characters (and emoji symbols) with its unique representation in terms of code points. That is what you see: Unicode code points (represented by a \\u escaped sequence of 16-bit hex values).\r\n\r\nCharacters are usually represented (on screen and papers) with a graphical element called _glyph_. That is what you would like to see: glyphs. But Python does not care about glyphs: that is the job of the GUI or the terminal; glyphs are what you get with the `print` function (if your terminal is properly configured to display those glyphs).\r\n\r\nYou have more detailed information about Unicode in the Python documentation: https:\/\/docs.python.org\/3\/howto\/unicode.html","thank you so much for the insightful comments. 
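A minimal sketch illustrating the point made in the comment above: the `\uXXXX` sequences are just an ASCII-only escaping of the very same string (as produced by default JSON serialization), not corrupted data, and printing or re-serializing with `ensure_ascii=False` yields the Arabic glyphs. Only Python's standard library is used; the question string is truncated here for brevity.

```python
# Sketch of the Unicode point above: \uXXXX escapes are an ASCII-safe encoding
# of the same text, not a different or broken string.
import json

question = "\u0645\u062a\u0649 \u0628\u062f\u0627\u062a \u0627\u0644\u0645\u062c\u0644\u0629"  # first words of the first question

print(question)                                  # the terminal renders the Arabic glyphs
print(json.dumps(question))                      # "\u0645\u062a\u0649 ..." (escaped, as in the raw files)
print(json.dumps(question, ensure_ascii=False))  # keeps the glyphs when re-serializing to JSON
```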
"],"created_at":1617008589000,"updated_at":1617126057000,"closed_at":1617126057000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi \r\nLooking into MLQA dataset for langauge \"ar\":\r\n\r\n```\r\n \"question\": [\r\n \"\\u0645\\u062a\\u0649 \\u0628\\u062f\\u0627\\u062a \\u0627\\u0644\\u0645\\u062c\\u0644\\u0629 \\u0627\\u0644\\u0645\\u062f\\u0631\\u0633\\u064a\\u0629 \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u0645 \\u0628\\u0627\\u0644\\u0646\\u0634\\u0631?\",\r\n \"\\u0643\\u0645 \\u0645\\u0631\\u0629 \\u064a\\u062a\\u0645 \\u0646\\u0634\\u0631\\u0647\\u0627 \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u0645?\",\r\n \"\\u0645\\u0627 \\u0647\\u064a \\u0627\\u0644\\u0648\\u0631\\u0642\\u0629 \\u0627\\u0644\\u064a\\u0648\\u0645\\u064a\\u0629 \\u0644\\u0644\\u0637\\u0644\\u0627\\u0628 \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u0645?\",\r\n \"\\u0643\\u0645 \\u0639\\u062f\\u062f \\u0627\\u0644\\u0627\\u0648\\u0631\\u0627\\u0642 \\u0627\\u0644\\u0627\\u062e\\u0628\\u0627\\u0631\\u064a\\u0629 \\u0644\\u0644\\u0637\\u0644\\u0627\\u0628 \\u0627\\u0644\\u062a\\u064a \\u0648\\u062c\\u062f\\u062a \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u0645?\",\r\n \"\\u0641\\u064a \\u0627\\u064a \\u0633\\u0646\\u0629 \\u0628\\u062f\\u0627\\u062a \\u0648\\u0631\\u0642\\u0629 \\u0627\\u0644\\u0637\\u0627\\u0644\\u0628 \\u0627\\u0644\\u062d\\u0633 \\u0627\\u0644\\u0633\\u0644\\u064a\\u0645 \\u0628\\u0627\\u0644\\u0646\\u0634\\u0631 \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u0645?\"\r\n ]\r\n```\r\n\r\nthe questions are in the wrong format, and not readable, could you please have a look? thanks @lhoestq \r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2133\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2132","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2132\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2132\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2132\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2132","id":843142822,"node_id":"MDU6SXNzdWU4NDMxNDI4MjI=","number":2132,"title":"TydiQA dataset is mixed and is not split per language 
","user":{"login":"dorost1234","id":79165106,"node_id":"MDQ6VXNlcjc5MTY1MTA2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/79165106?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dorost1234","html_url":"https:\/\/github.com\/dorost1234","followers_url":"https:\/\/api.github.com\/users\/dorost1234\/followers","following_url":"https:\/\/api.github.com\/users\/dorost1234\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dorost1234\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dorost1234\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dorost1234\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dorost1234\/orgs","repos_url":"https:\/\/api.github.com\/users\/dorost1234\/repos","events_url":"https:\/\/api.github.com\/users\/dorost1234\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dorost1234\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["You can filter the languages this way:\r\n```python\r\ntydiqa_en = tydiqa_dataset.filter(lambda x: x[\"language\"] == \"english\")\r\n```\r\n\r\nOtherwise maybe we can have one configuration per language ?\r\nWhat do you think of this for example ?\r\n\r\n```python\r\nload_dataset(\"tydiqa\", \"primary_task.en\")\r\n```","Hi\nthank you very much for the great response, this will be really wonderful\nto have one configuration per language, as one need the dataset in majority\nof case per language for cross-lingual evaluations.\nThis becomes also then more close to TFDS format, which is separated per\nlanguage https:\/\/www.tensorflow.org\/datasets\/catalog\/tydi_qa which will be\nreally awesome to have.\nthanks\n\nOn Mon, Mar 29, 2021 at 6:17 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> You can filter the languages this way:\n>\n> tydiqa_en = tydiqa_dataset.filter(lambda x: x[\"language\"] == \"english\")\n>\n> Otherwise maybe we can have one configuration per language ?\n> What do you think of this for example ?\n>\n> load_dataset(\"tydiqa\", \"primary_task.en\")\n>\n> \u2014\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> ,\n> or unsubscribe\n> \n> .\n>\n","@lhoestq I greatly appreciate any updates on this. thanks a lot"],"created_at":1617008181000,"updated_at":1617530235000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi @lhoestq \r\nCurrently TydiQA is mixed and user can only access the whole training set of all languages:\r\nhttps:\/\/www.tensorflow.org\/datasets\/catalog\/tydi_qa\r\n\r\nfor using this dataset, one need to train\/evaluate in each separate language, and having them mixed, makes it hard to use this dataset. This is much convenient for user to have them split and I appreciate your help on this. \r\n\r\nMeanwhile, till hopefully this is split per language, I greatly appreciate telling me how I can preprocess and get data per language. 
thanks a lot ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2132\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2131","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2131\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2131\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2131\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2131","id":843133112,"node_id":"MDU6SXNzdWU4NDMxMzMxMTI=","number":2131,"title":"When training with Multi-Node Multi-GPU the worker 2 has TypeError: 'NoneType' object","user":{"login":"andy-yangz","id":23011317,"node_id":"MDQ6VXNlcjIzMDExMzE3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23011317?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/andy-yangz","html_url":"https:\/\/github.com\/andy-yangz","followers_url":"https:\/\/api.github.com\/users\/andy-yangz\/followers","following_url":"https:\/\/api.github.com\/users\/andy-yangz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/andy-yangz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/andy-yangz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/andy-yangz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/andy-yangz\/orgs","repos_url":"https:\/\/api.github.com\/users\/andy-yangz\/repos","events_url":"https:\/\/api.github.com\/users\/andy-yangz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/andy-yangz\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi ! Thanks for reporting\r\nI was able to reproduce this issue. This was caused by missing split infos if a worker reloads the cache of the other worker.\r\n\r\nI just opened https:\/\/github.com\/huggingface\/datasets\/pull\/2137 to fix this issue","The PR got merged :)\r\nFeel free to try it out on the `master` branch","Sorry for the late reply. 
\r\nNow everything just works well XD"],"created_at":1617007558000,"updated_at":1618052935000,"closed_at":1618052935000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"\bversion: 1.5.0\r\nmet a very strange error, I am training large scale language model, and need train on 2 machines(workers).\r\nAnd sometimes I will get this error `TypeError: 'NoneType' object is not iterable`\r\nThis is traceback\r\n```\r\n\r\n71 | \u00a0 | Traceback (most recent call last):\r\n-- | -- | --\r\n72 | \u00a0 | File \"run_gpt.py\", line 316, in \r\n73 | \u00a0 | main()\r\n74 | \u00a0 | File \"run_gpt.py\", line 222, in main\r\n75 | \u00a0 | delimiter=\"\\t\", column_names=[\"input_ids\", \"attention_mask\", \"chinese_ref\"])\r\n76 | \u00a0 | File \"\/data\/miniconda3\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 747, in load_dataset\r\n77 | \u00a0 | use_auth_token=use_auth_token,\r\n78 | \u00a0 | File \"\/data\/miniconda3\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 513, in download_and_prepare\r\n79 | \u00a0 | self.download_post_processing_resources(dl_manager)\r\n80 | \u00a0 | File \"\/data\/miniconda3\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 673, in download_post_processing_resources\r\n81 | \u00a0 | for split in self.info.splits:\r\n82 | \u00a0 | TypeError: 'NoneType' object is not iterable\r\n83 | \u00a0 | WARNING:datasets.builder:Reusing dataset csv (\/usr\/local\/app\/.cache\/huggingface\/datasets\/csv\/default-1c257ebd48e225e7\/0.0.0\/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2)\r\n84 | \u00a0 | Traceback (most recent call last):\r\n85 | \u00a0 | File \"\/data\/miniconda3\/lib\/python3.7\/runpy.py\", line 193, in _run_module_as_main\r\n86 | \u00a0 | \"__main__\", mod_spec)\r\n87 | \u00a0 | File \"\/data\/miniconda3\/lib\/python3.7\/runpy.py\", line 85, in _run_code\r\n88 | \u00a0 | exec(code, run_globals)\r\n89 | \u00a0 | File \"\/data\/miniconda3\/lib\/python3.7\/site-packages\/torch\/distributed\/launch.py\", line 340, in \r\n90 | \u00a0 | main()\r\n91 | \u00a0 | File \"\/data\/miniconda3\/lib\/python3.7\/site-packages\/torch\/distributed\/launch.py\", line 326, in main\r\n92 | \u00a0 | sigkill_handler(signal.SIGTERM, None) # not coming back\r\n93 | \u00a0 | File \"\/data\/miniconda3\/lib\/python3.7\/site-packages\/torch\/distributed\/launch.py\", line 301, in sigkill_handler\r\n94 | \u00a0 | raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)\r\n\r\n```\r\nOn worker 1 it loads the dataset well, however on worker 2 will get this error. 
\r\nAnd I will meet this error from time to time, sometimes it just goes well.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2131\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2130","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2130\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2130\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2130\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2130","id":843111936,"node_id":"MDU6SXNzdWU4NDMxMTE5MzY=","number":2130,"title":"wikiann dataset is missing columns ","user":{"login":"dorost1234","id":79165106,"node_id":"MDQ6VXNlcjc5MTY1MTA2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/79165106?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dorost1234","html_url":"https:\/\/github.com\/dorost1234","followers_url":"https:\/\/api.github.com\/users\/dorost1234\/followers","following_url":"https:\/\/api.github.com\/users\/dorost1234\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dorost1234\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dorost1234\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dorost1234\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dorost1234\/orgs","repos_url":"https:\/\/api.github.com\/users\/dorost1234\/repos","events_url":"https:\/\/api.github.com\/users\/dorost1234\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dorost1234\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892877,"node_id":"MDU6TGFiZWwxOTM1ODkyODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/good%20first%20issue","name":"good first issue","color":"7057ff","default":true,"description":"Good for newcomers"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Here please find TFDS format of this dataset: https:\/\/www.tensorflow.org\/datasets\/catalog\/wikiann\r\nwhere there is a span column, this is really necessary to be able to use the data, and I appreciate your help @lhoestq ","Hi !\r\nApparently you can get the spans from the NER tags using `tags_to_spans` defined here:\r\n\r\nhttps:\/\/github.com\/tensorflow\/datasets\/blob\/c7096bd38e86ed240b8b2c11ecab9893715a7d55\/tensorflow_datasets\/text\/wikiann\/wikiann.py#L81-L126\r\n\r\nIt would be nice to include the `spans` field in this dataset as in TFDS. 
This could be a good first issue for new contributors !\r\n\r\nThe objective is to use `tags_to_spans` in the `_generate_examples` method [here](https:\/\/github.com\/huggingface\/nlp\/blob\/c98e4b8f23e3770c401c6d9326e243e1ffd599ec\/datasets\/wikiann\/wikiann.py#L292-L316) to create he `spans` for each example.","Hi @lhoestq \r\nthank you very much for the help, it would be very nice to have it included, here is the full code, one need to also convert tags to string first:\r\n\r\n```\r\nimport datasets \r\nfrom datasets import load_dataset\r\n\r\ndef tags_to_spans(tags):\r\n \"\"\"Convert tags to spans.\"\"\"\r\n spans = set()\r\n span_start = 0\r\n span_end = 0\r\n active_conll_tag = None\r\n for index, string_tag in enumerate(tags):\r\n # Actual BIO tag.\r\n bio_tag = string_tag[0]\r\n assert bio_tag in [\"B\", \"I\", \"O\"], \"Invalid Tag\"\r\n conll_tag = string_tag[2:]\r\n if bio_tag == \"O\":\r\n # The span has ended.\r\n if active_conll_tag:\r\n spans.add((active_conll_tag, (span_start, span_end)))\r\n active_conll_tag = None\r\n # We don't care about tags we are\r\n # told to ignore, so we do nothing.\r\n continue\r\n elif bio_tag == \"B\":\r\n # We are entering a new span; reset indices and active tag to new span.\r\n if active_conll_tag:\r\n spans.add((active_conll_tag, (span_start, span_end)))\r\n active_conll_tag = conll_tag\r\n span_start = index\r\n span_end = index\r\n elif bio_tag == \"I\" and conll_tag == active_conll_tag:\r\n # We're inside a span.\r\n span_end += 1\r\n else:\r\n # This is the case the bio label is an \"I\", but either:\r\n # 1) the span hasn't started - i.e. an ill formed span.\r\n # 2) We have IOB1 tagging scheme.\r\n # We'll process the previous span if it exists, but also include this\r\n # span. This is important, because otherwise, a model may get a perfect\r\n # F1 score whilst still including false positive ill-formed spans.\r\n if active_conll_tag:\r\n spans.add((active_conll_tag, (span_start, span_end)))\r\n active_conll_tag = conll_tag\r\n span_start = index\r\n span_end = index\r\n # Last token might have been a part of a valid span.\r\n if active_conll_tag:\r\n spans.add((active_conll_tag, (span_start, span_end)))\r\n # Return sorted list of spans\r\n return sorted(list(spans), key=lambda x: x[1][0])\r\n\r\ndataset = load_dataset('wikiann', 'en', split=\"train\")\r\nner_tags = {\r\n 0:\"O\",\r\n 1:\"B-PER\",\r\n 2:\"I-PER\",\r\n 3:\"B-ORG\",\r\n 4:\"I-ORG\",\r\n 5:\"B-LOC\",\r\n 6:\"I-LOC\"\r\n}\r\n\r\ndef get_spans(tokens, tags):\r\n \"\"\"Convert tags to textspans.\"\"\"\r\n spans = tags_to_spans(tags)\r\n text_spans = [\r\n x[0] + \": \" + \" \".join([tokens[i]\r\n for i in range(x[1][0], x[1][1] + 1)])\r\n for x in spans\r\n ]\r\n if not text_spans:\r\n text_spans = [\"None\"]\r\n return text_spans\r\n\r\n\r\nfor i, d in enumerate(dataset):\r\n tokens = d['tokens']\r\n tags = d['ner_tags']\r\n tags = [ner_tags[i] for i in tags]\r\n spans = get_spans(tokens, tags)\r\n print(\"spans \", spans)\r\n print(d)\r\n if i > 10:\r\n break; \r\n```\r\nI am not sure how to contribute to the repository and how things work, could you let me know how one can access the datasets to be able to contribute to the repository? Maybe I could do it then\r\nthanks \r\n","Cool ! 
Let me give you some context:\r\n\r\n#### Contribution guide\r\n\r\nYou can find the contribution guide here:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/master\/CONTRIBUTING.md\r\n\r\nIt explains how to set up your dev environment in a few steps.\r\n\r\n#### Dataset loading\r\n\r\nEach Dataset is defined by a Table that have many rows (one row = one example) and columns (one column = one feature).\r\nTo change how a dataset is constructed, you have to modify its dataset script that you can find here:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/wikiann\/wikiann.py\r\n\r\nIt includes everything needed to load the WikiANN dataset.\r\nYou can load locally a modified version of `wikiann.py` with `load_dataset(\"path\/to\/wikiann.py\")`.\r\n\r\n#### Define a new column\r\n\r\nEach column has a name and a type. You can see how the features of WikiANN are defined here:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/c98e4b8f23e3770c401c6d9326e243e1ffd599ec\/datasets\/wikiann\/wikiann.py#L245-L263\r\n\r\nIdeally we would have one additional feature \"spans\":\r\n```python\r\n \"spans\": datasets.Sequence(datasets.Value(\"string\")),\r\n```\r\n\r\n#### Compute the content of each row\r\n\r\nTo build the WikiANN rows, the _generate_examples method from [here](https:\/\/github.com\/huggingface\/nlp\/blob\/c98e4b8f23e3770c401c6d9326e243e1ffd599ec\/datasets\/wikiann\/wikiann.py#L292-L316) is used. This function `yield` one python dictionary for each example:\r\n```python\r\nyield guid_index, {\"tokens\": tokens, \"ner_tags\": ner_tags, \"langs\": langs}\r\n```\r\n\r\nThe objective would be to return instead something like\r\n```python\r\nspans = spans = get_spans(tokens, tags)\r\nyield guid_index, {\"tokens\": tokens, \"ner_tags\": ner_tags, \"langs\": langs, \"spans\": spans}\r\n```\r\n\r\nLet me know if you have questions !","The PR was merged. Issue should be closed.\r\n\r\nCC: @lhoestq "],"created_at":1617006180000,"updated_at":1630075458000,"closed_at":1630075458000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nWikiann dataset needs to have \"spans\" columns, which is necessary to be able to use this dataset, but this column is missing from huggingface datasets, could you please have a look? 
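Until a `spans` column is added to the dataset script itself, a hedged user-side sketch of the same idea is to attach the column with `.map()` on the loaded dataset, reusing the `tags_to_spans`/`get_spans` helpers from the earlier comment. The label list is read from the dataset's own `ner_tags` feature rather than hard-coded.

```python
# Hypothetical user-side workaround (not the dataset-script change discussed above):
# add a "spans" column on the fly, reusing get_spans() defined in the comment above.
from datasets import load_dataset

wikiann_en = load_dataset("wikiann", "en", split="train")
label_names = wikiann_en.features["ner_tags"].feature.names  # e.g. ["O", "B-PER", "I-PER", ...]

def add_spans(example):
    tags = [label_names[i] for i in example["ner_tags"]]
    example["spans"] = get_spans(example["tokens"], tags)  # helper from the snippet above
    return example

wikiann_en = wikiann_en.map(add_spans)
```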
thank you @lhoestq ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2130\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2129","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2129\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2129\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2129\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2129","id":843033656,"node_id":"MDU6SXNzdWU4NDMwMzM2NTY=","number":2129,"title":"How to train BERT model with next sentence prediction?","user":{"login":"jnishi","id":836541,"node_id":"MDQ6VXNlcjgzNjU0MQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/836541?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jnishi","html_url":"https:\/\/github.com\/jnishi","followers_url":"https:\/\/api.github.com\/users\/jnishi\/followers","following_url":"https:\/\/api.github.com\/users\/jnishi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jnishi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jnishi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jnishi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jnishi\/orgs","repos_url":"https:\/\/api.github.com\/users\/jnishi\/repos","events_url":"https:\/\/api.github.com\/users\/jnishi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jnishi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi !\r\nWe're not using `TextDatasetForNextSentencePrediction` in `datasets`.\r\nAlthough you can probably use the `TextDatasetForNextSentencePrediction.create_examples_from_document` on a dataset to prepare it for next sentence prediction.","Thanks.\r\n\r\nDo you mean that `TextDatasetForNextSentencePrediction.create_exapmles_from_document` can be applied to dataset object other than `TextDatasetForNextSentencePrediction` e.g. a `Dataset` object which is loaded by `datasets.load_dataset`?","It would probably require a bit of tweaking, but you can apply it to a dataset, yes.\r\nThis should give you a new dataset with sentence pairs you can train a model on.\r\n\r\nYou can find the documentation about dataset processing here:\r\nhttps:\/\/huggingface.co\/docs\/datasets\/processing.html#processing-data-with-map","Thank you for detail information.\r\n\r\nI'll try to apply `create_examples_from_document` to `Dataset` object.\r\n"],"created_at":1617000483000,"updated_at":1617253120000,"closed_at":1617253120000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hello.\r\n\r\nI'm trying to pretrain the BERT model with next sentence prediction. 
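As a rough illustration of the suggestion in the comments above (preparing sentence pairs with dataset processing rather than with `TextDatasetForNextSentencePrediction` itself), a hedged sketch could look like the following. The corpus, column names, and pairing strategy are all assumptions for illustration only.

```python
# Hedged sketch: build next-sentence-prediction pairs with .map(); this is not
# TextDatasetForNextSentencePrediction, just the same idea applied to a Dataset.
import random
from datasets import load_dataset

raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

def make_nsp_pairs(batch):
    sents = [s for s in batch["text"] if s.strip()]
    out = {"sentence_a": [], "sentence_b": [], "next_sentence_label": []}
    for i in range(len(sents) - 1):
        if random.random() < 0.5:  # true next sentence -> label 0
            out["sentence_a"].append(sents[i])
            out["sentence_b"].append(sents[i + 1])
            out["next_sentence_label"].append(0)
        else:                      # random sentence -> label 1
            out["sentence_a"].append(sents[i])
            out["sentence_b"].append(random.choice(sents))
            out["next_sentence_label"].append(1)
    return out

nsp = raw.map(make_nsp_pairs, batched=True, remove_columns=raw.column_names)
```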
Is there any function that supports next sentence prediction \r\nlike ` TextDatasetForNextSentencePrediction` of `huggingface\/transformers` ?\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2129\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2128","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2128\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2128\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2128\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2128","id":843023910,"node_id":"MDU6SXNzdWU4NDMwMjM5MTA=","number":2128,"title":"Dialogue action slot name and value are reversed in MultiWoZ 2.2","user":{"login":"adamlin120","id":31605305,"node_id":"MDQ6VXNlcjMxNjA1MzA1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/31605305?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/adamlin120","html_url":"https:\/\/github.com\/adamlin120","followers_url":"https:\/\/api.github.com\/users\/adamlin120\/followers","following_url":"https:\/\/api.github.com\/users\/adamlin120\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/adamlin120\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/adamlin120\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/adamlin120\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/adamlin120\/orgs","repos_url":"https:\/\/api.github.com\/users\/adamlin120\/repos","events_url":"https:\/\/api.github.com\/users\/adamlin120\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/adamlin120\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi\r\nGood catch ! Thanks for reporting\r\n\r\nIf you are interested in contributing, feel free to open a PR to fix this :) "],"created_at":1616999642000,"updated_at":1617194881000,"closed_at":1617194881000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi @yjernite, thank you for adding MultiWoZ 2.2 in the huggingface datasets platform. 
It is beneficial!\r\n\r\nI spot an error that the order of Dialogue action slot names and values are reversed.\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/649b2c469779bc4221e1b6969aa2496d63eb5953\/datasets\/multi_woz_v22\/multi_woz_v22.py#L251-L262","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2128\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2127","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2127\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2127\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2127\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2127","id":843017199,"node_id":"MDExOlB1bGxSZXF1ZXN0NjAyNDYxMzc3","number":2127,"title":"make documentation more clear to use different cloud storage","user":{"login":"philschmid","id":32632186,"node_id":"MDQ6VXNlcjMyNjMyMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32632186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/philschmid","html_url":"https:\/\/github.com\/philschmid","followers_url":"https:\/\/api.github.com\/users\/philschmid\/followers","following_url":"https:\/\/api.github.com\/users\/philschmid\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/philschmid\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/philschmid\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/philschmid\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/philschmid\/orgs","repos_url":"https:\/\/api.github.com\/users\/philschmid\/repos","events_url":"https:\/\/api.github.com\/users\/philschmid\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/philschmid\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1616999046000,"updated_at":1617020184000,"closed_at":1617020184000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2127","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2127","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2127.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2127.patch"},"body":"This PR extends the cloud storage documentation. To show you can use a different `fsspec` implementation. 
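To make the scope of that documentation change concrete, here is a hedged sketch of the kind of usage it covers: saving and reloading a dataset on S3 through an `fsspec` filesystem. The bucket and path are placeholders, credentials are assumed to come from the local AWS configuration, and other `fsspec` implementations (e.g. `gcsfs`) can be passed the same way.

```python
# Sketch of cloud-storage usage via fsspec; bucket/path below are hypothetical.
from datasets import load_dataset, load_from_disk
from datasets.filesystems import S3FileSystem

s3 = S3FileSystem(anon=False)  # picks up AWS credentials from the environment

imdb_train = load_dataset("imdb", split="train")
imdb_train.save_to_disk("s3://my-bucket/imdb/train", fs=s3)

reloaded = load_from_disk("s3://my-bucket/imdb/train", fs=s3)
```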
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2127\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2126","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2126\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2126\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2126\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2126","id":842779966,"node_id":"MDExOlB1bGxSZXF1ZXN0NjAyMjcyMjg4","number":2126,"title":"Replace legacy torch.Tensor constructor with torch.tensor","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1616950650000,"updated_at":1617010034000,"closed_at":1617010033000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2126","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2126","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2126.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2126.patch"},"body":"The title says it all (motivated by [this issue](https:\/\/github.com\/pytorch\/pytorch\/issues\/53146) in the pytorch repo).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2126\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2125","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2125\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2125\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2125\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2125","id":842690570,"node_id":"MDU6SXNzdWU4NDI2OTA1NzA=","number":2125,"title":"Is dataset timit_asr 
broken?","user":{"login":"kosuke-kitahara","id":42398050,"node_id":"MDQ6VXNlcjQyMzk4MDUw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42398050?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/kosuke-kitahara","html_url":"https:\/\/github.com\/kosuke-kitahara","followers_url":"https:\/\/api.github.com\/users\/kosuke-kitahara\/followers","following_url":"https:\/\/api.github.com\/users\/kosuke-kitahara\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/kosuke-kitahara\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/kosuke-kitahara\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/kosuke-kitahara\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/kosuke-kitahara\/orgs","repos_url":"https:\/\/api.github.com\/users\/kosuke-kitahara\/repos","events_url":"https:\/\/api.github.com\/users\/kosuke-kitahara\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/kosuke-kitahara\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi,\r\n\r\nthanks for the report, but this is a duplicate of #2052. ","@mariosasko \r\nThank you for your quick response! Following #2052, I've fixed the problem."],"created_at":1616920218000,"updated_at":1616934565000,"closed_at":1616934565000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Using `timit_asr` dataset, I saw all records are the same.\r\n\r\n``` python\r\nfrom datasets import load_dataset, load_metric\r\n\r\ntimit = load_dataset(\"timit_asr\")\r\n\r\nfrom datasets import ClassLabel\r\nimport random\r\nimport pandas as pd\r\nfrom IPython.display import display, HTML\r\n\r\ndef show_random_elements(dataset, num_examples=10):\r\n assert num_examples <= len(dataset), \"Can't pick more elements than there are in the dataset.\"\r\n picks = []\r\n for _ in range(num_examples):\r\n pick = random.randint(0, len(dataset)-1)\r\n while pick in picks:\r\n pick = random.randint(0, len(dataset)-1)\r\n picks.append(pick)\r\n\r\n df = pd.DataFrame(dataset[picks])\r\n display(HTML(df.to_html()))\r\n\r\n\r\nshow_random_elements(timit['train'].remove_columns([\"file\", \"phonetic_detail\", \"word_detail\", \"dialect_region\", \"id\", \r\n \"sentence_type\", \"speaker_id\"]), num_examples=20)\r\n\r\n```\r\n\r\n`output`\r\n\r\n\"Screen\r\n\r\n\r\nI double-checked it [here](https:\/\/huggingface.co\/datasets\/viewer\/), and met the same problem.\r\n\r\n\"Screen\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2125\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2124","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2124\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2124\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2124\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2124","id":842627729,"node_id":"MDU6SXNzdWU4NDI2Mjc3Mjk=","number":2124,"title":"Adding ScaNN library to do 
MIPS?","user":{"login":"shamanez","id":16892570,"node_id":"MDQ6VXNlcjE2ODkyNTcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16892570?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/shamanez","html_url":"https:\/\/github.com\/shamanez","followers_url":"https:\/\/api.github.com\/users\/shamanez\/followers","following_url":"https:\/\/api.github.com\/users\/shamanez\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/shamanez\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/shamanez\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/shamanez\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/shamanez\/orgs","repos_url":"https:\/\/api.github.com\/users\/shamanez\/repos","events_url":"https:\/\/api.github.com\/users\/shamanez\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/shamanez\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I haven't played with it (yet) but it sounds really cool !\r\n"],"created_at":1616890020000,"updated_at":1617024223000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"@lhoestq Hi I am thinking of adding this new google library to do the MIPS similar to **add_faiss_idex**. As the paper suggests, it is really fast when it comes to retrieving the nearest neighbors. \r\n\r\nhttps:\/\/github.com\/google-research\/google-research\/tree\/master\/scann\r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/16892570\/112738294-78ec9800-8fc6-11eb-9a5f-3d7ee5818e76.png)\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2124\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2123","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2123\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2123\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2123\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2123","id":842577285,"node_id":"MDU6SXNzdWU4NDI1NzcyODU=","number":2123,"title":"Problem downloading GEM wiki_auto_asset_turk 
dataset","user":{"login":"mille-s","id":29705940,"node_id":"MDQ6VXNlcjI5NzA1OTQw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29705940?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mille-s","html_url":"https:\/\/github.com\/mille-s","followers_url":"https:\/\/api.github.com\/users\/mille-s\/followers","following_url":"https:\/\/api.github.com\/users\/mille-s\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mille-s\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mille-s\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mille-s\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mille-s\/orgs","repos_url":"https:\/\/api.github.com\/users\/mille-s\/repos","events_url":"https:\/\/api.github.com\/users\/mille-s\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mille-s\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi,\r\n\r\nsadly I can't replicate the problem on my Windows machine. Try to update the library to the newest version with:\r\n```bash\r\npip install git+https:\/\/github.com\/huggingface\/datasets\r\n``` ","Thanks for the answer! I updated the library but unfortunately it didn't solve the problem.","Is there an error message ?\r\nWhat stacktrace do you get if you interrupt the execution of the program while downloading ?","Sorry for the long time since my last comment, I tried again and don't seem to have the problem anymore, thanks for your support!","Great ! I'm closing the issue then. Feel free to re-open if you experience this issue again"],"created_at":1616870488000,"updated_at":1620836118000,"closed_at":1620836117000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"@yjernite \r\n\r\n### Summary\r\n\r\nI am currently working on the GEM datasets and do not manage to download the wiki_auto_asset_turk data, whereas all other datasets download well with the same code.\r\n\r\n### Steps to reproduce\r\nCode snippet:\r\n\r\nfrom datasets import load_dataset\r\n#dataset = load_dataset('gem', 'web_nlg_en')\r\ndataset = load_dataset('gem', 'wiki_auto_asset_turk')\r\n\r\n```\r\n\r\n**Expected behavior:**\r\n\r\nI expect the dataset to start downloading (download bar appears and progresses toward 100%)\r\n\r\n**Actual behavior:**\r\nInstead of seeing the download bar appearing, nothing happens; the following appears in the console as expected, but nothing more:\r\n\r\nDownloading: 36.6kB [00:00, 37.2MB\/s]\r\nDownloading: 41.7kB [00:00, ?B\/s]\r\nDownloading and preparing dataset gem\/wiki_auto_asset_turk (download: 121.37 MiB, generated: 145.69 MiB, post-processed: Unknown size, total: 267.07 MiB) to C:\\Users\\sfmil\\.cache\\huggingface\\datasets\\gem\\wiki_auto_asset_turk\\1.0.0\\f252756d7f1b8f019aac71a1623b2950acfe10d25d956668ac4eae4e93c58b8d...\r\n\r\n### Is this a regression?\r\nNo, it was the first time I was trying to download this dataset (same for the other ones).\r\n\r\n### Debug info\r\n- Python version: Python 3.8.2\r\n- OS version: Windows 10 Family","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2123\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2122","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2122\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2122\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2122\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2122","id":842194588,"node_id":"MDExOlB1bGxSZXF1ZXN0NjAxODE3MjI0","number":2122,"title":"Fast table queries with interpolation search","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1616782160000,"updated_at":1628100719000,"closed_at":1617719581000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2122","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2122","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2122.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2122.patch"},"body":"## Intro\r\n\r\nThis should fix issue #1803 \r\n\r\nCurrently querying examples in a dataset is O(n) because of the underlying pyarrow ChunkedArrays implementation.\r\nTo fix this I implemented interpolation search that is pretty effective since datasets usually verifies the condition of evenly distributed chunks (the default chunk size is fixed).\r\n\r\n## Benchmark\r\n\r\nHere is a [benchmark](https:\/\/pastebin.com\/utEXUqsR) I did on bookcorpus (74M rows):\r\n\r\nfor the current implementation\r\n```python\r\n>>> python speed.py\r\nLoaded dataset 'bookcorpus', len=74004228, nbytes=4835358766\r\n\r\n\r\n========================= Querying unshuffled bookcorpus =========================\r\n\r\nAvg access time key=1 : 0.018ms\r\nAvg access time key=74004227 : 0.215ms\r\nAvg access time key=range(74003204, 74004228) : 1.416ms\r\nAvg access time key=RandIter(low=0, high=74004228, size=1024, seed=42): 92.532ms\r\n\r\n========================== Querying shuffled bookcorpus ==========================\r\n\r\nAvg access time key=1 : 0.187ms\r\nAvg access time key=74004227 : 6.642ms\r\nAvg access time key=range(74003204, 74004228) : 90.941ms\r\nAvg access time key=RandIter(low=0, high=74004228, size=1024, seed=42): 3448.456ms\r\n```\r\n\r\nfor the new one using interpolation search:\r\n```python\r\n>>> python speed.py\r\nLoaded dataset 'bookcorpus', len=74004228, 
nbytes=4835358766\r\n\r\n\r\n========================= Querying unshuffled bookcorpus =========================\r\n\r\nAvg access time key=1 : 0.076ms\r\nAvg access time key=74004227 : 0.056ms\r\nAvg access time key=range(74003204, 74004228) : 1.807ms\r\nAvg access time key=RandIter(low=0, high=74004228, size=1024, seed=42): 24.028ms\r\n\r\n========================== Querying shuffled bookcorpus ==========================\r\n\r\nAvg access time key=1 : 0.061ms\r\nAvg access time key=74004227 : 0.058ms\r\nAvg access time key=range(74003204, 74004228) : 22.166ms\r\nAvg access time key=RandIter(low=0, high=74004228, size=1024, seed=42): 42.757ms\r\n```\r\n\r\nThe RandIter class is just an iterable of 1024 random indices from 0 to 74004228.\r\n\r\nHere is also a plot showing the speed improvement depending on the dataset size:\r\n![image](https:\/\/user-images.githubusercontent.com\/42851186\/112673587-32335c80-8e65-11eb-9a0c-58ad774abaec.png)\r\n\r\n## Implementation details:\r\n- `datasets.table.Table` objects implement interpolation search for the `slice` method\r\n- The interpolation search requires to store the offsets of all the chunks of a table. The offsets are stored when the `Table` is initialized.\r\n- `datasets.table.Table.slice` returns a `datasets.table.Table` using interpolation search\r\n- `datasets.table.Table.fast_slice` returns a `pyarrow.Table` object using interpolation search. This is useful to get a part of a dataset if we don't need the indexing structure for future computations. For example it's used when querying an example as a dictionary.\r\n- Now a `Dataset` object is always backed by a `datasets.table.Table` object. If one passes a `pyarrow.Table` to initialize a `Dataset`, then it's converted to a `datasets.table.Table`\r\n\r\n## Checklist:\r\n\r\n- [x] implement interpolation search\r\n- [x] use `datasets.table.Table` in `Dataset` objects\r\n- [x] update current tests\r\n- [x] add tests for interpolation search\r\n- [x] comments and docstring\r\n- [x] add the benchmark to the CI\r\n\r\nFix #1803.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2122\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2121","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2121\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2121\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2121\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2121","id":842148633,"node_id":"MDExOlB1bGxSZXF1ZXN0NjAxNzc4NDc4","number":2121,"title":"Add Validation For 
README","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Good start! Here are some proposed next steps:\r\n- We want the Class structure to reflect the template - so the parser know what section titles to expect and when something has gone wrong\r\n- As a result, we don't need to parse the table of contents, since it will always be the same\r\n- For each section\/subsection it would be cool to have a variable saying whether it's filled out or not (when it's either empty or has `[More Information Needed]`)\r\n- `attributes` should probably be `text`","@yjernite @lhoestq \r\n\r\nI have added basic validation checking in the class. It works based on a YAML string. The YAML string determines the expected structure and which text is to be checked. The `text` can be true or false showing whether the text has to be checked or not for emptiness. Similarly, each subsection is parsed recursively. I have used print statement currently so that all issues are shown.\r\n\r\nPlease let me know your thoughts.\r\n\r\nI haven't added a variable that keeps a track of whether the text is empty or not but it can be done easliy if required.","This looks like a good start !\r\nMaybe we can use a field named `allow_empty` instead of `text` ?\r\nAlso +1 for keeping track of empty texts\r\n\r\nDo you think you can have a way to collect all the validation fails of a readme and then raise an error showing all the failures instead of using print ?\r\n\r\nThen we can create a `tests\/test_dataset_cards.py` test file to make sure all the readmes of the repo are valid !","Hi @lhoestq \r\n\r\nI have added changes accordingly. I prepared a list which stores all the errors and raises them at the end. I'm not sure if there is a better way.","Hi @lhoestq @yjernite \r\n\r\nPlease find the output for the existing READMEs here: http:\/\/p.ip.fi\/2vYU\r\n\r\nThanks,\r\nGunjan","Hi @lhoestq\r\n\r\nI have added some basic tests, also have restructured `ReadMe` class slightly.\r\n\r\nThere is one print statement currently, I'm not sure how to remove it. Basically, I want to warn but not stop further validation. I can't append to a list because the `error_list` and `warning_list` are both only present in `validate` method, and this print is present in the `parse` method. This is done when someone has repeated a section multiple times. 
For e.g.:\r\n\r\n```markdown\r\n---\r\n---\r\n\r\n# Dataset Card for FashionMNIST\r\n## Dataset Description\r\n## Dataset Description\r\n```\r\n\r\nIn this case, I check for validation only in the latest entry.\r\n\r\nI can also raise an error (ideal case scenario), but still, it is in the `parse`. Should I add `error_lines` and `warning_lines` as instance variables? That would probably solve the issue.\r\n\r\nIn tests, I'm using a dummy YAML string for structure, we can also make it into a file but I feel that is not a hard requirement. Let me know your thoughts.\r\n\r\nI will add tests for `from_readme` as well.\r\n\r\nHowever, I would love to be able to check the exact message in the test when an error is raised. I checked a couple of methods but couldn't get it working. Let me know if you're aware of a way to do that.","Hi @lhoestq \r\n\r\nThanks for merging. :)\r\nThanks a lot to you and @yjernite for guiding me and helping me out.\r\n\r\nYes, I'll also use the next PR for combining the readme and tags validation. ^_^"],"created_at":1616778137000,"updated_at":1620652638000,"closed_at":1620639701000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2121","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2121","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2121.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2121.patch"},"body":"Hi @lhoestq, @yjernite \r\n\r\nThis is a simple Readme parser. All classes specific to different sections can inherit `Section` class, and we can define more attributes in each.\r\n\r\nLet me know if this is going in the right direction :)\r\n\r\nCurrently the output looks like this, for `to_dict()` on `FashionMNIST` `README.md`:\r\n\r\n```json\r\n{\r\n \"name\": \".\/datasets\/fashion_mnist\/README.md\",\r\n \"attributes\": \"\",\r\n \"subsections\": [\r\n {\r\n \"name\": \"Dataset Card for FashionMNIST\",\r\n \"attributes\": \"\",\r\n \"subsections\": [\r\n {\r\n \"name\": \"Table of Contents\",\r\n \"attributes\": \"- [Dataset Description](#dataset-description)\\n - [Dataset Summary](#dataset-summary)\\n - [Supported Tasks](#supported-tasks-and-leaderboards)\\n - [Languages](#languages)\\n- [Dataset Structure](#dataset-structure)\\n - [Data Instances](#data-instances)\\n - [Data Fields](#data-instances)\\n - [Data Splits](#data-instances)\\n- [Dataset Creation](#dataset-creation)\\n - [Curation Rationale](#curation-rationale)\\n - [Source Data](#source-data)\\n - [Annotations](#annotations)\\n - [Personal and Sensitive Information](#personal-and-sensitive-information)\\n- [Considerations for Using the Data](#considerations-for-using-the-data)\\n - [Social Impact of Dataset](#social-impact-of-dataset)\\n - [Discussion of Biases](#discussion-of-biases)\\n - [Other Known Limitations](#other-known-limitations)\\n- [Additional Information](#additional-information)\\n - [Dataset Curators](#dataset-curators)\\n - [Licensing Information](#licensing-information)\\n - [Citation Information](#citation-information)\\n - [Contributions](#contributions)\",\r\n \"subsections\": []\r\n },\r\n {\r\n \"name\": \"Dataset Description\",\r\n \"attributes\": \"- **Homepage:** [GitHub](https:\/\/github.com\/zalandoresearch\/fashion-mnist)\\n- **Repository:** [GitHub](https:\/\/github.com\/zalandoresearch\/fashion-mnist)\\n- **Paper:** [arXiv](https:\/\/arxiv.org\/pdf\/1708.07747.pdf)\\n- **Leaderboard:**\\n- **Point of Contact:**\",\r\n 
\"subsections\": [\r\n {\r\n \"name\": \"Dataset Summary\",\r\n \"attributes\": \"Fashion-MNIST is a dataset of Zalando's article images\\u2014consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.\",\r\n \"subsections\": []\r\n },\r\n {\r\n \"name\": \"Supported Tasks and Leaderboards\",\r\n \"attributes\": \"[More Information Needed]\",\r\n \"subsections\": []\r\n },\r\n {\r\n \"name\": \"Languages\",\r\n \"attributes\": \"[More Information Needed]\",\r\n \"subsections\": []\r\n }\r\n ]\r\n },\r\n {\r\n \"name\": \"Dataset Structure\",\r\n \"attributes\": \"\",\r\n \"subsections\": [\r\n {\r\n \"name\": \"Data Instances\",\r\n \"attributes\": \"A data point comprises an image and its label.\",\r\n \"subsections\": []\r\n },\r\n {\r\n \"name\": \"Data Fields\",\r\n \"attributes\": \"- `image`: a 2d array of integers representing the 28x28 image.\\n- `label`: an integer between 0 and 9 representing the classes with the following mapping:\\n | Label | Description |\\n | --- | --- |\\n | 0 | T-shirt\/top |\\n | 1 | Trouser |\\n | 2 | Pullover |\\n | 3 | Dress |\\n | 4 | Coat |\\n | 5 | Sandal |\\n | 6 | Shirt |\\n | 7 | Sneaker |\\n | 8 | Bag |\\n | 9 | Ankle boot |\",\r\n \"subsections\": []\r\n },\r\n {\r\n \"name\": \"Data Splits\",\r\n \"attributes\": \"The data is split into training and test set. The training set contains 60,000 images and the test set 10,000 images.\",\r\n \"subsections\": []\r\n }\r\n ]\r\n },\r\n {\r\n \"name\": \"Dataset Creation\",\r\n \"attributes\": \"\",\r\n \"subsections\": [\r\n {\r\n \"name\": \"Curation Rationale\",\r\n \"attributes\": \"**From the arXiv paper:**\\nThe original MNIST dataset contains a lot of handwritten digits. Members of the AI\/ML\/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. \\\"If it doesn't work on MNIST, it won't work at all\\\", they said. \\\"Well, if it does work on MNIST, it may still fail on others.\\\"\\nHere are some good reasons:\\n- MNIST is too easy. Convolutional nets can achieve 99.7% on MNIST. Classic machine learning algorithms can also achieve 97% easily. Check out our side-by-side benchmark for Fashion-MNIST vs. MNIST, and read \\\"Most pairs of MNIST digits can be distinguished pretty well by just one pixel.\\\"\\n- MNIST is overused. In this April 2017 Twitter thread, Google Brain research scientist and deep learning expert Ian Goodfellow calls for people to move away from MNIST.\\n- MNIST can not represent modern CV tasks, as noted in this April 2017 Twitter thread, deep learning expert\/Keras author Fran\\u00e7ois Chollet.\",\r\n \"subsections\": []\r\n },\r\n {\r\n \"name\": \"Source Data\",\r\n \"attributes\": \"\",\r\n \"subsections\": [\r\n {\r\n \"name\": \"Initial Data Collection and Normalization\",\r\n \"attributes\": \"**From the arXiv paper:**\\nFashion-MNIST is based on the assortment on Zalando\\u2019s website. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. 
The original picture has a light-gray background (hexadecimal color: #fdfdfd) and stored in 762 \\u00d7 1000 JPEG format. For efficiently serving different frontend components, the original picture is resampled with multiple resolutions, e.g. large, medium, small, thumbnail and tiny.\\nWe use the front look thumbnail images of 70,000 unique products to build Fashion-MNIST. Those products come from different gender groups: men, women, kids and neutral. In particular, whitecolor products are not included in the dataset as they have low contrast to the background. The thumbnails (51 \\u00d7 73) are then fed into the following conversion pipeline:\\n1. Converting the input to a PNG image.\\n2. Trimming any edges that are close to the color of the corner pixels. The \\u201ccloseness\\u201d is defined by the distance within 5% of the maximum possible intensity in RGB space.\\n3. Resizing the longest edge of the image to 28 by subsampling the pixels, i.e. some rows and columns are skipped over.\\n4. Sharpening pixels using a Gaussian operator of the radius and standard deviation of 1.0, with increasing effect near outlines.\\n5. Extending the shortest edge to 28 and put the image to the center of the canvas.\\n6. Negating the intensities of the image.\\n7. Converting the image to 8-bit grayscale pixels.\",\r\n \"subsections\": []\r\n },\r\n {\r\n \"name\": \"Who are the source image producers?\",\r\n \"attributes\": \"**From the arXiv paper:**\\nEvery fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit.\",\r\n \"subsections\": []\r\n }\r\n ]\r\n },\r\n {\r\n \"name\": \"Annotations\",\r\n \"attributes\": \"\",\r\n \"subsections\": [\r\n {\r\n \"name\": \"Annotation process\",\r\n \"attributes\": \"**From the arXiv paper:**\\nFor the class labels, they use the silhouette code of the product. The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. Each product Zalando is the Europe\\u2019s largest online fashion platform. 
Each product contains only one silhouette code.\",\r\n \"subsections\": []\r\n },\r\n {\r\n \"name\": \"Who are the annotators?\",\r\n \"attributes\": \"**From the arXiv paper:**\\nThe silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando.\",\r\n \"subsections\": []\r\n }\r\n ]\r\n },\r\n {\r\n \"name\": \"Personal and Sensitive Information\",\r\n \"attributes\": \"[More Information Needed]\",\r\n \"subsections\": []\r\n }\r\n ]\r\n },\r\n {\r\n \"name\": \"Considerations for Using the Data\",\r\n \"attributes\": \"\",\r\n \"subsections\": [\r\n {\r\n \"name\": \"Social Impact of Dataset\",\r\n \"attributes\": \"[More Information Needed]\",\r\n \"subsections\": []\r\n },\r\n {\r\n \"name\": \"Discussion of Biases\",\r\n \"attributes\": \"[More Information Needed]\",\r\n \"subsections\": []\r\n },\r\n {\r\n \"name\": \"Other Known Limitations\",\r\n \"attributes\": \"[More Information Needed]\",\r\n \"subsections\": []\r\n }\r\n ]\r\n },\r\n {\r\n \"name\": \"Additional Information\",\r\n \"attributes\": \"\",\r\n \"subsections\": [\r\n {\r\n \"name\": \"Dataset Curators\",\r\n \"attributes\": \"Han Xiao and Kashif Rasul and Roland Vollgraf\",\r\n \"subsections\": []\r\n },\r\n {\r\n \"name\": \"Licensing Information\",\r\n \"attributes\": \"MIT Licence\",\r\n \"subsections\": []\r\n },\r\n {\r\n \"name\": \"Citation Information\",\r\n \"attributes\": \"@article{DBLP:journals\/corr\/abs-1708-07747,\\n author = {Han Xiao and\\n Kashif Rasul and\\n Roland Vollgraf},\\n title = {Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning\\n Algorithms},\\n journal = {CoRR},\\n volume = {abs\/1708.07747},\\n year = {2017},\\n url = {http:\/\/arxiv.org\/abs\/1708.07747},\\n archivePrefix = {arXiv},\\n eprint = {1708.07747},\\n timestamp = {Mon, 13 Aug 2018 16:47:27 +0200},\\n biburl = {https:\/\/dblp.org\/rec\/bib\/journals\/corr\/abs-1708-07747},\\n bibsource = {dblp computer science bibliography, https:\/\/dblp.org}\\n}\",\r\n \"subsections\": []\r\n },\r\n {\r\n \"name\": \"Contributions\",\r\n \"attributes\": \"Thanks to [@gchhablani](https:\/\/github.com\/gchablani) for adding this dataset.\",\r\n \"subsections\": []\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n ]\r\n}\r\n```\r\n\r\nThanks,\r\nGunjan","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2121\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2120","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2120\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2120\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2120\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2120","id":841954521,"node_id":"MDU6SXNzdWU4NDE5NTQ1MjE=","number":2120,"title":"dataset viewer does not work anymore 
","user":{"login":"dorost1234","id":79165106,"node_id":"MDQ6VXNlcjc5MTY1MTA2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/79165106?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dorost1234","html_url":"https:\/\/github.com\/dorost1234","followers_url":"https:\/\/api.github.com\/users\/dorost1234\/followers","following_url":"https:\/\/api.github.com\/users\/dorost1234\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dorost1234\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dorost1234\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dorost1234\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dorost1234\/orgs","repos_url":"https:\/\/api.github.com\/users\/dorost1234\/repos","events_url":"https:\/\/api.github.com\/users\/dorost1234\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dorost1234\/received_events","type":"User","site_admin":false},"labels":[{"id":2107841032,"node_id":"MDU6TGFiZWwyMTA3ODQxMDMy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/nlp-viewer","name":"nlp-viewer","color":"94203D","default":false,"description":""}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting :) We're looking into it","Back up. "],"created_at":1616764933000,"updated_at":1616773942000,"closed_at":1616773942000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI normally use this link to see all datasets and how I can load them \r\n\r\n\r\nhttps:\/\/huggingface.co\/datasets\/viewer\/\r\n\r\nNow I am getting \r\n\r\n502 Bad Gateway\r\nnginx\/1.18.0 (Ubuntu)\r\n\r\ncould you bring this webpage back ? 
this was very helpful @lhoestq \r\nthanks for your help ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2120\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2119","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2119\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2119\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2119\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2119","id":841567199,"node_id":"MDExOlB1bGxSZXF1ZXN0NjAxMjg2MjIy","number":2119,"title":"copy.deepcopy os.environ instead of copy","user":{"login":"NihalHarish","id":5506053,"node_id":"MDQ6VXNlcjU1MDYwNTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5506053?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/NihalHarish","html_url":"https:\/\/github.com\/NihalHarish","followers_url":"https:\/\/api.github.com\/users\/NihalHarish\/followers","following_url":"https:\/\/api.github.com\/users\/NihalHarish\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/NihalHarish\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/NihalHarish\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/NihalHarish\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/NihalHarish\/orgs","repos_url":"https:\/\/api.github.com\/users\/NihalHarish\/repos","events_url":"https:\/\/api.github.com\/users\/NihalHarish\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/NihalHarish\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1616731118000,"updated_at":1616771632000,"closed_at":1616771632000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2119","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2119","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2119.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2119.patch"},"body":"Fixes: https:\/\/github.com\/huggingface\/datasets\/issues\/2115\r\n\r\n- bug fix: using environ.copy() returns a dict.\r\n- using deepcopy(environ) returns an `_Environ` object\r\n- Changing the datatype of the _Environ object can break code, if subsequent libraries perform operations using APIs exclusive to the environ object, like `environ.get()` for example.\r\n\r\n\r\nTesting:\r\n\r\nTested the change on my terminal:\r\n\r\n```\r\n>>> import os\r\n>>> from copy import deepcopy\r\n>>> x = deepcopy(os.environ)\r\n>>> y = os.environ\r\n>>> x is y\r\nFalse\r\n>>> isinstance(x, type(os.environ))\r\nTrue\r\n>>> z = os.environ.copy()\r\n>>> isinstance(z, type(os.environ))\r\nFalse\r\n>>> isinstance(z, dict)\r\nTrue\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2119\/timeline","performed_via_github_app":null,"is_pull_request":true} 
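To make the type distinction behind the `copy.deepcopy(os.environ)` change concrete, here is a small standard-library sketch (illustrative only, not the PR diff). It shows that a shallow `os.environ.copy()` produces a plain `dict`, which rejects the keyword-style `get(..., default=None)` call reported in issue 2115, while `deepcopy` keeps the `os._Environ` type:

```python
# Sketch of why deepcopy(os.environ) is preferred over os.environ.copy() (PRs 2118/2119).
# Standard library only; not the actual patch.
import os
from copy import deepcopy

shallow = os.environ.copy()   # plain dict: loses os._Environ behaviour
deep = deepcopy(os.environ)   # still an os._Environ instance

print(type(shallow).__name__)  # dict
print(type(deep).__name__)     # _Environ

# Mapping.get (used by os._Environ) accepts a keyword default; dict.get does not.
print(deep.get("HOME", default=None))  # works
try:
    shallow.get("HOME", default=None)
except TypeError as err:
    print(err)  # keyword default rejected: the TypeError reported in issue 2115
```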
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2118","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2118\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2118\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2118\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2118","id":841563329,"node_id":"MDExOlB1bGxSZXF1ZXN0NjAxMjgzMDUx","number":2118,"title":"Remove os.environ.copy in Dataset.map","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I thought deepcopy on `os.environ` is unsafe (see [this](https:\/\/stackoverflow.com\/questions\/13142972\/using-copy-deepcopy-on-os-environ-in-python-appears-broken)), but I can't replicate the behavior described in the linked SO thread.\r\n\r\nClosing this one because #2119 has a much cleaner approach."],"created_at":1616730497000,"updated_at":1616760203000,"closed_at":1616760005000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2118","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2118","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2118.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2118.patch"},"body":"Replace `os.environ.copy` with in-place modification\r\nFixes #2115 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2118\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2117","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2117\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2117\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2117\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2117","id":841535283,"node_id":"MDU6SXNzdWU4NDE1MzUyODM=","number":2117,"title":"load_metric from local \"glue.py\" meet error 'NoneType' object is not 
callable","user":{"login":"Frankie123421","id":54012361,"node_id":"MDQ6VXNlcjU0MDEyMzYx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/54012361?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Frankie123421","html_url":"https:\/\/github.com\/Frankie123421","followers_url":"https:\/\/api.github.com\/users\/Frankie123421\/followers","following_url":"https:\/\/api.github.com\/users\/Frankie123421\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Frankie123421\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Frankie123421\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Frankie123421\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Frankie123421\/orgs","repos_url":"https:\/\/api.github.com\/users\/Frankie123421\/repos","events_url":"https:\/\/api.github.com\/users\/Frankie123421\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Frankie123421\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@Frankie123421 what was the resolution to this?","> @Frankie123421 what was the resolution to this?\r\n\r\nuse glue_metric.py instead of glue.py in load_metric","thank you!"],"created_at":1616726122000,"updated_at":1629927845000,"closed_at":1616726426000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"actual_task = \"mnli\" if task == \"mnli-mm\" else task\r\ndataset = load_dataset(path='\/home\/glue.py', name=actual_task)\r\nmetric = load_metric(path='\/home\/glue.py', name=actual_task)\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n in \r\n 1 actual_task = \"mnli\" if task == \"mnli-mm\" else task\r\n 2 dataset = load_dataset(path='\/home\/jcli\/glue.py', name=actual_task)\r\n----> 3 metric = load_metric(path='\/home\/jcli\/glue.py', name=actual_task)\r\n\r\n~\/anaconda3\/envs\/pytorch\/lib\/python3.6\/site-packages\/datasets\/load.py in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, script_version, **metric_init_kwargs)\r\n 508 keep_in_memory=keep_in_memory,\r\n 509 experiment_id=experiment_id,\r\n--> 510 **metric_init_kwargs,\r\n 511 )\r\n 512 \r\n\r\nTypeError: 'NoneType' object is not callable\r\n\r\nPlease help","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2117\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2116","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2116\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2116\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2116\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2116","id":841481292,"node_id":"MDU6SXNzdWU4NDE0ODEyOTI=","number":2116,"title":"Creating custom dataset results in error while calling the map() 
function","user":{"login":"GeetDsa","id":13940397,"node_id":"MDQ6VXNlcjEzOTQwMzk3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13940397?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/GeetDsa","html_url":"https:\/\/github.com\/GeetDsa","followers_url":"https:\/\/api.github.com\/users\/GeetDsa\/followers","following_url":"https:\/\/api.github.com\/users\/GeetDsa\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/GeetDsa\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/GeetDsa\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/GeetDsa\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/GeetDsa\/orgs","repos_url":"https:\/\/api.github.com\/users\/GeetDsa\/repos","events_url":"https:\/\/api.github.com\/users\/GeetDsa\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/GeetDsa\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi,\r\n\r\nthe `_data` attribute is missing due to `MyDataset.__init__` not calling the parent `__init__`. However, I don't think it's a good idea to subclass the `datasets.Dataset` class (e.g. it's kind of dangerous to override `datasets.Dataset.__getitem__`). Instead, it's better to follow the \"association over inheritance\" approach with a simple wrapper class that delegates calls to a wrapped `Dataset` (map, etc.). Btw, the library offers the `datasets.Dataset.from_pandas` class method to directly create a `datasets.Dataset` from the dataframe."],"created_at":1616719066000,"updated_at":1617201032000,"closed_at":1617201032000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"calling `map()` of `datasets` library results into an error while defining a Custom dataset.\r\nReproducible example:\r\n```\r\nimport datasets\r\nclass MyDataset(datasets.Dataset):\r\n\r\n def __init__(self, sentences):\r\n \"Initialization\"\r\n self.samples = sentences\r\n\r\n def __len__(self):\r\n \"Denotes the total number of samples\"\r\n return len(self.samples)\r\n\r\n def __getitem__(self, index):\r\n \"Generates one sample of data\"\r\n # Select sample\r\n # Load data and get label\r\n samples = self.samples[index]\r\n\r\n return samples\r\n\r\ndef preprocess_function_train(examples):\r\n inputs = examples\r\n labels = [example+tokenizer.eos_token for example in examples ]\r\n inputs = tokenizer(inputs, max_length=30, padding=True, truncation=True)\r\n labels = tokenizer(labels, max_length=30, padding=True, truncation=True)\r\n model_inputs = inputs\r\n model_inputs[\"labels\"] = labels[\"input_ids\"]\r\n print(\"about to return\")\r\n return model_inputs\r\n\r\n\r\n##train[\"sentence\"] is dataframe column\r\ntrain_dataset = MyDataset(train['sentence'].values.tolist())\r\ntrain_dataset = train_dataset.map(\r\n preprocess_function,\r\n batched = True,\r\n batch_size=32\r\n )\r\n```\r\n\r\nStack trace of error:\r\n```\r\nTraceback (most recent call last):\r\n File \"dir\/train_generate.py\", line 362, in \r\n main()\r\n File \"dir\/train_generate.py\", line 245, in main\r\n train_dataset = train_dataset.map(\r\n File \"anaconda_dir\/anaconda3\/envs\/env1\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 1244, in map\r\n return self._map_single(\r\n File \"anaconda_dir\/anaconda3\/envs\/env1\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 149, in wrapper\r\n unformatted_columns = 
set(self.column_names) - set(self._format_columns or [])\r\n File \"anaconda_dir\/anaconda3\/envs\/env1\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 526, in column_names\r\n return self._data.column_names\r\nAttributeError: 'MyDataset' object has no attribute '_data'\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2116\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2115","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2115\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2115\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2115\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2115","id":841283974,"node_id":"MDU6SXNzdWU4NDEyODM5NzQ=","number":2115,"title":"The datasets.map() implementation modifies the datatype of os.environ object","user":{"login":"leleamol","id":19983848,"node_id":"MDQ6VXNlcjE5OTgzODQ4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19983848?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/leleamol","html_url":"https:\/\/github.com\/leleamol","followers_url":"https:\/\/api.github.com\/users\/leleamol\/followers","following_url":"https:\/\/api.github.com\/users\/leleamol\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/leleamol\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/leleamol\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/leleamol\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/leleamol\/orgs","repos_url":"https:\/\/api.github.com\/users\/leleamol\/repos","events_url":"https:\/\/api.github.com\/users\/leleamol\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/leleamol\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1616704159000,"updated_at":1616771632000,"closed_at":1616771632000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"In our testing, we noticed that the datasets.map() implementation is modifying the datatype of python os.environ object from '_Environ' to 'dict'.\r\n\r\nThis causes following function calls to fail as follows:\r\n\r\n` \r\n x = os.environ.get(\"TEST_ENV_VARIABLE_AFTER_dataset_map\", default=None)\r\n TypeError: get() takes no keyword arguments\r\n`\r\nIt looks like the following line in datasets.map implementation introduced this functionality.\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/0cb1ac06acb0df44a1cf4128d03a01865faa2504\/src\/datasets\/arrow_dataset.py#L1421\r\n\r\nHere is the test script to reproduce this error. 
\r\n\r\n\r\n```\r\nfrom datasets import load_dataset\r\nfrom transformers import AutoTokenizer\r\nimport os\r\n\r\n\r\ndef test_train():\r\n model_checkpoint = \"distilgpt2\"\r\n datasets = load_dataset('wikitext', 'wikitext-2-raw-v1')\r\n tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True)\r\n tokenizer.pad_token = tokenizer.eos_token\r\n\r\n\r\n def tokenize_function(examples):\r\n y = tokenizer(examples['text'], truncation=True, max_length=64)\r\n return y\r\n\r\n x = os.environ.get(\"TEST_ENV_VARIABLE_BEFORE_dataset_map\", default=None)\r\n print(f\"Testing environment variable: TEST_ENV_VARIABLE_BEFORE_dataset_map {x}\")\r\n print(f\"Data type of os.environ before datasets.map = {os.environ.__class__.__name__}\")\r\n datasets.map(tokenize_function, batched=True, num_proc=2, remove_columns=[\"text\"])\r\n print(f\"Data type of os.environ after datasets.map = {os.environ.__class__.__name__}\")\r\n x = os.environ.get(\"TEST_ENV_VARIABLE_AFTER_dataset_map\", default=None)\r\n print(f\"Testing environment variable: TEST_ENV_VARIABLE_AFTER_dataset_map {x}\")\r\n\r\n\r\nif __name__ == \"__main__\":\r\n test_train()\r\n\r\n\r\n```\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2115\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2114","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2114\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2114\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2114\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2114","id":841207878,"node_id":"MDExOlB1bGxSZXF1ZXN0NjAwOTc1MTA3","number":2114,"title":"Support for legal NLP datasets (EURLEX, ECtHR cases and EU-REG-IR)","user":{"login":"iliaschalkidis","id":1626984,"node_id":"MDQ6VXNlcjE2MjY5ODQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1626984?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/iliaschalkidis","html_url":"https:\/\/github.com\/iliaschalkidis","followers_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/followers","following_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/orgs","repos_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/repos","events_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> Awesome thank you :)\r\n> This is really cool\r\n> \r\n> I left a few comments.\r\n> \r\n> Also it looks like the dummy data are quite big (100-200KB each). Can you try to reduce their sizes please ? For example I noticed that all the jsonl files inside the `dummy_data.zip` files have 20 lines. 
Can you only keep 2 lines instead ?\r\n\r\nHi @lhoestq, I did my best to improve the README files, while I also decreased dummy data examples. I included one more legal dataset.","@lhoestq thanks for your review.\r\n\r\n I shortened the examples in README files and removed `DEFAULT_CONFIG_BUILDER` from `eu_regulatory_ir.py`."],"created_at":1616697617000,"updated_at":1617187130000,"closed_at":1617187130000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2114","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2114","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2114.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2114.patch"},"body":"Add support for two legal NLP datasets:\r\n\r\n- EURLEX (https:\/\/www.aclweb.org\/anthology\/P19-1636\/)\r\n- ECtHR cases (https:\/\/arxiv.org\/abs\/2103.13084)\r\n- EU-REG-IR (https:\/\/arxiv.org\/abs\/2101.10726)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2114\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2113","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2113\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2113\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2113\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2113","id":841191303,"node_id":"MDExOlB1bGxSZXF1ZXN0NjAwOTYxMDEz","number":2113,"title":"Implement Dataset as context manager","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1616696310000,"updated_at":1617190214000,"closed_at":1617179411000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2113","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2113","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2113.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2113.patch"},"body":"When used as context manager, it would be safely deleted if some exception is raised.\r\n\r\nThis will avoid \r\n> During handling of the above 
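For the legal NLP datasets added in the pull request above, a hedged usage sketch follows. The dataset identifiers are assumptions inferred from the script names mentioned in the thread (for example `eu_regulatory_ir.py`); the exact names, and whether a config name is required, should be checked against the library once the scripts are merged:

```python
# Hedged usage sketch for the legal NLP datasets from PR 2114.
# Dataset ids are assumptions based on the script names in the thread;
# some of these scripts may require an explicit config name.
from datasets import load_dataset

eurlex = load_dataset("eurlex")       # EURLEX (ACL 2019 paper linked in the PR)
ecthr = load_dataset("ecthr_cases")   # ECtHR cases (arXiv:2103.13084)
print(eurlex)
print(ecthr)
```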
exception, another exception occurred:","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2113\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2112","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2112\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2112\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2112\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2112","id":841098008,"node_id":"MDExOlB1bGxSZXF1ZXN0NjAwODgyMjA0","number":2112,"title":"Support for legal NLP datasets (EURLEX and ECtHR cases)","user":{"login":"iliaschalkidis","id":1626984,"node_id":"MDQ6VXNlcjE2MjY5ODQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1626984?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/iliaschalkidis","html_url":"https:\/\/github.com\/iliaschalkidis","followers_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/followers","following_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/orgs","repos_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/repos","events_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/iliaschalkidis\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1616689457000,"updated_at":1616697571000,"closed_at":1616697271000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2112","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2112","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2112.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2112.patch"},"body":"Add support for two legal NLP datasets:\r\n- EURLEX (https:\/\/www.aclweb.org\/anthology\/P19-1636\/)\r\n- ECtHR cases (https:\/\/arxiv.org\/abs\/2103.13084)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2112\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2111","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2111\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2111\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2111\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2111","id":841082087,"node_id":"MDExOlB1bGxSZXF1ZXN0NjAwODY4OTg5","number":2111,"title":"Compute WER metric 
iteratively","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I discussed with Patrick and I think we could have a nice addition: have a parameter `concatenate_texts` that, if `True`, uses the old implementation.\r\n\r\nBy default `concatenate_texts` would be `False`, so that sentences are evaluated independently, and to save resources (the WER computation has a quadratic complexity).\r\n\r\nSome users might still want to use the old implementation.","@lhoestq @patrickvonplaten are you sure of the parameter name `concatenate_texts`? I was thinking about something like `iter`...","Not sure about the name, if you can improve it feel free to do so ^^'\r\nThe old implementation computes the WER on the concatenation of all the input texts, while the new one makes WER measures computation independent for each reference\/prediction pair.\r\nThat's why I thought of `concatenate_texts`","@lhoestq yes, but the end user does not necessarily know the details of the implementation of the WER computation.\r\n\r\nFrom the end user perspective I think it might make more sense: how do you want to compute the metric?\r\n- all in once, more RAM memory needed?\r\n- iteratively, less RAM requirements?\r\n\r\nBecause of that I was thinking of something like `iter` or `iterative`...","Personally like `concatenate_texts` better since I feel like `iter` or `iterate` are quite vague","Therefore, you can merge... ;)","Ok ! 
merging :)"],"created_at":1616688408000,"updated_at":1617693643000,"closed_at":1617693643000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2111","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2111","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2111.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2111.patch"},"body":"Compute WER metric iteratively to avoid MemoryError.\r\n\r\nFix #2078.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2111\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2110","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2110\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2110\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2110\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2110","id":840794995,"node_id":"MDExOlB1bGxSZXF1ZXN0NjAwNjI1NDQ5","number":2110,"title":"Fix incorrect assertion in builder.py","user":{"login":"dreamgonfly","id":2340721,"node_id":"MDQ6VXNlcjIzNDA3MjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2340721?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dreamgonfly","html_url":"https:\/\/github.com\/dreamgonfly","followers_url":"https:\/\/api.github.com\/users\/dreamgonfly\/followers","following_url":"https:\/\/api.github.com\/users\/dreamgonfly\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dreamgonfly\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dreamgonfly\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dreamgonfly\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dreamgonfly\/orgs","repos_url":"https:\/\/api.github.com\/users\/dreamgonfly\/repos","events_url":"https:\/\/api.github.com\/users\/dreamgonfly\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dreamgonfly\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! The SplitInfo is not always available. By default you would get `split_info.num_examples == 0`\r\nSo unfortunately we can't use this assertion you suggested","> Hi ! The SplitInfo is not always available. 
By default you would get `split_info.num_examples == 0`\r\n> So unfortunately we can't use this assertion you suggested\r\n\r\nThen it would be better to just remove the assertion, because the existing assertion does nothing."],"created_at":1616668760000,"updated_at":1618234383000,"closed_at":1618234383000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2110","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2110","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2110.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2110.patch"},"body":"Fix incorrect num_examples comparison assertion in builder.py","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2110\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2109","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2109\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2109\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2109\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2109","id":840746598,"node_id":"MDExOlB1bGxSZXF1ZXN0NjAwNTg1MzM5","number":2109,"title":"Add more issue templates and customize issue template chooser","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["If you agree, I could also add a link to [Discussions](https:\/\/github.com\/huggingface\/datasets\/discussions) in order to reinforce the use of Discussion to make Questions (instead of Issues).\r\n\r\nI could also add some other templates: Bug, Feature Request,...","@theo-m we wrote our same comments at the same time... 
\ud83d\ude09 "],"created_at":1616665313000,"updated_at":1618813211000,"closed_at":1618813211000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2109","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2109","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2109.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2109.patch"},"body":"When opening an issue, it is not evident for the users how to choose a blank issue template. There is a link at the bottom of all the other issue templates (`Don\u2019t see your issue here? Open a blank issue.`), but this is not very visible for users. This is the reason why many users finally chose the `add-dataset` template instead (this is more visible) for issues that indeed are not requesting the addition of a new dataset.\r\n\r\n~~With this PR, the default blank issue template would be as visible as the other templates (as the `add-dataset` template), thus making easier for the users to choose it.~~\r\n\r\nWith this PR:\r\n- more issue templates, besides `add-dataset`, are added: `bug-report` and `feature-request`\r\n- the issue template chooser is customized, so that it now includes a link to `Discussions` for questions","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2109\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2108","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2108\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2108\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2108\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2108","id":840181055,"node_id":"MDU6SXNzdWU4NDAxODEwNTU=","number":2108,"title":"Is there a way to use a GPU only when training an Index in the process of add_faisis_index?","user":{"login":"shamanez","id":16892570,"node_id":"MDQ6VXNlcjE2ODkyNTcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16892570?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/shamanez","html_url":"https:\/\/github.com\/shamanez","followers_url":"https:\/\/api.github.com\/users\/shamanez\/followers","following_url":"https:\/\/api.github.com\/users\/shamanez\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/shamanez\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/shamanez\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/shamanez\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/shamanez\/orgs","repos_url":"https:\/\/api.github.com\/users\/shamanez\/repos","events_url":"https:\/\/api.github.com\/users\/shamanez\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/shamanez\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892912,"node_id":"MDU6TGFiZWwxOTM1ODkyOTEy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/question","name":"question","color":"d876e3","default":true,"description":"Further information is 
requested"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1616621536000,"updated_at":1616653903000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Motivation - Some FAISS indexes like IVF consist of the training step that clusters the dataset into a given number of indexes. It would be nice if we can use a GPU to do the training step and covert the index back to CPU as mention in [this faiss example](https:\/\/gist.github.com\/mdouze\/46d6bbbaabca0b9778fca37ed2bcccf6).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2108\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2107","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2107\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2107\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2107\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2107","id":839495825,"node_id":"MDExOlB1bGxSZXF1ZXN0NTk5NTAxODE5","number":2107,"title":"Metadata validation","user":{"login":"theo-m","id":17948980,"node_id":"MDQ6VXNlcjE3OTQ4OTgw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17948980?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/theo-m","html_url":"https:\/\/github.com\/theo-m","followers_url":"https:\/\/api.github.com\/users\/theo-m\/followers","following_url":"https:\/\/api.github.com\/users\/theo-m\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/theo-m\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/theo-m\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/theo-m\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/theo-m\/orgs","repos_url":"https:\/\/api.github.com\/users\/theo-m\/repos","events_url":"https:\/\/api.github.com\/users\/theo-m\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/theo-m\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"SBrandeis","id":33657802,"node_id":"MDQ6VXNlcjMzNjU3ODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33657802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SBrandeis","html_url":"https:\/\/github.com\/SBrandeis","followers_url":"https:\/\/api.github.com\/users\/SBrandeis\/followers","following_url":"https:\/\/api.github.com\/users\/SBrandeis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SBrandeis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SBrandeis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SBrandeis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SBrandeis\/orgs","repos_url":"https:\/\/api.github.com\/users\/SBrandeis\/repos","events_url":"https:\/\/api.github.com\/users\/SBrandeis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SBrandeis\/received_events","type":"User","site_admin":false},"assignees":[{"login":"SBrandeis","id":33657802,"node_id":"MDQ6VXNlcjMzNjU3ODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33657802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SBran
deis","html_url":"https:\/\/github.com\/SBrandeis","followers_url":"https:\/\/api.github.com\/users\/SBrandeis\/followers","following_url":"https:\/\/api.github.com\/users\/SBrandeis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SBrandeis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SBrandeis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SBrandeis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SBrandeis\/orgs","repos_url":"https:\/\/api.github.com\/users\/SBrandeis\/repos","events_url":"https:\/\/api.github.com\/users\/SBrandeis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SBrandeis\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["> Also I was wondering this is really needed to have `utils.metadata` as a submodule of `datasets` ? This is only used by the CI so I'm not sure we should have this in the actual `datasets` package.\r\n\r\nI'm unclear on the suggestion, would you rather have a root-level `.\/metadata.py` file? I think it's well where it is, if anything we could move it out of utils and into `datasets` as it could be used by e.g. `DatasetDict` so that users can pull the metadata easily rather than have to reparse the readme.\r\n","Ok that makes sense if we want to have functions that parse the metadata for users","Hi @theo-m @lhoestq \r\n\r\nThis seems very interesting. Should I add the descriptions to the PR on `datasets-tagging`? Alternatively, I can also create a google-sheet\/markdown table :)\r\n\r\nSorry for the delay in responding.\r\n\r\nThanks,\r\nGunjan","> Hi @theo-m @lhoestq\r\n> \r\n> This seems very interesting. Should I add the descriptions to the PR on `datasets-tagging`? Alternatively, I can also create a google-sheet\/markdown table :)\r\n> \r\n> Sorry for the delay in responding.\r\n> \r\n> Thanks,\r\n> Gunjan\r\n\r\nHi @gchhablani, yes I think at the moment the best solution is for you to write in `datasets-tagging`, as the PR will allow us to discuss and review, even though the work will be ported to this repo in the end. \r\nOr we wait for this to be merged and you reopen the PR here, your call :)","cc @abhi1thakur "],"created_at":1616575961000,"updated_at":1619425634000,"closed_at":1619425633000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2107","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2107","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2107.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2107.patch"},"body":"- `pydantic` metadata schema with dedicated validators against our taxonomy\r\n- ci script to validate new changes against this schema and start a vertuous loop\r\n- soft validation on tasks ids since we expect the taxonomy to undergo some changes in the near future\r\n\r\nfor reference with the current validation we have ~365~ 378 datasets with invalid metadata! 
full error report [_here_.](https:\/\/gist.github.com\/theo-m\/61b3c0c47fc6121d08d3174bd4c2a26b)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2107\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2106","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2106\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2106\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2106\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2106","id":839084264,"node_id":"MDU6SXNzdWU4MzkwODQyNjQ=","number":2106,"title":"WMT19 Dataset for Kazakh-English is not formatted correctly","user":{"login":"trina731","id":22580542,"node_id":"MDQ6VXNlcjIyNTgwNTQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22580542?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/trina731","html_url":"https:\/\/github.com\/trina731","followers_url":"https:\/\/api.github.com\/users\/trina731\/followers","following_url":"https:\/\/api.github.com\/users\/trina731\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/trina731\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/trina731\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/trina731\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/trina731\/orgs","repos_url":"https:\/\/api.github.com\/users\/trina731\/repos","events_url":"https:\/\/api.github.com\/users\/trina731\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/trina731\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Thanks for reporting\r\n\r\nBy looking at the raw `news-commentary-v14.en-kk.tsv` file, it looks like there are at least 17 lines with this issue.\r\nMoreover these issues are not always the same:\r\n- L97 is only `kk` text and must be appended at the end of the `kk` text of the **next** line\r\n- L2897 is only `kk` text and must be appended at the end of the `kk` text of the **previous** line\r\n- L1247 and L1248 are only `kk` texts and must be inserted at the **beginning** of the `kk` text of the next line\r\n- (and there are many others)\r\n\r\nIt would be nice to have a corrected version of this file ! 
The file is available in the `wmt\/news-commentary` repository on the Datasets Hub here:\r\nhttps:\/\/huggingface.co\/datasets\/wmt\/news-commentary\/tree\/main\/v14\/training\r\n\r\nThen maybe we can notify the WMT authors and host the corrected version somewhere"],"created_at":1616530487000,"updated_at":1616708180000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"In addition to the bug of languages being switched from Issue @415, there are incorrect translations in the dataset because the English-Kazakh translations have a one off formatting error.\r\n\r\nThe News Commentary v14 parallel data set for kk-en from http:\/\/www.statmt.org\/wmt19\/translation-task.html has a bug here:\r\n\r\n> Line 94. The Swiss National Bank, for its part, has been battling with the deflationary effects of the franc\u2019s dramatic appreciation over the past few years.\t\u0428\u0432\u0435\u0439\u0446\u0430\u0440\u0438\u044f\u043d\u044b\u04a3 \u04b0\u043b\u0442\u0442\u044b\u049b \u0431\u0430\u043d\u043a\u0456 \u04e9\u0437 \u0442\u0430\u0440\u0430\u043f\u044b\u043d\u0430\u043d, \u0441\u043e\u04a3\u0493\u044b \u0431\u0456\u0440\u043d\u0435\u0448\u0435 \u0436\u044b\u043b \u0456\u0448\u0456\u043d\u0434\u0435 \u0444\u0440\u0430\u043d\u043a \u049b\u04b1\u043d\u044b\u043d\u044b\u04a3 \u049b\u0430\u0442\u0442\u044b \u04e9\u0441\u0443\u0456\u043d\u0456\u04a3 \u0434\u0435\u0444\u043b\u044f\u0446\u0438\u044f\u043b\u044b\u049b \u04d9\u0441\u0435\u0440\u0456\u043c\u0435\u043d \u043a\u04af\u0440\u0435\u0441\u0456\u043f \u043a\u0435\u043b\u0435\u0434\u0456.\r\n> \r\n> Line 95. \u0414\u0435\u0444\u043b\u044f\u0446\u0438\u044f\u043b\u044b\u049b \u043a\u04af\u0448\u0442\u0435\u0440 2008 \u0436\u044b\u043b\u044b \u0442\u0435\u0440\u0435\u04a3 \u0436\u04d9\u043d\u0435 \u04b1\u0437\u0430\u049b\u049b\u0430 \u0441\u043e\u0437\u044b\u043b\u0493\u0430\u043d \u0436\u0430\u04bb\u0430\u043d\u0434\u044b\u049b \u0434\u0430\u0493\u0434\u0430\u0440\u044b\u0441\u049b\u0430 \u0431\u0430\u0439\u043b\u0430\u043d\u044b\u0441\u0442\u044b \u043e\u0440\u044b\u043d \u0430\u043b\u0493\u0430\u043d \u0456\u0440\u0456 \u044d\u043a\u043e\u043d\u043e\u043c\u0438\u043a\u0430\u043b\u044b\u049b \u0436\u04d9\u043d\u0435 \u049b\u0430\u0440\u0436\u044b\u043b\u044b\u049b \u043e\u0440\u044b\u043d \u0430\u043b\u043c\u0430\u0441\u0443\u043b\u0430\u0440\u0434\u044b\u04a3 \u0430\u0440\u049b\u0430\u0441\u044b\u043d\u0434\u0430 \u0431\u043e\u0441\u0430\u0442\u044b\u043b\u0434\u044b. \u0416\u0435\u043a\u0435 \u049b\u0430\u0440\u044b\u0437 \u049b\u0430\u0440\u0430\u0436\u0430\u0442\u044b \u04af\u043b\u0435\u0441\u0456\u043d\u0456\u04a3 \u049b\u044b\u0441\u049b\u0430\u0440\u0443\u044b \u043e\u0440\u0442\u0430\u043b\u044b\u049b \u0431\u0430\u043d\u043a\u0442\u0456\u04a3 \u0440\u0435\u0444\u043b\u044f\u0446\u0438\u044f\u0493\u0430 \u0436\u04b1\u043c\u0441\u0430\u043b\u0493\u0430\u043d \u043a\u04af\u0448-\u0436\u0456\u0433\u0435\u0440\u0456\u043d\u0435 \u0442\u04b1\u0440\u0430\u049b\u0442\u044b \u0441\u043e\u049b\u049b\u0430\u043d \u049b\u0430\u0440\u0441\u044b \u0436\u0435\u043b\u0434\u0435\u0439 \u0431\u043e\u043b\u0434\u044b.\r\n> \r\n> Line 96. The deflationary forces were unleashed by the major economic and financial dislocations associated with the deep and protracted global crisis that erupted in 2008. 
Private deleveraging became a steady headwind to central bank efforts to reflate.\t2009 \u0436\u044b\u043b\u044b, \u0430\u043b\u0434\u044b\u04a3\u0493\u044b \u049b\u0430\u0442\u0430\u0440\u043b\u044b \u044d\u043a\u043e\u043d\u043e\u043c\u0438\u043a\u0430\u043b\u0430\u0440\u0434\u044b\u04a3 \u0448\u0430\u043c\u0430\u043c\u0435\u043d \u04af\u0448\u0442\u0435\u043d \u0431\u0456\u0440\u0456 \u0431\u0430\u0493\u0430\u043d\u044b\u04a3 \u0442\u04e9\u043c\u0435\u043d\u0434\u0435\u0443\u0456\u043d \u043a\u04e9\u0440\u0441\u0435\u0442\u0442\u0456, \u0431\u04b1\u043b \u0441\u043e\u0493\u044b\u0441\u0442\u0430\u043d \u043a\u0435\u0439\u0456\u043d\u0433\u0456 \u0436\u043e\u0493\u0430\u0440\u044b \u0434\u0435\u04a3\u0433\u0435\u0439 \u0431\u043e\u043b\u0434\u044b.\r\n\r\nAs you can see, line 95 has only the Kazakh translation which should be part of line 96. This causes all of the following English-Kazakh translation pairs to be one off rendering ALL of those translations incorrect. This issue was not fixed when the dataset was imported to Huggingface. By running this code \r\n\r\n```\r\nimport datasets\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('wmt19', 'kk-en')\r\nfor key in dataset['train']['translation']:\r\n if 'The deflationary forces were unleashed by the major economic and financial dislocations associated with the deep and protracted global crisis that erupted in 2008.' in key['kk']:\r\n print(key['en'])\r\n print(key['kk'])\r\n break\r\n```\r\nwe get: \r\n> 2009 \u0436\u044b\u043b\u044b, \u0430\u043b\u0434\u044b\u04a3\u0493\u044b \u049b\u0430\u0442\u0430\u0440\u043b\u044b \u044d\u043a\u043e\u043d\u043e\u043c\u0438\u043a\u0430\u043b\u0430\u0440\u0434\u044b\u04a3 \u0448\u0430\u043c\u0430\u043c\u0435\u043d \u04af\u0448\u0442\u0435\u043d \u0431\u0456\u0440\u0456 \u0431\u0430\u0493\u0430\u043d\u044b\u04a3 \u0442\u04e9\u043c\u0435\u043d\u0434\u0435\u0443\u0456\u043d \u043a\u04e9\u0440\u0441\u0435\u0442\u0442\u0456, \u0431\u04b1\u043b \u0441\u043e\u0493\u044b\u0441\u0442\u0430\u043d \u043a\u0435\u0439\u0456\u043d\u0433\u0456 \u0436\u043e\u0493\u0430\u0440\u044b \u0434\u0435\u04a3\u0433\u0435\u0439 \u0431\u043e\u043b\u0434\u044b.\r\n> The deflationary forces were unleashed by the major economic and financial dislocations associated with the deep and protracted global crisis that erupted in 2008. Private deleveraging became a steady headwind to central bank efforts to reflate.\r\n\r\nwhich shows that the issue still persists in the Huggingface dataset. 
The Kazakh sentence matches up to the next English sentence in the dataset instead of the current one.\r\n\r\nPlease let me know if there's you have any ideas to fix this one-off error from the dataset or if this can be fixed by Huggingface.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2106\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2105","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2105\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2105\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2105\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2105","id":839059226,"node_id":"MDU6SXNzdWU4MzkwNTkyMjY=","number":2105,"title":"Request to remove S2ORC dataset","user":{"login":"kyleclo","id":13603748,"node_id":"MDQ6VXNlcjEzNjAzNzQ4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13603748?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/kyleclo","html_url":"https:\/\/github.com\/kyleclo","followers_url":"https:\/\/api.github.com\/users\/kyleclo\/followers","following_url":"https:\/\/api.github.com\/users\/kyleclo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/kyleclo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/kyleclo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/kyleclo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/kyleclo\/orgs","repos_url":"https:\/\/api.github.com\/users\/kyleclo\/repos","events_url":"https:\/\/api.github.com\/users\/kyleclo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/kyleclo\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hello @kyleclo! Currently, we are getting the data from your bucket, so if you remove it the HF script won't work anymore :) \r\n\r\nUntil you solve things on your end, @lhoestq suggested we just return a warning message when people try to load that dataset from HF. What would you like it to say?","Hi @kyleclo, as of today, you have not removed your bucket data yet, and therefore HuggingFace can download it from there.\r\n\r\nIs it OK? Are you planning to eventually delete it? Thank you.","Hi! Sorry I missed @yjernite 's previous message, thanks for responding! \r\n\r\nIs there an option where we can keep our data in our bucket, but the HF script no longer pulls data from it? "],"created_at":1616528586000,"updated_at":1628104682000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi! I was wondering if it's possible to remove [S2ORC](https:\/\/huggingface.co\/datasets\/s2orc) from hosting on Huggingface's platform? Unfortunately, there are some legal considerations about how we make this data available. Happy to add back to Huggingface's platform once we work out those hurdles! 
Thanks!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2105\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2104","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2104\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2104\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2104\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2104","id":839027834,"node_id":"MDU6SXNzdWU4MzkwMjc4MzQ=","number":2104,"title":"Trouble loading wiki_movies","user":{"login":"adityaarunsinghal","id":35391599,"node_id":"MDQ6VXNlcjM1MzkxNTk5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35391599?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/adityaarunsinghal","html_url":"https:\/\/github.com\/adityaarunsinghal","followers_url":"https:\/\/api.github.com\/users\/adityaarunsinghal\/followers","following_url":"https:\/\/api.github.com\/users\/adityaarunsinghal\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/adityaarunsinghal\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/adityaarunsinghal\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/adityaarunsinghal\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/adityaarunsinghal\/orgs","repos_url":"https:\/\/api.github.com\/users\/adityaarunsinghal\/repos","events_url":"https:\/\/api.github.com\/users\/adityaarunsinghal\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/adityaarunsinghal\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! `wiki_movies` was added in `datasets==1.2.0`. However it looks like you have `datasets==1.1.2`.\r\n\r\nTo use `wiki_movies`, please update `datasets` with\r\n```\r\npip install --upgrade datasets\r\n```","Thanks a lot! That solved it and I was able to upload a model trained on it as well :)"],"created_at":1616525994000,"updated_at":1617664646000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hello,\r\nI am trying to load_dataset(\"wiki_movies\") and it gives me this error - \r\n\r\n`FileNotFoundError: Couldn't find file locally at wiki_movies\/wiki_movies.py, or remotely at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.2\/datasets\/wiki_movies\/wiki_movies.py or https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/datasets\/datasets\/wiki_movies\/wiki_movies.py`\r\n\r\nTrying to do `python run_mlm.py \\\r\n --model_name_or_path roberta-base \\\r\n --dataset_name wiki_movies \\` also gives the same error. \r\n\r\nIs this something on my end? From what I can tell, this dataset was re-added by @lhoestq a few months ago. 
\r\nThank you!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2104\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2103","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2103\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2103\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2103\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2103","id":838946916,"node_id":"MDU6SXNzdWU4Mzg5NDY5MTY=","number":2103,"title":"citation, homepage, and license fields of `dataset_info.json` are duplicated many times","user":{"login":"samsontmr","id":15007950,"node_id":"MDQ6VXNlcjE1MDA3OTUw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15007950?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/samsontmr","html_url":"https:\/\/github.com\/samsontmr","followers_url":"https:\/\/api.github.com\/users\/samsontmr\/followers","following_url":"https:\/\/api.github.com\/users\/samsontmr\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/samsontmr\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/samsontmr\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/samsontmr\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/samsontmr\/orgs","repos_url":"https:\/\/api.github.com\/users\/samsontmr\/repos","events_url":"https:\/\/api.github.com\/users\/samsontmr\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/samsontmr\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"},{"id":1935892877,"node_id":"MDU6TGFiZWwxOTM1ODkyODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/good%20first%20issue","name":"good first issue","color":"7057ff","default":true,"description":"Good for newcomers"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting :)\r\nMaybe we can concatenate fields only if they are different.\r\n\r\nCurrently this is done here:\r\n\r\nhttps:\/\/github.com\/huggingface\/nlp\/blob\/349ac4398a3bcae6356f14c5754483383a60e8a4\/src\/datasets\/info.py#L180-L196\r\n\r\nThis can be a good first contribution to the library.\r\nPlease comment if you'd like to improve this and open a PR :)"],"created_at":1616519889000,"updated_at":1617719999000,"closed_at":1617719999000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"This happens after a `map` operation when `num_proc` is set to `>1`. 
I tested this by cleaning up the json before running the `map` op on the dataset so it's unlikely it's coming from an earlier concatenation.\r\n\r\nExample result:\r\n```\r\n\"citation\": \"@ONLINE {wikidump,\\n author = {Wikimedia Foundation},\\n title = {Wikimedia Downloads},\\n url = {https:\/\/dumps.wikimedia.org}\\n}\\n\\n@ONLINE {wikidump,\\n author = {Wikimedia Foundation},\\n title = {Wikimedia Downloads},\\n url = {https:\/\/dumps.wikimedia.org}\\n}\\n\\n@ONLINE {wikidump,\\n author = {Wikimedia Foundation},\\n title = {Wikimedia Downloads},\\n url = {https:\/\/dumps.wikimedia.org}\\n}\\n\\n@ONLINE {wikidump,\\n author = {Wikimedia Foundation},\\n title = {Wikimedia Downloads},\\n url = {https:\/\/dumps.wikimedia.org}\\n}\\n\\n@ONLINE {wikidump,\\n author = {Wikimedia Foundation},\\n title = {Wikimedia Downloads},\\n url = {https:\/\/dumps.wikimedia.org}\\n}\\n\\n@ONLINE {wikidump,\\n author = {Wikimedia Foundation},\\n title = {Wikimedia Downloads},\\n url = {https:\/\/dumps.wikimedia.org}\\n}\\n\\n@ONLINE {wikidump,\\n author = {Wikimedia Foundation},\\n title = {Wikimedia Downloads},\\n url = {https:\/\/dumps.wikimedia.org}\\n}\\n\\n@ONLINE {wikidump,\\n author = {Wikimedia Foundation},\\n title = {Wikimedia Downloads},\\n url = {https:\/\/dumps.wikimedia.org}\\n}\\n\\n@ONLINE {wikidump,\\n author = {Wikimedia Foundation},\\n title = {Wikimedia Downloads},\\n url = {https:\/\/dumps.wikimedia.org}\\n}\\n\\n@ONLINE {wikidump,\\n author = {Wikimedia Foundation},\\n title = {Wikimedia Downloads},\\n url = {https:\/\/dumps.wikimedia.org}\\n}\\n\\n@ONLINE {wikidump,\\n author = {Wikimedia Foundation},\\n title = {Wikimedia Downloads},\\n url = {https:\/\/dumps.wikimedia.org}\\n}\\n\\n@ONLINE {wikidump,\\n author = {Wikimedia Foundation},\\n title = {Wikimedia Downloads},\\n\r\n```\r\n\r\n@lhoestq and I believe this is happening due to the fields being concatenated `num_proc` times.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2103\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2102","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2102\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2102\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2102\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2102","id":838794090,"node_id":"MDExOlB1bGxSZXF1ZXN0NTk4OTEyNzUw","number":2102,"title":"Move Dataset.to_csv to csv 
module","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":2851292821,"node_id":"MDU6TGFiZWwyODUxMjkyODIx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/refactoring","name":"refactoring","color":"B67A40","default":false,"description":"Restructuring existing code without changing its external behavior"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1616510146000,"updated_at":1616594855000,"closed_at":1616594854000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2102","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2102","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2102.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2102.patch"},"body":"Move the implementation of `Dataset.to_csv` to module `datasets.io.csv`.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2102\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2101","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2101\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2101\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2101\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2101","id":838586184,"node_id":"MDExOlB1bGxSZXF1ZXN0NTk4NzQzMDM4","number":2101,"title":"MIAM dataset - new citation 
details","user":{"login":"eusip","id":1551356,"node_id":"MDQ6VXNlcjE1NTEzNTY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1551356?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/eusip","html_url":"https:\/\/github.com\/eusip","followers_url":"https:\/\/api.github.com\/users\/eusip\/followers","following_url":"https:\/\/api.github.com\/users\/eusip\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/eusip\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/eusip\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/eusip\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/eusip\/orgs","repos_url":"https:\/\/api.github.com\/users\/eusip\/repos","events_url":"https:\/\/api.github.com\/users\/eusip\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/eusip\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi !\r\nLooks like there's a unicode error in the new citation in the miam.py file.\r\nCould you try to fix it ? Not sure from which character it comes from though\r\n\r\nYou can test if it works on your side with\r\n```\r\nRUN_SLOW=1 pytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_miam\r\n```","Unicode error resolved!"],"created_at":1616496083000,"updated_at":1616522890000,"closed_at":1616522890000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2101","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2101","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2101.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2101.patch"},"body":"Hi @lhoestq, I have updated the citations to reference an OpenReview preprint.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2101\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2100","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2100\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2100\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2100\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2100","id":838574631,"node_id":"MDExOlB1bGxSZXF1ZXN0NTk4NzMzOTM0","number":2100,"title":"Fix deprecated warning message and 
docstring","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I have a question: what about `dictionary_encode_column_`?\r\n- It is deprecated in Dataset, but it recommends using a non-existing method instead: `Dataset.dictionary_encode_column` does not exist.\r\n- It is NOT deprecated in DatasetDict.","`dictionary_encode_column_ ` should be deprecated since it never worked correctly. It will be removed in a major release.\r\nThis has to be deprecated in `DatasetDict` as well.\r\nAnd `Dataset.dictionary_encode_column` doesn't exist indeed.","Thanks @lhoestq. 
I have fixed deprecated for `dictionary_encode_column_`."],"created_at":1616495272000,"updated_at":1616573981000,"closed_at":1616522629000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2100","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2100","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2100.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2100.patch"},"body":"Fix deprecated warnings:\r\n- Use deprecated Sphinx directive in docstring\r\n- Fix format of deprecated message\r\n- Raise FutureWarning","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2100\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2099","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2099\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2099\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2099\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2099","id":838523819,"node_id":"MDU6SXNzdWU4Mzg1MjM4MTk=","number":2099,"title":"load_from_disk takes a long time to load local dataset","user":{"login":"samsontmr","id":15007950,"node_id":"MDQ6VXNlcjE1MDA3OTUw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15007950?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/samsontmr","html_url":"https:\/\/github.com\/samsontmr","followers_url":"https:\/\/api.github.com\/users\/samsontmr\/followers","following_url":"https:\/\/api.github.com\/users\/samsontmr\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/samsontmr\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/samsontmr\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/samsontmr\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/samsontmr\/orgs","repos_url":"https:\/\/api.github.com\/users\/samsontmr\/repos","events_url":"https:\/\/api.github.com\/users\/samsontmr\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/samsontmr\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi !\r\nCan you share more information about the features of your dataset ? You can get them by printing `my_dataset.features`\r\nCan you also share the code of your `map` function ?","It is actually just the tokenized `wikipedia` dataset with `input_ids`, `attention_mask`, etc, with one extra column which is a list of integers. The `text` column is removed during tokenization.\r\n\r\n```\r\ndef add_len_and_seq(example):\r\n end_idx = example['input_ids'].index(SEP)\r\n example['actual_len'] = end_idx-1\r\n seq_len = len(example['input_ids'])\r\n \r\n\r\n example['seq'] = [PAD_ID] + [np.uint8(example['some_integer'])]*(end_idx-1) + [PAD_ID]*(seq_len-end_idx)\r\n \r\n return example\r\n```\r\n","Is `PAD_ID` a python integer ? You need all the integers in `example['seq']` to have the same type.\r\nDoes this work if you remove the `np.uint8` and use python integers instead ?","yup I casted it to `np.uint8` outside the function where it was defined. 
It was originally using python integers.","Strangely, even when I manually created `np.arrays` of specific `dtypes`, the types in the final `dataset_info.json` that gets written are still `int64`.\r\n\r\nUpdate: I tried creating lists of `int8`s and got the same result.","Yes this is a known issue: #625 \r\nWe're working on making the precision kept for numpy :)\r\nTo specify the precision of the integers, currently one needs to specify the output features with `.map(..., features=output_features)`","Do you know what step is taking forever in the code ?\r\nWhat happens if you interrupt the execution of the dataset loading ?","After a synchronous discussion, we found that the cache file sizes have an enormous effect on the loading speed: smaller cache files result in faster load times. `num_proc` controls the number of cache files that are being written and is inversely proportional to the individual file size. In other words, increase `num_proc` for smaller cache files :)\r\n\r\nMaybe this can be highlighted somewhere in the docs."],"created_at":1616491717000,"updated_at":1616519536000,"closed_at":1616519536000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.uint8` after seeing #1985 but it doesn't seem to be helping (the total size seems to be smaller though).\r\n\r\nDoes anyone know what could be the issue? Or does the casting of that column to `int8` need to happen in the function that writes the arrow table instead of in the `map` where I create the list of integers?\r\n\r\nTagging @lhoestq since you seem to be working on these issues and PRs :)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2099\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2098","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2098\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2098\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2098\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2098","id":838447959,"node_id":"MDU6SXNzdWU4Mzg0NDc5NTk=","number":2098,"title":"SQuAD version 
","user":{"login":"h-peng17","id":39556019,"node_id":"MDQ6VXNlcjM5NTU2MDE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/39556019?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/h-peng17","html_url":"https:\/\/github.com\/h-peng17","followers_url":"https:\/\/api.github.com\/users\/h-peng17\/followers","following_url":"https:\/\/api.github.com\/users\/h-peng17\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/h-peng17\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/h-peng17\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/h-peng17\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/h-peng17\/orgs","repos_url":"https:\/\/api.github.com\/users\/h-peng17\/repos","events_url":"https:\/\/api.github.com\/users\/h-peng17\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/h-peng17\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! This is 1.1 as specified by the download urls here:\r\n\r\nhttps:\/\/github.com\/huggingface\/nlp\/blob\/349ac4398a3bcae6356f14c5754483383a60e8a4\/datasets\/squad\/squad.py#L50-L55","Got it. Thank you~"],"created_at":1616485674000,"updated_at":1616752134000,"closed_at":1616752134000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi~ \r\nI want train on squad dataset. What's the version of the squad? Is it 1.1 or 1.0? I'm new in QA, I don't find some descriptions about it. ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2098\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2097","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2097\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2097\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2097\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2097","id":838105289,"node_id":"MDExOlB1bGxSZXF1ZXN0NTk4MzM4MTA3","number":2097,"title":"fixes issue #1110 by descending further if `obj[\"_type\"]` is a 
dict","user":{"login":"dcfidalgo","id":15979778,"node_id":"MDQ6VXNlcjE1OTc5Nzc4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15979778?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dcfidalgo","html_url":"https:\/\/github.com\/dcfidalgo","followers_url":"https:\/\/api.github.com\/users\/dcfidalgo\/followers","following_url":"https:\/\/api.github.com\/users\/dcfidalgo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dcfidalgo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dcfidalgo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dcfidalgo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dcfidalgo\/orgs","repos_url":"https:\/\/api.github.com\/users\/dcfidalgo\/repos","events_url":"https:\/\/api.github.com\/users\/dcfidalgo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dcfidalgo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1616446855000,"updated_at":1616446871000,"closed_at":1616446871000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2097","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2097","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2097.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2097.patch"},"body":"Check metrics","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2097\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2096","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2096\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2096\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2096\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2096","id":838038379,"node_id":"MDU6SXNzdWU4MzgwMzgzNzk=","number":2096,"title":"CoNLL 2003 dataset not including German","user":{"login":"rxian","id":8406802,"node_id":"MDQ6VXNlcjg0MDY4MDI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8406802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rxian","html_url":"https:\/\/github.com\/rxian","followers_url":"https:\/\/api.github.com\/users\/rxian\/followers","following_url":"https:\/\/api.github.com\/users\/rxian\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rxian\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rxian\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rxian\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rxian\/orgs","repos_url":"https:\/\/api.github.com\/users\/rxian\/repos","events_url":"https:\/\/api.github.com\/users\/rxian\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rxian\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new 
dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1616441036000,"updated_at":1617097535000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hello, thanks for all the work on developing and maintaining this amazing platform, which I am enjoying working with!\r\n\r\nI was wondering if there is a reason why the German CoNLL 2003 dataset is not included in the [repository](https:\/\/github.com\/huggingface\/datasets\/tree\/master\/datasets\/conll2003), since a copy of it could be found in some places on the internet such as GitHub? I could help adding the German data to the hub, unless there are some copyright issues that I am unaware of...\r\n\r\nThis is considering that many work use the union of CoNLL 2002 and 2003 datasets for comparing cross-lingual NER transfer performance in `en`, `de`, `es`, and `nl`. E.g., [XLM-R](https:\/\/www.aclweb.org\/anthology\/2020.acl-main.747.pdf).\r\n\r\n## Adding a Dataset\r\n- **Name:** CoNLL 2003 German\r\n- **Paper:** https:\/\/www.aclweb.org\/anthology\/W03-0419\/\r\n- **Data:** https:\/\/github.com\/huggingface\/datasets\/tree\/master\/datasets\/conll2003\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2096\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2093","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2093\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2093\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2093\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2093","id":837209211,"node_id":"MDExOlB1bGxSZXF1ZXN0NTk3NTgyNjUx","number":2093,"title":"Fix: Allows a feature to be named \"_type\"","user":{"login":"dcfidalgo","id":15979778,"node_id":"MDQ6VXNlcjE1OTc5Nzc4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15979778?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dcfidalgo","html_url":"https:\/\/github.com\/dcfidalgo","followers_url":"https:\/\/api.github.com\/users\/dcfidalgo\/followers","following_url":"https:\/\/api.github.com\/users\/dcfidalgo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dcfidalgo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dcfidalgo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dcfidalgo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dcfidalgo\/orgs","repos_url":"https:\/\/api.github.com\/users\/dcfidalgo\/repos","events_url":"https:\/\/api.github.com\/users\/dcfidalgo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dcfidalgo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Nice thank you !\r\nThis looks like a pretty simple yet effective fix ;)\r\nCould you just add a test in `test_features.py` to make sure that you can create `features` with a `_type` field and that it is possible to convert it as a dict and reload it ?\r\n```python\r\nfrom datasets import Features, Value\r\n\r\n# We usually use `asdict` on a `DatasetInfo` object which is a dataclass instance that contains the features.\r\n# So we need 
the conversion of features to dict to work.\r\n# You can test that using `dataclasses._asdict_inner`.\r\n# This is the function used by `dataclasses.asdict` to convert a dataclass instance attribute to a dict\r\nfrom dataclasses import _asdict_inner \r\n\r\nf = Features({\"_type\": Value(\"string\")})\r\nreloaded_f = Features.from_dict(_asdict_inner(f, dict))\r\nassert reloaded_f == f\r\n```","Sure, i will add a test. \r\nOne question: are the posted benchmarks reliable? The extra type check seems to add quite some overhead judging by the relative differences. Do you think this is an issue?","The benchmark has a bit of noise, the values are fine ;)\r\nespecially in the change you did since the overhead added is negligible.","Ok, i added the test you described above. \r\n\r\nI avoided importing the private `_asdict_inner` method and directly used the `DatasetInfo` class, if this is ok with you. Thanks a lot for your support during this PR!"],"created_at":1616368917000,"updated_at":1616682954000,"closed_at":1616682954000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2093","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2093","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2093.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2093.patch"},"body":"This PR tries to fix issue #1110. Sorry for taking so long to come back to this.\r\n\r\nIt's a simple fix, but i am not sure if it works for all possible types of `obj`. Let me know what you think @lhoestq ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2093\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2092","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2092\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2092\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2092\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2092","id":836984043,"node_id":"MDU6SXNzdWU4MzY5ODQwNDM=","number":2092,"title":"How to disable making arrow tables in load_dataset ?","user":{"login":"Jeevesh8","id":48825663,"node_id":"MDQ6VXNlcjQ4ODI1NjYz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/48825663?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Jeevesh8","html_url":"https:\/\/github.com\/Jeevesh8","followers_url":"https:\/\/api.github.com\/users\/Jeevesh8\/followers","following_url":"https:\/\/api.github.com\/users\/Jeevesh8\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Jeevesh8\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Jeevesh8\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Jeevesh8\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Jeevesh8\/orgs","repos_url":"https:\/\/api.github.com\/users\/Jeevesh8\/repos","events_url":"https:\/\/api.github.com\/users\/Jeevesh8\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Jeevesh8\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! 
We plan to add streaming features in the future.\r\n\r\nThis should allow to load a dataset instantaneously without generating the arrow table. The trade-off is that accessing examples from a streaming dataset must be done in an iterative way, and with an additional (but hopefully minor) overhead.\r\nWhat do you think about this ?\r\n\r\nIf you have ideas or suggestions of what you expect from such features as a user, feel free to share them, this is really valuable to us !","People mainly want this feature either because it takes too much time too make arrow tables, or they occupy too much memory on the disk. I think both the problem can be solved if we provide arrow tables themselves on datasets hub. Can we do this currently @lhoestq ? \r\n","@lhoestq I think the ```try_from_hf_gcs``` provide the same functionality. What all datasets are available on HF GCS? Are all the datasets on huggingFace datasets hub are made available on GCS, automatically?","Only datasets like wikipedia, wiki40b, wiki_dpr and natural questions are available already processed on the HF google storage. This is used to download directly the arrow file instead of building it from the original data files.","@lhoestq How can we make sure that the data we upload on HuggingFace hub is available in form of preprocessed arrow files ?","We're still working on this :) This will be available soon\r\nUsers will be able to put their processed arrow files on the Hub"],"created_at":1616302207000,"updated_at":1616783860000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2092\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2091","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2091\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2091\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2091\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2091","id":836831403,"node_id":"MDExOlB1bGxSZXF1ZXN0NTk3Mjk4ODI3","number":2091,"title":"Fix copy snippet in 
docs","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1616252902000,"updated_at":1616574050000,"closed_at":1616519911000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2091","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2091","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2091.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2091.patch"},"body":"With this change the lines starting with `...` in the code blocks can be properly copied to clipboard.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2091\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2090","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2090\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2090\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2090\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2090","id":836807498,"node_id":"MDExOlB1bGxSZXF1ZXN0NTk3MjgwNTEy","number":2090,"title":"Add machine translated multilingual STS benchmark 
dataset","user":{"login":"PhilipMay","id":229382,"node_id":"MDQ6VXNlcjIyOTM4Mg==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/229382?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/PhilipMay","html_url":"https:\/\/github.com\/PhilipMay","followers_url":"https:\/\/api.github.com\/users\/PhilipMay\/followers","following_url":"https:\/\/api.github.com\/users\/PhilipMay\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/PhilipMay\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/PhilipMay\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/PhilipMay\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/PhilipMay\/orgs","repos_url":"https:\/\/api.github.com\/users\/PhilipMay\/repos","events_url":"https:\/\/api.github.com\/users\/PhilipMay\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/PhilipMay\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hello dear maintainer, are there any comments or questions about this PR?","@iamollas thanks for the feedback. I did not see the template.\r\nI improved it...","Should be clean for merge IMO.","@lhoestq CI is green. ;-)","Thanks again ! this is awesome :)","Thanks for merging. :-)"],"created_at":1616246887000,"updated_at":1617024282000,"closed_at":1617022815000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2090","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2090","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2090.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2090.patch"},"body":"also see here https:\/\/github.com\/PhilipMay\/stsb-multi-mt","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2090\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2089","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2089\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2089\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2089\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2089","id":836788019,"node_id":"MDU6SXNzdWU4MzY3ODgwMTk=","number":2089,"title":"Add documentaton for dataset README.md 
files","user":{"login":"PhilipMay","id":229382,"node_id":"MDQ6VXNlcjIyOTM4Mg==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/229382?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/PhilipMay","html_url":"https:\/\/github.com\/PhilipMay","followers_url":"https:\/\/api.github.com\/users\/PhilipMay\/followers","following_url":"https:\/\/api.github.com\/users\/PhilipMay\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/PhilipMay\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/PhilipMay\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/PhilipMay\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/PhilipMay\/orgs","repos_url":"https:\/\/api.github.com\/users\/PhilipMay\/repos","events_url":"https:\/\/api.github.com\/users\/PhilipMay\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/PhilipMay\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! We are using the [datasets-tagging app](https:\/\/github.com\/huggingface\/datasets-tagging) to select the tags to add.\r\n\r\nWe are also adding the full list of tags in #2107 \r\nThis covers multilinguality, language_creators, licenses, size_categories and task_categories.\r\n\r\nIn general if you want to add a tag that doesn't exist (for example for a custom license) you must make it start with `other-` and then a custom tag name.\r\n\r\nedit (@theo-m) if you ever find yourself resorting to adding an `other-*` tag, please do ping us somewhere so we can think about adding it to the \"official\" list :)","@lhoestq hmm - ok thanks for the answer.\r\nTo be honest I am not sure if this issue can be closed now.\r\nI just wanted to point out that this should either be documented or linked in the documentation.\r\nIf you feel like it is (will be) please just close this.","We're still working on the validation+documentation in this.\r\nFeel free to keep this issue open till we've added them","@lhoestq what is the status on this? Did you add documentation?","Hi ! There's the tagging app at https:\/\/huggingface.co\/datasets\/tagging\/ that you can use.\r\nIt shows the list of all the tags you can use.\r\n\r\nIt is based on all the tag sets defined in this folder:\r\nhttps:\/\/github.com\/huggingface\/datasets\/tree\/master\/src\/datasets\/utils\/resources","@lhoestq is there something like this form Models?","I don't think so. Feel free to take a look at the tags of other models (example [here](https:\/\/huggingface.co\/bert-base-uncased\/blob\/main\/README.md)). But we should definitely have some docs or an app to write the tags. Feel free to open an issue in the `transformers` repo or in the `huggingface_hub` repo so we can discuss this"],"created_at":1616240678000,"updated_at":1626111700000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\nthe dataset README files have special headers.\r\nSomehow a documenation of the allowed values and tags is missing.\r\nCould you add that?\r\n\r\nJust to give some concrete questions that should be answered imo:\r\n- which values can be passted to multilinguality?\r\n- what should be passed to language_creators?\r\n- which values should licenses have? What do I say when it is a custom license? Should I add a link?\r\n- how should I choose size_categories ? 
What are valid ranges?\r\n- what are valid task_categories?\r\n\r\nThanks\r\nPhilip","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2089\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2088","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2088\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2088\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2088\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2088","id":836763733,"node_id":"MDExOlB1bGxSZXF1ZXN0NTk3MjQ4Mzk1","number":2088,"title":"change bibtex template to author instead of authors","user":{"login":"PhilipMay","id":229382,"node_id":"MDQ6VXNlcjIyOTM4Mg==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/229382?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/PhilipMay","html_url":"https:\/\/github.com\/PhilipMay","followers_url":"https:\/\/api.github.com\/users\/PhilipMay\/followers","following_url":"https:\/\/api.github.com\/users\/PhilipMay\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/PhilipMay\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/PhilipMay\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/PhilipMay\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/PhilipMay\/orgs","repos_url":"https:\/\/api.github.com\/users\/PhilipMay\/repos","events_url":"https:\/\/api.github.com\/users\/PhilipMay\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/PhilipMay\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Trailing whitespace was removed. 
So more changes in diff than just this fix."],"created_at":1616232224000,"updated_at":1616514012000,"closed_at":1616514012000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2088","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2088","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2088.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2088.patch"},"body":"Hi,\r\nIMO when using BibTex Author should be used instead of Authors.\r\nSee here: http:\/\/www.bibtex.org\/Using\/de\/\r\n\r\nThanks\r\nPhilip","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2088\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2087","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2087\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2087\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2087\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2087","id":836587392,"node_id":"MDExOlB1bGxSZXF1ZXN0NTk3MDg4NTk2","number":2087,"title":"Update metadata if dataset features are modified","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq I'll try to add a test later if you think this approach with the wrapper is good.","Awesome thank you !\r\nYes this approach with a wrapper is good :)","@lhoestq Added a test. 
To verify that this change fixes the problem, replace:\r\n```\r\n!pip install datasets==1.5\r\n```\r\nwith:\r\n```\r\n!pip install git+https:\/\/github.com\/mariosasko\/datasets-1.git@update-metadata\r\n```\r\nin the first cell of the notebook that is attached to the linked issue.\r\n\r\nThe CI failure is unrelated I think (building the docs locally doesn't throw an error).","The CI fail for the docs has been fixed on master.\r\nMerging :)"],"created_at":1616205923000,"updated_at":1617960333000,"closed_at":1617960333000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2087","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2087","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2087.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2087.patch"},"body":"This PR adds a decorator that updates the dataset metadata if a previously executed transform modifies its features. \r\nFixes #2083 \r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2087\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2086","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2086\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2086\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2086\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2086","id":836249587,"node_id":"MDExOlB1bGxSZXF1ZXN0NTk2Nzg0Mjcz","number":2086,"title":"change user permissions to -rw-r--r--","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I tried this with `ade_corpus_v2` dataset. `ade_corpus_v2-train.arrow` (downloaded dataset) and `cache-25d41a4d3c2d8a25.arrow` (ran a mapping function on the dataset) both had file permission with octal value of `0644`. 
"],"created_at":1616177696000,"updated_at":1616594344000,"closed_at":1616594344000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2086","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2086","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2086.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2086.patch"},"body":"Fix for #2065 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2086\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2085","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2085\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2085\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2085\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2085","id":835870994,"node_id":"MDExOlB1bGxSZXF1ZXN0NTk2NDYyOTc2","number":2085,"title":"Fix max_wait_time in requests","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1616152946000,"updated_at":1616513798000,"closed_at":1616513797000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2085","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2085","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2085.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2085.patch"},"body":"it was handled as a min time, not max cc @SBrandeis ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2085\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2084","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2084\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2084\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2084\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2084","id":835750671,"node_id":"MDU6SXNzdWU4MzU3NTA2NzE=","number":2084,"title":"CUAD - Contract Understanding Atticus Dataset","user":{"login":"theo-m","id":17948980,"node_id":"MDQ6VXNlcjE3OTQ4OTgw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17948980?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/theo-m","html_url":"https:\/\/github.com\/theo-m","followers_url":"https:\/\/api.github.com\/users\/theo-m\/followers","following_url":"https:\/\/api.github.com\/users\/theo-m\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/theo-m\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/theo-m\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/theo-m\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/theo-m\/orgs","repos_url":"https:\/\/api.github.com\/users\/theo-m\/repos","events_url":"https:\/\/api.github.com\/users\/theo-m\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/theo-m\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["+1 on this request"],"created_at":1616146063000,"updated_at":1618563044000,"closed_at":1618563044000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** CUAD - Contract Understanding Atticus Dataset\r\n- **Description:** As one of the only large, specialized NLP benchmarks annotated by experts, CUAD can serve as a challenging research benchmark for the broader NLP community.\r\n- **Paper:** https:\/\/arxiv.org\/abs\/2103.06268\r\n- **Data:** https:\/\/github.com\/TheAtticusProject\/cuad\/\r\n- **Motivation:** good domain specific datasets are valuable\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2084\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2083","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2083\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2083\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2083\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2083","id":835695425,"node_id":"MDU6SXNzdWU4MzU2OTU0MjU=","number":2083,"title":"`concatenate_datasets` throws error when changing the order of datasets to 
concatenate","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi,\r\n\r\nthis bug is related to `Dataset.{remove_columns, rename_column, flatten}` not propagating the change to the schema metadata when the info features are updated, so this line is the culprit:\r\n```python\r\ncommon_voice_train = common_voice_train.remove_columns(['client_id', 'up_votes', 'down_votes', 'age', 'gender', 'accent', 'locale', 'segment'])\r\n\r\n``` \r\nThe order is important because the resulting dataset inherits the schema metadata of the first dataset passed to the `concatenate_datasets(...)` function (`pa.concat_tables` [docs](https:\/\/arrow.apache.org\/docs\/python\/generated\/pyarrow.concat_tables.html)). 
I'll try to fix this ASAP."],"created_at":1616142588000,"updated_at":1617960333000,"closed_at":1617960333000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"Hey, \r\n\r\nI played around with the `concatenate_datasets(...)` function: https:\/\/huggingface.co\/docs\/datasets\/package_reference\/main_classes.html?highlight=concatenate_datasets#datasets.concatenate_datasets\r\n\r\nand noticed that when the order in which the datasets are concatenated changes an error is thrown where it should not IMO.\r\n\r\nHere is a google colab to reproduce the error: https:\/\/colab.research.google.com\/drive\/17VTFU4KQ735-waWZJjeOHS6yDTfV5ekK?usp=sharing","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2083\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2082","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2082\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2082\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2082\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2082","id":835401555,"node_id":"MDExOlB1bGxSZXF1ZXN0NTk2MDY1NTM0","number":2082,"title":"Updated card using information from data statement and datasheet","user":{"login":"mcmillanmajora","id":26722925,"node_id":"MDQ6VXNlcjI2NzIyOTI1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26722925?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mcmillanmajora","html_url":"https:\/\/github.com\/mcmillanmajora","followers_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/followers","following_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/orgs","repos_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/repos","events_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1616114378000,"updated_at":1616164149000,"closed_at":1616164149000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2082","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2082","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2082.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2082.patch"},"body":"I updated and clarified the REFreSD [data card](https:\/\/github.com\/mcmillanmajora\/datasets\/blob\/refresd_card\/datasets\/refresd\/README.md) with information from the Eleftheria's [website](https:\/\/elbria.github.io\/post\/refresd\/). 
I added brief descriptions where the initial card referred to the paper, and I also recreated some of the tables in the paper to show relevant dataset statistics.\r\n\r\nI'll email Eleftheria to see if she has any comments on the card. ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2082\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2081","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2081\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2081\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2081\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2081","id":835112968,"node_id":"MDExOlB1bGxSZXF1ZXN0NTk1ODE3OTM4","number":2081,"title":"Fix docstrings issues","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1616091061000,"updated_at":1617806263000,"closed_at":1617806263000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2081","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2081","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2081.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2081.patch"},"body":"Fix docstring issues.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2081\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2080","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2080\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2080\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2080\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2080","id":835023000,"node_id":"MDU6SXNzdWU4MzUwMjMwMDA=","number":2080,"title":"Multidimensional arrays in a Dataset","user":{"login":"vermouthmjl","id":3142085,"node_id":"MDQ6VXNlcjMxNDIwODU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3142085?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vermouthmjl","html_url":"https:\/\/github.com\/vermouthmjl","followers_url":"https:\/\/api.github.com\/users\/vermouthmjl\/followers","following_url":"https:\/\/api.github.com\/users\/vermouthmjl\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vermouthmjl\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vermouthmjl\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vermouthmjl\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vermouthmjl\/orgs","repos_url":"https:\/\/api.github.com\/users\/vermouthmjl\/repos","events_url":"https:\/\/api.github.com\/users\/vermouthmjl\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vermouthmjl\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi !\r\n\r\nThis is actually supported ! but not yet in `from_pandas`.\r\nYou can use `from_dict` for now instead:\r\n```python\r\nfrom datasets import Dataset, Array2D, Features, Value\r\nimport pandas as pd\r\nimport numpy as np\r\n\r\ndataset = {\r\n 'bbox': [\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]])\r\n ],\r\n 'input_ids': [1, 2, 3, 4]\r\n}\r\ndataset = Dataset.from_dict(dataset)\r\n```\r\n\r\nThis will work but to use it with the torch formatter you must specify the `Array2D` feature type in order to tell the shape:\r\n```python\r\nfrom datasets import Dataset, Array2D, Features, Value\r\nimport pandas as pd\r\nimport numpy as np\r\n\r\ndataset = {\r\n 'bbox': [\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]])\r\n ],\r\n 'input_ids': [1, 2, 3, 4]\r\n}\r\ndataset = Dataset.from_dict(dataset, features=Features({\r\n \"bbox\": Array2D(shape=(3, 4), dtype=\"int64\"),\r\n \"input_ids\": Value(\"int64\")\r\n}))\r\ndataset.set_format(\"torch\")\r\nprint(dataset[0]['bbox'])\r\n# tensor([[1, 2, 3, 4],\r\n# [1, 2, 3, 4],\r\n# [1, 2, 3, 4]])\r\n```\r\nIf you don't specify the `Array2D` feature type, then the inferred type will be Sequence(Sequence(Value(\"int64\"))) and therefore the torch formatter will return list of tensors","Thanks for the explanation. 
\r\nWith my original DataFrame, I did\r\n```\r\ndataset = dataset.to_dict(\"list\")\r\n```\r\nand then the rest of the transformation from dictionary works just fine."],"created_at":1616084954000,"updated_at":1616676413000,"closed_at":1616676413000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\n\r\nI'm trying to put together a `datasets.Dataset` to be used with LayoutLM which is available in `transformers`. This model requires as input the bounding boxes of each of the token of a sequence. This is when I realized that `Dataset` does not support multi-dimensional arrays as a value for a column in a row.\r\n\r\nThe following code results in conversion error in pyarrow (`pyarrow.lib.ArrowInvalid: ('Can only convert 1-dimensional array values', 'Conversion failed for column bbox with type object')`)\r\n\r\n```\r\nfrom datasets import Dataset\r\nimport pandas as pd\r\nimport numpy as np\r\n\r\ndataset = pd.DataFrame({\r\n 'bbox': [\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]])\r\n ],\r\n 'input_ids': [1, 2, 3, 4]\r\n})\r\ndataset = Dataset.from_pandas(dataset)\r\n```\r\n\r\nSince I wanted to use pytorch for the downstream training task, I also tried a few ways to directly put in a column of 2-D pytorch tensor in a formatted dataset, but I can only have a list of 1-D tensors, or a list of arrays, or a list of lists.\r\n\r\n```\r\nimport torch\r\nfrom datasets import Dataset\r\nimport pandas as pd\r\n\r\ndataset = pd.DataFrame({\r\n 'bbox': [\r\n [[1,2,3,4],[1,2,3,4],[1,2,3,4]],\r\n [[1,2,3,4],[1,2,3,4],[1,2,3,4]],\r\n [[1,2,3,4],[1,2,3,4],[1,2,3,4]],\r\n [[1,2,3,4],[1,2,3,4],[1,2,3,4]]\r\n ],\r\n 'input_ids': [1, 2, 3, 4]\r\n})\r\ndataset = Dataset.from_pandas(dataset)\r\n\r\ndef test(examples):\r\n return {'bbbox': torch.Tensor(examples['bbox'])}\r\ndataset = dataset.map(test)\r\nprint(dataset[0]['bbox'])\r\nprint(dataset[0]['bbbox'])\r\n\r\ndataset.set_format(type='torch', columns=['input_ids', 'bbox'], output_all_columns=True)\r\nprint(dataset[0]['bbox'])\r\nprint(dataset[0]['bbbox'])\r\n\r\ndef test2(examples):\r\n return {'bbbox': torch.stack(examples['bbox'])}\r\ndataset = dataset.map(test2)\r\n\r\nprint(dataset[0]['bbox'])\r\nprint(dataset[0]['bbbox'])\r\n```\r\n\r\nIs is possible to support n-D arrays\/tensors in datasets? 
\r\nIt seems that it can also be useful for this [feature request](https:\/\/github.com\/huggingface\/datasets\/issues\/263).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2080\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2079","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2079\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2079\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2079\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2079","id":834920493,"node_id":"MDExOlB1bGxSZXF1ZXN0NTk1NjU2MDQ5","number":2079,"title":"Refactorize Metric.compute signature to force keyword arguments only","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1616079950000,"updated_at":1616513504000,"closed_at":1616513504000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2079","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2079","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2079.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2079.patch"},"body":"Minor refactoring of Metric.compute signature to force the use of keyword arguments, by using the single star syntax.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2079\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2078","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2078\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2078\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2078\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2078","id":834694819,"node_id":"MDU6SXNzdWU4MzQ2OTQ4MTk=","number":2078,"title":"MemoryError when computing WER 
metric","user":{"login":"diego-fustes","id":5707233,"node_id":"MDQ6VXNlcjU3MDcyMzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5707233?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/diego-fustes","html_url":"https:\/\/github.com\/diego-fustes","followers_url":"https:\/\/api.github.com\/users\/diego-fustes\/followers","following_url":"https:\/\/api.github.com\/users\/diego-fustes\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/diego-fustes\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/diego-fustes\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/diego-fustes\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/diego-fustes\/orgs","repos_url":"https:\/\/api.github.com\/users\/diego-fustes\/repos","events_url":"https:\/\/api.github.com\/users\/diego-fustes\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/diego-fustes\/received_events","type":"User","site_admin":false},"labels":[{"id":2067393914,"node_id":"MDU6TGFiZWwyMDY3MzkzOTE0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/metric%20bug","name":"metric bug","color":"25b21e","default":false,"description":"A bug in a metric script"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi ! 
Thanks for reporting.\r\nWe're indeed using `jiwer` to compute the WER.\r\n\r\nMaybe instead of calling `jiwer.wer` once for all the preditions\/references we can compute the WER iteratively to avoid memory issues ? I'm not too familial with `jiwer` but this must be possible.\r\n\r\nCurrently the code to compute the WER is defined here:\r\n\r\nhttps:\/\/github.com\/huggingface\/nlp\/blob\/349ac4398a3bcae6356f14c5754483383a60e8a4\/metrics\/wer\/wer.py#L93-L94","Hi,\r\n\r\nI've just pushed a pull request that is related to this issue https:\/\/github.com\/huggingface\/datasets\/pull\/2169. It's not iterative, but it should avoid memory errors. It's based on the editdistance python library. An iterative implementation should be as easy as storing scores and words stepwise and dividing at the end. ","I see, this was solved by other thread. Ok, let me know if you want to switch the implementation for any reason :)","Thanks for diving into this anyway ^^'\r\nAs you said this actually got solved a few days ago","Someone created an issue https:\/\/github.com\/jitsi\/jiwer\/issues\/40 at jiwer which shows that this is still a problem in the current version. Would be curious to figure out how this can be fixed by jiwer... :) I assume that it runs of out memory because it's trying to compute the WER over (too many) test samples?","Hi !\r\n\r\nIt's computed iteratively so not sure what could go wrong\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/8afd0ba8c27800a55ea69d9fcd702dc97d9c16d8\/metrics\/wer\/wer.py#L100-L106\r\n\r\n@NiklasHoltmeyer what version of `datasets` are you running ?\r\n","One possible explanation might be that it is the user who is passing all the sentences in a single element to `wer.compute`?\r\n\r\nAs current implementation iterates over the elements of `predictions` and `references`, this can be problematic if `predictions` and `references` contain a single huge element each. \r\n\r\nThis could be the case, for example, with a single string with all sentences:\r\n```python\r\nresult[\"predicted\"] = \"One sentence. Other sentence.\"\r\n```\r\nor with a __double__ nested list of sentence lists\r\n```python\r\nresult[\"predicted\"] = [[ [\"One sentence.\"], [\"Other sentence\"] ]]\r\n```\r\n\r\nThe user should check the dimensions of the data structure passed to `predictions` and `references`.","Hi all,\r\n\r\nin my case I was using and older version of datasets and, as @albertvillanova points out, passing the full list of sentences for the metric calculation. The problem was in the way jiwer implements WER, as it tries to compute WER for the full list at once instead of doing it element-wise. I think that with the latest implementation of datasets, or by using the alternative WER function that I've contributed on this [pull request](https:\/\/github.com\/huggingface\/datasets\/pull\/2169) there shouldn't be memory errors.","@lhoestq i was using Datasets==1.5.0 with 1.6.1 it worked (atleast the first run) but 1.5.0 is not compatible with my preprocessing. 
i cant save my dataset to a parquet file while using the latest datasets version\r\n\r\n-> \r\n```\r\n File \"..\/preprocess_dataset.py\", line 132, in \r\n pq.write_table(train_dataset.data, f'{resampled_data_dir}\/{data_args.dataset_config_name}.train.parquet')\r\n File \"\/usr\/local\/lib\/python3.8\/dist-packages\/pyarrow\/parquet.py\", line 1674, in write_table\r\n writer.write_table(table, row_group_size=row_group_size)\r\n File \"\/usr\/local\/lib\/python3.8\/dist-packages\/pyarrow\/parquet.py\", line 588, in write_table\r\n self.writer.write_table(table, row_group_size=row_group_size)\r\nTypeError: Argument 'table' has incorrect type (expected pyarrow.lib.Table, got ConcatenationTable)\r\n``` \r\n\r\nif i do \r\n```\r\nimport pyarrow.parquet as pq\r\n...\r\n...\r\npq.write_table(train_dataset.data, 'train.parquet')\r\npq.write_table(eval_dataset.data, 'eval.parquet')\r\n```\r\n\r\nwhile using 1.6.1. and its working with 1.5.0\r\n","Hi ! You can pass dataset.data.table instead of dataset.data to pq.write_table","This seems to be working so far! Thanks!"],"created_at":1616067005000,"updated_at":1619857909000,"closed_at":1617693643000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation:\r\n\r\n```\r\nwer = load_metric(\"wer\")\r\nprint(wer.compute(predictions=result[\"predicted\"], references=result[\"target\"]))\r\n```\r\n\r\nHowever, I receive the following exception:\r\n\r\n`Traceback (most recent call last):\r\n File \"\/home\/diego\/IpGlobal\/wav2vec\/test_wav2vec.py\", line 51, in \r\n print(wer.compute(predictions=result[\"predicted\"], references=result[\"target\"]))\r\n File \"\/home\/diego\/miniconda3\/envs\/wav2vec3.6\/lib\/python3.6\/site-packages\/datasets\/metric.py\", line 403, in compute\r\n output = self._compute(predictions=predictions, references=references, **kwargs)\r\n File \"\/home\/diego\/.cache\/huggingface\/modules\/datasets_modules\/metrics\/wer\/73b2d32b723b7fb8f204d785c00980ae4d937f12a65466f8fdf78706e2951281\/wer.py\", line 94, in _compute\r\n return wer(references, predictions)\r\n File \"\/home\/diego\/miniconda3\/envs\/wav2vec3.6\/lib\/python3.6\/site-packages\/jiwer\/measures.py\", line 81, in wer\r\n truth, hypothesis, truth_transform, hypothesis_transform, **kwargs\r\n File \"\/home\/diego\/miniconda3\/envs\/wav2vec3.6\/lib\/python3.6\/site-packages\/jiwer\/measures.py\", line 192, in compute_measures\r\n H, S, D, I = _get_operation_counts(truth, hypothesis)\r\n File \"\/home\/diego\/miniconda3\/envs\/wav2vec3.6\/lib\/python3.6\/site-packages\/jiwer\/measures.py\", line 273, in _get_operation_counts\r\n editops = Levenshtein.editops(source_string, destination_string)\r\nMemoryError`\r\n\r\nMy system has more than 10GB of available RAM. 
Looking at the code, I think that it could be related to the way jiwer does the calculation, as it is pasting all the sentences in a single string before calling Levenshtein editops function.\r\n\r\n\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2078\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2077","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2077\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2077\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2077\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2077","id":834649536,"node_id":"MDExOlB1bGxSZXF1ZXN0NTk1NDI0MTYw","number":2077,"title":"Bump huggingface_hub version","user":{"login":"SBrandeis","id":33657802,"node_id":"MDQ6VXNlcjMzNjU3ODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33657802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SBrandeis","html_url":"https:\/\/github.com\/SBrandeis","followers_url":"https:\/\/api.github.com\/users\/SBrandeis\/followers","following_url":"https:\/\/api.github.com\/users\/SBrandeis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SBrandeis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SBrandeis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SBrandeis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SBrandeis\/orgs","repos_url":"https:\/\/api.github.com\/users\/SBrandeis\/repos","events_url":"https:\/\/api.github.com\/users\/SBrandeis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SBrandeis\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["\ud83d\udd25 "],"created_at":1616064874000,"updated_at":1616067206000,"closed_at":1616067206000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2077","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2077","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2077.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2077.patch"},"body":"`0.0.2 => 0.0.6`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2077\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2076","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2076\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2076\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2076\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2076","id":834445296,"node_id":"MDU6SXNzdWU4MzQ0NDUyOTY=","number":2076,"title":"Issue: Dataset download 
error","user":{"login":"XuhuiZhou","id":20436061,"node_id":"MDQ6VXNlcjIwNDM2MDYx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20436061?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/XuhuiZhou","html_url":"https:\/\/github.com\/XuhuiZhou","followers_url":"https:\/\/api.github.com\/users\/XuhuiZhou\/followers","following_url":"https:\/\/api.github.com\/users\/XuhuiZhou\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/XuhuiZhou\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/XuhuiZhou\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/XuhuiZhou\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/XuhuiZhou\/orgs","repos_url":"https:\/\/api.github.com\/users\/XuhuiZhou\/repos","events_url":"https:\/\/api.github.com\/users\/XuhuiZhou\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/XuhuiZhou\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @XuhuiZhou, thanks for reporting this issue. \r\n\r\nIndeed, the old links are no longer valid (404 Not Found error), and the script must be updated with the new links to Google Drive.","It would be nice to update the urls indeed !\r\n\r\nTo do this, you just need to replace the urls in `iwslt2017.py` and then update the dataset_infos.json file with\r\n```\r\ndatasets-cli test .\/datasets\/iwslt2017 --all_configs --save_infos --ignore_verifications\r\n```","Is this a command to update my local files or fix the file Github repo in general? (I am not so familiar with the datasets-cli command here)\r\n\r\nI also took a brief look at the **Sharing your dataset** section, looks like I could fix that locally and push it to the repo? I guess we are \"canonical\" category?","This command will update your local file. Then you can open a Pull Request to push your fix to the github repo :)\r\nAnd yes you are right, it is a \"canonical\" dataset, i.e. a dataset script defined in this github repo (as opposed to dataset repositories of users on the huggingface hub)","Hi, thanks for the answer. \r\n\r\nI gave a try to the problem today. But I encountered an upload error: \r\n\r\n```\r\ngit push -u origin fix_link_iwslt\r\nEnter passphrase for key '\/home2\/xuhuizh\/.ssh\/id_rsa': \r\nERROR: Permission to huggingface\/datasets.git denied to XuhuiZhou.\r\nfatal: Could not read from remote repository.\r\n\r\nPlease make sure you have the correct access rights\r\nand the repository exists.\r\n```\r\n\r\nAny insight here? 
\r\n\r\nBy the way, when I run the datasets-cli command, it shows the following error, but does not seem to be the error coming from `iwslt.py`\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"\/home2\/xuhuizh\/anaconda3\/envs\/UMT\/bin\/datasets-cli\", line 33, in \r\n sys.exit(load_entry_point('datasets', 'console_scripts', 'datasets-cli')())\r\n File \"\/home2\/xuhuizh\/projects\/datasets\/src\/datasets\/commands\/datasets_cli.py\", line 35, in main\r\n service.run()\r\n File \"\/home2\/xuhuizh\/projects\/datasets\/src\/datasets\/commands\/test.py\", line 141, in run\r\n try_from_hf_gcs=False,\r\n File \"\/home2\/xuhuizh\/projects\/datasets\/src\/datasets\/builder.py\", line 579, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/home2\/xuhuizh\/projects\/datasets\/src\/datasets\/builder.py\", line 639, in _download_and_prepare\r\n self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), \"dataset source files\"\r\n File \"\/home2\/xuhuizh\/projects\/datasets\/src\/datasets\/utils\/info_utils.py\", line 32, in verify_checksums\r\n raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))\r\ndatasets.utils.info_utils.ExpectedMoreDownloadedFiles: {'https:\/\/wit3.fbk.eu\/archive\/2017-01-trnmted\/\/texts\/DeEnItNlRo\/DeEnItNlRo\/DeEnItNlRo-DeEnItNlRo.tgz'}\r\n```","Hi ! To create a PR on this repo your must fork it and create a branch on your fork. See how to fork the repo [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md#start-by-preparing-your-environment).\r\nAnd to make the command work without the `ExpectedMoreDownloadedFiles` error, you just need to use the `--ignore_verifications` flag.","Hi @XuhuiZhou,\r\n\r\nAs @lhoestq has well explained, you need to fork HF's repository, create a feature branch in your fork, push your changes to it and then open a Pull Request to HF's upstream repository. This is so because at HuggingFace Datasets we follow a development model called \"Fork and Pull Model\". You can find more information here:\r\n- [Understanding the GitHub flow](https:\/\/guides.github.com\/introduction\/flow\/)\r\n- [Forking Projects](https:\/\/guides.github.com\/activities\/forking\/)\r\n\r\nAlternatively, if you find all these steps too complicated, you can use the GitHub official command line tool: [GitHub CLI](https:\/\/cli.github.com\/). 
Once installed, in order to create a Pull Request, you only need to use this command:\r\n```shell\r\ngh pr create --web\r\n```\r\nThis utility will automatically create the fork, push your changes and open a Pull Request, under the hood."],"created_at":1616049366000,"updated_at":1616413951000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"The download link in `iwslt2017.py` file does not seem to work anymore.\r\n\r\nFor example, `FileNotFoundError: Couldn't find file at https:\/\/wit3.fbk.eu\/archive\/2017-01-trnted\/texts\/zh\/en\/zh-en.tgz`\r\n\r\nWould be nice if we could modify it script and use the new downloadable link?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2076\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2075","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2075\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2075\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2075\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2075","id":834301246,"node_id":"MDU6SXNzdWU4MzQzMDEyNDY=","number":2075,"title":"ConnectionError: Couldn't reach common_voice.py","user":{"login":"LifaSun","id":6188893,"node_id":"MDQ6VXNlcjYxODg4OTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6188893?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/LifaSun","html_url":"https:\/\/github.com\/LifaSun","followers_url":"https:\/\/api.github.com\/users\/LifaSun\/followers","following_url":"https:\/\/api.github.com\/users\/LifaSun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/LifaSun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/LifaSun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/LifaSun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/LifaSun\/orgs","repos_url":"https:\/\/api.github.com\/users\/LifaSun\/repos","events_url":"https:\/\/api.github.com\/users\/LifaSun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/LifaSun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @LifaSun, thanks for reporting this issue.\r\n\r\nSometimes, GitHub has some connectivity problems. Could you confirm that the problem persists?","@albertvillanova Thanks! It works well now. "],"created_at":1616030346000,"updated_at":1616236181000,"closed_at":1616236181000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"When I run: \r\nfrom datasets import load_dataset, load_metric\r\n\r\ncommon_voice_train = load_dataset(\"common_voice\", \"zh-CN\", split=\"train+validation\")\r\ncommon_voice_test = load_dataset(\"common_voice\", \"zh-CN\", split=\"test\")\r\n\r\nGot:\r\nConnectionError: Couldn't reach https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/master\/datasets\/common_voice\/common_voice.py\r\n\r\nVersion:\r\n1.4.1\r\n\r\nThanks! 
@lhoestq @LysandreJik @thomwolf ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2075\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2074","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2074\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2074\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2074\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2074","id":834268463,"node_id":"MDExOlB1bGxSZXF1ZXN0NTk1MTIzMjYw","number":2074,"title":"Fix size categories in YAML Tags","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> It would be great if there was a way to make the task categories consistent too. For this, the streamlit app can look into all the datasets and check for existing categories and show them in the list. This may add some consistency.\r\n\r\nWe can also update the task lists here: https:\/\/github.com\/huggingface\/datasets-tagging\/blob\/main\/task_set.json","Hi @lhoestq,\r\n\r\nThanks for approving.\r\nHow do I add the new categories to the tagging app? What I have added is till `1T` and not `1M`.\r\n\r\nI'll also check the task list :)\r\n\r\nThanks,\r\nGunjan","I think you can change it here: https:\/\/github.com\/huggingface\/datasets-tagging\/blob\/main\/tagging_app.py#L412-L423","Hi @lhoestq,\r\n\r\nI have made a PR for size categories on `datasets-tagging`\r\n\r\nFor tags, I have thought of adding more tags and categories, based on what I know about the existing datasets, any list will not be exhaustive because the contributors can be very specific or very general. 
Hence, there could be a continuous process of evaluating existing tags and adding more and more.\r\n\r\n```json\r\n{\r\n \"image-classification\": {\r\n \"description\": \"image classification tasks\",\r\n \"options\": [\r\n \"multi-class-classification\",\r\n \"multi-label-classification\",\r\n \"other\"\r\n ]\r\n },\r\n \"conditional-text-generation\": {\r\n \"description\": \"data-to-text and text transduction tasks such as translation or summarization\",\r\n \"options\": [\r\n \"machine-translation\",\r\n \"sentence-splitting-fusion\",\r\n \"extractive-and-abstractive-summarization\",\r\n \"abstractive-summarization\",\r\n \"extractive-summarization\",\r\n \"multi-document-summarization\",\r\n \"table-to-text\",\r\n \"text-simplification\",\r\n \"explanation-generation\",\r\n \"stuctured-to-text\",\r\n \"other\"\r\n ]\r\n },\r\n \"conditional-speech-generation\": {\r\n \"description\": \"speech generation tasks\",\r\n \"options\": [\r\n \"text-to-speech\",\r\n \"speech-translation\",\r\n \"other\"\r\n ]\r\n },\r\n\r\n \"conditional-structure-generation\":{\r\n \"description\": \"text or speech to structured data\",\r\n \"options\":[\r\n \"knowlege-graph-mining\",\r\n \"code-generation\",\r\n ]\r\n },\r\n \"question-answering\": {\r\n \"description\": \"question answering tasks\",\r\n \"options\": [\r\n \"open-domain-qa\",\r\n \"closed-domain-qa\",\r\n \"multiple-choice-qa\",\r\n \"extractive-qa\",\r\n \"abstractive-qa\",\r\n \"conversational-qa\",\r\n \"multi-document-qa\",\r\n \"other\"\r\n ]\r\n },\r\n \"speech-classification\": {\r\n \"description\": \"speech to label tasks\",\r\n \"options\": [\r\n \"other\"\r\n ]\r\n },\r\n \"sequence-modeling\": {\r\n \"description\": \"such as language, speech or dialogue modeling\",\r\n \"options\": [\r\n \"dialogue-modeling\",\r\n \"language-modeling\",\r\n \"speech-modeling\",\r\n \"multi-turn\",\r\n \"slot-filling\",\r\n \"other\"\r\n ]\r\n },\r\n \"speech-recognition\": {\r\n \"description\": \"speech to text tasks\",\r\n \"options\": [\r\n \"automatic-speech-recognition\",\r\n \"other\"\r\n ]\r\n },\r\n \"structure-prediction\": {\r\n \"description\": \"predicting structural properties of the text, such as syntax\",\r\n \"options\": [\r\n \"coreference-resolution\",\r\n \"named-entity-recognition\",\r\n \"part-of-speech-tagging\",\r\n \"parsing\",\r\n \"sentence-segmentation\",\r\n \"single-span-prediction\",\r\n \"multi-span-prediction\",\r\n \"clause-or-phrase-segmentation\",\r\n \"dependency-parsing\",\r\n \"constituency-parsing\",\r\n \"other\"\r\n ]\r\n },\r\n\r\n \"text-classification\": {\r\n \"description\": \"predicting a class index or boolean value\",\r\n \"options\": [\r\n \"acceptability-classification\",\r\n \"entity-linking-classification\",\r\n \"relation-extraction\",\r\n \"common-sense-reasoning\",\r\n \"fact-checking\",\r\n \"intent-classification\",\r\n \"multi-class-classification\",\r\n \"multi-label-classification\",\r\n \"natural-language-inference\",\r\n \"semantic-similarity-classification\",\r\n \"sentiment-classification\",\r\n \"topic-classification\",\r\n \"emotion-classification\",\r\n \"token-classification\",\r\n \"word-sense-disambiguation\",\r\n \"offense-classification\",\r\n \"hate-speech-classification\",\r\n \"language-classification\",\r\n \"bias-classification\",\r\n \"other\"\r\n ]\r\n },\r\n \"text-retrieval\": {\r\n \"description\": \"information or text retrieval tasks\",\r\n \"options\": [\r\n \"document-retrieval\",\r\n \"utterance-retrieval\",\r\n \"entity-linking-retrieval\",\r\n 
\"fact-checking-retrieval\",\r\n \"other\"\r\n ]\r\n },\r\n \"text-scoring\": {\r\n \"description\": \"text scoring tasks, predicting a real valued score for some text\",\r\n \"options\": [\r\n \"semantic-similarity-scoring\",\r\n \"sentiment-scoring\",\r\n \"other\"\r\n ]\r\n },\r\n \"other\": {\r\n \"description\": \"raw data or other task families\",\r\n \"options\": [\r\n \"data-mining\",\r\n \"raw-text\",\r\n \"raw-speech\",\r\n \"raw-image\",\r\n \"other\"\r\n ]\r\n }\r\n}\r\n```\r\nI'll sort this when adding it to the .json. Also, I'll change categories according to this if this seems okay to you and commit it to this PR.\r\n\r\nI'll also fix spelling others, and some categories which are partially correct, for e.g. `other-machine-translation` to the correct tag.\r\nLastly, with the options also we can add a description to make it easier for the users to understand what we mean by each option. Example, for \"emotion-classification\", we can explain what kinds of data we are talking about, or what we mean by \"single-span-prediction\", etc.","Good idea thank you ! Can you open a PR on datasets-tagging for the tasks as well ?\r\nAlso you can update the dataset card with the new tasks categories in another PR if you don't mind","Hi @lhoestq,\r\n\r\nThanks, what all do I need to add to merge this PR?","We can merge this one once the PR on dataset sizes is merged on `datasets-tagging` ;)","Hi @lhoestq,\r\n\r\nOne problem with this approach is that for datasets like `ccaligned_multilingual`, the infos won't be complete because we don't have all configs. In that case, people might face trouble finding the datatset using the tag. Although, they probably won't be checking the size tag for a dataset like that.\r\n\r\nWhat do you think?\r\n\r\nCC @theo-m ","For datasets like `ccaligned_multilingual` it's important to have all the tags for users to search and find it. Currently is has the full list of tags (without the config names). So you can actually find the dataset, but you don't know what tag correspond to what configuration. "],"created_at":1616025756000,"updated_at":1616519470000,"closed_at":1616519470000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2074","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2074","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2074.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2074.patch"},"body":"This PR fixes several `size_categories` in YAML tags and makes them consistent. Additionally, I have added a few more categories after `1M`, up to `1T`. I would like to add that to the streamlit app also.\r\n\r\nThis PR also adds a couple of infos that I found missing.\r\n\r\nThe code for generating this:\r\n```python\r\nfor dataset in sorted(os.listdir('.\/datasets\/')):\r\n if '.' 
not in dataset and dataset not in ['c4', 'csv', 'downloads', 'cc100', 'ccaligned_multilingual', 'celeb_a', 'chr_en', 'emea', 'glue']:\r\n infos = {}\r\n stats = {}\r\n st = ''\r\n with open(f'datasets\/{dataset}\/README.md') as f:\r\n d = f.read()\r\n start_dash = d.find('---') + 3\r\n end_dash = d[start_dash:].find('---') + 3\r\n rest_text = d[end_dash + 3:]\r\n try:\r\n full_yaml = OmegaConf.create(d[start_dash:end_dash])\r\n readme = OmegaConf.to_container(full_yaml['size_categories'], resolve=True)\r\n except Exception as e:\r\n print(e)\r\n continue \r\n try:\r\n with open(f'datasets\/{dataset}\/dataset_infos.json') as f:\r\n data = json.load(f)\r\n except Exception as e:\r\n print(e)\r\n continue # Skip those without infos.\r\n done_set = set([])\r\n num_keys = len(data.keys())\r\n for keys in data:\r\n # dataset = load_dataset('opus100', f'{dirs}')\r\n total = 0\r\n for split in data[keys]['splits']:\r\n total = total + data[keys]['splits'][split]['num_examples']\r\n if total < 1000:\r\n st += \"- n<1K\" + '\\n'\r\n infos[keys] = [\"n<1K\"]\r\n elif total >= 1000 and total < 10000:\r\n infos[keys] = [\"1K<n<10K\"]\r\n elif total >= 10000 and total < 100000:\r\n infos[keys] = [\"10K<n<100K\"]\r\n elif total >= 100000 and total < 1000000:\r\n infos[keys] = [\"100K<n<1M\"]\r\n elif total >= 1000000 and total < 10000000:\r\n infos[keys] = [\"1M<n<10M\"]\r\n elif total >= 10000000 and total < 100000000:\r\n infos[keys] = [\"10M<n<100M\"]\r\n elif total >= 100000000 and total < 1000000000:\r\n infos[keys] = [\"100M<n<1B\"]\r\n elif total >= 1000000000 and total < 10000000000:\r\n infos[keys] = [\"1B<n<10B\"]\r\n elif total >= 10000000000 and total < 100000000000:\r\n infos[keys] = [\"10B<n<100B\"]\r\n elif total >= 100000000000 and total < 1000000000000:\r\n infos[keys] = [\"100B<n<1T\"]\r\n else:\r\n infos[keys] = [\"n>1T\"]\r\n done_set = done_set.union(infos[keys])\r\n if (isinstance(readme, list) and list(infos.values())[0] != readme) or (isinstance(readme, dict) and readme != infos):\r\n\r\n print('-' * 30)\r\n print(done_set)\r\n print(f\"Changing Full YAML for {dataset}\")\r\n print(OmegaConf.to_yaml(full_yaml))\r\n\r\n if len(done_set) == 1:\r\n full_yaml['size_categories'] = list(done_set)\r\n else:\r\n full_yaml['size_categories'] = dict([(k, v) for k, v in sorted(infos.items(), key=lambda x: x[0])])\r\n\r\n full_yaml_string = OmegaConf.to_yaml(full_yaml)\r\n print('-' * 30)\r\n print(full_yaml_string)\r\n inp = input('Do you wish to continue?(Y\/N)')\r\n if inp == 'Y':\r\n with open(f'.\/datasets\/{dataset}\/README.md', 'w') as f:\r\n f.write('---\\n')\r\n f.write(full_yaml_string)\r\n f.write('---')\r\n f.write(rest_text)\r\n else:\r\n break\r\n```\r\n\r\nNote that the lower-bound is inclusive. I'm unsure if this is how it is done in the tagging app.\r\n\r\nEDIT:\r\nIt would be great if there was a way to make the task categories consistent too. For this, the streamlit app can look into all the datasets and check for existing categories and show them in the list. This may add some consistency.\r\n\r\nEDIT:\r\nI understand this will not work for cases where only the infos for some of the configs are present, for example: `ccaligned_multilingual` has only 5 out of several configs present, and infos has only information about them. 
Hence, I have skipped a few datasets in the code, if there are more such datasets, then I'll ignore them too.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2074\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2073","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2073\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2073\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2073\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2073","id":834192501,"node_id":"MDExOlB1bGxSZXF1ZXN0NTk1MDYyMzQ2","number":2073,"title":"Fixes check of TF_AVAILABLE and TORCH_AVAILABLE","user":{"login":"philschmid","id":32632186,"node_id":"MDQ6VXNlcjMyNjMyMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32632186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/philschmid","html_url":"https:\/\/github.com\/philschmid","followers_url":"https:\/\/api.github.com\/users\/philschmid\/followers","following_url":"https:\/\/api.github.com\/users\/philschmid\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/philschmid\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/philschmid\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/philschmid\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/philschmid\/orgs","repos_url":"https:\/\/api.github.com\/users\/philschmid\/repos","events_url":"https:\/\/api.github.com\/users\/philschmid\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/philschmid\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1616016533000,"updated_at":1616058565000,"closed_at":1616058564000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2073","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2073","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2073.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2073.patch"},"body":"# What is this PR doing\r\n\r\nThis PR implements the checks if `Tensorflow` and `Pytorch` are available the same way as `transformers` does it. I added the additional checks for the different `Tensorflow` and `torch` versions. 
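For illustration, a rough sketch of what such an availability check can look like (this is an assumption about the approach, not the exact code in this PR; the `USE_TORCH` environment switch and the printed message below are placeholders):

```python
# Check that torch is importable and resolve its installed version,
# without importing the (heavy) package at module load time.
import importlib.metadata  # Python 3.8+; older interpreters can use the importlib_metadata backport
import importlib.util
import os

USE_TORCH = os.environ.get("USE_TORCH", "AUTO").upper()  # placeholder env switch

TORCH_AVAILABLE = False
TORCH_VERSION = "N/A"
if USE_TORCH in ("1", "ON", "YES", "AUTO"):
    if importlib.util.find_spec("torch") is not None:
        try:
            TORCH_VERSION = importlib.metadata.version("torch")
            TORCH_AVAILABLE = True
        except importlib.metadata.PackageNotFoundError:
            TORCH_AVAILABLE = False

print(f"PyTorch available: {TORCH_AVAILABLE} (version {TORCH_VERSION})")
```

The same pattern applies to TensorFlow with `find_spec("tensorflow")`; combining `find_spec` with a version lookup avoids paying the import cost of the backend just to know whether it is installed.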
#2068 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2073\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2072","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2072\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2072\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2072\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2072","id":834054837,"node_id":"MDExOlB1bGxSZXF1ZXN0NTk0OTQ5NjA4","number":2072,"title":"Fix docstring issues","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I think I will stop pushing to this PR, so that it can me merged for today release. 
\r\n\r\nI will open another PR for further fixing docs.\r\n\r\nDo you agree, @lhoestq ?","Sounds good thanks !"],"created_at":1616004824000,"updated_at":1616574057000,"closed_at":1616071281000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2072","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2072","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2072.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2072.patch"},"body":"Fix docstring issues.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2072\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2071","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2071\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2071\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2071\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2071","id":833950824,"node_id":"MDU6SXNzdWU4MzM5NTA4MjQ=","number":2071,"title":"Multiprocessing is slower than single process","user":{"login":"theo-m","id":17948980,"node_id":"MDQ6VXNlcjE3OTQ4OTgw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17948980?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/theo-m","html_url":"https:\/\/github.com\/theo-m","followers_url":"https:\/\/api.github.com\/users\/theo-m\/followers","following_url":"https:\/\/api.github.com\/users\/theo-m\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/theo-m\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/theo-m\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/theo-m\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/theo-m\/orgs","repos_url":"https:\/\/api.github.com\/users\/theo-m\/repos","events_url":"https:\/\/api.github.com\/users\/theo-m\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/theo-m\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["dupe of #1992"],"created_at":1615997338000,"updated_at":1616058623000,"closed_at":1616058623000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"```python\r\n# benchmark_filter.py\r\nimport logging\r\nimport sys\r\nimport time\r\n\r\nfrom datasets import load_dataset, set_caching_enabled\r\n\r\n\r\nif __name__ == \"__main__\":\r\n set_caching_enabled(False)\r\n logging.basicConfig(level=logging.DEBUG)\r\n\r\n bc = load_dataset(\"bookcorpus\")\r\n\r\n now = time.time()\r\n try:\r\n bc[\"train\"].filter(lambda x: len(x[\"text\"]) < 64, num_proc=int(sys.argv[1]))\r\n except Exception as e:\r\n print(f\"cancelled: {e}\")\r\n elapsed = time.time() - now\r\n\r\n print(elapsed)\r\n```\r\n\r\nRunning `python benchmark_filter.py 1` (20min+) is faster than `python benchmark_filter.py 2` 
(2hrs+)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2071\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2070","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2070\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2070\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2070\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2070","id":833799035,"node_id":"MDU6SXNzdWU4MzM3OTkwMzU=","number":2070,"title":"ArrowInvalid issue for squad v2 dataset","user":{"login":"MichaelYxWang","id":29818977,"node_id":"MDQ6VXNlcjI5ODE4OTc3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29818977?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/MichaelYxWang","html_url":"https:\/\/github.com\/MichaelYxWang","followers_url":"https:\/\/api.github.com\/users\/MichaelYxWang\/followers","following_url":"https:\/\/api.github.com\/users\/MichaelYxWang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/MichaelYxWang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/MichaelYxWang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/MichaelYxWang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/MichaelYxWang\/orgs","repos_url":"https:\/\/api.github.com\/users\/MichaelYxWang\/repos","events_url":"https:\/\/api.github.com\/users\/MichaelYxWang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/MichaelYxWang\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! This error happens when you use `map` in batched mode and then your function doesn't return the same number of values per column.\r\n\r\nIndeed since you're using `map` in batched mode, `prepare_validation_features` must take a batch as input (i.e. a dictionary of multiple rows of the dataset), and return a batch.\r\n\r\nHowever it seems like `tokenized_examples` doesn't have the same number of elements in each field. One field seems to have `1180` elements while `candidate_attention_mask` only has `1178`."],"created_at":1615989109000,"updated_at":1628099836000,"closed_at":1628099836000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hello, I am using the huggingface official question answering example notebook (https:\/\/colab.research.google.com\/github\/huggingface\/notebooks\/blob\/master\/examples\/question_answering.ipynb). \r\n\r\nIn the prepare_validation_features function, I made some modifications to tokenize a new set of quesions with the original contexts and save them in three different list called candidate_input_dis, candidate_attetion_mask and candidate_token_type_ids. 
When I try to run the next cell for dataset.map, I got the following error:\r\n\r\n`ArrowInvalid: Column 1 named candidate_attention_mask expected length 1180 but got length 1178`\r\n\r\nMy code is as follows:\r\n\r\n```\r\ndef generate_candidate_questions(examples):\r\n val_questions = examples[\"question\"]\r\n candididate_questions = random.sample(datasets[\"train\"][\"question\"], len(val_questions))\r\n candididate_questions = [x[:max_length] for x in candididate_questions]\r\n return candididate_questions\r\n\r\ndef prepare_validation_features(examples, use_mixing=False):\r\n pad_on_right = tokenizer.padding_side == \"right\"\r\n tokenized_examples = tokenizer(\r\n examples[\"question\" if pad_on_right else \"context\"],\r\n examples[\"context\" if pad_on_right else \"question\"],\r\n truncation=\"only_second\" if pad_on_right else \"only_first\",\r\n max_length=max_length,\r\n stride=doc_stride,\r\n return_overflowing_tokens=True,\r\n return_offsets_mapping=True,\r\n padding=\"max_length\",\r\n )\r\n if use_mixing:\r\n candidate_questions = generate_candidate_questions(examples)\r\n tokenized_candidates = tokenizer(\r\n candidate_questions if pad_on_right else examples[\"context\"],\r\n examples[\"context\"] if pad_on_right else candidate_questions,\r\n truncation=\"only_second\" if pad_on_right else \"only_first\",\r\n max_length=max_length,\r\n stride=doc_stride,\r\n return_overflowing_tokens=True,\r\n return_offsets_mapping=True,\r\n padding=\"max_length\",\r\n )\r\n\r\n sample_mapping = tokenized_examples.pop(\"overflow_to_sample_mapping\")\r\n\r\n tokenized_examples[\"example_id\"] = []\r\n\r\n if use_mixing:\r\n tokenized_examples[\"candidate_input_ids\"] = tokenized_candidates[\"input_ids\"]\r\n tokenized_examples[\"candidate_attention_mask\"] = tokenized_candidates[\"attention_mask\"]\r\n tokenized_examples[\"candidate_token_type_ids\"] = tokenized_candidates[\"token_type_ids\"]\r\n\r\n for i in range(len(tokenized_examples[\"input_ids\"])):\r\n sequence_ids = tokenized_examples.sequence_ids(i)\r\n context_index = 1 if pad_on_right else 0\r\n\r\n sample_index = sample_mapping[i]\r\n tokenized_examples[\"example_id\"].append(examples[\"id\"][sample_index])\r\n\r\n tokenized_examples[\"offset_mapping\"][i] = [\r\n (o if sequence_ids[k] == context_index else None)\r\n for k, o in enumerate(tokenized_examples[\"offset_mapping\"][i])\r\n ]\r\n\r\n return tokenized_examples\r\n\r\n\r\n\r\nvalidation_features = datasets[\"validation\"].map(\r\n lambda xs: prepare_validation_features(xs, True),\r\n batched=True,\r\n remove_columns=datasets[\"validation\"].column_names\r\n)\r\n```\r\n\r\nI guess this might happen because of the batched=True. I see similar issues in this repo related to arrow table length mismatch error, but in their cases, the numbers vary a lot. In my case, this error always happens when the expected length and unexpected length are very close. 
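For reference, here is a self-contained toy example (unrelated to the notebook above; the column names are made up) of the constraint behind this error: in batched mode, every column returned by the `map` function must have the same number of rows, because they are all written into one Arrow table.

```python
# Toy reproduction of the "expected length X but got length Y" error:
# in batched mode, all returned columns must be equally long.
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c", "d"]})

def good_batch(batch):
    n = len(batch["text"])
    # every output column has exactly n entries -> fine
    return {"ids": list(range(n)), "extra": list(range(n))}

def bad_batch(batch):
    n = len(batch["text"])
    # "extra" is two rows short -> pyarrow raises ArrowInvalid at write time
    return {"ids": list(range(n)), "extra": list(range(n - 2))}

ds.map(good_batch, batched=True)   # works
# ds.map(bad_batch, batched=True)  # ArrowInvalid: column "extra" expected length 4 but got length 2
```

In the snippet above, `tokenized_candidates` can presumably produce a different number of overflow windows than `tokenized_examples` (because of `return_overflowing_tokens=True`), which would explain why the two lengths end up close but not equal.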
Thanks for the help!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2070\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2069","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2069\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2069\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2069\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2069","id":833768926,"node_id":"MDExOlB1bGxSZXF1ZXN0NTk0NzA5ODYw","number":2069,"title":"Add and fix docstring for NamedSplit","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Maybe we should add some other split classes?"],"created_at":1615987168000,"updated_at":1616063260000,"closed_at":1616063260000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2069","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2069","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2069.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2069.patch"},"body":"Add and fix docstring for `NamedSplit`, which was missing.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2069\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2068","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2068\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2068\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2068\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2068","id":833602832,"node_id":"MDU6SXNzdWU4MzM2MDI4MzI=","number":2068,"title":"PyTorch not available error on SageMaker GPU docker though it is installed 
","user":{"login":"sivakhno","id":1651457,"node_id":"MDQ6VXNlcjE2NTE0NTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1651457?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sivakhno","html_url":"https:\/\/github.com\/sivakhno","followers_url":"https:\/\/api.github.com\/users\/sivakhno\/followers","following_url":"https:\/\/api.github.com\/users\/sivakhno\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sivakhno\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sivakhno\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sivakhno\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sivakhno\/orgs","repos_url":"https:\/\/api.github.com\/users\/sivakhno\/repos","events_url":"https:\/\/api.github.com\/users\/sivakhno\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sivakhno\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["cc @philschmid ","Hey @sivakhno,\r\n\r\nhow does your `requirements.txt` look like to install the `datasets` library and which version of it are you running? Can you try to install `datasets>=1.4.0`","Hi @philschmid - thanks for suggestion. I am using `datasets==1.4.1`. \r\nI have also tried using `torch=1.6.0` (docker `763104351884.dkr.ecr.eu-central-1.amazonaws.com\/pytorch-training:1.6.0-gpu-py3 `), but the error is the same. ","Could paste the code you use the start your training job and the fine-tuning script you run? ","@sivakhno this should be now fixed in `datasets>=1.5.0`. ","@philschmid Recently released tensorflow-macos seems to be missing. ","I've created a PR to add this. "],"created_at":1615975467000,"updated_at":1623646050000,"closed_at":1623646050000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I get en error when running data loading using SageMaker SDK\r\n\r\n```\r\n File \"main.py\", line 34, in \r\n run_training()\r\n File \"main.py\", line 25, in run_training\r\n dm.setup('fit')\r\n File \"\/opt\/conda\/lib\/python3.6\/site-packages\/pytorch_lightning\/core\/datamodule.py\", line 92, in wrapped_fn\r\n return fn(*args, **kwargs)\r\n File \"\/opt\/ml\/code\/data_module.py\", line 103, in setup\r\n self.dataset[split].set_format(type=\"torch\", columns=self.columns)\r\n File \"\/opt\/conda\/lib\/python3.6\/site-packages\/datasets\/fingerprint.py\", line 337, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"\/opt\/conda\/lib\/python3.6\/site-packages\/datasets\/arrow_dataset.py\", line 995, in set_format\r\n _ = get_formatter(type, **format_kwargs)\r\nFile \"\/opt\/conda\/lib\/python3.6\/site-packages\/datasets\/formatting\/__init__.py\", line 114, in get_formatter\r\n raise _FORMAT_TYPES_ALIASES_UNAVAILABLE[format_type]\r\nValueError: PyTorch needs to be installed to be able to return PyTorch tensors.\r\n```\r\n\r\nwhen trying to execute dataset loading using this notebook https:\/\/github.com\/PyTorchLightning\/pytorch-lightning\/blob\/master\/notebooks\/04-transformers-text-classification.ipynb, specifically lines \r\n\r\n```\r\nself.columns = [c for c in self.dataset[split].column_names if c in self.loader_columns]\r\nself.dataset[split].set_format(type=\"torch\", columns=self.columns)\r\n```\r\n\r\nThe SageMaker docker image used is 763104351884.dkr.ecr.eu-central-1.amazonaws.com\/pytorch-training:1.4.0-gpu-py3 .\r\n\r\nBy running container interactively I have 
checked that torch loading completes successfully by executing `https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/config.py#L39`. \r\n\r\nAlso as a first line in the data loading module I have \r\n\r\n```\r\nimport os\r\nos.environ[\"USE_TF\"] = \"0\" \r\nos.environ[\"USE_TORCH\"] = \"1\" \r\n````\r\n\r\nBut unfortunately the error stills persists. Any suggestions would be appreciated as I am stack.\r\nMany Thanks! \r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2068\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2067","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2067\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2067\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2067\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2067","id":833559940,"node_id":"MDU6SXNzdWU4MzM1NTk5NDA=","number":2067,"title":"Multiprocessing windows error","user":{"login":"flozi00","id":47894090,"node_id":"MDQ6VXNlcjQ3ODk0MDkw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47894090?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/flozi00","html_url":"https:\/\/github.com\/flozi00","followers_url":"https:\/\/api.github.com\/users\/flozi00\/followers","following_url":"https:\/\/api.github.com\/users\/flozi00\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/flozi00\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/flozi00\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/flozi00\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/flozi00\/orgs","repos_url":"https:\/\/api.github.com\/users\/flozi00\/repos","events_url":"https:\/\/api.github.com\/users\/flozi00\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/flozi00\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Thanks for reporting.\r\nThis looks like a bug, could you try to provide a minimal code example that reproduces the issue ? 
This would be very helpful !\r\n\r\nOtherwise I can try to run the wav2vec2 code above on my side but probably not this week..","```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset('glue', 'mrpc', split='train')\r\n\r\n\r\nupdated_dataset = dataset.map(lambda example: {'sentence1': 'My sentence: ' + example['sentence1']}, num_proc=4)\r\n\r\n```","\r\n\r\n\r\n\r\n\r\nI was able to copy some of the shell \r\nThis is repeating every half second\r\nWin 10, Anaconda with python 3.8, datasets installed from main branche\r\n```\r\n\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 287, in _fixup_main_from_path\r\n _check_not_importing_main()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 116, in spawn_main\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 134, in _check_not_importing_main\r\n main_content = runpy.run_path(main_path,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 265, in run_path\r\n exitcode = _main(fd, parent_sentinel)\r\n raise RuntimeError('''\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 125, in _main\r\nRuntimeError:\r\n An attempt has been made to start a new process before the\r\n current process has finished its bootstrapping phase.\r\n\r\n This probably means that you are not using fork to start your\r\n child processes and you have forgotten to use the proper idiom\r\n in the main module:\r\n\r\n if __name__ == '__main__':\r\n freeze_support()\r\n ...\r\n\r\n The \"freeze_support()\" line can be omitted if the program\r\n is not going to be frozen to produce an executable. 
return _run_module_code(code, init_globals, run_name,\r\n prepare(preparation_data)\r\n\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 97, in _run_module_code\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 236, in prepare\r\n _run_code(code, mod_globals, init_globals,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 87, in _run_code\r\n _fixup_main_from_path(data['init_main_from_path'])\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 287, in _fixup_main_from_path\r\n exec(code, run_globals)\r\n File \"F:\\Codes\\Python Apps\\asr\\test.py\", line 6, in \r\n updated_dataset = dataset.map(lambda example: {'sentence1': 'My sentence: ' + example['sentence1']}, num_proc=4)\r\n main_content = runpy.run_path(main_path,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 1370, in map\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 265, in run_path\r\n with Pool(num_proc, initargs=(RLock(),), initializer=tqdm.set_lock) as pool:\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\context.py\", line 119, in Pool\r\n return _run_module_code(code, init_globals, run_name,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 97, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n return Pool(processes, initializer, initargs, maxtasksperchild,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 87, in _run_code\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 212, in __init__\r\n exec(code, run_globals)\r\n File \"F:\\Codes\\Python Apps\\asr\\test.py\", line 6, in \r\n self._repopulate_pool()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 303, in _repopulate_pool\r\n updated_dataset = dataset.map(lambda example: {'sentence1': 'My sentence: ' + example['sentence1']}, num_proc=4)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 1370, in map\r\n return self._repopulate_pool_static(self._ctx, self.Process,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 326, in _repopulate_pool_static\r\n with Pool(num_proc, initargs=(RLock(),), initializer=tqdm.set_lock) as pool:\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\context.py\", line 119, in Pool\r\n w.start()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\process.py\", line 121, in start\r\n return Pool(processes, initializer, initargs, maxtasksperchild,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 212, in __init__\r\n self._popen = self._Popen(self)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\context.py\", line 327, in _Popen\r\n self._repopulate_pool()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 303, in _repopulate_pool\r\n return Popen(process_obj)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\popen_spawn_win32.py\", line 45, in __init__\r\n return 
self._repopulate_pool_static(self._ctx, self.Process,\r\n prep_data = spawn.get_preparation_data(process_obj._name)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 326, in _repopulate_pool_static\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 154, in get_preparation_data\r\n _check_not_importing_main()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 134, in _check_not_importing_main\r\n w.start()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\process.py\", line 121, in start\r\n raise RuntimeError('''\r\nRuntimeError:\r\n An attempt has been made to start a new process before the\r\n current process has finished its bootstrapping phase.\r\n\r\n This probably means that you are not using fork to start your\r\n child processes and you have forgotten to use the proper idiom\r\n in the main module:\r\n\r\n if __name__ == '__main__':\r\n freeze_support()\r\n ...\r\n```","Thanks this is really helpful !\r\nI'll try to reproduce on my side and come back to you","if __name__ == '__main__':\r\n\r\n\r\nThis line before calling the map function stops the error but the script still repeats endless","Indeed you needed `if __name__ == '__main__'` since accoding to [this stackoverflow post](https:\/\/stackoverflow.com\/a\/18205006):\r\n\r\n> On Windows the subprocesses will import (i.e. execute) the main module at start. You need to insert an if __name__ == '__main__': guard in the main module to avoid creating subprocesses recursively.\r\n\r\nRegarding the hanging issue, can you try to update `dill` and `multiprocess` ?","It's already on the newest version","```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\shutil.py\", line 791, in move\r\n os.rename(src, real_dst)\r\nFileExistsError: [WinError 183] Eine Datei kann nicht erstellt werden, wenn sie bereits vorhanden ist: 'D:\\\\huggingfacecache\\\\common_voice\\\\de\\\\6.1.0\\\\0041e06ab061b91d0a23234a2221e87970a19cf3a81b20901474cffffeb7869f\\\\tmpx9fl_jg8' -> 'D:\\\\huggingfacecache\\\\common_voice\\\\de\\\\6.1.0\\\\0041e06ab061b91d0a23234a2221e87970a19cf3a81b20901474cffffeb7869f\\\\cache-9b4f203a63742dfc.arrow'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 116, in spawn_main\r\n exitcode = _main(fd, parent_sentinel)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 125, in _main\r\n prepare(preparation_data)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 236, in prepare\r\n _fixup_main_from_path(data['init_main_from_path'])\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 287, in _fixup_main_from_path\r\n main_content = runpy.run_path(main_path,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 265, in run_path\r\n return _run_module_code(code, init_globals, run_name,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 97, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 87, 
in _run_code\r\n exec(code, run_globals)\r\n File \"F:\\Codes\\Python Apps\\asr\\cvtrain.py\", line 243, in \r\n common_voice_train = common_voice_train.map(remove_special_characters, remove_columns=[\"sentence\"])\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 1339, in map\r\n return self._map_single(\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 203, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\fingerprint.py\", line 337, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 1646, in _map_single\r\n shutil.move(tmp_file.name, cache_file_name)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\shutil.py\", line 805, in move\r\n copy_function(src, real_dst)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\shutil.py\", line 435, in copy2\r\n copyfile(src, dst, follow_symlinks=follow_symlinks)\r\n 0%| | 0\/27771 [00:00 This way you just need to set the permissions once after doing load_dataset for example, and then all the new transformed cached files will have the same permissions.\r\n\r\nI was referring to this. Ensuring that newly generated `cache_files` have the same permissions","Yes exactly\r\n\r\nI imagine users can first do `load_dataset`, then chmod on the arrow files. After that all the new cache files could have the same permissions as the first arrow files. Opinions on this ?","Sounds nice but I feel this is a sub-part of the approach mentioned by @siddk. Instead of letting the user set new permissions by itself first and then making sure newly generated files have same permissions why don't we ask the user initially only what they want? What are your thoughts?","Yes sounds good. Should this be a parameter in `load_dataset` ? Or an env variable ? Or use the value of `os.umask` ?","Ideally it should be a parameter in `load_dataset` but I'm not sure how important it is for the users (considering only important things should go into `load_dataset` parameters)","I think it's fairly important; for context, our team uses a shared file-system where many folks run experiments based on datasets that are cached by other users.\r\n\r\nFor example, I might start a training run, downloading a dataset. Then, a couple of days later, a collaborator using the same repository might want to use the same dataset on the same shared filesystem, but won't be able to under the default permissions.\r\n\r\nBeing able to specify directly in the top-level `load_dataset()` call seems important, but an equally valid option would be to just inherit from the running user's `umask` (this should probably be the default anyway).\r\n\r\nSo basically, argument that takes a custom set of permissions, and by default, use the running user's umask!","Maybe let's start by defaulting to the user's umask !\r\nDo you want to give it a try @bhavitvyamalik ?","Yeah sure! Instead of using default `0o644` should I first extract umask of current user and then use `os.umask` on it? 
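Something along these lines, just a sketch of the idea and not the final implementation (the helper name is made up, and `0o666` is only a conventional base mode):

```python
import os

def chmod_with_user_umask(path, base_mode=0o666):
    # os.umask() can only be read by setting it, so set a dummy value and restore it right away
    current_umask = os.umask(0o022)
    os.umask(current_umask)
    # drop the permission bits that the user's umask would have masked out
    os.chmod(path, base_mode & ~current_umask)

# e.g. after a new cache file has been written:
# chmod_with_user_umask(cache_file_name)
```
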
We can do it inside `Dataset` class so that all folders\/files created during the call use running user's umask\r\n\r\n","You can get the umask using `os.umask` and then I guess you can just use `os.chmod` as in your previous PR, but with the right permissions depending on the umask.","FWIW, we have this issue with other caches - e.g. `transformers` model files. So probably will need to backport this into `transformers` as well.\r\n\r\nthanks @thomwolf for the pointer.","Hi @stas00,\r\nFor this should we use the same umask code in the respective model directory inside `TRANSFORMERS_CACHE`?","That sounds very right to me, @bhavitvyamalik "],"created_at":1615940422000,"updated_at":1620629129000,"closed_at":1620629129000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hello,\r\n\r\nIt seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you know any ways around this or a way to correctly set the permissions?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2065\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2064","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2064\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2064\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2064\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2064","id":833002360,"node_id":"MDExOlB1bGxSZXF1ZXN0NTk0MDczOTQ1","number":2064,"title":"Fix ted_talks_iwslt version error","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1615913025000,"updated_at":1615917608000,"closed_at":1615917608000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2064","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2064","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2064.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2064.patch"},"body":"This 
PR fixes the bug where the version argument would be passed twice if the dataset configuration was created on the fly.\r\n\r\nFixes #2059 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2064\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2063","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2063\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2063\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2063\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2063","id":832993705,"node_id":"MDExOlB1bGxSZXF1ZXN0NTk0MDY2NzI5","number":2063,"title":"[Common Voice] Adapt dataset script so that no manual data download is actually needed","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1615912424000,"updated_at":1615974172000,"closed_at":1615974157000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2063","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2063","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2063.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2063.patch"},"body":"This PR changes the dataset script so that no manual data dir is needed anymore. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2063\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2062","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2062\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2062\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2062\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2062","id":832625483,"node_id":"MDExOlB1bGxSZXF1ZXN0NTkzNzUyNTMz","number":2062,"title":"docs: fix missing quotation","user":{"login":"neal2018","id":46561493,"node_id":"MDQ6VXNlcjQ2NTYxNDkz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/46561493?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/neal2018","html_url":"https:\/\/github.com\/neal2018","followers_url":"https:\/\/api.github.com\/users\/neal2018\/followers","following_url":"https:\/\/api.github.com\/users\/neal2018\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/neal2018\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/neal2018\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/neal2018\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/neal2018\/orgs","repos_url":"https:\/\/api.github.com\/users\/neal2018\/repos","events_url":"https:\/\/api.github.com\/users\/neal2018\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/neal2018\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1615889274000,"updated_at":1615972917000,"closed_at":1615972917000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2062","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2062","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2062.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2062.patch"},"body":"The json code misses a quote","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2062\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2061","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2061\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2061\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2061\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2061","id":832596228,"node_id":"MDU6SXNzdWU4MzI1OTYyMjg=","number":2061,"title":"Cannot load udpos subsets from xtreme dataset using 
load_dataset()","user":{"login":"adzcodez","id":55791365,"node_id":"MDQ6VXNlcjU1NzkxMzY1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/55791365?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/adzcodez","html_url":"https:\/\/github.com\/adzcodez","followers_url":"https:\/\/api.github.com\/users\/adzcodez\/followers","following_url":"https:\/\/api.github.com\/users\/adzcodez\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/adzcodez\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/adzcodez\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/adzcodez\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/adzcodez\/orgs","repos_url":"https:\/\/api.github.com\/users\/adzcodez\/repos","events_url":"https:\/\/api.github.com\/users\/adzcodez\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/adzcodez\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892877,"node_id":"MDU6TGFiZWwxOTM1ODkyODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/good%20first%20issue","name":"good first issue","color":"7057ff","default":true,"description":"Good for newcomers"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq Adding \"_\" to the class labels in the dataset script will fix the issue.\r\n\r\nThe bigger issue IMO is that the data files are in conll format, but the examples are tokens, not sentences.","Hi ! Thanks for reporting @adzcodez \r\n\r\n\r\n> @lhoestq Adding \"_\" to the class labels in the dataset script will fix the issue.\r\n> \r\n> The bigger issue IMO is that the data files are in conll format, but the examples are tokens, not sentences.\r\n\r\nYou're right: \"_\" should be added to the list of labels, and the examples must be sequences of tokens, not singles tokens.\r\n","@lhoestq Can you please label this issue with the \"good first issue\" label? I'm not sure I'll find time to fix this.\r\n\r\nTo resolve it, the user should:\r\n1. add `\"_\"` to the list of labels\r\n2. transform the udpos subset to the conll format (I think the preprocessing logic can be borrowed from [the original repo](https:\/\/github.com\/google-research\/xtreme\/blob\/58a76a0d02458c4b3b6a742d3fd4ffaca80ff0de\/utils_preprocess.py#L187-L204))\r\n3. update the dummy data\r\n4. update the dataset info\r\n5. [optional] add info about the data fields structure of the udpos subset to the dataset readme","I tried fixing this issue, but its working fine in the dev version : \"1.6.2.dev0\"\r\n\r\nI think somebody already fixed it. ","Hi,\r\n\r\nafter #2326, the lines with pos tags equal to `\"_\"` are filtered out when generating the dataset, so this fixes the KeyError described above. However, the udpos subset should be in the conll format i.e. it should yield sequences of tokens and not single tokens, so it would be great to see this fixed (feel free to borrow the logic from [here](https:\/\/github.com\/google-research\/xtreme\/blob\/58a76a0d02458c4b3b6a742d3fd4ffaca80ff0de\/utils_preprocess.py#L187-L204) if you decide to work on this). ","Closed by #2466."],"created_at":1615887133000,"updated_at":1624017251000,"closed_at":1624017250000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hello, \r\n\r\nI am trying to load the udpos English subset from xtreme dataset, but this faces an error during loading. I am using datasets v1.4.1, pip install. 
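A small illustrative sketch (not the actual `xtreme` script) of why the reported `KeyError: '_'` appears and how the fix suggested in the comments above resolves it: `ClassLabel.str2int` can only encode label names it was constructed with, so an unseen POS tag such as `"_"` fails during example encoding. The tag names below are placeholders.

```python
from datasets import ClassLabel

# Label list as originally declared, without the "_" tag.
pos_tags = ClassLabel(names=["ADJ", "ADP", "NOUN", "VERB"])
print(pos_tags.str2int("NOUN"))  # -> 2
# pos_tags.str2int("_")          # fails: KeyError '_' in the datasets version shown in the traceback

# Adding the missing tag, as proposed in the comments, lets encoding succeed.
pos_tags_fixed = ClassLabel(names=["ADJ", "ADP", "NOUN", "VERB", "_"])
print(pos_tags_fixed.str2int("_"))  # -> 4
```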
I have tried with other udpos languages which also fail, though loading a different subset altogether (such as XNLI) has no issue. I have also tried on Colab and faced the same error. \r\n\r\nReprex is: \r\n\r\n`from datasets import load_dataset `\r\n`dataset = load_dataset('xtreme', 'udpos.English')`\r\n\r\nThe error is: \r\n`KeyError: '_'`\r\n\r\nThe full traceback is: \r\nKeyError Traceback (most recent call last)\r\n in \r\n 1 from datasets import load_dataset\r\n----> 2 dataset = load_dataset('xtreme', 'udpos.English')\r\n\r\n~\\Anaconda3\\envs\\mlenv\\lib\\site-packages\\datasets\\load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs)\r\n 738 \r\n 739 # Download and prepare data\r\n--> 740 builder_instance.download_and_prepare(\r\n 741 download_config=download_config,\r\n 742 download_mode=download_mode,\r\n\r\n~\\Anaconda3\\envs\\mlenv\\lib\\site-packages\\datasets\\builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)\r\n 576 logger.warning(\"HF google storage unreachable. Downloading and preparing it from source\")\r\n 577 if not downloaded_from_gcs:\r\n--> 578 self._download_and_prepare(\r\n 579 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 580 )\r\n\r\n~\\Anaconda3\\envs\\mlenv\\lib\\site-packages\\datasets\\builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 654 try:\r\n 655 # Prepare split will record examples associated to the split\r\n--> 656 self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 657 except OSError as e:\r\n 658 raise OSError(\r\n\r\n~\\Anaconda3\\envs\\mlenv\\lib\\site-packages\\datasets\\builder.py in _prepare_split(self, split_generator)\r\n 977 generator, unit=\" examples\", total=split_info.num_examples, leave=False, disable=not_verbose\r\n 978 ):\r\n--> 979 example = self.info.features.encode_example(record)\r\n 980 writer.write(example)\r\n 981 finally:\r\n\r\n~\\Anaconda3\\envs\\mlenv\\lib\\site-packages\\datasets\\features.py in encode_example(self, example)\r\n 946 def encode_example(self, example):\r\n 947 example = cast_to_python_objects(example)\r\n--> 948 return encode_nested_example(self, example)\r\n 949 \r\n 950 def encode_batch(self, batch):\r\n\r\n~\\Anaconda3\\envs\\mlenv\\lib\\site-packages\\datasets\\features.py in encode_nested_example(schema, obj)\r\n 840 # Nested structures: we allow dict, list\/tuples, sequences\r\n 841 if isinstance(schema, dict):\r\n--> 842 return {\r\n 843 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)\r\n 844 }\r\n\r\n~\\Anaconda3\\envs\\mlenv\\lib\\site-packages\\datasets\\features.py in (.0)\r\n 841 if isinstance(schema, dict):\r\n 842 return {\r\n--> 843 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)\r\n 844 }\r\n 845 elif isinstance(schema, (list, tuple)):\r\n\r\n~\\Anaconda3\\envs\\mlenv\\lib\\site-packages\\datasets\\features.py in encode_nested_example(schema, obj)\r\n 868 # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks\r\n 869 elif isinstance(schema, (ClassLabel, TranslationVariableLanguages, Value, _ArrayXD)):\r\n--> 870 return schema.encode_example(obj)\r\n 871 # Other 
object should be directly convertible to a native Arrow type (like Translation and Translation)\r\n 872 return obj\r\n\r\n~\\Anaconda3\\envs\\mlenv\\lib\\site-packages\\datasets\\features.py in encode_example(self, example_data)\r\n 647 # If a string is given, convert to associated integer\r\n 648 if isinstance(example_data, str):\r\n--> 649 example_data = self.str2int(example_data)\r\n 650 \r\n 651 # Allowing -1 to mean no label.\r\n\r\n~\\Anaconda3\\envs\\mlenv\\lib\\site-packages\\datasets\\features.py in str2int(self, values)\r\n 605 if value not in self._str2int:\r\n 606 value = value.strip()\r\n--> 607 output.append(self._str2int[str(value)])\r\n 608 else:\r\n 609 # No names provided, try to integerize\r\n\r\nKeyError: '_'\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2061\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2060","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2060\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2060\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2060\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2060","id":832588591,"node_id":"MDExOlB1bGxSZXF1ZXN0NTkzNzIxNzcx","number":2060,"title":"Filtering refactor","user":{"login":"theo-m","id":17948980,"node_id":"MDQ6VXNlcjE3OTQ4OTgw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17948980?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/theo-m","html_url":"https:\/\/github.com\/theo-m","followers_url":"https:\/\/api.github.com\/users\/theo-m\/followers","following_url":"https:\/\/api.github.com\/users\/theo-m\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/theo-m\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/theo-m\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/theo-m\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/theo-m\/orgs","repos_url":"https:\/\/api.github.com\/users\/theo-m\/repos","events_url":"https:\/\/api.github.com\/users\/theo-m\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/theo-m\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":{"login":"theo-m","id":17948980,"node_id":"MDQ6VXNlcjE3OTQ4OTgw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17948980?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/theo-m","html_url":"https:\/\/github.com\/theo-m","followers_url":"https:\/\/api.github.com\/users\/theo-m\/followers","following_url":"https:\/\/api.github.com\/users\/theo-m\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/theo-m\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/theo-m\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/theo-m\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/theo-m\/orgs","repos_url":"https:\/\/api.github.com\/users\/theo-m\/repos","events_url":"https:\/\/api.github.com\/users\/theo-m\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/theo-m\/received_events","type":"User","site_admin":false},"assignees":[{"login":"theo-m","id":17948980,"node_id":"MDQ6VXNlcjE3OTQ4OTgw","avatar_url":"
https:\/\/avatars.githubusercontent.com\/u\/17948980?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/theo-m","html_url":"https:\/\/github.com\/theo-m","followers_url":"https:\/\/api.github.com\/users\/theo-m\/followers","following_url":"https:\/\/api.github.com\/users\/theo-m\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/theo-m\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/theo-m\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/theo-m\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/theo-m\/orgs","repos_url":"https:\/\/api.github.com\/users\/theo-m\/repos","events_url":"https:\/\/api.github.com\/users\/theo-m\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/theo-m\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["I thought at first that the multiproc test was not relevant now that we do stuff only in memory, but I think there's something that's actually broken, my tiny benchmark on bookcorpus runs forever (2hrs+) when I add `num_proc=4` as a kwarg, will investigate \ud83d\udc40 \r\n\r\nI'm not familiar with the caching you describe for `.map`, I'll look it up.","turns out the multi proc issue is also on master, I won't fix it in this PR but opened #2071 to track the problem.","tracemalloc outputs from this script:\r\n\r\n```python\r\nimport logging\r\nimport sys\r\nimport time\r\nimport tracemalloc\r\n\r\nfrom datasets import load_dataset, set_caching_enabled\r\n\r\n\r\nif __name__ == \"__main__\":\r\n set_caching_enabled(False)\r\n logging.basicConfig(level=logging.DEBUG)\r\n\r\n tracemalloc.start()\r\n bc = load_dataset(\"bookcorpus\")\r\n\r\n now = time.time()\r\n try:\r\n snapshot1 = tracemalloc.take_snapshot()\r\n bc[\"train\"].filter(lambda x: len(x[\"text\"]) < 64, num_proc=int(sys.argv[1]))\r\n except Exception as e:\r\n print(f\"cancelled: {e}\")\r\n exit(1)\r\n snapshot2 = tracemalloc.take_snapshot()\r\n tracemalloc.stop()\r\n elapsed = time.time() - now\r\n\r\n print(elapsed)\r\n top_stats = snapshot2.compare_to(snapshot1, \"lineno\")\r\n\r\n print(\"[ Top 10 differences ]\")\r\n for stat in top_stats[:10]:\r\n print(stat)\r\n\r\n```\r\n\r\n\r\nThis branch:\r\n\r\n```\r\n ssh:\/\/theo@35.205.12.130:22\/home\/theo\/.local\/share\/miniconda3\/envs\/datasets\/bin\/python -u benchmark_filter.py 1\r\n DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): s3.amazonaws.com:443\r\n DEBUG:urllib3.connectionpool:https:\/\/s3.amazonaws.com:443 \"HEAD \/datasets.huggingface.co\/datasets\/datasets\/bookcorpus\/bookcorpus.py HTTP\/1.1\" 200 0\r\n DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): raw.githubusercontent.com:443\r\n DEBUG:urllib3.connectionpool:https:\/\/raw.githubusercontent.com:443 \"HEAD \/huggingface\/datasets\/master\/datasets\/bookcorpus\/bookcorpus.py HTTP\/1.1\" 200 0\r\n DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): raw.githubusercontent.com:443\r\n DEBUG:urllib3.connectionpool:https:\/\/raw.githubusercontent.com:443 \"HEAD \/huggingface\/datasets\/master\/datasets\/bookcorpus\/dataset_infos.json HTTP\/1.1\" 200 0\r\n WARNING:datasets.builder:Reusing dataset bookcorpus (\/home\/theo\/.cache\/huggingface\/datasets\/bookcorpus\/plain_text\/1.0.0\/af844be26c089fb64810e9f2cd841954fd8bd596d6ddd26326e4c70e2b8c96fc)\r\n 0%| | 0\/74005 [00:00:580: size=38.0 MiB (+33.7 MiB), count=326226 (+307928), average=122 B\r\n :219: size=7643 KiB (+7553 KiB), count=26372 
(+25473), average=297 B\r\n \/home\/theo\/.local\/share\/miniconda3\/envs\/datasets\/lib\/python3.8\/site-packages\/torch\/__init__.py:427: size=1291 KiB (+1291 KiB), count=5924 (+5924), average=223 B\r\n \/home\/theo\/.local\/share\/miniconda3\/envs\/datasets\/lib\/python3.8\/abc.py:85: size=1039 KiB (+1026 KiB), count=3428 (+3384), average=310 B\r\n :64: size=917 KiB (+891 KiB), count=5300 (+5132), average=177 B\r\n \/home\/theo\/.local\/share\/miniconda3\/envs\/datasets\/lib\/python3.8\/collections\/__init__.py:456: size=720 KiB (+709 KiB), count=3403 (+3349), average=217 B\r\n \/home\/theo\/.local\/share\/miniconda3\/envs\/datasets\/lib\/python3.8\/site-packages\/tensorflow\/python\/util\/tf_export.py:346: size=607 KiB (+607 KiB), count=3962 (+3962), average=157 B\r\n \/home\/theo\/.local\/share\/miniconda3\/envs\/datasets\/lib\/python3.8\/linecache.py:137: size=998 KiB (+487 KiB), count=9551 (+4517), average=107 B\r\n \/home\/theo\/.local\/share\/miniconda3\/envs\/datasets\/lib\/python3.8\/site-packages\/tensorflow\/python\/util\/tf_decorator.py:241: size=367 KiB (+367 KiB), count=5225 (+5225), average=72 B\r\n \/home\/theo\/.local\/share\/miniconda3\/envs\/datasets\/lib\/python3.8\/site-packages\/tensorflow\/python\/util\/decorator_utils.py:114: size=359 KiB (+359 KiB), count=330 (+330), average=1114 B\r\n```\r\n\r\nOn master:\r\n```\r\n ssh:\/\/theo@35.205.12.130:22\/home\/theo\/.local\/share\/miniconda3\/envs\/datasets\/bin\/python -u benchmark_filter.py 1\r\n DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): s3.amazonaws.com:443\r\n DEBUG:urllib3.connectionpool:https:\/\/s3.amazonaws.com:443 \"HEAD \/datasets.huggingface.co\/datasets\/datasets\/bookcorpus\/bookcorpus.py HTTP\/1.1\" 200 0\r\n DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): raw.githubusercontent.com:443\r\n DEBUG:urllib3.connectionpool:https:\/\/raw.githubusercontent.com:443 \"HEAD \/huggingface\/datasets\/master\/datasets\/bookcorpus\/bookcorpus.py HTTP\/1.1\" 200 0\r\n DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): raw.githubusercontent.com:443\r\n DEBUG:urllib3.connectionpool:https:\/\/raw.githubusercontent.com:443 \"HEAD \/huggingface\/datasets\/master\/datasets\/bookcorpus\/dataset_infos.json HTTP\/1.1\" 200 0\r\n WARNING:datasets.builder:Reusing dataset bookcorpus (\/home\/theo\/.cache\/huggingface\/datasets\/bookcorpus\/plain_text\/1.0.0\/af844be26c089fb64810e9f2cd841954fd8bd596d6ddd26326e4c70e2b8c96fc)\r\n 0%| | 0\/74005 [00:00:580: size=38.0 MiB (+33.7 MiB), count=326221 (+307919), average=122 B\r\n :219: size=7648 KiB (+7557 KiB), count=26455 (+25555), average=296 B\r\n \/home\/theo\/.local\/share\/miniconda3\/envs\/datasets\/lib\/python3.8\/site-packages\/torch\/__init__.py:427: size=1291 KiB (+1291 KiB), count=5924 (+5924), average=223 B\r\n \/home\/theo\/.local\/share\/miniconda3\/envs\/datasets\/lib\/python3.8\/abc.py:85: size=1039 KiB (+1026 KiB), count=3429 (+3385), average=310 B\r\n :64: size=917 KiB (+891 KiB), count=5300 (+5132), average=177 B\r\n \/home\/theo\/.local\/share\/miniconda3\/envs\/datasets\/lib\/python3.8\/collections\/__init__.py:456: size=720 KiB (+709 KiB), count=3403 (+3349), average=217 B\r\n \/home\/theo\/.local\/share\/miniconda3\/envs\/datasets\/lib\/python3.8\/site-packages\/tensorflow\/python\/util\/tf_export.py:346: size=607 KiB (+607 KiB), count=3962 (+3962), average=157 B\r\n \/home\/theo\/.local\/share\/miniconda3\/envs\/datasets\/lib\/python3.8\/linecache.py:137: size=1000 KiB (+489 KiB), count=9569 (+4535), average=107 
B\r\n \/home\/theo\/.local\/share\/miniconda3\/envs\/datasets\/lib\/python3.8\/site-packages\/tensorflow\/python\/util\/tf_decorator.py:241: size=367 KiB (+367 KiB), count=5225 (+5225), average=72 B\r\n \/home\/theo\/.local\/share\/miniconda3\/envs\/datasets\/lib\/python3.8\/site-packages\/tensorflow\/python\/util\/decorator_utils.py:114: size=359 KiB (+359 KiB), count=330 (+330), average=1114 B\r\n```\r\n\r\nI'm not concluding much, it seems nothing is really happening to memory on `pyarrow::Table.filter`? ","Cool ! Maybe it increases the memory a bit but what's brought in memory is not the resulting Table but something else (not sure what though).\r\nWhat's the length of the resulting dataset ?\r\nYou can also take a look at `pyarrow.total_allocated_memory()` to show how much memory is being used by pyarrow","```diff\r\ndiff --git a\/benchmarks\/benchmark_filter.py b\/benchmarks\/benchmark_filter.py\r\nindex 4b9efd4e..a862c204 100644\r\n--- a\/benchmarks\/benchmark_filter.py\r\n+++ b\/benchmarks\/benchmark_filter.py\r\n@@ -1,6 +1,9 @@\r\n import logging\r\n import sys\r\n import time\r\n+import tracemalloc\r\n+\r\n+import pyarrow as pa\r\n \r\n from datasets import load_dataset, set_caching_enabled\r\n \r\n@@ -9,13 +12,28 @@ if __name__ == \"__main__\":\r\n set_caching_enabled(False)\r\n logging.basicConfig(level=logging.DEBUG)\r\n \r\n+ tracemalloc.start()\r\n bc = load_dataset(\"bookcorpus\")\r\n \r\n now = time.time()\r\n try:\r\n+ snapshot1 = tracemalloc.take_snapshot()\r\n+ pamem1 = pa.total_allocated_bytes()\r\n bc[\"train\"].filter(lambda x: len(x[\"text\"]) < 64, num_proc=int(sys.argv[1]))\r\n+ pamem2 = pa.total_allocated_bytes()\r\n+ snapshot2 = tracemalloc.take_snapshot()\r\n except Exception as e:\r\n print(f\"cancelled: {e}\")\r\n+ exit(1)\r\n+ tracemalloc.stop()\r\n elapsed = time.time() - now\r\n \r\n print(elapsed)\r\n+ top_stats = snapshot2.compare_to(snapshot1, \"lineno\")\r\n+\r\n+ print(\"[ Top 10 differences ]\")\r\n+ for stat in top_stats[:10]:\r\n+ print(stat)\r\n+\r\n+ print(\"[ pyarrow reporting ]\")\r\n+ print(f\"before: ({pamem1}) after: ({pamem2})\")\r\n```\r\n\r\nthis yields 0-0, does not seem like a good tool \ud83d\ude1b and the documentation is [quite mysterious.](https:\/\/arrow.apache.org\/docs\/python\/generated\/pyarrow.total_allocated_bytes.html)","Personally if I use your script to benchmark on this branch\r\n```python\r\nbc = load_dataset(\"bookcorpus\", split=\"train[:1%]\")\r\nbc = bc.filter(lambda x: len(x[\"text\"]) < 64)\r\n```\r\n\r\nthen I get\r\n```\r\n[ pyarrow reporting ]\r\nbefore: (0) after: (15300672)\r\n```\r\n\r\nMaybe you got 0-0 because the filter output is directly garbage collected, since you didn't do\r\n```python\r\nbc[\"train\"] = bc[\"train\"].filter(...)\r\n```\r\nCan you try again on your side just to make sure ?\r\n\r\nEven if the documentation doesn't say much, `pa.total_allocated_bytes` if pretty useful, and also very consistent.\r\nIt tracks the number of bytes used for arrow data.","> Maybe you got 0-0 because the filter output is directly garbage collected, since you didn't do\r\n> \r\n> ```python\r\n> bc[\"train\"] = bc[\"train\"].filter(...)\r\n> ```\r\nNice catch! I get 1.74GB for this branch","Looks like we may need to write the filtered table on the disk then.\r\n\r\nThe other option is to slice the table to keep only the good rows and concatenate them but this is too slow at the moment since slicing is O(n) until #1803 is fixed. 
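For reference, a condensed sketch of the measurement discussed above, assuming the same bookcorpus data and 64-character predicate; `pa.total_allocated_bytes()` reports Arrow's own allocations, and keeping a reference to the filtered dataset (instead of letting it be garbage collected) is what makes the before/after difference visible, as noted in the comments.

```python
import pyarrow as pa
from datasets import load_dataset, set_caching_enabled

set_caching_enabled(False)

# A small slice keeps the run cheap; the benchmark above uses the full train split.
bc = load_dataset("bookcorpus", split="train[:1%]")

before = pa.total_allocated_bytes()
bc = bc.filter(lambda x: len(x["text"]) < 64)  # keep the result referenced so it is not collected
after = pa.total_allocated_bytes()

print(f"Arrow bytes allocated by filter: {after - before}")
```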
I'll work on this issue this afternoon","From investigation it looks like the lib's `Table.filter` cannot send its output to memorymap, asked a question on the mailing list, see [here](https:\/\/lists.apache.org\/thread.html\/r8cd8591ce83a967eb0097a7f31785ac2f3ee95ea371c8c5beb0720ad%40%3Cuser.arrow.apache.org%3E)"],"created_at":1615886610000,"updated_at":1617183528000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2060","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2060","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2060.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2060.patch"},"body":"fix https:\/\/github.com\/huggingface\/datasets\/issues\/2032\r\n\r\nbenchmarking is somewhat inconclusive, currently running on `book_corpus` with:\r\n\r\n```python\r\n bc = load_dataset(\"bookcorpus\")\r\n now = time.time()\r\n bc.filter(lambda x: len(x[\"text\"]) < 64)\r\n elapsed = time.time() - now\r\n print(elapsed)\r\n```\r\n\r\nthis branch does it in 233 seconds, master in 1409 seconds.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2060\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2059","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2059\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2059\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2059\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2059","id":832579156,"node_id":"MDU6SXNzdWU4MzI1NzkxNTY=","number":2059,"title":"Error while following docs to load the `ted_talks_iwslt` dataset","user":{"login":"ekdnam","id":40426312,"node_id":"MDQ6VXNlcjQwNDI2MzEy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/40426312?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ekdnam","html_url":"https:\/\/github.com\/ekdnam","followers_url":"https:\/\/api.github.com\/users\/ekdnam\/followers","following_url":"https:\/\/api.github.com\/users\/ekdnam\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ekdnam\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ekdnam\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ekdnam\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ekdnam\/orgs","repos_url":"https:\/\/api.github.com\/users\/ekdnam\/repos","events_url":"https:\/\/api.github.com\/users\/ekdnam\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ekdnam\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@skyprince999 as you authored the PR for this dataset, any comments?","This has been fixed in #2064 by @mariosasko (thanks again !)\r\n\r\nThe fix is available on the master branch and we'll do a new release very soon 
:)"],"created_at":1615885939000,"updated_at":1615917631000,"closed_at":1615917607000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I am currently trying to load the `ted_talks_iwslt` dataset into google colab.\r\n\r\nThe [docs](https:\/\/huggingface.co\/datasets\/ted_talks_iwslt) mention the following way of doing so.\r\n\r\n```python\r\ndataset = load_dataset(\"ted_talks_iwslt\", language_pair=(\"it\", \"pl\"), year=\"2014\")\r\n```\r\n\r\nExecuting it results in the error attached below.\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n in ()\r\n----> 1 dataset = load_dataset(\"ted_talks_iwslt\", language_pair=(\"it\", \"pl\"), year=\"2014\")\r\n\r\n4 frames\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs)\r\n 730 hash=hash,\r\n 731 features=features,\r\n--> 732 **config_kwargs,\r\n 733 )\r\n 734 \r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/builder.py in __init__(self, writer_batch_size, *args, **kwargs)\r\n 927 \r\n 928 def __init__(self, *args, writer_batch_size=None, **kwargs):\r\n--> 929 super(GeneratorBasedBuilder, self).__init__(*args, **kwargs)\r\n 930 # Batch size used by the ArrowWriter\r\n 931 # It defines the number of samples that are kept in memory before writing them\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/builder.py in __init__(self, cache_dir, name, hash, features, **config_kwargs)\r\n 241 name,\r\n 242 custom_features=features,\r\n--> 243 **config_kwargs,\r\n 244 )\r\n 245 \r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs)\r\n 337 if \"version\" not in config_kwargs and hasattr(self, \"VERSION\") and self.VERSION:\r\n 338 config_kwargs[\"version\"] = self.VERSION\r\n--> 339 builder_config = self.BUILDER_CONFIG_CLASS(**config_kwargs)\r\n 340 \r\n 341 # otherwise use the config_kwargs to overwrite the attributes\r\n\r\n\/root\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/ted_talks_iwslt\/024d06b1376b361e59245c5878ab8acf9a7576d765f2d0077f61751158e60914\/ted_talks_iwslt.py in __init__(self, language_pair, year, **kwargs)\r\n 219 description=description,\r\n 220 version=datasets.Version(\"1.1.0\", \"\"),\r\n--> 221 **kwargs,\r\n 222 )\r\n 223 \r\n\r\nTypeError: __init__() got multiple values for keyword argument 'version'\r\n```\r\n\r\nHow to resolve this? 
\r\n\r\nPS: Thanks a lot @huggingface team for creating this great library!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2059\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2058","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2058\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2058\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2058\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2058","id":832159844,"node_id":"MDU6SXNzdWU4MzIxNTk4NDQ=","number":2058,"title":"Is it possible to convert a `tfds` to HuggingFace `dataset`?","user":{"login":"abarbosa94","id":6608232,"node_id":"MDQ6VXNlcjY2MDgyMzI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6608232?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abarbosa94","html_url":"https:\/\/github.com\/abarbosa94","followers_url":"https:\/\/api.github.com\/users\/abarbosa94\/followers","following_url":"https:\/\/api.github.com\/users\/abarbosa94\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abarbosa94\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abarbosa94\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abarbosa94\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abarbosa94\/orgs","repos_url":"https:\/\/api.github.com\/users\/abarbosa94\/repos","events_url":"https:\/\/api.github.com\/users\/abarbosa94\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abarbosa94\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1615839527000,"updated_at":1615839527000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I was having some weird bugs with `C4`dataset version of HuggingFace, so I decided to try to download `C4`from `tfds`. 
I would like to know if it is possible to convert a tfds dataset to HuggingFace dataset format :)\r\n\r\nI can also open a new issue reporting the bug I'm receiving with `datasets.load_dataset('c4','en')` in the future if you think that it would be useful.\r\n\r\nThanks!\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2058\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2057","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2057\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2057\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2057\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2057","id":832120522,"node_id":"MDExOlB1bGxSZXF1ZXN0NTkzMzMzMjM0","number":2057,"title":"update link to ZEST dataset","user":{"login":"matt-peters","id":619844,"node_id":"MDQ6VXNlcjYxOTg0NA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/619844?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/matt-peters","html_url":"https:\/\/github.com\/matt-peters","followers_url":"https:\/\/api.github.com\/users\/matt-peters\/followers","following_url":"https:\/\/api.github.com\/users\/matt-peters\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/matt-peters\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/matt-peters\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/matt-peters\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/matt-peters\/orgs","repos_url":"https:\/\/api.github.com\/users\/matt-peters\/repos","events_url":"https:\/\/api.github.com\/users\/matt-peters\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/matt-peters\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1615836177000,"updated_at":1615914388000,"closed_at":1615914388000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2057","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2057","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2057.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2057.patch"},"body":"Updating the link as the original one is no longer working. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2057\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2056","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2056\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2056\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2056\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2056","id":831718397,"node_id":"MDU6SXNzdWU4MzE3MTgzOTc=","number":2056,"title":"issue with opus100\/en-fr dataset ","user":{"login":"dorost1234","id":79165106,"node_id":"MDQ6VXNlcjc5MTY1MTA2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/79165106?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dorost1234","html_url":"https:\/\/github.com\/dorost1234","followers_url":"https:\/\/api.github.com\/users\/dorost1234\/followers","following_url":"https:\/\/api.github.com\/users\/dorost1234\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dorost1234\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dorost1234\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dorost1234\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dorost1234\/orgs","repos_url":"https:\/\/api.github.com\/users\/dorost1234\/repos","events_url":"https:\/\/api.github.com\/users\/dorost1234\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dorost1234\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq I also deleted the cache and redownload the file and still the same issue, I appreciate any help on this. 
thanks ","Here please find the minimal code to reproduce the issue @lhoestq note this only happens with MT5TokenizerFast\r\n\r\n```\r\nfrom datasets import load_dataset\r\nfrom transformers import MT5TokenizerFast\r\n\r\ndef get_tokenized_dataset(dataset_name, dataset_config_name, tokenizer):\r\n datasets = load_dataset(dataset_name, dataset_config_name, script_version=\"master\")\r\n column_names = datasets[\"train\"].column_names\r\n text_column_name = \"translation\"\r\n def process_dataset(datasets):\r\n def process_function(examples):\r\n lang = \"fr\"\r\n return {\"src_texts\": [example[lang] for example in examples[text_column_name]]}\r\n datasets = datasets.map(\r\n process_function,\r\n batched=True,\r\n num_proc=None,\r\n remove_columns=column_names,\r\n load_from_cache_file=True,\r\n )\r\n return datasets\r\n datasets = process_dataset(datasets)\r\n text_column_name = \"src_texts\"\r\n column_names = [\"src_texts\"]\r\n def tokenize_function(examples):\r\n return tokenizer(examples[text_column_name], return_special_tokens_mask=True)\r\n tokenized_datasets = datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n num_proc=None,\r\n remove_columns=column_names,\r\n load_from_cache_file=True\r\n )\r\n\r\nif __name__ == \"__main__\":\r\n tokenizer_kwargs = {\r\n \"cache_dir\": None,\r\n \"use_fast\": True,\r\n \"revision\": \"main\",\r\n \"use_auth_token\": None\r\n }\r\n tokenizer = MT5TokenizerFast.from_pretrained(\"google\/mt5-small\", **tokenizer_kwargs)\r\n get_tokenized_dataset(dataset_name=\"opus100\", dataset_config_name=\"en-fr\", tokenizer=tokenizer)\r\n~ \r\n```","as per https:\/\/github.com\/huggingface\/tokenizers\/issues\/626 this looks like to be the tokenizer bug, I therefore, reported it there https:\/\/github.com\/huggingface\/tokenizers\/issues\/626 and I am closing this one."],"created_at":1615807962000,"updated_at":1615909740000,"closed_at":1615909739000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI am running run_mlm.py code of huggingface repo with opus100\/fr-en pair, I am getting this error, note that this error occurs for only this pairs and not the other pairs. Any idea why this is occurring? and how I can solve this? 
\r\n\r\nThanks a lot @lhoestq for your help in advance.\r\n\r\n`\r\nthread '' panicked at 'index out of bounds: the len is 617 but the index is 617', \/__w\/tokenizers\/tokenizers\/tokenizers\/src\/tokenizer\/normalizer.rs:382:21\r\nnote: run with `RUST_BACKTRACE=1` environment variable to display a backtrace\r\n 63%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258a | 626\/1000 [00:27<00:16, 22.69ba\/s]\r\n\r\nTraceback (most recent call last):\r\n File \"run_mlm.py\", line 550, in \r\n main()\r\n File \"run_mlm.py\", line 412, in main\r\n in zip(data_args.dataset_name, data_args.dataset_config_name)]\r\n File \"run_mlm.py\", line 411, in \r\n logger) for dataset_name, dataset_config_name\\\r\n File \"\/user\/dara\/dev\/codes\/seq2seq\/data\/tokenize_datasets.py\", line 96, in get_tokenized_dataset\r\n load_from_cache_file=not data_args.overwrite_cache,\r\n File \"\/user\/dara\/libs\/anaconda3\/envs\/fast\/lib\/python3.7\/site-packages\/datasets\/dataset_dict.py\", line 448, in map\r\n for k, dataset in self.items()\r\n File \"\/user\/dara\/libs\/anaconda3\/envs\/fast\/lib\/python3.7\/site-packages\/datasets\/dataset_dict.py\", line 448, in \r\n for k, dataset in self.items()\r\n File \"\/user\/dara\/libs\/anaconda3\/envs\/fast\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 1309, in map\r\n update_data=update_data,\r\n File \"\/user\/dara\/libs\/anaconda3\/envs\/fast\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 204, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"\/user\/dara\/libs\/anaconda3\/envs\/fast\/lib\/python3.7\/site-packages\/datasets\/fingerprint.py\", line 337, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"\/user\/dara\/libs\/anaconda3\/envs\/fast\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 1574, in _map_single\r\n batch, indices, check_same_num_examples=len(self.list_indexes()) > 0, offset=offset\r\n File \"\/user\/dara\/libs\/anaconda3\/envs\/fast\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 1490, in apply_function_on_filtered_inputs\r\n function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)\r\n File \"\/user\/dara\/dev\/codes\/seq2seq\/data\/tokenize_datasets.py\", line 89, in tokenize_function\r\n return tokenizer(examples[text_column_name], return_special_tokens_mask=True)\r\n File \"\/user\/dara\/libs\/anaconda3\/envs\/fast\/lib\/python3.7\/site-packages\/transformers\/tokenization_utils_base.py\", line 2347, in __call__\r\n **kwargs,\r\n File \"\/user\/dara\/libs\/anaconda3\/envs\/fast\/lib\/python3.7\/site-packages\/transformers\/tokenization_utils_base.py\", line 2532, in batch_encode_plus\r\n **kwargs,\r\n File \"\/user\/dara\/libs\/anaconda3\/envs\/fast\/lib\/python3.7\/site-packages\/transformers\/tokenization_utils_fast.py\", line 384, in _batch_encode_plus\r\n is_pretokenized=is_split_into_words,\r\npyo3_runtime.PanicException: index out of bounds: the len is 617 but the index is 617\r\n\r\n`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2056\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2055","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2055\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2055\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2055\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2055","id":831684312,"node_id":"MDU6SXNzdWU4MzE2ODQzMTI=","number":2055,"title":"is there a way to override a dataset object saved with save_to_disk?","user":{"login":"shamanez","id":16892570,"node_id":"MDQ6VXNlcjE2ODkyNTcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16892570?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/shamanez","html_url":"https:\/\/github.com\/shamanez","followers_url":"https:\/\/api.github.com\/users\/shamanez\/followers","following_url":"https:\/\/api.github.com\/users\/shamanez\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/shamanez\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/shamanez\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/shamanez\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/shamanez\/orgs","repos_url":"https:\/\/api.github.com\/users\/shamanez\/repos","events_url":"https:\/\/api.github.com\/users\/shamanez\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/shamanez\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi\r\nYou can rename the arrow file and update the name in `state.json`","I tried this way, but when there is a mapping process to the dataset, it again uses a random cache name. atm, I am trying to use the following method by setting an exact cache file,\r\n\r\n```\r\n dataset_with_embedding =csv_dataset.map(\r\n partial(self.embed, ctx_encoder=ctx_encoder, ctx_tokenizer=self.context_tokenizer),\r\n batched=True,\r\n batch_size=1,\r\n features=new_features,\r\n cache_file_name=cache_arrow_path,\r\n load_from_cache_file=False\r\n )\r\n```\r\nSo here we set a cache_file_name , after this it uses the same file name when saving again and again. ","I'm not sure I understand your issue, can you elaborate ?\r\n\r\n`cache_file_name` is indeed an argument you can set to specify the cache file that will be used for the processed dataset. By default the file is named with something like `cache-.arrow` where the fingerprint is a hash.","Let's say I am updating a set of embedding in a dataset that is around 40GB inside a training loop every 500 steps (Ex: calculating the embeddings in updated ctx_encoder in RAG and saving it to the passage path). So when we use **dataset_object.save_to_disk('passage_path_directory')** it will save the new dataset object every time with a random file name, especially when we do some transformations to dataset objects such as map or shards. This way, we keep collecting unwanted files that will eventually eat up all the disk space. \r\n\r\nBut if we can save the dataset object every time by a single name like **data_shard_1.arrow**, it will automatically remove the previous file and save the new one in the same directory. I found the above-mentioned code snippet useful to complete this task. 
\r\n\r\nIs this clear?"],"created_at":1615805453000,"updated_at":1616385977000,"closed_at":1616385977000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"At the moment when I use save_to_disk, it uses the arbitrary name for the arrow file. Is there a way to override such an object? ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2055\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2054","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2054\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2054\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2054\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2054","id":831597665,"node_id":"MDU6SXNzdWU4MzE1OTc2NjU=","number":2054,"title":"Could not find file for ZEST dataset","user":{"login":"bhadreshpsavani","id":26653468,"node_id":"MDQ6VXNlcjI2NjUzNDY4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26653468?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhadreshpsavani","html_url":"https:\/\/github.com\/bhadreshpsavani","followers_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/followers","following_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/repos","events_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhadreshpsavani\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The zest dataset url was changed (allenai\/zest#3) and #2057 should resolve this.","This has been fixed in #2057 by @matt-peters (thanks again !)\r\n\r\nThe fix is available on the master branch and we'll do a new release very soon :)","Thanks @lhoestq and @matt-peters ","I am closing this issue since its fixed!"],"created_at":1615799518000,"updated_at":1620034224000,"closed_at":1620034224000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I am trying to use zest dataset from Allen AI using below code in colab,\r\n```\r\n!pip install -q datasets\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"zest\")\r\n```\r\n\r\nI am getting the following error,\r\n```\r\nUsing custom data configuration default\r\n\r\nDownloading and preparing dataset zest\/default (download: 5.53 MiB, generated: 19.96 MiB, post-processed: Unknown size, total: 25.48 MiB) to 
\/root\/.cache\/huggingface\/datasets\/zest\/default\/0.0.0\/1f7a230fbfc964d979bbca0f0130fbab3259fce547ee758ad8aa4f9c9bec6cca...\r\n---------------------------------------------------------------------------\r\nFileNotFoundError Traceback (most recent call last)\r\n in ()\r\n 1 from datasets import load_dataset\r\n 2 \r\n----> 3 dataset = load_dataset(\"zest\")\r\n\r\n9 frames\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/utils\/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)\r\n 612 )\r\n 613 elif response is not None and response.status_code == 404:\r\n--> 614 raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\n 615 _raise_if_offline_mode_is_enabled(f\"Tried to reach {url}\")\r\n 616 raise ConnectionError(\"Couldn't reach {}\".format(url))\r\n\r\nFileNotFoundError: Couldn't find file at https:\/\/ai2-datasets.s3-us-west-2.amazonaws.com\/zest\/zest.zip\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2054\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2053","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2053\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2053\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2053\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2053","id":831151728,"node_id":"MDExOlB1bGxSZXF1ZXN0NTkyNTM4ODY2","number":2053,"title":"Add bAbI QA tasks","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq,\r\n\r\nShould I remove the 160 configurations? Is it too much?\r\n\r\nEDIT:\r\nCan you also check the task category? I'm not sure if there is an appropriate tag for the same.","Thanks for the changes !\r\n\r\n> Should I remove the 160 configurations? 
Is it too much?\r\n\r\nYea, 160 configurations is a lot.\r\nMaybe this dataset can work with parameters `type` and `task_no` ?\r\nYou can just remove the configurations in BUILDER_CONFIGS to keep only a few.\r\nAlso feel free to add an example in the dataset card of how to load the other configurations\r\n```\r\nload_dataset(\"babi_qa\", type=\"hn\", task_no=\"qa1\")\r\n```\r\nfor example, and with a list of the possible combinations.\r\n\r\n> Can you also check the task category? I'm not sure if there is an appropriate tag for the same.\r\n\r\nIt looks appropriate, thanks :)","Hi @lhoestq \r\n\r\nI'm unable to test it locally using:\r\n```python\r\nload_dataset(\"datasets\/babi_qa\", type=\"hn\", task_no=\"qa1\")\r\n```\r\nIt raises an error:\r\n```python\r\nTypeError: __init__() got an unexpected keyword argument 'type'\r\n```\r\nWill this be possible only after merging? Or am I missing something here?","Can you try adding this class attribute to `BabiQa` ?\r\n```python\r\nBUILDER_CONFIG_CLASS = BabiQaConfig\r\n```\r\nThis should fix the TypeError issue you got","My bad. Thanks a lot!","Hi @lhoestq \r\n\r\nI have added the changes. Only the \"qa1\" task for each category is included. Also, I haven't removed the size categories and other descriptions because I think they will still be useful. I have updated the line in README showing the example.\r\n\r\nThanks,\r\nGunjan","Hi @lhoestq,\r\n\r\nDoes this look good now?"],"created_at":1615727079000,"updated_at":1617021708000,"closed_at":1617021708000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2053","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2053","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2053.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2053.patch"},"body":"- **Name:** *The (20) QA bAbI tasks*\r\n- **Description:** *The (20) QA bAbI tasks are a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. The aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems.*\r\n- **Paper:** [arXiv](https:\/\/arxiv.org\/pdf\/1502.05698.pdf)\r\n- **Data:** [Facebook Research Page](https:\/\/research.fb.com\/downloads\/babi\/)\r\n- **Motivation:** This is a unique dataset with story-based Question Answering. It is a part of the `bAbI` project by Facebook Research.\r\n\r\n**Note**: I have currently added all the 160 configs. If this seems impractical, I can keep only a few. While each `dummy_data.zip` weighs a few KBs, overall it is around 1.3MB for all configurations. This is problematic. 
Let me know what is to be done.\r\n\r\nThanks :)\r\n\r\n\r\n### Checkbox\r\n\r\n- [x] Create the dataset script `\/datasets\/my_dataset\/my_dataset.py` using the template\r\n- [x] Fill the `_DESCRIPTION` and `_CITATION` variables\r\n- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`\r\n- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.\r\n- [x] Generate the metadata file `dataset_infos.json` for all configurations\r\n- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)\r\n- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs\r\n- [x] Both tests for the real data and the dummy data pass.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2053\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2052","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2052\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2052\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2052\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2052","id":831135704,"node_id":"MDU6SXNzdWU4MzExMzU3MDQ=","number":2052,"title":"Timit_asr dataset repeats examples","user":{"login":"fermaat","id":7583522,"node_id":"MDQ6VXNlcjc1ODM1MjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7583522?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/fermaat","html_url":"https:\/\/github.com\/fermaat","followers_url":"https:\/\/api.github.com\/users\/fermaat\/followers","following_url":"https:\/\/api.github.com\/users\/fermaat\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/fermaat\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/fermaat\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/fermaat\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/fermaat\/orgs","repos_url":"https:\/\/api.github.com\/users\/fermaat\/repos","events_url":"https:\/\/api.github.com\/users\/fermaat\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/fermaat\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi,\r\n\r\nthis was fixed by #1995, so you can wait for the next release or install the package directly from the master branch with the following command: \r\n```bash\r\npip install git+https:\/\/github.com\/huggingface\/datasets\r\n```","Ty!"],"created_at":1615722223000,"updated_at":1615804636000,"closed_at":1615804636000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Summary\r\n\r\nWhen loading timit_asr dataset on datasets 1.4+, every row in the dataset is the same\r\nSteps to reproduce\r\n\r\nAs an example, on this code there is the text from the training part:\r\n\r\nCode snippet:\r\n```\r\nfrom datasets import load_dataset, load_metric\r\n\r\ntimit = load_dataset(\"timit_asr\")\r\ntimit['train']['text']\r\n#['Would such an act of refusal be useful?',\r\n# 
'Would such an act of refusal be useful?',\r\n# 'Would such an act of refusal be useful?',\r\n# 'Would such an act of refusal be useful?',\r\n# 'Would such an act of refusal be useful?',\r\n# 'Would such an act of refusal be useful?',\r\n```\r\nThe same behavior happens for other columns\r\n\r\nExpected behavior:\r\n\r\nDifferent info on the actual timit_asr dataset\r\n\r\nActual behavior:\r\n\r\nWhen loading timit_asr dataset on datasets 1.4+, every row in the dataset is the same. I've checked datasets 1.3 and the rows are different\r\nDebug info\r\n\r\n Streamlit version: (get it with $ streamlit version)\r\n Python version: Python 3.6.12\r\n Using Conda? PipEnv? PyEnv? Pex? Using pip\r\n OS version: Centos-release-7-9.2009.1.el7.centos.x86_64\r\n\r\nAdditional information\r\n\r\nYou can check the same behavior on https:\/\/huggingface.co\/datasets\/viewer\/?dataset=timit_asr","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2052\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2051","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2051\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2051\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2051\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2051","id":831027021,"node_id":"MDExOlB1bGxSZXF1ZXN0NTkyNDQ2MDU1","number":2051,"title":"Add MDD Dataset","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq,\r\n\r\nI have added changes from review.","Thanks for approving :)"],"created_at":1615680065000,"updated_at":1616152544000,"closed_at":1616149919000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2051","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2051","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2051.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2051.patch"},"body":"- **Name:** *MDD Dataset*\r\n- **Description:** The Movie Dialog dataset (MDD) is designed to measure how well models can perform at goal and non-goal orientated dialog centered around the topic of movies (question answering, recommendation and 
discussion), from various movie reviews sources such as MovieLens and OMDb.\r\n- **Paper:** [arXiv](https:\/\/arxiv.org\/pdf\/1511.06931.pdf)\r\n- **Data:** https:\/\/research.fb.com\/downloads\/babi\/\r\n- **Motivation:** This is one of the popular dialog datasets, a part of Facebook Research's \"bAbI project\".\r\n\r\n### Checkbox\r\n\r\n- [x] Create the dataset script `\/datasets\/my_dataset\/my_dataset.py` using the template\r\n- [x] Fill the `_DESCRIPTION` and `_CITATION` variables\r\n- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`\r\n- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.\r\n- [x] Generate the metadata file `dataset_infos.json` for all configurations\r\n- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)\r\n- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs\r\n- [x] Both tests for the real data and the dummy data pass.\r\n\r\n\r\n**Note**: I haven't included the following from the data files: `entities` (the file containing list of all entities in the first three subtasks), `dictionary`(the dictionary of words they use in their models), `movie_kb`(contains the knowledge base of information about the movies, actors and other entities that are mentioned in the dialogs). Please let me know if those are needed, and if yes, should I make separate configurations for them?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2051\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2050","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2050\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2050\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2050\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2050","id":831006551,"node_id":"MDU6SXNzdWU4MzEwMDY1NTE=","number":2050,"title":"Build custom dataset to fine-tune Wav2Vec2","user":{"login":"Omarnabk","id":72882909,"node_id":"MDQ6VXNlcjcyODgyOTA5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/72882909?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Omarnabk","html_url":"https:\/\/github.com\/Omarnabk","followers_url":"https:\/\/api.github.com\/users\/Omarnabk\/followers","following_url":"https:\/\/api.github.com\/users\/Omarnabk\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Omarnabk\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Omarnabk\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Omarnabk\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Omarnabk\/orgs","repos_url":"https:\/\/api.github.com\/users\/Omarnabk\/repos","events_url":"https:\/\/api.github.com\/users\/Omarnabk\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Omarnabk\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset 
request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq - We could simply use the \"general\" json dataset for this no? ","Sure you can use the json loader\r\n```python\r\ndata_files = {\"train\": \"path\/to\/your\/train_data.json\", \"test\": \"path\/to\/your\/test_data.json\"}\r\ntrain_dataset = load_dataset(\"json\", data_files=data_files, split=\"train\")\r\ntest_dataset = load_dataset(\"json\", data_files=data_files, split=\"test\")\r\n```\r\n\r\nYou just need to make sure that the data contain the paths to the audio files.\r\nIf not, feel free to use `.map()` to add them.","Many thanks! that was what I was looking for. "],"created_at":1615672870000,"updated_at":1615800448000,"closed_at":1615800448000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Thank you for your recent tutorial on how to finetune Wav2Vec2 on a custom dataset. The example you gave here (https:\/\/huggingface.co\/blog\/fine-tune-xlsr-wav2vec2) was on the CommonVoice dataset. However, what if I want to load my own dataset? I have a manifest (transcript and their audio files) in a JSON file. \r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2050\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2049","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2049\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2049\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2049\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2049","id":830978687,"node_id":"MDExOlB1bGxSZXF1ZXN0NTkyNDE2MzQ0","number":2049,"title":"Fix text-classification tags","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["LGTM, thanks for 
fixing."],"created_at":1615665102000,"updated_at":1615909666000,"closed_at":1615909666000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2049","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2049","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2049.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2049.patch"},"body":"There are different tags for text classification right now: `text-classification` and `text_classification`:\r\n![image](https:\/\/user-images.githubusercontent.com\/29076344\/111042457-856bdf00-8463-11eb-93c9-50a30106a1a1.png).\r\n\r\nThis PR fixes it.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2049\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2048","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2048\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2048\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2048\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2048","id":830953431,"node_id":"MDU6SXNzdWU4MzA5NTM0MzE=","number":2048,"title":"github is not always available - probably need a back up","user":{"login":"stas00","id":10676103,"node_id":"MDQ6VXNlcjEwNjc2MTAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10676103?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stas00","html_url":"https:\/\/github.com\/stas00","followers_url":"https:\/\/api.github.com\/users\/stas00\/followers","following_url":"https:\/\/api.github.com\/users\/stas00\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stas00\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stas00\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stas00\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stas00\/orgs","repos_url":"https:\/\/api.github.com\/users\/stas00\/repos","events_url":"https:\/\/api.github.com\/users\/stas00\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stas00\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1615658612000,"updated_at":1615658612000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Yesterday morning github wasn't working:\r\n\r\n```\r\n:\/tmp$ wget https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.4.1\/metrics\/sacrebleu\/sacrebleu.py--2021-03-12 18:35:59-- https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.4.1\/metrics\/sacrebleu\/sacrebleu.py\r\nResolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.111.133, 185.199.109.133, ...\r\nConnecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.\r\nHTTP request sent, awaiting response... 500 Internal Server Error\r\n2021-03-12 18:36:11 ERROR 500: Internal Server Error.\r\n```\r\n\r\nSuggestion: have a failover system and replicate the data on another system and reach there if gh isn't reachable? 
perhaps gh can be a master and the replicate a slave - so there is only one true source.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2048\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2047","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2047\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2047\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2047\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2047","id":830626430,"node_id":"MDExOlB1bGxSZXF1ZXN0NTkyMTI2NzQ3","number":2047,"title":"Multilingual dIalogAct benchMark (miam)","user":{"login":"eusip","id":1551356,"node_id":"MDQ6VXNlcjE1NTEzNTY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1551356?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/eusip","html_url":"https:\/\/github.com\/eusip","followers_url":"https:\/\/api.github.com\/users\/eusip\/followers","following_url":"https:\/\/api.github.com\/users\/eusip\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/eusip\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/eusip\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/eusip\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/eusip\/orgs","repos_url":"https:\/\/api.github.com\/users\/eusip\/repos","events_url":"https:\/\/api.github.com\/users\/eusip\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/eusip\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hello. All aforementioned changes have been made. I've also re-run black on miam.py. :-)","I will run isort again. Hopefully it resolves the current check_code_quality test failure.","Once the review period is over, feel free to open a PR to add all the missing information ;)","Hi! I will follow up right now with one more pull request as I have new anonymous citation information to include."],"created_at":1615590175000,"updated_at":1616495794000,"closed_at":1616150833000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2047","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2047","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2047.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2047.patch"},"body":"My collaborators (@EmileChapuis, @PierreColombo) and I within the Affective Computing team at Telecom Paris would like to anonymously publish the miam dataset. It is assocated with a publication currently under review. 
We will update the dataset with full citations once the review period is over.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2047\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2046","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2046\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2046\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2046\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2046","id":830423033,"node_id":"MDU6SXNzdWU4MzA0MjMwMzM=","number":2046,"title":"add_faisis_index gets very slow when doing it interatively ","user":{"login":"shamanez","id":16892570,"node_id":"MDQ6VXNlcjE2ODkyNTcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16892570?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/shamanez","html_url":"https:\/\/github.com\/shamanez","followers_url":"https:\/\/api.github.com\/users\/shamanez\/followers","following_url":"https:\/\/api.github.com\/users\/shamanez\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/shamanez\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/shamanez\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/shamanez\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/shamanez\/orgs","repos_url":"https:\/\/api.github.com\/users\/shamanez\/repos","events_url":"https:\/\/api.github.com\/users\/shamanez\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/shamanez\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I think faiss automatically sets the number of threads to use to build the index.\r\nCan you check how many CPU cores are being used when you build the index in `use_own_knowleldge_dataset` as compared to this script ? Are there other programs running (maybe for rank>0) ?","Hi,\r\n I am running the add_faiss_index during the training process of the RAG from the master process (rank 0). But at the exact moment, I do not run any other process since I do it every 5000 training steps. \r\n \r\n I think what you say is correct. It depends on the number of CPU cores. I did an experiment to compare the time taken to finish the add_faiss_index process on use_own_knowleldge_dataset.py vs the training loop thing. The training loop thing takes 40 mins more. It might be natural, right? \r\n \r\n \r\n At the moment it uses around 40 cores of a 96-core machine (I am fine-tuning the entire process). ","Can you try to set the number of threads manually ?\r\nIf you set the same number of threads for both the `use_own_knowledge_dataset.py` and RAG training, it should take the same amount of time.\r\nYou can see how to set the number of threads in the faiss wiki: https:\/\/github.com\/facebookresearch\/faiss\/wiki\/Threads-and-asynchronous-calls","Ok, I will report the details soon. I am the first one on the list and currently add_index is being computed for the 3rd time in the loop. Actually, it seems like the time taken to complete each iteration is the same, but around 1 hour more compared to running it without the training loop. At the moment this takes 5hrs and 30 mins. 
If there is any way to speed up the process, an end-to-end RAG will be perfect. So I will also try out different thread numbers too. \r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/16892570\/111453464-798c5f80-8778-11eb-86d0-19d212f58e38.png)\r\n","@lhoestq on a different note, I read about using Faiss-GPU, but the documentation says we should use it when the dataset can fit into the GPU memory. Although this might work, in the long term this is not that practical for me.\r\n\r\nhttps:\/\/github.com\/matsui528\/faiss_tips","@lhoestq \r\n\r\nHi, I executed the **use_own_dataset.py** script independently and asked a few of my friends to run their programs on the HPC machine at the same time. \r\n\r\n Once many other processes are running, the add_index function naturally slows down. So basically the speed of the add_index depends entirely on the number of CPU processes. Then I set the number of threads as you have mentioned and actually got the same time for RAG training and independent running. So you are correct! :) \r\n\r\n \r\n Then I added this [issue in the Faiss repository](https:\/\/github.com\/facebookresearch\/faiss\/issues\/1767). I got an answer saying our current **IndexHNSWFlat** can get slow for 30 million vectors and it would be better to use alternatives. What do you think?","It's a matter of tradeoffs.\r\nHNSW is fast at query time but takes some time to build.\r\nA flat index is fast to build but is \"slow\" at query time.\r\nAn IVF index is probably a good choice for you: fast building and fast queries (but still slower queries than HNSW).\r\n\r\nNote that for an IVF index you would need to have an `nprobe` parameter (number of cells to visit for one query, there are `nlist` in total) that is not too small in order to have good retrieval accuracy, but not too big otherwise the queries will take too much time. From the faiss documentation:\r\n> The nprobe parameter is always a way of adjusting the tradeoff between speed and accuracy of the result. Setting nprobe = nlist gives the same result as the brute-force search (but slower).\r\n\r\nFrom my experience with indexes on DPR embeddings, setting nprobe around 1\/4 of nlist gives really good retrieval accuracy and there's no need to have a value higher than that (or you would need to brute-force in order to see a difference).","@lhoestq \r\n\r\nThanks a lot for sharing all this prior knowledge. \r\n\r\nJust asking, what would be a good nlist parameter for 30 million embeddings?","When IVF is used alone, nlist should be between `4*sqrt(n)` and `16*sqrt(n)`.\r\nFor more details take a look at [this section of the Faiss wiki](https:\/\/github.com\/facebookresearch\/faiss\/wiki\/Guidelines-to-choose-an-index#how-big-is-the-dataset)","Thanks a lot. I was lost with calling the index from the class and using faiss_index_factory. ","@lhoestq Thanks a lot for the help you have given to solve this issue. As per my experiments, the IVF index suits my case well and it is a lot faster. Using it can make the entire end-to-end trainable RAG a lot faster. So I will close this issue. Will do the final PR soon. "],"created_at":1615580838000,"updated_at":1616624951000,"closed_at":1616624951000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"As the code below suggests, I want to run add_faiss_index every nth iteration of the training loop. I have 7.2 million documents. 
Usually, it takes 2.5 hours (if I run an as a separate process similar to the script given in rag\/use_own_knowleldge_dataset.py). Now, this takes usually 5hrs. Is this normal? Any way to make this process faster? \r\n\r\n@lhoestq \r\n\r\n```\r\n def training_step(self, batch, batch_idx) -> Dict:\r\n\r\n \r\n if (not batch_idx==0) and (batch_idx%5==0):\r\n\r\n print(\"******************************************************\")\r\n ctx_encoder=self.trainer.model.module.module.model.rag.ctx_encoder\r\n model_copy =type(ctx_encoder)(self.config_dpr) # get a new instance #this will be load in the CPU\r\n model_copy.load_state_dict(ctx_encoder.state_dict()) # copy weights and stuff\r\n\r\n\r\n list_of_gpus = ['cuda:2','cuda:3']\r\n c_dir='\/custom\/cache\/dir'\r\n\r\n kb_dataset = load_dataset(\"csv\", data_files=[self.custom_config.csv_path], split=\"train\", delimiter=\"\\t\", column_names=[\"title\", \"text\"],cache_dir=c_dir) \r\n\r\n print(kb_dataset)\r\n\r\n \r\n n=len(list_of_gpus) #nunber of dedicated GPUs\r\n kb_list=[kb_dataset.shard(n, i, contiguous=True) for i in range(n)]\r\n\r\n #kb_dataset.save_to_disk('\/hpc\/gsir059\/MY-Test\/RAY\/transformers\/examples\/research_projects\/rag\/haha-dir')\r\n\r\n\r\n print(self.trainer.global_rank)\r\n dataset_shards = self.re_encode_kb(model_copy.to(device=list_of_gpus[self.trainer.global_rank]),kb_list[self.trainer.global_rank])\r\n output = [None for _ in list_of_gpus]\r\n\r\n #self.trainer.accelerator_connector.accelerator.barrier(\"embedding_process\")\r\n dist.all_gather_object(output, dataset_shards)\r\n \r\n\r\n #This creation and re-initlaization of the new index\r\n if (self.trainer.global_rank==0): #saving will be done in the main process \r\n \r\n combined_dataset = concatenate_datasets(output)\r\n \r\n passages_path =self.config.passages_path\r\n\r\n logger.info(\"saving the dataset with \")\r\n #combined_dataset.save_to_disk('\/hpc\/gsir059\/MY-Test\/RAY\/transformers\/examples\/research_projects\/rag\/MY-Passage')\r\n combined_dataset.save_to_disk(passages_path)\r\n logger.info(\"Add faiss index to the dataset that consist of embeddings\") \r\n\r\n \r\n embedding_dataset=combined_dataset\r\n index = faiss.IndexHNSWFlat(768, 128, faiss.METRIC_INNER_PRODUCT)\r\n embedding_dataset.add_faiss_index(\"embeddings\", custom_index=index)\r\n\r\n embedding_dataset.get_index(\"embeddings\").save(self.config.index_path)\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2046\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2045","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2045\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2045\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2045\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2045","id":830351527,"node_id":"MDExOlB1bGxSZXF1ZXN0NTkxODc2Mjcz","number":2045,"title":"Preserve column ordering in 
Dataset.rename_column","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Not sure why CI isn't triggered.\r\n\r\n@lhoestq Can you please help me with this? ","I don't know how to trigger it manually, but an empty commit should do the job"],"created_at":1615573607000,"updated_at":1615906085000,"closed_at":1615905305000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2045","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2045","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2045.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2045.patch"},"body":"Currently `Dataset.rename_column` doesn't necessarily preserve the order of the columns:\r\n```python\r\n>>> from datasets import Dataset\r\n>>> d = Dataset.from_dict({'sentences': [\"s1\", \"s2\"], 'label': [0, 1]})\r\n>>> d\r\nDataset({\r\n features: ['sentences', 'label'],\r\n num_rows: 2\r\n})\r\n>>> d.rename_column('sentences', 'text')\r\nDataset({\r\n features: ['label', 'text'],\r\n num_rows: 2\r\n})\r\n```\r\nThis PR fixes this.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2045\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2044","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2044\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2044\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2044\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2044","id":830339905,"node_id":"MDExOlB1bGxSZXF1ZXN0NTkxODY2NzM1","number":2044,"title":"Add CBT 
dataset","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq,\r\n\r\nI have added changes from the review.","Thanks for approving @lhoestq "],"created_at":1615572259000,"updated_at":1616152213000,"closed_at":1616149755000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2044","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2044","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2044.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2044.patch"},"body":"This PR adds the [CBT Dataset](https:\/\/arxiv.org\/abs\/1511.02301).\r\n\r\nNote that I have also added the `raw` dataset as a separate configuration. I couldn't find a suitable \"task\" for it in YAML tags.\r\n\r\nThe dummy files have one example each, as the examples are slightly big. 
For `raw` dataset, I just used top few lines, because they are entire books and would take up a lot of space.\r\n\r\nLet me know in case of any issues.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2044\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2043","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2043\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2043\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2043\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2043","id":830279098,"node_id":"MDExOlB1bGxSZXF1ZXN0NTkxODE1ODAz","number":2043,"title":"Support pickle protocol for dataset splits defined as ReadInstruction","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq But we don't perform conversion to a `NamedSplit` if `_split` is not a string which means it **will** be a `ReadInstruction` after reloading.","Yes right ! 
I read it wrong.\r\nPerfect then"],"created_at":1615566911000,"updated_at":1615904738000,"closed_at":1615903505000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2043","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2043","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2043.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2043.patch"},"body":"Fixes #2022 (+ some style fixes) ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2043\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2042","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2042\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2042\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2042\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2042","id":830190276,"node_id":"MDExOlB1bGxSZXF1ZXN0NTkxNzQwNzQ3","number":2042,"title":"Fix arrow memory checks issue in tests","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1615560592000,"updated_at":1615561463000,"closed_at":1615561462000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2042","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2042","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2042.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2042.patch"},"body":"The tests currently fail on `master` because the arrow memory verification doesn't return the expected memory evolution when loading an arrow table in memory.\r\nFrom my experiments, the tests fail only when the full test suite is ran.\r\nThis made me think that maybe some arrow objects from other tests were not freeing their memory until they do and cause the memory verifications to fail in other tests.\r\n\r\nCollecting the garbage collector before checking the arrow memory usage seems to fix this issue.\r\nI added a context manager `assert_arrow_memory_increases` that we can use in tests and that deals with the 
gc.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2042\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2041","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2041\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2041\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2041\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2041","id":830180803,"node_id":"MDExOlB1bGxSZXF1ZXN0NTkxNzMyNzMw","number":2041,"title":"Doc2dial update data_infos and data_loaders","user":{"login":"songfeng","id":2062185,"node_id":"MDQ6VXNlcjIwNjIxODU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2062185?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/songfeng","html_url":"https:\/\/github.com\/songfeng","followers_url":"https:\/\/api.github.com\/users\/songfeng\/followers","following_url":"https:\/\/api.github.com\/users\/songfeng\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/songfeng\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/songfeng\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/songfeng\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/songfeng\/orgs","repos_url":"https:\/\/api.github.com\/users\/songfeng\/repos","events_url":"https:\/\/api.github.com\/users\/songfeng\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/songfeng\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1615559969000,"updated_at":1615892960000,"closed_at":1615892960000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2041","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2041","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2041.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2041.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2041\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2040","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2040\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2040\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2040\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2040","id":830169387,"node_id":"MDU6SXNzdWU4MzAxNjkzODc=","number":2040,"title":"ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from 
disk","user":{"login":"simonschoe","id":53626067,"node_id":"MDQ6VXNlcjUzNjI2MDY3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/53626067?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/simonschoe","html_url":"https:\/\/github.com\/simonschoe","followers_url":"https:\/\/api.github.com\/users\/simonschoe\/followers","following_url":"https:\/\/api.github.com\/users\/simonschoe\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/simonschoe\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/simonschoe\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/simonschoe\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/simonschoe\/orgs","repos_url":"https:\/\/api.github.com\/users\/simonschoe\/repos","events_url":"https:\/\/api.github.com\/users\/simonschoe\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/simonschoe\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! To help me understand the situation, can you print the values of `load_from_disk(PATH_DATA_CLS_A)['train']._indices_data_files` and `load_from_disk(PATH_DATA_CLS_B)['train']._indices_data_files` ?\r\nThey should both have a path to an arrow file\r\n\r\nAlso note that from #2025 concatenating datasets will no longer have such restrictions.","Sure, thanks for the fast reply!\r\n\r\nFor dataset A: `[{'filename': 'drive\/MyDrive\/data_target_task\/dataset_a\/train\/cache-4797266bf4db1eb7.arrow'}]`\r\nFor dataset B: `[]`\r\n\r\nNo clue why for B it returns nothing. `PATH_DATA_CLS_B` is exactly the same in `save_to_disk` and `load_from_disk`... Also I can verify that the folder physically exists under 'drive\/MyDrive\/data_target_task\/dataset_b\/'","In the next release you'll be able to concatenate any kinds of dataset (either from memory or from disk).\r\n\r\nFor now I'd suggest you to flatten the indices of the A and B datasets. This will remove the indices mapping and you will be able to concatenate them. You can flatten the indices with\r\n```python\r\ndataset = dataset.flatten_indices()\r\n```","Indeed this works. Not the most elegant solution, but it does the trick. Thanks a lot! "],"created_at":1615559220000,"updated_at":1628100043000,"closed_at":1628100043000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi there,\r\n\r\nI am trying to concat two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DataDict`, `PATH_DATA_CLS_*` are `Path`-objects):\r\n```python\r\nconcatenate_datasets([load_from_disk(PATH_DATA_CLS_A)['train'], load_from_disk(PATH_DATA_CLS_B)['train']])\r\n```\r\nYielding the following error:\r\n```python\r\nValueError: Datasets' indices should ALL come from memory, or should ALL come from disk.\r\nHowever datasets' indices [1] come from memory and datasets' indices [0] come from disk.\r\n```\r\nBeen trying to solve this for quite some time now. Both `DataDict` have been created by reading in a `csv` via `load_dataset` and subsequently processed using the various `datasets` methods (i.e. filter, map, remove col, rename col). 
Can't figure out tho...\r\n\r\n`load_from_disk(PATH_DATA_CLS_A)['train']` yields:\r\n```python\r\nDataset({\r\n features: ['labels', 'text'],\r\n num_rows: 785\r\n})\r\n```\r\n`load_from_disk(PATH_DATA_CLS_B)['train']` yields:\r\n```python\r\nDataset({\r\n features: ['labels', 'text'],\r\n num_rows: 3341\r\n})\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2040\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2039","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2039\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2039\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2039\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2039","id":830047652,"node_id":"MDExOlB1bGxSZXF1ZXN0NTkxNjE3ODY3","number":2039,"title":"Doc2dial rc","user":{"login":"songfeng","id":2062185,"node_id":"MDQ6VXNlcjIwNjIxODU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2062185?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/songfeng","html_url":"https:\/\/github.com\/songfeng","followers_url":"https:\/\/api.github.com\/users\/songfeng\/followers","following_url":"https:\/\/api.github.com\/users\/songfeng\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/songfeng\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/songfeng\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/songfeng\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/songfeng\/orgs","repos_url":"https:\/\/api.github.com\/users\/songfeng\/repos","events_url":"https:\/\/api.github.com\/users\/songfeng\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/songfeng\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1615550188000,"updated_at":1615563156000,"closed_at":1615563156000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2039","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2039","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2039.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2039.patch"},"body":"Added fix to handle the last turn that is a user turn.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2039\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2038","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2038\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2038\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2038\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2038","id":830036875,"node_id":"MDU6SXNzdWU4MzAwMzY4NzU=","number":2038,"title":"outdated dataset_infos.json might fail 
verifications","user":{"login":"songfeng","id":2062185,"node_id":"MDQ6VXNlcjIwNjIxODU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2062185?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/songfeng","html_url":"https:\/\/github.com\/songfeng","followers_url":"https:\/\/api.github.com\/users\/songfeng\/followers","following_url":"https:\/\/api.github.com\/users\/songfeng\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/songfeng\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/songfeng\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/songfeng\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/songfeng\/orgs","repos_url":"https:\/\/api.github.com\/users\/songfeng\/repos","events_url":"https:\/\/api.github.com\/users\/songfeng\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/songfeng\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Thanks for reporting.\r\n\r\nTo update the dataset_infos.json you can run:\r\n```\r\ndatasets-cli test .\/datasets\/doc2dial --all_configs --save_infos --ignore_verifications\r\n```","Fixed by #2041, thanks again @songfeng !"],"created_at":1615549314000,"updated_at":1615912060000,"closed_at":1615912060000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"The [doc2dial\/dataset_infos.json](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/doc2dial\/dataset_infos.json) is outdated. It would fail data_loader when verifying download checksum etc..\r\n\r\nCould you please update this file or point me how to update this file?\r\n\r\nThank you.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2038\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2037","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2037\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2037\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2037\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2037","id":829919685,"node_id":"MDExOlB1bGxSZXF1ZXN0NTkxNTA4MTQz","number":2037,"title":"Fix: Wikipedia - save memory by replacing root.clear with 
elem.clear","user":{"login":"miyamonz","id":6331508,"node_id":"MDQ6VXNlcjYzMzE1MDg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6331508?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/miyamonz","html_url":"https:\/\/github.com\/miyamonz","followers_url":"https:\/\/api.github.com\/users\/miyamonz\/followers","following_url":"https:\/\/api.github.com\/users\/miyamonz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/miyamonz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/miyamonz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/miyamonz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/miyamonz\/orgs","repos_url":"https:\/\/api.github.com\/users\/miyamonz\/repos","events_url":"https:\/\/api.github.com\/users\/miyamonz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/miyamonz\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The error you got is minor and appeared in the last version of pyarrow, we'll fix the CI to take this into account. You can ignore it"],"created_at":1615540920000,"updated_at":1616479696000,"closed_at":1615892482000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2037","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2037","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2037.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2037.patch"},"body":"see: https:\/\/github.com\/huggingface\/datasets\/issues\/2031\r\n\r\nWhat I did:\r\n- replace root.clear with elem.clear\r\n- remove lines to get root element\r\n- $ make style\r\n- $ make test\r\n - some tests required some pip packages, I installed them.\r\n\r\ntest results on origin\/master and my branch are same. 
I think it's not related on my modification, isn't it?\r\n```\r\n==================================================================================== short test summary info ====================================================================================\r\nFAILED tests\/test_arrow_writer.py::TypedSequenceTest::test_catch_overflow - AssertionError: OverflowError not raised\r\n============================================================= 1 failed, 2332 passed, 5138 skipped, 70 warnings in 91.75s (0:01:31) ==============================================================\r\nmake: *** [Makefile:19: test] Error 1\r\n\r\n```\r\n\r\nIs there anything else I should do?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2037\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2036","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2036\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2036\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2036\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2036","id":829909258,"node_id":"MDU6SXNzdWU4Mjk5MDkyNTg=","number":2036,"title":"Cannot load wikitext","user":{"login":"Gpwner","id":19349207,"node_id":"MDQ6VXNlcjE5MzQ5MjA3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19349207?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Gpwner","html_url":"https:\/\/github.com\/Gpwner","followers_url":"https:\/\/api.github.com\/users\/Gpwner\/followers","following_url":"https:\/\/api.github.com\/users\/Gpwner\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Gpwner\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Gpwner\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Gpwner\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Gpwner\/orgs","repos_url":"https:\/\/api.github.com\/users\/Gpwner\/repos","events_url":"https:\/\/api.github.com\/users\/Gpwner\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Gpwner\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Solved!"],"created_at":1615540179000,"updated_at":1615797902000,"closed_at":1615797884000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"when I execute these codes\r\n```\r\n>>> from datasets import load_dataset\r\n>>> test_dataset = load_dataset(\"wikitext\")\r\n```\r\n\r\nI got an error,any help?\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/home\/xxx\/anaconda3\/envs\/transformer\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 589, in load_dataset\r\n path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True\r\n File \"\/home\/xxx\/anaconda3\/envs\/transformer\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 267, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"\/home\/xxx\/anaconda3\/envs\/transformer\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 308, in cached_path\r\n use_etag=download_config.use_etag,\r\n File 
\"\/home\/xxx\/anaconda3\/envs\/transformer\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 487, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.3\/datasets\/wikitext\/wikitext.py\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2036\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2035","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2035\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2035\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2035\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2035","id":829475544,"node_id":"MDU6SXNzdWU4Mjk0NzU1NDQ=","number":2035,"title":"wiki40b\/wikipedia for almost all languages cannot be downloaded","user":{"login":"dorost1234","id":79165106,"node_id":"MDQ6VXNlcjc5MTY1MTA2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/79165106?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dorost1234","html_url":"https:\/\/github.com\/dorost1234","followers_url":"https:\/\/api.github.com\/users\/dorost1234\/followers","following_url":"https:\/\/api.github.com\/users\/dorost1234\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dorost1234\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dorost1234\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dorost1234\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dorost1234\/orgs","repos_url":"https:\/\/api.github.com\/users\/dorost1234\/repos","events_url":"https:\/\/api.github.com\/users\/dorost1234\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dorost1234\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Dear @lhoestq for wikipedia dataset I also get the same error, I greatly appreciate if you could have a look into this dataset as well. Below please find the command to reproduce the error:\r\n\r\n```\r\ndataset = load_dataset(\"wikipedia\", \"20200501.bg\")\r\nprint(dataset)\r\n```\r\n\r\nYour library is my only chance to be able training the models at scale and I am grateful for your help.\r\n\r\n","Hi @dorost1234,\r\nTry installing this library first, `pip install 'apache-beam[gcp]' --use-feature=2020-resolver` followed by loading dataset like this using beam runner.\r\n\r\n`dataset = load_dataset(\"wiki40b\", \"cs\", beam_runner='DirectRunner')`\r\n\r\n I also read in error stack trace that:\r\n\r\n> Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc.\r\n\r\nWorked perfectly fine after this (Ignore these warnings)\r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/19718818\/110908410-c7e2ce00-8334-11eb-8d10-7354359e9ec3.png)\r\n\r\n","For wikipedia dataset, looks like the files it's looking for are no longer available. 
For `bg`, I checked [here](https:\/\/dumps.wikimedia.org\/bgwiki\/). For this I think `dataset_infos.json` for this dataset has to made again? You'll have to load this dataset also using beam runner.\r\n\r\n","Hello @dorost1234,\r\n\r\nIndeed, Wikipedia datasets need a lot of preprocessing and this is done using Apache Beam. That is the reason why it is required that you install Apache Beam in order to preform this preprocessing.\r\n\r\nFor some specific default parameters (English Wikipedia), Hugging Face has already preprocessed the dataset for you (and it is stored in the cloud). That is the reason why you do not get the error for English: the preprocessing is already done by HF and you just get the preprocessed dataset; Apache Beam is not required in that case.","Hi\nI really appreciate if huggingface can kindly provide preprocessed\ndatasets, processing these datasets require sufficiently large resources\nand I do not have unfortunately access to, and perhaps many others too.\nthanks\n\nOn Fri, Mar 12, 2021 at 9:04 AM Albert Villanova del Moral <\n***@***.***> wrote:\n\n> Hello @dorost1234 ,\n>\n> Indeed, Wikipedia datasets need a lot of preprocessing and this is done\n> using Apache Beam. That is the reason why it is required that you install\n> Apache Beam in order to preform this preprocessing.\n>\n> For some specific default parameters (English Wikipedia), Hugging Face has\n> already preprocessed the dataset for you (and it is stored in the cloud).\n> That is the reason why you do not get the error for English: the\n> preprocessing is already done by HF and you just get the preprocessed\n> dataset; Apache Beam is not required in that case.\n>\n> \u2014\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> ,\n> or unsubscribe\n> \n> .\n>\n","Hi everyone\r\nthanks for the helpful pointers, I did it as @bhavitvyamalik suggested, for me this freezes on this command for several hours, \r\n\r\n`Downloading and preparing dataset wiki40b\/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/users\/dara\/cache\/datasets\/wiki40b\/cs\/1.1.0\/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...\r\n`\r\n\r\nDo you know how long this takes? Any specific requirements the machine should have? like very large memory or so? @lhoestq \r\n\r\nthanks \r\n\r\n\r\n","HI @dorost1234, \r\nThe dataset size is 631.84 MiB so depending on your internet speed it'll take some time. You can monitor your internet speed meanwhile to see if it's downloading the dataset or not (use `nload` if you're using linux\/mac to monitor the same). In my case it took around 3-4 mins. Since they haven't used `download_and_extract` here that's why there's no download progress bar.","Hi\r\nthanks, my internet speed should be good, but this really freezes for me, this is how I try to get this dataset:\r\n\r\n`from datasets import load_dataset\r\ndataset = load_dataset(\"wiki40b\", \"cs\", beam_runner='DirectRunner')`\r\n\r\nthe output I see if different also from what you see after writing this command:\r\n\r\n`Downloading and preparing dataset wiki40b\/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/users\/dara\/cache\/datasets\/wiki40b\/cs\/1.1.0\/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...`\r\n\r\ndo you have any idea why it might get freezed? anything I am missing @lhoestq @bhavitvyamalik. 
Do I need maybe to set anything special for apache-beam? \r\n\r\nthanks a lot \r\n\r\nOn Tue, Mar 16, 2021 at 9:03 AM Bhavitvya Malik ***@***.***>\r\nwrote:\r\n\r\n> HI @dorost1234 ,\r\n> The dataset size is 631.84 MiB so depending on your internet speed it'll\r\n> take some time. You can monitor your internet speed meanwhile to see if\r\n> it's downloading the dataset or not (use nload if you're using linux\/mac\r\n> to monitor the same). In my case it took around 3-4 mins. Since they\r\n> haven't used download_and_extract here that's why there's no download\r\n> progress bar.\r\n>\r\n> \u2014\r\n> You are receiving this because you were mentioned.\r\n> Reply to this email directly, view it on GitHub\r\n> ,\r\n> or unsubscribe\r\n> \r\n> .\r\n>\r\n","I tried this on another machine (followed the same procedure I've mentioned above). This is what it shows (during the freeze period) for me:\r\n```\r\n>>> dataset = load_dataset(\"wiki40b\", \"cs\", beam_runner='DirectRunner')\r\nDownloading: 5.26kB [00:00, 1.23MB\/s] \r\nDownloading: 1.40kB [00:00, 327kB\/s] \r\nDownloading and preparing dataset wiki40b\/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/home\/bhavitvya\/.cache\/huggingface\/datasets\/wiki40b\/cs\/1.1.0\/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...\r\nWARNING:apache_beam.internal.gcp.auth:Unable to find default credentials to use: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https:\/\/developers.google.com\/accounts\/docs\/application-default-credentials for more information.\r\nConnecting anonymously.\r\nWARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.\r\n```\r\nAfter around 10 minutes, here's the loading of dataset:\r\n```\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:16<00:00, 
16.42s\/sources]\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 1.12sources\/s]\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 1.14sources\/s]\r\nDataset wiki40b downloaded and prepared to \/home\/bhavitvya\/.cache\/huggingface\/datasets\/wiki40b\/cs\/1.1.0\/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f. Subsequent calls will reuse this data.\r\n```","Hi\r\nI honestly also now tried on another machine and nothing shows up after\r\nhours of waiting. Are you sure you have not set any specific setting? maybe\r\ngoogle cloud which seems it is used here, needs some credential setting?\r\nthanks for any suggestions on this\r\n\r\nOn Tue, Mar 16, 2021 at 10:02 AM Bhavitvya Malik ***@***.***>\r\nwrote:\r\n\r\n> I tried this on another machine (followed the same procedure I've\r\n> mentioned above). This is what it shows (during the freeze period) for me:\r\n>\r\n> >>> dataset = load_dataset(\"wiki40b\", \"cs\", beam_runner='DirectRunner')\r\n> Downloading: 5.26kB [00:00, 1.23MB\/s]\r\n> Downloading: 1.40kB [00:00, 327kB\/s]\r\n> Downloading and preparing dataset wiki40b\/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/home\/bhavitvya\/.cache\/huggingface\/datasets\/wiki40b\/cs\/1.1.0\/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...\r\n> WARNING:apache_beam.internal.gcp.auth:Unable to find default credentials to use: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. 
See https:\/\/developers.google.com\/accounts\/docs\/application-default-credentials for more information.\r\n> Connecting anonymously.\r\n> WARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.\r\n>\r\n> After around 10 minutes, here's the loading of dataset:\r\n>\r\n> 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:16<00:00, 16.42s\/sources]\r\n> 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 1.12sources\/s]\r\n> 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 1.14sources\/s]\r\n> Dataset wiki40b downloaded and prepared to \/home\/bhavitvya\/.cache\/huggingface\/datasets\/wiki40b\/cs\/1.1.0\/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f. 
Subsequent calls will reuse this data.\r\n>\r\n> \u2014\r\n> You are receiving this because you were mentioned.\r\n> Reply to this email directly, view it on GitHub\r\n> ,\r\n> or unsubscribe\r\n> \r\n> .\r\n>\r\n"],"created_at":1615492494000,"updated_at":1615906417000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI am trying to download the data as below:\r\n\r\n```\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"wiki40b\", \"cs\")\r\nprint(dataset)\r\n```\r\n\r\nI am getting this error. @lhoestq I will be grateful if you could assist me with this error. For almost all languages except english I am getting this error.\r\n\r\nI really need majority of languages in this dataset to be able to train my models for a deadline and your great scalable super well-written library is my only hope to train the models at scale while being low on resources. \r\n\r\nthank you very much.\r\n\r\n```\r\n(fast) dara@vgne046:\/user\/dara\/dev\/codes\/seq2seq$ python test_data.py\r\nDownloading and preparing dataset wiki40b\/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to temp\/dara\/cache_home_2\/datasets\/wiki40b\/cs\/1.1.0\/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...\r\nTraceback (most recent call last):\r\n File \"test_data.py\", line 3, in \r\n dataset = load_dataset(\"wiki40b\", \"cs\")\r\n File \"\/user\/dara\/libs\/anaconda3\/envs\/fast\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 746, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"\/user\/dara\/libs\/anaconda3\/envs\/fast\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 579, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/user\/dara\/libs\/anaconda3\/envs\/fast\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 1105, in _download_and_prepare\r\n import apache_beam as beam\r\n File \"\/user\/dara\/libs\/anaconda3\/envs\/fast\/lib\/python3.7\/site-packages\/apache_beam-2.28.0-py3.7-linux-x86_64.egg\/apache_beam\/__init__.py\", line 96, in \r\n from apache_beam import io\r\n File \"\/user\/dara\/libs\/anaconda3\/envs\/fast\/lib\/python3.7\/site-packages\/apache_beam-2.28.0-py3.7-linux-x86_64.egg\/apache_beam\/io\/__init__.py\", line 23, in \r\n from apache_beam.io.avroio import *\r\n File \"\/user\/dara\/libs\/anaconda3\/envs\/fast\/lib\/python3.7\/site-packages\/apache_beam-2.28.0-py3.7-linux-x86_64.egg\/apache_beam\/io\/avroio.py\", line 55, in \r\n import avro\r\n File \"\", line 983, in _find_and_load\r\n File \"\", line 967, in _find_and_load_unlocked\r\n File \"\", line 668, in _load_unlocked\r\n File \"\", line 638, in _load_backward_compatible\r\n File \"\/user\/dara\/libs\/anaconda3\/envs\/fast\/lib\/python3.7\/site-packages\/avro_python3-1.9.2.1-py3.7.egg\/avro\/__init__.py\", line 34, in \r\n File \"\/user\/dara\/libs\/anaconda3\/envs\/fast\/lib\/python3.7\/site-packages\/avro_python3-1.9.2.1-py3.7.egg\/avro\/__init__.py\", line 30, in LoadResource\r\nNotADirectoryError: [Errno 20] Not a directory: '\/user\/dara\/libs\/anaconda3\/envs\/fast\/lib\/python3.7\/site-packages\/avro_python3-1.9.2.1-py3.7.egg\/avro\/VERSION.txt'\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2035\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2034","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2034\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2034\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2034\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2034","id":829381388,"node_id":"MDExOlB1bGxSZXF1ZXN0NTkxMDU2MTEw","number":2034,"title":"Fix typo","user":{"login":"pcyin","id":3413464,"node_id":"MDQ6VXNlcjM0MTM0NjQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3413464?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/pcyin","html_url":"https:\/\/github.com\/pcyin","followers_url":"https:\/\/api.github.com\/users\/pcyin\/followers","following_url":"https:\/\/api.github.com\/users\/pcyin\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/pcyin\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/pcyin\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/pcyin\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/pcyin\/orgs","repos_url":"https:\/\/api.github.com\/users\/pcyin\/repos","events_url":"https:\/\/api.github.com\/users\/pcyin\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/pcyin\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1615484773000,"updated_at":1615485985000,"closed_at":1615485985000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2034","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2034","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2034.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2034.patch"},"body":"Change `ENV_XDG_CACHE_HOME ` to `XDG_CACHE_HOME `","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2034\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2033","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2033\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2033\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2033\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2033","id":829295339,"node_id":"MDExOlB1bGxSZXF1ZXN0NTkwOTgzMDAy","number":2033,"title":"Raise an error for outdated sacrebleu 
versions","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1615478880000,"updated_at":1615485492000,"closed_at":1615485492000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2033","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2033","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2033.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2033.patch"},"body":"The `sacrebleu` metric seem to only work for sacrecleu>=1.4.12\r\n\r\nFor example using sacrebleu==1.2.10, an error is raised (from metric\/sacrebleu\/sacrebleu.py):\r\n```python\r\n def _compute(\r\n self,\r\n predictions,\r\n references,\r\n smooth_method=\"exp\",\r\n smooth_value=None,\r\n force=False,\r\n lowercase=False,\r\n tokenize=scb.DEFAULT_TOKENIZER,\r\n use_effective_order=False,\r\n ):\r\n references_per_prediction = len(references[0])\r\n if any(len(refs) != references_per_prediction for refs in references):\r\n raise ValueError(\"Sacrebleu requires the same number of references for each prediction\")\r\n transformed_references = [[refs[i] for refs in references] for i in range(references_per_prediction)]\r\n> output = scb.corpus_bleu(\r\n sys_stream=predictions,\r\n ref_streams=transformed_references,\r\n smooth_method=smooth_method,\r\n smooth_value=smooth_value,\r\n force=force,\r\n lowercase=lowercase,\r\n tokenize=tokenize,\r\n use_effective_order=use_effective_order,\r\n )\r\n\r\nE TypeError: corpus_bleu() got an unexpected keyword argument 'smooth_method'\r\n\/mnt\/cache\/modules\/datasets_modules\/metrics\/sacrebleu\/b390045b3d1dd4abf6a95c4a2a11ee3bcc2b7620b076204d0ddc353fa649fd86\/sacrebleu.py:114: TypeError\r\n```\r\n\r\nI improved the error message when users have an outdated version of sacrebleu.\r\nThe new error message tells the user to update sacrebleu.\r\ncc @LysandreJik ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2033\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2032","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2032\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2032\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2032\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2032","id":829250912,"node_id":"MDU6SXNzdWU4MjkyNTA5MTI=","number":2032,"title":"Use Arrow filtering instead of writing a new arrow file for Dataset.filter","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or 
request"}],"state":"open","locked":false,"assignee":{"login":"theo-m","id":17948980,"node_id":"MDQ6VXNlcjE3OTQ4OTgw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17948980?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/theo-m","html_url":"https:\/\/github.com\/theo-m","followers_url":"https:\/\/api.github.com\/users\/theo-m\/followers","following_url":"https:\/\/api.github.com\/users\/theo-m\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/theo-m\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/theo-m\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/theo-m\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/theo-m\/orgs","repos_url":"https:\/\/api.github.com\/users\/theo-m\/repos","events_url":"https:\/\/api.github.com\/users\/theo-m\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/theo-m\/received_events","type":"User","site_admin":false},"assignees":[{"login":"theo-m","id":17948980,"node_id":"MDQ6VXNlcjE3OTQ4OTgw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17948980?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/theo-m","html_url":"https:\/\/github.com\/theo-m","followers_url":"https:\/\/api.github.com\/users\/theo-m\/followers","following_url":"https:\/\/api.github.com\/users\/theo-m\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/theo-m\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/theo-m\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/theo-m\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/theo-m\/orgs","repos_url":"https:\/\/api.github.com\/users\/theo-m\/repos","events_url":"https:\/\/api.github.com\/users\/theo-m\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/theo-m\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1615475930000,"updated_at":1615483257000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"Currently the filter method reads the dataset batch by batch to write a new, filtered, arrow file on disk. 
Therefore all the reading + writing can take some time.\r\n\r\nUsing a mask directly on the arrow table doesn't do any read or write operation therefore it's significantly quicker.\r\n\r\nI think there are two cases:\r\n- if the dataset doesn't have an indices mapping, then one can simply use the arrow filtering on the main arrow table `dataset._data.filter(...)`\r\n- if the dataset an indices mapping, then the mask should be applied on the indices mapping table `dataset._indices.filter(...)`\r\n\r\nThe indices mapping is used to map between the idx at `dataset[idx]` in `__getitem__` and the idx in the actual arrow table.\r\n\r\nThe new filter method should therefore be faster, and allow users to pass either a filtering function (that returns a boolean given an example), or directly a mask.\r\n\r\nFeel free to discuss this idea in this thread :)\r\n\r\nOne additional note: the refactor at #2025 would make all the pickle-related stuff work directly with the arrow filtering, so that we only need to change the Dataset.filter method without having to deal with pickle.\r\n\r\ncc @theo-m @gchhablani \r\n\r\nrelated issues: #1796 #1949 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2032\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2031","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2031\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2031\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2031\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2031","id":829122778,"node_id":"MDU6SXNzdWU4MjkxMjI3Nzg=","number":2031,"title":"wikipedia.py generator that extracts XML doesn't release memory","user":{"login":"miyamonz","id":6331508,"node_id":"MDQ6VXNlcjYzMzE1MDg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6331508?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/miyamonz","html_url":"https:\/\/github.com\/miyamonz","followers_url":"https:\/\/api.github.com\/users\/miyamonz\/followers","following_url":"https:\/\/api.github.com\/users\/miyamonz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/miyamonz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/miyamonz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/miyamonz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/miyamonz\/orgs","repos_url":"https:\/\/api.github.com\/users\/miyamonz\/repos","events_url":"https:\/\/api.github.com\/users\/miyamonz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/miyamonz\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @miyamonz \r\nThanks for investigating this issue, good job !\r\nIt would be awesome to integrate your fix in the library, could you open a pull request ?","OK! 
I'll send it later."],"created_at":1615467084000,"updated_at":1616402032000,"closed_at":1616402032000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I tried downloading Japanese wikipedia, but it always failed because of out of memory maybe.\r\n\r\nI found that the generator function that extracts XML data in wikipedia.py doesn't release memory in the loop.\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/13a5b7db992ad5cf77895e4c0f76595314390418\/datasets\/wikipedia\/wikipedia.py#L464-L502\r\n\r\n`root.clear()` intend to clear memory, but it doesn't.\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/13a5b7db992ad5cf77895e4c0f76595314390418\/datasets\/wikipedia\/wikipedia.py#L490\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/13a5b7db992ad5cf77895e4c0f76595314390418\/datasets\/wikipedia\/wikipedia.py#L494\r\nI replaced them with `elem.clear()`, then it seems to work correctly.\r\n\r\nhere is the notebook to reproduce it.\r\nhttps:\/\/gist.github.com\/miyamonz\/dc06117302b6e85fa51cbf46dde6bb51#file-xtract_content-ipynb","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2031\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2030","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2030\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2030\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2030\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2030","id":829110803,"node_id":"MDExOlB1bGxSZXF1ZXN0NTkwODI4NzQ4","number":2030,"title":"Implement Dataset from text","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I am wondering why only one test of \"keep_in_memory=True\" fails, when there are many other tests that test the same and it happens only in 
pyarrow_1..."],"created_at":1615466090000,"updated_at":1616074169000,"closed_at":1616074169000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2030","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2030","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2030.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2030.patch"},"body":"Implement `Dataset.from_text`.\r\n\r\nAnalogue to #1943, #1946.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2030\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2029","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2029\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2029\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2029\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2029","id":829097290,"node_id":"MDU6SXNzdWU4MjkwOTcyOTA=","number":2029,"title":"Loading a faiss index KeyError","user":{"login":"nbroad1881","id":24982805,"node_id":"MDQ6VXNlcjI0OTgyODA1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24982805?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nbroad1881","html_url":"https:\/\/github.com\/nbroad1881","followers_url":"https:\/\/api.github.com\/users\/nbroad1881\/followers","following_url":"https:\/\/api.github.com\/users\/nbroad1881\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nbroad1881\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nbroad1881\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nbroad1881\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nbroad1881\/orgs","repos_url":"https:\/\/api.github.com\/users\/nbroad1881\/repos","events_url":"https:\/\/api.github.com\/users\/nbroad1881\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nbroad1881\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["In your code `dataset2` doesn't contain the \"embeddings\" column, since it is created from the pandas DataFrame with columns \"text\" and \"label\".\r\n\r\nTherefore when you call `dataset2[embeddings_name]`, you get a `KeyError`.\r\n\r\nIf you want the \"embeddings\" column back, you can create `dataset2` with\r\n```python\r\ndataset2 = load_from_disk(dataset_filename)\r\n```\r\nwhere `dataset_filename` is the place where you saved you dataset with the embeddings in the first place.","Ok in that case HF should fix their misleading example at https:\/\/huggingface.co\/docs\/datasets\/faiss_and_ea.html#adding-a-faiss-index \r\n\r\nI copy-pasted it here.\r\n\r\n> When you are done with your queries you can save your index on disk:\r\n> \r\n> ```python\r\n> ds_with_embeddings.save_faiss_index('embeddings', 'my_index.faiss')\r\n> ```\r\n> Then reload it later:\r\n> \r\n> 
```python\r\n> ds = load_dataset('crime_and_punish', split='train[:100]')\r\n> ds.load_faiss_index('embeddings', 'my_index.faiss')\r\n> ```","Hi !\r\n\r\nThe code of the example is valid.\r\nAn index is a search engine, it's not considered a column of a dataset.\r\nWhen you do `ds.load_faiss_index(\"embeddings\", 'my_index.faiss')`, it attaches an index named \"embeddings\" to the dataset but it doesn't re-add the \"embeddings\" column. You can list the indexes of a dataset by using `ds.list_indexes()`.\r\n\r\nIf I understand correctly by reading this example you thought that it was re-adding the \"embeddings\" column.\r\nThis looks misleading indeed, and we should add a note to make it more explicit that it doesn't store the column that was used to build the index.\r\n\r\nFeel free to open a PR to suggest an improvement on the documentation if you want to contribute :)","> If I understand correctly by reading this example you thought that it was re-adding the \"embeddings\" column.\r\nYes. I was trying to use the dataset in RAG and it complained that the dataset didn't have the right columns. No problems when loading the dataset with `load_from_disk` and then doing `load_faiss_index`\r\n\r\nWhat I learned was\r\n1. column and index are different\r\n2. loading the index does not create a column\r\n3. the column is not needed to be able to use the index\r\n4. RAG needs both the embeddings column and the index\r\n\r\nIf I can come up with a way to articulate this in the right spot in the docs, I'll open a PR"],"created_at":1615464973000,"updated_at":1615508469000,"closed_at":1615508469000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation.\r\n\r\nThe basic steps are:\r\n\r\n1. Create a dataset (dataset1)\r\n2. Create an embeddings column using DPR\r\n3. Add a faiss index to the dataset\r\n4. Save faiss index to a file\r\n5. Create a new dataset (dataset2) with the same text and label information as dataset1\r\n6. Try to load the faiss index from file to dataset2\r\n7. Get `KeyError: \"Column embeddings not in the dataset\"`\r\n\r\nI've made a colab notebook that should show exactly what I did. 
Please switch to GPU runtime; I didn't check on CPU.\r\n\r\nhttps:\/\/colab.research.google.com\/drive\/1X0S9ZuZ8k0ybcoei4w7so6dS_WrABmIx?usp=sharing\r\n\r\nUbuntu Version\r\nVERSION=\"18.04.5 LTS (Bionic Beaver)\"\r\n\r\ndatasets==1.4.1\r\nfaiss==1.5.3\r\nfaiss-gpu==1.7.0\r\ntorch==1.8.0+cu101\r\ntransformers==4.3.3\r\n\r\nNVIDIA-SMI 460.56\r\nDriver Version: 460.32.03\r\nCUDA Version: 11.2 \r\nTesla K80 \r\n\r\nI was basically following the steps here: https:\/\/huggingface.co\/docs\/datasets\/faiss_and_ea.html#adding-a-faiss-index\r\n\r\nI included the exact code from the documentation at the end of the notebook to show that they don't work either.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2029\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2028","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2028\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2028\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2028\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2028","id":828721393,"node_id":"MDExOlB1bGxSZXF1ZXN0NTkwNDk1NzEx","number":2028,"title":"Adding PersiNLU reading-comprehension","user":{"login":"danyaljj","id":2441454,"node_id":"MDQ6VXNlcjI0NDE0NTQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2441454?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/danyaljj","html_url":"https:\/\/github.com\/danyaljj","followers_url":"https:\/\/api.github.com\/users\/danyaljj\/followers","following_url":"https:\/\/api.github.com\/users\/danyaljj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/danyaljj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/danyaljj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/danyaljj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/danyaljj\/orgs","repos_url":"https:\/\/api.github.com\/users\/danyaljj\/repos","events_url":"https:\/\/api.github.com\/users\/danyaljj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/danyaljj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq I think I have addressed all your comments. ","Thanks! @lhoestq Let me know if you want me to address anything to get this merged. 
","It's all good thanks ;)\r\nmerging"],"created_at":1615437673000,"updated_at":1615801197000,"closed_at":1615801197000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2028","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2028","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2028.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2028.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2028\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2027","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2027\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2027\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2027\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2027","id":828490444,"node_id":"MDExOlB1bGxSZXF1ZXN0NTkwMjkzNDA1","number":2027,"title":"Update format columns in Dataset.rename_columns","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1615420259000,"updated_at":1615473520000,"closed_at":1615473520000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2027","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2027","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2027.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2027.patch"},"body":"Fixes #2026 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2027\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2026","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2026\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2026\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2026\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2026","id":828194467,"node_id":"MDU6SXNzdWU4MjgxOTQ0Njc=","number":2026,"title":"KeyError on using map after renaming a column","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi,\r\n\r\nActually, the error occurs due to these two lines:\r\n```python\r\nraw_dataset.set_format('torch',columns=['img','label'])\r\nraw_dataset = raw_dataset.rename_column('img','image')\r\n```\r\n`Dataset.rename_column` doesn't update the `_format_columns` attribute, previously defined by `Dataset.set_format`, with a new column name which is why this new column is missing in the output.","Hi @mariosasko,\n\nThanks for opening a PR on this :)\nWhy does the old name also disappear?","I just merged a @mariosasko 's PR that fixes this issue.\r\nIf it happens again, feel free to re-open :)"],"created_at":1615402457000,"updated_at":1615473574000,"closed_at":1615473520000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\n\r\nI'm trying to use `cifar10` dataset. I want to rename the `img` feature to `image` in order to make it consistent with `mnist`, which I'm also planning to use. 
By doing this, I was trying to avoid modifying `prepare_train_features` function.\r\n\r\nHere is what I try:\r\n\r\n```python\r\ntransform = Compose([ToPILImage(),ToTensor(),Normalize([0.0,0.0,0.0],[1.0,1.0,1.0])])\r\ndef prepare_features(examples):\r\n images = []\r\n labels = []\r\n print(examples)\r\n for example_idx, example in enumerate(examples[\"image\"]):\r\n if transform is not None:\r\n images.append(transform(examples[\"image\"][example_idx].permute(2,0,1)))\r\n else:\r\n images.append(examples[\"image\"][example_idx].permute(2,0,1))\r\n labels.append(examples[\"label\"][example_idx])\r\n output = {\"label\":labels, \"image\":images}\r\n return output\r\n\r\nraw_dataset = load_dataset('cifar10')\r\nraw_dataset.set_format('torch',columns=['img','label'])\r\nraw_dataset = raw_dataset.rename_column('img','image')\r\n\r\nfeatures = datasets.Features({\r\n \"image\": datasets.Array3D(shape=(3,32,32),dtype=\"float32\"),\r\n \"label\": datasets.features.ClassLabel(names=[\r\n \"airplane\",\r\n \"automobile\",\r\n \"bird\",\r\n \"cat\",\r\n \"deer\",\r\n \"dog\",\r\n \"frog\",\r\n \"horse\",\r\n \"ship\",\r\n \"truck\",\r\n ]),\r\n })\r\ntrain_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000)\r\n```\r\nThe error:\r\n```python\r\n---------------------------------------------------------------------------\r\nKeyError Traceback (most recent call last)\r\n in ()\r\n 14 ]),\r\n 15 })\r\n---> 16 train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000)\r\n\r\n2 frames\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)\r\n 1287 test_inputs = self[:2] if batched else self[0]\r\n 1288 test_indices = [0, 1] if batched else 0\r\n-> 1289 update_data = does_function_return_dict(test_inputs, test_indices)\r\n 1290 logger.info(\"Testing finished, running the mapping function on the dataset\")\r\n 1291 \r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/arrow_dataset.py in does_function_return_dict(inputs, indices)\r\n 1258 fn_args = [inputs] if input_columns is None else [inputs[col] for col in input_columns]\r\n 1259 processed_inputs = (\r\n-> 1260 function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)\r\n 1261 )\r\n 1262 does_return_dict = isinstance(processed_inputs, Mapping)\r\n\r\n in prepare_features(examples)\r\n 3 labels = []\r\n 4 print(examples)\r\n----> 5 for example_idx, example in enumerate(examples[\"image\"]):\r\n 6 if transform is not None:\r\n 7 images.append(transform(examples[\"image\"][example_idx].permute(2,0,1)))\r\n\r\nKeyError: 'image'\r\n```\r\n\r\nThe print statement inside returns this:\r\n```python\r\n{'label': tensor([6, 9])}\r\n```\r\nApparently, both `img` and `image` do not exist after renaming. 
\r\n\r\nNote that this code works fine with `img` everywhere.\r\n\r\nNotebook: https:\/\/colab.research.google.com\/drive\/1SzESAlz3BnVYrgQeJ838vbMp1OsukiA2?usp=sharing\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2026\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2025","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2025\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2025\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2025\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2025","id":828047476,"node_id":"MDExOlB1bGxSZXF1ZXN0NTg5ODk2NjMz","number":2025,"title":"[Refactor] Use in-memory\/memory-mapped\/concatenation tables in Dataset","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["There is one more thing I would love to see. Let's say we iteratively keep updating a data source that loaded from **load_dataset** or **load_from_disk**. Now we need to save it to the same location by overriding the previous file inorder to save the disk space. At the moment **save_to_disk** can not assign a name. So I do not see an easy way to override the previous files. @lhoestq is this possible?\r\n\r\n\r\n\r\np.s one last thing?\r\n\r\nIs there a way to flush out any connection to a data source loaded from **load_from_disk** or **load_dataset** methods? At the moment I suspect when we use any of those functions, it will always keep a pointer although we override it again with a new version of the dataset source. This is really useful in an iterative process. \r\n\r\n","> There is one more thing I would love to see. Let's say we iteratively keep updating a data source that loaded from **load_dataset** or **load_from_disk**. Now we need to save it to the same location by overriding the previous file inorder to save the disk space. At the moment **save_to_disk** can not assign a name. So I do not see an easy way to override the previous files. @lhoestq is this possible?\r\n\r\nIn the new save_to_disk, the filename of the arrow file is fixed: `dataset.arrow`.\r\nThis way is will be overwritten if you save your dataset again\r\n\r\n> Is there a way to flush out any connection to a data source loaded from **load_from_disk** or **load_dataset** methods? 
At the moment I suspect when we use any of those functions, it will always keep a pointer although we override it again with a new version of the dataset source. This is really useful in an iterative process.\r\n\r\nIf you update an arrow file, then you must reload it with `load_from_disk` for example in order to have the updated data.\r\nDoes that answer the question ? How does this \"pointer\" behavior manifest exactly on your side ?","Apparently the usage of the compute layer of pyarrow requires pyarrow>=1.0.0 (otherwise there are some issues on windows with file permissions when doing dataset concatenation).\r\n\r\nI'll bump the pyarrow requirement from, 0.17.1 to 1.0.0","\r\n> If you update an arrow file, then you must reload it with `load_from_disk` for example in order to have the updated data.\r\n> Does that answer the question? How does this \"pointer\" behavior manifest exactly on your side?\r\n\r\nYes, I checked this behavior.. if we update the .arrow file it kind of flushes out the previous one. So your solution is perfect <3. ","Sorry for spamming, there's a a bug that only happens on the CI so I have to re-run it several times","Alright I finally added all the tests I wanted !\r\nI also fixed all the bugs and now all the tests are passing :)\r\n\r\nLet me know if you have comments.\r\n\r\nI also noticed that two methods in pyarrow seem to bring some data in memory even for a memory mapped table: filter and cast:\r\n- for filter I took a look at the C++ code on the arrow's side and found [this part](https:\/\/github.com\/apache\/arrow\/blob\/55c8d74d5556b25238fb2028e9fb97290ea24684\/cpp\/src\/arrow\/compute\/kernels\/vector_selection.cc#L93-L160) that \"builds\" the array during filter. It seems to indicate that it allocates new memory for the filtered array but not 100% sure.\r\n- regarding cast I noticed that it happens when changing the precision of an array of integers. Not sure if there are other cases.\r\n\r\n\r\nMaybe we'll need to investigate this a bit for your PR on improving `filter` @theo-m , since we don't want to fill the users memory.","> Maybe we'll need to investigate this a bit for your PR on improving `filter` @theo-m , since we don't want to fill the users memory.\r\n\r\nI'm a bit unclear on this, I thought the point of the refactor was to use `Table.filter` to speed up our own `.filter` and stop using `.map` that offloaded too much stuff on disk. \r\nAt some point I recall we decided to use `keep_in_memory=True` as the expectations were that it would be hard to fill the memory?","> I'm a bit unclear on this, I thought the point of the refactor was to use Table.filter to speed up our own .filter and stop using .map that offloaded too much stuff on disk.\r\n> At some point I recall we decided to use keep_in_memory=True as the expectations were that it would be hard to fill the memory?\r\n\r\nYes it's ok to have the mask in memory, but not the full table. I was not aware that the table returned by filter could actually be in memory (it's not part of the pyarrow documentation afaik).\r\nTo be more specific I noticed that every time you call `filter`, the pyarrow total allocated memory increases.\r\nI haven't checked on a big dataset though, but it would be nice to see how much memory it uses with respect to the size of the dataset.","I have addressed your comments @theo-m @albertvillanova ! Thanks for the suggestions","I totally agree with you. 
I would have loved to use inheritance instead.\r\nHowever because `pa.Table` is a cython class without proper initialization methods (you can't call `__init__` for example): you can't instantiate a subclass of `pa.Table` in python.\r\nTo be more specific, you actually can try to instantiate a subclass of `pa.Table` with no data BUT this is not a valid table so you get an error.\r\nAnd since `pa.Table` objects are immutable you can't even set the data in `__new__` or `__init__`.\r\n\r\nEDIT: one could make a new cython class that inherits from `pa.Table` with proper initialization methods, so that we can inherit from this class instead in python. We can do that in the future if we plan to use cython in `datasets`.\r\n(see: https:\/\/arrow.apache.org\/docs\/python\/extending.html)","@lhoestq, but in which cases you would like to instantiate directly either `InMemoryTable` or `MemoryMappedTable`? You normally use one of their `from_xxx` class methods...","Yes I was thinking of these cases. The issue is that they return `pa.Table` objects even from a subclass of `pa.Table`","That is indeed a weird behavior...","I guess that in this case, the best approach is as you did, using composition over inheritance...\r\n\r\nhttps:\/\/github.com\/apache\/arrow\/pull\/5322","@lhoestq I think you forgot to add the new classes to the docs?","Yes you're right, let me add them"],"created_at":1615395647000,"updated_at":1617115613000,"closed_at":1616777519000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2025","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2025","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2025.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2025.patch"},"body":"## Intro\r\n\r\nCurrently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files).\r\nThis assumption is used for pickling for example:\r\n- in-memory dataset can just be pickled\/unpickled in-memory\r\n- on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling\r\n\r\n## Issues\r\n\r\nBecause of this assumption, we can't easily implement methods like `Dataset.add_item` to append more rows to a dataset, or `dataset.add_column` to add a column, since we can't mix data from memory and data from the disk.\r\nMoreover, `concatenate_datasets` doesn't work if the datasets to concatenate are not all from memory, or all form the disk.\r\n\r\n## Solution provided in this PR\r\n\r\nI changed this by allowing several types of Table to be used in the Dataset object.\r\nMore specifically I added three pyarrow Table wrappers: InMemoryTable, MemoryMappedTable and ConcatenationTable.\r\nThe in-memory and memory-mapped tables implement the pickling behavior described above.\r\nThe ConcatenationTable can be made from several tables (either in-memory or memory mapped) called \"blocks\". Pickling a ConcatenationTable simply pickles the underlying blocks.\r\n\r\n## Implementation details\r\n\r\nThe three tables classes mentioned above all inherit from a `Table` class defined in `table.py`, which is a wrapper of a pyarrow table. 
The `Table` wrapper implements all the attributes and methods of the underlying pyarrow table.\r\n\r\nRegarding the MemoryMappedTable:\r\nReloading a pyarrow table from the disk makes you lose all the changes you may have applied (slice, rename_columns, drop, cast etc.). Therefore the MemoryMappedTable implements a \"replay\" mechanism to re-apply the changes when reloading the pyarrow table from the disk.\r\n\r\n## Checklist\r\n\r\n- [x] add InMemoryTable\r\n- [x] add MemoryMappedTable\r\n- [x] add ConcatenationTable\r\n- [x] Update the ArrowReader to use these new tables depending on the `in_memory` parameter\r\n- [x] Update Dataset.from_xxx methods\r\n- [x] Update load_from_disk and save_to_disk\r\n- [x] Backward compatibility of load_from_disk\r\n- [x] Add tests for the new tables\r\n- [x] Update current tests\r\n- [ ] Documentation\r\n\r\n----------\r\n\r\nI would be happy to discuss the design of this PR :)\r\n\r\nClose #1877 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2025\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2024","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2024\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2024\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2024\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2024","id":827842962,"node_id":"MDExOlB1bGxSZXF1ZXN0NTg5NzEzNDAy","number":2024,"title":"Remove print statement from mnist.py","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for noticing !\r\n#2020 fixed this earlier today though ^^'\r\n\r\nClosing this one"],"created_at":1615387198000,"updated_at":1615485832000,"closed_at":1615485831000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2024","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2024","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2024.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2024.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2024\/timeline","performed_via_github_app":null,"is_pull_request":true} 
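Editor's note on issue #2026 above: the comments identify the cause as `Dataset.rename_column` not updating the `_format_columns` list registered earlier by `Dataset.set_format`. A minimal workaround sketch follows, assuming an affected `datasets` release (before the linked fix); the ordering-based workaround is inferred from that diagnosis rather than stated verbatim in the thread.

```python
# Hypothetical workaround sketch for the KeyError in issue #2026:
# on affected versions, renaming a column *after* set_format leaves the old
# column name in the internal format-columns list, so neither `img` nor
# `image` shows up in the mapped batches. Renaming first and formatting
# afterwards keeps the registered column names consistent.
from datasets import load_dataset

raw_dataset = load_dataset("cifar10")

# Rename first, then register the torch format with the *new* column name.
raw_dataset = raw_dataset.rename_column("img", "image")
raw_dataset.set_format("torch", columns=["image", "label"])

print(raw_dataset["train"][0].keys())  # expected: dict_keys(['image', 'label'])
```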
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2023","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2023\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2023\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2023\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2023","id":827819608,"node_id":"MDExOlB1bGxSZXF1ZXN0NTg5NjkyNDU2","number":2023,"title":"Add Romanian to XQuAD","user":{"login":"M-Salti","id":9285264,"node_id":"MDQ6VXNlcjkyODUyNjQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9285264?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/M-Salti","html_url":"https:\/\/github.com\/M-Salti","followers_url":"https:\/\/api.github.com\/users\/M-Salti\/followers","following_url":"https:\/\/api.github.com\/users\/M-Salti\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/M-Salti\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/M-Salti\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/M-Salti\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/M-Salti\/orgs","repos_url":"https:\/\/api.github.com\/users\/M-Salti\/repos","events_url":"https:\/\/api.github.com\/users\/M-Salti\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/M-Salti\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Thanks for updating XQUAD :)\r\n\r\nThe slow test is failing though since there's no dummy data nor metadata in dataset_infos.json for the romanian configuration.\r\n\r\nCould you please generate the dummy data with\r\n```\r\ndatasets-cli dummy_data .\/datasets\/xquad --auto_generate --json_field data\r\n```\r\nThis will update all the dummy data files, and also add the new one for the romanian configuration.\r\n\r\n\r\nYou can also update the metadata with\r\n```\r\ndatasets-cli test .\/datasets\/xquad --name xquad.ro --save_infos\r\n```\r\nThis will update the dataset_infos.json file with the metadata of the romanian config :)\r\n\r\nThanks in advance !","Hello Quentin, and thanks for your help.\r\n\r\nI found that running\r\n\r\n```python\r\ndatasets-cli test .\/datasets\/xquad --name xquad.ro --save_infos\r\n```\r\n\r\nwas not enough to pass the slow tests, because it was not adding the new `xquad.ro.json` checksum to the other configs infos and becuase of that an `UnexpectedDownloadedFile` error was being thrown, so instead I used:\r\n\r\n```python\r\ndatasets-cli test .\/datasets\/xquad --save_infos --all_configs --ignore_verifications\r\n```\r\n\r\n`--ignore_verifications` was necessary to bypass the same `UnexpectedDownloadedFile` error.\r\n\r\nAdditionally, I deleted `dummy_data_copy.zip` and the `copy.sh` script because they both seem now unnecessary.\r\n\r\nThe slow tests for both the real and dummy data now pass successfully, so I hope that I didn't mess anything up :)\r\n","You're right, you needed the `--ignore_verifications` flag !\r\nThanks for updating them :)\r\n\r\nAlthough I just noticed that the new dummy_data.zip files are quite big (170KB each) because they contain the json files of all the languages, while only one json file per language is necessary. 
Could you remove the unnecessary json files to reduce the size of the dummy_data.zip files if you don't mind ?","Done. I created a script (`remove_unnecessary_langs.sh`) to automate the process.\r\n"],"created_at":1615386272000,"updated_at":1615802897000,"closed_at":1615802897000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2023","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2023","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2023.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2023.patch"},"body":"On Jan 18, XQuAD was updated with a new Romanian validation file ([xquad commit link](https:\/\/github.com\/deepmind\/xquad\/commit\/60cac411649156efb6aab9dd4c9cde787a2c0345))\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2023\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2022","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2022\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2022\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2022\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2022","id":827435033,"node_id":"MDU6SXNzdWU4Mjc0MzUwMzM=","number":2022,"title":"ValueError when rename_column on splitted dataset","user":{"login":"simonschoe","id":53626067,"node_id":"MDQ6VXNlcjUzNjI2MDY3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/53626067?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/simonschoe","html_url":"https:\/\/github.com\/simonschoe","followers_url":"https:\/\/api.github.com\/users\/simonschoe\/followers","following_url":"https:\/\/api.github.com\/users\/simonschoe\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/simonschoe\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/simonschoe\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/simonschoe\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/simonschoe\/orgs","repos_url":"https:\/\/api.github.com\/users\/simonschoe\/repos","events_url":"https:\/\/api.github.com\/users\/simonschoe\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/simonschoe\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi,\r\n\r\nThis is a bug so thanks for reporting it. `Dataset.__setstate__` is the problem, which is called when `Dataset.rename_column` tries to copy the dataset with `copy.deepcopy(self)`. 
This only happens if the `split` arg in `load_dataset` was defined as `ReadInstruction`.\r\n\r\nTo overcome this issue, use the named splits API (for now):\r\n```python\r\ntrain_ds, test_ds = load_dataset(\r\n path='csv', \r\n delimiter='\\t', \r\n data_files=text_files, \r\n split=['train[:90%]', 'train[-10%:]'],\r\n)\r\n\r\ntrain_ds = train_ds.rename_column('sentence', 'text')\r\n```","This has been fixed in #2043 , thanks @mariosasko \r\nThe fix is available on master and we'll do a new release soon :)\r\n\r\nfeel free to re-open if you still have issues"],"created_at":1615369238000,"updated_at":1615903568000,"closed_at":1615903505000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi there,\r\nI am loading `.tsv` file via `load_dataset` and subsequently split the rows into training and test set via the `ReadInstruction` API like so:\r\n\r\n```python\r\nsplit = {\r\n 'train': ReadInstruction('train', to=90, unit='%'),\r\n 'test': ReadInstruction('train', from_=-10, unit='%')\r\n}\r\n\r\ndataset = load_dataset(\r\n path='csv', # use 'text' loading script to load from local txt-files\r\n delimiter='\\t', # xxx\r\n data_files=text_files, # list of paths to local text files\r\n split=split, # xxx\r\n)\r\n\r\ndataset\r\n```\r\n\r\nPart of output:\r\n```python\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence', 'sentiment'],\r\n num_rows: 900\r\n })\r\n test: Dataset({\r\n features: ['sentence', 'sentiment'],\r\n num_rows: 100\r\n })\r\n})\r\n```\r\nAfterwards I'd like to rename the 'sentence' column to 'text' in order to be compatible with my modelin pipeline. If I run the following code I experience a `ValueError` however:\r\n```python\r\ndataset['train'].rename_column('sentence', 'text')\r\n```\r\n```python\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/splits.py in __init__(self, name)\r\n 353 for split_name in split_names_from_instruction:\r\n 354 if not re.match(_split_re, split_name):\r\n--> 355 raise ValueError(f\"Split name should match '{_split_re}'' but got '{split_name}'.\")\r\n 356 \r\n 357 def __str__(self):\r\n\r\nValueError: Split name should match '^\\w+(\\.\\w+)*$'' but got 'ReadInstruction('.\r\n```\r\nIn particular, these behavior does not arise if I use the deprecated `rename_column_` method. Any idea what causes the error? Would assume something in the way I defined the split.\r\n\r\nThanks in advance! 
:)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2022\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2021","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2021\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2021\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2021\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2021","id":826988016,"node_id":"MDU6SXNzdWU4MjY5ODgwMTY=","number":2021,"title":"Interactively doing save_to_disk and load_from_disk corrupts the datasets object?","user":{"login":"shamanez","id":16892570,"node_id":"MDQ6VXNlcjE2ODkyNTcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16892570?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/shamanez","html_url":"https:\/\/github.com\/shamanez","followers_url":"https:\/\/api.github.com\/users\/shamanez\/followers","following_url":"https:\/\/api.github.com\/users\/shamanez\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/shamanez\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/shamanez\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/shamanez\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/shamanez\/orgs","repos_url":"https:\/\/api.github.com\/users\/shamanez\/repos","events_url":"https:\/\/api.github.com\/users\/shamanez\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/shamanez\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi,\r\n\r\nCan you give us a minimal reproducible example? This [part](https:\/\/huggingface.co\/docs\/datasets\/master\/processing.html#controling-the-cache-behavior) of the docs explains how to control caching."],"created_at":1615344514000,"updated_at":1615630061000,"closed_at":1615630061000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":" dataset_info.json file saved after using save_to_disk gets corrupted as follows. \r\n \r\n \r\n![image](https:\/\/user-images.githubusercontent.com\/16892570\/110568474-ed969880-81b7-11eb-832f-2e5129656016.png)\r\n\r\nIs there a way to disable the cache that will save to \/tmp\/huggiface\/datastes ? 
\r\nI have a feeling there is a serious issue with cashing.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2021\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2020","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2020\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2020\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2020\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2020","id":826961126,"node_id":"MDExOlB1bGxSZXF1ZXN0NTg4OTE3MjYx","number":2020,"title":"Remove unnecessary docstart check in conll-like datasets","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1615342816000,"updated_at":1615469617000,"closed_at":1615469617000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2020","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2020","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2020.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2020.patch"},"body":"Related to this PR: #1998\r\n\r\nAdditionally, this PR adds the docstart note to the conll2002 dataset card ([link](https:\/\/raw.githubusercontent.com\/teropa\/nlp\/master\/resources\/corpora\/conll2002\/ned.train) to the raw data with `DOCSTART` lines).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2020\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2019","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2019\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2019\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2019\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2019","id":826625706,"node_id":"MDExOlB1bGxSZXF1ZXN0NTg4NjEyODgy","number":2019,"title":"Replace print with logging in dataset 
scripts","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq Maybe a script or even a test in `test_dataset_common.py` that verifies that a dataset script meets some set of quality standards (print calls and todos from the dataset script template are not present, etc.) could be added?","Yes definitely !"],"created_at":1615323574000,"updated_at":1615543741000,"closed_at":1615479259000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2019","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2019","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2019.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2019.patch"},"body":"Replaces `print(...)` in the dataset scripts with the library logger.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2019\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2018","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2018\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2018\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2018\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2018","id":826473764,"node_id":"MDExOlB1bGxSZXF1ZXN0NTg4NDc0NTQz","number":2018,"title":"Md gender card 
update","user":{"login":"mcmillanmajora","id":26722925,"node_id":"MDQ6VXNlcjI2NzIyOTI1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26722925?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mcmillanmajora","html_url":"https:\/\/github.com\/mcmillanmajora","followers_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/followers","following_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/orgs","repos_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/repos","events_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Link to the card: https:\/\/github.com\/mcmillanmajora\/datasets\/blob\/md-gender-card\/datasets\/md_gender_bias\/README.md","dataset card* @sgugger :p ","Ahah that's what I wanted to say @lhoestq, thanks for fixing. Not used to review the Datasets side ;-)"],"created_at":1615316240000,"updated_at":1615570260000,"closed_at":1615570260000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2018","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2018","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2018.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2018.patch"},"body":"I updated the descriptions of the datasets as they appear in the HF repo and the descriptions of the source datasets according to what I could find from the paper and the references. I'm still a little unclear about some of the fields of the different configs, and there was little info on the word list and name list. 
I'll contact the authors to see if they have any additional information or suggested changes.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2018\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2017","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2017\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2017\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2017\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2017","id":826428578,"node_id":"MDExOlB1bGxSZXF1ZXN0NTg4NDMyNDc2","number":2017,"title":"Add TF-based Features to handle different modes of data","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1615314592000,"updated_at":1615984328000,"closed_at":1615984327000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2017","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2017","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2017.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2017.patch"},"body":"Hi,\r\n\r\nI am creating this draft PR to work on add features similar to [TF datasets](https:\/\/github.com\/tensorflow\/datasets\/tree\/master\/tensorflow_datasets\/core\/features). I'll be starting with `Tensor` and `FeatureConnector` classes, and build upon them to add other features as well. 
This is a work in progress.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2017\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2016","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2016\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2016\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2016\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2016","id":825965493,"node_id":"MDExOlB1bGxSZXF1ZXN0NTg4MDA5NjEz","number":2016,"title":"Not all languages have 2 digit codes.","user":{"login":"asiddhant","id":13891775,"node_id":"MDQ6VXNlcjEzODkxNzc1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13891775?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/asiddhant","html_url":"https:\/\/github.com\/asiddhant","followers_url":"https:\/\/api.github.com\/users\/asiddhant\/followers","following_url":"https:\/\/api.github.com\/users\/asiddhant\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/asiddhant\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/asiddhant\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/asiddhant\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/asiddhant\/orgs","repos_url":"https:\/\/api.github.com\/users\/asiddhant\/repos","events_url":"https:\/\/api.github.com\/users\/asiddhant\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/asiddhant\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1615298019000,"updated_at":1615485663000,"closed_at":1615485663000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2016","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2016","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2016.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2016.patch"},"body":".","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2016\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2015","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2015\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2015\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2015\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2015","id":825942108,"node_id":"MDExOlB1bGxSZXF1ZXN0NTg3OTg4NTQ0","number":2015,"title":"Fix ipython function creation in 
tests","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1615297019000,"updated_at":1615298764000,"closed_at":1615298763000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2015","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2015","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2015.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2015.patch"},"body":"The test at `tests\/test_caching.py::RecurseDumpTest::test_dump_ipython_function` was failing in python 3.8 because the ipython function was not properly created.\r\n\r\nFix #2010 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2015\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2014","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2014\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2014\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2014\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2014","id":825916531,"node_id":"MDExOlB1bGxSZXF1ZXN0NTg3OTY1NDg3","number":2014,"title":"more explicit method 
parameters","user":{"login":"theo-m","id":17948980,"node_id":"MDQ6VXNlcjE3OTQ4OTgw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17948980?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/theo-m","html_url":"https:\/\/github.com\/theo-m","followers_url":"https:\/\/api.github.com\/users\/theo-m\/followers","following_url":"https:\/\/api.github.com\/users\/theo-m\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/theo-m\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/theo-m\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/theo-m\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/theo-m\/orgs","repos_url":"https:\/\/api.github.com\/users\/theo-m\/repos","events_url":"https:\/\/api.github.com\/users\/theo-m\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/theo-m\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1615295909000,"updated_at":1615370917000,"closed_at":1615370916000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2014","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2014","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2014.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2014.patch"},"body":"re: #2009\n\nnot super convinced this is better, and while I usually fight against kwargs here it seems to me that it better conveys the relationship to the `_split_generator` method.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2014\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2013","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2013\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2013\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2013\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2013","id":825694305,"node_id":"MDExOlB1bGxSZXF1ZXN0NTg3NzYzMTgx","number":2013,"title":"Add Cryptonite 
dataset","user":{"login":"theo-m","id":17948980,"node_id":"MDQ6VXNlcjE3OTQ4OTgw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17948980?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/theo-m","html_url":"https:\/\/github.com\/theo-m","followers_url":"https:\/\/api.github.com\/users\/theo-m\/followers","following_url":"https:\/\/api.github.com\/users\/theo-m\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/theo-m\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/theo-m\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/theo-m\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/theo-m\/orgs","repos_url":"https:\/\/api.github.com\/users\/theo-m\/repos","events_url":"https:\/\/api.github.com\/users\/theo-m\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/theo-m\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1615285931000,"updated_at":1615318027000,"closed_at":1615318026000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2013","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2013","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2013.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2013.patch"},"body":"cc @aviaefrat who's the original author of the dataset & paper, see https:\/\/github.com\/aviaefrat\/cryptonite","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2013\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2012","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2012\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2012\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2012\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2012","id":825634064,"node_id":"MDU6SXNzdWU4MjU2MzQwNjQ=","number":2012,"title":"No upstream 
branch","user":{"login":"theo-m","id":17948980,"node_id":"MDQ6VXNlcjE3OTQ4OTgw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17948980?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/theo-m","html_url":"https:\/\/github.com\/theo-m","followers_url":"https:\/\/api.github.com\/users\/theo-m\/followers","following_url":"https:\/\/api.github.com\/users\/theo-m\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/theo-m\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/theo-m\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/theo-m\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/theo-m\/orgs","repos_url":"https:\/\/api.github.com\/users\/theo-m\/repos","events_url":"https:\/\/api.github.com\/users\/theo-m\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/theo-m\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to documentation"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["What's the issue exactly ?\r\n\r\nGiven an `upstream` remote repository with url `https:\/\/github.com\/huggingface\/datasets.git`, you can totally rebase from `upstream\/master`.\r\n\r\nIt's mentioned at the beginning how to add the `upstream` remote 
repository\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/987df6b4e9e20fc0c92bc9df48137d170756fd7b\/ADD_NEW_DATASET.md#L10-L14","~~What difference is there with the default `origin` remote that is set when you clone the repo?~~ I've just understood that this applies to **forks** of the repo \ud83e\udd21 "],"created_at":1615283335000,"updated_at":1615289611000,"closed_at":1615289611000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Feels like the documentation on adding a new dataset is outdated?\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/987df6b4e9e20fc0c92bc9df48137d170756fd7b\/ADD_NEW_DATASET.md#L49-L54\r\n\r\nThere is no upstream branch on remote. ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2012\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2011","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2011\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2011\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2011\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2011","id":825621952,"node_id":"MDExOlB1bGxSZXF1ZXN0NTg3Njk4MTAx","number":2011,"title":"Add RoSent Dataset","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1615282808000,"updated_at":1615485652000,"closed_at":1615485652000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2011","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2011","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2011.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2011.patch"},"body":"This PR adds a Romanian sentiment analysis dataset. This PR also closes pending PR #1529.\r\n\r\nI had to add an `original_id` feature because the dataset files have repeated IDs. I can remove them if needed. 
I have also added `id` which is unique.\r\n\r\nLet me know in case of any issues.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2011\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2010","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2010\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2010\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2010\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2010","id":825567635,"node_id":"MDU6SXNzdWU4MjU1Njc2MzU=","number":2010,"title":"Local testing fails","user":{"login":"theo-m","id":17948980,"node_id":"MDQ6VXNlcjE3OTQ4OTgw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17948980?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/theo-m","html_url":"https:\/\/github.com\/theo-m","followers_url":"https:\/\/api.github.com\/users\/theo-m\/followers","following_url":"https:\/\/api.github.com\/users\/theo-m\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/theo-m\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/theo-m\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/theo-m\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/theo-m\/orgs","repos_url":"https:\/\/api.github.com\/users\/theo-m\/repos","events_url":"https:\/\/api.github.com\/users\/theo-m\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/theo-m\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["I'm not able to reproduce on my side.\r\nCan you provide the full stacktrace please ?\r\nWhat version of `python` and `dill` do you have ? Which OS are you using ?","```\r\nco_filename = '', returned_obj = [0]\r\n \r\n def create_ipython_func(co_filename, returned_obj):\r\n def func():\r\n return returned_obj\r\n \r\n code = func.__code__\r\n> code = CodeType(*[getattr(code, k) if k != \"co_filename\" else co_filename for k in code_args])\r\nE TypeError: an integer is required (got type bytes)\r\n\r\ntests\/test_caching.py:152: TypeError\r\n```\r\n\r\nPython 3.8.8 \r\ndill==0.3.1.1\r\n","I managed to reproduce. 
This comes from the CodeType init signature that is different in python 3.8.8\r\nI opened a PR to fix this test\r\nThanks !"],"created_at":1615280498000,"updated_at":1615298763000,"closed_at":1615298763000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I'm following the CI setup as described in \r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/8eee4fa9e133fe873a7993ba746d32ca2b687551\/.circleci\/config.yml#L16-L19\r\n\r\nin a new conda environment, at commit https:\/\/github.com\/huggingface\/datasets\/commit\/4de6dbf84e93dad97e1000120d6628c88954e5d4\r\n\r\nand getting\r\n\r\n```\r\nFAILED tests\/test_caching.py::RecurseDumpTest::test_dump_ipython_function - TypeError: an integer is required (got type bytes)\r\n1 failed, 2321 passed, 5109 skipped, 10 warnings in 124.32s (0:02:04)\r\n```\r\n\r\nSeems like a discrepancy with CI, perhaps a lib version that's not controlled? \r\nTried with `pyarrow=={1.0.0,0.17.1,2.0.0}`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2010\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2009","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2009\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2009\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2009\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2009","id":825541366,"node_id":"MDU6SXNzdWU4MjU1NDEzNjY=","number":2009,"title":"Ambiguous documentation","user":{"login":"theo-m","id":17948980,"node_id":"MDQ6VXNlcjE3OTQ4OTgw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17948980?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/theo-m","html_url":"https:\/\/github.com\/theo-m","followers_url":"https:\/\/api.github.com\/users\/theo-m\/followers","following_url":"https:\/\/api.github.com\/users\/theo-m\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/theo-m\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/theo-m\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/theo-m\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/theo-m\/orgs","repos_url":"https:\/\/api.github.com\/users\/theo-m\/repos","events_url":"https:\/\/api.github.com\/users\/theo-m\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/theo-m\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to 
documentation"}],"state":"closed","locked":false,"assignee":{"login":"theo-m","id":17948980,"node_id":"MDQ6VXNlcjE3OTQ4OTgw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17948980?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/theo-m","html_url":"https:\/\/github.com\/theo-m","followers_url":"https:\/\/api.github.com\/users\/theo-m\/followers","following_url":"https:\/\/api.github.com\/users\/theo-m\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/theo-m\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/theo-m\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/theo-m\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/theo-m\/orgs","repos_url":"https:\/\/api.github.com\/users\/theo-m\/repos","events_url":"https:\/\/api.github.com\/users\/theo-m\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/theo-m\/received_events","type":"User","site_admin":false},"assignees":[{"login":"theo-m","id":17948980,"node_id":"MDQ6VXNlcjE3OTQ4OTgw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17948980?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/theo-m","html_url":"https:\/\/github.com\/theo-m","followers_url":"https:\/\/api.github.com\/users\/theo-m\/followers","following_url":"https:\/\/api.github.com\/users\/theo-m\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/theo-m\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/theo-m\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/theo-m\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/theo-m\/orgs","repos_url":"https:\/\/api.github.com\/users\/theo-m\/repos","events_url":"https:\/\/api.github.com\/users\/theo-m\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/theo-m\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @theo-m !\r\n\r\nA few lines above this line, you'll find that the `_split_generators` method returns a list of `SplitGenerator`s objects:\r\n\r\n```python\r\ndatasets.SplitGenerator(\r\n name=datasets.Split.VALIDATION,\r\n # These kwargs will be passed to _generate_examples\r\n gen_kwargs={\r\n \"filepath\": os.path.join(data_dir, \"dev.jsonl\"),\r\n \"split\": \"dev\",\r\n },\r\n),\r\n```\r\n\r\nNotice the `gen_kwargs` argument passed to the constructor of `SplitGenerator`: this dict will be unpacked as keyword arguments to pass to the `_generat_examples` method (in this case the `filepath` and `split` arguments).\r\n\r\nLet me know if that helps!","Oh ok I hadn't made the connection between those two, will offer a tweak to the comment and the template then - thanks!"],"created_at":1615279331000,"updated_at":1615561294000,"closed_at":1615561294000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"https:\/\/github.com\/huggingface\/datasets\/blob\/2ac9a0d24a091989f869af55f9f6411b37ff5188\/templates\/new_dataset_script.py#L156-L158\r\n\r\nLooking at the template, I find this documentation line to be confusing, the method parameters don't include the `gen_kwargs` so I'm unclear where they're coming from.\r\n\r\nHappy to push a PR with a clearer statement when I understand the meaning.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2009\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2008","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2008\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2008\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2008\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2008","id":825153804,"node_id":"MDExOlB1bGxSZXF1ZXN0NTg3Mjc1Njk4","number":2008,"title":"Fix various typos\/grammer in the docs","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["What do yo think of the documentation btw ?\r\nWhat parts would you like to see improved ?","I like how concise and straightforward the docs are.\r\n\r\nFew things that would further improve the docs IMO:\r\n* the usage example of `Dataset.formatted_as` in https:\/\/huggingface.co\/docs\/datasets\/master\/processing.html\r\n* the \"Open in Colab\" button would be nice where it makes sense (we can borrow this from the transformers project + link to HF Forum)"],"created_at":1615253968000,"updated_at":1615833769000,"closed_at":1615285292000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2008","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2008","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2008.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2008.patch"},"body":"This PR:\r\n* fixes various typos\/grammer I came across while reading the docs\r\n* adds the \"Install with conda\" installation instructions\r\n\r\nCloses #1959 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2008\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2007","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2007\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2007\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2007\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2007","id":824518158,"node_id":"MDU6SXNzdWU4MjQ1MTgxNTg=","number":2007,"title":"How to not load huggingface datasets into memory ","user":{"login":"dorost1234","id":79165106,"node_id":"MDQ6VXNlcjc5MTY1MTA2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/79165106?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dorost1234","html_url":"https:\/\/github.com\/dorost1234","followers_url":"https:\/\/api.github.com\/users\/dorost1234\/followers","following_url":"https:\/\/api.github.com\/users\/dorost1234\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dorost1234\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dorost1234\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dorost1234\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dorost1234\/orgs","repos_url":"https:\/\/api.github.com\/users\/dorost1234\/repos","events_url":"https:\/\/api.github.com\/users\/dorost1234\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dorost1234\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["So maybe a summary here: \r\nIf I could fit a large model with batch_size = X into memory, is there a way I could train this model for huge datasets with keeping setting the same? thanks ","The `datastets` library doesn't load datasets into memory. Therefore you can load a dataset that is terabytes big without filling up your RAM.\r\n\r\nThe only thing that's loaded into memory during training is the batch used in the training step.\r\nSo as long as your model works with batch_size = X, then you can load an even bigger dataset and it will work as well with the same batch_size.\r\n\r\nNote that you still have to take into account that some batches take more memory than others, depending on the texts lengths. If it works for a batch with batch_size = X and with texts of maximum length, then it will work for all batches.\r\n\r\nIn your case I guess that there are a few long sentences in the dataset. For those long sentences you get a memory error on your GPU because they're too long. By passing `max_train_samples` you may have taken a subset of the dataset that only contain short sentences. 
That's probably why in your case it worked only when you set `max_train_samples`.\r\nI'd suggest you to reduce the batch size so that the batches with long sentences can be loaded on the GPU.\r\n\r\nLet me know if that helps or if you have other questions"],"created_at":1615206926000,"updated_at":1628100145000,"closed_at":1628100145000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI am running this example from transformers library version 4.3.3:\r\n(Here is the full documentation https:\/\/github.com\/huggingface\/transformers\/issues\/8771 but the running command should work out of the box)\r\n\r\n USE_TF=0 deepspeed run_seq2seq.py --model_name_or_path google\/mt5-base --dataset_name wmt16 --dataset_config_name ro-en --source_prefix \"translate English to Romanian: \" --task translation_en_to_ro --output_dir \/test\/test_large --do_train --do_eval --predict_with_generate --max_train_samples 500 --max_val_samples 500 --max_source_length 128 --max_target_length 128 --sortish_sampler --per_device_train_batch_size 8 --val_max_target_length 128 --deepspeed ds_config.json --num_train_epochs 1 --eval_steps 25000 --warmup_steps 500 --overwrite_output_dir\r\n\r\n(Here please find the script: https:\/\/github.com\/huggingface\/transformers\/blob\/master\/examples\/seq2seq\/run_seq2seq.py)\r\n\r\nIf you do not pass max_train_samples in above command to load the full dataset, then I get memory issue on a gpu with 24 GigBytes of memory.\r\n \r\nI need to train large-scale mt5 model on large-scale datasets of wikipedia (multiple of them concatenated or other datasets in multiple languages like OPUS), could you help me how I can avoid loading the full data into memory? to make the scripts not related to data size? \r\n\r\nIn above example, I was hoping the script could work without relying on dataset size, so I can still train the model without subsampling training set.\r\n\r\nthank you so much @lhoestq for your great help in advance\r\n\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2007\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2006","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2006\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2006\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2006\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2006","id":824457794,"node_id":"MDExOlB1bGxSZXF1ZXN0NTg2Njg5Nzk2","number":2006,"title":"Don't gitignore 
dvc.lock","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1615201988000,"updated_at":1615202915000,"closed_at":1615202914000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2006","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2006","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2006.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2006.patch"},"body":"The benchmarks runs are [failing](https:\/\/github.com\/huggingface\/datasets\/runs\/2055534629?check_suite_focus=true) because of \r\n```\r\nERROR: 'dvc.lock' is git-ignored.\r\n```\r\n\r\nI removed the dvc.lock file from the gitignore to fix that","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2006\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2005","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2005\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2005\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2005\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2005","id":824275035,"node_id":"MDU6SXNzdWU4MjQyNzUwMzU=","number":2005,"title":"Setting to torch format not working with torchvision and 
MNIST","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Adding to the previous information, I think `torch.utils.data.DataLoader` is doing some conversion. \r\nWhat I tried:\r\n```python\r\ntrain_dataset = load_dataset('mnist')\r\n```\r\nI don't use any `map` or `set_format` or any `transform`. I use this directly, and try to load batches using the `DataLoader` with batch size 2, I get an output like this for the `image`:\r\n\r\n```\r\n[[tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor...\r\n```\r\nFor `label`, it works fine:\r\n```\r\ntensor([7, 6])\r\n```\r\nNote that I didn't specify conversion to torch tensors anywhere.\r\n\r\nBasically, there are two problems here:\r\n1. `dataset.map` doesn't return tensor type objects, even though it uses the transforms, the grayscale conversion in transform was done, but the output was lists only.\r\n2. The `DataLoader` performs its own conversion, which may be not desired.\r\n\r\nI understand that we can't change `DataLoader` because it is a torch functionality, however, is there a way we can handle image data to allow using it with torch `DataLoader` and `torchvision` properly?\r\n\r\nI think if the `image` was a torch tensor (N,H,W,C), or a list of torch tensors (H,W,C), before it is passed to `DataLoader`, then we might not face this issue. ","What's the feature types of your new dataset after `.map` ?\r\n\r\nCan you try with adding `features=` in the `.map` call in order to set the \"image\" feature type to `Array2D` ?\r\nThe default feature type is lists of lists, we've not implemented shape verification to use ArrayXD instead of nested lists yet","Hi @lhoestq\r\n\r\nRaw feature types are like this:\r\n```\r\nImage:\r\n 60000 #(type, len)\r\n 28\r\n 28\r\n\r\nLabel:\r\n 60000\r\n\r\n```\r\nInside the `prepare_feature` method with batch size 100000 , after processing, they are like this:\r\n\r\nInside Prepare Train Features\r\n```\r\nImage:\r\n 10000\r\n 1\r\n 28\r\n 28\r\n\r\nLabel:\r\n 10000\r\n\r\n```\r\n\r\nAfter map, the feature type are like this:\r\n```\r\nImage:\r\n 60000\r\n 1\r\n 28\r\n 28\r\n\r\nLabel:\r\n 60000\r\n\r\n```\r\n\r\nAfter dataloader with batch size 2, the batch features are like this:\r\n```\r\nImage:\r\n 1\r\n 28\r\n 28\r\n 2\r\n\r\nLabel:\r\n 2\r\n\r\n```\r\n
\r\n\r\nWhen I was setting the format of `train_dataset` to 'torch' after mapping - \r\n```\r\nImage:\r\n 60000\r\n 1\r\n 28\r\n 28\r\n\r\nLabel:\r\n 60000\r\n\r\n```\r\n\r\nCorresponding DataLoader batch:\r\n```\r\nFrom DataLoader batch features\r\nImage:\r\n 1\r\n 28\r\n 2\r\n 28\r\n\r\nLabel:\r\n 2\r\n\r\n```\r\n\r\nI will check with features and get back.\r\n\r\n\r\n\r\n","Hi @lhoestq\r\n\r\n# Using Array3D\r\nI tried this:\r\n```python\r\nfeatures = datasets.Features({\r\n \"image\": datasets.Array3D(shape=(1,28,28),dtype=\"float32\"),\r\n \"label\": datasets.features.ClassLabel(names=[\"0\", \"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\"]),\r\n })\r\ntrain_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000)\r\n```\r\nand it didn't fix the issue.\r\n\r\nDuring the `prepare_train_features:\r\n```\r\nImage:\r\n 10000\r\n 1\r\n 28\r\n 28\r\n\r\nLabel:\r\n 10000\r\n\r\n```\r\n\r\nAfter the `map`:\r\n\r\n```\r\nImage:\r\n 60000\r\n 1\r\n 28\r\n 28\r\n\r\nLabel:\r\n 60000\r\n\r\n```\r\nFrom the DataLoader batch:\r\n```\r\nImage:\r\n 1\r\n 28\r\n 28\r\n 2\r\n\r\nLabel:\r\n 2\r\n\r\n```\r\nIt is the same as before.\r\n\r\n---\r\n\r\nUsing `datasets.Sequence(datasets.Array2D(shape=(28,28),dtype=\"float32\"))` gave an error during `map`:\r\n\r\n```python\r\nArrowNotImplementedError Traceback (most recent call last)\r\n in ()\r\n 3 \"label\": datasets.features.ClassLabel(names=[\"0\", \"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\"]),\r\n 4 })\r\n----> 5 train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000)\r\n\r\n15 frames\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/dataset_dict.py in map(self, function, with_indices, input_columns, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc)\r\n 446 num_proc=num_proc,\r\n 447 )\r\n--> 448 for k, dataset in self.items()\r\n 449 }\r\n 450 )\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/dataset_dict.py in (.0)\r\n 446 num_proc=num_proc,\r\n 447 )\r\n--> 448 for k, dataset in self.items()\r\n 449 }\r\n 450 )\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)\r\n 1307 fn_kwargs=fn_kwargs,\r\n 1308 new_fingerprint=new_fingerprint,\r\n-> 1309 update_data=update_data,\r\n 1310 )\r\n 1311 else:\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/arrow_dataset.py in wrapper(*args, **kwargs)\r\n 202 }\r\n 203 # apply actual function\r\n--> 204 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n 205 datasets: List[\"Dataset\"] = list(out.values()) if isinstance(out, dict) else [out]\r\n 206 # re-apply format to the output\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/fingerprint.py in wrapper(*args, **kwargs)\r\n 335 # Call actual function\r\n 336 \r\n--> 337 out = func(self, *args, **kwargs)\r\n 338 \r\n 339 # Update fingerprint of in-place transforms + update in-place history of transforms\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, 
remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, update_data)\r\n 1580 if update_data:\r\n 1581 batch = cast_to_python_objects(batch)\r\n-> 1582 writer.write_batch(batch)\r\n 1583 if update_data:\r\n 1584 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)\r\n 274 typed_sequence = TypedSequence(batch_examples[col], type=col_type, try_type=col_try_type)\r\n 275 typed_sequence_examples[col] = typed_sequence\r\n--> 276 pa_table = pa.Table.from_pydict(typed_sequence_examples)\r\n 277 self.write_table(pa_table, writer_batch_size)\r\n 278 \r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/pyarrow\/table.pxi in pyarrow.lib.Table.from_pydict()\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/pyarrow\/array.pxi in pyarrow.lib.asarray()\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/pyarrow\/array.pxi in pyarrow.lib.array()\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/pyarrow\/array.pxi in pyarrow.lib._handle_arrow_array_protocol()\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/datasets\/arrow_writer.py in __arrow_array__(self, type)\r\n 95 out = pa.ExtensionArray.from_storage(type, pa.array(self.data, type.storage_dtype))\r\n 96 else:\r\n---> 97 out = pa.array(self.data, type=type)\r\n 98 if trying_type and out[0].as_py() != self.data[0]:\r\n 99 raise TypeError(\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/pyarrow\/array.pxi in pyarrow.lib.array()\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/pyarrow\/array.pxi in pyarrow.lib._sequence_to_array()\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/pyarrow\/error.pxi in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/pyarrow\/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowNotImplementedError: extension\r\n```","# Convert raw tensors to torch format\r\nStrangely, converting to torch tensors works perfectly on `raw_dataset`:\r\n```python\r\nraw_dataset.set_format('torch',columns=['image','label'])\r\n```\r\nTypes:\r\n```\r\nImage:\r\n 60000\r\n 28\r\n 28\r\n\r\nLabel:\r\n 60000\r\n\r\n```\r\n\r\nUsing this for transforms:\r\n```python\r\ndef prepare_features(examples):\r\n images = []\r\n labels = []\r\n for example_idx, example in enumerate(examples[\"image\"]):\r\n if transform is not None:\r\n images.append(transform(\r\n examples[\"image\"][example_idx].numpy()\r\n ))\r\n else:\r\n images.append(examples[\"image\"][example_idx].numpy())\r\n labels.append(examples[\"label\"][example_idx])\r\n output = {\"label\":labels, \"image\":images}\r\n return output\r\n```\r\n\r\nInside `prepare_train_features`:\r\n```\r\nImage:\r\n 10000\r\n 1\r\n 28\r\n 28\r\n\r\nLabel:\r\n 10000\r\n\r\n```\r\n\r\nAfter `map`:\r\n```\r\nImage:\r\n 60000\r\n 1\r\n 28\r\n 28\r\n\r\nLabel:\r\n 60000\r\n\r\n```\r\nDataLoader batch:\r\n\r\n```\r\nImage:\r\n 1\r\n 28\r\n 2\r\n 28\r\n\r\nLabel:\r\n 2\r\n\r\n```\r\n\r\n---\r\n\r\n## Using `torch` format:\r\n```\r\nImage:\r\n 60000\r\n 1\r\n 28\r\n 28\r\n\r\nLabel:\r\n 60000\r\n\r\n```\r\nDataLoader batches:\r\n\r\n```\r\nImage:\r\n 1\r\n 28\r\n 2\r\n 28\r\n\r\nLabel:\r\n 2\r\n\r\n```\r\n\r\n---\r\n## Using the features - `Array3D`:\r\n\r\n```\r\nImage:\r\n 10000\r\n 1\r\n 28\r\n 28\r\n\r\nLabel:\r\n 10000\r\n\r\n```\r\n\r\nAfter 
`map`:\r\n```\r\nImage:\r\n 60000\r\n 1\r\n 28\r\n 28\r\n\r\nLabel:\r\n 60000\r\n\r\n```\r\n\r\nAfter DataLoader `batch`:\r\n```\r\nImage:\r\n 2\r\n 1\r\n 28\r\n 28\r\n\r\nLabel:\r\n 2\r\n\r\n```\r\n\r\nThe last one works perfectly.\r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/29076344\/110491452-4cf09c00-8117-11eb-8a47-73bf3fc0c3dc.png)\r\n\r\nI wonder why this worked, and others didn't.\r\n\r\n\r\n\r\n\r\n\r\n\r\n","Concluding, the way it works right now is:\r\n\r\n1. Converting raw dataset to `torch` format.\r\n2. Use the transform and apply using `map`, ensure the returned values are tensors. \r\n3. When mapping, use `features` with `image` being `Array3D` type.","What the dataset returns depends on the feature type.\r\nFor a feature type that is Sequence(Sequence(Sequence(Value(\"uint8\")))), a dataset formatted as \"torch\" return lists of lists of tensors. This is because the lists lengths may vary.\r\nFor a feature type that is Array3D on the other hand it returns one tensor. This is because the size of the tensor is fixed and defined bu the Array3D type.","Okay, that makes sense.\r\nRaw images are list of Array2D, hence we get a single tensor when `set_format` is used. But, why should I need to convert the raw images to `torch` format when `map` does this internally?\r\n\r\nUsing `Array3D` did not work with `map` when raw images weren't `set_format`ted to torch type.","I understand that `map` needs to know what kind of output tensors are expected, and thus converting the raw dataset to `torch` format is necessary. Closing the issue since it is resolved."],"created_at":1615189091000,"updated_at":1615312693000,"closed_at":1615312693000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi\r\n\r\nI am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object.\r\n\r\nA snippet of what I am trying to do:\r\n```python\r\ndef prepare_features(examples):\r\n images = []\r\n labels = []\r\n for example_idx, example in enumerate(examples[\"image\"]):\r\n if transform is not None:\r\n images.append(transform(\r\n np.array(examples[\"image\"][example_idx], dtype=np.uint8)\r\n ))\r\n else:\r\n images.append(torch.tensor(np.array(examples[\"image\"][example_idx], dtype=np.uint8)))\r\n labels.append(torch.tensor(examples[\"label\"][example_idx]))\r\n output = {\"label\":labels, \"image\":images}\r\n return output\r\n\r\nraw_dataset = load_dataset('mnist')\r\ntrain_dataset = raw_dataset.map(prepare_features, batched=True, batch_size=10000)\r\ntrain_dataset.set_format(\"torch\",columns=[\"image\",\"label\"])\r\n```\r\n\r\nAfter this, I check the type of the following:\r\n```python\r\nprint(type(train_dataset[\"train\"][\"label\"]))\r\nprint(type(train_dataset[\"train\"][\"image\"][0]))\r\n```\r\nThis leads to the following output:\r\n\r\n```python\r\n\r\n\r\n```\r\nI use `torch.utils.DataLoader` for batches, the type of `batch[\"train\"][\"image\"]` is also ``.\r\n\r\nI don't understand why only the `label` is converted to a torch tensor, why does the image not get converted? How can I fix this issue?\r\n\r\nThanks,\r\nGunjan\r\n\r\nEDIT:\r\nI just checked the shapes, and the types, `batch[image]` is a actually a list of list of tensors. Shape is (1,28,2,28), where `batch_size` is 2. I don't understand why this is happening. 
Ideally it should be a tensor of shape (2,1,28,28).\r\n\r\nEDIT 2:\r\nInside `prepare_train_features`, the shape of `images[0]` is `torch.Size([1,28,28])`, the conversion is working. However, the output of the `map` is a list of list of list of list.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2005\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2004","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2004\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2004\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2004\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2004","id":824080760,"node_id":"MDExOlB1bGxSZXF1ZXN0NTg2MzcyODY1","number":2004,"title":"LaRoSeDa","user":{"login":"MihaelaGaman","id":6823177,"node_id":"MDQ6VXNlcjY4MjMxNzc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6823177?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/MihaelaGaman","html_url":"https:\/\/github.com\/MihaelaGaman","followers_url":"https:\/\/api.github.com\/users\/MihaelaGaman\/followers","following_url":"https:\/\/api.github.com\/users\/MihaelaGaman\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/MihaelaGaman\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/MihaelaGaman\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/MihaelaGaman\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/MihaelaGaman\/orgs","repos_url":"https:\/\/api.github.com\/users\/MihaelaGaman\/repos","events_url":"https:\/\/api.github.com\/users\/MihaelaGaman\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/MihaelaGaman\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq all the changes requested are implemented. 
Thank you for your time and feedback :)"],"created_at":1615165592000,"updated_at":1615977800000,"closed_at":1615977800000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2004","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2004","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2004.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2004.patch"},"body":"Add LaRoSeDa to huggingface datasets.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2004\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2003","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2003\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2003\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2003\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2003","id":824034678,"node_id":"MDU6SXNzdWU4MjQwMzQ2Nzg=","number":2003,"title":"Messages are being printed to the `stdout`","user":{"login":"mahnerak","id":1367529,"node_id":"MDQ6VXNlcjEzNjc1Mjk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1367529?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mahnerak","html_url":"https:\/\/github.com\/mahnerak","followers_url":"https:\/\/api.github.com\/users\/mahnerak\/followers","following_url":"https:\/\/api.github.com\/users\/mahnerak\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mahnerak\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mahnerak\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mahnerak\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mahnerak\/orgs","repos_url":"https:\/\/api.github.com\/users\/mahnerak\/repos","events_url":"https:\/\/api.github.com\/users\/mahnerak\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mahnerak\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This is expected to show this message to the user via stdout.\r\nThis way the users see it directly and can cancel the downloading if they want to.\r\nCould you elaborate why it would be better to have it in stderr instead of stdout ?","@lhoestq, sorry for the late reply\r\n\r\nI completely understand why you decided to output a message that is always shown. The only problem is that the message is printed to the `stdout`. For example, if the user runs `python run_glue.py > log_file`, it will redirect `stdout` to the file named `log_file`, and the message will not be shown to the user.\r\n\r\nInstead, we should print this message to `stderr`. 
Even in the case of `python run_glue.py > log_file` only `stdout` is being redirected and so the message is always shown."],"created_at":1615154974000,"updated_at":1615830467000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"In this code segment, we can see some messages are being printed to the `stdout`.\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/7e60bb509b595e8edc60a87f32b2bacfc065d607\/src\/datasets\/builder.py#L545-L554\r\nAccording to the comment, it is done intentionally, but I don't really understand why don't we log it with a higher level or print it directly to the `stderr`.\r\nIn my opinion, this kind of messages should never printed to the stdout. At least some configuration\/flag should make it possible to provide in order to explicitly prevent the package to contaminate the stdout.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2003\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2002","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2002\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2002\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2002\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2002","id":823955744,"node_id":"MDExOlB1bGxSZXF1ZXN0NTg2MjgwNzE3","number":2002,"title":"MOROCO","user":{"login":"MihaelaGaman","id":6823177,"node_id":"MDQ6VXNlcjY4MjMxNzc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6823177?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/MihaelaGaman","html_url":"https:\/\/github.com\/MihaelaGaman","followers_url":"https:\/\/api.github.com\/users\/MihaelaGaman\/followers","following_url":"https:\/\/api.github.com\/users\/MihaelaGaman\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/MihaelaGaman\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/MihaelaGaman\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/MihaelaGaman\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/MihaelaGaman\/orgs","repos_url":"https:\/\/api.github.com\/users\/MihaelaGaman\/repos","events_url":"https:\/\/api.github.com\/users\/MihaelaGaman\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/MihaelaGaman\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq Thank you for all the feedback. 
I've added the suggested changes in my last commit."],"created_at":1615134137000,"updated_at":1616147526000,"closed_at":1616147526000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/2002","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2002","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2002.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/2002.patch"},"body":"Add MOROCO to huggingface datasets.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2002\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2001","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2001\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2001\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2001\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2001","id":823946706,"node_id":"MDU6SXNzdWU4MjM5NDY3MDY=","number":2001,"title":"Empty evidence document (\"provenance\") in KILT ELI5 dataset","user":{"login":"donggyukimc","id":16605764,"node_id":"MDQ6VXNlcjE2NjA1NzY0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16605764?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/donggyukimc","html_url":"https:\/\/github.com\/donggyukimc","followers_url":"https:\/\/api.github.com\/users\/donggyukimc\/followers","following_url":"https:\/\/api.github.com\/users\/donggyukimc\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/donggyukimc\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/donggyukimc\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/donggyukimc\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/donggyukimc\/orgs","repos_url":"https:\/\/api.github.com\/users\/donggyukimc\/repos","events_url":"https:\/\/api.github.com\/users\/donggyukimc\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/donggyukimc\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1615131695000,"updated_at":1615960261000,"closed_at":1615960261000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"In the original KILT benchmark(https:\/\/github.com\/facebookresearch\/KILT), \r\n\r\nall samples has its evidence document (i.e. wikipedia page id) for prediction.\r\n\r\nFor example, a sample in ELI5 dataset has the format including provenance (=evidence document) like this\r\n\r\n`{\"id\": \"1kiwfx\", \"input\": \"In Trading Places (1983, Akroyd\/Murphy) how does the scheme at the end of the movie work? Why would buying a lot of OJ at a high price ruin the Duke Brothers?\", \"output\": [{\"answer\": \"I feel so old. People have been askinbg what happened at the end of this movie for what must be the last 15 years of my life. It never stops. Every year\/month\/fortnight, I see someone asking what happened, and someone explaining. Andf it will keep on happening, until I am 90yrs old, in a home, with nothing but the Internet and my bladder to keep me going. 
And there it will be: \\\"what happens at the end of Trading Places?\\\"\"}, {\"provenance\": [{\"wikipedia_id\": \"242855\", \"title\": \"Futures contract\", \"section\": \"Section::::Abstract.\", \"start_paragraph_id\": 1, \"start_character\": 14, \"end_paragraph_id\": 1, \"end_character\": 612, \"bleu_score\": 0.9232808519770748}]}], \"meta\": {\"partial_evidence\": [{\"wikipedia_id\": \"520990\", \"title\": \"Trading Places\", \"section\": \"Section::::Plot.\\n\", \"start_paragraph_id\": 7, \"end_paragraph_id\": 7, \"meta\": {\"evidence_span\": [\"On television, they learn that Clarence Beeks is transporting a secret USDA report on orange crop forecasts.\", \"On television, they learn that Clarence Beeks is transporting a secret USDA report on orange crop forecasts. Winthorpe and Valentine recall large payments made to Beeks by the Dukes and realize that the Dukes plan to obtain the report to corner the market on frozen orange juice.\", \"Winthorpe and Valentine recall large payments made to Beeks by the Dukes and realize that the Dukes plan to obtain the report to corner the market on frozen orange juice.\"]}}]}}`\r\n\r\nHowever, KILT ELI5 dataset from huggingface datasets library only contain empty list of provenance.\r\n\r\n`{'id': '1oy5tc', 'input': 'in football whats the point of wasting the first two plays with a rush - up the middle - not regular rush plays i get those', 'meta': {'left_context': '', 'mention': '', 'obj_surface': [], 'partial_evidence': [], 'right_context': '', 'sub_surface': [], 'subj_aliases': [], 'template_questions': []}, 'output': [{'answer': 'In most cases the O-Line is supposed to make a hole for the running back to go through. If you run too many plays to the outside\/throws the defense will catch on.\\n\\nAlso, 2 5 yard plays gets you a new set of downs.', 'meta': {'score': 2}, 'provenance': []}, {'answer': \"I you don't like those type of plays, watch CFL. We only get 3 downs so you can't afford to waste one. 
Lots more passing.\", 'meta': {'score': 2}, 'provenance': []}]}\r\n`\r\n\r\nshould i perform other procedure to obtain evidence documents?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2001\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2000","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2000\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2000\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2000\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2000","id":823899910,"node_id":"MDU6SXNzdWU4MjM4OTk5MTA=","number":2000,"title":"Windows Permission Error (most recent version of datasets)","user":{"login":"itsLuisa","id":73881148,"node_id":"MDQ6VXNlcjczODgxMTQ4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/73881148?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/itsLuisa","html_url":"https:\/\/github.com\/itsLuisa","followers_url":"https:\/\/api.github.com\/users\/itsLuisa\/followers","following_url":"https:\/\/api.github.com\/users\/itsLuisa\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/itsLuisa\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/itsLuisa\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/itsLuisa\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/itsLuisa\/orgs","repos_url":"https:\/\/api.github.com\/users\/itsLuisa\/repos","events_url":"https:\/\/api.github.com\/users\/itsLuisa\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/itsLuisa\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @itsLuisa !\r\n\r\nCould you give us more information about the error you're getting, please?\r\nA copy-paste of the Traceback would be nice to get a better understanding of what is wrong :) ","Hello @SBrandeis , this is it:\r\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 537, in incomplete_dir\r\n yield tmp_dir\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 578, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 656, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 982, in _prepare_split\r\n num_examples, num_bytes = writer.finalize()\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\arrow_writer.py\", line 297, in finalize\r\n self.write_on_file()\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\arrow_writer.py\", line 230, in write_on_file\r\n pa_array = pa.array(typed_sequence)\r\n File \"pyarrow\\array.pxi\", line 222, in pyarrow.lib.array\r\n File \"pyarrow\\array.pxi\", line 110, in 
pyarrow.lib._handle_arrow_array_protocol\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\arrow_writer.py\", line 97, in __arrow_array__\r\n out = pa.array(self.data, type=type)\r\n File \"pyarrow\\array.pxi\", line 305, in pyarrow.lib.array\r\n File \"pyarrow\\array.pxi\", line 39, in pyarrow.lib._sequence_to_array\r\n File \"pyarrow\\error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow\\error.pxi\", line 107, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowTypeError: Expected bytes, got a 'list' object\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"C:\/Users\/Luisa\/Documents\/Uni\/WS 2020,21\/Neural Networks\/Final_Project\/NN_Project\/data_loading.py\", line 122, in \r\n main()\r\n File \"C:\/Users\/Luisa\/Documents\/Uni\/WS 2020,21\/Neural Networks\/Final_Project\/NN_Project\/data_loading.py\", line 111, in main\r\n dataset = datasets.load_dataset(\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\load.py\", line 740, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 586, in download_and_prepare\r\n self._save_info()\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\contextlib.py\", line 131, in __exit__\r\n self.gen.throw(type, value, traceback)\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 543, in incomplete_dir\r\n shutil.rmtree(tmp_dir)\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\shutil.py\", line 740, in rmtree\r\n return _rmtree_unsafe(path, onerror)\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\shutil.py\", line 618, in _rmtree_unsafe\r\n onerror(os.unlink, fullname, sys.exc_info())\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\shutil.py\", line 616, in _rmtree_unsafe\r\n os.unlink(fullname)\r\nPermissionError: [WinError 32] Der Prozess kann nicht auf die Datei zugreifen, da sie von einem anderen Prozess verwendet wird: 'C:\\\\Users\\\\Luisa\\\\.cache\\\\huggingface\\\\datasets\\\\sample\\\\default-20ee7d51a6a9454f\\\\0.0.0\\\\5fc4c3a355ea77ab446bd31fca5082437600b8364d29b2b95264048bd1f398b1.incomplete\\\\sample-train.arrow'\r\n\r\nProcess finished with exit code 1\r\n```","Hi @itsLuisa, thanks for sharing the Traceback.\r\n\r\nYou are defining the \"id\" field as a `string` feature:\r\n```python\r\nclass Sample(datasets.GeneratorBasedBuilder):\r\n ...\r\n\r\n def _info(self):\r\n return datasets.DatasetInfo(\r\n features=datasets.Features(\r\n {\r\n \"id\": datasets.Value(\"string\"),\r\n # ^^ here\r\n \"tokens\": datasets.Sequence(datasets.Value(\"string\")),\r\n \"pos_tags\": datasets.Sequence(datasets.features.ClassLabel(names=[...])),\r\n[...]\r\n```\r\n\r\nBut in the `_generate_examples`, the \"id\" field is a list:\r\n```python\r\nids = list()\r\n```\r\n\r\nChanging:\r\n```python\r\n\"id\": datasets.Value(\"string\"),\r\n```\r\nInto:\r\n```python\r\n\"id\": datasets.Sequence(datasets.Value(\"string\")),\r\n```\r\n\r\nShould fix your issue.\r\n\r\nLet me know if this helps!","It seems to be working now, thanks a lot for the help, @SBrandeis !","Glad to hear it!\r\nI'm closing the 
issue"],"created_at":1615118128000,"updated_at":1615293777000,"closed_at":1615293777000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi everyone,\r\nCan anyone help me with why the dataset loading script below raises a Windows Permission Error? I stuck quite closely to https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/conll2003\/conll2003.py , only I want to load the data from three local three-column tsv-files (id\\ttokens\\tpos_tags\\n). I am using the most recent version of datasets. Thank you in advance!\r\nLuisa\r\n\r\nMy script:\r\n```\r\nimport datasets\r\nimport csv\r\n\r\nlogger = datasets.logging.get_logger(__name__)\r\n\r\n\r\nclass SampleConfig(datasets.BuilderConfig):\r\n\r\n def __init__(self, **kwargs):\r\n super(SampleConfig, self).__init__(**kwargs)\r\n\r\n\r\nclass Sample(datasets.GeneratorBasedBuilder):\r\n BUILDER_CONFIGS = [\r\n SampleConfig(name=\"conll2003\", version=datasets.Version(\"1.0.0\"), description=\"Conll2003 dataset\"),\r\n ]\r\n\r\n def _info(self):\r\n return datasets.DatasetInfo(\r\n description=\"Dataset with words and their POS-Tags\",\r\n features=datasets.Features(\r\n {\r\n \"id\": datasets.Value(\"string\"),\r\n \"tokens\": datasets.Sequence(datasets.Value(\"string\")),\r\n \"pos_tags\": datasets.Sequence(\r\n datasets.features.ClassLabel(\r\n names=[\r\n \"''\",\r\n \",\",\r\n \"-LRB-\",\r\n \"-RRB-\",\r\n \".\",\r\n \":\",\r\n \"CC\",\r\n \"CD\",\r\n \"DT\",\r\n \"EX\",\r\n \"FW\",\r\n \"HYPH\",\r\n \"IN\",\r\n \"JJ\",\r\n \"JJR\",\r\n \"JJS\",\r\n \"MD\",\r\n \"NN\",\r\n \"NNP\",\r\n \"NNPS\",\r\n \"NNS\",\r\n \"PDT\",\r\n \"POS\",\r\n \"PRP\",\r\n \"PRP$\",\r\n \"RB\",\r\n \"RBR\",\r\n \"RBS\",\r\n \"RP\",\r\n \"TO\",\r\n \"UH\",\r\n \"VB\",\r\n \"VBD\",\r\n \"VBG\",\r\n \"VBN\",\r\n \"VBP\",\r\n \"VBZ\",\r\n \"WDT\",\r\n \"WP\",\r\n \"WRB\",\r\n \"``\"\r\n ]\r\n )\r\n ),\r\n }\r\n ),\r\n supervised_keys=None,\r\n homepage=\"https:\/\/catalog.ldc.upenn.edu\/LDC2011T03\",\r\n citation=\"Weischedel, Ralph, et al. OntoNotes Release 4.0 LDC2011T03. Web Download. 
Philadelphia: Linguistic Data Consortium, 2011.\",\r\n )\r\n\r\n def _split_generators(self, dl_manager):\r\n loaded_files = dl_manager.download_and_extract(self.config.data_files)\r\n return [\r\n datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={\"filepath\": loaded_files[\"train\"]}),\r\n datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={\"filepath\": loaded_files[\"test\"]}),\r\n datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={\"filepath\": loaded_files[\"val\"]})\r\n ]\r\n\r\n def _generate_examples(self, filepath):\r\n logger.info(\"generating examples from = %s\", filepath)\r\n with open(filepath, encoding=\"cp1252\") as f:\r\n data = csv.reader(f, delimiter=\"\\t\")\r\n ids = list()\r\n tokens = list()\r\n pos_tags = list()\r\n for id_, line in enumerate(data):\r\n #print(line)\r\n if len(line) == 1:\r\n if tokens:\r\n yield id_, {\"id\": ids, \"tokens\": tokens, \"pos_tags\": pos_tags}\r\n ids = list()\r\n tokens = list()\r\n pos_tags = list()\r\n else:\r\n ids.append(line[0])\r\n tokens.append(line[1])\r\n pos_tags.append(line[2])\r\n # last example\r\n yield id_, {\"id\": ids, \"tokens\": tokens, \"pos_tags\": pos_tags}\r\n\r\n\r\ndef main():\r\n dataset = datasets.load_dataset(\r\n \"data_loading.py\", data_files={\r\n \"train\": \"train.tsv\",\r\n \"test\": \"test.tsv\",\r\n \"val\": \"val.tsv\"\r\n }\r\n )\r\n\r\n #print(dataset)\r\n\r\nif __name__==\"__main__\":\r\n main()\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2000\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1999","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1999\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1999\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1999\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1999","id":823753591,"node_id":"MDExOlB1bGxSZXF1ZXN0NTg2MTM5ODMy","number":1999,"title":"Add FashionMNIST dataset","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq,\r\n\r\nI have added the changes from the 
review."],"created_at":1615066617000,"updated_at":1615283531000,"closed_at":1615283531000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1999","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1999","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1999.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1999.patch"},"body":"This PR adds [FashionMNIST](https:\/\/github.com\/zalandoresearch\/fashion-mnist) dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1999\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1998","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1998\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1998\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1998\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1998","id":823723960,"node_id":"MDExOlB1bGxSZXF1ZXN0NTg2MTE4NTQ4","number":1998,"title":"Add -DOCSTART- note to dataset card of conll-like datasets","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Nice catch! 
Yes I didn't check the actual data, instead I was just looking for the `if line.startswith(\"-DOCSTART-\")` pattern."],"created_at":1615057709000,"updated_at":1615429207000,"closed_at":1615429207000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1998","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1998","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1998.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1998.patch"},"body":"Closes #1983","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1998\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1997","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1997\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1997\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1997\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1997","id":823679465,"node_id":"MDU6SXNzdWU4MjM2Nzk0NjU=","number":1997,"title":"from datasets import MoleculeDataset, GEOMDataset","user":{"login":"futianfan","id":5087210,"node_id":"MDQ6VXNlcjUwODcyMTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5087210?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/futianfan","html_url":"https:\/\/github.com\/futianfan","followers_url":"https:\/\/api.github.com\/users\/futianfan\/followers","following_url":"https:\/\/api.github.com\/users\/futianfan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/futianfan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/futianfan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/futianfan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/futianfan\/orgs","repos_url":"https:\/\/api.github.com\/users\/futianfan\/repos","events_url":"https:\/\/api.github.com\/users\/futianfan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/futianfan\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1615045819000,"updated_at":1615047206000,"closed_at":1615047206000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I met the ImportError: cannot import name 'MoleculeDataset' from 'datasets'. Have anyone met the similar issues? 
Thanks!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1997\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1996","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1996\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1996\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1996\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1996","id":823573410,"node_id":"MDU6SXNzdWU4MjM1NzM0MTA=","number":1996,"title":"Error when exploring `arabic_speech_corpus`","user":{"login":"elgeish","id":6879673,"node_id":"MDQ6VXNlcjY4Nzk2NzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6879673?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/elgeish","html_url":"https:\/\/github.com\/elgeish","followers_url":"https:\/\/api.github.com\/users\/elgeish\/followers","following_url":"https:\/\/api.github.com\/users\/elgeish\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/elgeish\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/elgeish\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/elgeish\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/elgeish\/orgs","repos_url":"https:\/\/api.github.com\/users\/elgeish\/repos","events_url":"https:\/\/api.github.com\/users\/elgeish\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/elgeish\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"},{"id":2107841032,"node_id":"MDU6TGFiZWwyMTA3ODQxMDMy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/nlp-viewer","name":"nlp-viewer","color":"94203D","default":false,"description":""},{"id":2725241052,"node_id":"MDU6TGFiZWwyNzI1MjQxMDUy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/speech","name":"speech","color":"d93f0b","default":false,"description":""}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting! 
We'll fix that as soon as possible","Actually soundfile is not a dependency of this dataset.\r\nThe error comes from a bug that was fixed in this commit: https:\/\/github.com\/huggingface\/datasets\/pull\/1767\/commits\/c304e63629f4453367de2fd42883a78768055532\r\nBasically the library used to consider the `import soundfile` in the docstring as a dependency, while it's just here as a code example.\r\n\r\nUpdating the viewer to the latest version of `datasets` should fix this issue\r\n"],"created_at":1615010120000,"updated_at":1615288345000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Navigate to https:\/\/huggingface.co\/datasets\/viewer\/?dataset=arabic_speech_corpus\r\n\r\nError:\r\n```\r\nImportError: To be able to use this dataset, you need to install the following dependencies['soundfile'] using 'pip install soundfile' for instance'\r\nTraceback:\r\nFile \"\/home\/sasha\/.local\/share\/virtualenvs\/lib-ogGKnCK_\/lib\/python3.7\/site-packages\/streamlit\/script_runner.py\", line 332, in _run_script\r\n exec(code, module.__dict__)\r\nFile \"\/home\/sasha\/nlp-viewer\/run.py\", line 233, in \r\n configs = get_confs(option)\r\nFile \"\/home\/sasha\/.local\/share\/virtualenvs\/lib-ogGKnCK_\/lib\/python3.7\/site-packages\/streamlit\/caching.py\", line 604, in wrapped_func\r\n return get_or_create_cached_value()\r\nFile \"\/home\/sasha\/.local\/share\/virtualenvs\/lib-ogGKnCK_\/lib\/python3.7\/site-packages\/streamlit\/caching.py\", line 588, in get_or_create_cached_value\r\n return_value = func(*args, **kwargs)\r\nFile \"\/home\/sasha\/nlp-viewer\/run.py\", line 145, in get_confs\r\n module_path = nlp.load.prepare_module(path, dataset=True\r\nFile \"\/home\/sasha\/.local\/share\/virtualenvs\/lib-ogGKnCK_\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 342, in prepare_module\r\n f\"To be able to use this {module_type}, you need to install the following dependencies\"\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1996\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1995","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1995\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1995\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1995\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1995","id":822878431,"node_id":"MDExOlB1bGxSZXF1ZXN0NTg1NDI5NTg0","number":1995,"title":"[Timit_asr] Make sure not only the first sample is used 
","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["cc @lhoestq @vrindaprabhu","Failing `run (push)` is unrelated -> merging","Thanks for fixing this, it was affecting my runs for https:\/\/github.com\/huggingface\/transformers\/pull\/10581\/","I am seeing this very late! Sorry for the blunder everyone! :("],"created_at":1614933771000,"updated_at":1625034353000,"closed_at":1614934739000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1995","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1995","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1995.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1995.patch"},"body":"When playing around with timit I noticed that only the first sample is used for all indices. 
I corrected this typo so that the dataset is correctly loaded.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1995\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1994","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1994\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1994\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1994\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1994","id":822871238,"node_id":"MDU6SXNzdWU4MjI4NzEyMzg=","number":1994,"title":"not being able to get wikipedia es language","user":{"login":"dorost1234","id":79165106,"node_id":"MDQ6VXNlcjc5MTY1MTA2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/79165106?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dorost1234","html_url":"https:\/\/github.com\/dorost1234","followers_url":"https:\/\/api.github.com\/users\/dorost1234\/followers","following_url":"https:\/\/api.github.com\/users\/dorost1234\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dorost1234\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dorost1234\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dorost1234\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dorost1234\/orgs","repos_url":"https:\/\/api.github.com\/users\/dorost1234\/repos","events_url":"https:\/\/api.github.com\/users\/dorost1234\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dorost1234\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq I really appreciate if you could help me providiing processed datasets, I do not really have access to enough resources to run the apache-beam and need to run the codes on these datasets. Only en\/de\/fr currently works, but I need all the languages more or less. thanks ","Hi @dorost1234, I think I can help you a little. I\u2019ve processed some Wikipedia datasets (Spanish inclusive) using the HF\/datasets library during recent research.\r\n\r\n@lhoestq Could you help me to upload these preprocessed datasets to Huggingface's repositories? To be more precise, I've built datasets from the following languages using the 20201201 dumps: Spanish, Portuguese, Russian, French, Japanese, Chinese, and Turkish. Process these datasets have high costs that most of the community can't afford. I think these preprocessed datasets I have could be helpful for someone without access to high-resource machines to process Wikipedia's dumps like @dorost1234\r\n\r\n","Thank you so much @jonatasgrosman , I greatly appreciate your help with them. \r\nYes, I unfortunately does not have access to a good resource and need it for my\r\nresearch. I greatly appreciate @lhoestq your help with uploading the processed datasets in huggingface datasets. This would be really helpful for some users like me with not access to high-memory GPU resources.\r\n\r\nthank you both so much again.\r\n\r\nOn Sat, Mar 6, 2021 at 12:55 AM Jonatas Grosman \r\nwrote:\r\n\r\n> Hi @dorost1234 , I think I can help you a\r\n> little. 
I\u2019ve processed some Wikipedia datasets (Spanish inclusive) using\r\n> the HF\/datasets library during recent research.\r\n>\r\n> @lhoestq Could you help me to upload these\r\n> preprocessed datasets to Huggingface's repositories? To be more precise,\r\n> I've built datasets from the following languages using the 20201201 dumps:\r\n> Spanish, Portuguese, Russian, French, Japanese, Chinese, and Turkish.\r\n> Process these datasets have high costs that most of the community can't\r\n> afford. I think these preprocessed datasets I have could be helpful for\r\n> someone without access to high-resource machines to process Wikipedia's\r\n> dumps like @dorost1234 \r\n>\r\n> \u2014\r\n> You are receiving this because you were mentioned.\r\n> Reply to this email directly, view it on GitHub\r\n> ,\r\n> or unsubscribe\r\n> \r\n> .\r\n>\r\n","Hi @dorost1234, so sorry, but looking at my files here, I figure out that I've preprocessed files using the HF\/datasets for all the languages previously listed by me (Portuguese, Russian, French, Japanese, Chinese, and Turkish) except the Spanish (on my tests I've used the [wikicorpus](https:\/\/www.cs.upc.edu\/~nlp\/wikicorpus\/) instead).\r\n\r\nOnly with the Spanish Wikipedia's dump, I had the same `KeyError: '000nbsp'` problem already reported here https:\/\/github.com\/huggingface\/datasets\/issues\/577\r\n\r\nSo nowadays, even with access to a high resource machine, you couldn't be able to get Wikipedia's Spanish data using the HF\/datasets :(\r\n\r\n\r\n\r\n\r\n","Thanks a lot for the information and help. This would be great to have\nthese datasets.\n@lhoestq Do you know a way I could get\nsmaller amount of these data like 1 GBtype of each language to deal with\ncomputatioanl requirements? thanks\n\nOn Sat, Mar 6, 2021 at 5:36 PM Jonatas Grosman \nwrote:\n\n> Hi @dorost1234 , so sorry, but looking at\n> my files here, I figure out that I've preprocessed files using the\n> HF\/datasets for all the languages previously listed by me (Portuguese,\n> Russian, French, Japanese, Chinese, and Turkish) except the Spanish (on my\n> tests I've used the wikicorpus \n> instead).\n>\n> Only with the Spanish Wikipedia's dump, I had the same KeyError: '000nbsp'\n> problem already reported here #577\n> \n>\n> So nowadays, even with access to a high resource machine, you couldn't be\n> able to get Wikipedia's Spanish data using the HF\/datasets :(\n>\n> \u2014\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> ,\n> or unsubscribe\n> \n> .\n>\n","Hi ! As mentioned above the Spanish configuration have parsing issues from `mwparserfromhell`. I haven't tested with the latest `mwparserfromhell` >=0.6 though. Which version of `mwparserfromhell` are you using ?\r\n\r\n> @lhoestq Could you help me to upload these preprocessed datasets to Huggingface's repositories? To be more precise, I've built datasets from the following languages using the 20201201 dumps: Spanish, Portuguese, Russian, French, Japanese, Chinese, and Turkish. Process these datasets have high costs that most of the community can't afford. I think these preprocessed datasets I have could be helpful for someone without access to high-resource machines to process Wikipedia's dumps like @dorost1234\r\n\r\nThat would be awesome ! 
Feel free to ping me on slack so we can put the processed wikipedia files on google storage with the other ones we've already preprocessed.\r\n\r\n> Do you know a way I could get smaller amount of these data like 1 GBtype of each language to deal with computatioanl requirements? thanks\r\n\r\nI'd suggest to copy the [wikipedia.py](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/wikipedia\/wikipedia.py) to a new script `custom_wikipedia.py` and modify it to only download and process only a subset of the raw data files.\r\nYou can for example replace [this line](https:\/\/github.com\/huggingface\/datasets\/blob\/64e59fc45ca2134218b3e42e83fddddbe840ff74\/datasets\/wikipedia\/wikipedia.py#L446) by:\r\n```python\r\n if total_bytes >= (1 << 30): # stop if the total amount of data is >= 1GB\r\n break\r\n else:\r\n xml_urls.append(_base_url(lang) + fname)\r\n```\r\n\r\nThen you can load your custom wikipedia dataset with\r\n```python\r\nload_dataset(\"path\/to\/my\/custom_wikipedia.py\", f\"{date}.{language}\")\r\n```","Hi @lhoestq!\r\n\r\n> Hi ! As mentioned above the Spanish configuration have parsing issues from mwparserfromhell. I haven't tested with the latest mwparserfromhell >=0.6 though. Which version of mwparserfromhell are you using ?\r\n\r\nI'm using the latest mwparserfromhell version (0.6)\r\n\r\n> That would be awesome ! Feel free to ping me on slack so we can put the processed wikipedia files on google storage with the other ones we've already preprocessed.\r\n\r\nI'll ping you there \ud83d\udc4d ","Thank you so much @jonatasgrosman and @lhoestq this would be a great help. I am really thankful to you both and to wonderful Huggingface dataset library allowing us to train models at scale."],"created_at":1614933108000,"updated_at":1615495581000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI am trying to run a code with wikipedia of config 20200501.es, getting:\r\n\r\nTraceback (most recent call last):\r\n File \"run_mlm_t5.py\", line 608, in \r\n main()\r\n File \"run_mlm_t5.py\", line 359, in main\r\n datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)\r\n File \"\/dara\/libs\/anaconda3\/envs\/success432\/lib\/python3.7\/site-packages\/datasets-1.2.1-py3.7.egg\/datasets\/load.py\", line 612, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"\/dara\/libs\/anaconda3\/envs\/success432\/lib\/python3.7\/site-packages\/datasets-1.2.1-py3.7.egg\/datasets\/builder.py\", line 527, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/dara\/libs\/anaconda3\/envs\/success432\/lib\/python3.7\/site-packages\/datasets-1.2.1-py3.7.egg\/datasets\/builder.py\", line 1050, in _download_and_prepare\r\n \"\\n\\t`{}`\".format(usage_example)\r\ndatasets.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https:\/\/beam.apache.org\/documentation\/runners\/capability-matrix\/\r\nIf you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory). 
\r\nExample of usage: \r\n\t`load_dataset('wikipedia', '20200501.es', beam_runner='DirectRunner')`\r\n\r\nthanks @lhoestq for any suggestion\/help ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1994\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1993","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1993\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1993\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1993\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1993","id":822758387,"node_id":"MDU6SXNzdWU4MjI3NTgzODc=","number":1993,"title":"How to load a dataset with load_from disk and save it again after doing transformations without changing the original? ","user":{"login":"shamanez","id":16892570,"node_id":"MDQ6VXNlcjE2ODkyNTcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16892570?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/shamanez","html_url":"https:\/\/github.com\/shamanez","followers_url":"https:\/\/api.github.com\/users\/shamanez\/followers","following_url":"https:\/\/api.github.com\/users\/shamanez\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/shamanez\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/shamanez\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/shamanez\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/shamanez\/orgs","repos_url":"https:\/\/api.github.com\/users\/shamanez\/repos","events_url":"https:\/\/api.github.com\/users\/shamanez\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/shamanez\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! That looks like a bug, can you provide some code so that we can reproduce ?\r\nIt's not supposed to update the original dataset","Hi, I experimented with RAG. \r\n\r\nActually, you can run the [use_own_knowldge_dataset.py](https:\/\/github.com\/shamanez\/transformers\/blob\/rag-end-to-end-retrieval\/examples\/research_projects\/rag\/use_own_knowledge_dataset.py#L80). In the 80 you can save the dataset object to the disk with save_to_disk. Then in order to compute the embeddings in this use **load_from_disk**. \r\n\r\nThen finally save it. You can see the original dataset object (CSV after splitting also will be changed)\r\n\r\nOne more thing- when I save the dataset object with **save_to_disk** it name the arrow file with cache.... rather than using dataset. arrow. Can you add a variable that we can feed a name to save_to_disk function?","@lhoestq I also found that cache in tmp directory gets updated after transformations. This is really problematic when using datasets interactively. Let's say we use the shards function to a dataset loaded with csv, atm when we do transformations to shards and combine them it updates the original csv cache. ","I plan to update the save_to_disk method in #2025 so I can make sure the new save_to_disk doesn't corrupt your cache files.\r\nBut from your last message it looks like save_to_disk isn't the root cause right ?","ok, one more thing. When we use save_to_disk there are two files other than .arrow. 
dataset_info.json and state.json. Sometimes most of the fields in the dataset_infor.json are null, especially when saving dataset objects. Anyways I think load_from_disk uses the arrow files mentioned in state.json right? ","> Anyways I think load_from_disk uses the arrow files mentioned in state.json right?\r\n\r\nYes exactly","Perfect. For now, I am loading the dataset from CSV in my interactive process and will wait until you make the PR!"],"created_at":1614921950000,"updated_at":1616385950000,"closed_at":1616385950000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I am using the latest datasets library. In my work, I first use **load_from_disk** to load a data set that contains 3.8Gb information. Then during my training process, I update that dataset object and add new elements and save it in a different place. \r\n\r\nWhen I save the dataset with **save_to_disk**, the original dataset which is already in the disk also gets updated. I do not want to update it. How to prevent from this?\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1993\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1992","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1992\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1992\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1992\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1992","id":822672238,"node_id":"MDU6SXNzdWU4MjI2NzIyMzg=","number":1992,"title":"`datasets.map` multi processing much slower than single processing ","user":{"login":"hwijeen","id":29157715,"node_id":"MDQ6VXNlcjI5MTU3NzE1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29157715?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hwijeen","html_url":"https:\/\/github.com\/hwijeen","followers_url":"https:\/\/api.github.com\/users\/hwijeen\/followers","following_url":"https:\/\/api.github.com\/users\/hwijeen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hwijeen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hwijeen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hwijeen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hwijeen\/orgs","repos_url":"https:\/\/api.github.com\/users\/hwijeen\/repos","events_url":"https:\/\/api.github.com\/users\/hwijeen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hwijeen\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @hwijeen, you might want to look at issues #1796 and #1949. I think it could be something related to the I\/O operations being performed.","I see that many people are experiencing the same issue. Is this problem considered an \"official\" bug that is worth a closer look? @lhoestq","Yes this is an official bug. On my side I haven't managed to reproduce it but @theo-m has. 
We'll investigate this !","Thank you for the reply! I would be happy to follow the discussions related to the issue.\r\nIf you do not mind, could you also give a little more explanation on my p.s.2? I am having a hard time figuring out why the single processing `map` uses all of my cores.\r\n@lhoestq @theo-m ","Regarding your ps2: It depends what function you pass to `map`.\r\nFor example, fast tokenizers from `transformers` in Rust tokenize texts and parallelize the tokenization over all the cores.","I am still experiencing this issue with datasets 1.9.0..\r\nHas there been a further investigation? \r\n\"image\"\r\n"],"created_at":1614910202000,"updated_at":1626689109000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi, thank you for the great library.\r\n\r\nI've been using datasets to pretrain language models, and it often involves datasets as large as ~70G.\r\nMy data preparation step is roughly two steps: `load_dataset` which splits corpora into a table of sentences, and `map` converts a sentence into a list of integers, using a tokenizer.\r\n\r\nI noticed that `map` function with `num_proc=mp.cpu_count() \/\/2` takes more than 20 hours to finish the job where as `num_proc=1` gets the job done in about 5 hours. The machine I used has 40 cores, with 126G of RAM. There were no other jobs when `map` function was running.\r\n\r\nWhat could be the reason? I would be happy to provide information necessary to spot the reason.\r\n\r\np.s. I was experiencing the imbalance issue mentioned in [here](https:\/\/github.com\/huggingface\/datasets\/issues\/610#issuecomment-705177036) when I was using multi processing.\r\np.s.2 When I run `map` with `num_proc=1`, I see one tqdm bar but all the cores are working. When `num_proc=20`, only 20 cores work. 
\r\n![Screen Shot 2021-03-05 at 11 04 59](https:\/\/user-images.githubusercontent.com\/29157715\/110056895-ef6cf000-7da2-11eb-8307-6698e9fb1ad4.png)\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1992\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1991","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1991\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1991\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1991\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1991","id":822554473,"node_id":"MDExOlB1bGxSZXF1ZXN0NTg1MTYwNDkx","number":1991,"title":"Adding the conllpp dataset","user":{"login":"ZihanWangKi","id":21319243,"node_id":"MDQ6VXNlcjIxMzE5MjQz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/21319243?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ZihanWangKi","html_url":"https:\/\/github.com\/ZihanWangKi","followers_url":"https:\/\/api.github.com\/users\/ZihanWangKi\/followers","following_url":"https:\/\/api.github.com\/users\/ZihanWangKi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ZihanWangKi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ZihanWangKi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ZihanWangKi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ZihanWangKi\/orgs","repos_url":"https:\/\/api.github.com\/users\/ZihanWangKi\/repos","events_url":"https:\/\/api.github.com\/users\/ZihanWangKi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ZihanWangKi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for the reviews! 
A note that I have addressed the comments, and waiting for a further review."],"created_at":1614896383000,"updated_at":1615977459000,"closed_at":1615977459000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1991","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1991","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1991.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1991.patch"},"body":"Adding the conllpp dataset, is a revision from https:\/\/github.com\/huggingface\/datasets\/pull\/1910.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1991\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1990","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1990\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1990\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1990\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1990","id":822384502,"node_id":"MDU6SXNzdWU4MjIzODQ1MDI=","number":1990,"title":"OSError: Memory mapping file failed: Cannot allocate memory","user":{"login":"dorost1234","id":79165106,"node_id":"MDQ6VXNlcjc5MTY1MTA2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/79165106?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dorost1234","html_url":"https:\/\/github.com\/dorost1234","followers_url":"https:\/\/api.github.com\/users\/dorost1234\/followers","following_url":"https:\/\/api.github.com\/users\/dorost1234\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dorost1234\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dorost1234\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dorost1234\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dorost1234\/orgs","repos_url":"https:\/\/api.github.com\/users\/dorost1234\/repos","events_url":"https:\/\/api.github.com\/users\/dorost1234\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dorost1234\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Do you think this is trying to bring the dataset into memory and if I can avoid it to save on memory so it only brings a batch into memory? @lhoestq thank you","It's not trying to bring the dataset into memory.\r\n\r\nActually, it's trying to memory map the dataset file, which is different. It allows to load large dataset files without filling up memory.\r\n\r\nWhat dataset did you use to get this error ?\r\nOn what OS are you running ? What's your python and pyarrow version ?","Dear @lhoestq \r\nthank you so much for coming back to me. 
Please find info below:\r\n1) Dataset name: I used wikipedia with config 20200501.en\r\n2) I got these pyarrow in my environment:\r\npyarrow 2.0.0 \r\npyarrow 3.0.0 \r\n\r\n3) python version 3.7.10\r\n4) OS version \r\n\r\nlsb_release -a\r\nNo LSB modules are available.\r\nDistributor ID:\tDebian\r\nDescription:\tDebian GNU\/Linux 10 (buster)\r\nRelease:\t10\r\nCodename:\tbuster\r\n\r\n\r\nIs there a way I could solve the memory issue and if I could run this model, I am using GeForce GTX 108, \r\nthanks \r\n","I noticed that the error happens when loading the validation dataset.\r\nWhat value of `data_args.validation_split_percentage` did you use ?","Dear @lhoestq \r\n\r\nthank you very much for the very sharp observation, indeed, this happens there, I use the default value of 5, I basically plan to subsample a part of the large dataset and choose it as validation set. Do you think this is bringing the data into memory during subsampling? Is there a way I could avoid this?\r\n\r\nThank you very much for the great help.\r\n\r\n\r\nOn Mon, Mar 8, 2021 at 11:28 AM Quentin Lhoest ***@***.***>\r\nwrote:\r\n\r\n> I noticed that the error happens when loading the validation dataset.\r\n> What value of data_args.validation_split_percentage did you use ?\r\n>\r\n> \u2014\r\n> You are receiving this because you authored the thread.\r\n> Reply to this email directly, view it on GitHub\r\n> ,\r\n> or unsubscribe\r\n> \r\n> .\r\n>\r\n","Methods like `dataset.shard`, `dataset.train_test_split`, `dataset.select` etc. don't bring the dataset in memory. \r\nThe only time when samples are brought to memory is when you access elements via `dataset[0]`, `dataset[:10]`, `dataset[\"my_column_names\"]`.\r\n\r\nBut it's possible that trying to use those methods to build your validation set doesn't fix the issue since, if I understand correctly, the error happens when when the dataset arrow file is opened (just before the 5% percentage is applied).\r\n\r\nDid you try to reproduce this issue in a google colab ? This would be super helpful to investigate why this happened.\r\n\r\nAlso maybe you can try clearing your cache at `~\/.cache\/huggingface\/datasets` and try again. If the arrow file was corrupted somehow, removing it and rebuilding may fix the issue."],"created_at":1614882118000,"updated_at":1628100265000,"closed_at":1628100265000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\nI am trying to run a code with a wikipedia dataset, here is the command to reproduce the error. You can find the codes for run_mlm.py in huggingface repo here: https:\/\/github.com\/huggingface\/transformers\/blob\/v4.3.2\/examples\/language-modeling\/run_mlm.py \r\n```\r\npython run_mlm.py --model_name_or_path bert-base-multilingual-cased --dataset_name wikipedia --dataset_config_name 20200501.en --do_train --do_eval --output_dir \/dara\/test --max_seq_length 128\r\n```\r\n\r\nI am using transformer version: 4.3.2 \r\n\r\nBut I got memory erorr using this dataset, is there a way I could save on memory with dataset library with wikipedia dataset?\r\nSpecially I need to train a model with multiple of wikipedia datasets concatenated. 
thank you very much @lhoestq for your help and suggestions:\r\n\r\n```\r\n File \"run_mlm.py\", line 441, in \r\n main()\r\n File \"run_mlm.py\", line 233, in main\r\n split=f\"train[{data_args.validation_split_percentage}%:]\",\r\n File \"\/dara\/libs\/anaconda3\/envs\/code\/lib\/python3.7\/site-packages\/datasets-1.3.0-py3.7.egg\/datasets\/load.py\", line 750, in load_dataset\r\n ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)\r\n File \"\/dara\/libs\/anaconda3\/envs\/code\/lib\/python3.7\/site-packages\/datasets-1.3.0-py3.7.egg\/datasets\/builder.py\", line 740, in as_dataset\r\n map_tuple=True,\r\n File \"\/dara\/libs\/anaconda3\/envs\/code\/lib\/python3.7\/site-packages\/datasets-1.3.0-py3.7.egg\/datasets\/utils\/py_utils.py\", line 225, in map_nested\r\n return function(data_struct)\r\n File \"\/dara\/libs\/anaconda3\/envs\/code\/lib\/python3.7\/site-packages\/datasets-1.3.0-py3.7.egg\/datasets\/builder.py\", line 757, in _build_single_dataset\r\n in_memory=in_memory,\r\n File \"\/dara\/libs\/anaconda3\/envs\/code\/lib\/python3.7\/site-packages\/datasets-1.3.0-py3.7.egg\/datasets\/builder.py\", line 829, in _as_dataset\r\n in_memory=in_memory,\r\n File \"\/dara\/libs\/anaconda3\/envs\/code\/lib\/python3.7\/site-packages\/datasets-1.3.0-py3.7.egg\/datasets\/arrow_reader.py\", line 215, in read\r\n return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)\r\n File \"\/dara\/libs\/anaconda3\/envs\/code\/lib\/python3.7\/site-packages\/datasets-1.3.0-py3.7.egg\/datasets\/arrow_reader.py\", line 236, in read_files\r\n pa_table = self._read_files(files, in_memory=in_memory)\r\n File \"\/dara\/libs\/anaconda3\/envs\/code\/lib\/python3.7\/site-packages\/datasets-1.3.0-py3.7.egg\/datasets\/arrow_reader.py\", line 171, in _read_files\r\n pa_table: pa.Table = self._get_dataset_from_filename(f_dict, in_memory=in_memory)\r\n File \"\/dara\/libs\/anaconda3\/envs\/code\/lib\/python3.7\/site-packages\/datasets-1.3.0-py3.7.egg\/datasets\/arrow_reader.py\", line 302, in _get_dataset_from_filename\r\n pa_table = ArrowReader.read_table(filename, in_memory=in_memory)\r\n File \"\/dara\/libs\/anaconda3\/envs\/code\/lib\/python3.7\/site-packages\/datasets-1.3.0-py3.7.egg\/datasets\/arrow_reader.py\", line 322, in read_table\r\n stream = stream_from(filename)\r\n File \"pyarrow\/io.pxi\", line 782, in pyarrow.lib.memory_map\r\n File \"pyarrow\/io.pxi\", line 743, in pyarrow.lib.MemoryMappedFile._open\r\n File \"pyarrow\/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow\/error.pxi\", line 99, in pyarrow.lib.check_status\r\nOSError: Memory mapping file failed: Cannot allocate memory\r\n```\r\n\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1990\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1989","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1989\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1989\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1989\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1989","id":822328147,"node_id":"MDU6SXNzdWU4MjIzMjgxNDc=","number":1989,"title":"Question\/problem with dataset 
labels","user":{"login":"ioana-blue","id":17202292,"node_id":"MDQ6VXNlcjE3MjAyMjky","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17202292?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ioana-blue","html_url":"https:\/\/github.com\/ioana-blue","followers_url":"https:\/\/api.github.com\/users\/ioana-blue\/followers","following_url":"https:\/\/api.github.com\/users\/ioana-blue\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ioana-blue\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ioana-blue\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ioana-blue\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ioana-blue\/orgs","repos_url":"https:\/\/api.github.com\/users\/ioana-blue\/repos","events_url":"https:\/\/api.github.com\/users\/ioana-blue\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ioana-blue\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["It seems that I get parsing errors for various fields in my data. For example now I get this:\r\n```\r\n File \"..\/..\/..\/models\/tr-4.3.2\/run_puppets.py\", line 523, in \r\n main()\r\n File \"..\/..\/..\/models\/tr-4.3.2\/run_puppets.py\", line 249, in main\r\n datasets = load_dataset(\"csv\", data_files=data_files)\r\n File \"\/dccstor\/redrug_ier\/envs\/last-tr\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 740, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/dccstor\/redrug_ier\/envs\/last-tr\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 572, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/dccstor\/redrug_ier\/envs\/last-tr\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 650, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/dccstor\/redrug_ier\/envs\/last-tr\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 1028, in _prepare_split\r\n writer.write_table(table)\r\n File \"\/dccstor\/redrug_ier\/envs\/last-tr\/lib\/python3.8\/site-packages\/datasets\/arrow_writer.py\", line 292, in write_table\r\n pa_table = pa_table.cast(self._schema)\r\n File \"pyarrow\/table.pxi\", line 1311, in pyarrow.lib.Table.cast\r\n File \"pyarrow\/table.pxi\", line 265, in pyarrow.lib.ChunkedArray.cast\r\n File \"\/dccstor\/redrug_ier\/envs\/last-tr\/lib\/python3.8\/site-packages\/pyarrow\/compute.py\", line 87, in cast\r\n return call_function(\"cast\", [arr], options)\r\n File \"pyarrow\/_compute.pyx\", line 298, in pyarrow._compute.call_function\r\n File \"pyarrow\/_compute.pyx\", line 192, in pyarrow._compute.Function.call\r\n File \"pyarrow\/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow\/error.pxi\", line 84, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Failed to parse string: https:\/\/www.netgalley.com\/catalog\/book\/121872\r\n```","Not sure if this helps, this is how I load my files (as in the sample scripts on transformers):\r\n\r\n```\r\n if data_args.train_file.endswith(\".csv\"):\r\n # Loading a dataset from local csv files\r\n datasets = load_dataset(\"csv\", data_files=data_files)\r\n```","Since this worked out of the box in a few examples before, I wonder if it's some quoting issue or something else. ","Hi @ioana-blue,\r\nCan you share a sample from your .csv? 
A dummy where you get this error will also help.\r\n\r\nI tried this csv:\r\n```csv\r\nfeature,label\r\n1.2,not nurse\r\n1.3,nurse\r\n1.5,surgeon\r\n```\r\nand the following snippet:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nd = load_dataset(\"csv\",data_files=['test.csv'])\r\n\r\nprint(d)\r\nprint(d['train']['label'])\r\n```\r\nand this works perfectly fine for me:\r\n```sh\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['feature', 'label'],\r\n num_rows: 3\r\n })\r\n})\r\n['not nurse', 'nurse', 'surgeon']\r\n```\r\nI'm sure your csv is more complicated than this one. But it is hard to tell where the issue might be without looking at a sample.","I've had versions where it worked fain. For this dataset, I had all kind of parsing issues that I couldn't understand. What I ended up doing is strip all the columns that I didn't need and also make the label 0\/1. \r\n\r\nI think one line that may have caused a problem was the csv version of this:\r\n\r\n```crawl-data\/CC-MAIN-2017-47\/segments\/1510934806225.78\/wet\/CC-MAIN-20171120203833-20171120223833-00571.warc.wet.gz Rose Blakey is an aspiring journalist. She is desperate to escape the from the small Australian town in which she lives. Rejection after rejection mean she is stuck in what she sees as a dead-end waitressing job. ^M ('Rose', '', 'Blakey') journalist F 38 journalist https:\/\/www.netgalley.com\/catalog\/book\/121872 _ is desperate to escape the from the small Australian town in which _ lives. Rejection after rejection mean _ is stuck in what _ sees as a dead-end waitressing job. She is desperate to escape the from the small Australian town in which she lives. Rejection after rejection mean she is stuck in what she sees as a dead-end waitressing job.```\r\n\r\nThe error I got in this case is this one: https:\/\/github.com\/huggingface\/datasets\/issues\/1989#issuecomment-790842771\r\n\r\nNote, this line was part of a much larger file and until this line I guess it was working fine. ","Hi @ioana-blue,\r\n\r\nWhat is the separator you're using for the csv? I see there are only two commas in the given line, but they don't seem like appropriate points. Also, is this a string part of one line, or an entire line? There should also be a label, right?","Sorry for the confusion, the sample above was from a tsv that was used to derive the csv. Let me construct the csv again (I had remove it). \r\n\r\nThis is the line in the csv - this is the whole line:\r\n```crawl-data\/CC-MAIN-2017-47\/segments\/1510934806225.78\/wet\/CC-MAIN-20171120203833-20171120223833-00571.warc.wet.gz,Rose Blakey is an aspiring journalist. She is desperate to escape the from the small Australian town in which she lives. Rejection after rejection mean she is stuck in what she sees as a dead,\"('Rose', '', 'Blakey')\",journalist,F,38,journalist,https:\/\/www.netgalley.com\/catalog\/book\/121872,_ is desperate to escape the from the small Australian town in which _ lives. Rejection after rejection mean _ is stuck in what _ sees as a dead-end waitressing job., She is desperate to escape the from the small Australian town in which she lives. 
Rejection after rejection mean she is stuck in what she sees as a dead-end waitressing job.```","Hi,\r\nJust in case you want to use tsv directly, you can use the separator argument while loading the dataset.\r\n```python\r\nd = load_dataset(\"csv\",data_files=['test.csv'],sep=\"\\t\")\r\n```\r\n\r\nAdditionally, I don't face the issues with the following csv (same as the one you provided):\r\n\r\n```sh\r\nlink1,text1,info1,info2,info3,info4,info5,link2,text2,text3\r\ncrawl-data\/CC-MAIN-2017-47\/segments\/1510934806225.78\/wet\/CC-MAIN-20171120203833-20171120223833-00571.warc.wet.gz,Rose Blakey is an aspiring journalist. She is desperate to escape the from the small Australian town in which she lives. Rejection after rejection mean she is stuck in what she sees as a dead,\"('Rose', '', 'Blakey')\",journalist,F,38,journalist,https:\/\/www.netgalley.com\/catalog\/book\/121872,_ is desperate to escape the from the small Australian town in which _ lives. Rejection after rejection mean _ is stuck in what _ sees as a dead-end waitressing job., She is desperate to escape the from the small Australian town in which she lives. Rejection after rejection mean she is stuck in what she sees as a dead-end waitressing job.\r\n```\r\nOutput after loading:\r\n```sh\r\n{'link1': 'crawl-data\/CC-MAIN-2017-47\/segments\/1510934806225.78\/wet\/CC-MAIN-20171120203833-20171120223833-00571.warc.wet.gz', 'text1': 'Rose Blakey is an aspiring journalist. She is desperate to escape the from the small Australian town in which she lives. Rejection after rejection mean she is stuck in what she sees as a dead', 'info1': \"('Rose', '', 'Blakey')\", 'info2': 'journalist', 'info3': 'F', 'info4': 38, 'info5': 'journalist', 'link2': 'https:\/\/www.netgalley.com\/catalog\/book\/121872', 'text2': '_ is desperate to escape the from the small Australian town in which _ lives. Rejection after rejection mean _ is stuck in what _ sees as a dead-end waitressing job.', 'text3': ' She is desperate to escape the from the small Australian town in which she lives. Rejection after rejection mean she is stuck in what she sees as a dead-end waitressing job.'}\r\n```\r\nCan you check once if the tsv works for you directly using the separator argument? The conversion from tsv to csv could create issues, I'm only guessing though.","thanks for the tip. very strange :\/ I'll check my datasets version as well. \r\n\r\nI will have more similar experiments soon so I'll let you know if I manage to get rid of this. ","No problem at all. I thought I'd be able to solve this but I'm unable to replicate the issue :\/"],"created_at":1614877613000,"updated_at":1615455855000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi, I'm using a dataset with two labels \"nurse\" and \"not nurse\". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are \"nurse\" and \"surgeon\". 
\r\n\r\nThis is the trace I get:\r\n\r\n```\r\nFile \"..\/..\/..\/models\/tr-4.3.2\/run_puppets.py\", line 523, in \r\n main()\r\n File \"..\/..\/..\/models\/tr-4.3.2\/run_puppets.py\", line 249, in main\r\n datasets = load_dataset(\"csv\", data_files=data_files)\r\n File \"\/dccstor\/redrug_ier\/envs\/last-tr\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 740, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/dccstor\/redrug_ier\/envs\/last-tr\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 572, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/dccstor\/redrug_ier\/envs\/last-tr\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 650, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/dccstor\/redrug_ier\/envs\/last-tr\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 1028, in _prepare_split\r\n writer.write_table(table)\r\n File \"\/dccstor\/redrug_ier\/envs\/last-tr\/lib\/python3.8\/site-packages\/datasets\/arrow_writer.py\", line 292, in write_table\r\n pa_table = pa_table.cast(self._schema)\r\n File \"pyarrow\/table.pxi\", line 1311, in pyarrow.lib.Table.cast\r\n File \"pyarrow\/table.pxi\", line 265, in pyarrow.lib.ChunkedArray.cast\r\n File \"\/dccstor\/redrug_ier\/envs\/last-tr\/lib\/python3.8\/site-packages\/pyarrow\/compute.py\", line 87, in cast\r\n return call_function(\"cast\", [arr], options)\r\n File \"pyarrow\/_compute.pyx\", line 298, in pyarrow._compute.call_function\r\n File \"pyarrow\/_compute.pyx\", line 192, in pyarrow._compute.Function.call\r\n File \"pyarrow\/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow\/error.pxi\", line 84, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Failed to parse string: not nurse\r\n```\r\n\r\nAny ideas how to fix this? For now, I'll probably make them numeric. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1989\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1988","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1988\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1988\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1988\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1988","id":822324605,"node_id":"MDU6SXNzdWU4MjIzMjQ2MDU=","number":1988,"title":"Readme.md is misleading about kinds of datasets?","user":{"login":"surak","id":878399,"node_id":"MDQ6VXNlcjg3ODM5OQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/878399?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/surak","html_url":"https:\/\/github.com\/surak","followers_url":"https:\/\/api.github.com\/users\/surak\/followers","following_url":"https:\/\/api.github.com\/users\/surak\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/surak\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/surak\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/surak\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/surak\/orgs","repos_url":"https:\/\/api.github.com\/users\/surak\/repos","events_url":"https:\/\/api.github.com\/users\/surak\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/surak\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Yes it's possible to use image data. There are already a few of them available (MNIST, CIFAR..)"],"created_at":1614877460000,"updated_at":1628100323000,"closed_at":1628100323000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi!\r\n\r\nAt the README.MD, you say: \"efficient data pre-processing: simple, fast and reproducible data pre-processing for the above public datasets as well as your own local datasets in CSV\/JSON\/text. \"\r\n\r\nBut here:\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/master\/templates\/new_dataset_script.py#L82-L117\r\n\r\nYou mention other kinds of datasets, with images and so on. I'm confused. \r\n\r\nIs it possible to use it to store, say, imagenet locally? 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1988\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1987","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1987\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1987\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1987\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1987","id":822308956,"node_id":"MDU6SXNzdWU4MjIzMDg5NTY=","number":1987,"title":"wmt15 is broken","user":{"login":"stas00","id":10676103,"node_id":"MDQ6VXNlcjEwNjc2MTAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10676103?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stas00","html_url":"https:\/\/github.com\/stas00","followers_url":"https:\/\/api.github.com\/users\/stas00\/followers","following_url":"https:\/\/api.github.com\/users\/stas00\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stas00\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stas00\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stas00\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stas00\/orgs","repos_url":"https:\/\/api.github.com\/users\/stas00\/repos","events_url":"https:\/\/api.github.com\/users\/stas00\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stas00\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1614876385000,"updated_at":1614876385000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"While testing the hotfix, I tried a random other wmt release and found wmt15 to be broken:\r\n```\r\npython -c 'from datasets import load_dataset; load_dataset(\"wmt15\", \"de-en\")' \r\nDownloading: 2.91kB [00:00, 818kB\/s]\r\nDownloading: 3.02kB [00:00, 897kB\/s]\r\nDownloading: 41.1kB [00:00, 19.1MB\/s]\r\nDownloading and preparing dataset wmt15\/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/home\/stas\/.cache\/huggingface\/datasets\/wmt15\/de-en\/1.0.0\/39ad5f9262a0910a8ad7028ad432731ad23fdf91f2cebbbf2ba4776b9859e87f...\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/home\/stas\/anaconda3\/envs\/main-38\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 740, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/home\/stas\/anaconda3\/envs\/main-38\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 578, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/home\/stas\/anaconda3\/envs\/main-38\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 634, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"\/home\/stas\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/wmt15\/39ad5f9262a0910a8ad7028ad432731ad23fdf91f2cebbbf2ba4776b9859e87f\/wmt_utils.py\", line 757, in _split_generators\r\n downloaded_files = dl_manager.download_and_extract(urls_to_download)\r\n File 
\"\/home\/stas\/anaconda3\/envs\/main-38\/lib\/python3.8\/site-packages\/datasets\/utils\/download_manager.py\", line 283, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"\/home\/stas\/anaconda3\/envs\/main-38\/lib\/python3.8\/site-packages\/datasets\/utils\/download_manager.py\", line 191, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"\/home\/stas\/anaconda3\/envs\/main-38\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py\", line 203, in map_nested\r\n mapped = [\r\n File \"\/home\/stas\/anaconda3\/envs\/main-38\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py\", line 204, in \r\n _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)\r\n File \"\/home\/stas\/anaconda3\/envs\/main-38\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py\", line 160, in _single_map_nested\r\n mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]\r\n File \"\/home\/stas\/anaconda3\/envs\/main-38\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py\", line 160, in \r\n mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]\r\n File \"\/home\/stas\/anaconda3\/envs\/main-38\/lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py\", line 142, in _single_map_nested\r\n return function(data_struct)\r\n File \"\/home\/stas\/anaconda3\/envs\/main-38\/lib\/python3.8\/site-packages\/datasets\/utils\/download_manager.py\", line 214, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File \"\/home\/stas\/anaconda3\/envs\/main-38\/lib\/python3.8\/site-packages\/datasets\/utils\/file_utils.py\", line 274, in cached_path\r\n output_path = get_from_cache(\r\n File \"\/home\/stas\/anaconda3\/envs\/main-38\/lib\/python3.8\/site-packages\/datasets\/utils\/file_utils.py\", line 614, in get_from_cache\r\n raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\nFileNotFoundError: Couldn't find file at https:\/\/huggingface.co\/datasets\/wmt\/wmt15\/resolve\/main\/training-parallel-nc-v10.tgz\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1987\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1986","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1986\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1986\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1986\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1986","id":822176290,"node_id":"MDU6SXNzdWU4MjIxNzYyOTA=","number":1986,"title":"wmt datasets fail to 
load","user":{"login":"sabania","id":32322564,"node_id":"MDQ6VXNlcjMyMzIyNTY0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32322564?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sabania","html_url":"https:\/\/github.com\/sabania","followers_url":"https:\/\/api.github.com\/users\/sabania\/followers","following_url":"https:\/\/api.github.com\/users\/sabania\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sabania\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sabania\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sabania\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sabania\/orgs","repos_url":"https:\/\/api.github.com\/users\/sabania\/repos","events_url":"https:\/\/api.github.com\/users\/sabania\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sabania\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["caching issue, seems to work again.."],"created_at":1614867535000,"updated_at":1614868267000,"closed_at":1614868267000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"~\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\wmt14\\43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e\\wmt_utils.py in _split_generators(self, dl_manager)\r\n 758 # Extract manually downloaded files.\r\n 759 manual_files = dl_manager.extract(manual_paths_dict)\r\n--> 760 extraction_map = dict(downloaded_files, **manual_files)\r\n 761 \r\n 762 for language in self.config.language_pair:\r\n\r\nTypeError: type object argument after ** must be a mapping, not list","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1986\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1985","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1985\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1985\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1985\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1985","id":822170651,"node_id":"MDExOlB1bGxSZXF1ZXN0NTg0ODM4NjIw","number":1985,"title":"Optimize int 
precision","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq, are the tests OK? Some other cases I missed? Do you agree with this approach?","I just tested this and it works like a charm :) \r\n\r\nHowever tokenizing and then setting the format to \"torch\" to feed the tokens into a model doesn't seem to work anymore, since the pytorch tensors have the int32\/int8 precisions instead of int64 that is required as model inputs.\r\n\r\nFor example:\r\n\r\n```python\r\nimport torch\r\nfrom datasets import Dataset\r\nfrom transformers import BertModel, BertTokenizer\r\n\r\ntorch.set_grad_enabled(False)\r\n\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\nmodel = BertModel.from_pretrained(\"bert-base-uncased\")\r\n\r\ndataset = Dataset.from_dict({\"text\": [\"hello there !\"]})\r\ndataset = dataset.map(tokenizer, input_columns=\"text\", remove_columns=dataset.column_names)\r\ndataset = dataset.with_format(\"torch\")\r\n\r\nprint(dataset.features)\r\n# {'attention_mask': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None),\r\n# 'input_ids': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), # this should be int32 though\r\n# 'token_type_ids': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None)}\r\n\r\nmodel(**dataset[:1])\r\n# RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.CharTensor instead (while checking arguments for embedding)\r\n\r\ndataset = dataset.with_format(\"torch\", dtype=torch.int64)\r\n\r\nmodel(**dataset[:1])\r\n# works as expected\r\n```\r\n\r\nPinging @sgugger here to make sure we take the right decision here.\r\n\r\nDo we want the \"torch\" format to always return int64 ? Or does it have to keep the precision defined by the `dataset.features` \r\n and therefore we would need to specify \"torch\" with `dtype=torch.int64` ?","From a user perspective, I think it's fine if the \"torch\" format converts all ints types to `torch.int64` by default since it's what the model will need almost all the time. 
I don't see a case where you would want to keep the low precision at the top of my head, and one can always write a custom transform for an edge case.","Sounds good to me !\r\nFor consistency maybe we should make the float precision fixed as well (float32, I guess)","Yes, that would be the one used by default.","Do we have the same requirements for TensorFlow?","Yes I we should do the same for tensorflow as well since tf models would have the same issue\r\n\r\nThanks for adding this :)","@lhoestq I think this PR is ready... :)"],"created_at":1614867143000,"updated_at":1616414680000,"closed_at":1615887840000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1985","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1985","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1985.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1985.patch"},"body":"Optimize int precision to reduce dataset file size.\r\n\r\nClose #1973, close #1825, close #861.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1985\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1984","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1984\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1984\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1984\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1984","id":821816588,"node_id":"MDU6SXNzdWU4MjE4MTY1ODg=","number":1984,"title":"Add tests for WMT datasets","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1614840402000,"updated_at":1614840402000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"As requested in #1981, we need tests for WMT datasets, using dummy data.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1984\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1983","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1983\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1983\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1983\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1983","id":821746008,"node_id":"MDU6SXNzdWU4MjE3NDYwMDg=","number":1983,"title":"The size of CoNLL-2003 is not consistant with the official release.","user":{"login":"h-peng17","id":39556019,"node_id":"MDQ6VXNlcjM5NTU2MDE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/39556019?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/h-peng17","html_url":"https:\/\/github.com\/h-peng17","followers_url":"https:\/\/api.github.com\/users\/h-peng17\/followers","following_url":"https:\/\/api.github.com\/users\/h-peng17\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/h-peng17\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/h-peng17\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/h-peng17\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/h-peng17\/orgs","repos_url":"https:\/\/api.github.com\/users\/h-peng17\/repos","events_url":"https:\/\/api.github.com\/users\/h-peng17\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/h-peng17\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi,\r\n\r\nif you inspect the raw data, you can find there are 946 occurrences of `-DOCSTART- -X- -X- O` in the train split and `14041 + 946 = 14987`, which is exactly the number of sentences the authors report. `-DOCSTART-` is a special line that acts as a boundary between two different documents and is filtered out in our implementation.\r\n\r\n@lhoestq What do you think about including these lines? ([Link](https:\/\/github.com\/flairNLP\/flair\/issues\/1097) to a similar issue in the flairNLP repo)","We should mention in the Conll2003 dataset card that these lines have been removed indeed.\r\n\r\nIf some users are interested in using these lines (maybe to recombine documents ?) then we can add a parameter to the conll2003 dataset to include them.\r\n\r\nBut IMO the default config should stay the current one (without the `-DOCSTART-` stuff), so that you can directly train NER models without additional preprocessing. Let me know what you think","@lhoestq Yes, I agree adding a small note should be sufficient.\r\n\r\nCurrently, NLTK's `ConllCorpusReader` ignores the `-DOCSTART-` lines so I think it's ok if we do the same. If there is an interest in the future to use these lines, then we can include them.","I added a mention of this in conll2003's dataset card:\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/fc9796920da88486c3b97690969aabf03d6b4088\/datasets\/conll2003\/README.md#conll2003\r\n\r\nEdit: just saw your PR @mariosasko (noticed it too late ^^)\r\nLet me take a look at it :)"],"created_at":1614832894000,"updated_at":1615220665000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Thanks for the dataset sharing! 
But when I use conll-2003, I meet some questions.\r\nThe statistics of conll-2003 in this repo is : \r\n\\#train 14041 \\#dev 3250 \\#test 3453\r\nWhile the official statistics is:\r\n\\#train 14987 \\#dev 3466 \\#test 3684\r\nWish for your reply~","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1983\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1982","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1982\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1982\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1982\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1982","id":821448791,"node_id":"MDExOlB1bGxSZXF1ZXN0NTg0MjM2NzQ0","number":1982,"title":"Fix NestedDataStructure.data for empty dict","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I validated that this fixed the problem, thank you, @albertvillanova!\r\n","still facing the same issue or similar:\r\nfrom datasets import load_dataset\r\nwtm14_test = load_dataset('wmt14',\"de-en\",cache_dir='.\/datasets')\r\n\r\n~\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\wmt14\\43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e\\wmt_utils.py in _split_generators(self, dl_manager)\r\n 758 # Extract manually downloaded files.\r\n 759 manual_files = dl_manager.extract(manual_paths_dict)\r\n--> 760 extraction_map = dict(downloaded_files, **manual_files)\r\n 761 \r\n 762 for language in self.config.language_pair:\r\n\r\nTypeError: type object argument after ** must be a mapping, not list","Hi @sabania \r\nWe released a patch version that fixes this issue (1.4.1), can you try with the new version please ?\r\n```\r\npip install --upgrade datasets\r\n```","I re-validated with the hotfix and the problem is no more.","It's working. 
thanks a lot."],"created_at":1614802611000,"updated_at":1614876364000,"closed_at":1614811716000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1982","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1982","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1982.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1982.patch"},"body":"Fix #1981","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1982\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1981","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1981\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1981\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1981\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1981","id":821411109,"node_id":"MDU6SXNzdWU4MjE0MTExMDk=","number":1981,"title":"wmt datasets fail to load","user":{"login":"stas00","id":10676103,"node_id":"MDQ6VXNlcjEwNjc2MTAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10676103?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stas00","html_url":"https:\/\/github.com\/stas00","followers_url":"https:\/\/api.github.com\/users\/stas00\/followers","following_url":"https:\/\/api.github.com\/users\/stas00\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stas00\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stas00\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stas00\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stas00\/orgs","repos_url":"https:\/\/api.github.com\/users\/stas00\/repos","events_url":"https:\/\/api.github.com\/users\/stas00\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stas00\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:
\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["@stas00 Mea culpa... May I fix this tomorrow morning?","yes, of course, I reverted to the version before that and it works ;)\r\n\r\nbut since a new release was just made you will probably need to make a hotfix.\r\n\r\nand add the wmt to the tests?","Sure, I will implement a regression test!","@stas00 it is fixed. @lhoestq are you releasing the hot fix or would you prefer me to do it?","I'll do a patch release for this issue early tomorrow.\r\n\r\nAnd yes we absolutly need tests for the wmt datasets: The missing tests for wmt are an artifact from the early development of the lib but now we have tools to generate automatically the dummy data used for tests :)","still facing the same issue or similar:\r\nfrom datasets import load_dataset\r\nwtm14_test = load_dataset('wmt14',\"de-en\",cache_dir='.\/datasets')\r\n\r\n~.cache\\huggingface\\modules\\datasets_modules\\datasets\\wmt14\\43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e\\wmt_utils.py in _split_generators(self, dl_manager)\r\n758 # Extract manually downloaded files.\r\n759 manual_files = dl_manager.extract(manual_paths_dict)\r\n--> 760 extraction_map = dict(downloaded_files, **manual_files)\r\n761\r\n762 for language in self.config.language_pair:\r\n\r\nTypeError: type object argument after ** must be a mapping, not list"],"created_at":1614799299000,"updated_at":1614867407000,"closed_at":1614811716000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"on master:\r\n```\r\npython -c 'from datasets import load_dataset; load_dataset(\"wmt14\", \"de-en\")'\r\nDownloading and preparing dataset wmt14\/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/home\/stas\/.cache\/huggingface\/datasets\/wmt14\/de-en\/1.0.0\/43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e...\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/load.py\", line 740, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/builder.py\", line 578, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/builder.py\", line 634, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"\/home\/stas\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/wmt14\/43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e\/wmt_utils.py\", line 760, in _split_generators\r\n extraction_map = dict(downloaded_files, 
**manual_files)\r\n```\r\n\r\nit worked fine recently. same problem if I try wmt16.\r\n\r\ngit bisect points to this commit from Feb 25 as the culprit https:\/\/github.com\/huggingface\/datasets\/commit\/792f1d9bb1c5361908f73e2ef7f0181b2be409fa\r\n\r\n@albertvillanova ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1981\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1980","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1980\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1980\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1980\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1980","id":821312810,"node_id":"MDExOlB1bGxSZXF1ZXN0NTg0MTI1OTUy","number":1980,"title":"Loading all answers from drop","user":{"login":"KaijuML","id":25499439,"node_id":"MDQ6VXNlcjI1NDk5NDM5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25499439?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/KaijuML","html_url":"https:\/\/github.com\/KaijuML","followers_url":"https:\/\/api.github.com\/users\/KaijuML\/followers","following_url":"https:\/\/api.github.com\/users\/KaijuML\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/KaijuML\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/KaijuML\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/KaijuML\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/KaijuML\/orgs","repos_url":"https:\/\/api.github.com\/users\/KaijuML\/repos","events_url":"https:\/\/api.github.com\/users\/KaijuML\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/KaijuML\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Nice thanks for the change !\r\nThis looks all good to me\r\n\r\nBefore we merge can you just update the dataset_infos.json file of drop ? You can do it by running\r\n```\r\ndatasets-cli test .\/datasets\/drop --all_configs --save_infos --ignore_verifications\r\n```","Done!"],"created_at":1614791587000,"updated_at":1615807646000,"closed_at":1615807646000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1980","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1980","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1980.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1980.patch"},"body":"Hello all,\r\n\r\nI propose this change to the DROP loading script so that all answers are loaded no matter their type. Currently, only \"span\" answers are loaded, which excludes a significant amount of answers from drop (i.e. \"number\" and \"date\").\r\n\r\nI updated the script with the version I use for my work. However, I couldn't find a way to verify that all is working when integrated with the datasets repo, since the `load_dataset` method seems to always download the script from github and not local files.\r\n\r\nNote that 9 items from the train set have no answers, as well as 1 from the validation set. 
The script I propose simply do not load them.\r\n\r\nLet me know if there is anything else I can do,\r\nCl\u00e9ment","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1980\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1979","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1979\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1979\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1979\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1979","id":820977853,"node_id":"MDExOlB1bGxSZXF1ZXN0NTgzODQ3MTk3","number":1979,"title":"Add article_id and process test set template for semeval 2020 task 11\u2026","user":{"login":"hemildesai","id":8195444,"node_id":"MDQ6VXNlcjgxOTU0NDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8195444?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hemildesai","html_url":"https:\/\/github.com\/hemildesai","followers_url":"https:\/\/api.github.com\/users\/hemildesai\/followers","following_url":"https:\/\/api.github.com\/users\/hemildesai\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hemildesai\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hemildesai\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hemildesai\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hemildesai\/orgs","repos_url":"https:\/\/api.github.com\/users\/hemildesai\/repos","events_url":"https:\/\/api.github.com\/users\/hemildesai\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hemildesai\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks !\r\nNow to fix the CI the only thing left is to add a dummy `test-task-tc-template.out` file inside the `dummy_data.zip` at `.\/datasets\/sem_eval_2020_task_11\/dummy\/1.1.0`\r\nIt must contain the labels template for each dummy article of the test set included in `dummy_data.zip`\r\n\r\nAfter that we should be good to merge this one :)","@lhoestq Made the changes! The failure now seems to be unrelated to the changes. Any idea what's going on?","This is a bug on master that we're investigating. You can ignore it"],"created_at":1614767672000,"updated_at":1615633180000,"closed_at":1615554650000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1979","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1979","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1979.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1979.patch"},"body":"\u2026 dataset\r\n\r\n- `article_id` is needed to create the submission file for the task at https:\/\/propaganda.qcri.org\/semeval2020-task11\/\r\n- The `technique classification` task provides the span indices in a template for the test set that is necessary to complete the task. 
This PR implements processing of that template for the dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1979\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1978","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1978\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1978\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1978\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1978","id":820956806,"node_id":"MDExOlB1bGxSZXF1ZXN0NTgzODI5Njgz","number":1978,"title":"Adding ro sts dataset","user":{"login":"lorinczb","id":36982089,"node_id":"MDQ6VXNlcjM2OTgyMDg5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36982089?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lorinczb","html_url":"https:\/\/github.com\/lorinczb","followers_url":"https:\/\/api.github.com\/users\/lorinczb\/followers","following_url":"https:\/\/api.github.com\/users\/lorinczb\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lorinczb\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lorinczb\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lorinczb\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lorinczb\/orgs","repos_url":"https:\/\/api.github.com\/users\/lorinczb\/repos","events_url":"https:\/\/api.github.com\/users\/lorinczb\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lorinczb\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq thank you very much for the quick review and useful comments! \r\n\r\nI have tried to address them all, and a few comments that you left for ro_sts I have applied to the ro_sts_parallel as well (in read-me: fixed source_datasets, links to homepage, repository, leaderboard, thanks to me message, in ro_sts_parallel.py changed to camel case as well). In the ro_sts_parallel I have changed the order on the languages, also in the example, as you said order doesn't matter, but just to have them listed in the readme in the same order.\r\n\r\nI have commented above on why we would like to keep them as separate datasets, hope it makes sense.\r\n\r\nIf there is anything else I should change please let me know.\r\n\r\nThanks again!","@lhoestq I tried to adjust the ro_sts_parallel, locally when I run the tests they are passing, but somewhere it has the old name of rosts-parallel-ro-en which I am trying to change to ro_sts_parallel. I don't think I have left anything related to rosts-parallel-ro-en, but when the dataset_infos.json is regenerated it adds it. Could you please help me out, how can I fix this? Thanks in advance!","Great, thanks for all your help! 
"],"created_at":1614766133000,"updated_at":1614938414000,"closed_at":1614936835000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1978","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1978","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1978.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1978.patch"},"body":"Adding [RO-STS](https:\/\/github.com\/dumitrescustefan\/RO-STS) dataset","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1978\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1977","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1977\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1977\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1977\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1977","id":820312022,"node_id":"MDU6SXNzdWU4MjAzMTIwMjI=","number":1977,"title":"ModuleNotFoundError: No module named 'apache_beam' for wikipedia datasets ","user":{"login":"dorost1234","id":79165106,"node_id":"MDQ6VXNlcjc5MTY1MTA2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/79165106?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dorost1234","html_url":"https:\/\/github.com\/dorost1234","followers_url":"https:\/\/api.github.com\/users\/dorost1234\/followers","following_url":"https:\/\/api.github.com\/users\/dorost1234\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dorost1234\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dorost1234\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dorost1234\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dorost1234\/orgs","repos_url":"https:\/\/api.github.com\/users\/dorost1234\/repos","events_url":"https:\/\/api.github.com\/users\/dorost1234\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dorost1234\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I sometimes also get this error with other languages of the same dataset:\r\n\r\n File \"\/dara\/libs\/anaconda3\/envs\/code\/lib\/python3.7\/site-packages\/datasets-1.3.0-py3.7.egg\/datasets\/arrow_reader.py\", line 322, in read_table\r\n stream = stream_from(filename)\r\n File \"pyarrow\/io.pxi\", line 782, in pyarrow.lib.memory_map\r\n File \"pyarrow\/io.pxi\", line 743, in pyarrow.lib.MemoryMappedFile._open\r\n File \"pyarrow\/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow\/error.pxi\", line 99, in pyarrow.lib.check_status\r\nOSError: Memory mapping file failed: Cannot allocate memory\r\n\r\n@lhoestq \r\n","Hi ! 
Thanks for reporting\r\nSome wikipedia configurations do require the user to have `apache_beam` in order to parse the wikimedia data.\r\n\r\nOn the other hand regarding your second issue\r\n```\r\nOSError: Memory mapping file failed: Cannot allocate memory\r\n```\r\nI've never experienced this, can you open a new issue for this specific error and provide more details please ?\r\nFor example what script did you use to get this, what language did you use, what's your environment details (os, python version, pyarrow version).."],"created_at":1614712888000,"updated_at":1614766660000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI am trying to run run_mlm.py code [1] of huggingface with following \"wikipedia\"\/ \"20200501.aa\" dataset:\r\n\r\n`python run_mlm.py --model_name_or_path bert-base-multilingual-cased --dataset_name wikipedia --dataset_config_name 20200501.aa --do_train --do_eval --output_dir \/tmp\/test-mlm --max_seq_length 256\r\n`\r\n\r\nI am getting this error, but as per documentation, huggingface dataset provide processed version of this dataset and users can load it without requiring setup extra settings for apache-beam. could you help me please to load this dataset? \r\nDo you think I can run run_ml.py with this dataset? or anyway I could subsample and train the model? I greatly appreciate providing the processed version of all languages for this dataset, which allow the user to use them without setting up apache-beam,. thanks \r\n\r\nI really appreciate your help.\r\n@lhoestq \r\n\r\nthanks.\r\n\r\n[1] https:\/\/github.com\/huggingface\/transformers\/blob\/master\/examples\/language-modeling\/run_mlm.py\r\n\r\nerror I get: \r\n\r\n```\r\n>>> import datasets \r\n>>> datasets.load_dataset(\"wikipedia\", \"20200501.aa\")\r\nDownloading and preparing dataset wikipedia\/20200501.aa (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/dara\/temp\/cache_home_2\/datasets\/wikipedia\/20200501.aa\/1.0.0\/4021357e28509391eab2f8300d9b689e7e8f3a877ebb3d354b01577d497ebc63...\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"\/dara\/temp\/libs\/anaconda3\/envs\/codes\/lib\/python3.7\/site-packages\/datasets-1.3.0-py3.7.egg\/datasets\/load.py\", line 746, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"\/dara\/temp\/libs\/anaconda3\/envs\/codes\/lib\/python3.7\/site-packages\/datasets-1.3.0-py3.7.egg\/datasets\/builder.py\", line 573, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/dara\/temp\/libs\/anaconda3\/envs\/codes\/lib\/python3.7\/site-packages\/datasets-1.3.0-py3.7.egg\/datasets\/builder.py\", line 1099, in _download_and_prepare\r\n import apache_beam as beam\r\nModuleNotFoundError: No module named 'apache_beam'\r\n\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1977\/timeline","performed_via_github_app":null,"is_pull_request":false} 
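For the `ModuleNotFoundError` in the issue above, the usual remedy is to install the Apache Beam dependencies that the raw wikipedia configurations need before loading them. A hedged sketch follows; the extra `mwparserfromhell` package and the `beam_runner` argument are assumptions based on the wikipedia dataset script and Beam-dataset docs of this period, not something stated in the issue itself.

```python
# Assumed prerequisite (run in the shell first):
#   pip install apache-beam mwparserfromhell
from datasets import load_dataset

# Small dumps such as "20200501.aa" can be processed locally with Beam's
# DirectRunner once apache_beam is importable; very large dumps would need a
# distributed runner instead.
wiki = load_dataset("wikipedia", "20200501.aa", beam_runner="DirectRunner")
print(wiki["train"][0]["title"])
```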
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1976","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1976\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1976\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1976\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1976","id":820228538,"node_id":"MDExOlB1bGxSZXF1ZXN0NTgzMjA3NDI4","number":1976,"title":"Add datasets full offline mode with HF_DATASETS_OFFLINE","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1614706019000,"updated_at":1614786331000,"closed_at":1614786330000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1976","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1976","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1976.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1976.patch"},"body":"Add the HF_DATASETS_OFFLINE environment variable for users who want to use `datasets` offline without having to wait for the network timeouts\/retries to happen. 
This was requested in https:\/\/github.com\/huggingface\/datasets\/issues\/1939\r\n\r\ncc @stas00 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1976\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1975","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1975\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1975\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1975\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1975","id":820205485,"node_id":"MDExOlB1bGxSZXF1ZXN0NTgzMTg4NjM3","number":1975,"title":"Fix flake8","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1614704353000,"updated_at":1614854602000,"closed_at":1614854602000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1975","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1975","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1975.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1975.patch"},"body":"Fix flake8 style.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1975\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1974","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1974\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1974\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1974\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1974","id":820122223,"node_id":"MDExOlB1bGxSZXF1ZXN0NTgzMTE5MDI0","number":1974,"title":"feat(docs): navigate with left\/right arrow 
keys","user":{"login":"ydcjeff","id":32727188,"node_id":"MDQ6VXNlcjMyNzI3MTg4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32727188?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ydcjeff","html_url":"https:\/\/github.com\/ydcjeff","followers_url":"https:\/\/api.github.com\/users\/ydcjeff\/followers","following_url":"https:\/\/api.github.com\/users\/ydcjeff\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ydcjeff\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ydcjeff\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ydcjeff\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ydcjeff\/orgs","repos_url":"https:\/\/api.github.com\/users\/ydcjeff\/repos","events_url":"https:\/\/api.github.com\/users\/ydcjeff\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ydcjeff\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1614698690000,"updated_at":1614854652000,"closed_at":1614854568000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1974","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1974","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1974.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1974.patch"},"body":"Enables docs navigation with left\/right arrow keys. It can be useful for the ones who navigate with keyboard a lot.\r\nMore info : https:\/\/github.com\/sphinx-doc\/sphinx\/pull\/2064\r\n\r\nYou can try here : https:\/\/29353-250213286-gh.circle-artifacts.com\/0\/docs\/_build\/html\/index.html","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1974\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1973","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1973\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1973\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1973\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1973","id":820077312,"node_id":"MDU6SXNzdWU4MjAwNzczMTI=","number":1973,"title":"Question: what gets stored in the datasets cache and why is it so 
huge?","user":{"login":"ioana-blue","id":17202292,"node_id":"MDQ6VXNlcjE3MjAyMjky","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17202292?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ioana-blue","html_url":"https:\/\/github.com\/ioana-blue","followers_url":"https:\/\/api.github.com\/users\/ioana-blue\/followers","following_url":"https:\/\/api.github.com\/users\/ioana-blue\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ioana-blue\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ioana-blue\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ioana-blue\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ioana-blue\/orgs","repos_url":"https:\/\/api.github.com\/users\/ioana-blue\/repos","events_url":"https:\/\/api.github.com\/users\/ioana-blue\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ioana-blue\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Echo'ing this observation: I have a few datasets in the neighborhood of 2GB CSVs uncompressed, and when I use something like `Dataset.save_to_disk()` it's ~18GB on disk.\r\n\r\nIf this is unexpected behavior, would be happy to help run debugging as needed.","Thanks @ioana-blue for pointing out this problem (and thanks also @justin-yan). 
You are right that current implementation of the datasets caching files take too much memory. We are definitely changing this and optimizing the defaults, so that the file sizes are considerably reduced. I will come back to you as soon as this is fixed.","Thank you! Also I noticed that the files don't seem to be cleaned after the jobs finish. Last night I had only 3 jobs running, but the cache was still at 180GB. ","And to clarify, it's not memory, it's disk space. Thank you!","Hi ! As Albert said they can sometimes take more space that expected but we'll fix that soon.\r\n\r\nAlso, to give more details about caching: computations on a dataset are cached by default so that you don't have to recompute them the next time you run them.\r\n\r\nSo by default the cache files stay on your disk when you job is finished (so that if you re-execute it, it will be reloaded from the cache).\r\nFeel free to clear your cache after your job has finished, or disable caching using\r\n```python\r\nimport datasets\r\n\r\ndatasets.set_caching_enabled(False)\r\n```","Thanks for the tip, this is useful. ","Hi @ioana-blue, we have optimized Datasets' disk usage in the latest release v1.5.\r\n\r\nFeel free to update your Datasets version\r\n```shell\r\npip install -U datasets\r\n```\r\nand see if it better suits your needs.","Thank you!"],"created_at":1614695753000,"updated_at":1617113039000,"closed_at":1615887840000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I'm running several training jobs (around 10) with a relatively large dataset (3M samples). The datasets cache reached 178G and it seems really large. What is it stored in there and why is it so large? I don't think I noticed this problem before and seems to be related to the new version of the datasets library. Any insight? 
Thank you!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1973\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1972","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1972\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1972\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1972\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1972","id":819752761,"node_id":"MDU6SXNzdWU4MTk3NTI3NjE=","number":1972,"title":"'Dataset' object has no attribute 'rename_column'","user":{"login":"farooqzaman1","id":23195502,"node_id":"MDQ6VXNlcjIzMTk1NTAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23195502?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/farooqzaman1","html_url":"https:\/\/github.com\/farooqzaman1","followers_url":"https:\/\/api.github.com\/users\/farooqzaman1\/followers","following_url":"https:\/\/api.github.com\/users\/farooqzaman1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/farooqzaman1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/farooqzaman1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/farooqzaman1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/farooqzaman1\/orgs","repos_url":"https:\/\/api.github.com\/users\/farooqzaman1\/repos","events_url":"https:\/\/api.github.com\/users\/farooqzaman1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/farooqzaman1\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! 
`rename_column` has been added recently and will be available in the next release"],"created_at":1614672109000,"updated_at":1614690483000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"'Dataset' object has no attribute 'rename_column'","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1972\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1971","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1971\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1971\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1971\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1971","id":819714231,"node_id":"MDExOlB1bGxSZXF1ZXN0NTgyNzgyNTU0","number":1971,"title":"Fix ArrowWriter closes stream at exit","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Oh nice thanks for adding the context manager ! All the streams and RecordBatchWriter will be properly closed now. Hopefully this gives a better experience on windows on which it's super important to close stuff.\r\n\r\nNot sure about the error, it looks like a process crashed silently.\r\nLet me take a look","> Hopefully this gives a better experience on windows on which it's super important to close stuff.\r\n\r\nExactly! On Windows, you got:\r\n> PermissionError: [WinError 32] The process cannot access the file because it is being used by another process\r\n\r\nwhen trying to access the unclosed `stream` file, e.g. by `with incomplete_dir(self._cache_dir) as tmp_data_dir`: `shutil.rmtree(tmp_dir)`\r\n\r\nThe reason is: https:\/\/docs.python.org\/3\/library\/os.html#os.remove\r\n\r\n> On Windows, attempting to remove a file that is in use causes an exception to be raised; on Unix, the directory entry is removed but the storage allocated to the file is not made available until the original file is no longer in use.\r\n\r\n\r\n","The test passes on my windows. This was probably a circleCI issue. I re-ran the circleCI tests","NICE! It passed!","Maybe you can merge master into this branch and check the CI before merging ?","@lhoestq done! ;)","Thanks ! 
merging"],"created_at":1614669154000,"updated_at":1615394217000,"closed_at":1615394217000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1971","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1971","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1971.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1971.patch"},"body":"Current implementation of ArrowWriter does not properly release the `stream` resource (by closing it) if its `finalize()` method is not called and\/or an Exception is raised before\/during the call to its `finalize()` method.\r\n\r\nTherefore, ArrowWriter should be used as a context manager that properly closes its `stream` resource at exit.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1971\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1970","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1970\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1970\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1970\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1970","id":819500620,"node_id":"MDExOlB1bGxSZXF1ZXN0NTgyNjAzMzEw","number":1970,"title":"Fixing the URL filtering for bad MLSUM examples in GEM","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1614648178000,"updated_at":1614655146000,"closed_at":1614650493000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1970","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1970","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1970.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1970.patch"},"body":"This updates the code and metadata to use the updated `gem_mlsum_bad_ids_fixed.json` file provided by @juand-r\r\n\r\ncc @sebastianGehrmann ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1970\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1967","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1967\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1967\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1967\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1967","id":819129568,"node_id":"MDExOlB1bGxSZXF1ZXN0NTgyMjc5OTEx","number":1967,"title":"Add Turkish News Category Dataset - 270K - Lite Version","user":{"login":"yavuzKomecoglu","id":5150963,"node_id":"MDQ6VXNlcjUxNTA5NjM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5150963?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yavuzKomecoglu","html_url":"https:\/\/github.com\/yavuzKomecoglu","followers_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/followers","following_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/orgs","repos_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/repos","events_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for the change, merging now !"],"created_at":1614622919000,"updated_at":1614705900000,"closed_at":1614705900000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1967","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1967","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1967.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1967.patch"},"body":"This PR adds the Turkish News Categories Dataset (270K - Lite Version) dataset which is a text classification dataset by me, @basakbuluz and @serdarakyol.\r\nThis dataset contains the same news from the current [interpress_news_category_tr dataset](https:\/\/huggingface.co\/datasets\/interpress_news_category_tr) but contains less information, OCR errors are reduced, can be easily separated, and can be divided into 10 classes (\"k\u00fclt\u00fcrsanat\", \"ekonomi\", \"siyaset\", \"e\u011fitim\", \"d\u00fcnya\", \"spor\", \"teknoloji\", \"magazin\", \"sa\u011fl\u0131k\", \"g\u00fcndem\") were rearranged.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1967\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1966","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1966\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1966\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1966\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1966","id":819101253,"node_id":"MDExOlB1bGxSZXF1ZXN0NTgyMjU2MzE0","number":1966,"title":"Fix metrics collision in separate multiprocessed experiments","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Since the failure was originally intermittent, there is no 100% telling that the problem is gone. \r\nBut if my artificial race condition setup https:\/\/github.com\/huggingface\/datasets\/issues\/1942#issuecomment-787124529 is to be the litmus test then the problem has been fixed, as with this PR branch that particular race condition is taken care of correctly.\r\n\r\nThank you for taking care of this, @lhoestq - locking can be very tricky to do right!"],"created_at":1614620718000,"updated_at":1614690345000,"closed_at":1614690344000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1966","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1966","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1966.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1966.patch"},"body":"As noticed in #1942 , there's a issue with locks if you run multiple separate evaluation experiments in a multiprocessed setup.\r\n\r\nIndeed there is a time span in Metric._finalize() where the process 0 loses its lock before re-acquiring it. This is bad since the lock of the process 0 tells the other process that the corresponding cache file is available for writing\/reading\/deleting: we end up having one metric cache that collides with another one. This can raise FileNotFound errors when a metric tries to read the cache file and if the second conflicting metric deleted it.\r\n\r\nTo fix that I made sure that the lock file of the process 0 stays acquired from the cache file creation to the end of the metric computation. 
This way the other metrics can simply sample a new hashing name in order to avoid the collision.\r\n\r\nFinally I added missing tests for separate experiments in distributed setup.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1966\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1965","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1965\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1965\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1965\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1965","id":818833460,"node_id":"MDU6SXNzdWU4MTg4MzM0NjA=","number":1965,"title":"Can we parallelized the add_faiss_index process over dataset shards ?","user":{"login":"shamanez","id":16892570,"node_id":"MDQ6VXNlcjE2ODkyNTcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16892570?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/shamanez","html_url":"https:\/\/github.com\/shamanez","followers_url":"https:\/\/api.github.com\/users\/shamanez\/followers","following_url":"https:\/\/api.github.com\/users\/shamanez\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/shamanez\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/shamanez\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/shamanez\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/shamanez\/orgs","repos_url":"https:\/\/api.github.com\/users\/shamanez\/repos","events_url":"https:\/\/api.github.com\/users\/shamanez\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/shamanez\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi !\r\nAs far as I know not all faiss indexes can be computed in parallel and then merged. \r\nFor example [here](https:\/\/github.com\/facebookresearch\/faiss\/wiki\/Special-operations-on-indexes#splitting-and-merging-indexes) is is mentioned that only IndexIVF indexes can be merged.\r\nMoreover faiss already works using multithreading to parallelize the workload over your different CPU cores. You can find more info [here](https:\/\/github.com\/facebookresearch\/faiss\/wiki\/Threads-and-asynchronous-calls#internal-threading)\r\nSo I feel like the gains we would get by implementing a parallel `add_faiss_index` would not be that important, but let me know what you think.\r\n","Actually, you are right. I also had the same idea. I am trying this in the context of end-ton-end retrieval training in RAG. So far I have parallelized the embedding re-computation within the training loop by using datasets shards. \r\n\r\nThen I was thinking of can I calculate the indexes for each shard and combined them with **concatenate** before I save.","@lhoestq As you mentioned faiss is already using multiprocessing. I tried to do the add_index with faiss for a dataset object inside a RAY actor and the process became very slow... if fact it takes so much time. It is because a ray actor comes with a single CPU core unless we assign it more. I also tried assigning more cores but still running add_index in the main process is very fast. 
"],"created_at":1614602854000,"updated_at":1614886856000,"closed_at":1614886842000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I am thinking of making the **add_faiss_index** process faster. What if we run the add_faiss_index process on separate dataset shards and then combine them before (dataset.concatenate) saving the faiss.index file ?\r\n\r\nI feel theoretically this will reduce the accuracy of retrieval since it affects the indexing process.\r\n\r\n@lhoestq\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1965\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1964","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1964\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1964\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1964\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1964","id":818624864,"node_id":"MDU6SXNzdWU4MTg2MjQ4NjQ=","number":1964,"title":"Datasets.py function load_dataset does not match squad dataset","user":{"login":"LeopoldACC","id":44536699,"node_id":"MDQ6VXNlcjQ0NTM2Njk5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/44536699?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/LeopoldACC","html_url":"https:\/\/github.com\/LeopoldACC","followers_url":"https:\/\/api.github.com\/users\/LeopoldACC\/followers","following_url":"https:\/\/api.github.com\/users\/LeopoldACC\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/LeopoldACC\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/LeopoldACC\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/LeopoldACC\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/LeopoldACC\/orgs","repos_url":"https:\/\/api.github.com\/users\/LeopoldACC\/repos","events_url":"https:\/\/api.github.com\/users\/LeopoldACC\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/LeopoldACC\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi !\r\n\r\nTo fix 1, an you try to run this code ?\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"squad\", download_mode=\"force_redownload\")\r\n```\r\nMaybe the file your downloaded was corrupted, in this case redownloading this way should fix your issue 1.\r\n\r\nRegarding your 2nd point, you're right that loading the raw json this way doesn't give you a dataset with the column \"context\", \"question\" and \"answers\". Indeed the squad format is a very nested format so you have to preprocess the data. 
You can do it this way:\r\n```python\r\ndef process_squad(examples):\r\n \"\"\"\r\n Process a dataset in the squad format with columns \"title\" and \"paragraphs\"\r\n to return the dataset with columns \"context\", \"question\" and \"answers\".\r\n \"\"\"\r\n out = {\"context\": [], \"question\": [], \"answers\":[]} \r\n for paragraphs in examples[\"paragraphs\"]: \r\n for paragraph in paragraphs: \r\n for qa in paragraph[\"qas\"]: \r\n answers = [{\"answer_start\": answer[\"answer_start\"], \"text\": answer[\"text\"].strip()} for answer in qa[\"answers\"]] \r\n out[\"context\"].append(paragraph[\"context\"].strip()) \r\n out[\"question\"].append(qa[\"question\"].strip()) \r\n out[\"answers\"].append(answers) \r\n return out\r\n\r\ndatasets = load_dataset(extension, data_files=data_files, field=\"data\")\r\ncolumn_names = datasets[\"train\"].column_names\r\n\r\nif set(column_names) == {\"title\", \"paragraphs\"}:\r\n datasets = datasets.map(process_squad, batched=True, remove_columns=column_names)\r\n```\r\n\r\nHope that helps :)","Thks for quickly answering\uff01\r\n### 1 I try the first way,but seems not work \r\n```\r\nTraceback (most recent call last):\r\n File \"examples\/question-answering\/run_qa.py\", line 503, in \r\n main()\r\n File \"examples\/question-answering\/run_qa.py\", line 218, in main\r\n datasets = load_dataset(data_args.dataset_name, download_mode=\"force_redownload\")\r\n File \"\/home2\/zhenggo1\/anaconda3\/envs\/lpot\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 746, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"\/home2\/zhenggo1\/anaconda3\/envs\/lpot\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 573, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/home2\/zhenggo1\/anaconda3\/envs\/lpot\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 633, in _download_and_prepare\r\n self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), \"dataset source files\"\r\n File \"\/home2\/zhenggo1\/anaconda3\/envs\/lpot\/lib\/python3.7\/site-packages\/datasets\/utils\/info_utils.py\", line 39, in verify_checksums\r\n raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https:\/\/rajpurkar.github.io\/SQuAD-explorer\/dataset\/train-v1.1.json']\r\n```\r\n### 2 I try the second way,and run the examples\/question-answering\/run_qa.py,it lead to another bug orz..\r\n```\r\nTraceback (most recent call last):\r\n File \"examples\/question-answering\/run_qa.py\", line 523, in \r\n main()\r\n File \"examples\/question-answering\/run_qa.py\", line 379, in main\r\n load_from_cache_file=not data_args.overwrite_cache,\r\n File \"\/home2\/zhenggo1\/anaconda3\/envs\/lpot\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 1120, in map\r\n update_data = does_function_return_dict(test_inputs, test_indices)\r\n File \"\/home2\/zhenggo1\/anaconda3\/envs\/lpot\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 1091, in does_function_return_dict\r\n function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)\r\n File \"examples\/question-answering\/run_qa.py\", line 339, in prepare_train_features\r\n if len(answers[\"answer_start\"]) == 0:\r\nTypeError: list indices must be integers or slices, not str\r\n```\r\n## may be the function prepare_train_features in run_qa.py need to fix,I think 
is that the prep\r\n```python\r\nfor i, offsets in enumerate(offset_mapping):\r\n # We will label impossible answers with the index of the CLS token.\r\n input_ids = tokenized_examples[\"input_ids\"][i]\r\n cls_index = input_ids.index(tokenizer.cls_token_id)\r\n\r\n # Grab the sequence corresponding to that example (to know what is the context and what is the question).\r\n sequence_ids = tokenized_examples.sequence_ids(i)\r\n\r\n # One example can give several spans, this is the index of the example containing this span of text.\r\n sample_index = sample_mapping[i]\r\n answers = examples[answer_column_name][sample_index]\r\n print(examples,answers)\r\n # If no answers are given, set the cls_index as answer.\r\n if len(answers[\"answer_start\"]) == 0:\r\n tokenized_examples[\"start_positions\"].append(cls_index)\r\n tokenized_examples[\"end_positions\"].append(cls_index)\r\n else:\r\n # Start\/end character index of the answer in the text.\r\n start_char = answers[\"answer_start\"][0]\r\n end_char = start_char + len(answers[\"text\"][0])\r\n\r\n # Start token index of the current span in the text.\r\n token_start_index = 0\r\n while sequence_ids[token_start_index] != (1 if pad_on_right else 0):\r\n token_start_index += 1\r\n\r\n # End token index of the current span in the text.\r\n token_end_index = len(input_ids) - 1\r\n while sequence_ids[token_end_index] != (1 if pad_on_right else 0):\r\n token_end_index -= 1\r\n\r\n # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index).\r\n if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):\r\n tokenized_examples[\"start_positions\"].append(cls_index)\r\n tokenized_examples[\"end_positions\"].append(cls_index)\r\n else:\r\n # Otherwise move the token_start_index and token_end_index to the two ends of the answer.\r\n # Note: we could go after the last offset if the answer is the last word (edge case).\r\n while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:\r\n token_start_index += 1\r\n tokenized_examples[\"start_positions\"].append(token_start_index - 1)\r\n while offsets[token_end_index][1] >= end_char:\r\n token_end_index -= 1\r\n tokenized_examples[\"end_positions\"].append(token_end_index + 1)\r\n\r\n return tokenized_examples\r\n``` ","## I have fixed it, @lhoestq \r\n### the first section change as you said and add [\"id\"]\r\n```python\r\ndef process_squad(examples):\r\n \"\"\"\r\n Process a dataset in the squad format with columns \"title\" and \"paragraphs\"\r\n to return the dataset with columns \"context\", \"question\" and \"answers\".\r\n \"\"\"\r\n # print(examples)\r\n out = {\"context\": [], \"question\": [], \"answers\":[],\"id\":[]} \r\n for paragraphs in examples[\"paragraphs\"]: \r\n for paragraph in paragraphs: \r\n for qa in paragraph[\"qas\"]: \r\n answers = [{\"answer_start\": answer[\"answer_start\"], \"text\": answer[\"text\"].strip()} for answer in qa[\"answers\"]] \r\n out[\"context\"].append(paragraph[\"context\"].strip()) \r\n out[\"question\"].append(qa[\"question\"].strip()) \r\n out[\"answers\"].append(answers) \r\n out[\"id\"].append(qa[\"id\"]) \r\n return out\r\ncolumn_names = datasets[\"train\"].column_names if training_args.do_train else datasets[\"validation\"].column_names\r\n# print(datasets[\"train\"].column_names)\r\nif set(column_names) == {\"title\", \"paragraphs\"}:\r\n datasets = datasets.map(process_squad, batched=True, remove_columns=column_names)\r\n# Preprocessing the 
datasets.\r\n# Preprocessing is slighlty different for training and evaluation.\r\nif training_args.do_train:\r\n column_names = datasets[\"train\"].column_names\r\nelse:\r\n column_names = datasets[\"validation\"].column_names\r\n# print(column_names)\r\nquestion_column_name = \"question\" if \"question\" in column_names else column_names[0]\r\ncontext_column_name = \"context\" if \"context\" in column_names else column_names[1]\r\nanswer_column_name = \"answers\" if \"answers\" in column_names else column_names[2]\r\n```\r\n### the second section\r\n```python\r\ndef prepare_train_features(examples):\r\n # Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results\r\n # in one example possible giving several features when a context is long, each of those features having a\r\n # context that overlaps a bit the context of the previous feature.\r\n tokenized_examples = tokenizer(\r\n examples[question_column_name if pad_on_right else context_column_name],\r\n examples[context_column_name if pad_on_right else question_column_name],\r\n truncation=\"only_second\" if pad_on_right else \"only_first\",\r\n max_length=data_args.max_seq_length,\r\n stride=data_args.doc_stride,\r\n return_overflowing_tokens=True,\r\n return_offsets_mapping=True,\r\n padding=\"max_length\" if data_args.pad_to_max_length else False,\r\n )\r\n\r\n # Since one example might give us several features if it has a long context, we need a map from a feature to\r\n # its corresponding example. This key gives us just that.\r\n sample_mapping = tokenized_examples.pop(\"overflow_to_sample_mapping\")\r\n # The offset mappings will give us a map from token to character position in the original context. This will\r\n # help us compute the start_positions and end_positions.\r\n offset_mapping = tokenized_examples.pop(\"offset_mapping\")\r\n\r\n # Let's label those examples!\r\n tokenized_examples[\"start_positions\"] = []\r\n tokenized_examples[\"end_positions\"] = []\r\n\r\n for i, offsets in enumerate(offset_mapping):\r\n # We will label impossible answers with the index of the CLS token.\r\n input_ids = tokenized_examples[\"input_ids\"][i]\r\n cls_index = input_ids.index(tokenizer.cls_token_id)\r\n\r\n # Grab the sequence corresponding to that example (to know what is the context and what is the question).\r\n sequence_ids = tokenized_examples.sequence_ids(i)\r\n\r\n # One example can give several spans, this is the index of the example containing this span of text.\r\n sample_index = sample_mapping[i]\r\n answers = examples[answer_column_name][sample_index]\r\n # print(examples,answers,offset_mapping,tokenized_examples)\r\n # If no answers are given, set the cls_index as answer.\r\n if len(answers) == 0:#len(answers[\"answer_start\"]) == 0:\r\n tokenized_examples[\"start_positions\"].append(cls_index)\r\n tokenized_examples[\"end_positions\"].append(cls_index)\r\n else:\r\n # Start\/end character index of the answer in the text.\r\n start_char = answers[0][\"answer_start\"]\r\n end_char = start_char + len(answers[0][\"text\"])\r\n\r\n # Start token index of the current span in the text.\r\n token_start_index = 0\r\n while sequence_ids[token_start_index] != (1 if pad_on_right else 0):\r\n token_start_index += 1\r\n\r\n # End token index of the current span in the text.\r\n token_end_index = len(input_ids) - 1\r\n while sequence_ids[token_end_index] != (1 if pad_on_right else 0):\r\n token_end_index -= 1\r\n\r\n # Detect if the answer is out of the span (in which case this feature 
is labeled with the CLS index).\r\n if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):\r\n tokenized_examples[\"start_positions\"].append(cls_index)\r\n tokenized_examples[\"end_positions\"].append(cls_index)\r\n else:\r\n # Otherwise move the token_start_index and token_end_index to the two ends of the answer.\r\n # Note: we could go after the last offset if the answer is the last word (edge case).\r\n while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:\r\n token_start_index += 1\r\n tokenized_examples[\"start_positions\"].append(token_start_index - 1)\r\n while offsets[token_end_index][1] >= end_char:\r\n token_end_index -= 1\r\n tokenized_examples[\"end_positions\"].append(token_end_index + 1)\r\n return tokenized_examples\r\n```","I'm glad you managed to fix run_qa.py for your case :)\r\n\r\nRegarding the checksum error, I'm not able to reproduce on my side.\r\nThis errors says that the downloaded file doesn't match the expected file.\r\n\r\nCould you try running this and let me know if you get the same output as me ?\r\n```python\r\nfrom datasets.utils.info_utils import get_size_checksum_dict\r\nfrom datasets import cached_path\r\n\r\nget_size_checksum_dict(cached_path(\"https:\/\/rajpurkar.github.io\/SQuAD-explorer\/dataset\/train-v1.1.json\"))\r\n# {'num_bytes': 30288272, 'checksum': '3527663986b8295af4f7fcdff1ba1ff3f72d07d61a20f487cb238a6ef92fd955'}\r\n```","I run the code,and it show below:\r\n```\r\n>>> from datasets.utils.info_utils import get_size_checksum_dict\r\n>>> from datasets import cached_path\r\n>>> get_size_checksum_dict(cached_path(\"https:\/\/rajpurkar.github.io\/SQuAD-explorer\/dataset\/train-v1.1.json\"))\r\nDownloading: 30.3MB [04:13, 120kB\/s]\r\n{'num_bytes': 30288272, 'checksum': '3527663986b8295af4f7fcdff1ba1ff3f72d07d61a20f487cb238a6ef92fd955'}\r\n```","Alright ! So in this case redownloading the file with `download_mode=\"force_redownload\"` should fix it. 
Can you try using `download_mode=\"force_redownload\"` again ?\r\n\r\nNot sure why it didn't work for you the first time though :\/"],"created_at":1614588091000,"updated_at":1614870566000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"### 1 When I try to train lxmert,and follow the code in README that --dataset name:\r\n```shell \r\npython examples\/question-answering\/run_qa.py --model_name_or_path unc-nlp\/lxmert-base-uncased --dataset_name squad --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_length 384 --doc_stride 128 --output_dir \/home2\/zhenggo1\/checkpoint\/lxmert_squad\r\n```\r\nthe bug is that:\r\n```\r\nDownloading and preparing dataset squad\/plain_text (download: 33.51 MiB, generated: 85.75 MiB, post-processed: Unknown size, total: 119.27 MiB) to \/home2\/zhenggo1\/.cache\/huggingface\/datasets\/squad\/plain_text\/1.0.0\/4c81550d83a2ac7c7ce23783bd8ff36642800e6633c1f18417fb58c3ff50cdd7...\r\nTraceback (most recent call last):\r\n File \"examples\/question-answering\/run_qa.py\", line 501, in \r\n main()\r\n File \"examples\/question-answering\/run_qa.py\", line 217, in main\r\n datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)\r\n File \"\/home2\/zhenggo1\/anaconda3\/envs\/lpot\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 746, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"\/home2\/zhenggo1\/anaconda3\/envs\/lpot\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 573, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/home2\/zhenggo1\/anaconda3\/envs\/lpot\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 633, in _download_and_prepare\r\n self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), \"dataset source files\"\r\n File \"\/home2\/zhenggo1\/anaconda3\/envs\/lpot\/lib\/python3.7\/site-packages\/datasets\/utils\/info_utils.py\", line 39, in verify_checksums\r\n raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https:\/\/rajpurkar.github.io\/SQuAD-explorer\/dataset\/train-v1.1.json']\r\n```\r\nAnd I try to find the [checksum link](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/squad\/dataset_infos.json)\r\n,is the problem plain_text do not have a checksum?\r\n\r\n### 2 When I try to train lxmert,and use local dataset:\r\n```\r\npython examples\/question-answering\/run_qa.py --model_name_or_path unc-nlp\/lxmert-base-uncased --train_file $SQUAD_DIR\/train-v1.1.json --validation_file $SQUAD_DIR\/dev-v1.1.json --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_length 384 --doc_stride 128 --output_dir \/home2\/zhenggo1\/checkpoint\/lxmert_squad\r\n```\r\nThe bug is that \r\n```\r\n['title', 'paragraphs']\r\nTraceback (most recent call last):\r\n File \"examples\/question-answering\/run_qa.py\", line 501, in \r\n main()\r\n File \"examples\/question-answering\/run_qa.py\", line 273, in main\r\n answer_column_name = \"answers\" if \"answers\" in column_names else column_names[2]\r\nIndexError: list index out of range\r\n```\r\nI print the answer_column_name and find that local squad dataset need the package datasets to preprocessing so that the code below can work:\r\n```\r\nif training_args.do_train:\r\n column_names = 
datasets[\"train\"].column_names\r\n else:\r\n column_names = datasets[\"validation\"].column_names\r\n print(datasets[\"train\"].column_names)\r\n question_column_name = \"question\" if \"question\" in column_names else column_names[0]\r\n context_column_name = \"context\" if \"context\" in column_names else column_names[1]\r\n answer_column_name = \"answers\" if \"answers\" in column_names else column_names[2]\r\n``` \r\n## Please tell me how to fix the bug,thks a lot!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1964\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1963","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1963\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1963\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1963\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1963","id":818289967,"node_id":"MDU6SXNzdWU4MTgyODk5Njc=","number":1963,"title":"bug in SNLI dataset ","user":{"login":"dorost1234","id":79165106,"node_id":"MDQ6VXNlcjc5MTY1MTA2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/79165106?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dorost1234","html_url":"https:\/\/github.com\/dorost1234","followers_url":"https:\/\/api.github.com\/users\/dorost1234\/followers","following_url":"https:\/\/api.github.com\/users\/dorost1234\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dorost1234\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dorost1234\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dorost1234\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dorost1234\/orgs","repos_url":"https:\/\/api.github.com\/users\/dorost1234\/repos","events_url":"https:\/\/api.github.com\/users\/dorost1234\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dorost1234\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! The labels -1 correspond to the examples without gold labels in the original snli dataset.\r\nFeel free to remove these examples if you don't need them by using\r\n```python\r\ndata = data.filter(lambda x: x[\"label\"] != -1)\r\n```"],"created_at":1614540980000,"updated_at":1614600089000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nThere is label of -1 in train set of SNLI dataset, please find the code below:\r\n\r\n```\r\nimport numpy as np \r\nimport datasets \r\ndata = datasets.load_dataset(\"snli\")[\"train\"]\r\nlabels = []\r\nfor d in data:\r\n labels.append(d[\"label\"])\r\nprint(np.unique(labels))\r\n```\r\n\r\nand results:\r\n\r\n`[-1 0 1 2]`\r\n\r\nversion of datasets used:\r\n`datasets 1.2.1 \r\n`\r\n\r\nthanks for your help. 
@lhoestq ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1963\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1962","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1962\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1962\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1962\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1962","id":818089156,"node_id":"MDExOlB1bGxSZXF1ZXN0NTgxNDQwNzM4","number":1962,"title":"Fix unused arguments","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq Re-added the arg. 
The ConnectionError in CI seems unrelated to this PR (the same test fails on master as well).","Thanks !\r\nI'm re-running the CI, maybe this was an issue with circleCI","Looks all good now, merged :)"],"created_at":1614480427000,"updated_at":1615429097000,"closed_at":1614789470000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1962","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1962","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1962.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1962.patch"},"body":"Noticed some args in the codebase are not used, so managed to find all such occurrences with Pylance and fix them.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1962\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1961","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1961\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1961\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1961\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1961","id":818077947,"node_id":"MDExOlB1bGxSZXF1ZXN0NTgxNDM3NDI0","number":1961,"title":"Add sst dataset","user":{"login":"patpizio","id":15801338,"node_id":"MDQ6VXNlcjE1ODAxMzM4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15801338?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patpizio","html_url":"https:\/\/github.com\/patpizio","followers_url":"https:\/\/api.github.com\/users\/patpizio\/followers","following_url":"https:\/\/api.github.com\/users\/patpizio\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patpizio\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patpizio\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patpizio\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patpizio\/orgs","repos_url":"https:\/\/api.github.com\/users\/patpizio\/repos","events_url":"https:\/\/api.github.com\/users\/patpizio\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patpizio\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1614478109000,"updated_at":1614854333000,"closed_at":1614854333000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1961","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1961","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1961.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1961.patch"},"body":"Related to #1934—Add the Stanford Sentiment Treebank dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1961\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1960","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1960\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1960\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1960\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1960","id":818073154,"node_id":"MDExOlB1bGxSZXF1ZXN0NTgxNDMzOTY4","number":1960,"title":"Allow stateful function in dataset.map","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq Added a test. If you can come up with a better stateful callable, I'm all ears \ud83d\ude04. ","Sorry I said earlier that it was good to have it inside the loop, my mistake !","@lhoestq Okay, did some refactoring and now the \"cache\" part comes before the for loop. Thanks for the guidance.\r\n\r\nThink this is ready for the final review."],"created_at":1614475745000,"updated_at":1616513209000,"closed_at":1616513209000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1960","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1960","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1960.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1960.patch"},"body":"Removes the \"test type\" section in Dataset.map which would modify the state of the stateful function. Now, the return type of the map function is inferred after processing the first example.\r\n\r\nFixes #1940 \r\n\r\n@lhoestq Not very happy with the usage of `nonlocal`. 
Would like to hear your opinion on this.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1960\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1959","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1959\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1959\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1959\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1959","id":818055644,"node_id":"MDU6SXNzdWU4MTgwNTU2NDQ=","number":1959,"title":"Bug in skip_rows argument of load_dataset function ?","user":{"login":"LedaguenelArthur","id":73159756,"node_id":"MDQ6VXNlcjczMTU5NzU2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/73159756?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/LedaguenelArthur","html_url":"https:\/\/github.com\/LedaguenelArthur","followers_url":"https:\/\/api.github.com\/users\/LedaguenelArthur\/followers","following_url":"https:\/\/api.github.com\/users\/LedaguenelArthur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/LedaguenelArthur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/LedaguenelArthur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/LedaguenelArthur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/LedaguenelArthur\/orgs","repos_url":"https:\/\/api.github.com\/users\/LedaguenelArthur\/repos","events_url":"https:\/\/api.github.com\/users\/LedaguenelArthur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/LedaguenelArthur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi,\r\n\r\ntry `skiprows` instead. This part is not properly documented in the docs it seems.\r\n\r\n@lhoestq I'll fix this as part of a bigger PR that fixes typos in the docs."],"created_at":1614468774000,"updated_at":1615285292000,"closed_at":1615285292000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hello everyone,\r\n\r\nI'm quite new to Git so sorry in advance if I'm breaking some ground rules of issues posting... :\/\r\nI tried to use the load_dataset function, from Huggingface datasets library, on a csv file using the skip_rows argument described on Huggingface page to skip the first row containing column names\r\n\r\n`test_dataset = load_dataset('csv', data_files=['test_wLabel.tsv'], delimiter='\\t', column_names=[\"id\", \"sentence\", \"label\"], skip_rows=1)`\r\n\r\nBut I got the following error message\r\n\r\n`__init__() got an unexpected keyword argument 'skip_rows'`\r\n\r\nHave I used the wrong argument ? 
Am I missing something or is this a bug ?\r\n\r\nThank you very much for your time,\r\nBest regards,\r\nArthur","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1959\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1958","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1958\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1958\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1958\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1958","id":818037548,"node_id":"MDU6SXNzdWU4MTgwMzc1NDg=","number":1958,"title":"XSum dataset download link broken","user":{"login":"himat","id":1156974,"node_id":"MDQ6VXNlcjExNTY5NzQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1156974?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/himat","html_url":"https:\/\/github.com\/himat","followers_url":"https:\/\/api.github.com\/users\/himat\/followers","following_url":"https:\/\/api.github.com\/users\/himat\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/himat\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/himat\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/himat\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/himat\/orgs","repos_url":"https:\/\/api.github.com\/users\/himat\/repos","events_url":"https:\/\/api.github.com\/users\/himat\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/himat\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Never mind, I ran it again and it worked this time. 
Strange."],"created_at":1614462476000,"updated_at":1614462616000,"closed_at":1614462616000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I did \r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"xsum\")\r\n```\r\n\r\nThis returns\r\n`ConnectionError: Couldn't reach http:\/\/bollin.inf.ed.ac.uk\/public\/direct\/XSUM-EMNLP18-Summary-Data-Original.tar.gz`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1958\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1957","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1957\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1957\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1957\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1957","id":818014624,"node_id":"MDU6SXNzdWU4MTgwMTQ2MjQ=","number":1957,"title":"[request] make load_metric api intutive","user":{"login":"stas00","id":10676103,"node_id":"MDQ6VXNlcjEwNjc2MTAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10676103?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stas00","html_url":"https:\/\/github.com\/stas00","followers_url":"https:\/\/api.github.com\/users\/stas00\/followers","following_url":"https:\/\/api.github.com\/users\/stas00\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stas00\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stas00\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stas00\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stas00\/orgs","repos_url":"https:\/\/api.github.com\/users\/stas00\/repos","events_url":"https:\/\/api.github.com\/users\/stas00\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stas00\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1614458634000,"updated_at":1614464470000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"```\r\nmetric = load_metric('glue', 'mrpc', num_process=num_process, process_id=rank)\r\n```\r\n\r\nMay I suggest that `num_process` is confusing as it's singular yet expects a plural value and either \r\n* be deprecated in favor of `num_processes` which is more intuitive since it's plural as its expected value\r\n* or even better why not mimic the established dist environment convention for that purpose, which uses `world_size`. \r\n\r\nSame for `process_id` - why reinvent the naming and needing to explain that this is **NOT** `PID`, when we have `rank` already. That is:\r\n\r\n```\r\nmetric = load_metric('glue', 'mrpc', world_size=world_size, rank=rank)\r\n```\r\n\r\nThis then fits like a glove into the pytorch DDP and alike envs. 
and we just need to call:\r\n\r\n* `dist.get_world_size()`\r\n* `dist.get_rank()`\r\n\r\nSo it'd be as simple as:\r\n\r\n```\r\nmetric = load_metric('glue', 'mrpc', world_size=dist.get_world_size(), rank=dist.get_rank())\r\n```\r\n\r\nFrom: https:\/\/pytorch.org\/docs\/stable\/distributed.html#torch.distributed.init_process_group\r\n\r\n* `world_size (int, optional)` \u2013 Number of processes participating in the job. Required if store is specified.\r\n* `rank (int, optional)` \u2013 Rank of the current process. Required if store is specified.\r\n\r\nAnd may be an example would be useful, so that the user doesn't even need to think about where to get `dist`:\r\n```\r\nimport torch.distributed as dist\r\nif dist.is_initialized():\r\n metric = load_metric(metric_name, world_size=dist.get_world_size(), rank=dist.get_rank())\r\nelse:\r\n metric = load_metric(metric_name)\r\n```\r\n\r\nI'm aware this is pytorch-centric, but it's better than no examples, IMHO.\r\n\r\nThank you.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1957\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1956","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1956\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1956\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1956\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1956","id":818013741,"node_id":"MDU6SXNzdWU4MTgwMTM3NDE=","number":1956,"title":"[distributed env] potentially unsafe parallel execution","user":{"login":"stas00","id":10676103,"node_id":"MDQ6VXNlcjEwNjc2MTAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10676103?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stas00","html_url":"https:\/\/github.com\/stas00","followers_url":"https:\/\/api.github.com\/users\/stas00\/followers","following_url":"https:\/\/api.github.com\/users\/stas00\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stas00\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stas00\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stas00\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stas00\/orgs","repos_url":"https:\/\/api.github.com\/users\/stas00\/repos","events_url":"https:\/\/api.github.com\/users\/stas00\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stas00\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["You can pass the same `experiment_id` for all the metrics of the same group, and use another `experiment_id` for the other groups.\r\nMaybe we can add an environment variable that sets the default value for `experiment_id` ? What do you think ?","Ah, you're absolutely correct, @lhoestq - it's exactly the equivalent of the shared secret. 
Thank you!"],"created_at":1614458325000,"updated_at":1614619482000,"closed_at":1614619482000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"```\r\nmetric = load_metric('glue', 'mrpc', num_process=num_process, process_id=rank)\r\n```\r\n\r\npresumes that there is only one set of parallel processes running - and will intermittently fail if you have multiple sets running as they will surely overwrite each other. Similar to https:\/\/github.com\/huggingface\/datasets\/issues\/1942 (but for a different reason).\r\nThat's why dist environments use some unique to a group identifier so that each group is dealt with separately. \r\n\r\ne.g. the env-way of pytorch dist syncing is done with a unique per set `MASTER_ADDRESS+MASTER_PORT`\r\n\r\nSo ideally this interface should ask for a shared secret to do the right thing.\r\n\r\nI'm not reporting an immediate need, but am only flagging that this will hit someone down the road.\r\n\r\nThis problem can be remedied by adding a new optional `shared_secret` option, which can then be used to differentiate different groups of processes. and this secret should be part of the file lock name and the experiment.\r\n\r\nThank you","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1956\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1955","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1955\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1955\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1955\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1955","id":818010664,"node_id":"MDExOlB1bGxSZXF1ZXN0NTgxMzk2OTA5","number":1955,"title":"typos + grammar","user":{"login":"stas00","id":10676103,"node_id":"MDQ6VXNlcjEwNjc2MTAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10676103?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stas00","html_url":"https:\/\/github.com\/stas00","followers_url":"https:\/\/api.github.com\/users\/stas00\/followers","following_url":"https:\/\/api.github.com\/users\/stas00\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stas00\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stas00\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stas00\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stas00\/orgs","repos_url":"https:\/\/api.github.com\/users\/stas00\/repos","events_url":"https:\/\/api.github.com\/users\/stas00\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stas00\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1614457303000,"updated_at":1614619238000,"closed_at":1614609799000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1955","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1955","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1955.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1955.patch"},"body":"This PR proposes a few typo + grammar fixes, and 
rewrites some sentences in an attempt to improve readability.\r\n\r\nN.B. When referring to the library `datasets` in the docs it is typically used as a singular, and it definitely is a singular when written as \"`datasets` library\", that is \"`datasets` library is ...\" and not \"are ...\".","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1955\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1954","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1954\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1954\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1954\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1954","id":817565563,"node_id":"MDU6SXNzdWU4MTc1NjU1NjM=","number":1954,"title":"add a new column ","user":{"login":"dorost1234","id":79165106,"node_id":"MDQ6VXNlcjc5MTY1MTA2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/79165106?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dorost1234","html_url":"https:\/\/github.com\/dorost1234","followers_url":"https:\/\/api.github.com\/users\/dorost1234\/followers","following_url":"https:\/\/api.github.com\/users\/dorost1234\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dorost1234\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dorost1234\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dorost1234\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dorost1234\/orgs","repos_url":"https:\/\/api.github.com\/users\/dorost1234\/repos","events_url":"https:\/\/api.github.com\/users\/dorost1234\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dorost1234\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github
.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi\r\nnot sure how change the lable after creation, but this is an issue not dataset request. thanks ","Hi ! Currently you have to use `map` . You can see an example of how to do it in this comment: https:\/\/github.com\/huggingface\/datasets\/issues\/853#issuecomment-727872188\r\n\r\nIn the future we'll add support for a more native way of adding a new column ;)"],"created_at":1614363447000,"updated_at":1619707843000,"closed_at":1619707843000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI'd need to add a new column to the dataset, I was wondering how this can be done? thanks \r\n@lhoestq ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1954\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1953","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1953\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1953\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1953\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1953","id":817498869,"node_id":"MDExOlB1bGxSZXF1ZXN0NTgwOTgyMDMz","number":1953,"title":"Documentation for to_csv, to_pandas and 
to_dict","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1614357349000,"updated_at":1614607428000,"closed_at":1614607427000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1953","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1953","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1953.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1953.patch"},"body":"I added these methods to the documentation with a small paragraph.\r\n\r\nI also fixed some formatting issues in the docstrings","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1953\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1952","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1952\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1952\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1952\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1952","id":817428160,"node_id":"MDExOlB1bGxSZXF1ZXN0NTgwOTIyNjQw","number":1952,"title":"Handle timeouts","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I never said the calls were hanging indefinitely, what we 
need is quite different - in the firewalled env with a network, there should be no network calls or they should fail instantly.\r\n\r\nTo make this work I suppose on top of this PR we need:\r\n1. `DATASETS_OFFLINE` env var to force set timeout to 0 globally (or to 0.0001 if 0 has a special meaning of no timeout)\r\n2. `DATASETS_OFFLINE` should guard against failing network calls and not fail the program if it has all the data it needs locally.\r\n\r\nBottom line - if the logic wants to check online if the local file matches online dataset name, let it go wild, but it should fail instantly, recover and use the local file - if one is specified explicitly or cache if there is one. And only if neither was found only then assert.\r\n\r\nI hope this makes sense and is doable.\r\n\r\nI have started on the same approach for transformers https:\/\/github.com\/huggingface\/transformers\/pull\/10407\r\n\r\nThank you, @lhoestq ","Yes that was the first step to add DATASETS_OFFLINE :)\r\n\r\nWith this PR, if a request times out (which couldn't happen before because no time out was set), it falls back on the local files with no error.\r\n\r\nAs you said, setting the timeout to something like 1e-16 makes the requests fail instantly, which is one step forward. One last thing left is to disable request retries and everything will be instant !","Ah, fantastic. Thank you for elucidating that this PR is part of a bigger master plan! ","Merging this one, then I'll open a new PR for the `DATASETS_OFFLINE` env var :)"],"created_at":1614351727000,"updated_at":1614608964000,"closed_at":1614608964000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1952","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1952","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1952.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1952.patch"},"body":"As noticed in https:\/\/github.com\/huggingface\/datasets\/issues\/1939, timeouts were not properly handled when loading a dataset.\r\nThis caused the connection to hang indefinitely when working in a firewalled environment cc @stas00 \r\n\r\nI added a default timeout, and included an option to our offline environment for tests to be able to simulate both connection errors and timeout errors (previously it was simulating connection errors only).\r\n\r\nNow networks calls don't hang indefinitely.\r\nThe default timeout is set to 10sec (we might reduce it).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1952\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1951","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1951\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1951\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1951\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1951","id":817423573,"node_id":"MDExOlB1bGxSZXF1ZXN0NTgwOTE4ODE2","number":1951,"title":"Add cross-platform support for 
datasets-cli","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@mariosasko This is kinda cool! "],"created_at":1614351385000,"updated_at":1615429106000,"closed_at":1614353426000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1951","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1951","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1951.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1951.patch"},"body":"One thing I've noticed while going through the codebase is the usage of `scripts` in `setup.py`. This [answer](https:\/\/stackoverflow.com\/a\/28119736\/14095927) on SO explains it nicely why it's better to use `entry_points` instead of `scripts`. To add cross-platform support to the CLI, this PR replaces `scripts` with `entry_points` in `setup.py` and moves datasets-cli to src\/datasets\/commands\/datasets_cli.py. All *.md and *.rst files are updated accordingly. 
The same changes were made in the transformers repo to add cross-platform ([link to PR](https:\/\/github.com\/huggingface\/transformers\/pull\/4131)).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1951\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1950","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1950\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1950\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1950\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1950","id":817295235,"node_id":"MDExOlB1bGxSZXF1ZXN0NTgwODExMjMz","number":1950,"title":"updated multi_nli dataset with missing fields","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1614340476000,"updated_at":1614596910000,"closed_at":1614596909000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1950","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1950","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1950.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1950.patch"},"body":"1) updated fields which were missing earlier\r\n2) added tags to README\r\n3) updated a few fields of README \r\n4) new dataset_infos.json and dummy files","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1950\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1949","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1949\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1949\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1949\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1949","id":816986936,"node_id":"MDU6SXNzdWU4MTY5ODY5MzY=","number":1949,"title":"Enable Fast Filtering using Arrow 
Dataset","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @gchhablani :)\r\nThanks for proposing your help !\r\n\r\nI'll be doing a refactor of some parts related to filtering in the scope of https:\/\/github.com\/huggingface\/datasets\/issues\/1877\r\nSo I would first wait for this refactor to be done before working on the filtering. In particular because I plan to make things simpler to manipulate.\r\n\r\nYour feedback on this refactor would also be appreciated since it also aims at making the core code more accessible (basically my goal is that no one's ever \"having troubles getting started\" ^^)\r\n\r\nThis will be available in a few days, I will be able to give you more details at that time if you don't mind waiting a bit !","Sure! I don't mind waiting. I'll check the refactor and try to understand what you're trying to do :)"],"created_at":1614308017000,"updated_at":1614367109000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi @lhoestq,\r\n\r\nAs mentioned in Issue #1796, I would love to work on enabling fast filtering\/mapping. Can you please share the expectations? It would be great if you could point me to the relevant methods\/files involved. Or the docs or maybe an overview of `arrow_dataset.py`. 
I only ask this because I am having trouble getting started ;-;\r\n\r\nAny help would be appreciated.\r\n\r\nThanks,\r\nGunjan","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1949\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1948","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1948\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1948\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1948\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1948","id":816689329,"node_id":"MDU6SXNzdWU4MTY2ODkzMjk=","number":1948,"title":"dataset loading logger level","user":{"login":"stas00","id":10676103,"node_id":"MDQ6VXNlcjEwNjc2MTAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10676103?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stas00","html_url":"https:\/\/github.com\/stas00","followers_url":"https:\/\/api.github.com\/users\/stas00\/followers","following_url":"https:\/\/api.github.com\/users\/stas00\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stas00\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stas00\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stas00\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stas00\/orgs","repos_url":"https:\/\/api.github.com\/users\/stas00\/repos","events_url":"https:\/\/api.github.com\/users\/stas00\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stas00\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["These warnings are showed when there's a call to `.map` to say to the user that a dataset is reloaded from the cache instead of being recomputed.\r\nThey are warnings since we want to make sure the users know that it's not recomputed.","Thank you for explaining the intention, @lhoestq \r\n\r\n1. Could it be then made more human-friendly? Currently the hex gibberish tells me nothing of what's really going on. e.g. the following is instructive, IMHO:\r\n\r\n```\r\nWARNING: wmt16\/ro-en\/train dataset was loaded from cache instead of being recomputed\r\nWARNING: wmt16\/ro-en\/validation dataset was loaded from cache instead of being recomputed\r\nWARNING: wmt16\/ro-en\/test dataset was loaded from cache instead of being recomputed\r\n```\r\nnote that it removes the not so useful hex info and tells the user instead which split it's referring to - but probably no harm in keeping the path if it helps the debug. But the key is that now the warning is telling me what it is it's warning me about.\r\n```\r\nWarning:Loading cache path\r\n```\r\non the other hand isn't telling what it is warning about.\r\n\r\nAnd I still suggest this is INFO level, otherwise you need to turn all 'using cache' statements to WARNING to be consistent. The user is most likely well aware the cache is used for models, etc. So this feels very similar.\r\n\r\n2. 
Should there be a way for a user to void warranty by having a flag - `I know I'm expecting the cached version to load if it's available - please do not warn me about it=True`\r\n\r\nTo explain the need: Warnings are a problem, they constantly take attention away because they could be the harbinger of a problem. Therefore I prefer not to have any warnings in the log, and if I get any I usually try to deal with those so that my log is clean. \r\n\r\nIt's less of an issue for somebody doing long runs. It's a huge issue for someone who does a new run every few minutes and on the lookout for any potential problems which is what I have been doing a lot of integrating DeepSpeed and other things. And since there are already problems to deal with during the integration it's nice to have a clean log to start with. \r\n\r\nI hope my need is not unreasonable and I was able to explain it adequately. \r\n\r\nThank you."],"created_at":1614278017000,"updated_at":1614302824000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"on master I get this with `--dataset_name wmt16 --dataset_config ro-en`:\r\n\r\n```\r\nWARNING:datasets.arrow_dataset:Loading cached processed dataset at \/home\/stas\/.cache\/huggingface\/datasets\/wmt16\/ro-en\/1.0.0\/9dc00622c30446e99c4c63d12a484ea4fb653f2f37c867d6edcec839d7eae50f\/cache-2e01bead8cf42e26.arrow\r\nWARNING:datasets.arrow_dataset:Loading cached processed dataset at \/home\/stas\/.cache\/huggingface\/datasets\/wmt16\/ro-en\/1.0.0\/9dc00622c30446e99c4c63d12a484ea4fb653f2f37c867d6edcec839d7eae50f\/cache-ac3bebaf4f91f776.arrow\r\nWARNING:datasets.arrow_dataset:Loading cached processed dataset at \/home\/stas\/.cache\/huggingface\/datasets\/wmt16\/ro-en\/1.0.0\/9dc00622c30446e99c4c63d12a484ea4fb653f2f37c867d6edcec839d7eae50f\/cache-810c3e61259d73a9.arrow\r\n```\r\n\r\nwhy are those WARNINGs? 
Should be INFO, no?\r\n\r\nwarnings should only be used when a user needs to pay attention to something, this is just informative - I'd even say it should be DEBUG, but definitely not WARNING.\r\n\r\nThank you.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1948\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1947","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1947\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1947\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1947\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1947","id":816590299,"node_id":"MDExOlB1bGxSZXF1ZXN0NTgwMjI2MDk5","number":1947,"title":"Update documentation with not in place transforms and update DatasetDict","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1614270198000,"updated_at":1614609414000,"closed_at":1614609413000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1947","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1947","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1947.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1947.patch"},"body":"In #1883 were added the not in-place transforms `flatten`, `remove_columns`, `rename_column` and `cast`.\r\n\r\nI added them to the documentation and added a paragraph on how to use them\r\n\r\nYou can preview the documentation [here](https:\/\/28862-250213286-gh.circle-artifacts.com\/0\/docs\/_build\/html\/processing.html#renaming-removing-casting-and-flattening-columns)\r\n\r\nI also added these methods to the DatasetDict class.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1947\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1946","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1946\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1946\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1946\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1946","id":816526294,"node_id":"MDExOlB1bGxSZXF1ZXN0NTgwMTcyNzI2","number":1946,"title":"Implement Dataset from CSV","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq question about public API: `keep_in_memory` or just `in_memory`?","For consistence I'd say `keep_in_memory`, but no strong opinion.","@lhoestq done!"],"created_at":1614265813000,"updated_at":1615542168000,"closed_at":1615542168000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1946","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1946","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1946.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1946.patch"},"body":"Implement `Dataset.from_csv`.\r\n\r\nAnalogue to #1943.\r\n\r\nIf finally, the scripts should be used instead, at least we can reuse the tests here. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1946\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1945","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1945\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1945\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1945\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1945","id":816421966,"node_id":"MDU6SXNzdWU4MTY0MjE5NjY=","number":1945,"title":"AttributeError: 'DatasetDict' object has no attribute 'concatenate_datasets'","user":{"login":"dorost1234","id":79165106,"node_id":"MDQ6VXNlcjc5MTY1MTA2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/79165106?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dorost1234","html_url":"https:\/\/github.com\/dorost1234","followers_url":"https:\/\/api.github.com\/users\/dorost1234\/followers","following_url":"https:\/\/api.github.com\/users\/dorost1234\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dorost1234\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dorost1234\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dorost1234\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dorost1234\/orgs","repos_url":"https:\/\/api.github.com\/users\/dorost1234\/repos","events_url":"https:\/\/api.github.com\/users\/dorost1234\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dorost1234\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["sorry my mistake, datasets were overwritten closing now, thanks a lot"],"created_at":1614258585000,"updated_at":1614259235000,"closed_at":1614259226000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI am trying to concatenate a list of huggingface datastes as:\r\n\r\n` train_dataset = datasets.concatenate_datasets(train_datasets)\r\n`\r\nHere is the `train_datasets` when I print:\r\n\r\n```\r\n[Dataset({\r\n features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],\r\n num_rows: 120361\r\n}), Dataset({\r\n features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],\r\n num_rows: 2670\r\n}), Dataset({\r\n features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],\r\n num_rows: 6944\r\n}), Dataset({\r\n features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],\r\n num_rows: 38140\r\n}), Dataset({\r\n features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],\r\n num_rows: 173711\r\n}), Dataset({\r\n features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],\r\n num_rows: 1655\r\n}), Dataset({\r\n features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],\r\n num_rows: 4274\r\n}), Dataset({\r\n features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],\r\n num_rows: 2019\r\n}), Dataset({\r\n features: ['attention_mask', 
'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],\r\n num_rows: 2109\r\n}), Dataset({\r\n features: ['attention_mask', 'idx', 'input_ids', 'label', 'question1', 'question2', 'token_type_ids'],\r\n num_rows: 11963\r\n})]\r\n```\r\n\r\nI am getting the following error:\r\n\r\n`AttributeError: 'DatasetDict' object has no attribute 'concatenate_datasets'\r\n`\r\n\r\nI was wondering if you could help me with this issue, thanks a lot ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1945\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1944","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1944\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1944\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1944\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1944","id":816267216,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc5OTU2Nzc3","number":1944,"title":"Add Turkish News Category Dataset (270K - Lite Version)","user":{"login":"yavuzKomecoglu","id":5150963,"node_id":"MDQ6VXNlcjUxNTA5NjM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5150963?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yavuzKomecoglu","html_url":"https:\/\/github.com\/yavuzKomecoglu","followers_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/followers","following_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/orgs","repos_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/repos","events_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I updated your suggestions. Thank you very much for your support. @lhoestq ","> Thanks for changing to ClassLabel :)\r\n> This is all good now !\r\n> \r\n> However I can see changes in other files than the ones for interpress_news_category_tr_lite, can you please fix that ?\r\n> To do so you can create another branch and another PR to only include the interpress_news_category_tr_lite files.\r\n> \r\n> Maybe this happened because of a git rebase ? 
Once you've already pushed your code, please use git merge instead of rebase in order to avoid this.\r\n\r\nThanks for the feedback.\r\nNew PR https:\/\/github.com\/huggingface\/datasets\/pull\/1967"],"created_at":1614246322000,"updated_at":1614707201000,"closed_at":1614623001000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1944","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1944","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1944.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1944.patch"},"body":"This PR adds the Turkish News Categories Dataset (270K - Lite Version) dataset which is a text classification dataset by me, @basakbuluz and @serdarakyol. \r\nThis dataset contains the same news from the current [interpress_news_category_tr dataset](https:\/\/huggingface.co\/datasets\/interpress_news_category_tr) but contains less information, OCR errors are reduced, can be easily separated, and can be divided into 10 classes (\"k\u00fclt\u00fcrsanat\", \"ekonomi\", \"siyaset\", \"e\u011fitim\", \"d\u00fcnya\", \"spor\", \"teknoloji\", \"magazin\", \"sa\u011fl\u0131k\", \"g\u00fcndem\") were rearranged.\r\n\r\n@SBrandeis @lhoestq, can you please review this PR?\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1944\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1943","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1943\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1943\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1943\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1943","id":816160453,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc5ODY5NTk0","number":1943,"title":"Implement Dataset from JSON and JSON Lines","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks @lhoestq. 
I was trying to follow @thomwolf suggestion about integrating that script but as `from_json` method...\r\n> Note that I don't think this is necessary a breaking change, we can still keep the old scripts around\r\n\r\nDo you think there is a better way of doing it?\r\n\r\nI was trying to implement more or less the same logic as in the script, but I confess I assumed the target was in-memory only...","Basically, I was trying to reimplement `Json(datasets.ArrowBasedBuilder)._generate_tables`, and no writing to arrow file (I assumed only in-memory usage). I started with the first \"else\" clause... \r\n\r\nI was planning to remove my `_cast_table_to_info_features` and use `paj.read_json(parse_options=...)` instead (like in the script).","@lhoestq I am wondering why `keep_in_memory` has no effect for JSON...","What's the issue exactly ? Apparently it's correctly passed to as_dataset so I don't find the issue","Nevermind @lhoestq, I found where the problem was in my code... I push!","merging master into this branch should fix the CI issue :)<\/s>\r\n\r\nOops I didn't refresh the page sorry ^^'\r\n\r\nLooks all good !","Good job ! I think we can merge after the last changes regarding the error message and the docstring above :)","@lhoestq Done! And I have also added some tests for the `field` parameter.","Let me add some more tests for dict of lists JSON file, please.","@lhoestq done! ;)","We can merge. Additional work will be done in another PR. ;)"],"created_at":1614237453000,"updated_at":1616060528000,"closed_at":1616060528000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1943","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1943","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1943.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1943.patch"},"body":"Implement `Dataset.from_jsonl`.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1943\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1942","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1942\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1942\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1942\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1942","id":816037520,"node_id":"MDU6SXNzdWU4MTYwMzc1MjA=","number":1942,"title":"[experiment] missing 
default_experiment-1-0.arrow","user":{"login":"stas00","id":10676103,"node_id":"MDQ6VXNlcjEwNjc2MTAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10676103?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stas00","html_url":"https:\/\/github.com\/stas00","followers_url":"https:\/\/api.github.com\/users\/stas00\/followers","following_url":"https:\/\/api.github.com\/users\/stas00\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stas00\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stas00\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stas00\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stas00\/orgs","repos_url":"https:\/\/api.github.com\/users\/stas00\/repos","events_url":"https:\/\/api.github.com\/users\/stas00\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stas00\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi !\r\n\r\nThe cache at `~\/.cache\/huggingface\/metrics` stores the users data for metrics computations (hence the arrow files).\r\n\r\nHowever python modules (i.e. dataset scripts, metric scripts) are stored in `~\/.cache\/huggingface\/modules\/datasets_modules`.\r\n\r\nIn particular the metrics are cached in `~\/.cache\/huggingface\/modules\/datasets_modules\/metrics\/`\r\n\r\nFeel free to take a look at your cache and let me know if you find any issue that would help explaining why you had an issue with `rouge` with no connection. 
I'm doing some tests on my side to try to reproduce the issue you have\r\n","Thank you for clarifying that the metrics files are to be found elsewhere, @lhoestq \r\n\r\n> The cache at ~\/.cache\/huggingface\/metrics stores the users data for metrics computations (hence the arrow files).\r\n\r\ncould it be renamed to reflect that? otherwise it misleadingly suggests that it's the metrics. Perhaps `~\/.cache\/huggingface\/metrics-user-data`?\r\n\r\nAnd there are so many `.lock` files w\/o corresponding files under `~\/.cache\/huggingface\/metrics\/`. Why are they there? \r\n\r\nfor example after I wipe out the dir completely and do one training I end up with:\r\n```\r\n~\/.cache\/huggingface\/metrics\/sacrebleu\/default\/default_experiment-1-0.arrow.lock\r\n```\r\nwhat is that lock file locking when nothing is running?","The lock files come from an issue with filelock (see comment in the code [here](https:\/\/github.com\/benediktschmitt\/py-filelock\/blob\/master\/filelock.py#L394-L398)). Basically on unix there're always .lock files left behind. I haven't dove into this issue","are you sure you need an external lock file? if it's a single purpose locking in the same scope you can lock the caller `__file__` instead, e.g. here is how one can `flock` the script file itself to ensure atomic printing:\r\n\r\n```\r\nimport fcntl\r\ndef printflock(*msgs):\r\n \"\"\" print in multiprocess env so that the outputs from different processes don't get interleaved \"\"\"\r\n with open(__file__, \"r\") as fh:\r\n fcntl.flock(fh, fcntl.LOCK_EX)\r\n try:\r\n print(*msgs)\r\n finally:\r\n fcntl.flock(fh, fcntl.LOCK_UN)\r\n```\r\n","OK, this issue is not about caching but some internal conflict\/race condition it seems, I have just run into it on my normal env:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/metric.py\", line 356, in _finalize\r\n self.data = Dataset(**reader.read_files([{\"filename\": f} for f in file_paths]))\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/arrow_reader.py\", line 236, in read_files\r\n pa_table = self._read_files(files, in_memory=in_memory)\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/arrow_reader.py\", line 171, in _read_files\r\n pa_table: pa.Table = self._get_dataset_from_filename(f_dict, in_memory=in_memory)\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/arrow_reader.py\", line 302, in _get_dataset_from_filename\r\n pa_table = ArrowReader.read_table(filename, in_memory=in_memory)\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/arrow_reader.py\", line 322, in read_table\r\n stream = stream_from(filename)\r\n File \"pyarrow\/io.pxi\", line 782, in pyarrow.lib.memory_map\r\n File \"pyarrow\/io.pxi\", line 743, in pyarrow.lib.MemoryMappedFile._open\r\n File \"pyarrow\/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow\/error.pxi\", line 97, in pyarrow.lib.check_status\r\nFileNotFoundError: [Errno 2] Failed to open local file '\/home\/stas\/.cache\/huggingface\/metrics\/sacrebleu\/default\/default_experiment-1-0.arrow'. 
Detail: [errno 2] No such file or directory\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"examples\/seq2seq\/run_seq2seq.py\", line 655, in \r\n main()\r\n File \"examples\/seq2seq\/run_seq2seq.py\", line 619, in main\r\n test_results = trainer.predict(\r\n File \"\/mnt\/nvme1\/code\/huggingface\/transformers-master\/src\/transformers\/trainer_seq2seq.py\", line 121, in predict\r\n return super().predict(test_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)\r\n File \"\/mnt\/nvme1\/code\/huggingface\/transformers-master\/src\/transformers\/trainer.py\", line 1706, in predict\r\n output = self.prediction_loop(\r\n File \"\/mnt\/nvme1\/code\/huggingface\/transformers-master\/src\/transformers\/trainer.py\", line 1813, in prediction_loop\r\n metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids))\r\n File \"examples\/seq2seq\/run_seq2seq.py\", line 556, in compute_metrics\r\n result = metric.compute(predictions=decoded_preds, references=decoded_labels)\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/metric.py\", line 388, in compute\r\n self._finalize()\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/metric.py\", line 358, in _finalize\r\n raise ValueError(\r\nValueError: Error in finalize: another metric instance is already using the local cache file. Please specify an experiment_id to avoid colision between distributed metric instances.\r\n```\r\n\r\nI'm just running `run_seq2seq.py` under DeepSpeed:\r\n\r\n```\r\nexport BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0,1 deepspeed --num_gpus=2 examples\/seq2seq\/run_seq2seq.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_eval --do_train --do_predict --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --val_max_target_length 128 --warmup_steps 500 --max_train_samples 100 --max_val_samples 100 --max_test_samples 100 --dataset_name wmt16 --dataset_config ro-en --source_prefix \"translate English to Romanian: \" --deepspeed examples\/tests\/deepspeed\/ds_config.json\r\n```\r\n\r\nIt finished the evaluation OK and crashed on the prediction part of the Trainer. But the eval \/ predict parts no longer run under Deepspeed, it's just plain ddp.\r\n\r\nIs this some kind of race condition? It happens intermittently - there is nothing else running at the same time.\r\n\r\nBut if 2 independent instances of the same script were to run at the same time it's clear to see that this problem would happen. Perhaps it'd help to create a unique hash which is shared between all processes in the group and use that as the default experiment id?\r\n","When you're using metrics in a distributed setup, there are two cases:\r\n1. you're doing two completely different experiments (two evaluations) and the 2 metrics jobs have nothing to do with each other\r\n2. you're doing one experiment (one evaluation) but use multiple processes to feed the data to the metric.\r\n\r\nIn case 1. you just need to provide two different `experiment_id` so that the metrics don't collide.\r\nIn case 2. 
they must have the same experiment_id (or use the default one), but in this case you also need to provide the `num_processes` and `process_id`\r\n\r\nIf understand correctly you're in situation 2.\r\n\r\nIf so, you make sure that you instantiate the metrics with both the right `num_processes` and `process_id` parameters ?\r\n\r\nIf they're not set, then the cache files of the two metrics collide it can cause issues. For example if one metric finishes before the other, then the cache file is deleted and the other metric gets a FileNotFoundError\r\nThere's more information in the [documentation](https:\/\/huggingface.co\/docs\/datasets\/loading_metrics.html#distributed-setups) if you want\r\n\r\nHope that helps !","Thank you for explaining that in a great way, @lhoestq \r\n\r\nSo the bottom line is that the `transformers` examples are broken since they don't do any of that. At least `run_seq2seq.py` just does `metric = load_metric(metric_name)`\r\n\r\nWhat test would you recommend to reliably reproduce this bug in `examples\/seq2seq\/run_seq2seq.py`?","To give more context, we are just using the metrics for the `comput_metric` function and nothing else. Is there something else we can use that just applies the function to the full arrays of predictions and labels? Because that's all we need, all the gathering has already been done because the datasets Metric multiprocessing relies on file storage and thus does not work in a multi-node distributed setup (whereas the Trainer does).\r\n\r\nOtherwise, we'll have to switch to something else to compute the metrics :-(","OK, it definitely leads to a race condition in how it's used right now. Here is how you can reproduce it - by injecting a random sleep time different for each process before the locks are acquired. 
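For reference, a minimal sketch of the two setups described above, using the `load_metric` parameters exactly as they appear in the workaround later in this thread; the metric name and the `experiment_id` value are placeholders:

```python
import torch.distributed as dist
from datasets import load_metric

if dist.is_available() and dist.is_initialized():
    # Case 2: one evaluation, several processes feeding the same metric.
    # All ranks keep the default experiment_id but declare how many peers there are.
    metric = load_metric("sacrebleu", num_process=dist.get_world_size(), process_id=dist.get_rank())
else:
    # Case 1 / single process: a distinct experiment_id is enough to avoid
    # colliding with an unrelated run that shares the same cache directory.
    metric = load_metric("sacrebleu", experiment_id="my_experiment")
```

The patch below then shows how the race can be triggered when these parameters are left unset.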
\r\n```\r\n--- a\/src\/datasets\/metric.py\r\n+++ b\/src\/datasets\/metric.py\r\n@@ -348,6 +348,16 @@ class Metric(MetricInfoMixin):\r\n\r\n elif self.process_id == 0:\r\n # Let's acquire a lock on each node files to be sure they are finished writing\r\n+\r\n+ import time\r\n+ import random\r\n+ import os\r\n+ pid = os.getpid()\r\n+ random.seed(pid)\r\n+ secs = random.randint(1, 15)\r\n+ time.sleep(secs)\r\n+ print(f\"sleeping {secs}\")\r\n+\r\n file_paths, filelocks = self._get_all_cache_files()\r\n\r\n # Read the predictions and references\r\n@@ -385,7 +395,10 @@ class Metric(MetricInfoMixin):\r\n\r\n if predictions is not None:\r\n self.add_batch(predictions=predictions, references=references)\r\n+ print(\"FINALIZE START\")\r\n+\r\n self._finalize()\r\n+ print(\"FINALIZE END\")\r\n\r\n self.cache_file_name = None\r\n self.filelock = None\r\n```\r\n\r\nthen run with 2 procs: `python -m torch.distributed.launch --nproc_per_node=2`\r\n```\r\nexport BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 examples\/seq2seq\/run_seq2seq.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_eval --do_train --do_predict --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --val_max_target_length 128 --warmup_steps 500 --max_train_samples 10 --max_val_samples 10 --max_test_samples 10 --dataset_name wmt16 --dataset_config ro-en --source_prefix \"translate English to Romanian: \"\r\n```\r\n\r\n```\r\n***** Running Evaluation *****\r\n Num examples = 10\r\n Batch size = 16\r\n 0%| | 0\/1 [00:00\r\n main()\r\n File \"examples\/seq2seq\/run_seq2seq.py\", line 601, in main\r\n metrics = trainer.evaluate(\r\n File \"\/mnt\/nvme1\/code\/huggingface\/transformers-mp-pp\/src\/transformers\/trainer_seq2seq.py\", line 74, in evaluate\r\n return super().evaluate(eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)\r\n File \"\/mnt\/nvme1\/code\/huggingface\/transformers-mp-pp\/src\/transformers\/trainer.py\", line 1703, in evaluate\r\n output = self.prediction_loop(\r\n File \"\/mnt\/nvme1\/code\/huggingface\/transformers-mp-pp\/src\/transformers\/trainer.py\", line 1876, in prediction_loop\r\n metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids))\r\n File \"examples\/seq2seq\/run_seq2seq.py\", line 556, in compute_metrics\r\n result = metric.compute(predictions=decoded_preds, references=decoded_labels)\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/metric.py\", line 402, in compute\r\n self._finalize()\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/metric.py\", line 370, in _finalize\r\n raise ValueError(\r\nValueError: Error in finalize: another metric instance is already using the local cache file. 
Please specify an experiment_id to avoid colision between distributed metric instances.\r\n```","I tried to adjust `run_seq2seq.py` and trainer to use the suggested dist env:\r\n```\r\n import torch.distributed as dist\r\n metric = load_metric(metric_name, num_process=dist.get_world_size(), process_id=dist.get_rank())\r\n```\r\nand in `trainer.py` added to call just for rank 0:\r\n```\r\n if self.is_world_process_zero() and self.compute_metrics is not None and preds is not None and label_ids is not None:\r\n metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids))\r\n```\r\nand then the process hangs in a deadlock. \r\n\r\nHere is the tb:\r\n```\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/utils\/filelock.py\", line 275 in acquire\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/metric.py\", line 306 in _check_all_processes_locks\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/metric.py\", line 501 in _init_writer\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/metric.py\", line 440 in add_batch\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/metric.py\", line 397 in compute\r\n File \"examples\/seq2seq\/run_seq2seq.py\", line 558 in compute_metrics\r\n File \"\/mnt\/nvme1\/code\/huggingface\/transformers-mp-pp\/src\/transformers\/trainer.py\", line 1876 in prediction_loop\r\n File \"\/mnt\/nvme1\/code\/huggingface\/transformers-mp-pp\/src\/transformers\/trainer.py\", line 1703 in evaluate\r\n File \"\/mnt\/nvme1\/code\/huggingface\/transformers-mp-pp\/src\/transformers\/trainer_seq2seq.py\", line 74 in evaluate\r\n File \"examples\/seq2seq\/run_seq2seq.py\", line 603 in main\r\n File \"examples\/seq2seq\/run_seq2seq.py\", line 651 in \r\n```\r\n\r\nBut this sounds right, since in the above diff I set up a distributed metric and only called one process - so it's blocking on waiting for other processes to do the same.\r\n\r\nSo one working solution is to leave:\r\n\r\n```\r\n metric = load_metric(metric_name)\r\n```\r\nalone, and only call `compute_metrics` from rank 0\r\n```\r\n if self.is_world_process_zero() and self.compute_metrics is not None and preds is not None and label_ids is not None:\r\n metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids))\r\n```\r\n\r\nso we now no longer use the distributed env as far as `datasets` is concerned, it's just a single process.\r\n\r\nAre there any repercussions\/side-effects to this proposed change in Trainer? If it always gathers all inputs on rank 0 then this is how it should have been done in first place - i.e. only run for rank 0. It appears that currently it was re-calculating the metrics on all processes on the same data just to throw the results away other than for rank 0. Unless I missed something.\r\n","But no, since \r\n`\r\n metric = load_metric(metric_name)\r\n`\r\nis called for each process, the race condition is still there. So still getting:\r\n\r\n```\r\nValueError: Error in finalize: another metric instance is already using the local cache file. Please specify an experiment_id to avoid colision between distributed metric instances.\r\n```\r\n\r\ni.e. the only way to fix this is to `load_metric` only for rank 0, but this requires huge changes in the code and all end users' code.\r\n","OK, here is a workaround that works. 
The onus here is absolutely on the user:\r\n\r\n```\r\ndiff --git a\/examples\/seq2seq\/run_seq2seq.py b\/examples\/seq2seq\/run_seq2seq.py\r\nindex 2a060dac5..c82fd83ea 100755\r\n--- a\/examples\/seq2seq\/run_seq2seq.py\r\n+++ b\/examples\/seq2seq\/run_seq2seq.py\r\n@@ -520,7 +520,11 @@ def main():\r\n\r\n # Metric\r\n metric_name = \"rouge\" if data_args.task.startswith(\"summarization\") else \"sacrebleu\"\r\n- metric = load_metric(metric_name)\r\n+ import torch.distributed as dist\r\n+ if dist.is_initialized():\r\n+ metric = load_metric(metric_name, num_process=dist.get_world_size(), process_id=dist.get_rank())\r\n+ else:\r\n+ metric = load_metric(metric_name)\r\n\r\n def postprocess_text(preds, labels):\r\n preds = [pred.strip() for pred in preds]\r\n@@ -548,12 +552,17 @@ def main():\r\n # Some simple post-processing\r\n decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)\r\n\r\n+ kwargs = dict(predictions=decoded_preds, references=decoded_labels)\r\n+ if metric_name == \"rouge\":\r\n+ kwargs.update(use_stemmer=True)\r\n+ result = metric.compute(**kwargs) # must call for all processes\r\n+ if result is None: # only process with rank-0 will return metrics, others None\r\n+ return {}\r\n+\r\n if metric_name == \"rouge\":\r\n- result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)\r\n # Extract a few results from ROUGE\r\n result = {key: value.mid.fmeasure * 100 for key, value in result.items()}\r\n else:\r\n- result = metric.compute(predictions=decoded_preds, references=decoded_labels)\r\n result = {\"bleu\": result[\"score\"]}\r\n\r\n prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]\r\n```\r\n\r\nThis is not user-friendly to say the least. And it's still wasteful as we don't need other processes to do anything.\r\n\r\nBut it solves the current race condition.\r\n\r\nClearly this calls for a design discussion as it's the responsibility of the Trainer to handle this and not user's. Perhaps in the `transformers` land?","I don't see how this could be the responsibility of `Trainer`, who hasn't the faintest idea of what a `datasets.Metric` is. The trainer takes a function `compute_metrics` that goes from predictions + labels to metric results, there is nothing there. That computation is done on all processes \r\n\r\nThe fact a `datasets.Metric` object cannot be used as a simple compute function in a multi-process environment is, in my opinion, a bug in `datasets`. Especially since, as I mentioned before, the multiprocessing part of `datasets.Metric` has a deep flaw since it can't work in a multinode environment. So you actually need to do the job of gather predictions and labels yourself.\r\n\r\nThe changes you are proposing Stas are making the code less readable and also concatenate all the predictions and labels `number_of_processes` times I believe, which is not going to make the metric computation any faster.\r\n\r\n","Right, to clarify, I meant it'd be good to have it sorted on the library side and not requiring the user to figure it out. 
This is too complex and error-prone and if not coded correctly the bug will be intermittent which is even worse.\r\n\r\nOh I guess I wasn't clear in my message - in no way I'm proposing that we use this workaround code - I was just showing what I had to do to make it work.\r\n\r\nWe are on the same page.\r\n\r\n> The changes you are proposing Stas are making the code less readable and also concatenate all the predictions and labels number_of_processes times I believe, which is not going to make the metric computation any faster.\r\n\r\nAnd yes, this is another problem that my workaround introduces. Thank you for pointing it out, @sgugger \r\n","> The fact a datasets.Metric object cannot be used as a simple compute function in a multi-process environment is, in my opinion, a bug in datasets\r\n\r\nYes totally, this use case is supposed to be supported by `datasets`. And in this case there shouldn't be any collision between the metrics. I'm looking into it :)\r\nMy guess is that at one point the metric isn't using the right file name. It's supposed to use one with a unique uuid in order to avoid the collisions.","I just opened #1966 to fix this :)\r\n@stas00 if have a chance feel free to try it !","Thank you, @lhoestq - I will experiment and report back. \r\n\r\nedit: It works! Thank you"],"created_at":1614222135000,"updated_at":1614623611000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"the original report was pretty bad and incomplete - my apologies!\r\n\r\nPlease see the complete version here: https:\/\/github.com\/huggingface\/datasets\/issues\/1942#issuecomment-786336481\r\n\r\n------------\r\n\r\nAs mentioned here https:\/\/github.com\/huggingface\/datasets\/issues\/1939 metrics don't get cached, looking at my local `~\/.cache\/huggingface\/metrics` - there are many `*.arrow.lock` files but zero metrics files.\r\n\r\nw\/o the network I get:\r\n```\r\nFileNotFoundError: [Errno 2] No such file or directory: '~\/.cache\/huggingface\/metrics\/sacrebleu\/default\/default_experiment-1-0.arrow\r\n```\r\nthere is just `~\/.cache\/huggingface\/metrics\/sacrebleu\/default\/default_experiment-1-0.arrow.lock`\r\n\r\nI did run the same `run_seq2seq.py` script on the instance with network and it worked just fine, but only the lock file was left behind.\r\n\r\nthis is with master.\r\n\r\nThank you.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1942\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1941","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1941\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1941\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1941\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1941","id":815985167,"node_id":"MDU6SXNzdWU4MTU5ODUxNjc=","number":1941,"title":"Loading of FAISS index fails for index_name = 
'exact'","user":{"login":"mkserge","id":2992022,"node_id":"MDQ6VXNlcjI5OTIwMjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2992022?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mkserge","html_url":"https:\/\/github.com\/mkserge","followers_url":"https:\/\/api.github.com\/users\/mkserge\/followers","following_url":"https:\/\/api.github.com\/users\/mkserge\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mkserge\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mkserge\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mkserge\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mkserge\/orgs","repos_url":"https:\/\/api.github.com\/users\/mkserge\/repos","events_url":"https:\/\/api.github.com\/users\/mkserge\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mkserge\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting ! 
I'm taking a look","Index training was missing, I fixed it here: https:\/\/github.com\/huggingface\/datasets\/commit\/f5986c46323583989f6ed1dabaf267854424a521\r\n\r\nCan you try again please ?","Works great \ud83d\udc4d I just put a minor comment on the commit, I think you meant to pass the `train_size` from the one obtained from the config.\r\n\r\nThanks for a quick response!"],"created_at":1614216654000,"updated_at":1614263326000,"closed_at":1614263326000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\n\r\nIt looks like loading of FAISS index now fails when using index_name = 'exact'.\r\n\r\nFor example, from the RAG [model card](https:\/\/huggingface.co\/facebook\/rag-token-nq?fbclid=IwAR3bTfhls5U_t9DqsX2Vzb7NhtRHxJxfQ-uwFT7VuCPMZUM2AdAlKF_qkI8#usage).\r\n\r\nRunning `transformers==4.3.2` and datasets installed from source on latest `master` branch.\r\n\r\n```bash\r\n(venv) sergey_mkrtchyan datasets (master) $ python\r\nPython 3.8.6 (v3.8.6:db455296be, Sep 23 2020, 13:31:39)\r\n[Clang 6.0 (clang-600.0.57)] on darwin\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration\r\n>>> tokenizer = RagTokenizer.from_pretrained(\"facebook\/rag-token-nq\")\r\n>>> retriever = RagRetriever.from_pretrained(\"facebook\/rag-token-nq\", index_name=\"exact\", use_dummy_dataset=True)\r\nUsing custom data configuration dummy.psgs_w100.nq.no_index-dummy=True,with_index=False\r\nReusing dataset wiki_dpr (\/Users\/sergey_mkrtchyan\/.cache\/huggingface\/datasets\/wiki_dpr\/dummy.psgs_w100.nq.no_index-dummy=True,with_index=False\/0.0.0\/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb)\r\nUsing custom data configuration dummy.psgs_w100.nq.exact-50b6cda57ff32ab4\r\nReusing dataset wiki_dpr (\/Users\/sergey_mkrtchyan\/.cache\/huggingface\/datasets\/wiki_dpr\/dummy.psgs_w100.nq.exact-50b6cda57ff32ab4\/0.0.0\/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb)\r\n 0%| | 0\/10 [00:00\", line 1, in \r\n File \"\/Users\/sergey_mkrtchyan\/workspace\/cformers\/venv\/lib\/python3.8\/site-packages\/transformers\/models\/rag\/retrieval_rag.py\", line 425, in from_pretrained\r\n return cls(\r\n File \"\/Users\/sergey_mkrtchyan\/workspace\/cformers\/venv\/lib\/python3.8\/site-packages\/transformers\/models\/rag\/retrieval_rag.py\", line 387, in __init__\r\n self.init_retrieval()\r\n File \"\/Users\/sergey_mkrtchyan\/workspace\/cformers\/venv\/lib\/python3.8\/site-packages\/transformers\/models\/rag\/retrieval_rag.py\", line 458, in init_retrieval\r\n self.index.init_index()\r\n File \"\/Users\/sergey_mkrtchyan\/workspace\/cformers\/venv\/lib\/python3.8\/site-packages\/transformers\/models\/rag\/retrieval_rag.py\", line 284, in init_index\r\n self.dataset = load_dataset(\r\n File \"\/Users\/sergey_mkrtchyan\/workspace\/huggingface\/datasets\/src\/datasets\/load.py\", line 750, in load_dataset\r\n ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)\r\n File \"\/Users\/sergey_mkrtchyan\/workspace\/huggingface\/datasets\/src\/datasets\/builder.py\", line 734, in as_dataset\r\n datasets = utils.map_nested(\r\n File \"\/Users\/sergey_mkrtchyan\/workspace\/huggingface\/datasets\/src\/datasets\/utils\/py_utils.py\", line 195, in map_nested\r\n return function(data_struct)\r\n File \"\/Users\/sergey_mkrtchyan\/workspace\/huggingface\/datasets\/src\/datasets\/builder.py\", line 769, in 
_build_single_dataset\r\n post_processed = self._post_process(ds, resources_paths)\r\n File \"\/Users\/sergey_mkrtchyan\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/wiki_dpr\/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb\/wiki_dpr.py\", line 205, in _post_process\r\n dataset.add_faiss_index(\"embeddings\", custom_index=index)\r\n File \"\/Users\/sergey_mkrtchyan\/workspace\/huggingface\/datasets\/src\/datasets\/arrow_dataset.py\", line 2516, in add_faiss_index\r\n super().add_faiss_index(\r\n File \"\/Users\/sergey_mkrtchyan\/workspace\/huggingface\/datasets\/src\/datasets\/search.py\", line 416, in add_faiss_index\r\n faiss_index.add_vectors(self, column=column, train_size=train_size, faiss_verbose=faiss_verbose)\r\n File \"\/Users\/sergey_mkrtchyan\/workspace\/huggingface\/datasets\/src\/datasets\/search.py\", line 281, in add_vectors\r\n self.faiss_index.add(vecs)\r\n File \"\/Users\/sergey_mkrtchyan\/workspace\/cformers\/venv\/lib\/python3.8\/site-packages\/faiss\/__init__.py\", line 104, in replacement_add\r\n self.add_c(n, swig_ptr(x))\r\n File \"\/Users\/sergey_mkrtchyan\/workspace\/cformers\/venv\/lib\/python3.8\/site-packages\/faiss\/swigfaiss.py\", line 3263, in add\r\n return _swigfaiss.IndexHNSW_add(self, n, x)\r\nRuntimeError: Error in virtual void faiss::IndexHNSW::add(faiss::Index::idx_t, const float *) at \/Users\/runner\/work\/faiss-wheels\/faiss-wheels\/faiss\/faiss\/IndexHNSW.cpp:356: Error: 'is_trained' failed\r\n>>>\r\n```\r\n\r\nThe issue seems to be related to the scalar quantization in faiss added in this commit: 8c5220307c33f00e01c3bf7b8. Reverting it fixes the issue.\r\n\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1941\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1940","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1940\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1940\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1940\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1940","id":815770012,"node_id":"MDU6SXNzdWU4MTU3NzAwMTI=","number":1940,"title":"Side effect when filtering data due to `does_function_return_dict` call in 
`Dataset.map()`","user":{"login":"francisco-perez-sorrosal","id":918006,"node_id":"MDQ6VXNlcjkxODAwNg==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/918006?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/francisco-perez-sorrosal","html_url":"https:\/\/github.com\/francisco-perez-sorrosal","followers_url":"https:\/\/api.github.com\/users\/francisco-perez-sorrosal\/followers","following_url":"https:\/\/api.github.com\/users\/francisco-perez-sorrosal\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/francisco-perez-sorrosal\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/francisco-perez-sorrosal\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/francisco-perez-sorrosal\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/francisco-perez-sorrosal\/orgs","repos_url":"https:\/\/api.github.com\/users\/francisco-perez-sorrosal\/repos","events_url":"https:\/\/api.github.com\/users\/francisco-perez-sorrosal\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/francisco-perez-sorrosal\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for the report !\r\n\r\nCurrently we don't have a way to let the user easily disable this behavior.\r\nHowever I agree that we should support stateful processing functions, ideally by removing `does_function_return_dict`.\r\n\r\nWe needed this function in order to know whether the `map` functions needs to write data or not. if `does_function_return_dict` returns False then we don't write anything.\r\n\r\nInstead of checking the output of the processing function outside of the for loop that iterates through the dataset to process it, we can check the output of the first processed example and at that point decide if we need to write data or not.\r\n\r\nTherefore it's definitely possible to fix this unwanted behavior, any contribution going into this direction is welcome :)","Thanks @mariosasko for the PR!"],"created_at":1614194336000,"updated_at":1616513209000,"closed_at":1616513209000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi there!\r\n\r\nIn my codebase I have a function to filter rows in a dataset, selecting only a certain number of examples per class. 
The function passes a extra argument to maintain a counter of the number of dataset rows\/examples already selected per each class, which are the ones I want to keep in the end:\r\n\r\n```python\r\n def fill_train_examples_per_class(example, per_class_limit: int, counter: collections.Counter):\r\n label = int(example['label'])\r\n current_counter = counter.get(label, 0)\r\n if current_counter < per_class_limit:\r\n counter[label] = current_counter + 1\r\n return True\r\n return False\r\n```\r\n\r\nAt some point I invoke it through the `Dataset.filter()` method in the `arrow_dataset.py` module like this:\r\n\r\n```python\r\n...\r\nkwargs = {\"per_class_limit\": train_examples_per_class_limit, \"counter\": Counter()}\r\ndatasets['train'] = datasets['train'].filter(fill_train_examples_per_class, num_proc=1, fn_kwargs=kwargs)\r\n...\r\n```\r\n\r\nThe problem is that, passing a stateful container (the counter,) provokes a side effect in the new filtered dataset obtained. This is due to the fact that at some point in `filter()`, the `map()`'s function `does_function_return_dict` is invoked in line [1290](https:\/\/github.com\/huggingface\/datasets\/blob\/96578adface7e4bc1f3e8bafbac920d72ca1ca60\/src\/datasets\/arrow_dataset.py#L1290). \r\n\r\nWhen this occurs, the state of the counter is initially modified by the effects of the function call on the 1 or 2 rows selected in lines 1288 and 1289 of the same file (which are marked as `test_inputs` & `test_indices` respectively in lines 1288 and 1289. This happens out of the control of the user (which for example can't reset the state of the counter before continuing the execution,) provoking in the end an undesired side effect in the results obtained. \r\n\r\nIn my case, the resulting dataset -despite of the counter results are ok- lacks an instance of the classes 0 and 1 (which happen to be the classes of the first two examples of my dataset.) The rest of the classes I have in my dataset, contain the right number of examples as they were not affected by the effects of `does_function_return_dict` call.\r\n\r\nI've debugged my code extensively and made a workaround myself hardcoding the necessary stuff (basically putting `update_data=True` in line 1290,) and then I obtain the results I expected without the side effect.\r\n\r\nIs there a way to avoid that call to `does_function_return_dict` in map()'s line 1290 ? (e.g. 
extracting the required information that `does_function_return_dict` returns without making the testing calls to the user function on dataset rows 0 & 1) \r\n\r\nThanks in advance,\r\n\r\nFrancisco Perez-Sorrosal\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1940\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1939","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1939\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1939\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1939\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1939","id":815680510,"node_id":"MDU6SXNzdWU4MTU2ODA1MTA=","number":1939,"title":"[firewalled env] OFFLINE mode","user":{"login":"stas00","id":10676103,"node_id":"MDQ6VXNlcjEwNjc2MTAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10676103?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stas00","html_url":"https:\/\/github.com\/stas00","followers_url":"https:\/\/api.github.com\/users\/stas00\/followers","following_url":"https:\/\/api.github.com\/users\/stas00\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stas00\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stas00\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stas00\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stas00\/orgs","repos_url":"https:\/\/api.github.com\/users\/stas00\/repos","events_url":"https:\/\/api.github.com\/users\/stas00\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stas00\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/
\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting and for all the details and suggestions.\r\n\r\nI'm totally in favor of having a HF_DATASETS_OFFLINE env variable to disable manually all the connection checks, remove retries etc.\r\n\r\nMoreover you may know that the use case that you are mentioning is already supported from `datasets` 1.3.0, i.e. you already can:\r\n- first load datasets and metrics from an instance with internet connection\r\n- then be able to reload datasets and metrics from another instance without connection (as long as the filesystem is shared)\r\n\r\nThis is already implemented, but currently it only works if the requests return a `ConnectionError` (or any error actually). Not sure why it would hang instead of returning an error.\r\n\r\nMaybe this is just a issue with the timeout value being not set or too high ?\r\nIs there a way I can have access to one of the instances on which there's this issue (we can discuss this offline) ?\r\n","I'm on master, so using all the available bells and whistles already.\r\n\r\nIf you look at the common issues - it for example tries to look up files if they appear in `_PACKAGED_DATASETS_MODULES` which it shouldn't do.\r\n\r\n--------------\r\n\r\nYes, there is a nuance to it. As I mentioned it's firewalled - that is it has a network but making any calls outside - it just hangs in:\r\n\r\n```\r\nsin_addr=inet_addr(\"xx.xx.xx.xx\")}, [28->16]) = 0\r\nclose(5) = 0\r\nsocket(AF_INET, SOCK_STREAM|SOCK_CLOEXEC, IPPROTO_TCP) = 5\r\nconnect(5, {sa_family=AF_INET, sin_port=htons(3128), sin_addr=inet_addr(\"yy.yy.yy.yy\")}, 16^C) = ? 
ERESTARTSYS (To be restarted if SA_RESTART is set)\r\n```\r\nuntil it times out.\r\n\r\nThat's why we need to be able to tell the software that there is no network to rely on even if there is one (good for testing too).\r\n\r\nSo what I'm thinking is that this is a simple matter of pre-ambling any network call wrappers with:\r\n\r\n```\r\nif HF_DATASETS_OFFLINE:\r\n assert \"Attempting to make a network call under Offline mode\"\r\n```\r\n\r\nand then fixing up if there is anything else to fix to make it work.\r\n\r\n--------------\r\n\r\nOtherwise I think the only other problem I encountered is that we need to find a way to pre-cache metrics, for some reason it's not caching it and wanting to fetch it from online.\r\n\r\nWhich is extra strange since it already has those files in the `datasets` repo itself that is on the filesystem.\r\n\r\nThe workaround I had to do is to copy `rouge\/rouge.py` (with the parent folder) from the datasets repo to the current dir - and then it proceeded.","Ok understand better the hanging issue.\r\nI guess catching connection errors is not enough, we should also avoid all the hangings.\r\nCurrently the offline mode tests are only done by simulating an instant connection fail that returns an error, let's have another connection mock that hangs instead.\r\n\r\nI'll also take a look at why you had to do this for `rouge`.\r\n","FWIW, I think instant failure on the behalf of a network call is the simplest solution to correctly represent the environment and having the caller to sort it out is the next thing to do, since here it is the case of having no functional network, it's just that the software doesn't know this is the case, because there is some network. So we just need to help it to bail out instantly rather than hang waiting for it to time out. And afterwards everything else you said.","Update on this: \r\n\r\nI managed to create a mock environment for tests that makes the connections hang until timeout.\r\nI managed to reproduce the issue you're having in this environment.\r\n\r\nI'll update the offline test cases to also test the robustness to connection hangings, and make sure we set proper timeouts where it's needed in the code. This should cover the _automatic_ section you mentioned.","Fabulous! I'm glad you were able to reproduce the issues, @lhoestq!","I lost access to the firewalled setup, but I emulated it with:\r\n\r\n```\r\nsudo ufw enable\r\nsudo ufw default deny outgoing\r\n```\r\n(thanks @mfuntowicz)\r\n\r\nI was able to test `HF_DATASETS_OFFLINE=1` and it worked great - i.e. didn't try to reach out with it and used the cached files instead.\r\n\r\nThank you!"],"created_at":1614186822000,"updated_at":1614920994000,"closed_at":1614920994000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"This issue comes from a need to be able to run `datasets` in a firewalled env, which currently makes the software hang until it times out, as it's unable to complete the network calls.\r\n\r\nI propose the following approach to solving this problem, using the example of `run_seq2seq.py` as a sample program. There are 2 possible ways to going about it.\r\n\r\n## 1. 
Manual\r\n\r\nmanually prepare data and metrics files, that is transfer to the firewalled instance the dataset and the metrics and run:\r\n\r\n```\r\nDATASETS_OFFLINE=1 run_seq2seq.py --train_file xyz.csv --validation_file xyz.csv ...\r\n```\r\n\r\n`datasets` must not make any network calls and if there is a logic to do that and something is missing it should assert that this or that action requires network and therefore it can't proceed.\r\n\r\n## 2. Automatic\r\n\r\nIn some clouds one can prepare a datastorage ahead of time with a normal networked environment but which doesn't have gpus and then one switches to the gpu instance which is firewalled, but it can access all the cached data. This is the ideal situation, since in this scenario we don't have to do anything manually, but simply run the same application twice:\r\n\r\n1. on the non-firewalled instance:\r\n```\r\nrun_seq2seq.py --dataset_name wmt16 --dataset_config ro-en ...\r\n```\r\n\r\nwhich should download and cached everything.\r\n\r\n2. and then immediately after on the firewalled instance, which shares the same filesystem\r\n```\r\nDATASETS_OFFLINE=1 run_seq2seq.py --dataset_name wmt16 --dataset_config ro-en ...\r\n```\r\n\r\nand the metrics and datasets should be cached by the invocation number 1 and any network calls be skipped and if the logic is missing data it should assert and not try to fetch any data from online.\r\n\r\n## Common Issues\r\n\r\n1. for example currently `datasets` tries to look up online datasets if the files contain json or csv, despite the paths already provided\r\n\r\n```\r\n if dataset and path in _PACKAGED_DATASETS_MODULES:\r\n```\r\n\r\n2. it has an issue with metrics. e.g. I had to manually copy `rouge\/rouge.py` from the `datasets` repo to the current dir - or it was hanging.\r\n\r\nI had to comment out `head_hf_s3(...)` calls to make things work. 
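As a sketch of the fail-fast guard proposed above, applied to helpers like the one mentioned here; this is a hypothetical wrapper illustrating the intended behaviour, not the actual implementation in `datasets`:

```python
import os

HF_DATASETS_OFFLINE = os.environ.get("HF_DATASETS_OFFLINE", "0") == "1"

def guarded_network_call(fn, *args, **kwargs):
    # Bail out immediately instead of hanging until the connection times out.
    if HF_DATASETS_OFFLINE:
        raise ConnectionError(f"HF_DATASETS_OFFLINE=1: refusing to call {fn.__name__}")
    return fn(*args, **kwargs)

# usage (hypothetical): wrap every helper that would otherwise reach out to the Hub,
# e.g. guarded_network_call(head_hf_s3, ...)
```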
So all those `try: head_hf_s3(...)` shouldn't be tried with `DATASETS_OFFLINE=1`\r\n\r\nHere is the corresponding issue for `transformers`: https:\/\/github.com\/huggingface\/transformers\/issues\/10379\r\n\r\nThanks.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1939\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1938","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1938\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1938\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1938\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1938","id":815647774,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc5NDQyNDkw","number":1938,"title":"Disallow ClassLabel with no names","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1614184677000,"updated_at":1614252449000,"closed_at":1614252449000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1938","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1938","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1938.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1938.patch"},"body":"It was possible to create a ClassLabel without specifying the names or the number of classes.\r\nThis was causing silent issues as in #1936 and breaking the conversion methods str2int and int2str.\r\n\r\ncc @justin-yan ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1938\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1937","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1937\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1937\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1937\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1937","id":815163943,"node_id":"MDU6SXNzdWU4MTUxNjM5NDM=","number":1937,"title":"CommonGen dataset page shows an error OSError: [Errno 28] No space 
left on device","user":{"login":"yuchenlin","id":10104354,"node_id":"MDQ6VXNlcjEwMTA0MzU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10104354?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yuchenlin","html_url":"https:\/\/github.com\/yuchenlin","followers_url":"https:\/\/api.github.com\/users\/yuchenlin\/followers","following_url":"https:\/\/api.github.com\/users\/yuchenlin\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yuchenlin\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yuchenlin\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yuchenlin\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yuchenlin\/orgs","repos_url":"https:\/\/api.github.com\/users\/yuchenlin\/repos","events_url":"https:\/\/api.github.com\/users\/yuchenlin\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yuchenlin\/received_events","type":"User","site_admin":false},"labels":[{"id":2107841032,"node_id":"MDU6TGFiZWwyMTA3ODQxMDMy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/nlp-viewer","name":"nlp-viewer","color":"94203D","default":false,"description":""}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Facing the same issue for [Squad](https:\/\/huggingface.co\/datasets\/viewer\/?dataset=squad) and [TriviaQA](https:\/\/huggingface.co\/datasets\/viewer\/?dataset=trivia_qa) datasets as well.","We just fixed the issue, thanks for reporting !"],"created_at":1614149253000,"updated_at":1614337806000,"closed_at":1614337806000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"The page of the CommonGen data https:\/\/huggingface.co\/datasets\/viewer\/?dataset=common_gen shows \r\n![image](https:\/\/user-images.githubusercontent.com\/10104354\/108959311-1865e600-7629-11eb-868c-cf4cb27034ea.png)\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1937\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1936","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1936\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1936\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1936\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1936","id":814726512,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc4NjY3NTQ4","number":1936,"title":"[WIP] Adding Support for Reading Pandas 
Category","user":{"login":"justin-yan","id":7731709,"node_id":"MDQ6VXNlcjc3MzE3MDk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7731709?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/justin-yan","html_url":"https:\/\/github.com\/justin-yan","followers_url":"https:\/\/api.github.com\/users\/justin-yan\/followers","following_url":"https:\/\/api.github.com\/users\/justin-yan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/justin-yan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/justin-yan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/justin-yan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/justin-yan\/orgs","repos_url":"https:\/\/api.github.com\/users\/justin-yan\/repos","events_url":"https:\/\/api.github.com\/users\/justin-yan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/justin-yan\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks ! could you maybe add a few tests in test_arrow_dataset.py to make sure from_pandas works as expected with categorical types ?\r\n\r\nIn particular I'm pretty sure that if you now try to `cast` the dataset to the same features at its current features, it will break instead of just being a no-op.\r\nThis is because `features.type` returns an arrow int64 type for the classlabel column instead of the arrow dictionary type that you have in the arrow table. There are two issues in this case:\r\n- it will try to replace the arrow type from dictionary to int64 instead of being a no-op\r\n- it will crash because pyarrow is not able to cast a dictionary to int64 (even if it's actually possible do cast the column by hand by accessing the sub-array of the dictionary array containing the indices\/integers)\r\n\r\nIt would be awesome to fix this case ! Ideally the arrow `pa_type` of classlabel ([here](https:\/\/github.com\/huggingface\/datasets\/blob\/7072e1becd69d421d863374b825e3da4c6551798\/src\/datasets\/features.py#L558)) should be an arrow dictionary type. This should fix the issue. Then we can start working on backward compatibility.\r\n\r\nLet me know if you have questions or if I can help.\r\nIn particular if there is some glue-ing to do I can take care of that if you want ;)\r\n\r\n--------------\r\n\r\nAlso just a few information regarding the functions you mentioned\r\n\r\n`int2str` and `str2int` are used by users to transforms the labels if they want to. Here sine ClassLabel is instantiated without the class names, they would crash. I was about to make a PR to disallow the creation of an empty ClassLabel feature type.\r\nTherefore can you provide class_names= when creating the ClassLabel ?\r\n\r\n`encode_example` is mostly used with a dataset builder (e.g. squad.py) so it's not used when using .from_pandas.\r\n\r\n\r\n","Got it - that's super helpful, I was trying to figure out what would break!\r\n\r\nI think there are two issues we're discussing here:\r\n\r\n1. modifying the pa_type of ClassLabel: totally agree with you on that one if that's OK from a back-compat perspective. (i.e. are users of `datasets` not supposed to access or use the .pa_type attribute of ClassLabel?)\r\n2. 
creating a ClassLabel requires information that's not present on the pa.DictionaryType object: I think the crux of the problem is that at this line (https:\/\/github.com\/huggingface\/datasets\/pull\/1936\/files#diff-54081ede051fd0a7ef65748c481cc06f90209f01bb89968747089d13a2ca052bR933) - you only have access to the `pa_type`, which is `DictionaryType[int8, string]`. I've unpacked it and looked at all of the available methods, and I don't believe that any of the actual values (\"names\") are present - those are stored on the `pyarrow.DictArray.dictionary` attribute (i.e. as data, not on the pyarrow.DataType) - so in order to actually be able to instantiate the ClassLabel with the names= parameter, we need to pass in more information to this method.\r\n\r\nWe *could* mostly accomplish this by modifying https:\/\/github.com\/huggingface\/datasets\/pull\/1936\/files#diff-54081ede051fd0a7ef65748c481cc06f90209f01bb89968747089d13a2ca052bR909 to accept a pyarrow Table in addition to the type, and it's not too difficult to do, but it feels a little bit off to me:\r\n\r\n- It feels a bit off that a \"schema\" definition will change depending on what data gets added to the dataset. In particular, if someone adds rows or concatenates two datasets, the ClassLabel \"names\" will also need to change, right? I think maybe we're getting around this because a Dataset is immutable (I think?) and so any new dataset is freshly constructed, but for example - I think this check wouldn't work for `ClassLabel`s if we were to compare the `Dataset.features` instead of the underlying pyarrow type https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/arrow_dataset.py#L2664\r\n- To that end I wonder if ClassLabel should actually just be the \"type\" akin to Category, and the \"names\" should be considered \"data\" and not part of the \"type\"? Similar to how pyarrow maintains two data objects - the array of indices and the array of string values.\r\n\r\nWith that in mind, I'm wondering if you *should* allow an empty ClassLabel (and`int2str`, etc. can be updated to have more descriptive error messages if labels aren't provided or inferred), and if the underlying data is a pa.DictionaryType, then the names can be inferred and applied at these points in the code:\r\n- https:\/\/github.com\/huggingface\/datasets\/blob\/96578adface7e4bc1f3e8bafbac920d72ca1ca60\/src\/datasets\/arrow_dataset.py#L274\r\n- https:\/\/github.com\/huggingface\/datasets\/blob\/96578adface7e4bc1f3e8bafbac920d72ca1ca60\/src\/datasets\/arrow_dataset.py#L686\r\n- https:\/\/github.com\/huggingface\/datasets\/blob\/96578adface7e4bc1f3e8bafbac920d72ca1ca60\/src\/datasets\/arrow_dataset.py#L673\r\n\r\nI think perhaps the mismatch here is when the data is stored on disk as an int there should be a convenient way of saying \"this is a dictionary and here are some explicitly provided labels\", whereas when it's stored as a string, we'd ideally like to say \"this is a Category and please condense the representation and automatically infer the labels\".\r\n\r\nSorry for the long comment! Hopefully my thoughts make sense - thanks for taking the time to discuss!","Yes that makes sense. 
I completely forgot that the label names of an arrow Dictionary type were not stored in the type but in the DictionaryArray.\r\n\r\nThis is made me realize that it's actually pretty unpractical and I feel that handling this can add unnecessary complexity in the handling of dtypes.\r\nMore specifically:\r\n- it's not possible to create a DictionaryArray from a call to pyarrow.array with python objects, which is the function we use to convert python objects to pyarrow objects (or we would need to convert the python objects to pandas categorical series beforehand but it doesn't work for nested types)\r\n- casting nested types containing Dictionary types would require a lot of array manipulations since it's not compatible with pyarrow.array.cast\r\n\r\nI feel like the original feature request (support of pandas Categorical) should be addressable without adding so much complexity to the library.\r\n\r\nIf we admit that we don't want to deal with arrow Dictionary type, maybe we can simply convert the pandas categorical series to an int64 series and set the feature type to the right ClassLabel in `from_pandas`. We can have the reverse operation in `to_pandas`. This way we don't need to support the arrow DictionaryType and so we can keep simple\/accessible code for conversion from python to arrow and also for type casting. Let me know what you think.\r\n\r\nIn the future depending on the usage of the ClassLabel types with pandas\/pyarrow we might reconsider this but for now I believe this simple solution is enough.","I like that idea! Let me try working up a PR for this","OK! I just whipped up the `from_pandas()` portion of this PR, and it works, though I'm not *super* familiar with the available APIs so I'm not sure if there's a more \"vectorized\" way of doing all of these updates - so happy to get some feedback and iterate!\r\n\r\nApologies for multiple commits - I realized how to solve a few different problems right after I gave up and pushed with the intent to ask for help :-)\r\n\r\nI wanted to get some guidance on how to handle the reverse direction: I think there are two main areas to look at, `.to_pandas()` and also `.set_format('pandas')` and then pulling out a dataframe like so: `dataset[:]`. Is there a single place where I can handle both of these cases at once or do these need to be handled independently?","Thanks ! This is awesome :) \r\nCould you also add a test ? There is already `test_to_pandas` in test_arrow_dataset.py\r\nFeel free to complete this test to make sure it works for Categorical :)\r\n\r\nTo make it work with the \"pandas\" formating (when you do `set_format(\"pandas\")` and then query `dataset[0]`, `dataset[:]`, etc.), you can take a look and the `PandasFormatter` in formatting.py\r\nIt takes a pyarrow table as input of its formatting methods (one method for rows, one for columns and one for batches) and returns a pandas DataFrame (or a Series for the method for formatting a column). 
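A rough sketch of the direction suggested above, i.e. storing the categorical column as integer codes and carrying the labels in a `ClassLabel` feature; the exact `from_pandas`/`to_pandas` internals may differ, so this is only illustrative:

```python
import pandas as pd
from datasets import ClassLabel, Dataset, Features

df = pd.DataFrame({"label": pd.Series(["a", "b", "c", "a"], dtype="category")})

# pandas Categorical -> integer codes + a ClassLabel carrying the category names
names = list(df["label"].cat.categories)
ds = Dataset.from_pandas(
    df.assign(label=df["label"].cat.codes),
    features=Features({"label": ClassLabel(names=names)}),
)

# reverse operation: integer codes -> pandas Categorical
out = ds.to_pandas()
out["label"] = pd.Categorical.from_codes(out["label"], categories=names)
```

This keeps the arrow side as plain integers and avoids having to support pyarrow DictionaryType in nested features.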
You can cast to Categorical in each one of the formatter methods and it should work directly when you use a pandas-formatted dataset.\r\n\r\nThis formatter can then also be used in `to_pandas` (currently it does `pa_table.to_pandas()` but `PandasFormatter().format_batch(pa_table)` can be used instead)."],"created_at":1614105174000,"updated_at":1615273745000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1936","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1936","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1936.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1936.patch"},"body":"@lhoestq - continuing our conversation from https:\/\/github.com\/huggingface\/datasets\/issues\/1906#issuecomment-784247014\r\n\r\nThe goal of this PR is to support `Dataset.from_pandas(df)` where the dataframe contains a Category.\r\n\r\nJust the 4 line change below actually does seem to work:\r\n\r\n```\r\n>>> from datasets import Dataset\r\n>>> import pandas as pd\r\n>>> df = pd.DataFrame(pd.Series([\"a\", \"b\", \"c\", \"a\"], dtype=\"category\"))\r\n>>> ds = Dataset.from_pandas(df)\r\n>>> ds.to_pandas()\r\n 0\r\n0 a\r\n1 b\r\n2 c\r\n3 a\r\n>>> ds.to_pandas().dtypes\r\n0 category\r\ndtype: object\r\n```\r\n\r\nsave_to_disk, etc. all seem to work as well. The main things that are theoretically \"incorrect\" if we leave this are:\r\n\r\n```\r\n>>> ds.features.type\r\nStructType(struct<0: int64>)\r\n```\r\nthere are a decent number of references to this property in the library, but I can't find anything that seems to actually break as a result of this being int64 vs. dictionary? I think the gist of my question is: a) do we *need* to change the dtype of Classlabel and have get_nested_type return a pyarrow.DictionaryType instead of int64? and b) do you *want* it to change? The biggest challenge I see to implementing this correctly is that the data will need to be passed in along with the pyarrow schema when instantiating the Classlabel (I *think* this is unavoidable, since the type itself doesn't contain the actual label values) which could be a fairly intrusive change - e.g. `from_arrow_schema`'s interface would need to change to include optional arrow data? 
Once we start going down this path of modifying the public interfaces I am admittedly feeling a little bit outside of my comfort zone\r\n\r\nAdditionally I think `int2str`, `str2int`, and `encode_example` probably won't work - but I can't find any usages of them in the library itself.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1936\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1935","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1935\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1935\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1935\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1935","id":814623827,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc4NTgyMzk1","number":1935,"title":"add CoVoST2","user":{"login":"patil-suraj","id":27137566,"node_id":"MDQ6VXNlcjI3MTM3NTY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27137566?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patil-suraj","html_url":"https:\/\/github.com\/patil-suraj","followers_url":"https:\/\/api.github.com\/users\/patil-suraj\/followers","following_url":"https:\/\/api.github.com\/users\/patil-suraj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patil-suraj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patil-suraj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patil-suraj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patil-suraj\/orgs","repos_url":"https:\/\/api.github.com\/users\/patil-suraj\/repos","events_url":"https:\/\/api.github.com\/users\/patil-suraj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patil-suraj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@patrickvonplaten \r\nI removed the mp3 files, dummy_data is much smaller now!"],"created_at":1614097696000,"updated_at":1614190172000,"closed_at":1614189909000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1935","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1935","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1935.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1935.patch"},"body":"This PR adds the CoVoST2 dataset for speech translation and ASR.\r\nhttps:\/\/github.com\/facebookresearch\/covost#covost-2\r\n\r\nThe dataset requires manual download as the download page requests an email address and the URLs are temporary.\r\n\r\nThe dummy data is a bit bigger because of the mp3 files and 36 configs.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1935\/timeline","performed_via_github_app":null,"is_pull_request":true} 
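A minimal, hedged sketch of the direction agreed on in the #1936 thread above (encode the categorical column as int64 codes and carry the label names in a `ClassLabel` feature, decoding again on the way back out). The column name and the use of `Dataset.from_dict` here are purely illustrative assumptions, not the actual `from_pandas` implementation:

```
import pandas as pd
from datasets import ClassLabel, Dataset, Features

# A pandas DataFrame with a categorical column, as in the example above.
df = pd.DataFrame({"label": pd.Series(["a", "b", "c", "a"], dtype="category")})

names = list(df["label"].cat.categories)       # ['a', 'b', 'c']
codes = df["label"].cat.codes.astype("int64")  # 0, 1, 2, 0

# Store the integer codes and keep the label names in a ClassLabel feature.
ds = Dataset.from_dict(
    {"label": codes.tolist()},
    features=Features({"label": ClassLabel(names=names)}),
)

# Reverse direction: rebuild a pandas Categorical from the stored integers.
decoded = pd.Categorical.from_codes(ds["label"], categories=names)
print(decoded)  # ['a', 'b', 'c', 'a'], Categories: ['a', 'b', 'c']
```

The same decoding step is what a pandas formatter would apply per row, column, or batch when the dataset is set to the "pandas" format.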
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1934","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1934\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1934\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1934\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1934","id":814437190,"node_id":"MDU6SXNzdWU4MTQ0MzcxOTA=","number":1934,"title":"Add Stanford Sentiment Treebank (SST)","user":{"login":"patpizio","id":15801338,"node_id":"MDQ6VXNlcjE1ODAxMzM4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15801338?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patpizio","html_url":"https:\/\/github.com\/patpizio","followers_url":"https:\/\/api.github.com\/users\/patpizio\/followers","following_url":"https:\/\/api.github.com\/users\/patpizio\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patpizio\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patpizio\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patpizio\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patpizio\/orgs","repos_url":"https:\/\/api.github.com\/users\/patpizio\/repos","events_url":"https:\/\/api.github.com\/users\/patpizio\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patpizio\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Dataset added in release [1.5.0](https:\/\/github.com\/huggingface\/datasets\/releases\/tag\/1.5.0), I think I can close this."],"created_at":1614084796000,"updated_at":1616089904000,"closed_at":1616089904000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I am going to add SST:\r\n\r\n- **Name:** The Stanford Sentiment Treebank\r\n- **Description:** The first corpus with fully labeled parse trees that allows for a complete analysis of the compositional effects of sentiment in language\r\n- **Paper:** [Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank](https:\/\/nlp.stanford.edu\/~socherr\/EMNLP2013_RNTN.pdf)\r\n- **Data:** https:\/\/nlp.stanford.edu\/sentiment\/index.html\r\n- **Motivation:** Already requested in #353, SST is a popular dataset for Sentiment Classification\r\n\r\nWhat's the difference with the [_SST-2_](https:\/\/huggingface.co\/datasets\/viewer\/?dataset=glue&config=sst2) dataset included in GLUE? Essentially, SST-2 is a version of SST where:\r\n- the labels were mapped from real numbers in [0.0, 1.0] to a binary label: {0, 1}\r\n- the labels of the *sub-sentences* were included only in the training set\r\n- the labels in the test set are obfuscated\r\n\r\nSo there is a lot more information in the original SST. The tricky bit is, the data is scattered into many text files and, for one in particular, I couldn't find the original encoding ([*but I'm not the only one*](https:\/\/groups.google.com\/g\/word2vec-toolkit\/c\/QIUjLw6RqFk\/m\/_iEeyt428wkJ) \ud83c\udfb5). 
The only solution I found was to manually replace all the \u00e8, \u00eb, \u00e7 and so on into an `utf-8` copy of the text file. I uploaded the result in my Dropbox and I am using that as the main repo for the dataset.\r\n\r\nAlso, the _sub-sentences_ are built at run-time from the information encoded in several text files, so generating the examples is a bit more cumbersome than usual. Luckily, the dataset is not enormous.\r\n\r\nI plan to divide the dataset in 2 configs: one with just whole sentences with their labels, the other with sentences _and their sub-sentences_ with their labels. Each config will be split in train, validation and test. Hopefully this makes sense, we may discuss it in the PR I'm going to submit.\r\n\r\n\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1934\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1933","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1933\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1933\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1933\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1933","id":814335846,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc4MzQwMzk3","number":1933,"title":"Use arrow ipc file format","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1614076704000,"updated_at":1614076704000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1933","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1933","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1933.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1933.patch"},"body":"According to the [documentation](https:\/\/arrow.apache.org\/docs\/format\/Columnar.html?highlight=arrow1#ipc-file-format), it's identical to the streaming format except that it contains the memory offsets of each sample:\r\n\r\n> We define a \u201cfile format\u201d supporting random access that is build with the stream format. The file starts and ends with a magic string ARROW1 (plus padding). What follows in the file is identical to the stream format. 
At the end of the file, we write a footer containing a redundant copy of the schema (which is a part of the streaming format) plus memory offsets and sizes for each of the data blocks in the file. This enables random access any record batch in the file. See File.fbs for the precise details of the file footer.\r\n\r\nSince it stores more metadata regarding the positions of the examples in the file, it should enable better example retrieval performances. However from the discussion in https:\/\/github.com\/huggingface\/datasets\/issues\/1803 it looks like it's not the case unfortunately. Maybe in the future this will allow speed gains.\r\n\r\nI think it's still a good idea to start using it anyway for these reasons:\r\n- in the future we may have speed gains\r\n- it contains the arrow streaming format data\r\n- it's compatible with the pyarrow Dataset implementation (it allows to load remote dataframes for example) if we want to use it in the future\r\n- it's also the format used by arrow feather if we want to use it in the future\r\n- it's roughly the same size as the streaming format\r\n- it's easy to have backward compatibility with the streaming format\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1933\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1932","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1932\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1932\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1932\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1932","id":814326116,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc4MzMyMTQy","number":1932,"title":"Fix builder config creation with data_dir","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1614075962000,"updated_at":1614077128000,"closed_at":1614077127000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1932","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1932","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1932.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1932.patch"},"body":"The data_dir parameter wasn't taken into account to create 
the config_id, therefore the resulting builder config was considered not custom. However a builder config that is non-custom must not have a name that collides with the predefined builder config names. Therefore it resulted in a `ValueError(\"Cannot name a custom BuilderConfig the same as an available...\")`\r\n\r\nI fixed that by commenting the line that used to ignore the data_dir when creating the config.\r\n\r\nIt was previously ignored before the introduction of config id because we didn't want to change the config name. Now it's fine to take it into account for the config id.\r\n\r\nNow creating a config with a data_dir works again @patrickvonplaten ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1932\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1931","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1931\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1931\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1931\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1931","id":814225074,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc4MjQ4NTA5","number":1931,"title":"add m_lama (multilingual lama) dataset","user":{"login":"pdufter","id":13961899,"node_id":"MDQ6VXNlcjEzOTYxODk5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13961899?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/pdufter","html_url":"https:\/\/github.com\/pdufter","followers_url":"https:\/\/api.github.com\/users\/pdufter\/followers","following_url":"https:\/\/api.github.com\/users\/pdufter\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/pdufter\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/pdufter\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/pdufter\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/pdufter\/orgs","repos_url":"https:\/\/api.github.com\/users\/pdufter\/repos","events_url":"https:\/\/api.github.com\/users\/pdufter\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/pdufter\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi, it seems I am somewhat stuck here. The failed test `ci\/circleci: run_dataset_script_tests_pyarrow_1_WIN` seems to be caused by some broken connection (`ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host`). Any help on this is appreciated. \r\n\r\nEdit: Seems to be resolved now.","I guess the `dummy_data.zip` is too large. I can reduce the languages that are contained there, but when testing it, it obviously throws an error, as not all files can be found. I guess I can either i) change the default value regarding which languages are loaded or ii) let the `_generate_examples` silently skip any language for which it cannot find files. Both solutions are not really pretty - is there another way around this?","Thanks for the review and the constructive comments :) ! I tried to address them, and reduced the number of lines in the dummy data to 1 to reduce its size. 
"],"created_at":1614067917000,"updated_at":1614592863000,"closed_at":1614592863000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1931","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1931","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1931.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1931.patch"},"body":"Add a multilingual (machine translated and automatically generated) version of the LAMA benchmark. For details see the paper https:\/\/arxiv.org\/pdf\/2102.00894.pdf ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1931\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1930","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1930\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1930\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1930\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1930","id":814055198,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc4MTAwNzI0","number":1930,"title":"updated the wino_bias dataset","user":{"login":"JieyuZhao","id":22306304,"node_id":"MDQ6VXNlcjIyMzA2MzA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22306304?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JieyuZhao","html_url":"https:\/\/github.com\/JieyuZhao","followers_url":"https:\/\/api.github.com\/users\/JieyuZhao\/followers","following_url":"https:\/\/api.github.com\/users\/JieyuZhao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JieyuZhao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JieyuZhao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JieyuZhao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JieyuZhao\/orgs","repos_url":"https:\/\/api.github.com\/users\/JieyuZhao\/repos","events_url":"https:\/\/api.github.com\/users\/JieyuZhao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JieyuZhao\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @JieyuZhao ! Have you had a chance to add the different configurations ?\r\nThanks again for your help on this !","> Hi @JieyuZhao ! Have you had a chance to add the different configurations ?\r\n> Thanks again for your help on this !\r\n\r\nHi @lhoestq Yes, I've updated the code. 
Now the configuration will have dev\/test splits.","> Cool thanks !\r\n> This looks perfect this way.\r\n> \r\n> Now we just need to update the dataset_infos.json (it contains the metadata of the dataset) and add dummy data to be able to test this script automatically.\r\n> \r\n> To update the dataset_infos.json you just need delete the current one at `.\/datasets\/wino_biais\/dataset_infos.json`, and then run this command:\r\n> \r\n> ```\r\n> datasets-cli test .\/datasets\/wino_biais --save_infos --all_configs --ignore_verifications\r\n> ```\r\n> \r\n> To add the dummy data there's also a tool to add them automatically.\r\n> First delete the folder at `.\/datasets\/wino_biais\/dummy` and then run\r\n> \r\n> ```\r\n> datasets-cli dummy_data .\/datasets\/wino_biais --auto_generate --match_text_files \"*conll\" --n_lines 15\r\n> ```\r\n> \r\n> Let me know if you have questions :)\r\n> Also don't forget to run `make style` to format the code properly.\r\n\r\nThanks for the instruction! I've updated the metadata and the dummy data and also do the formatting. Please let me know if more is needed. :)"],"created_at":1614049660000,"updated_at":1617809096000,"closed_at":1617809096000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1930","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1930","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1930.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1930.patch"},"body":"Updated the wino_bias.py script.\r\n- updated the data_url\r\n- added different configurations for different data splits\r\n- added the coreference_cluster to the data features","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1930\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1929","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1929\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1929\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1929\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1929","id":813929669,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc3OTk1MTE4","number":1929,"title":"Improve typing and style and fix some 
inconsistencies","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq Thanks for the quick review.","I merged master to this branch to re-run the CI before merging :)"],"created_at":1614034061000,"updated_at":1614183374000,"closed_at":1614175434000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1929","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1929","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1929.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1929.patch"},"body":"This PR:\r\n* improves typing (mostly more consistent use of `typing.Optional`)\r\n* `DatasetDict.cleanup_cache_files` now correctly returns a dict \r\n* replaces `dict()` with the corresponding literal\r\n* uses `dict_to_copy.copy()` instead of `dict(dict_to_copy)` for shallow copying","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1929\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1928","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1928\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1928\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1928\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1928","id":813793434,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc3ODgyMDM4","number":1928,"title":"Updating old 
cards","user":{"login":"mcmillanmajora","id":26722925,"node_id":"MDQ6VXNlcjI2NzIyOTI1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26722925?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mcmillanmajora","html_url":"https:\/\/github.com\/mcmillanmajora","followers_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/followers","following_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/orgs","repos_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/repos","events_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1614021964000,"updated_at":1614104365000,"closed_at":1614104365000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1928","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1928","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1928.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1928.patch"},"body":"Updated the cards for [Allocine](https:\/\/github.com\/mcmillanmajora\/datasets\/tree\/updating-old-cards\/datasets\/allocine), [CNN\/DailyMail](https:\/\/github.com\/mcmillanmajora\/datasets\/tree\/updating-old-cards\/datasets\/cnn_dailymail), and [SNLI](https:\/\/github.com\/mcmillanmajora\/datasets\/tree\/updating-old-cards\/datasets\/snli). For the most part, the information was just rearranged or rephrased, but the social impact statements are new. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1928\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1927","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1927\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1927\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1927\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1927","id":813768935,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc3ODYxODM5","number":1927,"title":"Update README.md","user":{"login":"JieyuZhao","id":22306304,"node_id":"MDQ6VXNlcjIyMzA2MzA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22306304?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JieyuZhao","html_url":"https:\/\/github.com\/JieyuZhao","followers_url":"https:\/\/api.github.com\/users\/JieyuZhao\/followers","following_url":"https:\/\/api.github.com\/users\/JieyuZhao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JieyuZhao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JieyuZhao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JieyuZhao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JieyuZhao\/orgs","repos_url":"https:\/\/api.github.com\/users\/JieyuZhao\/repos","events_url":"https:\/\/api.github.com\/users\/JieyuZhao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JieyuZhao\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1614019894000,"updated_at":1614077565000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1927","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1927","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1927.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1927.patch"},"body":"Updated the info for the wino_bias dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1927\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1926","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1926\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1926\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1926\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1926","id":813607994,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc3NzI4Mjgy","number":1926,"title":"Fix: Wiki_dpr - add missing scalar 
quantizer","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1614007925000,"updated_at":1614008994000,"closed_at":1614008993000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1926","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1926","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1926.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1926.patch"},"body":"All the prebuilt wiki_dpr indexes already use SQ8, I forgot to update the wiki_dpr script after building them. Now it's finally done.\r\n\r\nThe scalar quantizer SQ8 doesn't reduce the performance of the index as shown in retrieval experiments on RAG.\r\nThe quantizer reduces the size of the index a lot but increases index building time.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1926\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1925","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1925\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1925\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1925\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1925","id":813600902,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc3NzIyMzc3","number":1925,"title":"Fix: Wiki_dpr - fix when with_embeddings is False or index_name is 
\"no_index\"","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq ,\r\n\r\nI am running into an issue now when trying to run RAG. Running exactly as described [here](https:\/\/huggingface.co\/facebook\/rag-token-nq?fbclid=IwAR3bTfhls5U_t9DqsX2Vzb7NhtRHxJxfQ-uwFT7VuCPMZUM2AdAlKF_qkI8#usage) I get the error below. Wondering if it's related to this.\r\n\r\nRunning Transformers 4.3.2 with datasets installed from source from `master` branch.\r\n\r\n```bash\r\n(venv) sergey_mkrtchyan datasets (master) $ python\r\nPython 3.8.6 (v3.8.6:db455296be, Sep 23 2020, 13:31:39)\r\n[Clang 6.0 (clang-600.0.57)] on darwin\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration\r\n>>> tokenizer = RagTokenizer.from_pretrained(\"facebook\/rag-token-nq\")\r\n>>> retriever = RagRetriever.from_pretrained(\"facebook\/rag-token-nq\", index_name=\"exact\", use_dummy_dataset=True)\r\nUsing custom data configuration dummy.psgs_w100.nq.no_index-dummy=True,with_index=False\r\nReusing dataset wiki_dpr (\/Users\/sergey_mkrtchyan\/.cache\/huggingface\/datasets\/wiki_dpr\/dummy.psgs_w100.nq.no_index-dummy=True,with_index=False\/0.0.0\/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb)\r\nUsing custom data configuration dummy.psgs_w100.nq.exact-50b6cda57ff32ab4\r\nReusing dataset wiki_dpr (\/Users\/sergey_mkrtchyan\/.cache\/huggingface\/datasets\/wiki_dpr\/dummy.psgs_w100.nq.exact-50b6cda57ff32ab4\/0.0.0\/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb)\r\n 0%| | 0\/10 [00:00\", line 1, in \r\n File \"\/Users\/sergey_mkrtchyan\/workspace\/cformers\/venv\/lib\/python3.8\/site-packages\/transformers\/models\/rag\/retrieval_rag.py\", line 425, in from_pretrained\r\n return cls(\r\n File \"\/Users\/sergey_mkrtchyan\/workspace\/cformers\/venv\/lib\/python3.8\/site-packages\/transformers\/models\/rag\/retrieval_rag.py\", line 387, in __init__\r\n self.init_retrieval()\r\n File \"\/Users\/sergey_mkrtchyan\/workspace\/cformers\/venv\/lib\/python3.8\/site-packages\/transformers\/models\/rag\/retrieval_rag.py\", line 458, in init_retrieval\r\n self.index.init_index()\r\n File \"\/Users\/sergey_mkrtchyan\/workspace\/cformers\/venv\/lib\/python3.8\/site-packages\/transformers\/models\/rag\/retrieval_rag.py\", line 284, in init_index\r\n self.dataset = load_dataset(\r\n File \"\/Users\/sergey_mkrtchyan\/workspace\/huggingface\/datasets\/src\/datasets\/load.py\", line 750, in load_dataset\r\n 
ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)\r\n File \"\/Users\/sergey_mkrtchyan\/workspace\/huggingface\/datasets\/src\/datasets\/builder.py\", line 734, in as_dataset\r\n datasets = utils.map_nested(\r\n File \"\/Users\/sergey_mkrtchyan\/workspace\/huggingface\/datasets\/src\/datasets\/utils\/py_utils.py\", line 195, in map_nested\r\n return function(data_struct)\r\n File \"\/Users\/sergey_mkrtchyan\/workspace\/huggingface\/datasets\/src\/datasets\/builder.py\", line 769, in _build_single_dataset\r\n post_processed = self._post_process(ds, resources_paths)\r\n File \"\/Users\/sergey_mkrtchyan\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/wiki_dpr\/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb\/wiki_dpr.py\", line 205, in _post_process\r\n dataset.add_faiss_index(\"embeddings\", custom_index=index)\r\n File \"\/Users\/sergey_mkrtchyan\/workspace\/huggingface\/datasets\/src\/datasets\/arrow_dataset.py\", line 2516, in add_faiss_index\r\n super().add_faiss_index(\r\n File \"\/Users\/sergey_mkrtchyan\/workspace\/huggingface\/datasets\/src\/datasets\/search.py\", line 416, in add_faiss_index\r\n faiss_index.add_vectors(self, column=column, train_size=train_size, faiss_verbose=faiss_verbose)\r\n File \"\/Users\/sergey_mkrtchyan\/workspace\/huggingface\/datasets\/src\/datasets\/search.py\", line 281, in add_vectors\r\n self.faiss_index.add(vecs)\r\n File \"\/Users\/sergey_mkrtchyan\/workspace\/cformers\/venv\/lib\/python3.8\/site-packages\/faiss\/__init__.py\", line 104, in replacement_add\r\n self.add_c(n, swig_ptr(x))\r\n File \"\/Users\/sergey_mkrtchyan\/workspace\/cformers\/venv\/lib\/python3.8\/site-packages\/faiss\/swigfaiss.py\", line 3263, in add\r\n return _swigfaiss.IndexHNSW_add(self, n, x)\r\nRuntimeError: Error in virtual void faiss::IndexHNSW::add(faiss::Index::idx_t, const float *) at \/Users\/runner\/work\/faiss-wheels\/faiss-wheels\/faiss\/faiss\/IndexHNSW.cpp:356: Error: 'is_trained' failed\r\n>>>\r\n```\r\n\r\nThe error message is hinting that it could be related to this, but I might be wrong. Any ideas?\r\n\r\n\r\nEdit: Can confirm it's working fine with datasets==1.2.0\r\n\r\nDouble Edit: Did some further digging. The issue is related to this commit: 8c5220307c33f00e01c3bf7b8. 
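For context on the `'is_trained' failed` assertion in the traceback above: faiss indexes that use a scalar quantizer (such as the SQ8 index mentioned for wiki_dpr) must be trained before vectors can be added. A generic, hedged illustration of that constraint, not the exact index spec the dataset script builds:

```
import numpy as np
import faiss  # assumes faiss-cpu is installed

d = 768  # DPR-sized embeddings, for illustration only
vectors = np.random.random((1000, d)).astype("float32")

# An HNSW index whose storage uses an 8-bit scalar quantizer (SQ8): each
# component is compressed to one byte, but the quantizer needs training.
index = faiss.index_factory(d, "HNSW32,SQ8")
print(index.is_trained)  # False

# Calling index.add(vectors) at this point would trip the same
# "'is_trained' failed" assertion seen in the traceback.
index.train(vectors)
index.add(vectors)
print(index.ntotal)  # 1000
```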
I opened a separate issue #1941 for proper tracking."],"created_at":1614007426000,"updated_at":1614216828000,"closed_at":1614008168000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1925","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1925","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1925.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1925.patch"},"body":"Fix the bugs noticed in #1915 \r\n\r\nThere was a bug when `with_embeddings=False` where the configuration name was the same as if `with_embeddings=True`, which led the dataset builder to do bad verifications (for example it used to expect to download the embeddings for `with_embeddings=False`).\r\n\r\nAnother issue was that setting `index_name=\"no_index\"` didn't set `with_index` to False.\r\n\r\nI fixed both of them and added dummy data for those configurations for testing.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1925\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1924","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1924\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1924\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1924\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1924","id":813599733,"node_id":"MDU6SXNzdWU4MTM1OTk3MzM=","number":1924,"title":"Anonymous Dataset Addition (i.e Anonymous PR?)","user":{"login":"PierreColombo","id":22492839,"node_id":"MDQ6VXNlcjIyNDkyODM5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22492839?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/PierreColombo","html_url":"https:\/\/github.com\/PierreColombo","followers_url":"https:\/\/api.github.com\/users\/PierreColombo\/followers","following_url":"https:\/\/api.github.com\/users\/PierreColombo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/PierreColombo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/PierreColombo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/PierreColombo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/PierreColombo\/orgs","repos_url":"https:\/\/api.github.com\/users\/PierreColombo\/repos","events_url":"https:\/\/api.github.com\/users\/PierreColombo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/PierreColombo\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi !\r\nI guess you can add a dataset without the fields that must be kept anonymous, and then update those when the anonymity period is over.\r\nYou can also make the PR from an anonymous org.\r\nPinging @yjernite just to make sure it's ok","Hello,\r\nI would prefer to do the reverse: adding a link to an anonymous paper without the people names\/institution in the PR. 
Would it be conceivable ?\r\nCheers\r\n","Sure, I think it's ok on our side","Yup, sounds good!"],"created_at":1614007350000,"updated_at":1614104890000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hello,\r\nThanks a lot for your librairy.\r\nWe plan to submit a paper on OpenReview using the Anonymous setting. Is it possible to add a new dataset without breaking the anonimity, with a link to the paper ? \r\nCheers \r\n@eusip","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1924\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1923","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1923\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1923\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1923\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1923","id":813363472,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc3NTI0MTU0","number":1923,"title":"Fix save_to_disk with relative path","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1613989639000,"updated_at":1613992964000,"closed_at":1613992963000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1923","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1923","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1923.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1923.patch"},"body":"As noticed in #1919 and #1920 the target directory was not created using `makedirs` so saving to it raises `FileNotFoundError`. For absolute paths it works but not for the good reason. 
This is because the target path was the same as the temporary path where in-memory data are written as an intermediary step.\r\n\r\nI added the `makedirs` call using `fs.makedirs` in order to support remote filesystems.\r\nI also fixed the issue with the target path being the temporary path.\r\n\r\nI added a test case for relative paths as well for save_to_disk.\r\n\r\nThanks to @M-Salti for reporting and investigating","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1923\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1922","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1922\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1922\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1922\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1922","id":813140806,"node_id":"MDU6SXNzdWU4MTMxNDA4MDY=","number":1922,"title":"How to update the \"wino_bias\" dataset","user":{"login":"JieyuZhao","id":22306304,"node_id":"MDQ6VXNlcjIyMzA2MzA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22306304?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JieyuZhao","html_url":"https:\/\/github.com\/JieyuZhao","followers_url":"https:\/\/api.github.com\/users\/JieyuZhao\/followers","following_url":"https:\/\/api.github.com\/users\/JieyuZhao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JieyuZhao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JieyuZhao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JieyuZhao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JieyuZhao\/orgs","repos_url":"https:\/\/api.github.com\/users\/JieyuZhao\/repos","events_url":"https:\/\/api.github.com\/users\/JieyuZhao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JieyuZhao\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @JieyuZhao !\r\n\r\nYou can edit the dataset card of wino_bias to update the URL via a Pull Request. This would be really appreciated :)\r\n\r\nThe dataset card is the README.md file you can find at https:\/\/github.com\/huggingface\/datasets\/tree\/master\/datasets\/wino_bias\r\nAlso the homepage url is also mentioned in the wino_bias.py so feel free to update it there as well.\r\n\r\nYou can create a Pull Request directly from the github interface by editing the files you want and submit a PR, or from a local clone of the repository.\r\n\r\nThanks for noticing !"],"created_at":1613972379000,"updated_at":1613990159000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi all,\r\n\r\nThanks for the efforts to collect all the datasets! But I think there is a problem with the wino_bias dataset. The current link is not correct. 
How can I update that?\r\n\r\nThanks!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1922\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1921","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1921\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1921\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1921\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1921","id":812716042,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc3MDEzMDM4","number":1921,"title":"Standardizing datasets dtypes","user":{"login":"justin-yan","id":7731709,"node_id":"MDQ6VXNlcjc3MzE3MDk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7731709?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/justin-yan","html_url":"https:\/\/github.com\/justin-yan","followers_url":"https:\/\/api.github.com\/users\/justin-yan\/followers","following_url":"https:\/\/api.github.com\/users\/justin-yan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/justin-yan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/justin-yan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/justin-yan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/justin-yan\/orgs","repos_url":"https:\/\/api.github.com\/users\/justin-yan\/repos","events_url":"https:\/\/api.github.com\/users\/justin-yan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/justin-yan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq - apologies for the multiple PRs, my previous one (#1905) got mangled due to some merge conflicts that I had trouble resolving so I just cherry-picked my changes onto a fresh branch here."],"created_at":1613858641000,"updated_at":1613987050000,"closed_at":1613987050000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1921","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1921","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1921.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1921.patch"},"body":"This PR follows up on discussion in #1900 to have an explicit set of basic dtypes for datasets.\r\n\r\nThis moves away from str(pyarrow.DataType) as the method of choice for creating dtypes, favoring an explicit mapping to a list of supported Value dtypes.\r\n\r\nI believe in practice this should be backward compatible, since anyone previously using Value() would only have been able to use dtypes that had an identically named pyarrow factory function, which are all explicitly supported here, with `float32` and `float64` acting as the official datasets dtypes, which resolves the tension between `double` being the pyarrow dtype and `float64` being the pyarrow type factory function.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1921\/timeline","performed_via_github_app":null,"is_pull_request":true} 
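To illustrate the float64/double naming tension described in the #1921 body just above, a small example (feature names are arbitrary) of why an explicit dtype mapping is more predictable than relying on `str(pyarrow.DataType)`:

```
import pyarrow as pa
from datasets import Features, Value

# The 64-bit float type prints as "double", but the pyarrow factory
# function that creates it is float64().
assert str(pa.float64()) == "double"

# With an explicit mapping, "float32"/"float64" stay the canonical names
# on the datasets side regardless of how pyarrow spells the type.
features = Features({"score": Value("float64"), "count": Value("int32")})
print(features["score"].dtype)  # float64
print(features.type)            # struct<score: double, count: int32>
```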
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1920","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1920\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1920\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1920\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1920","id":812628220,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc2OTQ5NzI2","number":1920,"title":"Fix save_to_disk issue","user":{"login":"M-Salti","id":9285264,"node_id":"MDQ6VXNlcjkyODUyNjQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9285264?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/M-Salti","html_url":"https:\/\/github.com\/M-Salti","followers_url":"https:\/\/api.github.com\/users\/M-Salti\/followers","following_url":"https:\/\/api.github.com\/users\/M-Salti\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/M-Salti\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/M-Salti\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/M-Salti\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/M-Salti\/orgs","repos_url":"https:\/\/api.github.com\/users\/M-Salti\/repos","events_url":"https:\/\/api.github.com\/users\/M-Salti\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/M-Salti\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["So I was curious why the issue reported at #1919 wasn't caught in [this test](https:\/\/github.com\/huggingface\/datasets\/blob\/248104c4bdb2e01c036b7578867199191fbff181\/tests\/test_arrow_dataset.py#L209), so I did some digging.\r\nI tried to save to a temporary directory (just like in the test), like this:\r\n```python\r\nwith tempfile.TemporaryDirectory() as requested_tempdir:\r\n squad.save_to_disk(requested_tempdir) # no error\r\n```\r\nand it executes succesfuly without problems.\r\nSo why does it work, but this doesn't?\r\n```python\r\nsquad.save_to_disk(\".\/squad\") # error\r\n```\r\nIt's because `save_to_disk` also creates a temporary directory (let's call it `tempdir`), and since `tempdir` and `requested_tempdir` share the same parents, the `Path.joinpath` method [(here)](https:\/\/github.com\/huggingface\/datasets\/blob\/248104c4bdb2e01c036b7578867199191fbff181\/src\/datasets\/arrow_dataset.py#L469) will keep `requested_tempdir` as it is and the *train* directory will be created under `requested_tempdir` and hence no errors will arise.\r\n\r\nBut in the second case (where we are saving to a local dir), the *train* directory is created under *squad* which in turn is created under `tempdir`, not under `.` (current dir).\r\n\r\nSo, all of this probably doesn't help solving the issue but it might help creating a better test, and it also makes me wonder why are we saving to a temporary dir in `save_to_disk` anyway? I mean, won't it be removed with all its contents upon execution completion? what's the point then? 
","CLosing in favor of #1923"],"created_at":1613830959000,"updated_at":1613989811000,"closed_at":1613989811000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1920","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1920","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1920.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1920.patch"},"body":"Fixes #1919 \r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1920\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1919","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1919\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1919\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1919\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1919","id":812626872,"node_id":"MDU6SXNzdWU4MTI2MjY4NzI=","number":1919,"title":"Failure to save with save_to_disk","user":{"login":"M-Salti","id":9285264,"node_id":"MDQ6VXNlcjkyODUyNjQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9285264?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/M-Salti","html_url":"https:\/\/github.com\/M-Salti","followers_url":"https:\/\/api.github.com\/users\/M-Salti\/followers","following_url":"https:\/\/api.github.com\/users\/M-Salti\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/M-Salti\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/M-Salti\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/M-Salti\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/M-Salti\/orgs","repos_url":"https:\/\/api.github.com\/users\/M-Salti\/repos","events_url":"https:\/\/api.github.com\/users\/M-Salti\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/M-Salti\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi thanks for reporting and for proposing a fix :)\r\n\r\nI just merged a fix, feel free to try it from the master branch !","Closing since this has been fixed by #1923"],"created_at":1613830690000,"updated_at":1614793227000,"closed_at":1614793227000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"When I try to save a dataset locally using the `save_to_disk` method I get the error:\r\n\r\n```bash\r\nFileNotFoundError: [Errno 2] No such file or directory: '\/content\/squad\/train\/squad-train.arrow'\r\n```\r\n\r\nTo replicate:\r\n\r\n1. Install `datasets` from master\r\n2. Run this code:\r\n\r\n ```python\r\n from datasets import load_dataset\r\n squad = load_dataset(\"squad\") # or any other dataset\r\n squad.save_to_disk(\"squad\") # error here\r\n ```\r\n\r\nThe problem is that the method is not creating a directory with the name `dataset_path` for saving the dataset in (i.e. it's not creating the *train* and *validation* directories in this case). 
After creating the directory the problem resolves.\r\nI'll open a PR soon doing that and linking this issue.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1919\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1918","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1918\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1918\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1918\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1918","id":812541510,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc2ODg2OTQ0","number":1918,"title":"Fix QA4MRE download URLs","user":{"login":"M-Salti","id":9285264,"node_id":"MDQ6VXNlcjkyODUyNjQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9285264?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/M-Salti","html_url":"https:\/\/github.com\/M-Salti","followers_url":"https:\/\/api.github.com\/users\/M-Salti\/followers","following_url":"https:\/\/api.github.com\/users\/M-Salti\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/M-Salti\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/M-Salti\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/M-Salti\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/M-Salti\/orgs","repos_url":"https:\/\/api.github.com\/users\/M-Salti\/repos","events_url":"https:\/\/api.github.com\/users\/M-Salti\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/M-Salti\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1613806337000,"updated_at":1614000906000,"closed_at":1614000906000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1918","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1918","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1918.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1918.patch"},"body":"The URLs in the `dataset_infos` and `README` are correct, only the ones in the download script needed updating.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1918\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1917","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1917\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1917\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1917\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1917","id":812390178,"node_id":"MDU6SXNzdWU4MTIzOTAxNzg=","number":1917,"title":"UnicodeDecodeError: windows 10 
machine","user":{"login":"yosiasz","id":900951,"node_id":"MDQ6VXNlcjkwMDk1MQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/900951?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yosiasz","html_url":"https:\/\/github.com\/yosiasz","followers_url":"https:\/\/api.github.com\/users\/yosiasz\/followers","following_url":"https:\/\/api.github.com\/users\/yosiasz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yosiasz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yosiasz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yosiasz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yosiasz\/orgs","repos_url":"https:\/\/api.github.com\/users\/yosiasz\/repos","events_url":"https:\/\/api.github.com\/users\/yosiasz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yosiasz\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["upgraded to php 3.9.2 and it works!"],"created_at":1613772785000,"updated_at":1613774471000,"closed_at":1613774428000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Windows 10\r\nPhp 3.6.8\r\n\r\nwhen running\r\n\r\n```\r\nimport datasets\r\n\r\noscar_am = datasets.load_dataset(\"oscar\", \"unshuffled_deduplicated_am\")\r\nprint(oscar_am[\"train\"][0])\r\n```\r\nI get the following error\r\n\r\n```\r\nfile \"C:\\PYTHON\\3.6.8\\lib\\encodings\\cp1252.py\", line 23, in decode\r\n return codecs.charmap_decode(input,self.errors,decoding_table)[0]\r\nUnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 58: character maps to \r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1917\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1916","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1916\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1916\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1916\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1916","id":812291984,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc2NjgwNjY5","number":1916,"title":"Remove unused py_utils 
objects","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hmmm this one broke master. I'm fixing it.\r\n\r\nMaybe because your branch was outdated ?","Sorry @lhoestq, I forgot to update the imports... :\/","It's fine, the CI should have caught this tbh. Not sure why it did't fail"],"created_at":1613764285000,"updated_at":1614005816000,"closed_at":1614000769000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1916","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1916","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1916.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1916.patch"},"body":"Remove unused\/unnecessary py_utils functions\/classes.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1916\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1915","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1915\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1915\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1915\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1915","id":812229654,"node_id":"MDU6SXNzdWU4MTIyMjk2NTQ=","number":1915,"title":"Unable to download 
`wiki_dpr`","user":{"login":"nitarakad","id":18504534,"node_id":"MDQ6VXNlcjE4NTA0NTM0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/18504534?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nitarakad","html_url":"https:\/\/github.com\/nitarakad","followers_url":"https:\/\/api.github.com\/users\/nitarakad\/followers","following_url":"https:\/\/api.github.com\/users\/nitarakad\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nitarakad\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nitarakad\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nitarakad\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nitarakad\/orgs","repos_url":"https:\/\/api.github.com\/users\/nitarakad\/repos","events_url":"https:\/\/api.github.com\/users\/nitarakad\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nitarakad\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting ! This is a bug. For now feel free to set `ignore_verifications=False` in `load_dataset`.\r\nI'm working on a fix","I just merged a fix :)\r\n\r\nWe'll do a patch release soon. 
In the meantime feel free to try it from the master branch\r\nThanks again for reporting !","Closing since this has been fixed by #1925"],"created_at":1613758292000,"updated_at":1614793248000,"closed_at":1614793248000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I am trying to download the `wiki_dpr` dataset. Specifically, I want to download `psgs_w100.multiset.no_index` with no embeddings\/no index. In order to do so, I ran:\r\n\r\n`curr_dataset = load_dataset(\"wiki_dpr\", embeddings_name=\"multiset\", index_name=\"no_index\")` \r\n\r\nHowever, I got the following error:\r\n`datasets.utils.info_utils.UnexpectedDownloadedFile: {'embeddings_index'}`\r\n\r\nI tried adding in flags `with_embeddings=False` and `with_index=False`:\r\n\r\n`curr_dataset = load_dataset(\"wiki_dpr\", with_embeddings=False, with_index=False, embeddings_name=\"multiset\", index_name=\"no_index\")`\r\n\r\nBut I got the following error:\r\n`raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))\r\ndatasets.utils.info_utils.ExpectedMoreDownloadedFiles: {\u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_5\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_15\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_30\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_36\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_18\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_41\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_13\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_48\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_10\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_23\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_14\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_34\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_43\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_40\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_47\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_3\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_24\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_7\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_33\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_46\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_42\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_27\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_29\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_26\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_22\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_4\u2019, 
\u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_20\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_39\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_6\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_16\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_8\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_35\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_49\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_17\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_25\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_0\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_38\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_12\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_44\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_1\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_32\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_19\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_31\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_37\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_9\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_11\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_21\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_28\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_45\u2019, \u2018https:\/\/dl.fbaipublicfiles.com\/rag\/rag_multiset_embeddings\/wiki_passages_2\u2019}`\r\n\r\nIs there anything else I need to set to download the dataset?\r\n\r\n**UPDATE**: just running `curr_dataset = load_dataset(\"wiki_dpr\", with_embeddings=False, with_index=False)` gives me the same error.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1915\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1914","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1914\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1914\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1914\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1914","id":812149201,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc2NTYyNTkz","number":1914,"title":"Fix logging imports and make all datasets use library 
logger","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1613751154000,"updated_at":1613936883000,"closed_at":1613936883000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1914","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1914","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1914.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1914.patch"},"body":"Fix library relative logging imports and make all datasets use library logger.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1914\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1913","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1913\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1913\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1913\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1913","id":812127307,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc2NTQ0NjQw","number":1913,"title":"Add keep_linebreaks parameter to text 
loader","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Just so I understand how it can be used in practice, do you have an example showing how to load a text dataset with this option?","Sure ! Here is an example:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"text\", keep_linebreaks=True, data_files=...)\r\n```\r\n\r\nI'll update the documentation to explain this","Perfect!"],"created_at":1613749425000,"updated_at":1613759772000,"closed_at":1613759771000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1913","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1913","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1913.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1913.patch"},"body":"As asked in #870 and https:\/\/github.com\/huggingface\/transformers\/issues\/10269 there should be a parameter to keep the linebreaks when loading a text dataset.\r\ncc @sgugger @jncasey","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1913\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1912","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1912\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1912\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1912\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1912","id":812034140,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc2NDY2ODQx","number":1912,"title":"Update: WMT - use mirror 
links","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["So much better - thank you for doing that, @lhoestq!","Also fixed the `uncorpus` urls for wmt19 ru-en and zh-en for https:\/\/github.com\/huggingface\/datasets\/issues\/1893","Thanks!\r\nCan this be merged sooner? \r\nI manually update it and it works well."],"created_at":1613742154000,"updated_at":1614174293000,"closed_at":1614174293000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1912","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1912","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1912.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1912.patch"},"body":"As asked in #1892 I created mirrors of the data hosted on statmt.org and updated the wmt scripts.\r\nNow downloading the wmt datasets is blazing fast :)\r\n\r\ncc @stas00 @patrickvonplaten ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1912\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1911","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1911\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1911\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1911\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1911","id":812009956,"node_id":"MDU6SXNzdWU4MTIwMDk5NTY=","number":1911,"title":"Saving processed dataset running 
infinitely","user":{"login":"ayubSubhaniya","id":20911334,"node_id":"MDQ6VXNlcjIwOTExMzM0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20911334?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ayubSubhaniya","html_url":"https:\/\/github.com\/ayubSubhaniya","followers_url":"https:\/\/api.github.com\/users\/ayubSubhaniya\/followers","following_url":"https:\/\/api.github.com\/users\/ayubSubhaniya\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ayubSubhaniya\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ayubSubhaniya\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ayubSubhaniya\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ayubSubhaniya\/orgs","repos_url":"https:\/\/api.github.com\/users\/ayubSubhaniya\/repos","events_url":"https:\/\/api.github.com\/users\/ayubSubhaniya\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ayubSubhaniya\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@thomwolf @lhoestq can you guys please take a look and recommend some solution.","am suspicious of this thing? what's the purpose of this? pickling and unplickling\r\n`self = pickle.loads(pickle.dumps(self))`\r\n\r\n```\r\n def save_to_disk(self, dataset_path: str, fs=None):\r\n \"\"\"\r\n Saves a dataset to a dataset directory, or in a filesystem using either :class:`datasets.filesystem.S3FileSystem` or any implementation of ``fsspec.spec.AbstractFileSystem``.\r\n\r\n Args:\r\n dataset_path (``str``): path (e.g. ``dataset\/train``) or remote uri (e.g. ``s3:\/\/my-bucket\/dataset\/train``) of the dataset directory where the dataset will be saved to\r\n fs (Optional[:class:`datasets.filesystem.S3FileSystem`,``fsspec.spec.AbstractFileSystem``], `optional`, defaults ``None``): instance of :class:`datasets.filesystem.S3FileSystem` or ``fsspec.spec.AbstractFileSystem`` used to download the files from remote filesystem.\r\n \"\"\"\r\n assert (\r\n not self.list_indexes()\r\n ), \"please remove all the indexes using `dataset.drop_index` before saving a dataset\"\r\n self = pickle.loads(pickle.dumps(self))\r\n ```","It's been 24 hours and sadly it's still running. With not a single byte written","Tried finding the root cause but was unsuccessful.\r\nI am using lazy tokenization with `dataset.set_transform()`, it works like a charm with almost same performance as pre-compute.","Hi ! This very probably comes from the hack you used.\r\n\r\nThe pickling line was added an a sanity check because save_to_disk uses the same assumptions as pickling for a dataset object. The main assumption is that memory mapped pyarrow tables must be reloadable from the disk. In your case it's not possible since you altered the pyarrow table.\r\nI would suggest you to rebuild a valid Dataset object from your new pyarrow table. To do so you must first save your new table to a file, and then make a new Dataset object from that arrow file.\r\n\r\nYou can save the raw arrow table (without all the `datasets.Datasets` metadata) by calling `map` with `cache_file_name=\"path\/to\/outut.arrow\"` and `function=None`. 
Having `function=None` makes the `map` write your dataset on disk with no data transformation.\r\n\r\nOnce you have your new arrow file, load it with `datasets.Dataset.from_file` to have a brand new Dataset object :)\r\n\r\nIn the future we'll have a better support for the fast filtering method from pyarrow so you don't have to do this very unpractical workaround. Since it breaks somes assumptions regarding the core behavior of Dataset objects, this is very discouraged.","Thanks, @lhoestq for your response. Will try your solution and let you know."],"created_at":1613740159000,"updated_at":1614065684000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I have a text dataset of size 220M.\r\n\r\nFor pre-processing, I need to tokenize this and filter rows with the large sequence.\r\n\r\nMy tokenization took roughly 3hrs. I used map() with batch size 1024 and multi-process with 96 processes.\r\n\r\nfilter() function was way to slow, so I used a hack to use pyarrow filter table function, which is damm fast. Mentioned [here](https:\/\/github.com\/huggingface\/datasets\/issues\/1796)\r\n\r\n```dataset._data = dataset._data.filter(...)```\r\nIt took 1 hr for the filter.\r\n\r\nThen i use `save_to_disk()` on processed dataset and it is running forever.\r\n\r\nI have been waiting since 8 hrs, it has not written a single byte. \r\n\r\nInfact it has actually read from disk more than 100GB, screenshot below shows the stats using `iotop`. \r\nSecond process is the one.\r\n\"Screenshot\r\n\r\n\r\nI am not able to figure out, whether this is some issue with dataset library or that it is due to my hack for filter() function.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1911\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1910","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1910\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1910\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1910\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1910","id":811697108,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc2MTg0MDQ3","number":1910,"title":"Adding CoNLLpp 
dataset.","user":{"login":"ZihanWangKi","id":21319243,"node_id":"MDQ6VXNlcjIxMzE5MjQz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/21319243?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ZihanWangKi","html_url":"https:\/\/github.com\/ZihanWangKi","followers_url":"https:\/\/api.github.com\/users\/ZihanWangKi\/followers","following_url":"https:\/\/api.github.com\/users\/ZihanWangKi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ZihanWangKi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ZihanWangKi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ZihanWangKi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ZihanWangKi\/orgs","repos_url":"https:\/\/api.github.com\/users\/ZihanWangKi\/repos","events_url":"https:\/\/api.github.com\/users\/ZihanWangKi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ZihanWangKi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["It looks like this PR now includes changes to many other files than the ones for CoNLLpp.\r\n\r\nTo fix that feel free to create another branch and another PR.\r\n\r\nThis was probably caused by a git rebase. You can avoid this issue by using git merge if you've already pushed your branch."],"created_at":1613711550000,"updated_at":1614895367000,"closed_at":1614895367000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1910","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1910","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1910.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1910.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1910\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1907","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1907\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1907\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1907\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1907","id":811520569,"node_id":"MDU6SXNzdWU4MTE1MjA1Njk=","number":1907,"title":"DBPedia14 Dataset Checksum 
bug?","user":{"login":"francisco-perez-sorrosal","id":918006,"node_id":"MDQ6VXNlcjkxODAwNg==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/918006?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/francisco-perez-sorrosal","html_url":"https:\/\/github.com\/francisco-perez-sorrosal","followers_url":"https:\/\/api.github.com\/users\/francisco-perez-sorrosal\/followers","following_url":"https:\/\/api.github.com\/users\/francisco-perez-sorrosal\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/francisco-perez-sorrosal\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/francisco-perez-sorrosal\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/francisco-perez-sorrosal\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/francisco-perez-sorrosal\/orgs","repos_url":"https:\/\/api.github.com\/users\/francisco-perez-sorrosal\/repos","events_url":"https:\/\/api.github.com\/users\/francisco-perez-sorrosal\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/francisco-perez-sorrosal\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! :)\r\n\r\nThis looks like the same issue as https:\/\/github.com\/huggingface\/datasets\/issues\/1856 \r\nBasically google drive has quota issues that makes it inconvenient for downloading files.\r\n\r\nIf the quota of a file is exceeded, you have to wait 24h for the quota to reset (which is painful).\r\n\r\nThe error says that the checksum of the downloaded file doesn't match because google drive returns a text file with the \"Quota Exceeded\" error instead of the actual data file.","Thanks @lhoestq! 
Yes, it seems back to normal after a couple of days."],"created_at":1613687148000,"updated_at":1614036125000,"closed_at":1614036124000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi there!!!\r\n\r\nI've been using successfully the DBPedia dataset (https:\/\/huggingface.co\/datasets\/dbpedia_14) with my codebase in the last couple of weeks, but in the last couple of days now I get this error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \".\/conditional_classification\/basic_pipeline.py\", line 178, in \r\n main()\r\n File \".\/conditional_classification\/basic_pipeline.py\", line 128, in main\r\n corpus.load_data(limit_train_examples_per_class=args.data_args.train_examples_per_class,\r\n File \"\/home\/fp\/dev\/conditional_classification\/conditional_classification\/datasets_base.py\", line 83, in load_data\r\n datasets = load_dataset(self.name, split=dataset_split)\r\n File \"\/home\/fp\/anaconda3\/envs\/conditional\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 609, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/home\/fp\/anaconda3\/envs\/conditional\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 526, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/home\/fp\/anaconda3\/envs\/conditional\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 586, in _download_and_prepare\r\n verify_checksums(\r\n File \"\/home\/fp\/anaconda3\/envs\/conditional\/lib\/python3.8\/site-packages\/datasets\/utils\/info_utils.py\", line 39, in verify_checksums\r\n raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https:\/\/drive.google.com\/uc?export=download&id=0Bz8a_Dbh9QhbQ2Vic1kxMmZZQ1k']\r\n```\r\n\r\nI've seen this has happened before in other datasets as reported in #537.\r\n\r\nI've tried clearing my cache and call again `load_dataset` but still is not working. My same codebase is successfully downloading and using other datasets (e.g. AGNews) without any problem, so I guess something has happened specifically to the DBPedia dataset in the last few days. \r\n\r\nCan you please check if there's a problem with the checksums? \r\n\r\nOr this is related to any other stuff? I've seen that the path in the cache for the dataset is `\/home\/fp\/.cache\/huggingface\/datasets\/d_bpedia14\/dbpedia_14\/2.0.0\/a70413e39e7a716afd0e90c9e53cb053691f56f9ef5fe317bd07f2c368e8e897...` and includes `d_bpedia14` instead maybe of `dbpedia_14`. 
Was this maybe a bug introduced recently?\r\n\r\nThanks!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1907\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1906","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1906\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1906\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1906\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1906","id":811405274,"node_id":"MDU6SXNzdWU4MTE0MDUyNzQ=","number":1906,"title":"Feature Request: Support for Pandas `Categorical`","user":{"login":"justin-yan","id":7731709,"node_id":"MDQ6VXNlcjc3MzE3MDk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7731709?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/justin-yan","html_url":"https:\/\/github.com\/justin-yan","followers_url":"https:\/\/api.github.com\/users\/justin-yan\/followers","following_url":"https:\/\/api.github.com\/users\/justin-yan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/justin-yan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/justin-yan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/justin-yan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/justin-yan\/orgs","repos_url":"https:\/\/api.github.com\/users\/justin-yan\/repos","events_url":"https:\/\/api.github.com\/users\/justin-yan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/justin-yan\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"},{"id":2067400324,"node_id":"MDU6TGFiZWwyMDY3NDAwMzI0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/generic%20discussion","name":"generic discussion","color":"c5def5","default":false,"description":"Generic discussion on the library"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["We already have a ClassLabel type that does this kind of mapping between the label ids (integers) and actual label values (strings).\r\n\r\nI wonder if actually we should use the DictionaryType from Arrow and the Categorical type from pandas for the `datasets` ClassLabel feature type.\r\nCurrently ClassLabel corresponds to `pa.int64()` in pyarrow and `dtype('int64')` in pandas (so the label names are lost during conversions).\r\n\r\nWhat do you think ?","Now that I've heard you explain ClassLabel, that makes a lot of sense! While DictionaryType for Arrow (I think) can have arbitrarily typed keys, so it won't cover all potential cases, pandas' Category is *probably* the most common use for that pyarrow type, and ClassLabel should match that perfectly?\r\n\r\nOther thoughts:\r\n\r\n- changing the resulting patype on ClassLabel might be backward-incompatible? I'm not totally sure if users of the `datasets` library tend to directly access the `patype` attribute (I don't think we really do, but we haven't been using it for very long yet).\r\n- would ClassLabel's dtype change to `dict[int64, string]`? 
It seems like in practice a ClassLabel (when not explicitly specified) would be constructed from the DictionaryType branch of `generate_from_arrow_type`, so it's not totally clear to me that anyone ever actually accesses\/uses that dtype?\r\n- I don't quite know how `.int2str` and `.str2int` are used in practice - would those be kept? Perhaps the implementation might actually be substantially smaller if we can just delegate to pyarrow's dict methods?\r\n\r\nAnother idea that just occurred to me: add a branch in here to generate a ClassLabel if the dict key is int64 and the values are string: https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/features.py#L932 , and then don't touch anything else.\r\n\r\nIn practice, I don't think this would be backward-incompatible in a way anyone would care about since the current behavior just throws an exception, and this way, we could support *reading* a pandas Categorical into a `Dataset` as a ClassLabel. I *think* from there, while it would require some custom glue it wouldn't be too hard to convert the ClassLabel into a pandas Category if we want to go back - I think this would improve on the current behavior without risking changing the behavior of ClassLabel in a backward-incompat way.\r\n\r\nThoughts? I'm not sure if this is overly cautious. Whichever approach you think is better, I'd be happy to take it on!\r\n","I think we can first keep the int64 precision but with an arrow Dictionary for ClassLabel, and focus on the connection with arrow and pandas.\r\n\r\nIn this scope, I really like the idea of checking for the dictionary type:\r\n\r\n> Another idea that just occurred to me: add a branch in here to generate a ClassLabel if the dict key is int64 and the values are string: https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/features.py#L932 , and then don't touch anything else.\r\n\r\nThis looks like a great start.\r\n\r\nThen as you said we'd have to add the conversion from classlabel to the correct arrow dictionary type. Arrow is already able to convert from arrow Dictionary to pandas Categorical so it should be enough.\r\n\r\nI can see two things that we must take case of to make this change backward compatible:\r\n- first we must still be able to load an arrow file with arrow int64 dtype and `datasets` ClassLabel type without crashing. This can be fixed by casting the arrow int64 array to an arrow Dictionary array on-the-fly when loading the table in the ArrowReader.\r\n- then we still have to return integers when accessing examples from a ClassLabel column. Currently it would return the strings values since it's based on the pandas behavior for converting from pandas to python\/numpy. To do so we just have to adapt the python\/numpy extractors in formatting.py (it takes care of converting an arrow table to a dictionary of python objects by doing arrow table -> pandas dataframe -> python dictionary)\r\n\r\nAny help on this matter is very much welcome :)"],"created_at":1613677565000,"updated_at":1614091130000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"```\r\nfrom datasets import Dataset\r\nimport pandas as pd\r\nimport pyarrow\r\n\r\ndf = pd.DataFrame(pd.Series([\"a\", \"b\", \"c\", \"a\"], dtype=\"category\"))\r\npyarrow.Table.from_pandas(df)\r\nDataset.from_pandas(df)\r\n# Throws NotImplementedError\r\n# TODO(thom) this will need access to the dictionary as well (for labels). I.e. 
to the py_table\r\n```\r\n\r\nI'm curious if https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/features.py#L796 could be built out in a way similar to `Sequence`?\r\n\r\ne.g. a `Map` class (or whatever name the maintainers might prefer) that can accept:\r\n\r\n```\r\nindex_type = generate_from_arrow_type(pa_type.index_type)\r\nvalue_type = generate_from_arrow_type(pa_type.value_type)\r\n```\r\n\r\nand then additional code points to modify:\r\n\r\n- FeatureType: https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/features.py#L694\r\n- A branch to handle Map in get_nested_type: https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/features.py#L719\r\n- I don't quite understand what `encode_nested_example` does but perhaps a branch there? https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/features.py#L755\r\n- Similarly, I don't quite understand why `Sequence` is used this way in `generate_from_dict`, but perhaps a branch here? https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/features.py#L775\r\n\r\nI couldn't find other usages of `Sequence` outside of defining specific datasets, so I'm not sure if that's a comprehensive set of touchpoints.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1906\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1905","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1905\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1905\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1905\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1905","id":811384174,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc1OTIxMDk1","number":1905,"title":"Standardizing datasets.dtypes","user":{"login":"justin-yan","id":7731709,"node_id":"MDQ6VXNlcjc3MzE3MDk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7731709?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/justin-yan","html_url":"https:\/\/github.com\/justin-yan","followers_url":"https:\/\/api.github.com\/users\/justin-yan\/followers","following_url":"https:\/\/api.github.com\/users\/justin-yan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/justin-yan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/justin-yan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/justin-yan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/justin-yan\/orgs","repos_url":"https:\/\/api.github.com\/users\/justin-yan\/repos","events_url":"https:\/\/api.github.com\/users\/justin-yan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/justin-yan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Also - I took a stab at updating the docs, but I'm not sure how to actually check the outputs to see if it's formatted 
properly."],"created_at":1613675731000,"updated_at":1613858490000,"closed_at":1613858490000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1905","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1905","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1905.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1905.patch"},"body":"This PR was further branched off of jdy-str-to-pyarrow-parsing, so it depends on https:\/\/github.com\/huggingface\/datasets\/pull\/1900 going first for the diff to be up-to-date (I'm not sure if there's a way for me to use jdy-str-to-pyarrow-parsing as a base branch while having it appear in the pull requests here).\r\n\r\nThis moves away from `str(pyarrow.DataType)` as the method of choice for creating dtypes, favoring an explicit mapping to a list of supported Value dtypes.\r\n\r\nI believe in practice this should be backward compatible, since anyone previously using Value() would only have been able to use dtypes that had an identically named pyarrow factory function, which are all explicitly supported here.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1905\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1904","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1904\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1904\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1904\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1904","id":811260904,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc1ODE4MjA0","number":1904,"title":"Fix to_pandas for boolean ArrayXD","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks!"],"created_at":1613665846000,"updated_at":1613668203000,"closed_at":1613668201000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1904","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1904","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1904.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1904.patch"},"body":"As noticed in #1887 the conversion of a 
dataset with a boolean ArrayXD feature types fails because of the underlying ListArray conversion to numpy requires `zero_copy_only=False`.\r\n\r\nzero copy is available for all primitive types except booleans\r\nsee https:\/\/arrow.apache.org\/docs\/python\/generated\/pyarrow.Array.html#pyarrow.Array.to_numpy\r\nand https:\/\/issues.apache.org\/jira\/browse\/ARROW-2871?jql=text%20~%20%22boolean%20to_numpy%22\r\n\r\ncc @SBrandeis ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1904\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1903","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1903\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1903\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1903\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1903","id":811145531,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc1NzIwOTk2","number":1903,"title":"Initial commit for the addition of TIMIT dataset","user":{"login":"vrindaprabhu","id":16264631,"node_id":"MDQ6VXNlcjE2MjY0NjMx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16264631?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vrindaprabhu","html_url":"https:\/\/github.com\/vrindaprabhu","followers_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/followers","following_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/orgs","repos_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/repos","events_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@patrickvonplaten could you please review and help me close this PR?","@lhoestq Thank you so much for your comments and for patiently reviewing the code. Have _hopefully_ included all the suggested changes. Let me know if any more changes are required.\r\n\r\nSorry the code had lots of silly errors from my side!:' Will be more careful from next time! :)\r\n\r\n\r\n"],"created_at":1613658192000,"updated_at":1614591552000,"closed_at":1614591552000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1903","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1903","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1903.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1903.patch"},"body":"Below points needs to be addressed:\r\n\r\n- Creation of dummy dataset is failing\r\n- Need to check on the data representation\r\n- License is not creative commons. Copyright: Portions \u00a9 1993 Trustees of the University of Pennsylvania\r\n\r\nAlso the links (_except the download_) point to the ami corpus! 
;-)\r\n\r\n@patrickvonplaten Requesting your comments, will be happy to address them!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1903\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1902","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1902\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1902\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1902\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1902","id":810931171,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc1NTQwMDM1","number":1902,"title":"Fix setimes_2 wmt urls","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1613641346000,"updated_at":1613642141000,"closed_at":1613642141000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1902","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1902","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1902.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1902.patch"},"body":"Continuation of #1901 \r\nSome other urls were missing https","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1902\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1901","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1901\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1901\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1901\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1901","id":810845605,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc1NDY5MDUy","number":1901,"title":"Fix OPUS dataset download 
errors","user":{"login":"YangWang92","id":3883941,"node_id":"MDQ6VXNlcjM4ODM5NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3883941?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/YangWang92","html_url":"https:\/\/github.com\/YangWang92","followers_url":"https:\/\/api.github.com\/users\/YangWang92\/followers","following_url":"https:\/\/api.github.com\/users\/YangWang92\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/YangWang92\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/YangWang92\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/YangWang92\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/YangWang92\/orgs","repos_url":"https:\/\/api.github.com\/users\/YangWang92\/repos","events_url":"https:\/\/api.github.com\/users\/YangWang92\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/YangWang92\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1613633981000,"updated_at":1613660840000,"closed_at":1613641161000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1901","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1901","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1901.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1901.patch"},"body":"Replace http to https.\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/issues\/854\r\n\r\nhttps:\/\/discuss.huggingface.co\/t\/cannot-download-wmt16\/2081\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1901\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1900","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1900\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1900\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1900\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1900","id":810512488,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc1MTkxNTc3","number":1900,"title":"Issue #1895: Bugfix for string_to_arrow timestamp[ns] 
support","user":{"login":"justin-yan","id":7731709,"node_id":"MDQ6VXNlcjc3MzE3MDk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7731709?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/justin-yan","html_url":"https:\/\/github.com\/justin-yan","followers_url":"https:\/\/api.github.com\/users\/justin-yan\/followers","following_url":"https:\/\/api.github.com\/users\/justin-yan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/justin-yan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/justin-yan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/justin-yan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/justin-yan\/orgs","repos_url":"https:\/\/api.github.com\/users\/justin-yan\/repos","events_url":"https:\/\/api.github.com\/users\/justin-yan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/justin-yan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["OK! Thank you for the review - I will follow up with a separate PR for the comments here (https:\/\/github.com\/huggingface\/datasets\/pull\/1900#discussion_r578319725)!"],"created_at":1613593564000,"updated_at":1613759231000,"closed_at":1613759231000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1900","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1900","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1900.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1900.patch"},"body":"Should resolve https:\/\/github.com\/huggingface\/datasets\/issues\/1895\r\n\r\nThe main part of this PR adds additional parsing in `string_to_arrow` to convert the timestamp dtypes that result from `str(pa_type)` back into the pa.DataType TimestampType.\r\n\r\nWhile adding unit-testing, I noticed that support for the double\/float types also don't invert correctly, so I added them, which I believe would hypothetically make this section of `Value` redundant:\r\n\r\n```\r\n def __post_init__(self):\r\n if self.dtype == \"double\": # fix inferred type\r\n self.dtype = \"float64\"\r\n if self.dtype == \"float\": # fix inferred type\r\n self.dtype = \"float32\"\r\n```\r\n\r\nHowever, since I think Value.dtype is part of the public interface, removing that would result in a backward-incompatible change, so I didn't muck with that.\r\n\r\nThe rest of the PR consists of docstrings that I added while developing locally so I could keep track of which functions were supposed to be inverses of each other, and thought I'd include them initially in case you want to keep them around, but I'm happy to delete or remove any of them at your request!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1900\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1899","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1899\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1899\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1899\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1899","id":810308332,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc1MDIxMjc4","number":1899,"title":"Fix: ALT - fix duplicated examples in alt-parallel","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1613577236000,"updated_at":1613582449000,"closed_at":1613582449000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1899","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1899","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1899.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1899.patch"},"body":"As noticed in #1898 by @10-zin the examples of the `alt-paralel` configurations have all the same values for the `translation` field.\r\nThis was due to a bad copy of a python dict.\r\n\r\nThis PR fixes that.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1899\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1898","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1898\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1898\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1898\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1898","id":810157251,"node_id":"MDU6SXNzdWU4MTAxNTcyNTE=","number":1898,"title":"ALT dataset has repeating instances in all 
splits","user":{"login":"10-zin","id":33179372,"node_id":"MDQ6VXNlcjMzMTc5Mzcy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33179372?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/10-zin","html_url":"https:\/\/github.com\/10-zin","followers_url":"https:\/\/api.github.com\/users\/10-zin\/followers","following_url":"https:\/\/api.github.com\/users\/10-zin\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/10-zin\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/10-zin\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/10-zin\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/10-zin\/orgs","repos_url":"https:\/\/api.github.com\/users\/10-zin\/repos","events_url":"https:\/\/api.github.com\/users\/10-zin\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/10-zin\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting. This looks like a very bad issue. I'm looking into it","I just merged a fix, we'll do a patch release soon. Thanks again for reporting, and sorry for the inconvenience.\r\nIn the meantime you can load `ALT` using `datasets` from the master branch","Thanks!!! 
works perfectly in the bleading edge master version","Closed by #1899"],"created_at":1613566302000,"updated_at":1613715526000,"closed_at":1613715526000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"The [ALT](https:\/\/huggingface.co\/datasets\/alt) dataset has all the same instances within each split :\/\r\nSeemed like a great dataset for some experiments I wanted to carry out, especially since its medium-sized, and has all splits.\r\n\r\nWould be great if this could be fixed :)\r\n\r\nAdded a snapshot of the contents from `explore-datset` feature, for quick reference.\r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/33179372\/108206321-442a2d00-714c-11eb-882f-b4b6e708ef9c.png)\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1898\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1897","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1897\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1897\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1897\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1897","id":810113263,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc0ODU3MTIy","number":1897,"title":"Fix PandasArrayExtensionArray conversion to native type","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1613562504000,"updated_at":1613567716000,"closed_at":1613567715000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1897","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1897","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1897.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1897.patch"},"body":"To make the conversion to csv work in #1887 , we need PandasArrayExtensionArray used for multidimensional numpy arrays to be converted to pandas native types.\r\nHowever previously pandas.core.internals.ExtensionBlock.to_native_types would fail with an PandasExtensionArray because\r\n1. the PandasExtensionArray.isna method was wrong\r\n2. 
the conversion of a PandasExtensionArray to a numpy array with dtype=object was returning a multidimensional array while pandas excepts a 1D array in this case (more info [here](https:\/\/pandas.pydata.org\/pandas-docs\/stable\/reference\/api\/pandas.api.extensions.ExtensionArray.html#pandas.api.extensions.ExtensionArray))\r\n\r\nI fixed these two issues and now the conversion to native types works, and so is the export to csv.\r\ncc @SBrandeis ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1897\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1895","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1895\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1895\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1895\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1895","id":809630271,"node_id":"MDU6SXNzdWU4MDk2MzAyNzE=","number":1895,"title":"Bug Report: timestamp[ns] not recognized","user":{"login":"justin-yan","id":7731709,"node_id":"MDQ6VXNlcjc3MzE3MDk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7731709?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/justin-yan","html_url":"https:\/\/github.com\/justin-yan","followers_url":"https:\/\/api.github.com\/users\/justin-yan\/followers","following_url":"https:\/\/api.github.com\/users\/justin-yan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/justin-yan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/justin-yan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/justin-yan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/justin-yan\/orgs","repos_url":"https:\/\/api.github.com\/users\/justin-yan\/repos","events_url":"https:\/\/api.github.com\/users\/justin-yan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/justin-yan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting !\r\n\r\nYou're right, `string_to_arrow` should be able to take `\"timestamp[ns]\"` as input and return the right pyarrow timestamp type.\r\nFeel free to suggest a fix for `string_to_arrow` and open a PR if you want to contribute ! This would be very appreciated :)\r\n\r\nTo give you more context:\r\n\r\nAs you may know we define the features types of a dataset using the `Features` object in combination with feature types like `Value`. 
For example\r\n```python\r\nfeatures = Features({\r\n \"age\": Value(\"int32\")\r\n})\r\n```\r\nHowever under the hood we are actually using pyarrow to store the data, and so we have a mapping between the feature types of `datasets` and the types of pyarrow.\r\n\r\nFor example, the `Value` feature types are created from a pyarrow type with `Value(str(pa_type))`.\r\nHowever it looks like the conversion back to a pyarrow type doesn't work with `\"timestamp[ns]\"`.\r\nThis is the `string_to_arrow` function you highlighted that does this conversion, so we should fix that.\r\n\r\n","Thanks for the clarification @lhoestq !\r\n\r\nThis may be a little bit of a stupid question, but I wanted to clarify one more thing before I took a stab at this:\r\n\r\nWhen the features get inferred, I believe they already have a pyarrow schema (https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/arrow_dataset.py#L234).\r\n\r\nWe then convert it to a string (https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/features.py#L778) only to convert it back into the arrow type (https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/features.py#L143, and https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/features.py#L35). Is there a reason for this round-trip?\r\n\r\nI'll open a PR later to add `timestamp` support to `string_to_arrow`, but I'd be curious to understand since it feels like there may be some opportunities to simplify!","The objective in terms of design is to make it easy to create Features in a pythonic way. So for example we use a string to define a Value type.\r\nThat's why when inferring the Features from an arrow schema we have to find the right string definitions for Value types. I guess we could also have a constructor `Value.from_arrow_type` to avoid recreating the arrow type, but this could create silent errors if the pyarrow type doesn't have a valid mapping with the string definition. The \"round-trip\" is used to enforce that the ground truth is the string definition, not the pyarrow type, and also as a sanity check.\r\n\r\nLet me know if that makes sense ","OK I think I understand now:\r\n\r\nFeatures are datasets' internal representation of a schema type, distinct from pyarrow's schema.\r\nValue() corresponds to pyarrow's \"primitive\" types (e.g. 
`int` or `string`, but not things like `list` or `dict`).\r\n`get_nested_type()` (https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/features.py#L698) and `generate_from_arrow_type()` (https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/features.py#L778) *should* be inverses of each other, and similarly, for the primitive values, `string_to_arrow()` and `Value.__call__` (https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/features.py#L146) should be inverses of each other?\r\n\r\nThanks for taking the time to answer - I just wanted to make sure I understood before opening a PR so I'm not disrupting anything about how the codebase is expected to work!","Yes you're totally right :)"],"created_at":1613507884000,"updated_at":1613759231000,"closed_at":1613759231000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Repro:\r\n\r\n```\r\nfrom datasets import Dataset\r\nimport pandas as pd\r\nimport pyarrow\r\n\r\ndf = pd.DataFrame(pd.date_range(\"2018-01-01\", periods=3, freq=\"H\"))\r\npyarrow.Table.from_pandas(df)\r\nDataset.from_pandas(df)\r\n# Throws ValueError: Neither timestamp[ns] nor timestamp[ns]_ seems to be a pyarrow data type.\r\n```\r\n\r\nThe factory function seems to be just \"timestamp\": https:\/\/arrow.apache.org\/docs\/python\/generated\/pyarrow.timestamp.html#pyarrow.timestamp\r\n\r\nIt seems like https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/features.py#L36-L43 could have a little bit of additional structure for handling these cases? I'd be happy to take a shot at opening a PR if I could receive some guidance on whether parsing something like `timestamp[ns]` and resolving it to timestamp('ns') is the goal of this method.\r\n\r\nAlternatively, if I'm using this incorrectly (e.g. 
is the expectation that we always provide a schema when timestamps are involved?), that would be very helpful to know as well!\r\n\r\n```\r\n$ pip list # only the relevant libraries\/versions\r\ndatasets 1.2.1\r\npandas 1.0.3\r\npyarrow 3.0.0\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1895\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1894","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1894\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1894\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1894\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1894","id":809609654,"node_id":"MDU6SXNzdWU4MDk2MDk2NTQ=","number":1894,"title":"benchmarking against MMapIndexedDataset","user":{"login":"sshleifer","id":6045025,"node_id":"MDQ6VXNlcjYwNDUwMjU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6045025?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sshleifer","html_url":"https:\/\/github.com\/sshleifer","followers_url":"https:\/\/api.github.com\/users\/sshleifer\/followers","following_url":"https:\/\/api.github.com\/users\/sshleifer\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sshleifer\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sshleifer\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sshleifer\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sshleifer\/orgs","repos_url":"https:\/\/api.github.com\/users\/sshleifer\/repos","events_url":"https:\/\/api.github.com\/users\/sshleifer\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sshleifer\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi sam !\r\nIndeed we can expect the performances to be very close since both MMapIndexedDataset and the `datasets` implem use memory mapping. With memory mapping what determines the I\/O performance is the speed of your hard drive\/SSD.\r\n\r\nIn terms of performance we're pretty close to the optimal speed for reading text, even though I found recently that we could still slightly improve speed for big datasets (see [here](https:\/\/github.com\/huggingface\/datasets\/issues\/1803)).\r\n\r\nIn terms of number of examples and example sizes, the only limit is the available disk space you have.\r\n\r\nI haven't used `psrecord` yet but it seems to be a very interesting tool for benchmarking. Currently for benchmarks we only have github actions to avoid regressions in terms of speed. But it would be cool to have benchmarks with comparisons with other dataset tools ! This would be useful to many people","Also I would be interested to know what data types `MMapIndexedDataset` supports. 
Is there some documentation somewhere ?","no docs haha, it's written to support integer numpy arrays.\r\n\r\nYou can build one in fairseq with, roughly:\r\n```bash\r\n\r\nwget https:\/\/s3.amazonaws.com\/research.metamind.io\/wikitext\/wikitext-103-raw-v1.zip\r\nunzip wikitext-103-raw-v1.zip\r\nexport dd=$HOME\/fairseq-py\/wikitext-103-raw\r\n\r\nexport mm_dir=$HOME\/mmap_wikitext2\r\nmkdir -p gpt2_bpe\r\nwget -O gpt2_bpe\/encoder.json https:\/\/dl.fbaipublicfiles.com\/fairseq\/gpt2_bpe\/encoder.json\r\nwget -O gpt2_bpe\/vocab.bpe https:\/\/dl.fbaipublicfiles.com\/fairseq\/gpt2_bpe\/vocab.bpe\r\nwget -O gpt2_bpe\/dict.txt https:\/\/dl.fbaipublicfiles.com\/fairseq\/gpt2_bpe\/dict.txt\r\nfor SPLIT in train valid; do \\\r\n python -m examples.roberta.multiprocessing_bpe_encoder \\\r\n --encoder-json gpt2_bpe\/encoder.json \\\r\n --vocab-bpe gpt2_bpe\/vocab.bpe \\\r\n --inputs \/scratch\/stories_small\/${SPLIT}.txt \\\r\n --outputs \/scratch\/stories_small\/${SPLIT}.bpe \\\r\n --keep-empty \\\r\n --workers 60; \\\r\ndone\r\n\r\nmkdir -p $mm_dir\r\nfairseq-preprocess \\\r\n --only-source \\\r\n --srcdict gpt2_bpe\/dict.txt \\\r\n --trainpref $dd\/wiki.train.bpe \\\r\n --validpref $dd\/wiki.valid.bpe \\\r\n --destdir $mm_dir \\\r\n --workers 60 \\\r\n --dataset-impl mmap\r\n```\r\n\r\nI'm noticing in my benchmarking that it's much smaller on disk than arrow (200mb vs 900mb), and that both incur significant cost by increasing the number of data loader workers. \r\nThis somewhat old [post](https:\/\/ray-project.github.io\/2017\/10\/15\/fast-python-serialization-with-ray-and-arrow.html) suggests there are some gains to be had from using `pyarrow.serialize(array).tobuffer()`. I haven't yet figured out how much of this stuff `pa.Table` does under the hood.\r\n\r\nThe `MMapIndexedDataset` bottlenecks we are working on improving (by using arrow) are:\r\n1) `MMapIndexedDataset`'s index, which stores offsets, basically gets read in its entirety by each dataloading process.\r\n2) we have separate, identical, `MMapIndexedDatasets` on each dataloading worker, so there's redundancy there; we wonder if there is a way that arrow can somehow dedupe these in shared memory.\r\n\r\nIt will take me a few hours to get `MMapIndexedDataset` benchmarks out of `fairseq`\/onto a branch in this repo, but I'm happy to invest the time if you're interested in collaborating on some performance hacking."],"created_at":1613505898000,"updated_at":1613587948000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"I am trying to benchmark my datasets based implementation against fairseq's [`MMapIndexedDataset`](https:\/\/github.com\/pytorch\/fairseq\/blob\/master\/fairseq\/data\/indexed_dataset.py#L365) and finding that, according to psrecord, my `datasets` implem uses about 3% more CPU memory and runs 1% slower for `wikitext103` (~1GB of tokens).\r\n\r\nQuestions:\r\n1) Is this (basically identical) performance expected? \r\n2) Is there a scenario where this library will outperform `MMapIndexedDataset`? (maybe more examples\/larger examples?)\r\n3) Should I be using different benchmarking tools than `psrecord`\/how do you guys do benchmarks?\r\n\r\nThanks in advance! 
Sam","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1894\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1893","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1893\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1893\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1893\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1893","id":809556503,"node_id":"MDU6SXNzdWU4MDk1NTY1MDM=","number":1893,"title":"wmt19 is broken","user":{"login":"stas00","id":10676103,"node_id":"MDQ6VXNlcjEwNjc2MTAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10676103?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stas00","html_url":"https:\/\/github.com\/stas00","followers_url":"https:\/\/api.github.com\/users\/stas00\/followers","following_url":"https:\/\/api.github.com\/users\/stas00\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stas00\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stas00\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stas00\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stas00\/orgs","repos_url":"https:\/\/api.github.com\/users\/stas00\/repos","events_url":"https:\/\/api.github.com\/users\/stas00\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stas00\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the 
library"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["This was also mentioned in https:\/\/github.com\/huggingface\/datasets\/issues\/488 \r\n\r\nThe bucket where is data was stored seems to be unavailable now. Maybe we can change the URL to the ones in https:\/\/conferences.unite.un.org\/uncorpus\/en\/downloadoverview ?","Closing since this has been fixed by #1912"],"created_at":1613500798000,"updated_at":1614793322000,"closed_at":1614793322000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"1. Check which lang pairs we have: `--dataset_name wmt19`:\r\n\r\nPlease pick one among the available configs: ['cs-en', 'de-en', 'fi-en', 'gu-en', 'kk-en', 'lt-en', 'ru-en', 'zh-en', 'fr-de']\r\n\r\n \r\n2. 
OK, let's pick `ru-en`:\r\n\r\n`--dataset_name wmt19 --dataset_config \"ru-en\"`\r\n\r\nno cookies:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \".\/run_seq2seq.py\", line 661, in \r\n main()\r\n File \".\/run_seq2seq.py\", line 317, in main\r\n datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/load.py\", line 740, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/builder.py\", line 572, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/builder.py\", line 628, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"\/home\/stas\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/wmt19\/436092de5f3faaf0fc28bc84875475b384e90a5470fa6afaee11039ceddc5052\/wmt_utils.py\", line 755, in _split_generators\r\n downloaded_files = dl_manager.download_and_extract(urls_to_download)\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/utils\/download_manager.py\", line 276, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/utils\/download_manager.py\", line 191, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/utils\/py_utils.py\", line 233, in map_nested\r\n mapped = [\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/utils\/py_utils.py\", line 234, in \r\n _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/utils\/py_utils.py\", line 190, in _single_map_nested\r\n mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/utils\/py_utils.py\", line 190, in \r\n mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/utils\/py_utils.py\", line 172, in _single_map_nested\r\n return function(data_struct)\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/utils\/download_manager.py\", line 211, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/utils\/file_utils.py\", line 274, in cached_path\r\n output_path = get_from_cache(\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/utils\/file_utils.py\", line 584, in get_from_cache\r\n raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\nFileNotFoundError: Couldn't find file at https:\/\/storage.googleapis.com\/tfdataset-data\/downloadataset\/uncorpus\/UNv1.0.en-ru.tar.gz\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1893\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1892","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1892\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1892\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1892\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1892","id":809554174,"node_id":"MDU6SXNzdWU4MDk1NTQxNzQ=","number":1892,"title":"request to mirror wmt datasets, as they are really slow to download","user":{"login":"stas00","id":10676103,"node_id":"MDQ6VXNlcjEwNjc2MTAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10676103?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stas00","html_url":"https:\/\/github.com\/stas00","followers_url":"https:\/\/api.github.com\/users\/stas00\/followers","following_url":"https:\/\/api.github.com\/users\/stas00\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stas00\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stas00\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stas00\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stas00\/orgs","repos_url":"https:\/\/api.github.com\/users\/stas00\/repos","events_url":"https:\/\/api.github.com\/users\/stas00\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stas00\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events",
"type":"User","site_admin":false}],"milestone":null,"comments":["Yes that would be awesome. Not only the download speeds are awful, but also some files are missing.\r\nWe list all the URLs in the datasets\/wmt19\/wmt_utils.py so we can make a script to download them all and host on S3.\r\nAlso I think most of the materials are under the CC BY-NC-SA 3.0 license (must double check) so it should be possible to redistribute the data with no issues.\r\n\r\ncc @patrickvonplaten who knows more about the wmt scripts","Yeah, the scripts are pretty ugly! A big refactor would make sense here...and I also remember that the datasets were veeery slow to download","I'm downloading them.\r\nI'm starting with the ones hosted on http:\/\/data.statmt.org which are the slowest ones","@lhoestq better to use our new git-based system than just raw S3, no? (that way we have built-in CDN etc.)","Closing since the urls were changed to mirror urls in #1912 "],"created_at":1613500571000,"updated_at":1616673203000,"closed_at":1616673203000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Would it be possible to mirror the wmt data files under hf? Some of them take hours to download and not because of the local speed. They are all quite small datasets, just extremely slow to download.\r\n\r\nThank you!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1892\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1891","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1891\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1891\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1891\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1891","id":809550001,"node_id":"MDU6SXNzdWU4MDk1NTAwMDE=","number":1891,"title":"suggestion to improve a missing dataset error","user":{"login":"stas00","id":10676103,"node_id":"MDQ6VXNlcjEwNjc2MTAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10676103?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stas00","html_url":"https:\/\/github.com\/stas00","followers_url":"https:\/\/api.github.com\/users\/stas00\/followers","following_url":"https:\/\/api.github.com\/users\/stas00\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stas00\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stas00\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stas00\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stas00\/orgs","repos_url":"https:\/\/api.github.com\/users\/stas00\/repos","events_url":"https:\/\/api.github.com\/users\/stas00\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stas00\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1613500153000,"updated_at":1613500214000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I was using `--dataset_name wmt19` all was good. 
Then thought perhaps wmt20 is out, so I tried to use `--dataset_name wmt20`, got 3 different errors (1 repeated twice), none telling me the real issue - that `wmt20` isn't in the `datasets`:\r\n\r\n```\r\nTrue, predict_with_generate=True)\r\nTraceback (most recent call last):\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/load.py\", line 323, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/utils\/file_utils.py\", line 274, in cached_path\r\n output_path = get_from_cache(\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/utils\/file_utils.py\", line 584, in get_from_cache\r\n raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\nFileNotFoundError: Couldn't find file at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/master\/datasets\/wmt20\/wmt20.py\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/load.py\", line 335, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/utils\/file_utils.py\", line 274, in cached_path\r\n output_path = get_from_cache(\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/utils\/file_utils.py\", line 584, in get_from_cache\r\n raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\nFileNotFoundError: Couldn't find file at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/master\/datasets\/wmt20\/wmt20.py\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \".\/run_seq2seq.py\", line 661, in \r\n main()\r\n File \".\/run_seq2seq.py\", line 317, in main\r\n datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/load.py\", line 706, in load_dataset\r\n module_path, hash, resolved_file_path = prepare_module(\r\n File \"\/mnt\/nvme1\/code\/huggingface\/datasets-master\/src\/datasets\/load.py\", line 343, in prepare_module\r\n raise FileNotFoundError(\r\nFileNotFoundError: Couldn't find file locally at wmt20\/wmt20.py, or remotely at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/master\/datasets\/wmt20\/wmt20.py.\r\nThe file is also not present on the master branch on github.\r\n```\r\n\r\nSuggestion: if it is not in a local path, check that there is an actual `https:\/\/github.com\/huggingface\/datasets\/tree\/master\/datasets\/wmt20` first and assert \"dataset `wmt20` doesn't exist in datasets\", rather than trying to find a load script - since the whole repo is not there.\r\n\r\nThe error occured when running:\r\n```\r\ncd examples\/seq2seq\r\nexport BS=16; rm -r output_dir; PYTHONPATH=..\/..\/src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python .\/run_seq2seq.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_eval --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --val_max_target_length 128 
--warmup_steps 500 --max_val_samples 500 --dataset_name wmt20 --dataset_config \"ro-en\" --source_prefix \"translate English to Romanian: \"\r\n```\r\n\r\nThanks.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1891\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1890","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1890\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1890\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1890\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1890","id":809395586,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc0MjY0OTMx","number":1890,"title":"Reformat dataset cards section titles","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1613488307000,"updated_at":1613488354000,"closed_at":1613488353000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1890","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1890","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1890.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1890.patch"},"body":"Titles are formatted like [Foo](#foo) instead of just Foo","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1890\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1889","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1889\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1889\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1889\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1889","id":809276015,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc0MTY1NDAz","number":1889,"title":"Implement to_dict and to_pandas for 
Dataset","user":{"login":"SBrandeis","id":33657802,"node_id":"MDQ6VXNlcjMzNjU3ODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33657802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SBrandeis","html_url":"https:\/\/github.com\/SBrandeis","followers_url":"https:\/\/api.github.com\/users\/SBrandeis\/followers","following_url":"https:\/\/api.github.com\/users\/SBrandeis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SBrandeis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SBrandeis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SBrandeis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SBrandeis\/orgs","repos_url":"https:\/\/api.github.com\/users\/SBrandeis\/repos","events_url":"https:\/\/api.github.com\/users\/SBrandeis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SBrandeis\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Next step is going to add these two in the documentation ^^"],"created_at":1613479099000,"updated_at":1613673757000,"closed_at":1613673754000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1889","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1889","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1889.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1889.patch"},"body":"With options to return a generator or the full dataset","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1889\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1888","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1888\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1888\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1888\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1888","id":809241123,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc0MTM2MDU4","number":1888,"title":"Docs for adding new column on formatted dataset","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Close 
#1872"],"created_at":1613475900000,"updated_at":1617112863000,"closed_at":1613476737000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1888","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1888","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1888.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1888.patch"},"body":"As mentioned in #1872 we should add in the documentation how the format gets updated when new columns are added\r\n\r\nClose #1872","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1888\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1887","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1887\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1887\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1887\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1887","id":809229809,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc0MTI2NTMy","number":1887,"title":"Implement to_csv for Dataset","user":{"login":"SBrandeis","id":33657802,"node_id":"MDQ6VXNlcjMzNjU3ODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33657802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SBrandeis","html_url":"https:\/\/github.com\/SBrandeis","followers_url":"https:\/\/api.github.com\/users\/SBrandeis\/followers","following_url":"https:\/\/api.github.com\/users\/SBrandeis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SBrandeis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SBrandeis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SBrandeis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SBrandeis\/orgs","repos_url":"https:\/\/api.github.com\/users\/SBrandeis\/repos","events_url":"https:\/\/api.github.com\/users\/SBrandeis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SBrandeis\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq I stumbled upon an interesting failure when adding tests for CSV serialization of `ArrayXD` features (see the failing unit tests in the CI)\r\n\r\nIt's due to the fact that booleans cannot be converted from arrow format to numpy without copy: https:\/\/arrow.apache.org\/docs\/python\/generated\/pyarrow.Array.html#pyarrow.Array.to_numpy","Good catch ! 
I must be able to fix that one by allowing copies for this kind of arrays.\r\nThis is the kind of surprise you get sometimes when playing with arrow x)","Raising this error for booleans was introduced in https:\/\/issues.apache.org\/jira\/browse\/ARROW-2871?jql=text%20~%20%22boolean%20to_numpy%22 without much explanations unfortunately.\r\nSo \"no copy\" only works for primitive types - except booleans.\r\nThis is confirmed in the source code at https:\/\/github.com\/wesm\/arrow\/blob\/c07b9b48cf3e0bbbab493992a492ae47e5b04cad\/python\/pyarrow\/array.pxi#L621\r\n\r\nI'm opening a PR to allow copies for booleans...","I just merged the fix for boolean ArrayXD, feel free to merge from master to see if it fixes the ci :)","@lhoestq unfirtunately, arrays of strings (or any other non-primitive type) require a copy too\r\n\r\nA list of primitive types can be found here: https:\/\/github.com\/wesm\/arrow\/blob\/c07b9b48cf3e0bbbab493992a492ae47e5b04cad\/python\/pyarrow\/types.pxi#L821\r\n\r\npyarrow provides a `is_primitive` function to check whether a type is primitive , I used it to set `zero_copy_only`\r\n\r\nAlso, `PandasArrayExtensionArray.isna` was using `numpy.isnan` which fails for arrays of strings. I replaced it with `pandas.isna`. Let me know what you think! :) "],"created_at":1613474849000,"updated_at":1613727719000,"closed_at":1613727719000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1887","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1887","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1887.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1887.patch"},"body":"cc @thomwolf \r\n\r\n`to_csv` supports passing either a file path or a *binary* file object\r\nThe writing is batched to avoid loading the whole table in memory","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1887\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1886","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1886\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1886\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1886\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1886","id":809221885,"node_id":"MDExOlB1bGxSZXF1ZXN0NTc0MTE5ODcz","number":1886,"title":"Common 
voice","user":{"login":"BirgerMoell","id":1704131,"node_id":"MDQ6VXNlcjE3MDQxMzE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1704131?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/BirgerMoell","html_url":"https:\/\/github.com\/BirgerMoell","followers_url":"https:\/\/api.github.com\/users\/BirgerMoell\/followers","following_url":"https:\/\/api.github.com\/users\/BirgerMoell\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/BirgerMoell\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/BirgerMoell\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/BirgerMoell\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/BirgerMoell\/orgs","repos_url":"https:\/\/api.github.com\/users\/BirgerMoell\/repos","events_url":"https:\/\/api.github.com\/users\/BirgerMoell\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/BirgerMoell\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Does it make sense to make the domains as the different languages?\r\nA problem is that you need to download the datasets from the browser.\r\nOne idea would be to either contact Mozilla regarding API access to the dataset or make use of a headless browser for downloading the datasets (might be hard since we have to figure out how to host them). An even more creative idea would be to host the dataset inside a torrent and figure out a way to download specific datasets from within that torrent.\r\n\r\nHere is some information about the download authorization. They are hosting the data on S3.\r\n\r\nhttps:\/\/docs.aws.amazon.com\/AmazonS3\/latest\/API\/sigv4-auth-using-authorization-header.html\r\n\r\nHere is an example of how a download link looks.\r\n\r\nhttps:\/\/mozilla-common-voice-datasets.s3.dualstack.us-west-2.amazonaws.com\/cv-corpus-6.1-2020-12-11\/nl.tar.gz?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIAQ3GQRTO3ND4UAQXB%2F20210217%2Fus-west-2%2Fs3%2Faws4_request&X-Amz-Date=20210217T080740Z&X-Amz-Expires=43200&X-Amz-Security-Token=FwoGZXIvYXdzEGIaDCC6ALh%2FwIK9ovvRdCKSBCs5WaSJNsZ2h0SnhpnWFv4yiAJHJTe%2BY6pBcCqadRMs0RABHeQ2n1QDACJ5V9WOqIHfMfT0AI%2Bfe6iFkTGLgRrJOMYpgV%2FmIBcXCjeb72r4ZvudMA8tprkSxZsEh53bJkIDQx1tXqfpz0yoefM0geD3461suEGhHnLIyiwffrUpRg%2BkNZN9%2FLZZXpF5F2pogieKKV533Jetkd1xlWOR%2Bem9R2bENu2RV563XX3JvbWxSYN9IHkVT1xwd4ZiOpUtX7%2F2RoluJUKw%2BUPpyml3J%2FOPPGdr7CyPLjqNxdq9ceRi8lRybty64XvNYZGt45VNTQ3pkTTz4VpUCJAGkgxq95Ve%2BOwW%2Fsc8JtblTFKrH11vej62NB7C0n7JPPS4SLKXHKW%2B7ZbybcNf3BnsAVouPdsGTMslcgkD81b9trnjyXJdOZkzdHUf2KcWVXVceEsZnMhcCZQ1cJpI7qXPEk8QrKCQcNByPLHmPIEdHpj9IrIBKDkl2qO7VX7CCB65WDt2eZRltOcNHXWVFXFktMdQOQztI1j0XSZz2iOX4jPKKaqz193VEytlAqmehNi8pePOnxkP9Z1SP7d3I6rayuBF3phmpHxw499tY3ECYYgoCnJ6QSFa3KxMjFmEpQlmjxuwEMHd4CDL2FJYGcCiIxbCcL1r8ZE3%2BbGdcu7PRsVCHX3Huh%2FqGIaF4h40FgteN6teyKCHKOebs4EGMipb9xmEMZ9ZbVopz4bkhLdMTrjKon9w624Xem0MTPqN7XY%2BB6lRgrW8rd4%3D&X-Amz-Signature=28eabdfce72a472a70b0f9e1e2c37fe1471b5ec8ed60614fbe900bfa97ae1ac8&X-Amz-SignedHeaders=host\r\n\r\nIt could be that we simply need to make a http-request with the right parameters and we can download the datasets.","> Wow, this looks great already! 
It's really a difficult dataset so thanks a lot for opening a PR.\r\n> I think the tagging tool is not too important for now and we can take a look at that later!\r\n> \r\n> At the moment, it would be very good to correctly generate some dummy data for all the possible languages. I think the structure of the `.tsv` file as you've noted in the PR is the one we want to use as the structure for `features = datasets.Features(`\r\n> \r\n> The splits `'Train\"`, `\"Test\"`, `\"Validation\"` look great to me! Because this is a special dataset that also has files called `\"Invalidated\"` I think the best option is to also add those as splits, _i.e._ `\"other\"`, `\"invalidated\"`, `\"reported\"`, `\"validated\"` . Those split names can be gives as shown here for example:\r\n> \r\n> https:\/\/github.com\/huggingface\/datasets\/blob\/28be129db862ec89a87ac9349c64df6b6118aff4\/datasets\/librispeech_asr\/librispeech_asr.py#L124\r\n> \r\n> Also putting @lhoestq in cc here to hear his opinion on the different splits. @lhoestq Common Voicie is a crowd collected dataset where if a collected data sample did not receive enough \"up_votes\" from the community -> then it is (If I understood it correctly) marked as invalid -> hence the file `\"invalidated.tsv\"`. I think this is still useful data, so I would include it what do you think?\r\n> \r\n> @BirgerMoell let me know if you have any more questions :-)\r\n\r\nI think reporting is a separate feature. People can help annotate the data and then they can report things while annotating.\r\nhttps:\/\/commonvoice.mozilla.org\/sv-SE\/listen\r\n\r\nHere is the interface that shows reporting and the thumbs up and down which gives upvotes and downvotes.\r\n\r\n","I added splits and features. I'm not sure how you want me to generate dummy data for all the languages?","Hey @BirgerMoell,\r\n\r\nI tweaked your dataset file a bit to have a first working version. 
To test this dataset downloading script, you can do the following:\r\n\r\n- 1) Download the Common Voice Georgian dataset from https:\/\/commonvoice.mozilla.org\/en\/datasets (It's pretty small which is why I chose it)\r\n- 2) Run the following command using this branch: \r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\".\/..\/datasets\/datasets\/common_voice\", \"Georgian\", data_dir=\".\/cv-corpus-6.1-2020-12-11\/ka\/\", split=\"train\")\r\n```\r\n\r\nNote that I'm loading a local version of the dataset script (`\".\/..\/datasets\/datasets\/common_voice\/\"` points to the folder in your branch) and that I also insert the downloaded data with the `data_dir` arg.\r\n\r\n-> You'll see that the data is correctly loaded and that `ds` contains all the information we need.\r\n\r\nNow there are a lot of different datasets on Common Voice, so it probably takes too much time to test all of those, but maybe you can test whether the current script works as well *e.g.* for Swedish, 3,4 other languages.\r\n\r\nIt would be very nice if we can use the exact same structure for all languages, meaning that we don't have to change the `datasets.Features(...)` structure depending on the language, but can use the exact same one for every language.\r\n\r\nIf everything works as expected we can then go over to cleaning the script and seeing how to add dummy data tests for it."],"created_at":1613474170000,"updated_at":1615315891000,"closed_at":1615315891000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1886","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1886","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1886.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1886.patch"},"body":"Started filling out information about the dataset and a dataset card.\r\n\r\nTo do\r\nCreate tagging file\r\nUpdate the common_voice.py file with more information","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1886\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1885","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1885\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1885\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1885\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1885","id":808881501,"node_id":"MDExOlB1bGxSZXF1ZXN0NTczODQyNzcz","number":1885,"title":"add missing info on how to add large 
files","user":{"login":"stas00","id":10676103,"node_id":"MDQ6VXNlcjEwNjc2MTAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10676103?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stas00","html_url":"https:\/\/github.com\/stas00","followers_url":"https:\/\/api.github.com\/users\/stas00\/followers","following_url":"https:\/\/api.github.com\/users\/stas00\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stas00\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stas00\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stas00\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stas00\/orgs","repos_url":"https:\/\/api.github.com\/users\/stas00\/repos","events_url":"https:\/\/api.github.com\/users\/stas00\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stas00\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1613432799000,"updated_at":1613492539000,"closed_at":1613475852000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1885","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1885","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1885.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1885.patch"},"body":"Thanks to @lhoestq's instructions I was able to add data files to a custom dataset repo. This PR is attempting to tell others how to do the same if they need to.\r\n\r\n@lhoestq ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1885\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1884","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1884\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1884\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1884\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1884","id":808755894,"node_id":"MDExOlB1bGxSZXF1ZXN0NTczNzQwNzI5","number":1884,"title":"dtype fix when using numpy 
arrays","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1613415325000,"updated_at":1627642878000,"closed_at":1627642878000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1884","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1884","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1884.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1884.patch"},"body":"As discussed in #625 this fix lets the user preserve the dtype of numpy array to pyarrow array which was getting lost due to conversion of numpy array -> list -> pyarrow array","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1884\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1883","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1883\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1883\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1883\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1883","id":808750623,"node_id":"MDExOlB1bGxSZXF1ZXN0NTczNzM2NTIz","number":1883,"title":"Add not-in-place implementations for several dataset 
transforms","user":{"login":"SBrandeis","id":33657802,"node_id":"MDQ6VXNlcjMzNjU3ODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33657802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SBrandeis","html_url":"https:\/\/github.com\/SBrandeis","followers_url":"https:\/\/api.github.com\/users\/SBrandeis\/followers","following_url":"https:\/\/api.github.com\/users\/SBrandeis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SBrandeis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SBrandeis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SBrandeis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SBrandeis\/orgs","repos_url":"https:\/\/api.github.com\/users\/SBrandeis\/repos","events_url":"https:\/\/api.github.com\/users\/SBrandeis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SBrandeis\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq I am not sure how to test `dictionary_encode_column` (in-place version was not tested before)","I can take a look at dictionary_encode_column tomorrow.\r\nAlthough it's likely that it doesn't work then. It was added at the beginning of the lib and never tested nor used afaik.","Now let's update the documentation to use the new methods x)"],"created_at":1613414666000,"updated_at":1614178489000,"closed_at":1614178406000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1883","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1883","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1883.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1883.patch"},"body":"Should we deprecate in-place versions of such methods?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1883\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1882","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1882\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1882\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1882\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1882","id":808716576,"node_id":"MDExOlB1bGxSZXF1ZXN0NTczNzA4OTEw","number":1882,"title":"Create Remote 
Manager","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq I have refactorized the logic. Instead of the previous hierarchy call (local temp file opening -> remote call -> use again temp local file logic but from within the remote caller scope), now it is flattened. Schematically:\r\n```python\r\nwith src.open() as src_file, dst.open() as dst_file:\r\n src_file.fetch(dst_file)\r\n```\r\n\r\nI have created `RemotePath` (analogue to Path) with method `.open()` that returns `FtpFile`\/`HttpFile` (analogue to file-like).\r\n\r\nNow I am going to implement `RemotePath.exists()` method (analogue to the Path's method) to check if remote resource is accessible, using `Ftp\/Http.head()`.","Quick update on this one:\r\nwe discussed offline with @albertvillanova on this PR and I think using `fsspec` can help a lot, since it already implements many parts of the abstraction we need to have nice download tools for both http and ftp (and others !)"],"created_at":1613410584000,"updated_at":1615220110000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1882","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1882","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1882.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1882.patch"},"body":"Refactoring to separate the concern of remote (HTTP\/FTP requests) management.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1882\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1881","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1881\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1881\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1881\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1881","id":808578200,"node_id":"MDExOlB1bGxSZXF1ZXN0NTczNTk1Nzkw","number":1881,"title":"`list_datasets()` returns a list of strings, not 
objects","user":{"login":"pminervini","id":227357,"node_id":"MDQ6VXNlcjIyNzM1Nw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/227357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/pminervini","html_url":"https:\/\/github.com\/pminervini","followers_url":"https:\/\/api.github.com\/users\/pminervini\/followers","following_url":"https:\/\/api.github.com\/users\/pminervini\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/pminervini\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/pminervini\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/pminervini\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/pminervini\/orgs","repos_url":"https:\/\/api.github.com\/users\/pminervini\/repos","events_url":"https:\/\/api.github.com\/users\/pminervini\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/pminervini\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1613398815000,"updated_at":1613401789000,"closed_at":1613401788000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1881","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1881","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1881.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1881.patch"},"body":"Here and there in the docs there is still stuff like this:\r\n\r\n```python\r\n>>> datasets_list = list_datasets()\r\n>>> print(', '.join(dataset.id for dataset in datasets_list))\r\n```\r\n\r\nHowever, my understanding is that `list_datasets()` returns a list of strings rather than a list of objects.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1881\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1880","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1880\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1880\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1880\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1880","id":808563439,"node_id":"MDExOlB1bGxSZXF1ZXN0NTczNTgzNjg0","number":1880,"title":"Update multi_woz_v22 
checksums","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1613397618000,"updated_at":1613398699000,"closed_at":1613398698000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1880","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1880","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1880.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1880.patch"},"body":"As noticed in #1876 the checksums of this dataset are outdated.\r\nI updated them in this PR","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1880\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1879","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1879\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1879\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1879\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1879","id":808541442,"node_id":"MDExOlB1bGxSZXF1ZXN0NTczNTY1NDAx","number":1879,"title":"Replace 
flatten_nested","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq. If you agree to merge this, I will start separating the logic for NestedDataStructure.map ;)"],"created_at":1613395780000,"updated_at":1613759714000,"closed_at":1613759714000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1879","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1879","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1879.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1879.patch"},"body":"Replace `flatten_nested` with `NestedDataStructure.flatten`.\r\n\r\nThis is a first step towards having all NestedDataStructure logic as a separated concern, independent of the caller\/user of the data structure.\r\n\r\nEventually, all checks (whether the underlying data is list, dict, etc.) 
will be only inside this class.\r\n\r\nI have also generalized the flattening, and now it handles multiple levels of nesting.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1879\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1878","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1878\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1878\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1878\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1878","id":808526883,"node_id":"MDExOlB1bGxSZXF1ZXN0NTczNTUyODk3","number":1878,"title":"Add LJ Speech dataset","user":{"login":"anton-l","id":26864830,"node_id":"MDQ6VXNlcjI2ODY0ODMw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26864830?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/anton-l","html_url":"https:\/\/github.com\/anton-l","followers_url":"https:\/\/api.github.com\/users\/anton-l\/followers","following_url":"https:\/\/api.github.com\/users\/anton-l\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/anton-l\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/anton-l\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/anton-l\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/anton-l\/orgs","repos_url":"https:\/\/api.github.com\/users\/anton-l\/repos","events_url":"https:\/\/api.github.com\/users\/anton-l\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/anton-l\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hey @anton-l,\r\n\r\nThanks a lot for the very clean integration!\r\n\r\n1) I think we should now start having \"automatic-speech-recognition\" as a label in the dataset tagger (@yjernite is it easy to add?). But we can surely add this dataset with the tag you've added and then later change the label to `asr` \r\n\r\n2) That's perfect! Yeah good question - we're currently thinking about a better design with @lhoestq \r\n\r\n3) Again tagging @yjernite & @lhoestq here - guess we should add this license though!","Thanks @anton-l for adding this one :)\r\nAbout the points you mentioned:\r\n1. Sure as soon as we've updated the tag sets in https:\/\/github.com\/huggingface\/datasets-tagging\/blob\/main\/task_set.json, we can update the tags in this dataset card and also in the other audio dataset card.\r\n2. For now we just try to have them as small as possible but we may switch to S3\/LFS at one point indeed\r\n3. If it's not part of the license set at https:\/\/github.com\/huggingface\/datasets-tagging\/blob\/main\/license_set.json we can add it to this license set\r\n\r\nFor now it's ok to have the other-* tags but we'll update them very soon","Let's merge this one and then we'll update the tags for the audio datasets. 
We'll probably also add something like this:\r\n```\r\ntype:\r\n- text\r\n- audio\r\n```\r\n\r\nThank you so much for adding this one, good job !"],"created_at":1613394642000,"updated_at":1613417981000,"closed_at":1613398689000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1878","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1878","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1878.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1878.patch"},"body":"This PR adds the LJ Speech dataset (https:\/\/keithito.com\/LJ-Speech-Dataset\/)\r\nAs requested by #1841 \r\nThe ASR format is based on #1767 \r\n\r\nThere are a couple of quirks that should be addressed:\r\n- I tagged this dataset as `other-other-automatic-speech-recognition` and `other-other-text-to-speech` (as classified by paperswithcode). Since the number of speech datasets is about to grow, maybe these categories should be added to the main list? \r\n- Similarly to #1767 this dataset uses only a single dummy sample to reduce the zip size (`wav`s are quite heavy). Is there a plan to allow LFS or S3 usage for dummy data in the repo?\r\n- The dataset is distributed under the Public Domain license, which is not used anywhere else in the repo, AFAIK. Do you think Public Domain is worth adding to the tagger app as well?\r\n\r\nPinging @patrickvonplaten to review","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1878\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1877","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1877\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1877\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1877\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1877","id":808462272,"node_id":"MDU6SXNzdWU4MDg0NjIyNzI=","number":1877,"title":"Allow concatenation of both in-memory and on-disk 
datasets","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["I started working on this. My idea is to first add the pyarrow Table wrappers InMemoryTable and MemoryMappedTable that both implement what's necessary regarding copy\/pickle. Then have another wrapper that takes the concatenation of InMemoryTable\/MemoryMappedTable objects.\r\n\r\nWhat's important here is that concatenating two tables into one doesn't double the memory used (`total_allocated_bytes()` stays the same).","Hi @lhoestq @albertvillanova,\r\n\r\nI checked the linked issues and PR, this seems like a great idea. 
Would you mind elaborating on the in-memory and memory-mapped datasets? \r\nBased on my understanding, it is something like this, please correct me if I am wrong:\r\n1. For in-memory datasets, we don't have any dataset files so the entire dataset is pickled to the cache during loading, and then whenever required it is unpickled .\r\n2. For on-disk\/memory-mapped datasets, we have the data files provided, so they can be re-loaded from the paths, and only the file-paths are stored while pickling.\r\n\r\nIf this is correct, will the feature also handle pickling\/unpickling of a concatenated dataset? Will this be cached?\r\n\r\nThis also leads me to ask whether datasets are chunked during pickling? \r\n\r\nThanks,\r\nGunjan","Hi ! Yes you're totally right about your two points :)\r\n\r\nAnd in the case of a concatenated dataset, then we should reload each sub-table depending on whether it's in-memory or memory mapped. That means the dataset will be made of several blocks in order to keep track of what's from memory and what's memory mapped. This allows to pickle\/unpickle concatenated datasets","Hi @lhoestq\r\n\r\nThanks, that sounds nice. Can you explain where the issue of the double memory may arise? Also, why is the existing `concatenate_datasets` not sufficient for this purpose?","Hi @lhoestq,\r\n\r\nWill the `add_item` feature also help with lazy writing (or no caching) during `map`\/`filter`?","> Can you explain where the issue of the double memory may arise?\r\n\r\nWe have to keep each block (in-memory vs memory mapped) separated in order to be able to reload them with pickle.\r\nOn the other hand we also need to have the full table from mixed in-memory and memory mapped data in order to iterate or extract data conveniently. That means that each block is accessible twice: once in the full table, and once in the separated blocks. But since pyarrow tables concatenation doesn't double the memory, then building the full table doesn't cost memory which is what we want :)\r\n\r\n> Also, why is the existing concatenate_datasets not sufficient for this purpose?\r\n\r\nThe existing `concatenate_datasets` doesn't support having both in-memory and memory mapped data together (there's no fancy block separation logic). It works for datasets fully in-memory or fully memory mapped but not a mix of the two.\r\n\r\n> Will the add_item feature also help with lazy writing (or no caching) during map\/filter?\r\n\r\nIt will enable the implementation of the fast, masked filter from this discussion: https:\/\/github.com\/huggingface\/datasets\/issues\/1949\r\nHowever I don't think this will affect map."],"created_at":1613389186000,"updated_at":1616777518000,"closed_at":1616777518000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"This is a prerequisite for the addition of the `add_item` feature (see #1870).\r\nCurrently there is one assumption that we would need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk (using the dataset._data_files).\r\nThis assumption is used for pickling for example:\r\n- in-memory dataset can just be pickled\/unpickled in-memory\r\n- on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling\r\n\r\nMaybe let's have a design that allows a Dataset to have a Table that can be rebuilt from heterogenous sources like in-memory tables or on-disk tables ? 
This could also be further extended in the future\r\n\r\nOne idea would be to define a list of sources and each source implements a way to reload its corresponding pyarrow Table.\r\nThen the dataset would be the concatenation of all these tables.\r\n\r\nDepending on the source type, the serialization using pickle would be different. In-memory data would be copied while on-disk data would simply be replaced by the path to these data.\r\n\r\nIf you have some ideas you would like to share about the design\/API feel free to do so :)\r\n\r\ncc @albertvillanova ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1877\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1876","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1876\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1876\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1876\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1876","id":808025859,"node_id":"MDU6SXNzdWU4MDgwMjU4NTk=","number":1876,"title":" load_dataset(\"multi_woz_v22\") NonMatchingChecksumError","user":{"login":"Vincent950129","id":5945326,"node_id":"MDQ6VXNlcjU5NDUzMjY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5945326?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Vincent950129","html_url":"https:\/\/github.com\/Vincent950129","followers_url":"https:\/\/api.github.com\/users\/Vincent950129\/followers","following_url":"https:\/\/api.github.com\/users\/Vincent950129\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Vincent950129\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Vincent950129\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Vincent950129\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Vincent950129\/orgs","repos_url":"https:\/\/api.github.com\/users\/Vincent950129\/repos","events_url":"https:\/\/api.github.com\/users\/Vincent950129\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Vincent950129\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting !\r\nThis is due to the changes made in the data files in the multiwoz repo: https:\/\/github.com\/budzianowski\/multiwoz\/pull\/59\r\nI'm opening a PR to update the checksums of the data files.","I just merged the fix. 
It will be available in the new release of `datasets` later today.\r\nYou'll be able to get the new version with\r\n```\r\npip install --upgrade datasets\r\n```","Hi, I still meet the error when loading the datasets after upgradeing datasets.\r\n\r\nraise NonMatchingChecksumError(error_msg + str(bad_urls))\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https:\/\/github.com\/budzianowski\/multiwoz\/raw\/master\/data\/MultiWOZ_2.2\/dialog_acts.json', 'https:\/\/github.com\/budzianowski\/multiwoz\/raw\/master\/data\/MultiWOZ_2.2\/test\/dialogues_001.json']","This must be related to https:\/\/github.com\/budzianowski\/multiwoz\/pull\/72\r\nThose files have changed, let me update the checksums for this dataset.\r\n\r\nFor now you can use `ignore_verifications=True` in `load_dataset` to skip the checksum verification."],"created_at":1613330088000,"updated_at":1628100480000,"closed_at":1628100480000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi, it seems that loading the multi_woz_v22 dataset gives a NonMatchingChecksumError.\r\n\r\nTo reproduce:\r\n\r\n`dataset = load_dataset('multi_woz_v22','v2.2_active_only',split='train')`\r\n\r\n\r\nThis will give the following error:\r\n\r\n```\r\n raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https:\/\/github.com\/budzianowski\/multiwoz\/raw\/master\/data\/MultiWOZ_2.2\/dialog_acts.json', 'https:\/\/github.com\/budzianowski\/multiwoz\/raw\/master\/data\/MultiWOZ_2.2\/train\/dialogues_001.json', 'https:\/\/github.com\/budzianowski\/multiwoz\/raw\/master\/data\/MultiWOZ_2.2\/train\/dialogues_003.json', 'https:\/\/github.com\/budzianowski\/multiwoz\/raw\/master\/data\/MultiWOZ_2.2\/train\/dialogues_004.json', 'https:\/\/github.com\/budzianowski\/multiwoz\/raw\/master\/data\/MultiWOZ_2.2\/train\/dialogues_005.json', 'https:\/\/github.com\/budzianowski\/multiwoz\/raw\/master\/data\/MultiWOZ_2.2\/train\/dialogues_006.json', 'https:\/\/github.com\/budzianowski\/multiwoz\/raw\/master\/data\/MultiWOZ_2.2\/train\/dialogues_007.json', 'https:\/\/github.com\/budzianowski\/multiwoz\/raw\/master\/data\/MultiWOZ_2.2\/train\/dialogues_008.json', 'https:\/\/github.com\/budzianowski\/multiwoz\/raw\/master\/data\/MultiWOZ_2.2\/train\/dialogues_009.json', 'https:\/\/github.com\/budzianowski\/multiwoz\/raw\/master\/data\/MultiWOZ_2.2\/train\/dialogues_010.json', 'https:\/\/github.com\/budzianowski\/multiwoz\/raw\/master\/data\/MultiWOZ_2.2\/train\/dialogues_012.json', 'https:\/\/github.com\/budzianowski\/multiwoz\/raw\/master\/data\/MultiWOZ_2.2\/train\/dialogues_013.json', 'https:\/\/github.com\/budzianowski\/multiwoz\/raw\/master\/data\/MultiWOZ_2.2\/train\/dialogues_014.json', 'https:\/\/github.com\/budzianowski\/multiwoz\/raw\/master\/data\/MultiWOZ_2.2\/train\/dialogues_015.json', 'https:\/\/github.com\/budzianowski\/multiwoz\/raw\/master\/data\/MultiWOZ_2.2\/train\/dialogues_016.json', 'https:\/\/github.com\/budzianowski\/multiwoz\/raw\/master\/data\/MultiWOZ_2.2\/train\/dialogues_017.json', 'https:\/\/github.com\/budzianowski\/multiwoz\/raw\/master\/data\/MultiWOZ_2.2\/dev\/dialogues_001.json', 'https:\/\/github.com\/budzianowski\/multiwoz\/raw\/master\/data\/MultiWOZ_2.2\/dev\/dialogues_002.json', 'https:\/\/github.com\/budzianowski\/multiwoz\/raw\/master\/data\/MultiWOZ_2.2\/test\/dialogues_001.json', 
'https:\/\/github.com\/budzianowski\/multiwoz\/raw\/master\/data\/MultiWOZ_2.2\/test\/dialogues_002.json']\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1876\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1875","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1875\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1875\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1875\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1875","id":807887267,"node_id":"MDExOlB1bGxSZXF1ZXN0NTczMDM2NzE0","number":1875,"title":"Adding sari metric","user":{"login":"ddhruvkr","id":6061911,"node_id":"MDQ6VXNlcjYwNjE5MTE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6061911?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ddhruvkr","html_url":"https:\/\/github.com\/ddhruvkr","followers_url":"https:\/\/api.github.com\/users\/ddhruvkr\/followers","following_url":"https:\/\/api.github.com\/users\/ddhruvkr\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ddhruvkr\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ddhruvkr\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ddhruvkr\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ddhruvkr\/orgs","repos_url":"https:\/\/api.github.com\/users\/ddhruvkr\/repos","events_url":"https:\/\/api.github.com\/users\/ddhruvkr\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ddhruvkr\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1613277515000,"updated_at":1613577387000,"closed_at":1613577387000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1875","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1875","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1875.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1875.patch"},"body":"Adding SARI metric that is used in evaluation of text simplification. 
This is required as part of the GEM benchmark.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1875\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1874","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1874\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1874\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1874\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1874","id":807786094,"node_id":"MDExOlB1bGxSZXF1ZXN0NTcyOTYzMjAy","number":1874,"title":"Adding Europarl Bilingual dataset","user":{"login":"lucadiliello","id":23355969,"node_id":"MDQ6VXNlcjIzMzU1OTY5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23355969?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lucadiliello","html_url":"https:\/\/github.com\/lucadiliello","followers_url":"https:\/\/api.github.com\/users\/lucadiliello\/followers","following_url":"https:\/\/api.github.com\/users\/lucadiliello\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lucadiliello\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lucadiliello\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lucadiliello\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lucadiliello\/orgs","repos_url":"https:\/\/api.github.com\/users\/lucadiliello\/repos","events_url":"https:\/\/api.github.com\/users\/lucadiliello\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lucadiliello\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["is there a way to check errors without subscribing to CircleCI? Because they want access to private repositories when logging.","I think you need to be logged in to check the errors unfortunately. Feel free to create an account with bitbucket maybe if you don't want it to access your private github repos","I've resolved some requirements, but I cannot create dummy data. The dataset works as follows: for each language pair `-` 3 files are downloaded:\r\n- dataset for ``\r\n- dataset for ``\r\n- alignments between `` and ``\r\n\r\nSuppose we work with the `bg-cs` language pair. Then, the dataset will download three `gzip` files which should be decompressed. I do not understand the relation between the folders created by the script to create dummy data and the original data provided by the download manager.","Hi ! Indeed the data files structure of this dataset looks very specific.\r\nThe command `datasets-cli dummy_data .\/datasets\/europarl_bilingual` shows some instructions for each split but let me add more details.\r\n\r\nFirst things to know is that the dummy data files need to be uncompressed data, so for example for the file `bg.zip` you should actually have one folder with all the xml files in it instead. In the same way, `bg-cs.xml.gz` must be replaced by an actual uncompressed xml file.\r\n\r\nLet's take the bg-cs config as an example. To make the dummy data you need to:\r\n- go to `.\/datasets\/europarl_bilingual\/dummy\/bg-cs\/8.0.0` and create a folder named `dummy_data`. 
Then go inside this folder\r\n- create a text file named `bg-cs.xml.gz` containing xml content (so without .gz compression). The xml content must have the same structure as the original `bg-cs.zml` but only include 1 `linkGrp` entry. You can pick one entry from the original `bg-cs.xml` file. Let's say this entry is about this file: `ep-06-01-16-003.xml`\r\n- create a folder named `bg.zip` and inside this folder add one file Europarl\/raw\/bg\/ep-06-01-16-003.xml. You can pick the xml file from the original `bg.zip` archive.\r\n- create a folder named `cs.zip` and inside this folder add one file Europarl\/raw\/cs\/ep-06-01-16-003.xml. You can pick the xml file from the original `cs.zip` archive.\r\n- zip the `dummy_data` into `dummy_data.zip`\r\n\r\nAt this point you have dummy data files to generate 1 example which is what we want to be able to test the dataset script `europarl_bilingual.py` with pytest. \r\n\r\nIn particular this will make this test pass:\r\n```\r\npytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_europarl_bilingual\r\n```\r\n\r\nIdeally it would be awesome to have dummy data for all the different configs so if we manage to make a script that generates all of it automatically that would be perfect. However since the structure is not trivial, another option would be to only have the dummy data for only 1 or 2 configs, like what we do for [bible_para](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/bible_para\/bible_para.py) for example. In `bible_para` only a few configurations are tested. As you can see there is only 6 configs in the `BUILDER_CONFIGS` attribute. All the other configs can still be used, here is what is said inside the dataset card of bible_para:\r\n```\r\nTo load a language pair which isn't part of the config, all you need to do is specify the language code as pairs.\r\nYou can find the valid pairs in Homepage section of Dataset Description: http:\/\/opus.nlpl.eu\/bible-uedin.php\r\nE.g.\r\n\r\n`dataset = load_dataset(\"bible_para\", lang1=\"fi\", lang2=\"hi\")`\r\n```\r\nIn this case the configuration \"fi-hi\" is simply created on the fly, instead of being picked from the `BUILDER_CONFIGS` list.\r\n\r\nI hope this helps, let me know if you have questions or if I can help","I already created the scripts to create reduced versions of the data. What I didn't understand was how to put files in the dummy_data folder because, as you noticed, some file decompress to a nested tree structure. I will now try again with your suggestions!","Is there something else I should do? If not can this be integrated?","Thanks a lot !!\r\nSince the set of all the dummy data files is quite big I only kept a few of them. 
If we had kept them all the size of the `datasets` repo would have increased too much :\/\r\nSo I did the same as for `bible_para`: only keep a few configurations in BUILDER_CONFIGS and have all the other pairs loadable with the lang1 and lang2 parameters like this:\r\n\r\n`dataset = load_dataset(\"europarl_bilingual\", lang1=\"fi\", lang2=\"fr\")`"],"created_at":1613235724000,"updated_at":1614854302000,"closed_at":1614854302000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1874","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1874","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1874.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1874.patch"},"body":"Implementation of Europarl bilingual dataset from described [here](https:\/\/opus.nlpl.eu\/Europarl.php).\r\n\r\nThis dataset allows to use every language pair detailed in the original dataset. The loading script manages also the small errors contained in the original dataset (in very rare cases (1 over 10M) there are some keys that references to inexistent sentences).\r\nI chose to follow the the style of a similar dataset available in this repository: `multi_para_crawl`.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1874\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1873","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1873\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1873\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1873\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1873","id":807750745,"node_id":"MDExOlB1bGxSZXF1ZXN0NTcyOTM4MTYy","number":1873,"title":"add 
iapp_wiki_qa_squad","user":{"login":"cstorm125","id":15519308,"node_id":"MDQ6VXNlcjE1NTE5MzA4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15519308?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cstorm125","html_url":"https:\/\/github.com\/cstorm125","followers_url":"https:\/\/api.github.com\/users\/cstorm125\/followers","following_url":"https:\/\/api.github.com\/users\/cstorm125\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cstorm125\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cstorm125\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cstorm125\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cstorm125\/orgs","repos_url":"https:\/\/api.github.com\/users\/cstorm125\/repos","events_url":"https:\/\/api.github.com\/users\/cstorm125\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cstorm125\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1613223267000,"updated_at":1613485318000,"closed_at":1613485318000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1873","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1873","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1873.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1873.patch"},"body":"`iapp_wiki_qa_squad` is an extractive question answering dataset from Thai Wikipedia articles.\r\nIt is adapted from [the original iapp-wiki-qa-dataset](https:\/\/github.com\/iapp-technology\/iapp-wiki-qa-dataset)\r\nto [SQuAD](https:\/\/rajpurkar.github.io\/SQuAD-explorer\/) format, resulting in\r\n5761\/742\/739 questions from 1529\/191\/192 articles.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1873\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1872","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1872\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1872\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1872\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1872","id":807711935,"node_id":"MDU6SXNzdWU4MDc3MTE5MzU=","number":1872,"title":"Adding a new column to the dataset after set_format was 
called","user":{"login":"villmow","id":2743060,"node_id":"MDQ6VXNlcjI3NDMwNjA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2743060?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/villmow","html_url":"https:\/\/github.com\/villmow","followers_url":"https:\/\/api.github.com\/users\/villmow\/followers","following_url":"https:\/\/api.github.com\/users\/villmow\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/villmow\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/villmow\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/villmow\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/villmow\/orgs","repos_url":"https:\/\/api.github.com\/users\/villmow\/repos","events_url":"https:\/\/api.github.com\/users\/villmow\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/villmow\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Indeed if you add a column to a formatted dataset, then the new dataset gets a new formatting in which:\r\n```\r\nnew formatted columns = (all columns - previously unformatted columns)\r\n```\r\nTherefore the new column is going to be formatted using the `torch` formatting.\r\n\r\nIf you want your new column to be unformatted you can re-run this line:\r\n```python\r\ndata.set_format(\"torch\", columns=[\"some_integer_column1\", \"some_integer_column2\"], output_all_columns=True)\r\n```","Hi, thanks that solved my problem. Maybe mention that in the documentation. ","Ok cool :) \r\nAlso I just did a PR to mention this behavior in the documentation","Closed by #1888"],"created_at":1613207675000,"updated_at":1617112905000,"closed_at":1617112905000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi, \r\n\r\nthanks for the nice library. I'm in the process of creating a custom dataset, which has a mix of tensors and lists of strings. I stumbled upon an error and want to know if its a problem on my side. \r\n\r\nI load some lists of strings and integers, then call `data.set_format(\"torch\", columns=[\"some_integer_column1\", \"some_integer_column2\"], output_all_columns=True)`. This converts the integer columns into tensors, but keeps the lists of strings as they are. I then call `map` to add a new column to my dataset, which is a **list of strings**. Once I iterate through my dataset, I get an error that the new column can't be converted into a tensor (which is probably caused by `set_format`). 
\r\n\r\nBelow some pseudo code:\r\n```python\r\n def augment_func(sample: Dict) -> Dict:\r\n # do something\r\n return {\r\n \"some_integer_column1\" : augmented_data[\"some_integer_column1\"], # <-- tensor\r\n \"some_integer_column2\" : augmented_data[\"some_integer_column2\"], # <-- tensor\r\n \"NEW_COLUMN\": targets, # <-- list of strings\r\n }\r\n\r\n\r\n data = datasets.load_dataset(__file__, data_dir=\"...\", split=\"train\")\r\n data.set_format(\"torch\", columns=[\"some_integer_column1\", \"some_integer_column2\"], output_all_columns=True)\r\n\r\n augmented_dataset = data.map(augment_func, batched=False)\r\n \r\n for sample in augmented_dataset:\r\n print(sample) # fails\r\n\r\n```\r\n\r\nand the exception:\r\n```python\r\nTraceback (most recent call last):\r\n File \"dataset.py\", line 487, in \r\n main()\r\n File \"dataset.py\", line 471, in main\r\n for sample in augmented_dataset:\r\n File \"lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 697, in __iter__\r\n yield self._getitem(\r\n File \"lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 1069, in _getitem\r\n outputs = self._convert_outputs(\r\n File \"lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 890, in _convert_outputs\r\n v = map_nested(command, v, **map_nested_kwargs)\r\n File \"lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py\", line 225, in map_nested\r\n return function(data_struct)\r\n File \"lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 850, in command\r\n return [map_nested(command, i, **map_nested_kwargs) for i in x]\r\n File \"lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 850, in \r\n return [map_nested(command, i, **map_nested_kwargs) for i in x]\r\n File \"lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py\", line 225, in map_nested\r\n return function(data_struct)\r\n File \"lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 850, in command\r\n return [map_nested(command, i, **map_nested_kwargs) for i in x]\r\n File \"lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 850, in \r\n return [map_nested(command, i, **map_nested_kwargs) for i in x]\r\n File \"lib\/python3.8\/site-packages\/datasets\/utils\/py_utils.py\", line 225, in map_nested\r\n return function(data_struct)\r\n File \"lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 851, in command\r\n return torch.tensor(x, **format_kwargs)\r\nTypeError: new(): invalid data type 'str'\r\n```\r\n\r\nThanks!\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1872\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1871","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1871\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1871\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1871\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1871","id":807697671,"node_id":"MDExOlB1bGxSZXF1ZXN0NTcyODk5Nzgz","number":1871,"title":"Add newspop 
dataset","user":{"login":"frankier","id":299380,"node_id":"MDQ6VXNlcjI5OTM4MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/299380?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/frankier","html_url":"https:\/\/github.com\/frankier","followers_url":"https:\/\/api.github.com\/users\/frankier\/followers","following_url":"https:\/\/api.github.com\/users\/frankier\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/frankier\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/frankier\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/frankier\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/frankier\/orgs","repos_url":"https:\/\/api.github.com\/users\/frankier\/repos","events_url":"https:\/\/api.github.com\/users\/frankier\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/frankier\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for the changes :)\r\nmerging"],"created_at":1613201483000,"updated_at":1615198365000,"closed_at":1615198365000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1871","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1871","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1871.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1871.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1871\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1870","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1870\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1870\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1870\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1870","id":807306564,"node_id":"MDExOlB1bGxSZXF1ZXN0NTcyNTc4Mjc4","number":1870,"title":"Implement Dataset 
add_item","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/3","html_url":"https:\/\/github.com\/huggingface\/datasets\/milestone\/3","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/milestones\/3\/labels","id":6644287,"node_id":"MDk6TWlsZXN0b25lNjY0NDI4Nw==","number":3,"title":"1.7","description":"Next minor release","creator":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"open_issues":0,"closed_issues":3,"state":"closed","created_at":1617974191000,"updated_at":1622478053000,"due_on":1620975600000,"closed_at":1622478053000},"comments":["Thanks @lhoestq for your remarks. Yes, I agree there are still many issues to be tackled... This PR is just a starting point, so that we can discuss how Dataset should be generalized.","Sure ! 
I opened an issue #1877 so we can discuss this specific aspect :)","I am going to implement this consolidation step in #2151.","Sounds good !","I retake this PR once the consolidation step is already implemented by #2151."],"created_at":1613142226000,"updated_at":1619172091000,"closed_at":1619172091000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1870","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1870","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1870.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1870.patch"},"body":"Implement `Dataset.add_item`.\r\n\r\nClose #1854.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1870\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1869","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1869\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1869\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1869\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1869","id":807159835,"node_id":"MDExOlB1bGxSZXF1ZXN0NTcyNDU0NTMy","number":1869,"title":"Remove outdated commands in favor of huggingface-cli","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1613129290000,"updated_at":1613146389000,"closed_at":1613146388000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1869","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1869","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1869.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1869.patch"},"body":"Removing the old user commands since `huggingface_hub` is going to be used instead.\r\ncc @julien-c ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1869\/timeline","performed_via_github_app":null,"is_pull_request":true} 
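As a hedged illustration of the `Dataset.add_item` proposal discussed above (the PR was later reworked around the consolidation step in #2151, so the final API may differ), usage could look roughly like this; the column names and values are made up for the example:

```python
from datasets import Dataset

# Hypothetical usage of the proposed Dataset.add_item; exact behavior depends
# on the final implementation after the consolidation step in #2151.
ds = Dataset.from_dict({"text": ["hello"], "label": [0]})
ds = ds.add_item({"text": "world", "label": 1})  # returns a new Dataset with one extra row
print(len(ds))  # 2
```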
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1868","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1868\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1868\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1868\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1868","id":807138159,"node_id":"MDExOlB1bGxSZXF1ZXN0NTcyNDM2MjA0","number":1868,"title":"Update oscar sizes","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1613127335000,"updated_at":1613127787000,"closed_at":1613127786000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1868","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1868","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1868.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1868.patch"},"body":"This commit https:\/\/github.com\/huggingface\/datasets\/commit\/837a152e4724adc5308e2c4481908c00a8d93383 removed empty lines from the oscar deduplicated datasets. This PR updates the size of each deduplicated dataset to fix possible `NonMatchingSplitsSizesError` errors. 
cc @cahya-wirawan","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1868\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1867","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1867\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1867\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1867\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1867","id":807127181,"node_id":"MDU6SXNzdWU4MDcxMjcxODE=","number":1867,"title":"ERROR WHEN USING SET_TRANSFORM() ","user":{"login":"alexvaca0","id":35173563,"node_id":"MDQ6VXNlcjM1MTczNTYz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35173563?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/alexvaca0","html_url":"https:\/\/github.com\/alexvaca0","followers_url":"https:\/\/api.github.com\/users\/alexvaca0\/followers","following_url":"https:\/\/api.github.com\/users\/alexvaca0\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/alexvaca0\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/alexvaca0\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/alexvaca0\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/alexvaca0\/orgs","repos_url":"https:\/\/api.github.com\/users\/alexvaca0\/repos","events_url":"https:\/\/api.github.com\/users\/alexvaca0\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/alexvaca0\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @alejandrocros it looks like an incompatibility with the current Trainer @sgugger \r\nIndeed currently the Trainer of `transformers` doesn't support a dataset with a transform\r\n\r\nIt looks like it comes from this line: https:\/\/github.com\/huggingface\/transformers\/blob\/f51188cbe74195c14c5b3e2e8f10c2f435f9751a\/src\/transformers\/trainer.py#L442\r\n\r\nThis line sets the format to not return certain unused columns. But this has two issues:\r\n1. it forgets to also set the format_kwargs (this causes the error you got):\r\n```python\r\ndataset.set_format(type=dataset.format[\"type\"], columns=columns, format_kwargs=dataset.format[\"format_kwargs\"])\r\n```\r\n2. the Trainer wants to keep only the fields that are used as input for a model. However for a dataset with a transform, the output fields are often different from the columns fields. For example from a column \"text\" in the dataset, the strings can be transformed on-the-fly into \"input_ids\". If you want your dataset to only output certain fields and not other you must change your transform function.\r\n","FYI that option can be removed with `remove_unused_columns = False` in your `TrainingArguments`, so there is a workaround @alexvaca0 while the fix in `Trainer` is underway.\r\n\r\n@lhoestq I think I will just use the line you suggested and if someone is using the columns that are removed in their transform they will need to change `remove_unused_columns` to `False`. We might switch the default of that argument in the next version if that proves too bug-proof.","I've tried your solutions @sgugger @lhoestq and the good news is that it throws no error. 
However, TPU training is taking forever, in 1 hour it has only trained 1 batch of 8192 elements, which doesn't make much sense... Is it possible that \"on the fly\" tokenization of batches is slowing down TPU training to that extent?","I'm pretty sure this is because of padding but @sgugger might know better","I don't know what the value of `padding` is in your lines of code pasted above so I can't say for sure. The first batch will be very slow on TPU since it compiles everything, so that's normal (1 hour is long but 8192 elements is also large). Then if your batches are not of the same lengths, it will recompile everything at each step instead of using the same graph, which will be very slow, so you should double check you are using padding to make everything the exact same shape. ","I have tried now on a GPU and it goes smooth! Amazing feature .set_transform() instead of .map()! Now I can pre-train my model without the hard disk limitation. Thanks for your work all HuggingFace team!! :clap: ","In the end, to make it work I turned to A-100 gpus instead of TPUS, among other changes. Set_transform doesn't work as expected and slows down training very much even in GPUs, and applying map destroys the disk, as it multiplies by 100 the size of the data passed to it (due to inefficient implementation converting strings to int64 floats I guess). For that reason, I chose to use datasets to load the data as text, and then edit the Collator from Transformers to tokenize every batch it receives before processing it. That way, I'm being able to train fast, without memory breaks, without the disk being unnecessarily filled, while making use of GPUs almost all the time I'm paying for them (the map function over the whole dataset took ~15hrs, in which you're not training at all). I hope this info helps others that are looking for training a language model from scratch cheaply, I'm going to close the issue as the optimal solution I found after many experiments to the problem posted in it is explained above. ","Great comment @alexvaca0 . I think that we could re-open the issue as a reformulation of why it takes so much space to save the arrow. Saving a 1% of oscar corpus takes more thank 600 GB (it breaks when it pass 600GB because it is the free memory that I have at this moment) when the full dataset is 1,3 TB. I have a 1TB M.2 NVMe disk that I can not train on because the saved .arrow files goes crazily big. If you can share your Collator I will be grateful. 
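Since the collator mentioned above was not shared in the thread, here is a minimal sketch of the idea being described: load the dataset as raw text and tokenize each batch inside the collator instead of pre-tokenizing the whole corpus with `map`. The class name and field names are illustrative, and the whole-word masking used in the original setup (`DataCollatorForWholeWordMask`) is omitted for brevity:

```python
from dataclasses import dataclass
from typing import Dict, List

import torch
from transformers import PreTrainedTokenizerBase


@dataclass
class TokenizeOnTheFlyCollator:
    """Sketch: the dataset yields raw text; tokenization happens per batch here."""

    tokenizer: PreTrainedTokenizerBase
    max_length: int = 512

    def __call__(self, examples: List[Dict[str, str]]) -> Dict[str, torch.Tensor]:
        texts = [example["text"] for example in examples]
        # Fixed-length padding keeps batch shapes constant, which also matters on TPU
        # (see the recompilation discussion earlier in this thread).
        return self.tokenizer(
            texts,
            padding="max_length",
            truncation=True,
            max_length=self.max_length,
            return_tensors="pt",
        )
```

For MLM pre-training, a masking step would still be applied on top of this batch of token ids.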
"],"created_at":1613126311000,"updated_at":1614607464000,"closed_at":1614168043000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi, I'm trying to use dataset.set_transform(encode) as @lhoestq told me in this issue: https:\/\/github.com\/huggingface\/datasets\/issues\/1825#issuecomment-774202797\r\n\r\nHowever, when I try to use Trainer from transformers with such dataset, it throws an error:\r\n\r\n```\r\nTypeError: __init__() missing 1 required positional argument: 'transform'\r\n[INFO|trainer.py:357] 2021-02-12 10:18:09,893 >> The following columns in the training set don't have a corresponding argument in `AlbertForMaskedLM.forward` and have been ignored: text.\r\nException in device=TPU:0: __init__() missing 1 required positional argument: 'transform'\r\nTraceback (most recent call last):\r\n File \"\/anaconda3\/envs\/torch-xla-1.7\/lib\/python3.6\/site-packages\/torch_xla\/distributed\/xla_multiprocessing.py\", line 330, in _mp_start_fn\r\n _start_fn(index, pf_cfg, fn, args)\r\n File \"\/anaconda3\/envs\/torch-xla-1.7\/lib\/python3.6\/site-packages\/torch_xla\/distributed\/xla_multiprocessing.py\", line 324, in _start_fn\r\n fn(gindex, *args)\r\n File \"\/home\/alejandro_vaca\/transformers\/examples\/language-modeling\/run_mlm_wwm.py\", line 368, in _mp_fn\r\n main()\r\n File \"\/home\/alejandro_vaca\/transformers\/examples\/language-modeling\/run_mlm_wwm.py\", line 332, in main\r\n data_collator=data_collator,\r\n File \"\/anaconda3\/envs\/torch-xla-1.7\/lib\/python3.6\/site-packages\/transformers\/trainer.py\", line 286, in __init__\r\n self._remove_unused_columns(self.train_dataset, description=\"training\")\r\n File \"\/anaconda3\/envs\/torch-xla-1.7\/lib\/python3.6\/site-packages\/transformers\/trainer.py\", line 359, in _remove_unused_columns\r\n dataset.set_format(type=dataset.format[\"type\"], columns=columns)\r\n File \"\/home\/alejandro_vaca\/datasets\/src\/datasets\/fingerprint.py\", line 312, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"\/home\/alejandro_vaca\/datasets\/src\/datasets\/arrow_dataset.py\", line 818, in set_format\r\n _ = get_formatter(type, **format_kwargs)\r\n File \"\/home\/alejandro_vaca\/datasets\/src\/datasets\/formatting\/__init__.py\", line 112, in get_formatter\r\n return _FORMAT_TYPES[format_type](**format_kwargs)\r\nTypeError: __init__() missing 1 required positional argument: 'transform'\r\n```\r\n\r\nThe code I'm using:\r\n\r\n```{python}\r\n\r\n def tokenize_function(examples):\r\n # Remove empty lines\r\n examples[\"text\"] = [line for line in examples[\"text\"] if len(line) > 0 and not line.isspace()]\r\n return tokenizer(examples[\"text\"], padding=padding, truncation=True, max_length=data_args.max_seq_length)\r\n\r\n datasets.set_transform(tokenize_function)\r\n\r\n data_collator = DataCollatorForWholeWordMask(tokenizer=tokenizer, mlm_probability=data_args.mlm_probability)\r\n\r\n # Initialize our Trainer\r\n trainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=datasets[\"train\"] if training_args.do_train else None,\r\n eval_dataset=datasets[\"val\"] if training_args.do_eval else None,\r\n tokenizer=tokenizer,\r\n data_collator=data_collator,\r\n )\r\n```\r\n\r\nI've installed from source, master branch.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1867\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1866","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1866\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1866\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1866\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1866","id":807017816,"node_id":"MDExOlB1bGxSZXF1ZXN0NTcyMzM3NDQ1","number":1866,"title":"Add dataset for Financial PhraseBank","user":{"login":"frankier","id":299380,"node_id":"MDQ6VXNlcjI5OTM4MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/299380?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/frankier","html_url":"https:\/\/github.com\/frankier","followers_url":"https:\/\/api.github.com\/users\/frankier\/followers","following_url":"https:\/\/api.github.com\/users\/frankier\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/frankier\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/frankier\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/frankier\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/frankier\/orgs","repos_url":"https:\/\/api.github.com\/users\/frankier\/repos","events_url":"https:\/\/api.github.com\/users\/frankier\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/frankier\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for the feedback. All accepted and metadata regenerated."],"created_at":1613115056000,"updated_at":1613571756000,"closed_at":1613571756000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1866","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1866","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1866.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1866.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1866\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1865","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1865\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1865\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1865\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1865","id":806388290,"node_id":"MDExOlB1bGxSZXF1ZXN0NTcxODE2ODI2","number":1865,"title":"Updated OPUS Open Subtitles Dataset with metadata 
information","user":{"login":"Valahaar","id":19476123,"node_id":"MDQ6VXNlcjE5NDc2MTIz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19476123?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Valahaar","html_url":"https:\/\/github.com\/Valahaar","followers_url":"https:\/\/api.github.com\/users\/Valahaar\/followers","following_url":"https:\/\/api.github.com\/users\/Valahaar\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Valahaar\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Valahaar\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Valahaar\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Valahaar\/orgs","repos_url":"https:\/\/api.github.com\/users\/Valahaar\/repos","events_url":"https:\/\/api.github.com\/users\/Valahaar\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Valahaar\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi !\r\nAbout the problems you mentioned:\r\n- Saving the infos is only done for the configurations inside the BUILDER_CONFIGS. Otherwise you would need to run the scripts on ALL language pairs, which is not what we want.\r\n- Moreover when you're on your branch, please specify the path to your local version of the dataset script, like \".\/datasets\/open_subtitles\". Otherwise the dataset is loaded from the master branch on github.\r\nHope that clarifies things a bit\r\n\r\nAnd of course feel free to add methods or classmethods to your builder.\r\n","Great! Thank you :)\r\nI'll close the issue as well."],"created_at":1613049986000,"updated_at":1613738289000,"closed_at":1613149184000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1865","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1865","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1865.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1865.patch"},"body":"Close #1844 \r\n\r\nProblems:\r\n- I ran `python datasets-cli test datasets\/open_subtitles --save_infos --all_configs`, hence the change in `dataset_infos.json`, but it appears that the metadata features have not been added for all pairs. Any idea why that might be?\r\n- Possibly related to the above, I tried doing `pip uninstall datasets && pip install -e \".[dev]\"` after the changes, and loading the dataset via `load_dataset(\"open_subtitles\", lang1='hi', lang2='it')` to check if the update worked, but the loaded dataset did not contain the metadata fields (neither in the features nor doing `next(iter(dataset['train']))`). What step(s) did I miss?\r\n\r\nQuestions:\r\n- Is it ok to have a `classmethod` in there? I have not seen any in the few other datasets I have checked. 
I could make it a local method of the `_generate_examples` method, but I'd rather not duplicate the logic...","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1865\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1864","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1864\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1864\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1864\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1864","id":806172843,"node_id":"MDU6SXNzdWU4MDYxNzI4NDM=","number":1864,"title":"Add Winogender Schemas","user":{"login":"NielsRogge","id":48327001,"node_id":"MDQ6VXNlcjQ4MzI3MDAx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/48327001?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/NielsRogge","html_url":"https:\/\/github.com\/NielsRogge","followers_url":"https:\/\/api.github.com\/users\/NielsRogge\/followers","following_url":"https:\/\/api.github.com\/users\/NielsRogge\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/NielsRogge\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/NielsRogge\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/NielsRogge\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/NielsRogge\/orgs","repos_url":"https:\/\/api.github.com\/users\/NielsRogge\/repos","events_url":"https:\/\/api.github.com\/users\/NielsRogge\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/NielsRogge\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Nevermind, this one is already available on the hub under the name `'wino_bias'`: https:\/\/huggingface.co\/datasets\/wino_bias"],"created_at":1613031518000,"updated_at":1613031591000,"closed_at":1613031591000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** Winogender Schemas\r\n- **Description:** Winogender Schemas (inspired by Winograd Schemas) are minimal pairs of sentences that differ only by the gender of one pronoun in the sentence, designed to test for the presence of gender bias in automated coreference resolution systems.\r\n- **Paper:** https:\/\/arxiv.org\/abs\/1804.09301\r\n- **Data:** https:\/\/github.com\/rudinger\/winogender-schemas (see data directory)\r\n- **Motivation:** Testing gender bias in automated coreference resolution systems, improve coreference resolution in general.\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1864\/timeline","performed_via_github_app":null,"is_pull_request":false} 
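Going back to the open_subtitles testing question a couple of entries above: the suggested way to check changes from a branch is to point `load_dataset` at the local script path, so that the version on the master branch is not used. A small sketch, reusing the language pair from that PR description:

```python
from datasets import load_dataset

# Load the dataset script from the local checkout rather than from master.
dataset = load_dataset("./datasets/open_subtitles", lang1="hi", lang2="it")
print(next(iter(dataset["train"])))  # inspect one example, e.g. to check the new metadata fields
```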
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1863","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1863\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1863\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1863\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1863","id":806171311,"node_id":"MDU6SXNzdWU4MDYxNzEzMTE=","number":1863,"title":"Add WikiCREM","user":{"login":"NielsRogge","id":48327001,"node_id":"MDQ6VXNlcjQ4MzI3MDAx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/48327001?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/NielsRogge","html_url":"https:\/\/github.com\/NielsRogge","followers_url":"https:\/\/api.github.com\/users\/NielsRogge\/followers","following_url":"https:\/\/api.github.com\/users\/NielsRogge\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/NielsRogge\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/NielsRogge\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/NielsRogge\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/NielsRogge\/orgs","repos_url":"https:\/\/api.github.com\/users\/NielsRogge\/repos","events_url":"https:\/\/api.github.com\/users\/NielsRogge\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/NielsRogge\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @NielsRogge I would like to work on this dataset.\r\n\r\nThanks!","Hi @udapy, are you working on this?"],"created_at":1613031360000,"updated_at":1615102033000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** WikiCREM\r\n- **Description:** A large unsupervised corpus for coreference resolution.\r\n- **Paper:** https:\/\/arxiv.org\/abs\/1905.06290\r\n- **Github repo:**: https:\/\/github.com\/vid-koci\/bert-commonsense\r\n- **Data:** https:\/\/ora.ox.ac.uk\/objects\/uuid:c83e94bb-7584-41a1-aef9-85b0e764d9e3\r\n- **Motivation:** Coreference resolution, common sense reasoning\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1863\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1862","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1862\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1862\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1862\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1862","id":805722293,"node_id":"MDExOlB1bGxSZXF1ZXN0NTcxMjc2ODAx","number":1862,"title":"Fix writing GPU Faiss 
index","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1612978323000,"updated_at":1612981068000,"closed_at":1612981067000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1862","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1862","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1862.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1862.patch"},"body":"As reported in by @corticalstack there is currently an error when we try to save a faiss index on GPU.\r\n\r\nI fixed that by checking the index `getDevice()` method before calling `index_gpu_to_cpu`\r\n\r\nClose #1859 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1862\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1861","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1861\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1861\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1861\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1861","id":805631215,"node_id":"MDExOlB1bGxSZXF1ZXN0NTcxMjAwNjA1","number":1861,"title":"Fix Limit 
url","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1612971896000,"updated_at":1612973700000,"closed_at":1612973699000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1861","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1861","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1861.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1861.patch"},"body":"The test.json file of the Literal-Motion-in-Text (LiMiT) dataset was removed recently on the master branch of the repo at https:\/\/github.com\/ilmgut\/limit_dataset\r\n\r\nThis PR uses the previous commit sha to download the file instead, as suggested by @Paethon\r\n\r\nClose #1836 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1861\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1860","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1860\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1860\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1860\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1860","id":805510037,"node_id":"MDExOlB1bGxSZXF1ZXN0NTcxMDk4OTIz","number":1860,"title":"Add loading from the Datasets Hub + add relative paths in download 
manager","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I just added the steps to share a dataset on the datasets hub. It's highly inspired by the steps to share a model in the `transformers` doc.\r\n\r\nMoreover once the new huggingface_hub is released we can update the version in the setup.py. We also need to update the command to create a dataset repo in the documentation\r\n\r\nI added a few more tests with the \"lhoestq\/test\" dataset I added on the hub and it works fine :) ","Here is the PR adding support for datasets repos in `huggingface_hub`: https:\/\/github.com\/huggingface\/huggingface_hub\/pull\/14"],"created_at":1612963451000,"updated_at":1613157210000,"closed_at":1613157209000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1860","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1860","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1860.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1860.patch"},"body":"With the new Datasets Hub on huggingface.co it's now possible to have a dataset repo with your own script and data.\r\nFor example: https:\/\/huggingface.co\/datasets\/lhoestq\/custom_squad\/tree\/main contains one script and two json files.\r\n\r\nYou can load it using\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nd = load_dataset(\"lhoestq\/custom_squad\")\r\n```\r\n\r\nTo be able to use the data files that live right next to the dataset script on the repo in the hub, I added relative paths support for the DownloadManager. For example in the repo mentioned above, there are two json files that can be downloaded via\r\n```python\r\n_URLS = {\r\n \"train\": \"train-v1.1.json\",\r\n \"dev\": \"dev-v1.1.json\",\r\n}\r\ndownloaded_files = dl_manager.download_and_extract(_URLS)\r\n```\r\n\r\nTo make it work, I set the `base_path` of the DownloadManager to be the parent path of the dataset script (which comes from either a local path or a remote url).\r\n\r\nI also had to add the auth header of the requests to huggingface.co for private datasets repos. 
The token is fetched from [huggingface_hub](https:\/\/github.com\/huggingface\/huggingface_hub).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1860\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1859","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1859\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1859\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1859\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1859","id":805479025,"node_id":"MDU6SXNzdWU4MDU0NzkwMjU=","number":1859,"title":"Error \"in void don't know how to serialize this type of index\" when saving index to disk when device=0 (GPU)","user":{"login":"corticalstack","id":3995321,"node_id":"MDQ6VXNlcjM5OTUzMjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3995321?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/corticalstack","html_url":"https:\/\/github.com\/corticalstack","followers_url":"https:\/\/api.github.com\/users\/corticalstack\/followers","following_url":"https:\/\/api.github.com\/users\/corticalstack\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/corticalstack\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/corticalstack\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/corticalstack\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/corticalstack\/orgs","repos_url":"https:\/\/api.github.com\/users\/corticalstack\/repos","events_url":"https:\/\/api.github.com\/users\/corticalstack\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/corticalstack\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @corticalstack ! Thanks for reporting. Indeed in the recent versions of Faiss we must use `getDevice` to check if the index in on GPU.\r\n\r\nI'm opening a PR","I fixed this issue. It should work fine now.\r\nFeel free to try it out by installing `datasets` from source.\r\nOtherwise you can wait for the next release of `datasets` (in a few days)","Thanks for such a quick fix and merge to master, pip installed git master, tested all OK"],"created_at":1612960860000,"updated_at":1612981932000,"closed_at":1612981067000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Error serializing faiss index. Error as follows:\r\n\r\n`Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at \/home\/conda\/feedstock_root\/build_artifacts\/faiss-split_1612472484670\/work\/faiss\/impl\/index_write.cpp:453: don't know how to serialize this type of index`\r\n\r\n\r\nNote:\r\n\r\n`torch.cuda.is_available()` reports:\r\n\r\n```\r\nCuda is available\r\ncuda:0\r\n\r\n```\r\n\r\nAdding index, device=0 for GPU.\r\n\r\n`dataset.add_faiss_index(column='embeddings', index_name='idx_embeddings', device=0)`\r\n\r\nHowever, during a quick debug, self.faiss_index has no attr \"device\" when checked in` search.py, method save`, so fails to transform gpu index to cpu index. 
If I add index without device, index is saved OK.\r\n\r\n\r\n```\r\ndef save(self, file: str):\r\n \"\"\"Serialize the FaissIndex on disk\"\"\"\r\n import faiss # noqa: F811\r\n\r\n if (\r\n hasattr(self.faiss_index, \"device\")\r\n and self.faiss_index.device is not None\r\n and self.faiss_index.device > -1\r\n ):\r\n index = faiss.index_gpu_to_cpu(self.faiss_index)\r\n else:\r\n index = self.faiss_index\r\n faiss.write_index(index, file)\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1859\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1858","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1858\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1858\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1858\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1858","id":805477774,"node_id":"MDExOlB1bGxSZXF1ZXN0NTcxMDcxNzIx","number":1858,"title":"Clean config getenvs","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1612960754000,"updated_at":1612972350000,"closed_at":1612972349000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1858","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1858","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1858.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1858.patch"},"body":"Following #1848 \r\nRemove double getenv calls and fix one issue with rarfile\r\n\r\ncc @albertvillanova ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1858\/timeline","performed_via_github_app":null,"is_pull_request":true} 
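The `save` snippet quoted in the issue above checks a `device` attribute that recent faiss GPU indexes no longer expose, so the check silently falls through and `write_index` then fails on the GPU index. Below is a sketch of the `getDevice()`-based check described in the fix (#1862); it assumes the faiss Python bindings where GPU indexes provide `getDevice()`:

```python
import faiss  # noqa: F811

def save(faiss_index, file: str):
    """Serialize a faiss index to disk, moving it to CPU first if it lives on GPU."""
    # Recent faiss GPU indexes expose getDevice() instead of a `device` attribute.
    if hasattr(faiss_index, "getDevice") and faiss_index.getDevice() > -1:
        index = faiss.index_gpu_to_cpu(faiss_index)
    else:
        index = faiss_index
    faiss.write_index(index, file)
```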
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1857","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1857\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1857\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1857\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1857","id":805391107,"node_id":"MDU6SXNzdWU4MDUzOTExMDc=","number":1857,"title":"Unable to upload \"community provided\" dataset - 400 Client Error","user":{"login":"mwrzalik","id":1376337,"node_id":"MDQ6VXNlcjEzNzYzMzc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1376337?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mwrzalik","html_url":"https:\/\/github.com\/mwrzalik","followers_url":"https:\/\/api.github.com\/users\/mwrzalik\/followers","following_url":"https:\/\/api.github.com\/users\/mwrzalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mwrzalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mwrzalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mwrzalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mwrzalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/mwrzalik\/repos","events_url":"https:\/\/api.github.com\/users\/mwrzalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mwrzalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! We're in the process of switching the community datasets to git repos, exactly like what we're doing for models.\r\nYou can find an example here:\r\nhttps:\/\/huggingface.co\/datasets\/lhoestq\/custom_squad\/tree\/main\r\n\r\nWe'll update the CLI in the coming days and do a new release :)\r\n\r\nAlso cc @julien-c maybe we can make improve the error message ?"],"created_at":1612953541000,"updated_at":1627967173000,"closed_at":1627967173000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\ni'm trying to a upload a dataset as described [here](https:\/\/huggingface.co\/docs\/datasets\/v1.2.0\/share_dataset.html#sharing-a-community-provided-dataset). This is what happens:\r\n\r\n``` \r\n$ datasets-cli login\r\n$ datasets-cli upload_dataset my_dataset\r\nAbout to upload file \/path\/to\/my_dataset\/dataset_infos.json to S3 under filename my_dataset\/dataset_infos.json and namespace username\r\nAbout to upload file \/path\/to\/my_dataset\/my_dataset.py to S3 under filename my_dataset\/my_dataset.py and namespace username\r\nProceed? [Y\/n] Y\r\nUploading... This might take a while if files are large\r\n400 Client Error: Bad Request for url: https:\/\/huggingface.co\/api\/datasets\/presign\r\nhuggingface.co migrated to a new model hosting system.\r\nYou need to upgrade to transformers v3.5+ to upload new models.\r\nMore info at https:\/\/discuss.hugginface.co or https:\/\/twitter.com\/julien_c. Thank you! 
\r\n```\r\nI'm using the latest releases of datasets and transformers.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1857\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1856","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1856\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1856\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1856\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1856","id":805360200,"node_id":"MDU6SXNzdWU4MDUzNjAyMDA=","number":1856,"title":"load_dataset(\"amazon_polarity\") NonMatchingChecksumError","user":{"login":"yanxi0830","id":19946372,"node_id":"MDQ6VXNlcjE5OTQ2Mzcy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19946372?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yanxi0830","html_url":"https:\/\/github.com\/yanxi0830","followers_url":"https:\/\/api.github.com\/users\/yanxi0830\/followers","following_url":"https:\/\/api.github.com\/users\/yanxi0830\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yanxi0830\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yanxi0830\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yanxi0830\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yanxi0830\/orgs","repos_url":"https:\/\/api.github.com\/users\/yanxi0830\/repos","events_url":"https:\/\/api.github.com\/users\/yanxi0830\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yanxi0830\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! This issue may be related to #996 \r\nThis comes probably from the Quota Exceeded error from Google Drive.\r\nCan you try again tomorrow and see if you still have the error ?\r\n\r\nOn my side I didn't get any error today with `load_dataset(\"amazon_polarity\")`","+1 encountering this issue as well","@lhoestq Hi! I encounter the same error when loading `yelp_review_full`.\r\n\r\n```\r\nfrom datasets import load_dataset\r\ndataset_yp = load_dataset(\"yelp_review_full\")\r\n```\r\n\r\nWhen you say the \"Quota Exceeded from Google drive\". Is this a quota from the dataset owner? or the quota from our (the runner) Google Drive?","+1 Also encountering this issue","> When you say the \"Quota Exceeded from Google drive\". Is this a quota from the dataset owner? or the quota from our (the runner) Google Drive?\r\n\r\nEach file on Google Drive can be downloaded only a certain amount of times per day because of a quota. The quota is reset every day. So if too many people download the dataset the same day, then the quota is likely to exceed.\r\nThat's a really bad limitations of Google Drive and we should definitely find another host for these dataset than Google Drive.\r\nFor now I would suggest to wait and try again later..\r\n\r\nSo far the issue happened with CNN DailyMail, Amazon Polarity and Yelp Reviews. \r\nAre you experiencing the issue with other datasets ? 
@calebchiam @dtch1997 ","@lhoestq Gotcha, that is quite problematic...for what it's worth, I've had no issues with the other datasets I tried, such as `yelp_reviews_full` and `amazon_reviews_multi`.","Same issue today with \"big_patent\", though the symptoms are slightly different.\r\n\r\nWhen running\r\n\r\n```py\r\nfrom datasets import load_dataset\r\nload_dataset(\"big_patent\", split=\"validation\")\r\n```\r\n\r\nI get the following\r\n`FileNotFoundError: Local file \\huggingface\\datasets\\downloads\\6159313604f4f2c01e7d1cac52139343b6c07f73f6de348d09be6213478455c5\\bigPatentData\\train.tar.gz doesn't exist`\r\n\r\nI had to look into `6159313604f4f2c01e7d1cac52139343b6c07f73f6de348d09be6213478455c5` (which is a file instead of a folder) and got the following:\r\n\r\n`Google Drive - Quota exceeded<\/title><meta http-equiv=\"content-type\" content=\"text\/html; charset=utf-8\"\/><link href=/static/doclist/client/css/4033072956-untrustedcontent.css rel=\"stylesheet\" nonce=\"JV0t61Smks2TEKdFCGAUFA\"><link rel=\"icon\" href=\"\/\/ssl.gstatic.com\/images\/branding\/product\/1x\/drive_2020q4_32dp.png\"\/><style nonce=\"JV0t61Smks2TEKdFCGAUFA\">#gbar,#guser{font-size:13px;padding-top:0px !important;}#gbar{height:22px}#guser{padding-bottom:7px !important;text-align:right}.gbh,.gbd{border-top:1px solid #c9d7f1;font-size:1px}.gbh{height:0;position:absolute;top:24px;width:100%}@media all{.gb1{height:22px;margin-right:.5em;vertical-align:top}#gbar{float:left}}a.gb1,a.gb4{text-decoration:underline !important}a.gb1,a.gb4{color:#00c !important}.gbi .gb4{color:#dd8e27 !important}.gbf .gb4{color:#900 !important}\r\n<\/style><script nonce=\"iNUHigT+ENVQ3UZrLkFtRw\"><\/script><\/head><body><div id=gbar><nobr><a target=_blank class=gb1 href=\"https:\/\/www.google.fr\/webhp?tab=ow\">Search<\/a> <a target=_blank class=gb1 href=\"http:\/\/www.google.fr\/imghp?hl=en&tab=oi\">Images<\/a> <a target=_blank class=gb1 href=\"https:\/\/maps.google.fr\/maps?hl=en&tab=ol\">Maps<\/a> <a target=_blank class=gb1 href=\"https:\/\/play.google.com\/?hl=en&tab=o8\">Play<\/a> <a target=_blank class=gb1 href=\"https:\/\/www.youtube.com\/?gl=FR&tab=o1\">YouTube<\/a> <a target=_blank class=gb1 href=\"https:\/\/news.google.com\/?tab=on\">News<\/a> <a target=_blank class=gb1 href=\"https:\/\/mail.google.com\/mail\/?tab=om\">Gmail<\/a> <b class=gb1>Drive<\/b> <a target=_blank class=gb1 style=\"text-decoration:none\" href=\"https:\/\/www.google.fr\/intl\/en\/about\/products?tab=oh\"><u>More<\/u> »<\/a><\/nobr><\/div><div id=guser width=100%><nobr><span id=gbn class=gbi><\/span><span id=gbf class=gbf><\/span><span id=gbe><\/span><a target=\"_self\" href=\"\/settings?hl=en_US\" class=gb4>Settings<\/a> | <a target=_blank href=\"\/\/support.google.com\/drive\/?p=web_home&hl=en_US\" class=gb4>Help<\/a> | <a target=_top id=gb_70 href=\"https:\/\/accounts.google.com\/ServiceLogin?hl=en&passive=true&continue=https:\/\/drive.google.com\/uc%3Fexport%3Ddownload%26id%3D1J3mucMFTWrgAYa3LuBZoLRR3CzzYD3fa&service=writely&ec=GAZAMQ\" class=gb4>Sign in<\/a><\/nobr><\/div><div class=gbh style=left:0><\/div><div class=gbh style=right:0><\/div><div class=\"uc-main\"><div id=\"uc-text\"><p class=\"uc-error-caption\">Sorry, you can't view or download this file at this time.<\/p><p class=\"uc-error-subcaption\">Too many users have viewed or downloaded this file recently. Please try accessing the file again later. 
If the file you are trying to access is particularly large or is shared with many people, it may take up to 24 hours to be able to view or download the file. If you still can't access a file after 24 hours, contact your domain administrator.<\/p><\/div><\/div><div class=\"uc-footer\"><hr class=\"uc-footer-divider\">© 2021 Google - <a class=\"goog-link\" href=\"\/\/support.google.com\/drive\/?p=web_home\">Help<\/a> - <a class=\"goog-link\" href=\"\/\/support.google.com\/drive\/bin\/answer.py?hl=en_US&answer=2450387\">Privacy & Terms<\/a><\/div><\/body><\/html>`"],"created_at":1612951256000,"updated_at":1626872391000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi, it seems that loading the amazon_polarity dataset gives a NonMatchingChecksumError.\r\n\r\nTo reproduce:\r\n```\r\nload_dataset(\"amazon_polarity\")\r\n```\r\nThis will give the following error:\r\n```\r\n---------------------------------------------------------------------------\r\nNonMatchingChecksumError Traceback (most recent call last)\r\n<ipython-input-3-8559a03fe0f8> in <module>()\r\n----> 1 dataset = load_dataset(\"amazon_polarity\")\r\n\r\n3 frames\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/utils\/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)\r\n 37 if len(bad_urls) > 0:\r\n 38 error_msg = \"Checksums didn't match\" + for_verification_name + \":\\n\"\r\n---> 39 raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\n 40 logger.info(\"All the checksums matched successfully\" + for_verification_name)\r\n 41 \r\n\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https:\/\/drive.google.com\/u\/0\/uc?id=0Bz8a_Dbh9QhbaW12WVVZS2drcnM&export=download']\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1856\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1855","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1855\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1855\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1855\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1855","id":805256579,"node_id":"MDExOlB1bGxSZXF1ZXN0NTcwODkzNDY3","number":1855,"title":"Minor fix in the 
docs","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1612942063000,"updated_at":1612960389000,"closed_at":1612960389000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1855","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1855","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1855.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1855.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1855\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1854","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1854\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1854\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1854\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1854","id":805204397,"node_id":"MDU6SXNzdWU4MDUyMDQzOTc=","number":1854,"title":"Feature Request: 
Dataset.add_item","user":{"login":"sshleifer","id":6045025,"node_id":"MDQ6VXNlcjYwNDUwMjU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6045025?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sshleifer","html_url":"https:\/\/github.com\/sshleifer","followers_url":"https:\/\/api.github.com\/users\/sshleifer\/followers","following_url":"https:\/\/api.github.com\/users\/sshleifer\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sshleifer\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sshleifer\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sshleifer\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sshleifer\/orgs","repos_url":"https:\/\/api.github.com\/users\/sshleifer\/repos","events_url":"https:\/\/api.github.com\/users\/sshleifer\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sshleifer\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @sshleifer.\r\n\r\nI am not sure of understanding the need of the `add_item` approach...\r\n\r\nBy just reading your 
\"Desired API\" section, I would say you could (nearly) get it with a 1-column Dataset:\r\n```python\r\ndata = {\"input_ids\": [np.array([4,4,2]), np.array([8,6,5,5,2]), np.array([3,3,31,5])]}\r\nds = Dataset.from_dict(data)\r\nassert (ds[\"input_ids\"][0] == np.array([4,4,2])).all()\r\n```","Hi @sshleifer :) \r\n\r\nWe don't have methods like `Dataset.add_batch` or `Dataset.add_entry\/add_item` yet.\r\nBut that's something we'll add pretty soon. Would an API that looks roughly like this help ? Do you have suggestions ?\r\n```python\r\nimport numpy as np\r\nfrom datasets import Dataset\r\n\r\ntokenized = [np.array([4,4,2]), np.array([8,6,5,5,2]), np.array([3,3,31,5])\r\n\r\n# API suggestion (not available yet)\r\nd = Dataset()\r\nfor input_ids in tokenized:\r\n d.add_item({\"input_ids\": input_ids})\r\n\r\nprint(d[0][\"input_ids\"])\r\n# [4, 4, 2]\r\n```\r\n\r\nCurrently you can define a dataset with what @albertvillanova suggest, or via a generator using dataset builders. It's also possible to [concatenate datasets](https:\/\/huggingface.co\/docs\/datasets\/package_reference\/main_classes.html?highlight=concatenate#datasets.concatenate_datasets).","Your API looks perfect @lhoestq, thanks!"],"created_at":1612937160000,"updated_at":1619172090000,"closed_at":1619172090000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"I'm trying to integrate `huggingface\/datasets` functionality into `fairseq`, which requires (afaict) being able to build a dataset through an `add_item` method, such as https:\/\/github.com\/pytorch\/fairseq\/blob\/master\/fairseq\/data\/indexed_dataset.py#L318, as opposed to loading all the text into arrow, and then `dataset.map(binarizer)`.\r\nIs this possible at the moment? Is there an example? I'm happy to use raw `pa.Table` but not sure whether it will support uneven length entries.\r\n\r\n### Desired API\r\n\r\n```python\r\nimport numpy as np\r\ntokenized: List[np.NDArray[np.int64]] = [np.array([4,4,2]), np.array([8,6,5,5,2]), np.array([3,3,31,5])\r\n\r\ndef build_dataset_from_tokenized(tokenized: List[np.NDArray[int]]) -> Dataset:\r\n \"\"\"FIXME\"\"\"\r\n dataset = EmptyDataset()\r\n for t in tokenized: dataset.append(t)\r\n return dataset\r\nds = build_dataset_from_tokenized(tokenized)\r\nassert (ds[0] == np.array([4,4,2])).all()\r\n```\r\n\r\n### What I tried\r\ngrep, google for \"add one entry at a time\", \"datasets.append\"\r\n\r\n### Current Code\r\nThis code achieves the same result but doesn't fit into the `add_item` abstraction.\r\n\r\n```python\r\n dataset = load_dataset('text', data_files={'train': 'train.txt'})\r\n tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_length=4096)\r\n def tokenize_function(examples):\r\n ids = tokenizer(examples['text'], return_attention_mask=False)['input_ids']\r\n return {'input_ids': [x[1:] for x in ids]}\r\n ds = dataset.map(tokenize_function, batched=True, num_proc=4, remove_columns=['text'], load_from_cache_file=not overwrite_cache)\r\n\tprint(ds['train'][0]) => np array\r\n```\r\n\r\nThanks in advance!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1854\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1853","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1853\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1853\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1853\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1853","id":804791166,"node_id":"MDExOlB1bGxSZXF1ZXN0NTcwNTAwMjc4","number":1853,"title":"Configure library root logger at the module level","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1612894272000,"updated_at":1612960354000,"closed_at":1612960354000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1853","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1853","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1853.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1853.patch"},"body":"Configure library root logger at the datasets.logging module level (singleton-like).\r\n\r\nBy doing it this way:\r\n- we are sure configuration is done only once: module level code is only runned once\r\n- no need of global variable\r\n- no need of threading lock","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1853\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1852","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1852\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1852\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1852\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1852","id":804633033,"node_id":"MDExOlB1bGxSZXF1ZXN0NTcwMzY3NTU1","number":1852,"title":"Add Arabic Speech Corpus 
","user":{"login":"zaidalyafeai","id":15667714,"node_id":"MDQ6VXNlcjE1NjY3NzE0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15667714?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/zaidalyafeai","html_url":"https:\/\/github.com\/zaidalyafeai","followers_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/followers","following_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/orgs","repos_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/repos","events_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1612882946000,"updated_at":1613038735000,"closed_at":1613038735000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1852","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1852","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1852.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1852.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1852\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1851","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1851\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1851\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1851\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1851","id":804523174,"node_id":"MDExOlB1bGxSZXF1ZXN0NTcwMjc2MTk5","number":1851,"title":"set bert_score version 
dependency","user":{"login":"pvl","id":3596,"node_id":"MDQ6VXNlcjM1OTY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3596?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/pvl","html_url":"https:\/\/github.com\/pvl","followers_url":"https:\/\/api.github.com\/users\/pvl\/followers","following_url":"https:\/\/api.github.com\/users\/pvl\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/pvl\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/pvl\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/pvl\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/pvl\/orgs","repos_url":"https:\/\/api.github.com\/users\/pvl\/repos","events_url":"https:\/\/api.github.com\/users\/pvl\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/pvl\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1612875067000,"updated_at":1612880508000,"closed_at":1612880508000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1851","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1851","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1851.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1851.patch"},"body":"Set the bert_score version in requirements since previous versions of bert_score will fail with datasets (closes #843)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1851\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1850","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1850\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1850\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1850\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1850","id":804412249,"node_id":"MDExOlB1bGxSZXF1ZXN0NTcwMTg0MDAx","number":1850,"title":"Add cord 19 dataset","user":{"login":"ggdupont","id":5583410,"node_id":"MDQ6VXNlcjU1ODM0MTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5583410?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ggdupont","html_url":"https:\/\/github.com\/ggdupont","followers_url":"https:\/\/api.github.com\/users\/ggdupont\/followers","following_url":"https:\/\/api.github.com\/users\/ggdupont\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ggdupont\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ggdupont\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ggdupont\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ggdupont\/orgs","repos_url":"https:\/\/api.github.com\/users\/ggdupont\/repos","events_url":"https:\/\/api.github.com\/users\/ggdupont\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ggdupont\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Cleaned-up version of previous PR: https:\/\/github.com\/huggingface\/datasets\/pull\/1129","@lhoestq 
FYI","Before merging I might tweak a little bit the dummy data to avoid having to check if the `document_parses` and `embeddings` directories exist or not. I'll do that later today","Looks all good now ! Thanks a lot @ggdupont :)\r\nMerging"],"created_at":1612866128000,"updated_at":1612883786000,"closed_at":1612883786000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1850","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1850","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1850.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1850.patch"},"body":"Initial version only reading the metadata in CSV.\r\n\r\n### Checklist:\r\n- [x] Create the dataset script \/datasets\/my_dataset\/my_dataset.py using the template\r\n- [x] Fill the _DESCRIPTION and _CITATION variables\r\n- [x] Implement _infos(), _split_generators() and _generate_examples()\r\n- [x] Make sure that the BUILDER_CONFIGS class attribute is filled with the different configurations of the dataset and that the BUILDER_CONFIG_CLASS is specified if there is a custom config class.\r\n- [x] Generate the metadata file dataset_infos.json for all configurations\r\n- [x] Generate the dummy data dummy_data.zip files to have the dataset script tested and that they don't weigh too much (<50KB)\r\n- [x] Add the dataset card README.md using the template and at least fill the tags\r\n- [x] Both tests for the real data and the dummy data pass.\r\n\r\n### Extras:\r\n- [x] add more metadata\r\n- [x] add full text\r\n- [x] add pre-computed document embedding","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1850\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1849","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1849\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1849\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1849\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1849","id":804292971,"node_id":"MDU6SXNzdWU4MDQyOTI5NzE=","number":1849,"title":"Add 
TIMIT","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"},{"id":2725241052,"node_id":"MDU6TGFiZWwyNzI1MjQxMDUy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/speech","name":"speech","color":"d93f0b","default":false,"description":""}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@patrickvonplaten Could you please help me with how the output text has to be represented in the data? TIMIT has Words, Phonemes and texts. Also has lot on info on the speaker and the dialect. Could you please help me? An example of how to arrange it would be super helpful!\r\n\r\n","Hey @vrindaprabhu - sure I'll help you :-) Could you open a first PR for TIMIT where you copy-paste more or less the `librispeech_asr` script: https:\/\/github.com\/huggingface\/datasets\/blob\/28be129db862ec89a87ac9349c64df6b6118aff4\/datasets\/librispeech_asr\/librispeech_asr.py#L93 (obviously replacing all the naming and links correctly...) and then you can list all possible outputs in the features dict: https:\/\/github.com\/huggingface\/datasets\/blob\/28be129db862ec89a87ac9349c64df6b6118aff4\/datasets\/librispeech_asr\/librispeech_asr.py#L104 (words, phonemes should probably be of kind `datasets.Sequence(datasets.Value(\"string\"))` and texts I think should be of type `\"text\": datasets.Value(\"string\")`.\r\n\r\nWhen you've opened a first PR, I think it'll be much easier for us to take a look together :-) ","I am sorry! I created the PR [#1903](https:\/\/github.com\/huggingface\/datasets\/pull\/1903#). Requesting your comments! 
CircleCI tests are failing, will address them along with your comments!"],"created_at":1612855781000,"updated_at":1615787977000,"closed_at":1615787977000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** *TIMIT*\r\n- **Description:** *The TIMIT corpus of read speech has been designed to provide speech data for the acquisition of acoustic-phonetic knowledge and for the development and evaluation of automatic speech recognition systems*\r\n\r\n- **Paper:** *Homepage*: http:\/\/groups.inf.ed.ac.uk\/ami\/corpus\/ \/ *Wikipedia*: https:\/\/en.wikipedia.org\/wiki\/TIMIT\r\n- **Data:** *https:\/\/deepai.org\/dataset\/timit*\r\n- **Motivation:** Important speech dataset\r\n\r\n\r\nIf interested in tackling this issue, feel free to tag @patrickvonplaten\r\n\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1849\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1848","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1848\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1848\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1848\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1848","id":803826506,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY5Njg5ODU1","number":1848,"title":"Refactoring: Create config module","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1612809831000,"updated_at":1612960175000,"closed_at":1612960175000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1848","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1848","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1848.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1848.patch"},"body":"Refactorize configuration settings into their own module.\r\n\r\nThis could be seen as a Pythonic singleton-like approach. 
Eventually a config instance class might be created.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1848\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1847","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1847\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1847\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1847\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1847","id":803824694,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY5Njg4NDY0","number":1847,"title":"[Metrics] Add word error metric metric","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Feel free to merge once the CI is all green ;)"],"created_at":1612809675000,"updated_at":1612893201000,"closed_at":1612893201000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1847","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1847","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1847.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1847.patch"},"body":"This PR adds the word error rate metric to datasets. \r\nWER: https:\/\/en.wikipedia.org\/wiki\/Word_error_rate\r\nfor speech recognition. WER is the main metric used in ASR. 
\r\n\r\n`jiwer` seems to be a solid library (see https:\/\/github.com\/asteroid-team\/asteroid\/pull\/329#discussion_r525158939)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1847\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1846","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1846\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1846\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1846\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1846","id":803806380,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY5NjczMzcy","number":1846,"title":"Make DownloadManager downloaded\/extracted paths accessible","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["First I was thinking of the dict, which makes sense for .download, mapping URL to downloaded path. However does this make sense for .extract, mapping the downloaded path to the extracted path? I ask this because the user did not chose the downloaded path, so this is completely unknown for them...","There could be several situations:\r\n- download a file with no extraction\r\n- download a file and extract it\r\n- download a file, extract it and then inside the output folder extract some more files\r\n- extract a local file (for datasets with data that are manually downloaded for example)\r\n- extract a local file, and then inside the output folder extract some more files\r\n\r\nSo I think it's ok to have `downloaded_paths` as a dict url -> downloaded_path and `extracted_paths` as a dict local_path -> extracted_path.","OK. I am refactoring this. 
I have opened #1879, as an intermediate step..."],"created_at":1612808082000,"updated_at":1614262218000,"closed_at":1614262218000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1846","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1846","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1846.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1846.patch"},"body":"Make accessible the file paths downloaded\/extracted by DownloadManager.\r\n\r\nClose #1831.\r\n\r\nThe approach:\r\n- I set these paths as DownloadManager attributes: these are DownloadManager's concerns\r\n- To access to these from DatasetBuilder, I set the DownloadManager instance as DatasetBuilder attribute: object composition","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1846\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1845","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1845\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1845\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1845\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1845","id":803714493,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY5NTk2MTIz","number":1845,"title":"Enable logging propagation and remove logging handler","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thank you @lhoestq. This logging configuration makes more sense to me.\r\n\r\nOnce propagation is allowed, the end-user can customize logging behavior and add custom handlers to the proper top logger in the hierarchy.\r\n\r\nAnd I also agree with following the best practices and removing any custom handlers:\r\n- it is the end user who has to implement any custom handlers\r\n- indeed, the previous logging problem with TensorFlow was due to the fact that absl did not follow best practices and had implemented a custom handler\r\n\r\nOur errors\/warnings will be displayed anyway, even if we do not implement any custom handler. 
Since Python 3.2, logging has a built-in \"default\" handler (logging.lastResort) with the expected default behavior (sending error\/warning messages to sys.stderr), which is used only if the end user has not configured any custom handler."],"created_at":1612801333000,"updated_at":1612880558000,"closed_at":1612880557000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1845","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1845","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1845.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1845.patch"},"body":"We used to have logging propagation disabled because of this issue: https:\/\/github.com\/tensorflow\/tensorflow\/issues\/26691\r\nBut since it's now fixed we should re-enable it. This is important to keep the default logging behavior for users, and propagation is also needed for pytest fixtures as asked in #1826 \r\n\r\nI also removed the handler that was added since, according to the logging [documentation](https:\/\/docs.python.org\/3\/howto\/logging.html#configuring-logging-for-a-library):\r\n> It is strongly advised that you do not add any handlers other than NullHandler to your library\u2019s loggers. This is because the configuration of handlers is the prerogative of the application developer who uses your library. The application developer knows their target audience and what handlers are most appropriate for their application: if you add handlers \u2018under the hood\u2019, you might well interfere with their ability to carry out unit tests and deliver logs which suit their requirements.\r\n\r\nIt could have been useful if we wanted to have a custom formatter for the logging but I think it's more important to keep the logging as default to not interfere with the users' logging management.\r\n\r\nTherefore I also removed the two methods `datasets.logging.enable_default_handler` and `datasets.logging.disable_default_handler`.\r\n\r\ncc @albertvillanova this should let you use capsys\/caplog in pytest\r\ncc @LysandreJik @sgugger if you want to do the same in `transformers`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1845\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1844","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1844\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1844\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1844\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1844","id":803588125,"node_id":"MDU6SXNzdWU4MDM1ODgxMjU=","number":1844,"title":"Update Open Subtitles corpus with original sentence 
IDs","user":{"login":"Valahaar","id":19476123,"node_id":"MDQ6VXNlcjE5NDc2MTIz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19476123?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Valahaar","html_url":"https:\/\/github.com\/Valahaar","followers_url":"https:\/\/api.github.com\/users\/Valahaar\/followers","following_url":"https:\/\/api.github.com\/users\/Valahaar\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Valahaar\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Valahaar\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Valahaar\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Valahaar\/orgs","repos_url":"https:\/\/api.github.com\/users\/Valahaar\/repos","events_url":"https:\/\/api.github.com\/users\/Valahaar\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Valahaar\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892877,"node_id":"MDU6TGFiZWwxOTM1ODkyODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/good%20first%20issue","name":"good first issue","color":"7057ff","default":true,"description":"Good for newcomers"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! You're right this can can useful.\r\nThis should be easy to add, so feel free to give it a try if you want to contribute :)\r\nI think we just need to add it to the _generate_examples method of the OpenSubtitles dataset builder [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/open_subtitles\/open_subtitles.py#L103)","Hey @lhoestq , absolutely yes! Just one question before I start implementing. The ids found in the zip file have this format: \r\n(the following is line `22497315` of the `ids` file of the `de-en` dump)\r\n\r\n\r\n`de\/2017\/7006210\/7063319.xml.gz en\/2017\/7006210\/7050201.xml.gz 335 339 340` (every space is actually a tab, aside from the space between `339` and `340`)\r\n\r\n\r\nWhere filenames encode the information like this: `lang\/year\/imdb_id\/opensubtitles_id.xml.gz` whereas the numbers correspond to the sentence ids which are linked together (i.e. sentence `335` of the German subtitle corresponds to lines `339` and `340` of the English file)\r\n\r\nThat being said, do you think I should stick to the raw sentence id (and replace the current sequential id) or should I include more detailed metadata (or both things maybe)?\r\n\r\nGoing with raw ID is surely simpler, but including `year`, `imdbId` and `subtitleId` should save space as they're just integers; besides, any operation (like filtering or grouping) will be much easier if users don't have to manually parse the ids every time.\r\nAs for the language-specific sentenceIds, what could be the best option? A list of integers or a comma-separated string?\r\n\r\n**Note:** I did not find any official information about this encoding, but it appears to check out:\r\nhttps:\/\/www.imdb.com\/title\/tt7006210\/, https:\/\/www.opensubtitles.org\/en\/subtitles\/7063319 and https:\/\/www.opensubtitles.org\/en\/subtitles\/7050201 all link to the same episode, so I guess (I hope!) it's correct.\r\n\r\n","I like the idea of having `year`, `imdbId` and `subtitleId` as columns for filtering for example.\r\nAnd for the `sentenceIds` a list of integers is fine.","Thanks for improving it @Valahaar :) ","Something like this? 
(adapted from [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/open_subtitles\/open_subtitles.py#L114))\r\n\r\n```python\r\nresult = (\r\n sentence_counter,\r\n {\r\n \"id\": str(sentence_counter),\r\n \"meta\": {\r\n \"year\": year,\r\n \"imdbId\": imdb_id,\r\n \"subtitleId\": {l1: l1_sub_id, l2: l2_sub_id},\r\n \"sentenceIds\": {l1: [... source_sids ...], l2: [... target_sids ...]},\r\n # or maybe src\/tgt? I'd go with the first one for consistency with 'translation'\r\n \"subtitleId\": {\"src\": l1_sub_id, \"tgt\": l2_sub_id},\r\n \"sentenceIds\": {\"src\": [... source_sids ...], \"tgt\": [... target_sids ...]},\r\n },\r\n \"translation\": {l1: x, l2: y},\r\n },\r\n )\r\n```\r\nOr at top level, avoiding nesting into 'meta'?","Merged in #1865, closing. Thanks :)"],"created_at":1612792513000,"updated_at":1613151538000,"closed_at":1613151538000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi! It would be great if you could add the original sentence ids to [Open Subtitles](https:\/\/huggingface.co\/datasets\/open_subtitles).\r\n\r\nI can think of two reasons: first, it's possible to gather sentences for an entire document (the original ids contain media id, subtitle file id and sentence id), therefore somewhat allowing for document-level machine translation (and other document-level stuff which could be cool to have); second, it's possible to have parallel sentences in multiple languages, as they share the same ids across bitexts.\r\n\r\nI think I should tag @abhishekkrthakur as he's the one who added it in the first place.\r\n\r\nThanks!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1844\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1843","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1843\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1843\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1843\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1843","id":803565393,"node_id":"MDU6SXNzdWU4MDM1NjUzOTM=","number":1843,"title":"MustC Speech 
Translation","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"},{"id":2725241052,"node_id":"MDU6TGFiZWwyNzI1MjQxMDUy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/speech","name":"speech","color":"d93f0b","default":false,"description":""}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @patrickvonplaten I would like to work on this dataset. \r\n\r\nThanks! ","That's awesome! Actually, I just noticed that this dataset might become a bit too big!\r\n\r\nMuST-C is the main dataset used for IWSLT19 and should probably be added as a standalone dataset. Would you be interested also in adding `datasets\/MuST-C` instead?\r\n\r\nDescription: \r\n_MuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems for speech translation from English into several languages. For each target language, MuST-C comprises several hundred hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual transcriptions and translations._\r\n\r\nPaper: https:\/\/www.aclweb.org\/anthology\/N19-1202.pdf\r\n\r\nDataset: https:\/\/ict.fbk.eu\/must-c\/ (One needs to fill out a short from to download the data, but it's very easy).\r\n\r\nIt would be awesome if you're interested in adding this datates. I'm very happy to guide you through the PR! I think the easiest way to start would probably be to read [this README on how to add a dataset](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md) and open a PR. Think you can copy & paste some code from:\r\n\r\n- Librispeech_asr: https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/librispeech_asr\/librispeech_asr.py\r\n- Flores Translation: https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/flores\/flores.py\r\n\r\nThink all the rest can be handled on the PR :-) ","Hi @patrickvonplaten \r\nI have tried downloading this dataset, but the connection seems to reset all the time. I have tried it via the browser, wget, and using gdown . But it gives me an error message. 
_\"The server is busy or down, pls try again\"_ (rephrasing the message here)\r\n\r\nI have completed adding 4 datasets in the previous data sprint (including the IWSLT dataset #1676 ) ...so just checking if you are able to download it at your end. Otherwise will write to the dataset authors to update the links. \r\n\r\n\r\n\r\n\r\n","Let me check tomorrow! Thanks for leaving this message!","cc @patil-suraj for notification ","@skyprince999, I think I'm getting the same error you're getting :-\/\r\n\r\n```\r\nSorry, you can't view or download this file at this time.\r\n\r\nToo many users have viewed or downloaded this file recently. Please try accessing the file again later. If the file you are trying to access is particularly large or is shared with many people, it may take up to 24 hours to be able to view or download the file. If you still can't access a file after 24 hours, contact your domain administrator.\r\n```\r\n\r\nIt would be great if you could write the authors to see whether they can fix it.\r\nAlso cc @lhoestq - do you think we could mirror the dataset? ","Also there are huge those datasets. Think downloading MuST-C v1.2 amounts to ~ 1000GB... because there are 14 possible configs each around 60-70GB. I think users mostly will only use one of the 14 configs so that they would only need, in theory, will have to download ~60GB which is ok. But I think this functionality doesn't exist yet in `datasets` no? cc @lhoestq ","> Also cc @lhoestq - do you think we could mirror the dataset?\r\n\r\nYes we can mirror it if the authors are fine with it. You can create a dataset repo on huggingface.co (possibly under the relevant org) and add the mirrored data files.\r\n\r\n> I think users mostly will only use one of the 14 configs so that they would only need, in theory, will have to download ~60GB which is ok. But I think this functionality doesn't exist yet in datasets no? cc @lhoestq\r\n\r\nIf there are different download links for each configuration we can make the dataset builder download only the files related to the requested configuration.","I have written to the dataset authors, highlighting this issue. Waiting for their response. \r\n\r\nUpdate on 25th Feb: \r\nThe authors have replied back, they are updating the download link and will revert back shortly! \r\n\r\n```\r\nfirst of all thanks a lot for being interested in MuST-C and for building the data-loader.\r\n\r\nBefore answering your request, I'd like to clarify that the creation, maintenance, and expansion of MuST-c are not supported by any funded project, so this means that we need to find economic support for all these activities. This also includes permanently moving all the data to AWS or GCP. We are working at this with the goal of facilitating the use of MuST-C, but this is not something that can happen today. We hope to have some news ASAP and you will be among the first to be informed.\r\n\r\nI hope you understand our situation.\r\n```\r\n\r\n","Awesome, actually @lhoestq let's just ask the authors if we should host the dataset no? They could just use our links then as well for their website - what do you think? Is it fine to use our AWS dataset storage also as external links? ","Yes definitely. Shall we suggest them to create a dataset repository under their org on huggingface.co ? @julien-c \r\nThe dataset is around 1TB","Sounds good! \r\n\r\nOrder of magnitude is storage costs ~$20 per TB per month (not including bandwidth). \r\n\r\nHappy to provide this to the community as I feel this is an important dataset. 
Let us know what the authors want to do!\r\n\r\n","Great! @skyprince999, do you think you could ping the authors here or link to this thread? I think it could be a cool idea to host the dataset on our side then","Done. They replied back, and they want to have a call over a meet\/ skype. Is that possible ? \r\nBtw @patrickvonplaten you are looped in that email (_pls check you gmail account_) ","Hello! Any news on this?","@gegallego there were some concerns regarding dataset usage & attribution by a for-profit company, so couldn't take it forward. Also the download links were unstable. \r\nBut I guess if you want to test the fairseq benchmarks, you can connect with them directly for downloading the dataset. ","Yes, that dataset is not easy to download... I had to copy it to my Google Drive and use `rsync` to be able to download it.\r\nHowever, we could add the dataset with a manual download, right?","yes that is possible. I couldn't unfortunately complete this PR, If you would like to add it, please feel free to do it. "],"created_at":1612790865000,"updated_at":1621004014000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** *IWSLT19*\r\n- **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.*\r\n- **Hompage:** *https:\/\/sites.google.com\/view\/iwslt-evaluation-2019\/speech-translation*\r\n- **Data:** *https:\/\/sites.google.com\/view\/iwslt-evaluation-2019\/speech-translation* - all data under \"Allowed Training Data\" and \"Development and Evalutaion Data for TED\/How2\"\r\n- **Motivation:** Important speech dataset\r\n\r\nIf interested in tackling this issue, feel free to tag @patrickvonplaten\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1843\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1842","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1842\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1842\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1842\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1842","id":803563149,"node_id":"MDU6SXNzdWU4MDM1NjMxNDk=","number":1842,"title":"Add AMI 
Corpus","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"},{"id":2725241052,"node_id":"MDU6TGFiZWwyNzI1MjQxMDUy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/speech","name":"speech","color":"d93f0b","default":false,"description":""}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1612790700000,"updated_at":1612855576000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** *AMI*\r\n- **Description:** *The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings. For a gentle introduction to the corpus, see the corpus overview. To access the data, follow the directions given there. Around two-thirds of the data has been elicited using a scenario in which the participants play different roles in a design team, taking a design project from kick-off to completion over the course of a day. The rest consists of naturally occurring meetings in a range of domains. 
Detailed information can be found in the documentation section.*\r\n\r\n- **Paper:** *Homepage*: http:\/\/groups.inf.ed.ac.uk\/ami\/corpus\/\r\n- **Data:** *http:\/\/groups.inf.ed.ac.uk\/ami\/download\/* - Select all cases in 1) and select \"Individual Headsets\" & \"Microphone array\" for 2)\r\n- **Motivation:** Important speech dataset\r\n\r\n\r\nIf interested in tackling this issue, feel free to tag @patrickvonplaten\r\n\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1842\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1841","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1841\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1841\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1841\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1841","id":803561123,"node_id":"MDU6SXNzdWU4MDM1NjExMjM=","number":1841,"title":"Add ljspeech","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"},{"id":2725241052,"node_id":"MDU6TGFiZWwyNzI1MjQxMDUy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/speech","name":"speech","color":"d93f0b","default":false,"description":""}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1612790546000,"updated_at":1615787942000,"closed_at":1615787942000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** *ljspeech*\r\n- **Description:** *This is a public domain speech dataset consisting of 13,100 short audio clips of a single speaker reading passages from 7 non-fiction books. A transcription is provided for each clip. Clips vary in length from 1 to 10 seconds and have a total length of approximately 24 hours.\r\n\r\nThe texts were published between 1884 and 1964, and are in the public domain. 
The audio was recorded in 2016-17 by the LibriVox project and is also in the public domain.)*\r\n- **Paper:** *Homepage*: https:\/\/keithito.com\/LJ-Speech-Dataset\/\r\n- **Data:** *https:\/\/keithito.com\/LJ-Speech-Dataset\/*\r\n- **Motivation:** Important speech dataset\r\n- **TFDatasets Implementation**: https:\/\/www.tensorflow.org\/datasets\/catalog\/ljspeech\r\nIf interested in tackling this issue, feel free to tag @patrickvonplaten\r\n\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1841\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1840","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1840\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1840\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1840\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1840","id":803560039,"node_id":"MDU6SXNzdWU4MDM1NjAwMzk=","number":1840,"title":"Add common voice","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"},{"id":2725241052,"node_id":"MDU6TGFiZWwyNzI1MjQxMDUy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/speech","name":"speech","color":"d93f0b","default":false,"description":""}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I have started working on adding this dataset.","Hey @BirgerMoell - awesome that you started working on Common Voice. Common Voice is a bit special since, there is no direct download link to download the data. In these cases we usually consider two options:\r\n\r\n1) Find a hacky solution to extract the download link somehow from the XLM tree of the website \r\n2) If this doesn't work we force the user to download the data himself and add a `\"data_dir\"` as an input parameter. E.g. 
you can take a look at how it is done for [this](https:\/\/github.com\/huggingface\/datasets\/blob\/66f2a7eece98d2778bd22bb5034cb7c2376032d4\/datasets\/arxiv_dataset\/arxiv_dataset.py#L66) \r\n\r\nAlso the documentation here: https:\/\/huggingface.co\/docs\/datasets\/add_dataset.html?highlight=data_dir#downloading-data-files-and-organizing-splits (especially the \"note\") might be helpful.","Let me know if you have any other questions","I added a Work in Progress pull request (hope that is ok). I've made a card for the dataset and filled out the common_voice.py file with information about the datset (not completely).\r\n\r\nI didn't manage to get the tagging tool working locally on my machine but will look into that later.\r\n\r\nLeft to do.\r\n\r\n- Tag the dataset\r\n- Add missing information and update common_voice.py\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/pull\/1886","Awesome! I left a longer comment on the PR :-)"],"created_at":1612790465000,"updated_at":1615787781000,"closed_at":1615787781000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** *common voice*\r\n- **Description:** *Mozilla Common Voice Dataset*\r\n- **Paper:** Homepage: https:\/\/voice.mozilla.org\/en\/datasets\r\n- **Data:** https:\/\/voice.mozilla.org\/en\/datasets\r\n- **Motivation:** Important speech dataset\r\n- **TFDatasets Implementation**: https:\/\/www.tensorflow.org\/datasets\/catalog\/common_voice\r\nIf interested in tackling this issue, feel free to tag @patrickvonplaten\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1840\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1839","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1839\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1839\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1839\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1839","id":803559164,"node_id":"MDU6SXNzdWU4MDM1NTkxNjQ=","number":1839,"title":"Add 
Voxforge","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"},{"id":2725241052,"node_id":"MDU6TGFiZWwyNzI1MjQxMDUy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/speech","name":"speech","color":"d93f0b","default":false,"description":""}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1612790396000,"updated_at":1612790911000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** *voxforge* \r\n- **Description:** *VoxForge is a language classification dataset. It consists of user submitted audio clips submitted to the website. In this release, data from 6 languages is collected - English, Spanish, French, German, Russian, and Italian. Since the website is constantly updated, and for the sake of reproducibility, this release contains only recordings submitted prior to 2020-01-01. 
The samples are splitted between train, validation and testing so that samples from each speaker belongs to exactly one split.*\r\n- **Paper:** *Homepage*: http:\/\/www.voxforge.org\/\r\n- **Data:** *http:\/\/www.voxforge.org\/home\/downloads*\r\n- **Motivation:** Important speech dataset\r\n- **TFDatasets Implementation**: https:\/\/www.tensorflow.org\/datasets\/catalog\/voxforge\r\nIf interested in tackling this issue, feel free to tag @patrickvonplaten\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1839\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1838","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1838\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1838\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1838\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1838","id":803557521,"node_id":"MDU6SXNzdWU4MDM1NTc1MjE=","number":1838,"title":"Add tedlium","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"},{"id":2725241052,"node_id":"MDU6TGFiZWwyNzI1MjQxMDUy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/speech","name":"speech","color":"d93f0b","default":false,"description":""}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @patrickvonplaten \r\nI can have a look to this dataset later since I am trying to add the OpenSLR dataset https:\/\/github.com\/huggingface\/datasets\/pull\/2173\r\nHopefully I have enough space since the compressed file is 21GB. 
The release 3 is even bigger: 54GB :-0"],"created_at":1612790272000,"updated_at":1617983861000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** *tedlium*\r\n- **Description:** *The TED-LIUM 1-3 corpus is English-language TED talks, with transcriptions, sampled at 16kHz. It contains about 118 hours of speech.*\r\n- **Paper:** Homepage: http:\/\/www.openslr.org\/7\/, https:\/\/lium.univ-lemans.fr\/en\/ted-lium2\/ &, https:\/\/www.openslr.org\/51\/\r\n- **Data:** http:\/\/www.openslr.org\/7\/\r\n- **Motivation:** Important speech dataset\r\n- **TFDatasets Implementation**: https:\/\/www.tensorflow.org\/datasets\/catalog\/tedlium\r\nIf interested in tackling this issue, feel free to tag @patrickvonplaten\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1838\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1837","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1837\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1837\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1837\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1837","id":803555650,"node_id":"MDU6SXNzdWU4MDM1NTU2NTA=","number":1837,"title":"Add VCTK","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"},{"id":2725241052,"node_id":"MDU6TGFiZWwyNzI1MjQxMDUy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/speech","name":"speech","color":"d93f0b","default":false,"description":""}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1612790128000,"updated_at":1612790128000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** *VCTK*\r\n- **Description:** *This CSTR VCTK 
Corpus includes speech data uttered by 110 English speakers with various accents. Each speaker reads out about 400 sentences, which were selected from a newspaper, the rainbow passage and an elicitation paragraph used for the speech accent archive.*\r\n- **Paper:** Homepage: https:\/\/datashare.ed.ac.uk\/handle\/10283\/3443\r\n- **Data:** https:\/\/datashare.ed.ac.uk\/handle\/10283\/3443\r\n- **Motivation:** Important speech dataset\r\n- **TFDatasets Implementation**: https:\/\/www.tensorflow.org\/datasets\/catalog\/vctk\r\n\r\nIf interested in tackling this issue, feel free to tag @patrickvonplaten\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1837\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1836","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1836\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1836\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1836\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1836","id":803531837,"node_id":"MDU6SXNzdWU4MDM1MzE4Mzc=","number":1836,"title":"test.json has been removed from the limit dataset repo (breaks dataset)","user":{"login":"Paethon","id":237550,"node_id":"MDQ6VXNlcjIzNzU1MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/237550?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Paethon","html_url":"https:\/\/github.com\/Paethon","followers_url":"https:\/\/api.github.com\/users\/Paethon\/followers","following_url":"https:\/\/api.github.com\/users\/Paethon\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Paethon\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Paethon\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Paethon\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Paethon\/orgs","repos_url":"https:\/\/api.github.com\/users\/Paethon\/repos","events_url":"https:\/\/api.github.com\/users\/Paethon\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Paethon\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for the heads up ! I'm opening a PR to fix that"],"created_at":1612788353000,"updated_at":1612973698000,"closed_at":1612973698000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"https:\/\/github.com\/huggingface\/datasets\/blob\/16042b233dbff2a7585110134e969204c69322c3\/datasets\/limit\/limit.py#L51\r\n\r\nThe URL is not valid anymore since test.json has been removed in master for some reason. 
Directly referencing the last commit works:\r\n\r\n`https:\/\/raw.githubusercontent.com\/ilmgut\/limit_dataset\/0707d3989cd8848f0f11527c77dcf168fefd2b23\/data`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1836\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1835","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1835\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1835\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1835\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1835","id":803524790,"node_id":"MDU6SXNzdWU4MDM1MjQ3OTA=","number":1835,"title":"Add CHiME4 dataset","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"},{"id":2725241052,"node_id":"MDU6TGFiZWwyNzI1MjQxMDUy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/speech","name":"speech","color":"d93f0b","default":false,"description":""}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1612787798000,"updated_at":1612790011000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** Chime4\r\n- **Description:** Chime4 is a dataset for automatic speech recognition. It is especially useful for evaluating models in a noisy environment and for multi-channel ASR\r\n- **Paper:** Dataset comes from a channel: http:\/\/spandh.dcs.shef.ac.uk\/chime_challenge\/CHiME4\/ . Results paper: \r\n- **Data:** http:\/\/spandh.dcs.shef.ac.uk\/chime_challenge\/CHiME4\/download.html\r\n- **Motivation:** So far there are very little datasets for speech in `datasets`. 
Only `lbirispeech_asr` so far.\r\n\r\nIf interested in tackling this issue, feel free to tag @patrickvonplaten\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1835\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1834","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1834\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1834\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1834\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1834","id":803517094,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY5NDMzNDA4","number":1834,"title":"Fixes base_url of limit dataset","user":{"login":"Paethon","id":237550,"node_id":"MDQ6VXNlcjIzNzU1MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/237550?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Paethon","html_url":"https:\/\/github.com\/Paethon","followers_url":"https:\/\/api.github.com\/users\/Paethon\/followers","following_url":"https:\/\/api.github.com\/users\/Paethon\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Paethon\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Paethon\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Paethon\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Paethon\/orgs","repos_url":"https:\/\/api.github.com\/users\/Paethon\/repos","events_url":"https:\/\/api.github.com\/users\/Paethon\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Paethon\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["OK, apparently it is a lot more complicated than simply changing the URL? Going to make an issue."],"created_at":1612787195000,"updated_at":1612788170000,"closed_at":1612788170000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1834","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1834","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1834.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1834.patch"},"body":"`test.json` is not available in the master branch of the repository anymore. 
Linking to a specific commit.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1834\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1833","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1833\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1833\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1833\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1833","id":803120978,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY5MDk5MTUx","number":1833,"title":"Add OSCAR dataset card","user":{"login":"pjox","id":635220,"node_id":"MDQ6VXNlcjYzNTIyMA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/635220?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/pjox","html_url":"https:\/\/github.com\/pjox","followers_url":"https:\/\/api.github.com\/users\/pjox\/followers","following_url":"https:\/\/api.github.com\/users\/pjox\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/pjox\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/pjox\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/pjox\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/pjox\/orgs","repos_url":"https:\/\/api.github.com\/users\/pjox\/repos","events_url":"https:\/\/api.github.com\/users\/pjox\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/pjox\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq Thanks for the suggestions! I agree with all of them. Should I accept them one by one or can I accept them all at once? When I try to load the whole diff GitHub is complaining and it does no render them well (probably my browser?) \ud83d\ude05 ","I just merged the tables as suggested \ud83d\ude04 . However I noticed something weird, the train sizes are identical for both the original and deduplicated files ... This is not normal, in general the original files are almost twice as big as the deduplicated ones \ud83e\udd14 ","Good catch @pjox ! I just checked and this is because the scripts doesn't handle having several blank lines in a row.\r\nBlank lines introduced by deduplication are currently not ignored so we end up with the same number of examples in the dataset as the original version (but with empty examples...)\r\nI fixed that in this [commit](https:\/\/github.com\/huggingface\/datasets\/commit\/837a152e4724adc5308e2c4481908c00a8d93383). I'm re-running the metadata generation for deduplicated configs.","I got the new sizes today, will update the dataset_infos.json and the dataset card tomorrow","> I got the new sizes today, will update the dataset_infos.json and the dataset card tomorrow\r\n\r\ngreat, I just wanted to report that I got error message \"NonMatchingSplitsSizesError\" when I tried to load one of the oscar dataset.","Hi @cahya-wirawan, which configuration of oscar do you have this issue with ?","Ok I see you're having this issue because I haven't updated the sizes yet ! 
I'm opening a PR\r\n\r\nI just checked and indeed there's an issue with the `deduplicated` configurations since the commit I mentioned above.\r\nI'm fixing this by using the new sizes I got yesterday :) \r\n","I just updated the size in the table @pjox it should be good now :) \r\nI also updated the sizes in the dataset_infos.json in https:\/\/github.com\/huggingface\/datasets\/pull\/1868 (merged)","Thanks @lhoestq for fixing the issue, it works now","Thank you so much @lhoestq !"],"created_at":1612748389000,"updated_at":1613138965000,"closed_at":1613138904000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1833","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1833","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1833.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1833.patch"},"body":"I added more information and completed the dataset card for OSCAR which was started by @lhoestq in his previous [PR](https:\/\/github.com\/huggingface\/datasets\/pull\/1824).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1833\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1832","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1832\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1832\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1832\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1832","id":802880897,"node_id":"MDU6SXNzdWU4MDI4ODA4OTc=","number":1832,"title":"Looks like nokogumbo is up-to-date now, so this is no longer needed.","user":{"login":"JimmyJim1","id":68724553,"node_id":"MDQ6VXNlcjY4NzI0NTUz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/68724553?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JimmyJim1","html_url":"https:\/\/github.com\/JimmyJim1","followers_url":"https:\/\/api.github.com\/users\/JimmyJim1\/followers","following_url":"https:\/\/api.github.com\/users\/JimmyJim1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JimmyJim1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JimmyJim1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JimmyJim1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JimmyJim1\/orgs","repos_url":"https:\/\/api.github.com\/users\/JimmyJim1\/repos","events_url":"https:\/\/api.github.com\/users\/JimmyJim1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JimmyJim1\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1612680727000,"updated_at":1612805249000,"closed_at":1612805249000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Looks like nokogumbo is up-to-date now, so this is no longer needed.\n\n__Originally posted by @dependabot in https:\/\/github.com\/discourse\/discourse\/pull\/11373#issuecomment-738993432__","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1832\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1831","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1831\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1831\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1831\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1831","id":802868854,"node_id":"MDU6SXNzdWU4MDI4Njg4NTQ=","number":1831,"title":"Some question about raw dataset download info in the project .","user":{"login":"svjack","id":27874014,"node_id":"MDQ6VXNlcjI3ODc0MDE0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27874014?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/svjack","html_url":"https:\/\/github.com\/svjack","followers_url":"https:\/\/api.github.com\/users\/svjack\/followers","following_url":"https:\/\/api.github.com\/users\/svjack\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/svjack\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/svjack\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/svjack\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/svjack\/orgs","repos_url":"https:\/\/api.github.com\/users\/svjack\/repos","events_url":"https:\/\/api.github.com\/users\/svjack\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/svjack\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanov
a\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi ! The `dl_manager` is a `DownloadManager` object and is responsible for downloading the raw data files.\r\nIt is used by dataset builders in their `_split_generators` method to download the raw data files that are necessary to build the datasets splits.\r\n\r\nThe `Conll2003` class is a dataset builder, and so you can download all the raw data files by calling `_split_generators` with a download manager:\r\n```python\r\nfrom datasets import DownloadManager\r\nfrom datasets.load import import_main_class\r\n\r\nconll2003_builder = import_main_class(...)\r\n\r\ndl_manager = DownloadManager()\r\nsplis_generators = conll2003_builder._split_generators(dl_manager)\r\n```\r\n\r\nThen you can see what files have been downloaded with\r\n```python\r\ndl_manager.get_recorded_sizes_checksums()\r\n```\r\nIt returns a dictionary with the format {url: {num_bytes: int, checksum: str}}\r\n\r\nThen you can get the actual location of the downloaded files with\r\n```python\r\nfrom datasets import cached_path\r\n\r\nlocal_path_to_downloaded_file = cached_path(url)\r\n```\r\n\r\n------------------\r\n\r\nNote that you can also get the urls from the Dataset object:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nconll2003 = load_dataset(\"conll2003\")\r\nprint(conll2003[\"train\"].download_checksums)\r\n```\r\nIt returns the same dictionary with the format {url: {num_bytes: int, checksum: str}}","I am afraid that there is not a very straightforward way to get that location.\r\n\r\nAnother option, from _split_generators would be to use:\r\n- `dl_manager._download_config.cache_dir` to get the directory where all the raw downloaded files are:\r\n ```python\r\n download_dir = dl_manager._download_config.cache_dir\r\n ```\r\n- the function `datasets.utils.file_utils.hash_url_to_filename` to get the filenames of the raw downloaded files:\r\n ```python\r\n filenames = [hash_url_to_filename(url) for url in urls_to_download.values()]\r\n ```\r\nTherefore the complete path to the raw downloaded files would be the join of both:\r\n```python\r\ndownloaded_paths = [os.path.join(download_dir, filename) for filename in filenames]\r\n```\r\n\r\nMaybe it would be interesting to make these paths accessible more easily. I could work on this. What do you think, @lhoestq ?","Sure it would be nice to have an easier access to these paths !\r\nThe dataset builder could have a method to return those, what do you think ?\r\nFeel free to work on this @albertvillanova , it would be a nice addition :) \r\n\r\nYour suggestion does work as well @albertvillanova if you complete it by specifying `etag=` to `hash_url_to_filename`.\r\n\r\nThe ETag is obtained by a HEAD request and is used to know if the file on the remote host has changed. 
Therefore if a file is updated on the remote host, then the hash returned by `hash_url_to_filename` is different.","Once #1846 will be merged, the paths to the raw downloaded files will be accessible as:\r\n```python\r\nbuilder_instance.dl_manager.downloaded_paths\r\n``` "],"created_at":1612676016000,"updated_at":1614262218000,"closed_at":1614262218000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi , i review the code in \r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/conll2003\/conll2003.py\r\nin the _split_generators function is the truly logic of download raw datasets with dl_manager\r\nand use Conll2003 cls by use import_main_class in load_dataset function\r\nMy question is that , with this logic it seems that i can not have the raw dataset download location\r\nin variable in downloaded_files in _split_generators.\r\nIf someone also want use huggingface datasets as raw dataset downloader,\r\nhow can he retrieve the raw dataset download path from attributes in \r\ndatasets.dataset_dict.DatasetDict ?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1831\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1830","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1830\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1830\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1830\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1830","id":802790075,"node_id":"MDU6SXNzdWU4MDI3OTAwNzU=","number":1830,"title":"using map on loaded Tokenizer 10x - 100x slower than default Tokenizer?","user":{"login":"wumpusman","id":7662740,"node_id":"MDQ6VXNlcjc2NjI3NDA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7662740?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/wumpusman","html_url":"https:\/\/github.com\/wumpusman","followers_url":"https:\/\/api.github.com\/users\/wumpusman\/followers","following_url":"https:\/\/api.github.com\/users\/wumpusman\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/wumpusman\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/wumpusman\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/wumpusman\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/wumpusman\/orgs","repos_url":"https:\/\/api.github.com\/users\/wumpusman\/repos","events_url":"https:\/\/api.github.com\/users\/wumpusman\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/wumpusman\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @wumpusman \r\n`datasets` has a caching mechanism that allows to cache the results of `.map` so that when you want to re-run it later it doesn't recompute it again.\r\nSo when you do `.map`, what actually happens is:\r\n1. compute the hash used to identify your `map` for the cache\r\n2. apply your function on every batch\r\n\r\nThis can explain the time difference between your different experiments.\r\n\r\nThe hash computation time depends of how complex your function is. 
For a tokenizer, the hash computation scans the lists of the words in the tokenizer to identify this tokenizer. Usually it takes 2-3 seconds.\r\n\r\nAlso note that you can disable caching though using\r\n```python\r\nimport datasets\r\n\r\ndatasets.set_caching_enabled(False)\r\n```","Hi @lhoestq ,\r\n\r\nThanks for the reply. It's entirely possible that is the issue. Since it's a side project I won't be looking at it till later this week, but, I'll verify it by disabling caching and hopefully I'll see the same runtime. \r\n\r\nAppreciate the reference,\r\n\r\nMichael","I believe this is an actual issue, tokenizing a ~4GB txt file went from an hour and a half to ~10 minutes when I switched from my pre-trained tokenizer(on the same dataset) to the default gpt2 tokenizer.\r\nBoth were loaded using:\r\n```\r\nAutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)\r\n```\r\nI trained the tokenizer using ByteLevelBPETokenizer from the Tokenizers library and save it to a tokenizer.json file.\r\n\r\nI have tested the caching ideas above, changing the number of process, the TOKENIZERS_PARALLELISM env variable, keep_in_memory=True and batching with different sizes.\r\n\r\nApologies I can't really upload much code, but wanted to back up the finding and hopefully a fix\/the problem can be found.\r\nI will comment back if I find a fix as well.","Hi @johncookds do you think this can come from one tokenizer being faster than the other one ? Can you try to compare their speed without using `datasets` just to make sure ?","Hi yes, I'm closing the loop here with some timings below. The issue seems to be at least somewhat\/mainly with the tokenizer's themselves. Moreover legacy saves of the trainer tokenizer perform faster but differently than the new tokenizer.json saves(note nothing about the training process\/adding of special tokens changed between the top two trained tokenizer tests, only the way it was saved). This is only a 3x slowdown vs like a 10x but I think the slowdown is most likely due to this.\r\n\r\n```\r\ntrained tokenizer - tokenizer.json save (same results for AutoTokenizer legacy_format=False):\r\nTokenizer time(seconds): 0.32767510414123535\r\nTokenized avg. length: 323.01\r\n\r\ntrained tokenizer - AutoTokenizer legacy_format=True:\r\nTokenizer time(seconds): 0.09258866310119629\r\nTokenized avg. length: 301.01\r\n\r\nGPT2 Tokenizer from huggingface\r\nTokenizer time(seconds): 0.1010282039642334\r\nTokenized avg. length: 461.21\r\n```","@lhoestq ,\r\n\r\nHi, which version of datasets has datasets.set_caching_enabled(False)? I get \r\nmodule 'datasets' has no attribute 'set_caching_enabled'. To hopefully get around this, I reran my code on a new set of data, and did so only once.\r\n\r\n@johncookds , thanks for chiming in, it looks this might be an issue of Tokenizer.\r\n\r\n**Tokenizer**: The runtime of GPT2TokenizerFast.from_pretrained(\"gpt2\") on 1000 chars is: **143 ms**\r\n**SlowTokenizer**: The runtime of a locally saved and loaded Tokenizer using the same vocab on 1000 chars is: **4.43 s**\r\n\r\nThat being said, I compared performance on the map function:\r\n\r\nRunning Tokenizer versus using it in the map function for 1000 chars goes from **141 ms** to **356 ms** \r\nRunning SlowTokenizer versus using it in the map function for 1000 chars with a single element goes from **4.43 s** to **9.76 s**\r\n\r\nI'm trying to figure out why the overhead of map would increase the time by double (figured it would be a fixed increase in time)? 
Though maybe this is expected behavior.\r\n\r\n@lhoestq, do you by chance know how I can redirect this issue to Tokenizer?\r\n\r\nRegards,\r\n\r\nMichael","Thanks for the experiments @johncookds and @wumpusman ! \r\n\r\n> Hi, which version of datasets has datasets.set_caching_enabled(False)?\r\n\r\nCurrently you have to install `datasets` from source to have this feature, but this will be available in the next release in a few days.\r\n\r\n> I'm trying to figure out why the overhead of map would increase the time by double (figured it would be a fixed increase in time)? Though maybe this is expected behavior.\r\n\r\nCould you also try with double the number of characters ? This should let us have an idea of the fixed cost (hashing) and the dynamic cost (actual tokenization, grows with the size of the input)\r\n\r\n> @lhoestq, do you by chance know how I can redirect this issue to Tokenizer?\r\n\r\nFeel free to post an issue on the `transformers` repo. Also I'm sure there should be related issues so you can also look for someone with the same concerns on the `transformers` repo.","@lhoestq,\r\n\r\nI just checked that previous run time was actually 3000 chars. I increased it to 6k chars, again, roughly double.\r\n\r\nSlowTokenizer **7.4 s** to **15.7 s**\r\nTokenizer: **276 ms** to **616 ms**\r\n\r\nI'll post this issue on Tokenizer, seems it hasn't quite been raised (albeit I noticed a similar issue that might relate).\r\n\r\nRegards,\r\n\r\nMichael","Hi, \r\nI'm following up here as I found my exact issue. It was with saving and re-loading the tokenizer. When I trained then processed the data without saving and reloading it, it was 10x-100x faster than when I saved and re-loaded it.\r\nBoth resulted in the exact same tokenized datasets as well. \r\nThere is additionally a bug where the older legacy tokenizer save does not preserve a learned tokenizing behavior if trained from scratch.\r\nUnderstand its not exactly Datasets related but hope it can help someone if they have the same issue.\r\nThanks!"],"created_at":1612645226000,"updated_at":1614203774000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"This could total relate to me misunderstanding particular call functions, but I added words to a GPT2Tokenizer, and saved it to disk (note I'm only showing snippets but I can share more) and the map function ran much slower: \r\n\r\n````\r\ndef save_tokenizer(original_tokenizer,text,path=\"simpledata\/tokenizer\"):\r\n words_unique = set(text.split(\" \"))\r\n for i in words_unique:\r\n original_tokenizer.add_tokens(i)\r\n original_tokenizer.save_pretrained(path)\r\n\r\ntokenizer2 = GPT2Tokenizer.from_pretrained(os.path.join(experiment_path,experiment_name,\"tokenizer_squad\"))\r\n\r\ntrain_set_baby=Dataset.from_dict({\"text\":[train_set[\"text\"][0][0:50]]})\r\n````\r\n\r\nI then applied the dataset map function on a fairly small set of text:\r\n\r\n```\r\n%%time\r\ntrain_set_baby = train_set_baby.map(lambda d:tokenizer2(d[\"text\"]),batched=True)\r\n\r\n```\r\n\r\n\r\nThe run time for train_set_baby.map was 6 seconds, and the batch itself was 2.6 seconds\r\n\r\n**100% 1\/1 [00:02<00:00, 2.60s\/ba] CPU times: user 5.96 s, sys: 36 ms, total: 5.99 s Wall time: 5.99 s**\r\n\r\nIn comparison using (even after adding additional tokens): \r\n`\r\ntokenizer = GPT2TokenizerFast.from_pretrained(\"gpt2\")`\r\n\r\n```\r\n%%time\r\ntrain_set_baby = train_set_baby.map(lambda d:tokenizer2(d[\"text\"]),batched=True)\r\n\r\n```\r\nThe time is \r\n**100% 1\/1 
[00:00<00:00, 34.09ba\/s] CPU times: user 68.1 ms, sys: 16 \u00b5s, total: 68.1 ms Wall time: 62.9 ms**\r\n\r\nIt seems this might relate to the tokenizer save or load function, however, the issue appears to come up when I apply the loaded tokenizer to the map function. \r\n\r\nI should also add that playing around with the amount of words I add to the tokenizer before I save it to disk and load it into memory appears to impact the time it takes to run the map function. \r\n\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1830\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1829","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1829\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1829\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1829\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1829","id":802693600,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY4NzgzNjA5","number":1829,"title":"Add Tweet Eval Dataset","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1612614985000,"updated_at":1612790274000,"closed_at":1612790273000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1829","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1829","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1829.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1829.patch"},"body":"Closes Draft PR #1407. \r\n\r\nNotes:\r\n1. I have excluded `mapping.txt` from the dataset at it only contained the name mappings, which are already present in the ClassLabels.\r\n2. I have also exluded the textual names for the emojis mentioned in the [mapping](https:\/\/github.com\/cardiffnlp\/tweeteval\/blob\/main\/datasets\/emoji\/mapping.txt).\r\n3. I do not understand @abhishekkrthakur's example generator on #1407. 
Maybe he was trying to build up on code from some other dataset.\r\n\r\nRequesting @lhoestq to review.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1829\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1828","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1828\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1828\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1828\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1828","id":802449234,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY4NTkwNDM2","number":1828,"title":"Add CelebA Dataset","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @gchhablani! Thanks for all the contributions! We definitely want more image datasets, but Face datasets are tricky in general, in this one includes predicting attributes such as Attractiveness, Gender, or Race, which can be pretty problematic.\r\n\r\nWould you be up for starting with only object classification or object detection datasets instead? (Your CIFAR-100 contribution will be super useful for example!)","Hi @yjernite, You're welcome. I am enjoying adding new datasets :)\r\nBy \"pretty problematic\", are you referring to the ethical issues? I used TFDS's [CelebA](https:\/\/github.com\/tensorflow\/datasets\/blob\/5ef7861470896acb6f74dacba85036001e4f1b8c\/tensorflow_datasets\/image\/celeba.py#L91) as a reference. Here they mention in a \"Note\" that CelebA \"may contain potential bias\". Can we not do the same? I skipped the note for now, and we can add it. However, if you feel this isn't the right time, then I won't pursue this further. \r\n\r\nBut, can this issue be handled at a later stage? Does this also apply for my Hateful Memes Issue #1810?\r\n\r\nAlso, how can I \r\n1. load a part of the dataset? since `load_dataset(<>,split='train[10:20]')` still loads all the examples.\r\n2. make `datasets_infos.json` for huge datasets which have a single configuration?\r\n\r\nI will ofcourse be looking for other datasets to add regardless. \r\n","It's definitely a thorny question. 
The short answer is: Hateful Memes and hate speech detection datasets are different since their use case is specifically to train systems to identify and hopefully remove hateful content, whereas the purpose of a dataset that has an Attractiveness score as output is implicitly to train more models to rate \"Attractiveness\". \r\n\r\nAs far as warning about the \"potential biases\", I do not think it is quite enough, especially because it is hard to guarantee that every potential user will read the documentation (it is also an insufficient warning.)\r\n\r\nNote that we do have higher standards for the dataset cards of hate speech and hateful memes datasets, so if you do choose to add that one yourself we will ask that you summarize the relevant literature in the Social Impact section.\r\n\r\nIf you really need to add this dataset for your own research for the explicit purpose of studying these biases, you can add it as a community provided dataset following https:\/\/huggingface.co\/docs\/datasets\/master\/share_dataset.html#sharing-a-community-provided-dataset but I'd recommend just skipping it for now.","So currently you do need to download the whole dataset when using it, we are working on making it easier to stream parts of it from a remote host. You can also use the filesystem integration if local storage is an issue:\r\nhttps:\/\/huggingface.co\/docs\/datasets\/master\/filesystems.html\r\n","I don't think we have a great solution for `dataset_infos.json` with a single very large config when storage space is an issue, but it should be solved by the same upcoming feature mentioned above","Okay, then I won't pursue this one further. I'll keep this branch on my repository just in case the possibility of adding this dataset comes up in the future.\r\n\r\n> So currently you do need to download the whole dataset when using it, we are working on making it easier to stream parts of it from a remote host. You can also use the filesystem integration if local storage is an issue:\r\n> https:\/\/huggingface.co\/docs\/datasets\/master\/filesystems.html\r\n\r\nAfter downloading the whole dataset (around 1.4GB), it still loads all the examples despite using `split='train[:10%]'` or `split='train[10:20]'`. \r\n\r\nEDIT: I think this would happen only when the examples are generated for the first time and saved to the cache. Streaming parts of the data from a remote host sounds amazing! But, would that also allow for streaming examples of the data from the local cache? (without saving all the examples the first time).\r\n\r\nWhat I used:\r\n`d = load_dataset('.\/datasets\/celeb_a',split='train[:10]')`\r\nOutput:\r\n`570 examples [01:33, 6.25 examples\/s]` and it keeps going. \r\n\r\nEDIT 2: After a few thousand images, I get the following error:\r\n```python\r\nOSError: [Errno 24] Too many open files: '~\/.cache\/huggingface\/datasets\/celeb_a\/default\/1.1.0\/01f9dca66039ab7c40b91b09af47a5fa8c3e49dc8d55df50da55b14116229207.incomplete'\r\n```\r\nI understand this is because of the way I load the images :\r\n```python\r\nImage.open(<path>)\r\n```\r\nWhat could be better alternative? I am only asking in case I face the same issues in the future.","Just some addition about loading only a subset of the data:\r\nCurrently if even you specify `split='train[:10]'`, it downloads and generate the full dataset, so that you can pick another part afterward if you want to. 
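As a side note on the split-slicing syntax discussed here: the slice only controls which examples end up in the returned `Dataset`; the first call still downloads and generates the full split, as explained above. A minimal sketch (the dataset name is just an example):

```python
from datasets import load_dataset

first_ten = load_dataset("ag_news", split="train[:10]")      # first 10 examples
ten_percent = load_dataset("ag_news", split="train[:10%]")   # first 10% of the split
a_window = load_dataset("ag_news", split="train[100:200]")   # examples 100-199

print(len(first_ten), len(ten_percent), len(a_window))
```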
We may change that in the future and use streaming.\r\n\r\nAnd about your open files issue, you can try to close each image file after reading its content.","Hi @lhoestq,\r\nThanks for your response.\r\n\r\nI used `gc.collect()` inside the loop and that worked for me. I think since we are using a generator, and if I have something like `train[100000:100002]`, we will need to generate the first 1000001 examples and store. Ofcourse, this feature isn't a necessity right now, I suppose.","Closing this PR."],"created_at":1612556455000,"updated_at":1613657827000,"closed_at":1613657827000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1828","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1828","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1828.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1828.patch"},"body":"Trying to add CelebA Dataset. \r\nNeed help with testing. Loading examples takes a lot of time so I am unable to generate the `dataset_infos.json` and unable to test. Also, need help with creating `dummy_data.zip`.\r\n\r\nAdditionally, trying to load a few examples using `load_dataset('.\/datasets\/celeb_a',split='train[10:20]')` still loads all the examples (doesn't stop at 10).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1828\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1827","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1827\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1827\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1827\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1827","id":802353974,"node_id":"MDU6SXNzdWU4MDIzNTM5NzQ=","number":1827,"title":"Regarding On-the-fly Data Loading","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Possible duplicate\r\n\r\n#1776 https:\/\/github.com\/huggingface\/datasets\/issues\/\r\n\r\nreally looking PR for this feature","Hi @acul3 \r\n\r\nIssue #1776 talks about doing on-the-fly data pre-processing, which I think is solved in the next release as mentioned in the issue #1825. 
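Following the suggestion above to close each image file after reading its content, here is a minimal sketch of a generator that releases the file handle before yielding, which avoids accumulating open descriptors (`OSError: [Errno 24] Too many open files`). It assumes Pillow is installed; the field names are illustrative:

```python
from PIL import Image

def iter_examples(image_paths):
    """Yield one example per image while closing each file promptly."""
    for idx, path in enumerate(image_paths):
        # The context manager closes the underlying file as soon as the block exits;
        # copy() loads the pixel data first, so the image stays usable after the close.
        with Image.open(path) as img:
            pixels = img.copy()
        yield idx, {"image_path": path, "width": pixels.width, "height": pixels.height}
```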
I also look forward to using this feature, though :)\r\n\r\nI wanted to ask about on-the-fly data loading from the cache (before pre-processing).","Hi ! Currently when you load a dataset via `load_dataset` for example, then the dataset is memory-mapped from an Arrow file on disk. Therefore there's almost no RAM usage even if your dataset contains TB of data.\r\nUsually at training time only one batch of data at a time is loaded in memory.\r\n\r\nDoes that answer your question or were you thinking about something else ?","Hi @lhoestq,\r\n\r\nI apologize for the late response. This answers my question. Thanks a lot."],"created_at":1612547028000,"updated_at":1613656516000,"closed_at":1613656516000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\n\r\nI was wondering if it is possible to load images\/texts as a batch during the training process, without loading the entire dataset on the RAM at any given point.\r\n\r\nThanks,\r\nGunjan","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1827\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1826","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1826\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1826\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1826\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1826","id":802074744,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY4Mjc4OTI2","number":1826,"title":"Print error message with filename when malformed CSV","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1612523279000,"updated_at":1612892367000,"closed_at":1612892367000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1826","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1826","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1826.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1826.patch"},"body":"Print error message specifying filename when malformed CSV file.\r\n\r\nClose 
#1821","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1826\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1825","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1825\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1825\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1825\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1825","id":802073925,"node_id":"MDU6SXNzdWU4MDIwNzM5MjU=","number":1825,"title":"Datasets library not suitable for huge text datasets.","user":{"login":"alexvaca0","id":35173563,"node_id":"MDQ6VXNlcjM1MTczNTYz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35173563?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/alexvaca0","html_url":"https:\/\/github.com\/alexvaca0","followers_url":"https:\/\/api.github.com\/users\/alexvaca0\/followers","following_url":"https:\/\/api.github.com\/users\/alexvaca0\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/alexvaca0\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/alexvaca0\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/alexvaca0\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/alexvaca0\/orgs","repos_url":"https:\/\/api.github.com\/users\/alexvaca0\/repos","events_url":"https:\/\/api.github.com\/users\/alexvaca0\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/alexvaca0\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.gi
thub.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi ! Looks related to #861 \r\n\r\nYou are right: tokenizing a dataset using map takes a lot of space since it can store `input_ids` but also `token_type_ids`, `attention_mask` and `special_tokens_mask`. Moreover if your tokenization function returns python integers then by default they'll be stored as int64 which can take a lot of space. Padding can also increase the size of the tokenized dataset.\r\n\r\nTo make things more convenient, we recently added a \"lazy map\" feature that allows to tokenize each batch at training time as you mentioned. For example you'll be able to do\r\n```python\r\nfrom transformers import BertTokenizer\r\n\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\n\r\ndef encode(batch):\r\n return tokenizer(batch[\"text\"], padding=\"longest\", truncation=True, max_length=512, return_tensors=\"pt\")\r\n\r\ndataset.set_transform(encode)\r\nprint(dataset.format)\r\n# {'type': 'custom', 'format_kwargs': {'transform': <function __main__.encode(batch)>}, 'columns': ['idx', 'label', 'sentence1', 'sentence2'], 'output_all_columns': False}\r\nprint(dataset[:2])\r\n# {'input_ids': tensor([[ 101, 2572, 3217, ... 102]]), 'token_type_ids': tensor([[0, 0, 0, ... 0]]), 'attention_mask': tensor([[1, 1, 1, ... 1]])}\r\n\r\n```\r\nIn this example the `encode` transform is applied on-the-fly on the \"text\" column.\r\n\r\nThis feature will be available in the next release 2.0 which will happen in a few days.\r\nYou can already play with it by installing `datasets` from source if you want :)\r\n\r\nHope that helps !","How recently was `set_transform` added? I am actually trying to implement it and getting an error:\r\n\r\n`AttributeError: 'Dataset' object has no attribute 'set_transform'\r\n`\r\n\r\nI'm on v.1.2.1.\r\n\r\nEDIT: Oh, wait I see now it's in the v.2.0. Whoops! This should be really useful.","Yes indeed it was added a few days ago. The code is available on master\r\nWe'll do a release next week :)\r\n\r\nFeel free to install `datasets` from source to try it out though, I would love to have some feedbacks","For information: it's now available in `datasets` 1.3.0.\r\nThe 2.0 is reserved for even cooler features ;)","Hi @alexvaca0 , we have optimized Datasets' disk usage in the latest release v1.5.\r\n\r\nFeel free to update your Datasets version\r\n```shell\r\npip install -U datasets\r\n```\r\nand see if it better suits your needs."],"created_at":1612523210000,"updated_at":1617113041000,"closed_at":1615887840000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\n\r\nI'm trying to use datasets library to load a 187GB dataset of pure text, with the intention of building a Language Model. The problem is that from the 187GB it goes to some TB when processed by Datasets. First of all, I think the pre-tokenizing step (with tokenizer.map()) is not really thought for datasets this big, but for fine-tuning datasets, as this process alone takes so much time, usually in expensive machines (due to the need of tpus - gpus) which is not being used for training. 
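The lazy `set_transform` approach shown in the comment above avoids materializing tokenized copies on disk. For the same reason, tokenization can also be pushed into the training loop's data loader, one batch at a time. A rough sketch assuming PyTorch and `transformers` are installed; `corpus.txt` and the model name are placeholders:

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
dataset = load_dataset("text", data_files={"train": "corpus.txt"})["train"]

def collate(examples):
    # Tokenize one batch at a time, at training time, instead of pre-tokenizing the whole corpus.
    return tokenizer(
        [example["text"] for example in examples],
        padding="longest",
        truncation=True,
        max_length=512,
        return_tensors="pt",
    )

loader = DataLoader(dataset, batch_size=16, collate_fn=collate)
for batch in loader:
    ...  # forward / backward pass goes here
```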
It would possibly be more efficient in such cases to tokenize each batch at training time (receive batch - tokenize batch - train with batch), so that the whole time the machine is up it's being used for training. \r\nMoreover, the pyarrow objects created from a 187 GB datasets are huge, I mean, we always receive OOM, or No Space left on device errors when only 10-12% of the dataset has been processed, and only that part occupies 2.1TB in disk, which is so many times the disk usage of the pure text (and this doesn't make sense, as tokenized texts should be lighter than pure texts).\r\n\r\nAny suggestions??","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1825\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1824","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1824\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1824\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1824\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1824","id":802048281,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY4MjU3MTU3","number":1824,"title":"Add OSCAR dataset card","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq! When are you planning to release the version with this dataset?\r\n\r\nBTW: What a huge README file :astonished:","Next week !","Closing in favor of #1833"],"created_at":1612521026000,"updated_at":1620239054000,"closed_at":1612783833000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1824","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1824","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1824.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1824.patch"},"body":"I started adding the dataset card for OSCAR !\r\n\r\nFor now it's just basic info for all the different configurations in `Dataset Structure`.\r\nIn particular the Data Splits section tells how may samples there are for each config. The Data Instances section show an example for each config, and it also shows the size in MB. Since the Data Instances section is very long the user has to click to expand the info. 
I was able to generate it thanks to the tools made by @madlag and @yjernite :D\r\n\r\nCc @pjox could you help me with the other sections ? (Dataset Description, Dataset Creation, Considerations for Using the Data, Additional Information)\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1824\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1823","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1823\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1823\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1823\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1823","id":802042181,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY4MjUyMjIx","number":1823,"title":"Add FewRel Dataset","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq,\r\n\r\nSorry for the late response. What do you mean when you say \"adding names to default config\"? Should I handle \"pid2name\" in the same config as \"default\"?","Yes I was thinking of having the pid2name field available in the default configuration (and therefore only have one config). What do you think ?","Hi @lhoestq,\r\n\r\nSorry again, the last couple of weeks were a bit busy for me. I am wondering how do you want me to achieve that. Using a custom BuilderConfig which takes in whether it is the regular data or \"pid2name\"? \"pid2name\" is only useful for \"train_wiki\", \"val_nyt\" and \"val_wiki\". So, based on my understanding, it would look like this:\r\n\r\n```python\r\nwiki_data = load_dataset('few_rel','train_wiki')\r\nid2name = load_dataset('few_rel','pid2name')\r\n```\r\nand this will be handled in the multiple configs.\r\n\r\n\r\nA better alternative could be providing name of the relationship in only \"train_wiki\", \"val_nyt\" and \"val_wiki\" as an extra feature in the dataset, and doing away with \"pid2name\" entirely. I'll only download pid2name if any of those datasets are requested, and then during generation I'll return the list with the dataset under \"names\" feature. 
How does this sound?\r\n\r\nEDIT:\r\nThere is one issue with the second approach, the entire pid2name is saved with all three datasets - \"train_wiki\", \"val_nyt\" and \"val_wiki\" ([see code below](https:\/\/github.com\/huggingface\/datasets\/pull\/1823#issuecomment-786402026)). In dummy data, I can address this by manually editing the pid2name to contain only a few id-name pairs, those matching with the examples in the corresponding example file. But this seems to be inefficient for the entire dataset - storing the same file in multiple places.","Okay, I apologize, I guess I finally understand what is required.\r\n\r\nBasically, using:\r\n\r\n```python\r\nfew_rel = load_dataset('few_rel')\r\n```\r\nshould give all the files. This seems difficult since \"pid2name\" has a different format. Any suggestions on this?","Yes that's it, sorry if that wasn't clear !","Hi @lhoestq,\n\nSince pid2name has different features from the rest of the files, how will I add them to the same config?\n\nDo we want to exclude pid2name totally and add \"names\" to every example?","If I understand correctly each sample in the \"default\" config has one relation, and each relation has corresponding names in pid2name.\r\nWould it be possible to also include the names in the \"default\" configuration for each sample ? The names of one sample can be retrieved using the relation id no ?","Yes, that can be done. But for some files, the name is already given instead of ID. Only \"train_wiki\", \"val_wiki\", \"val_nyc\" have IDs. For others, I can set the names equal to a list of key.","I think that's fine as long as we mention this processing explicitly in the dataset card.","Hi @lhoestq,\r\n\r\nI have added the changes. Please let me know in case of any remaining issues.\r\n\r\nThanks,\r\nGunjan","Hi @lhoestq,\r\n\r\nThanks for fixing it and approving :)"],"created_at":1612520523000,"updated_at":1614599780000,"closed_at":1614594099000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1823","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1823","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1823.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1823.patch"},"body":"Hi,\r\n\r\nThis PR closes this [Card](https:\/\/github.com\/huggingface\/datasets\/projects\/1#card-53285184) and Issue #1757.\r\n\r\nI wasn't sure how to add `pid2name` along with the dataset so I added it as a separate configuration. For each (head, tail, tokens) triplet, I have created one example. I have added the dictionary key as `\"relation\"` in the dataset. 
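To illustrate the idea discussed here of exposing a human-readable `names` field next to each `relation` id, a toy sketch using `Dataset.map` and an in-memory lookup (the example id, tokens, and name are made up for illustration and are not taken from the actual FewRel files):

```python
from datasets import Dataset

# Toy stand-in for one FewRel-style example.
ds = Dataset.from_dict({
    "relation": ["P931"],
    "tokens": [["Merpati", "flew", "to", "Jakarta", "."]],
})

# Hypothetical id -> names lookup; in the real dataset this information would come from pid2name.
id_to_names = {"P931": ["place served by transport hub"]}

# Fall back to the id itself when no name is known, mirroring the "names equal to a list of key" idea above.
ds = ds.map(lambda ex: {"names": id_to_names.get(ex["relation"], [ex["relation"]])})
print(ds[0]["names"])
```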
Additionally, for `pubmed_unsupervised`, I kept `\"relation\":\"\"` in the dictionary.\r\n\r\nPlease recommend better alternatives, if any.\r\n\r\nThanks,\r\nGunjan","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1823\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1822","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1822\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1822\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1822\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1822","id":802003835,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY4MjIxMzIz","number":1822,"title":"Add Hindi Discourse Analysis Natural Language Inference Dataset","user":{"login":"avinsit123","id":33565881,"node_id":"MDQ6VXNlcjMzNTY1ODgx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33565881?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/avinsit123","html_url":"https:\/\/github.com\/avinsit123","followers_url":"https:\/\/api.github.com\/users\/avinsit123\/followers","following_url":"https:\/\/api.github.com\/users\/avinsit123\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/avinsit123\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/avinsit123\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/avinsit123\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/avinsit123\/orgs","repos_url":"https:\/\/api.github.com\/users\/avinsit123\/repos","events_url":"https:\/\/api.github.com\/users\/avinsit123\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/avinsit123\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Could you also run `make style` to fix the CI check on code formatting ?","@lhoestq completed and resolved all comments."],"created_at":1612517454000,"updated_at":1613383059000,"closed_at":1613383059000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1822","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1822","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1822.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1822.patch"},"body":"# Dataset Card for Hindi Discourse Analysis Dataset\r\n\r\n## Table of Contents\r\n- [Dataset Description](#dataset-description)\r\n - [Dataset Summary](#dataset-summary)\r\n - [Supported Tasks](#supported-tasks-and-leaderboards)\r\n - [Languages](#languages)\r\n- [Dataset Structure](#dataset-structure)\r\n - [Data Instances](#data-instances)\r\n - [Data Fields](#data-fields)\r\n - [Data Splits](#data-splits)\r\n- [Dataset Creation](#dataset-creation)\r\n - [Curation Rationale](#curation-rationale)\r\n - [Source Data](#source-data)\r\n - [Annotations](#annotations)\r\n - [Personal and Sensitive Information](#personal-and-sensitive-information)\r\n- [Considerations for Using the Data](#considerations-for-using-the-data)\r\n - [Social Impact of Dataset](#social-impact-of-dataset)\r\n - [Discussion of Biases](#discussion-of-biases)\r\n - [Other Known 
Limitations](#other-known-limitations)\r\n- [Additional Information](#additional-information)\r\n - [Dataset Curators](#dataset-curators)\r\n - [Licensing Information](#licensing-information)\r\n - [Citation Information](#citation-information)\r\n - [Contributions](#contributions)\r\n\r\n## Dataset Description\r\n\r\n- HomePage : https:\/\/github.com\/midas-research\/hindi-nli-data\r\n- Paper : https:\/\/www.aclweb.org\/anthology\/2020.aacl-main.71\r\n- Point of Contact : https:\/\/github.com\/midas-research\/hindi-nli-data\r\n\r\n### Dataset Summary\r\n\r\n- Dataset for Natural Language Inference in Hindi Language. Hindi Discourse Analysis (HDA) Dataset consists of textual-entailment pairs.\r\n- Each row of the Datasets if made up of 4 columns - Premise, Hypothesis, Label and Topic.\r\n- Premise and Hypothesis is written in Hindi while Entailment_Label is in English.\r\n- Entailment_label is of 2 types - entailed and not-entailed.\r\n- Entailed means that hypotheis can be inferred from premise and not-entailed means vice versa\r\n- Dataset can be used to train models for Natural Language Inference tasks in Hindi Language.\r\n\r\n### Supported Tasks and Leaderboards\r\n\r\n- Natural Language Inference for Hindi\r\n\r\n### Languages\r\n\r\n- Dataset is in Hindi\r\n\r\n## Dataset Structure\r\n\r\n- Data is structured in TSV format. \r\n- train, test and dev files are in seperate files\r\n\r\n\r\n### Dataset Instances\r\n\r\nAn example of 'train' looks as follows.\r\n\r\n```\r\n{'hypothesis': '\u092f\u0939 \u090f\u0915 \u0935\u0930\u094d\u0923\u0928\u093e\u0924\u094d\u092e\u0915 \u0915\u0925\u0928 \u0939\u0948\u0964', 'label': 1, 'premise': '\u091c\u0948\u0938\u0947 \u0909\u0938 \u0915\u093e \u0938\u093e\u0930\u093e \u091a\u0947\u0939\u0930\u093e \u0905\u092a\u0928\u093e \u0939\u094b \u0914\u0930 \u0906\u0901\u0916\u0947\u0902 \u0915\u093f\u0938\u0940 \u0926\u0942\u0938\u0930\u0947 \u0915\u0940 \u091c\u094b \u091a\u0947\u0939\u0930\u0947 \u092a\u0930 \u092a\u092a\u094b\u091f\u094b\u0902 \u0915\u0947 \u092a\u0940\u091b\u0947 \u092e\u0939\u0938\u0942\u0930 \u0915\u0930 \u0926\u0940 \u0917\u0908\u0902\u0964', 'topic': 1}\r\n\r\n\r\n```\r\n### Data Fields\r\n\r\n- Each row contatins 4 columns - premise, hypothesis, label and topic.\r\n\r\n### Data Splits\r\n\r\n- Train : 31892\r\n- Valid : 9460\r\n- Test : 9970\r\n\r\n## Dataset Creation\r\n\r\n- We employ a recasting technique from Poliak et al. (2018a,b) to convert publicly available Hindi Discourse Analysis classification datasets in Hindi and pose them as TE problems\r\n- In this recasting process, we build template hypotheses for each class in the label taxonomy\r\n- Then, we pair the original annotated sentence with each of the template hypotheses to create TE samples.\r\n- For more information on the recasting process, refer to paper https:\/\/www.aclweb.org\/anthology\/2020.aacl-main.71\r\n\r\n### Source Data\r\n\r\nSource Dataset for the recasting process is the BBC Hindi Headlines Dataset(https:\/\/github.com\/NirantK\/hindi2vec\/releases\/tag\/bbc-hindi-v0.1)\r\n\r\n#### Initial Data Collection and Normalization\r\n\r\n- Initial Data was collected by members of MIDAS Lab from Hindi Websites. 
They crowd sourced the data annotation process and selected two random stories from our corpus and had the three annotators work on them independently and classify each sentence based on the discourse mode.\r\n- Please refer to this paper for detailed information: https:\/\/www.aclweb.org\/anthology\/2020.lrec-1.149\/\r\n- The Discourse is further classified into \"Argumentative\" , \"Descriptive\" , \"Dialogic\" , \"Informative\" and \"Narrative\" - 5 Clases.\r\n\r\n#### Who are the source language producers?\r\n\r\nPlease refer to this paper for detailed information: https:\/\/www.aclweb.org\/anthology\/2020.lrec-1.149\/\r\n\r\n### Annotations\r\n\r\n#### Annotation process\r\n\r\nAnnotation process has been described in Dataset Creation Section.\r\n\r\n#### Who are the annotators?\r\n\r\nAnnotation is done automatically by machine and corresponding recasting process.\r\n\r\n### Personal and Sensitive Information\r\n\r\nNo Personal and Sensitive Information is mentioned in the Datasets.\r\n\r\n## Considerations for Using the Data\r\n\r\nPls refer to this paper: https:\/\/www.aclweb.org\/anthology\/2020.aacl-main.71\r\n\r\n### Discussion of Biases\r\n\r\nNo known bias exist in the dataset.\r\nPls refer to this paper: https:\/\/www.aclweb.org\/anthology\/2020.aacl-main.71\r\n\r\n### Other Known Limitations\r\n\r\nNo other known limitations . Size of data may not be enough to train large models\r\n\r\n## Additional Information\r\n\r\nPls refer to this link: https:\/\/github.com\/midas-research\/hindi-nli-data\r\n\r\n### Dataset Curators\r\n\r\nIt is written in the repo : https:\/\/github.com\/midas-research\/hindi-nli-data that \r\n- This corpus can be used freely for research purposes.\r\n- The paper listed below provide details of the creation and use of the corpus. If you use the corpus, then please cite the paper.\r\n- If interested in commercial use of the corpus, send email to midas@iiitd.ac.in.\r\n- If you use the corpus in a product or application, then please credit the authors and Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi appropriately. Also, if you send us an email, we will be thrilled to know about how you have used the corpus.\r\n- Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India disclaims any responsibility for the use of the corpus and does not provide technical support. 
However, the contact listed above will be happy to respond to queries and clarifications.\r\n- Rather than redistributing the corpus, please direct interested parties to this page\r\n- Please feel free to send us an email:\r\n - with feedback regarding the corpus.\r\n - with information on how you have used the corpus.\r\n - if interested in having us analyze your data for natural language inference.\r\n - if interested in a collaborative research project.\r\n\r\n\r\n### Licensing Information\r\n\r\nCopyright (C) 2019 Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi (MIDAS, IIIT-Delhi).\r\nPls contact authors for any information on the dataset.\r\n\r\n### Citation Information\r\n\r\n```\r\n @inproceedings{uppal-etal-2020-two,\r\n title = \"Two-Step Classification using Recasted Data for Low Resource Settings\",\r\n author = \"Uppal, Shagun and\r\n Gupta, Vivek and\r\n Swaminathan, Avinash and\r\n Zhang, Haimin and\r\n Mahata, Debanjan and\r\n Gosangi, Rakesh and\r\n Shah, Rajiv Ratn and\r\n Stent, Amanda\",\r\n booktitle = \"Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing\",\r\n month = dec,\r\n year = \"2020\",\r\n address = \"Suzhou, China\",\r\n publisher = \"Association for Computational Linguistics\",\r\n url = \"https:\/\/www.aclweb.org\/anthology\/2020.aacl-main.71\",\r\n pages = \"706--719\",\r\n abstract = \"An NLP model{'}s ability to reason should be independent of language. Previous works utilize Natural Language Inference (NLI) to understand the reasoning ability of models, mostly focusing on high resource languages like English. To address scarcity of data in low-resource languages such as Hindi, we use data recasting to create NLI datasets for four existing text classification datasets. Through experiments, we show that our recasted dataset is devoid of statistical irregularities and spurious patterns. We further study the consistency in predictions of the textual entailment models and propose a consistency regulariser to remove pairwise-inconsistencies in predictions. We propose a novel two-step classification method which uses textual-entailment predictions for classification task. We further improve the performance by using a joint-objective for classification and textual entailment. 
We therefore highlight the benefits of data recasting and improvements on classification performance using our approach with supporting experimental results.\",\r\n}\r\n```\r\n\r\n### Contributions\r\n\r\nThanks to [@avinsit123](https:\/\/github.com\/avinsit123) for adding this dataset.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1822\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1821","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1821\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1821\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1821\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1821","id":801747647,"node_id":"MDU6SXNzdWU4MDE3NDc2NDc=","number":1821,"title":"Provide better exception message when one of many files results in an exception","user":{"login":"david-waterworth","id":5028974,"node_id":"MDQ6VXNlcjUwMjg5NzQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5028974?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/david-waterworth","html_url":"https:\/\/github.com\/david-waterworth","followers_url":"https:\/\/api.github.com\/users\/david-waterworth\/followers","following_url":"https:\/\/api.github.com\/users\/david-waterworth\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/david-waterworth\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/david-waterworth\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/david-waterworth\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/david-waterworth\/orgs","repos_url":"https:\/\/api.github.com\/users\/david-waterworth\/repos","events_url":"https:\/\/api.github.com\/users\/david-waterworth\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/david-waterworth\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi!\r\n\r\nThank you for reporting this issue. I agree that the information about the exception should be more clear and explicit.\r\n\r\nI could take on this issue.\r\n\r\nOn the meantime, as you can see from the exception stack trace, HF Datasets uses pandas to read the CSV files. You can pass arguments to `pandas.read_csv` by passing additional keyword arguments to `load_dataset`. For example, you may find useful this argument:\r\n- `error_bad_lines` : bool, default True\r\n Lines with too many fields (e.g. a csv line with too many commas) will by default cause an exception to be raised, and no DataFrame will be returned. 
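Separately from the `error_bad_lines` option described here (its usage with `load_dataset` follows just below), a quick way to find which of many files is malformed is to read each one on its own and report the filename. A rough sketch assuming pandas; the glob pattern is illustrative:

```python
import glob

import pandas as pd

# Parse each CSV individually so a parsing error can be traced back to a specific file.
for path in sorted(glob.glob("train*.csv")):
    try:
        pd.read_csv(path)
    except pd.errors.ParserError as err:
        print(f"{path}: {err}")
```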
If False, then these \u201cbad lines\u201d will be dropped from the DataFrame that is returned.\r\n\r\nYou could try:\r\n```python\r\ndatasets = load_dataset(\"csv\", data_files=dict(train=train_files, validation=validation_files), error_bad_lines=False)\r\n```\r\n"],"created_at":1612486143000,"updated_at":1612892367000,"closed_at":1612892367000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I find when I process many files, i.e.\r\n\r\n```\r\ntrain_files = glob.glob('rain*.csv')\r\nvalidation_files = glob.glob(validation*.csv')\r\ndatasets = load_dataset(\"csv\", data_files=dict(train=train_files, validation=validation_files))\r\n```\r\n\r\nI sometimes encounter an error due to one of the files being misformed (i.e. no data, or a comma in a field that isn't quoted, etc).\r\n\r\nFor example, this is the tail of an exception which I suspect is due to a stray comma.\r\n\r\n> File \"pandas\/_libs\/parsers.pyx\", line 756, in pandas._libs.parsers.TextReader.read\r\n> File \"pandas\/_libs\/parsers.pyx\", line 783, in pandas._libs.parsers.TextReader._read_low_memory\r\n> File \"pandas\/_libs\/parsers.pyx\", line 827, in pandas._libs.parsers.TextReader._read_rows\r\n> File \"pandas\/_libs\/parsers.pyx\", line 814, in pandas._libs.parsers.TextReader._tokenize_rows\r\n> File \"pandas\/_libs\/parsers.pyx\", line 1951, in pandas._libs.parsers.raise_parser_error\r\n> pandas.errors.ParserError: Error tokenizing data. C error: Expected 2 fields in line 559, saw 3\r\n\r\nIt would be nice if the exception trace contained the name of the file being processed (I have 250 separate files!)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1821\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1820","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1820\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1820\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1820\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1820","id":801529936,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY3ODI4OTg1","number":1820,"title":"Add metrics usage examples and 
tests","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1612463030000,"updated_at":1612533601000,"closed_at":1612533600000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1820","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1820","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1820.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1820.patch"},"body":"All metrics finally have usage examples and proper fast + slow tests :)\r\n\r\nI added examples of usage for every metric, and I use doctest to make sure they all work as expected.\r\n\r\nFor \"slow\" metrics such as bert_score or bleurt which require to download + run a transformer model, the download + forward pass are only done in the slow test.\r\nIn the fast test on the other hand, the download + forward pass are monkey patched.\r\n\r\nMetrics that need to be installed from github are not added to setup.py because it prevents uploading the `datasets` package to pypi.\r\nAn additional-test-requirements.txt file is used instead. 
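As a small illustration of the doctest mechanism mentioned here (not the actual test setup of this PR), a metric-like function whose usage example in the docstring is executed and checked automatically:

```python
import doctest

def toy_accuracy(predictions, references):
    """Compute a toy accuracy score.

    >>> toy_accuracy([0, 1, 1], [0, 0, 1])
    0.6666666666666666
    """
    return sum(p == r for p, r in zip(predictions, references)) / len(references)

if __name__ == "__main__":
    # doctest runs every `>>>` example found in this module's docstrings and fails on a mismatch.
    doctest.testmod(verbose=True)
```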
This file also include `comet` in order to not have to resolve its *impossible* dependencies.\r\n\r\nAlso `comet` is not tested on windows because one of its dependencies (fairseq) can't be installed in the CI for some reason.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1820\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1819","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1819\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1819\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1819\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1819","id":801448670,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY3NzYyMzI2","number":1819,"title":"Fixed spelling `S3Fileystem` to `S3FileSystem`","user":{"login":"philschmid","id":32632186,"node_id":"MDQ6VXNlcjMyNjMyMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32632186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/philschmid","html_url":"https:\/\/github.com\/philschmid","followers_url":"https:\/\/api.github.com\/users\/philschmid\/followers","following_url":"https:\/\/api.github.com\/users\/philschmid\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/philschmid\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/philschmid\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/philschmid\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/philschmid\/orgs","repos_url":"https:\/\/api.github.com\/users\/philschmid\/repos","events_url":"https:\/\/api.github.com\/users\/philschmid\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/philschmid\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1612456606000,"updated_at":1612457547000,"closed_at":1612457546000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1819","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1819","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1819.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1819.patch"},"body":"Fixed documentation spelling errors. 
\r\nWrong `S3Fileystem`\r\nRight `S3FileSystem`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1819\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1818","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1818\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1818\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1818\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1818","id":800958776,"node_id":"MDU6SXNzdWU4MDA5NTg3NzY=","number":1818,"title":"Loading local dataset raise requests.exceptions.ConnectTimeout","user":{"login":"Alxe1","id":15032072,"node_id":"MDQ6VXNlcjE1MDMyMDcy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15032072?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Alxe1","html_url":"https:\/\/github.com\/Alxe1","followers_url":"https:\/\/api.github.com\/users\/Alxe1\/followers","following_url":"https:\/\/api.github.com\/users\/Alxe1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Alxe1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Alxe1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Alxe1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Alxe1\/orgs","repos_url":"https:\/\/api.github.com\/users\/Alxe1\/repos","events_url":"https:\/\/api.github.com\/users\/Alxe1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Alxe1\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Thanks for reporting. This was indeed a bug introduced when we moved the `json` dataset loader inside the `datasets` package (before that, the `json` loader was fetched online, as all the other dataset scripts).\r\n\r\nThis should be fixed on master now. 
Feel free to install `datasets` from source to try it out.\r\nThe fix will be available in the next release of `datasets` in a few days"],"created_at":1612418123000,"updated_at":1612531415000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Load local dataset:\r\n```\r\ndataset = load_dataset('json', data_files=[\"..\/..\/data\/json.json\"])\r\ntrain = dataset[\"train\"]\r\nprint(train.features)\r\ntrain1 = train.map(lambda x: {\"labels\": 1})\r\nprint(train1[:2])\r\n```\r\n\r\nbut it raised requests.exceptions.ConnectTimeout:\r\n\r\n```\r\n\/Users\/littlely\/myvirtual\/tf2\/bin\/python3.7 \/Users\/littlely\/projects\/python_projects\/pytorch_learning\/nlp\/dataset\/transformers_datasets.py\r\nTraceback (most recent call last):\r\n File \"\/Users\/littlely\/myvirtual\/tf2\/lib\/python3.7\/site-packages\/urllib3\/connection.py\", line 160, in _new_conn\r\n (self._dns_host, self.port), self.timeout, **extra_kw\r\n File \"\/Users\/littlely\/myvirtual\/tf2\/lib\/python3.7\/site-packages\/urllib3\/util\/connection.py\", line 84, in create_connection\r\n raise err\r\n File \"\/Users\/littlely\/myvirtual\/tf2\/lib\/python3.7\/site-packages\/urllib3\/util\/connection.py\", line 74, in create_connection\r\n sock.connect(sa)\r\nsocket.timeout: timed out\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"\/Users\/littlely\/myvirtual\/tf2\/lib\/python3.7\/site-packages\/urllib3\/connectionpool.py\", line 677, in urlopen\r\n chunked=chunked,\r\n File \"\/Users\/littlely\/myvirtual\/tf2\/lib\/python3.7\/site-packages\/urllib3\/connectionpool.py\", line 381, in _make_request\r\n self._validate_conn(conn)\r\n File \"\/Users\/littlely\/myvirtual\/tf2\/lib\/python3.7\/site-packages\/urllib3\/connectionpool.py\", line 978, in _validate_conn\r\n conn.connect()\r\n File \"\/Users\/littlely\/myvirtual\/tf2\/lib\/python3.7\/site-packages\/urllib3\/connection.py\", line 309, in connect\r\n conn = self._new_conn()\r\n File \"\/Users\/littlely\/myvirtual\/tf2\/lib\/python3.7\/site-packages\/urllib3\/connection.py\", line 167, in _new_conn\r\n % (self.host, self.timeout),\r\nurllib3.exceptions.ConnectTimeoutError: (<urllib3.connection.HTTPSConnection object at 0x1181e9940>, 'Connection to s3.amazonaws.com timed out. (connect timeout=10)')\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"\/Users\/littlely\/myvirtual\/tf2\/lib\/python3.7\/site-packages\/requests\/adapters.py\", line 449, in send\r\n timeout=timeout\r\n File \"\/Users\/littlely\/myvirtual\/tf2\/lib\/python3.7\/site-packages\/urllib3\/connectionpool.py\", line 727, in urlopen\r\n method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]\r\n File \"\/Users\/littlely\/myvirtual\/tf2\/lib\/python3.7\/site-packages\/urllib3\/util\/retry.py\", line 439, in increment\r\n raise MaxRetryError(_pool, url, error or ResponseError(cause))\r\nurllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: \/datasets.huggingface.co\/datasets\/datasets\/json\/json.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x1181e9940>, 'Connection to s3.amazonaws.com timed out. 
(connect timeout=10)'))\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"\/Users\/littlely\/projects\/python_projects\/pytorch_learning\/nlp\/dataset\/transformers_datasets.py\", line 12, in <module>\r\n dataset = load_dataset('json', data_files=[\"..\/..\/data\/json.json\"])\r\n File \"\/Users\/littlely\/myvirtual\/tf2\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 591, in load_dataset\r\n path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True\r\n File \"\/Users\/littlely\/myvirtual\/tf2\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 263, in prepare_module\r\n head_hf_s3(path, filename=name, dataset=dataset, max_retries=download_config.max_retries)\r\n File \"\/Users\/littlely\/myvirtual\/tf2\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 232, in head_hf_s3\r\n max_retries=max_retries,\r\n File \"\/Users\/littlely\/myvirtual\/tf2\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 523, in http_head\r\n max_retries=max_retries,\r\n File \"\/Users\/littlely\/myvirtual\/tf2\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 458, in _request_with_retry\r\n raise err\r\n File \"\/Users\/littlely\/myvirtual\/tf2\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 454, in _request_with_retry\r\n response = requests.request(verb.upper(), url, **params)\r\n File \"\/Users\/littlely\/myvirtual\/tf2\/lib\/python3.7\/site-packages\/requests\/api.py\", line 61, in request\r\n return session.request(method=method, url=url, **kwargs)\r\n File \"\/Users\/littlely\/myvirtual\/tf2\/lib\/python3.7\/site-packages\/requests\/sessions.py\", line 530, in request\r\n resp = self.send(prep, **send_kwargs)\r\n File \"\/Users\/littlely\/myvirtual\/tf2\/lib\/python3.7\/site-packages\/requests\/sessions.py\", line 643, in send\r\n r = adapter.send(request, **kwargs)\r\n File \"\/Users\/littlely\/myvirtual\/tf2\/lib\/python3.7\/site-packages\/requests\/adapters.py\", line 504, in send\r\n raise ConnectTimeout(e, request=request)\r\nrequests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: \/datasets.huggingface.co\/datasets\/datasets\/json\/json.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x1181e9940>, 'Connection to s3.amazonaws.com timed out. 
(connect timeout=10)'))\r\n\r\nProcess finished with exit code 1\r\n\r\n```\r\n\r\nWhy it want to connect a remote url when I load local datasets, and how can I fix it?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1818\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1817","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1817\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1817\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1817\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1817","id":800870652,"node_id":"MDU6SXNzdWU4MDA4NzA2NTI=","number":1817,"title":"pyarrow.lib.ArrowInvalid: Column 1 named input_ids expected length 599 but got length 1500","user":{"login":"LuCeHe","id":9610770,"node_id":"MDQ6VXNlcjk2MTA3NzA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9610770?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/LuCeHe","html_url":"https:\/\/github.com\/LuCeHe","followers_url":"https:\/\/api.github.com\/users\/LuCeHe\/followers","following_url":"https:\/\/api.github.com\/users\/LuCeHe\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/LuCeHe\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/LuCeHe\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/LuCeHe\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/LuCeHe\/orgs","repos_url":"https:\/\/api.github.com\/users\/LuCeHe\/repos","events_url":"https:\/\/api.github.com\/users\/LuCeHe\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/LuCeHe\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi !\r\nThe error you have is due to the `input_ids` column not having the same number of examples as the other columns.\r\nIndeed you're concatenating the `input_ids` at this line:\r\n\r\nhttps:\/\/github.com\/LuCeHe\/GenericTools\/blob\/431835d8e13ec24dceb5ee4dc4ae58f0e873b091\/KerasTools\/lm_preprocessing.py#L134\r\n\r\nHowever the other columns are kept unchanged, and therefore you end up with an `input_ids` column with 599 elements while the others columns like `attention_mask` have 1500.\r\n\r\nTo fix that you can instead concatenate them all using\r\n```python\r\nconcatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}\r\n```\r\n\r\nAlso you may need to drop the \"text\" column before applying `group_texts` since strings can't be concatenated with lists. You can drop it at the tokenization step:\r\n```python\r\ndset = dset.map(\r\n tokenize_function,\r\n batched=True,\r\n remove_columns=[\"text\"]\r\n)\r\n```","You saved my life."],"created_at":1612405823000,"updated_at":1612706664000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I am trying to preprocess any dataset in this package with GPT-2 tokenizer, so I need to structure the datasets as long sequences of text without padding. 
I've been following a couple of your tutorials and here you can find the script that is failing right at the end\r\n\r\nhttps:\/\/github.com\/LuCeHe\/GenericTools\/blob\/master\/KerasTools\/lm_preprocessing.py\r\n\r\nIn the last iteration of the last dset.map, it gives the error that I copied in the title. Another issue that I have, if I leave the batch_size set as 1000 in the last .map, I'm afraid it's going to lose most text, so I'm considering setting both writer_batch_size and batch_size to 300 K, but I'm not sure it's the best way to go.\r\n\r\nCan you help me?\r\nThanks!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1817\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1816","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1816\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1816\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1816\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1816","id":800660995,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY3MTExMjEx","number":1816,"title":"Doc2dial rc update to latest version","user":{"login":"songfeng","id":2062185,"node_id":"MDQ6VXNlcjIwNjIxODU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2062185?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/songfeng","html_url":"https:\/\/github.com\/songfeng","followers_url":"https:\/\/api.github.com\/users\/songfeng\/followers","following_url":"https:\/\/api.github.com\/users\/songfeng\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/songfeng\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/songfeng\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/songfeng\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/songfeng\/orgs","repos_url":"https:\/\/api.github.com\/users\/songfeng\/repos","events_url":"https:\/\/api.github.com\/users\/songfeng\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/songfeng\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["- update data loader and readme for latest version 1.0.1"],"created_at":1612382934000,"updated_at":1613402124000,"closed_at":1613401473000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1816","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1816","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1816.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1816.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1816\/timeline","performed_via_github_app":null,"is_pull_request":true} 
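As a side note on the record for issue 1817 above: the maintainer's comment explains that the `ArrowInvalid` length mismatch comes from concatenating only `input_ids` while leaving the other batch columns untouched, and that the fix is to concatenate every column and to drop the raw `"text"` column at tokenization time. The snippet below is only a minimal sketch of that pattern, not the reporter's actual script: the corpus file name `corpus.txt`, the `gpt2` tokenizer and the `block_size` value are illustrative assumptions added for a self-contained example.

```python
# Minimal sketch of the fix discussed in issue 1817 (assumptions: corpus.txt,
# the "gpt2" tokenizer and block_size=128 are illustrative, not from the issue).
from datasets import load_dataset
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
block_size = 128  # illustrative chunk length


def tokenize_function(examples):
    # Returns batched lists for input_ids and attention_mask.
    return tokenizer(examples["text"])


def group_texts(examples):
    # Concatenate *every* column, not just input_ids, so all columns keep
    # the same number of rows after chunking.
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = (len(concatenated["input_ids"]) // block_size) * block_size
    # Split each column into aligned chunks of block_size.
    return {
        k: [v[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, v in concatenated.items()
    }


dset = load_dataset("text", data_files={"train": "corpus.txt"})["train"]
# Drop the raw "text" column here, since strings can't be concatenated with lists.
dset = dset.map(tokenize_function, batched=True, remove_columns=["text"])
dset = dset.map(group_texts, batched=True)
```

Because every returned column has the same number of chunks, the batched `map` no longer raises the "expected length X but got length Y" error described in the issue.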
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1815","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1815\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1815\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1815\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1815","id":800610017,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY3MDY3NjU1","number":1815,"title":"Add CCAligned Multilingual Dataset","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi !\r\n\r\nWe already have some datasets that can have many many configurations possible.\r\nTo be able to support that, we allow to subclass BuilderConfig to add as many additional parameters as you may need.\r\nThis way users can load any language they want. For example the [bible_para](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/bible_para\/bible_para.py) dataset is a dataset for translation and therefore users should be able to provide any language pair. You can check how the subclass of BuilderConfig is defined [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/bible_para\/bible_para.py#L49).\r\n\r\nFor testing, only the configurations defined in the `BUILDER_CONFIGS` class attribute are used.\r\nAll the other configs combinations are not tested, but they can be used by users. If a config doesn't already exist in `BUILDER_CONFIGS`, then it is created on the fly.\r\nFor example in [bible_para](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/bible_para\/bible_para.py#L61), only 6 configs are defined in `BUILDER_CONFIGS`.\r\n\r\nSo what I would do in your case is have something like\r\n```python\r\n\r\nclass CCAlignedConfig(datasets.BuilderConfig):\r\n def __init__(self, *args, documents_or_sentences=None, language_code=None, **kwargs):\r\n super().__init__(\r\n *args,\r\n name=f\"{documents_or_sentences}-{language_code}\",\r\n **kwargs,\r\n )\r\n self.documents_or_sentences = documents_or_sentences\r\n self.language_code = language_code\r\n```\r\nAnd of course, feel free to change\/rename things if you want to. In particular I think we can improve the name of the parameter `documents_or_sentences`","Hi @lhoestq,\r\n\r\nThanks a lot! I don't know why I didn't think about that. 
:P \r\nI'll make these changes and update.","Hi @lhoestq,\r\n\r\nI have tested and added dummy files. Request you to review.\r\n\r\nAlso, does this mean BUILDER_CONFIGS is only needed while testing?","Hi @lhoestq,\r\n\r\nAny changes required on this one?\r\n\r\nThanks,\r\nGunjan","Hi @lhoestq,\r\n\r\nSorry for the delay, I have added the changes from the review. For the ISO format language codes, I just selected the first two characters from the names, hoping those are correct. Let me know if you want me to verify :P\r\n\r\nThanks for taking the time to add such a detailed review. I'll keep all these changes in mind the next time I'm adding a dataset.\r\n\r\nThanks,\r\nGunjan","Hi @lhoestq,\r\n\r\nI have changed the README, and added a single example per config. Even one example is long enough to make the files heavy. Hope that isn't an issue.\r\n\r\nThanks,\r\nGunjan","Hi @lhoestq,\r\n\r\nThanks for approving."],"created_at":1612378792000,"updated_at":1614601983000,"closed_at":1614594981000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1815","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1815","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1815.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1815.patch"},"body":"Hello,\r\n\r\nI'm trying to add [CCAligned Multilingual Dataset](http:\/\/www.statmt.org\/cc-aligned\/). This has the potential to close #1756.\r\n\r\nThis dataset has two types - Document-Pairs, and Sentence-Pairs.\r\n\r\nThe datasets are huge, so I won't be able to test all of them. At the same time, a user might only want to download one particular language and not all. To provide this feature, `load_dataset`'s `**config_kwargs` should allow some random keyword args, in this case -`language_code`. This will be needed before the dataset is downloaded and extracted.\r\n\r\nI'm expecting the usage to be something like - \r\n`load_dataset('ccaligned_multilingual','documents',language_code='en_XX-af_ZA')`. Ofcourse, at a later stage we can provide just two character language codes. This also has an issue where one language has multiple files (`my_MM` and `my_MM_zaw` on the link), but before that the required functionality must be added to `load_dataset`.\r\n\r\nIt would be great if someone could either tell me an alternative way to do this, or point me to where changes need to be made, if any, apart from the `BuilderConfig` definition. \r\n\r\nAdditionally, I believe the tests will also have to be modified if this change is made, since it would not be possible to test for any random keyword arguments. \r\n\r\nA decent way to go about this would be to provide all the options in a list\/dictionary for `language_code` and use that to test the arguments. In essence, this is similar to the pre-trained checkpoint dictionary as `transformers`. 
That means writing dataset specific tests, or adding something new to dataset generation script to make it easier for everyone to add keyword arguments without having to worry about the tests.\r\n\r\nThanks,\r\nGunjan\r\n\r\nRequesting @lhoestq \/ @yjernite to review.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1815\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1814","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1814\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1814\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1814\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1814","id":800516236,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY2OTg4NTI1","number":1814,"title":"Add Freebase QA Dataset","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq \r\n\r\nThanks for approving. Request you to close PR #1435 as well."],"created_at":1612371469000,"updated_at":1612468071000,"closed_at":1612455708000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1814","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1814","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1814.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1814.patch"},"body":"Closes PR #1435. 
Fixed issues with PR #1809.\r\n\r\nRequesting @lhoestq to review.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1814\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1813","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1813\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1813\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1813\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1813","id":800435973,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY2OTIxNDcz","number":1813,"title":"Support future datasets","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1612366009000,"updated_at":1612521228000,"closed_at":1612521227000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1813","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1813","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1813.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1813.patch"},"body":"If a dataset is available at the version of the local installation of `datasets` (e.g. 
1.2.0), then loading this dataset means loading the script at this version.\r\n\r\nHowever when trying to load a dataset that is only available on master, currently users have to specify `script_version=\"master\"` in `load_dataset` to make it work.\r\n\r\nHowever we could automatically get the dataset from master instead in this case.\r\n\r\nI added this feature in this PR.\r\nI also added a warning if a dataset is not available at the version of the local installation of `datasets` but is loaded from master:\r\n```python\r\n>>> load_dataset(\"silicone\", \"dyda_da\")\r\nCouldn't find file locally at silicone\/silicone.py, or remotely at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.2.0\/datasets\/silicone\/silicone.py.\r\nThe file was picked from the master branch on github instead at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/master\/datasets\/silicone\/silicone.py.\r\nDownloading and preparing dataset silicone\/dyda_da (download: 8.46 MiB, generated: 9.39 MiB, post-processed: Unknown size, total: 17.86 MiB) to \/Users\/quentinlhoest\/.cache\/huggingface\/datasets\/silicone\/dyda_da\/1.0.0\/d41d8c0b73c6df035b1369c45774418f0051163ea689b5502b8bda783adf6342...\r\n...\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1813\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1812","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1812\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1812\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1812\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1812","id":799379178,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY2MDMxODIy","number":1812,"title":"Add CIFAR-100 Dataset","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq,\r\nI have updated with the changes from the review.","Thanks for approving 
:)"],"created_at":1612279379000,"updated_at":1612782618000,"closed_at":1612780746000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1812","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1812","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1812.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1812.patch"},"body":"Adding CIFAR-100 Dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1812\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1811","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1811\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1811\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1811\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1811","id":799211060,"node_id":"MDU6SXNzdWU3OTkyMTEwNjA=","number":1811,"title":"Unable to add Multi-label Datasets","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for adding this dataset! As far as I know `supervised_keys` is mostly a holdover from TFDS, but isn't really used, so feel free to drop it (@lhoestq or @thomwolf correct me if I'm wrong). It definitely shouldn't be blocking :) ","I can confirm that it comes from TFDS and is not used at the moment.","Thanks @yjernite @lhoestq \r\n\r\nThe template for new dataset makes it slightly confusing. I suppose the comment suggesting its update can be removed.","Closing this issue since it was answered."],"created_at":1612266656000,"updated_at":1613657791000,"closed_at":1613657791000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I am trying to add [CIFAR-100](https:\/\/www.cs.toronto.edu\/~kriz\/cifar.html) dataset. The dataset contains two labels per image - `fine label` and `coarse label`. Using just one label in supervised keys as \r\n`supervised_keys=(\"img\", \"fine_label\")` raises no issue. 
But trying `supervised_keys=(\"img\", \"fine_label\",\"coarse_label\")` leads to this error : \r\n\r\n```python\r\nTraceback (most recent call last):\r\n File \"test_script.py\", line 2, in <module>\r\n d = load_dataset('.\/datasets\/cifar100')\r\n File \"~\/datasets\/src\/datasets\/load.py\", line 668, in load_dataset\r\n **config_kwargs,\r\n File \"~\/datasets\/src\/datasets\/builder.py\", line 896, in __init__\r\n super(GeneratorBasedBuilder, self).__init__(*args, **kwargs)\r\n File \"~\/datasets\/src\/datasets\/builder.py\", line 247, in __init__\r\n info.update(self._info())\r\n File \"~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/cifar100\/61d2489b2d4a4abc34201432541b7380984ec714e290817d9a1ee318e4b74e0f\/cifar100.py\", line 79, in _info\r\n citation=_CITATION,\r\n File \"<string>\", line 19, in __init__\r\n File \"~\/datasets\/src\/datasets\/info.py\", line 136, in __post_init__\r\n self.supervised_keys = SupervisedKeysData(*self.supervised_keys)\r\nTypeError: __init__() takes from 1 to 3 positional arguments but 4 were given\r\n```\r\nIs there a way I can fix this?\r\n\r\nAlso, what does adding `supervised_keys` do? Is it necessary? How would I specify `supervised_keys` for a multi-input, multi-label dataset?\r\n\r\nThanks,\r\nGunjan","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1811\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1810","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1810\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1810\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1810\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1810","id":799168650,"node_id":"MDU6SXNzdWU3OTkxNjg2NTA=","number":1810,"title":"Add Hateful Memes Dataset","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I am not sure, but would `datasets.Sequence(datasets.Sequence(datasets.Sequence(datasets.Value(\"int\")))` work?","Also, I found the information for loading 
only subsets of the data [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/docs\/source\/splits.rst).","Hi @lhoestq,\r\n\r\nRequest you to check this once.\r\n\r\nThanks,\r\nGunjan","Hi @gchhablani since Array2D doesn't support images of different sizes, I would suggest to store in the dataset the paths to the image file instead of the image data. This has the advantage of not decompressing the data (images are often compressed using jpeg, png etc.). Users can still apply `.map` to load the images if they want to. Though it would en up being Sequences features.\r\n\r\nIn the future we'll add support for ragged tensors for this case and update the relevant dataset with this feature."],"created_at":1612263239000,"updated_at":1613667684000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"## Add Hateful Memes Dataset\r\n- **Name:** Hateful Memes\r\n- **Description:** [https:\/\/ai.facebook.com\/blog\/hateful-memes-challenge-and-data-set]( https:\/\/ai.facebook.com\/blog\/hateful-memes-challenge-and-data-set)\r\n- **Paper:** [https:\/\/arxiv.org\/pdf\/2005.04790.pdf](https:\/\/arxiv.org\/pdf\/2005.04790.pdf)\r\n- **Data:** [This link](https:\/\/drivendata-competition-fb-hateful-memes-data.s3.amazonaws.com\/XjiOc5ycDBRRNwbhRlgH.zip?AWSAccessKeyId=AKIARVBOBDCY4MWEDJKS&Signature=DaUuGgZWUgDHzEPPbyJ2PhSJ56Q%3D&Expires=1612816874)\r\n- **Motivation:** Including multi-modal datasets to \ud83e\udd17 datasets.\r\n\r\nI will be adding this dataset. It requires the user to sign an agreement on DrivenData. So, it will be used with a manual download.\r\n\r\nThe issue with this dataset is that the images are of different sizes. The image datasets added so far (CIFAR-10 and MNIST) have a uniform shape throughout.\r\nSo something like \r\n```python\r\n datasets.Array2D(shape=(28, 28), dtype=\"uint8\")\r\n```\r\nwon't work for the images. How would I add image features then? I checked `datasets\/features.py` but couldn't figure out the appropriate class for this. 
I'm assuming I would want to avoid re-sizing at all since we want the user to be able to access the original images.\r\n\r\nAlso, in case I want to load only a subset of the data, since the actual data is around 8.8GB, how would that be possible?\r\n\r\nThanks,\r\nGunjan","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1810\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1809","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1809\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1809\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1809\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1809","id":799059141,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY1NzY4ODQz","number":1809,"title":"Add FreebaseQA dataset","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! It looks like this PR contains changes about other datasets than freebase_qa such as DuoRC.\r\n\r\nCan you remove these changes please ?","Hi @lhoestq,\r\n\r\nI think this happened because of rebasing. I'm unable to remove the duorc commit from the branch. GEM, Arabic sarcasm datasets are also there. I can't see any merge conflicts, however. Before commiting I always rebase (shouldn't have done that).\r\nCan you explain what is to be done? Should I create a clean PR?","Hi @gchhablani \r\nI think you can simply create another branch and another PR.\r\n\r\nIf I understand correctly the github diff is messed up because you rebased instead of merge.\r\nRebasing is supposed to be used only before pushing the branch the first time, or github messes up the diff.\r\nIf you want to include changes from master on a branch that is already push you need to use git merge.","Thanks @lhoestq.\r\n\r\nI understand the issue now. I missed the instructions on the template. Sorry for bothering you unnecessarily, I'm pretty new to contributing on GitHub. 
I'll make a fresh PR.\r\n","No problem, I'm not a big fan of this weird behavior tbh.\r\nThanks for making a new PR","@lhoestq Haha, well, it's not as weird as not reading the [instructions](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md#open-a-pull-request-on-the-main-huggingface-repo-and-share-your-work).\r\nAlso, I'm enjoying adding new datasets so it's all cool :)"],"created_at":1612254953000,"updated_at":1612372505000,"closed_at":1612370586000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1809","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1809","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1809.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1809.patch"},"body":"Adding FreebaseQA dataset suggested in PR #1435 with minor edits. Also closes that PR.\r\nRequesting @lhoestq to review.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1809\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1808","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1808\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1808\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1808\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1808","id":798879180,"node_id":"MDU6SXNzdWU3OTg4NzkxODA=","number":1808,"title":"writing Datasets in a human readable format","user":{"login":"ghost","id":10137,"node_id":"MDQ6VXNlcjEwMTM3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10137?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ghost","html_url":"https:\/\/github.com\/ghost","followers_url":"https:\/\/api.github.com\/users\/ghost\/followers","following_url":"https:\/\/api.github.com\/users\/ghost\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ghost\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ghost\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ghost\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ghost\/orgs","repos_url":"https:\/\/api.github.com\/users\/ghost\/repos","events_url":"https:\/\/api.github.com\/users\/ghost\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ghost\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"},{"id":1935892912,"node_id":"MDU6TGFiZWwxOTM1ODkyOTEy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/question","name":"question","color":"d876e3","default":true,"description":"Further information is requested"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["AFAIK, there is currently no built-in method on the `Dataset` object to do this.\r\nHowever, a workaround is to directly use the Arrow table backing the dataset, **but it implies loading the whole dataset in memory** (correct me if I'm mistaken @lhoestq).\r\n\r\nYou can convert the 
Arrow table to a pandas dataframe to save the data as csv as follows:\r\n```python\r\narrow_table = dataset.data\r\ndataframe = arrow_table.to_pandas()\r\ndataframe.to_csv(\"\/path\/to\/file.csv\")\r\n```\r\n\r\nSimilarly, you can convert the dataset to a Python dict and save it as JSON:\r\n```python\r\nimport json\r\narrow_table = dataset.data\r\npy_dict = arrow_table.to_pydict()\r\nwith open(\"\/path\/to\/file.json\", \"w+\") as f:\r\n json.dump(py_dict, f)\r\n```","Indeed this works as long as you have enough memory.\r\nIt would be amazing to have export options like csv, json etc. !\r\n\r\nIt should be doable to implement something that iterates through the dataset batch by batch to write to csv for example.\r\nThere is already an `export` method but currently the only export type that is supported is `tfrecords`."],"created_at":1612234540000,"updated_at":1612278248000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI see there is a save_to_disk function to save data, but this is not human readable format, is there a way I could save a Dataset object in a human readable format to a file like json? thanks @lhoestq ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1808\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1807","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1807\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1807\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1807\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1807","id":798823591,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY1NTczNzU5","number":1807,"title":"Adding an aggregated dataset for the GEM benchmark","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Nice !"],"created_at":1612226393000,"updated_at":1612306121000,"closed_at":1612289218000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1807","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1807","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1807.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1807.patch"},"body":"This dataset 
gathers modified versions of several other conditional text generation datasets which together make up the shared task for the Generation Evaluation and Metrics workshop (think GLUE for text generation)\r\n\r\nThe changes from the original datasets are detailed in the Dataset Cards on the GEM website, which are linked to in this dataset card.\r\n\r\ncc @sebastianGehrmann\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1807\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1806","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1806\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1806\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1806\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1806","id":798607869,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY1Mzk0ODIz","number":1806,"title":"Update details to MLSUM dataset","user":{"login":"padipadou","id":15138872,"node_id":"MDQ6VXNlcjE1MTM4ODcy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15138872?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/padipadou","html_url":"https:\/\/github.com\/padipadou","followers_url":"https:\/\/api.github.com\/users\/padipadou\/followers","following_url":"https:\/\/api.github.com\/users\/padipadou\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/padipadou\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/padipadou\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/padipadou\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/padipadou\/orgs","repos_url":"https:\/\/api.github.com\/users\/padipadou\/repos","events_url":"https:\/\/api.github.com\/users\/padipadou\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/padipadou\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks!"],"created_at":1612204512000,"updated_at":1612205188000,"closed_at":1612205181000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1806","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1806","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1806.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1806.patch"},"body":"Update details to MLSUM dataset","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1806\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1805","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1805\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1805\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1805\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1805","id":798498053,"node_id":"MDU6SXNzdWU3OTg0OTgwNTM=","number":1805,"title":"can't pickle SwigPyObject objects when calling 
dataset.get_nearest_examples from FAISS index","user":{"login":"abarbosa94","id":6608232,"node_id":"MDQ6VXNlcjY2MDgyMzI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6608232?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abarbosa94","html_url":"https:\/\/github.com\/abarbosa94","followers_url":"https:\/\/api.github.com\/users\/abarbosa94\/followers","following_url":"https:\/\/api.github.com\/users\/abarbosa94\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abarbosa94\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abarbosa94\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abarbosa94\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abarbosa94\/orgs","repos_url":"https:\/\/api.github.com\/users\/abarbosa94\/repos","events_url":"https:\/\/api.github.com\/users\/abarbosa94\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abarbosa94\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Indeed we used to require mapping functions to be picklable with `pickle` or `dill` in order to cache the resulting datasets. And FAISS indexes are not picklable unfortunately.\r\n\r\nBut since #1703 this is no longer required (the caching will simply be disabled). This change will be available in the next release of `datasets`, or you can also install `datasets` from source.","I totally forgot to answer this issue, I'm so sorry. \r\n\r\nI was able to get it working by installing `datasets` from source. Huge thanks!"],"created_at":1612196057000,"updated_at":1615041166000,"closed_at":1615041166000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"So, I have the following instances in my dataset\r\n\r\n```\r\n{'question': 'An astronomer observes that a planet rotates faster after a meteorite impact. Which is the most likely effect of \r\nthis increase in rotation?', \r\n'answer': 'C', \r\n'example_id': 'ARCCH_Mercury_7175875', \r\n'options':[{'option_context': 'One effect of increased amperage in the planetary world (..)', 'option_id': 'A', 'option_text': 'Planetary density will decrease.'},\r\n (...)]}\r\n```\r\n\r\nThe `options` value is always an list with 4 options, each one is a dict with `option_context`; `option_id` and `option_text`.\r\n\r\nI would like to overwrite the `option_context` of each instance of my dataset for a dpr result that I am developing. 
Then, I trained a model already and save it in a FAISS index\r\n```\r\ndpr_dataset = load_dataset(\r\n \"text\",\r\n data_files=ARC_CORPUS_TEXT,\r\n cache_dir=CACHE_DIR,\r\n split=\"train[:100%]\",\r\n )\r\ndpr_dataset.load_faiss_index(\"embeddings\", f\"{ARC_CORPUS_FAISS}\")\r\ntorch.set_grad_enabled(False)\r\n```\r\n\r\nThen, as a processor of my dataset, I created a map function that calls the `dpr_dataset` for each _option_\r\n\r\n```\r\ndef generate_context(example):\r\n question_text = example['question']\r\n for option in example['options']:\r\n question_with_option = question_text + \" \" + option['option_text']\r\n tokenize_text = question_tokenizer(question_with_option, return_tensors=\"pt\").to(device)\r\n question_embed = (\r\n question_encoder(**tokenize_text)\r\n )[0][0].cpu().numpy()\r\n _, retrieved_examples = dpr_dataset.get_nearest_examples(\r\n \"embeddings\", question_embed, k=10\r\n )\r\n # option[\"option_context\"] = retrieved_examples[\"text\"]\r\n # option[\"option_context\"] = \" \".join(option[\"option_context\"]).strip()\r\n #result_dict = {\r\n # 'example_id': example['example_id'],\r\n # 'answer': example['answer'],\r\n # 'question': question_text,\r\n #options': example['options']\r\n # }\r\n return example\r\n```\r\n\r\nI intentionally commented on this portion of the code.\r\n\r\nBut when I call the `map` method, `ds_with_context = dataset.map(generate_context,load_from_cache_file=False)`\r\n\r\nIt calls the following error:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-55-75a458ce205c> in <module>\r\n----> 1 ds_with_context = dataset.map(generate_context,load_from_cache_file=False)\r\n\r\n~\/.cache\/pypoetry\/virtualenvs\/masters-utTTC0p8-py3.7\/lib\/python3.7\/site-packages\/datasets\/dataset_dict.py in map(self, function, with_indices, input_columns, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc)\r\n 301 num_proc=num_proc,\r\n 302 )\r\n--> 303 for k, dataset in self.items()\r\n 304 }\r\n 305 )\r\n\r\n~\/.cache\/pypoetry\/virtualenvs\/masters-utTTC0p8-py3.7\/lib\/python3.7\/site-packages\/datasets\/dataset_dict.py in <dictcomp>(.0)\r\n 301 num_proc=num_proc,\r\n 302 )\r\n--> 303 for k, dataset in self.items()\r\n 304 }\r\n 305 )\r\n\r\n~\/.cache\/pypoetry\/virtualenvs\/masters-utTTC0p8-py3.7\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)\r\n 1257 fn_kwargs=fn_kwargs,\r\n 1258 new_fingerprint=new_fingerprint,\r\n-> 1259 update_data=update_data,\r\n 1260 )\r\n 1261 else:\r\n\r\n~\/.cache\/pypoetry\/virtualenvs\/masters-utTTC0p8-py3.7\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py in wrapper(*args, **kwargs)\r\n 155 }\r\n 156 # apply actual function\r\n--> 157 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n 158 datasets: List[\"Dataset\"] = list(out.values()) if isinstance(out, dict) else [out]\r\n 159 # re-apply format to the output\r\n\r\n~\/.cache\/pypoetry\/virtualenvs\/masters-utTTC0p8-py3.7\/lib\/python3.7\/site-packages\/datasets\/fingerprint.py in wrapper(*args, **kwargs)\r\n 156 kwargs_for_fingerprint[\"fingerprint_name\"] 
= fingerprint_name\r\n 157 kwargs[fingerprint_name] = update_fingerprint(\r\n--> 158 self._fingerprint, transform, kwargs_for_fingerprint\r\n 159 )\r\n 160 \r\n\r\n~\/.cache\/pypoetry\/virtualenvs\/masters-utTTC0p8-py3.7\/lib\/python3.7\/site-packages\/datasets\/fingerprint.py in update_fingerprint(fingerprint, transform, transform_args)\r\n 103 for key in sorted(transform_args):\r\n 104 hasher.update(key)\r\n--> 105 hasher.update(transform_args[key])\r\n 106 return hasher.hexdigest()\r\n 107 \r\n\r\n~\/.cache\/pypoetry\/virtualenvs\/masters-utTTC0p8-py3.7\/lib\/python3.7\/site-packages\/datasets\/fingerprint.py in update(self, value)\r\n 55 def update(self, value):\r\n 56 self.m.update(f\"=={type(value)}==\".encode(\"utf8\"))\r\n---> 57 self.m.update(self.hash(value).encode(\"utf-8\"))\r\n 58 \r\n 59 def hexdigest(self):\r\n\r\n~\/.cache\/pypoetry\/virtualenvs\/masters-utTTC0p8-py3.7\/lib\/python3.7\/site-packages\/datasets\/fingerprint.py in hash(cls, value)\r\n 51 return cls.dispatch[type(value)](cls, value)\r\n 52 else:\r\n---> 53 return cls.hash_default(value)\r\n 54 \r\n 55 def update(self, value):\r\n\r\n~\/.cache\/pypoetry\/virtualenvs\/masters-utTTC0p8-py3.7\/lib\/python3.7\/site-packages\/datasets\/fingerprint.py in hash_default(cls, value)\r\n 44 @classmethod\r\n 45 def hash_default(cls, value):\r\n---> 46 return cls.hash_bytes(dumps(value))\r\n 47 \r\n 48 @classmethod\r\n\r\n~\/.cache\/pypoetry\/virtualenvs\/masters-utTTC0p8-py3.7\/lib\/python3.7\/site-packages\/datasets\/utils\/py_utils.py in dumps(obj)\r\n 387 file = StringIO()\r\n 388 with _no_cache_fields(obj):\r\n--> 389 dump(obj, file)\r\n 390 return file.getvalue()\r\n 391 \r\n\r\n~\/.cache\/pypoetry\/virtualenvs\/masters-utTTC0p8-py3.7\/lib\/python3.7\/site-packages\/datasets\/utils\/py_utils.py in dump(obj, file)\r\n 359 def dump(obj, file):\r\n 360 \"\"\"pickle an object to a file\"\"\"\r\n--> 361 Pickler(file, recurse=True).dump(obj)\r\n 362 return\r\n 363 \r\n\r\n~\/.cache\/pypoetry\/virtualenvs\/masters-utTTC0p8-py3.7\/lib\/python3.7\/site-packages\/dill\/_dill.py in dump(self, obj)\r\n 452 raise PicklingError(msg)\r\n 453 else:\r\n--> 454 StockPickler.dump(self, obj)\r\n 455 stack.clear() # clear record of 'recursion-sensitive' pickled objects\r\n 456 return\r\n\r\n\/usr\/lib\/python3.7\/pickle.py in dump(self, obj)\r\n 435 if self.proto >= 4:\r\n 436 self.framer.start_framing()\r\n--> 437 self.save(obj)\r\n 438 self.write(STOP)\r\n 439 self.framer.end_framing()\r\n\r\n\/usr\/lib\/python3.7\/pickle.py in save(self, obj, save_persistent_id)\r\n 502 f = self.dispatch.get(t)\r\n 503 if f is not None:\r\n--> 504 f(self, obj) # Call unbound method with explicit self\r\n 505 return\r\n 506 \r\n\r\n~\/.cache\/pypoetry\/virtualenvs\/masters-utTTC0p8-py3.7\/lib\/python3.7\/site-packages\/datasets\/utils\/py_utils.py in save_function(pickler, obj)\r\n 554 dill._dill._create_function,\r\n 555 (obj.__code__, globs, obj.__name__, obj.__defaults__, obj.__closure__, obj.__dict__, fkwdefaults),\r\n--> 556 obj=obj,\r\n 557 )\r\n 558 else:\r\n\r\n\/usr\/lib\/python3.7\/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)\r\n 636 else:\r\n 637 save(func)\r\n--> 638 save(args)\r\n 639 write(REDUCE)\r\n 640 \r\n\r\n\/usr\/lib\/python3.7\/pickle.py in save(self, obj, save_persistent_id)\r\n 502 f = self.dispatch.get(t)\r\n 503 if f is not None:\r\n--> 504 f(self, obj) # Call unbound method with explicit self\r\n 505 return\r\n 506 \r\n\r\n\/usr\/lib\/python3.7\/pickle.py in save_tuple(self, obj)\r\n 784 
write(MARK)\r\n 785 for element in obj:\r\n--> 786 save(element)\r\n 787 \r\n 788 if id(obj) in memo:\r\n\r\n\/usr\/lib\/python3.7\/pickle.py in save(self, obj, save_persistent_id)\r\n 502 f = self.dispatch.get(t)\r\n 503 if f is not None:\r\n--> 504 f(self, obj) # Call unbound method with explicit self\r\n 505 return\r\n 506 \r\n\r\n~\/.cache\/pypoetry\/virtualenvs\/masters-utTTC0p8-py3.7\/lib\/python3.7\/site-packages\/dill\/_dill.py in save_module_dict(pickler, obj)\r\n 939 # we only care about session the first pass thru\r\n 940 pickler._session = False\r\n--> 941 StockPickler.save_dict(pickler, obj)\r\n 942 log.info(\"# D2\")\r\n 943 return\r\n\r\n\/usr\/lib\/python3.7\/pickle.py in save_dict(self, obj)\r\n 854 \r\n 855 self.memoize(obj)\r\n--> 856 self._batch_setitems(obj.items())\r\n 857 \r\n 858 dispatch[dict] = save_dict\r\n\r\n\/usr\/lib\/python3.7\/pickle.py in _batch_setitems(self, items)\r\n 880 for k, v in tmp:\r\n 881 save(k)\r\n--> 882 save(v)\r\n 883 write(SETITEMS)\r\n 884 elif n:\r\n\r\n\/usr\/lib\/python3.7\/pickle.py in save(self, obj, save_persistent_id)\r\n 547 \r\n 548 # Save the reduce() output and finally memoize the object\r\n--> 549 self.save_reduce(obj=obj, *rv)\r\n 550 \r\n 551 def persistent_id(self, obj):\r\n\r\n\/usr\/lib\/python3.7\/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)\r\n 660 \r\n 661 if state is not None:\r\n--> 662 save(state)\r\n 663 write(BUILD)\r\n 664 \r\n\r\n\/usr\/lib\/python3.7\/pickle.py in save(self, obj, save_persistent_id)\r\n 502 f = self.dispatch.get(t)\r\n 503 if f is not None:\r\n--> 504 f(self, obj) # Call unbound method with explicit self\r\n 505 return\r\n 506 \r\n\r\n~\/.cache\/pypoetry\/virtualenvs\/masters-utTTC0p8-py3.7\/lib\/python3.7\/site-packages\/dill\/_dill.py in save_module_dict(pickler, obj)\r\n 939 # we only care about session the first pass thru\r\n 940 pickler._session = False\r\n--> 941 StockPickler.save_dict(pickler, obj)\r\n 942 log.info(\"# D2\")\r\n 943 return\r\n\r\n\/usr\/lib\/python3.7\/pickle.py in save_dict(self, obj)\r\n 854 \r\n 855 self.memoize(obj)\r\n--> 856 self._batch_setitems(obj.items())\r\n 857 \r\n 858 dispatch[dict] = save_dict\r\n\r\n\/usr\/lib\/python3.7\/pickle.py in _batch_setitems(self, items)\r\n 880 for k, v in tmp:\r\n 881 save(k)\r\n--> 882 save(v)\r\n 883 write(SETITEMS)\r\n 884 elif n:\r\n\r\n\/usr\/lib\/python3.7\/pickle.py in save(self, obj, save_persistent_id)\r\n 502 f = self.dispatch.get(t)\r\n 503 if f is not None:\r\n--> 504 f(self, obj) # Call unbound method with explicit self\r\n 505 return\r\n 506 \r\n\r\n~\/.cache\/pypoetry\/virtualenvs\/masters-utTTC0p8-py3.7\/lib\/python3.7\/site-packages\/dill\/_dill.py in save_module_dict(pickler, obj)\r\n 939 # we only care about session the first pass thru\r\n 940 pickler._session = False\r\n--> 941 StockPickler.save_dict(pickler, obj)\r\n 942 log.info(\"# D2\")\r\n 943 return\r\n\r\n\/usr\/lib\/python3.7\/pickle.py in save_dict(self, obj)\r\n 854 \r\n 855 self.memoize(obj)\r\n--> 856 self._batch_setitems(obj.items())\r\n 857 \r\n 858 dispatch[dict] = save_dict\r\n\r\n\/usr\/lib\/python3.7\/pickle.py in _batch_setitems(self, items)\r\n 885 k, v = tmp[0]\r\n 886 save(k)\r\n--> 887 save(v)\r\n 888 write(SETITEM)\r\n 889 # else tmp is empty, and we're done\r\n\r\n\/usr\/lib\/python3.7\/pickle.py in save(self, obj, save_persistent_id)\r\n 547 \r\n 548 # Save the reduce() output and finally memoize the object\r\n--> 549 self.save_reduce(obj=obj, *rv)\r\n 550 \r\n 551 def persistent_id(self, 
obj):\r\n\r\n\/usr\/lib\/python3.7\/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)\r\n 660 \r\n 661 if state is not None:\r\n--> 662 save(state)\r\n 663 write(BUILD)\r\n 664 \r\n\r\n\/usr\/lib\/python3.7\/pickle.py in save(self, obj, save_persistent_id)\r\n 502 f = self.dispatch.get(t)\r\n 503 if f is not None:\r\n--> 504 f(self, obj) # Call unbound method with explicit self\r\n 505 return\r\n 506 \r\n\r\n~\/.cache\/pypoetry\/virtualenvs\/masters-utTTC0p8-py3.7\/lib\/python3.7\/site-packages\/dill\/_dill.py in save_module_dict(pickler, obj)\r\n 939 # we only care about session the first pass thru\r\n 940 pickler._session = False\r\n--> 941 StockPickler.save_dict(pickler, obj)\r\n 942 log.info(\"# D2\")\r\n 943 return\r\n\r\n\/usr\/lib\/python3.7\/pickle.py in save_dict(self, obj)\r\n 854 \r\n 855 self.memoize(obj)\r\n--> 856 self._batch_setitems(obj.items())\r\n 857 \r\n 858 dispatch[dict] = save_dict\r\n\r\n\/usr\/lib\/python3.7\/pickle.py in _batch_setitems(self, items)\r\n 880 for k, v in tmp:\r\n 881 save(k)\r\n--> 882 save(v)\r\n 883 write(SETITEMS)\r\n 884 elif n:\r\n\r\n\/usr\/lib\/python3.7\/pickle.py in save(self, obj, save_persistent_id)\r\n 547 \r\n 548 # Save the reduce() output and finally memoize the object\r\n--> 549 self.save_reduce(obj=obj, *rv)\r\n 550 \r\n 551 def persistent_id(self, obj):\r\n\r\n\/usr\/lib\/python3.7\/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)\r\n 660 \r\n 661 if state is not None:\r\n--> 662 save(state)\r\n 663 write(BUILD)\r\n 664 \r\n\r\n\/usr\/lib\/python3.7\/pickle.py in save(self, obj, save_persistent_id)\r\n 502 f = self.dispatch.get(t)\r\n 503 if f is not None:\r\n--> 504 f(self, obj) # Call unbound method with explicit self\r\n 505 return\r\n 506 \r\n\r\n~\/.cache\/pypoetry\/virtualenvs\/masters-utTTC0p8-py3.7\/lib\/python3.7\/site-packages\/dill\/_dill.py in save_module_dict(pickler, obj)\r\n 939 # we only care about session the first pass thru\r\n 940 pickler._session = False\r\n--> 941 StockPickler.save_dict(pickler, obj)\r\n 942 log.info(\"# D2\")\r\n 943 return\r\n\r\n\/usr\/lib\/python3.7\/pickle.py in save_dict(self, obj)\r\n 854 \r\n 855 self.memoize(obj)\r\n--> 856 self._batch_setitems(obj.items())\r\n 857 \r\n 858 dispatch[dict] = save_dict\r\n\r\n\/usr\/lib\/python3.7\/pickle.py in _batch_setitems(self, items)\r\n 885 k, v = tmp[0]\r\n 886 save(k)\r\n--> 887 save(v)\r\n 888 write(SETITEM)\r\n 889 # else tmp is empty, and we're done\r\n\r\n\/usr\/lib\/python3.7\/pickle.py in save(self, obj, save_persistent_id)\r\n 522 reduce = getattr(obj, \"__reduce_ex__\", None)\r\n 523 if reduce is not None:\r\n--> 524 rv = reduce(self.proto)\r\n 525 else:\r\n 526 reduce = getattr(obj, \"__reduce__\", None)\r\n\r\nTypeError: can't pickle SwigPyObject objects\r\n```\r\n\r\nWhich I have no idea how to solve\/deal with it\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1805\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1804","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1804\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1804\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1804\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1804","id":798483881,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY1MjkzMTc3","number":1804,"title":"Add SICK dataset","user":{"login":"calpt","id":36051308,"node_id":"MDQ6VXNlcjM2MDUxMzA4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36051308?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/calpt","html_url":"https:\/\/github.com\/calpt","followers_url":"https:\/\/api.github.com\/users\/calpt\/followers","following_url":"https:\/\/api.github.com\/users\/calpt\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/calpt\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/calpt\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/calpt\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/calpt\/orgs","repos_url":"https:\/\/api.github.com\/users\/calpt\/repos","events_url":"https:\/\/api.github.com\/users\/calpt\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/calpt\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1612195064000,"updated_at":1612547188000,"closed_at":1612540165000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1804","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1804","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1804.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1804.patch"},"body":"Adds the SICK dataset (http:\/\/marcobaroni.org\/composes\/sick.html).\r\n\r\nCloses #1772.\r\n\r\nEdit: also closes #1632, which is the original issue requesting the dataset. 
The newer one is a duplicate.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1804\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1803","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1803\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1803\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1803\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1803","id":798243904,"node_id":"MDU6SXNzdWU3OTgyNDM5MDQ=","number":1803,"title":"Querying examples from big datasets is slower than small datasets","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hello, @lhoestq \/ @gaceladri : We have been seeing similar behavior with bigger datasets, where querying time increases. Are you folks aware of any solution that fixes this problem yet? ","Hi ! I'm pretty sure that it can be fixed by using the Arrow IPC file format instead of the raw streaming format but I haven't tested yet.\r\nI'll take a look at it soon and let you know","My workaround is to shard the dataset into splits in my ssd disk and feed the data in different training sessions. But it is a bit of a pain when we need to reload the last training session with the rest of the split with the Trainer in transformers.\r\n\r\nI mean, when I split the training and then reloads the model and optimizer, it not gets the correct global_status of the optimizer, so I need to hardcode some things. 
I'm planning to open an issue in transformers and think about it.\r\n```\r\nfrom datasets import load_dataset\r\n\r\nbook_corpus = load_dataset(\"bookcorpus\", split=\"train[:25%]\")\r\nwikicorpus = load_dataset(\"wikicorpus\", split=\"train[:25%]\")\r\nopenwebtext = load_dataset(\"openwebtext\", split=\"train[:25%]\")\r\n\r\nbig_dataset = datasets.concatenate_datasets([wikicorpus, openwebtext, book_corpus])\r\nbig_dataset.shuffle(seed=42)\r\nbig_dataset = big_dataset.map(encode, batched=True, num_proc=20, load_from_cache_file=True, writer_batch_size=5000)\r\nbig_dataset.set_format(type='torch', columns=[\"text\", \"input_ids\", \"attention_mask\", \"token_type_ids\"])\r\n\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\".\/linear_bert\",\r\n overwrite_output_dir=True,\r\n per_device_train_batch_size=71,\r\n save_steps=500,\r\n save_total_limit=10,\r\n logging_first_step=True,\r\n logging_steps=100,\r\n gradient_accumulation_steps=9,\r\n fp16=True,\r\n dataloader_num_workers=20,\r\n warmup_steps=24000,\r\n learning_rate=0.000545205002870214,\r\n adam_epsilon=1e-6,\r\n adam_beta2=0.98,\r\n weight_decay=0.01,\r\n max_steps=138974, # the total number of steps after concatenating 100% datasets\r\n max_grad_norm=1.0,\r\n)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=big_dataset,\r\n tokenizer=tokenizer))\r\n```\r\n\r\nI do one training pass with the total steps of this shard and I use len(bbig)\/batchsize to stop the training (hardcoded in the trainer.py) when I pass over all the examples in this split.\r\n\r\nNow Im working, I will edit the comment with a more elaborated answer when I left the work.","I just tested and using the Arrow File format doesn't improve the speed... This will need further investigation.\r\n\r\nMy guess is that it has to iterate over the record batches or chunks of a ChunkedArray in order to retrieve elements.\r\n\r\nHowever if we know in advance in which chunk the element is, and at what index it is, then we can access it instantaneously. But this requires dealing with the chunked arrays instead of the pyarrow Table directly which is not practical.","I have a dataset with about 2.7 million rows (which I'm loading via `load_from_disk`), and I need to fetch around 300k (particular) rows of it, by index. Currently this is taking a really long time (~8 hours). I tried sharding the large dataset but overall it doesn't change how long it takes to fetch the desired rows.\r\n\r\nI actually have enough RAM that I could fit the large dataset in memory. Would having the large dataset in memory speed up querying? To find out, I tried to load (a column of) the large dataset into memory like this:\r\n```\r\ncolumn_data = large_ds['column_name']\r\n```\r\nbut in itself this takes a really long time.\r\n\r\nI'm pretty stuck - do you have any ideas what I should do? ","Hi ! Feel free to post a message on the [forum](https:\/\/discuss.huggingface.co\/c\/datasets\/10). 
I'd be happy to help you with this.\r\n\r\nIn your post on the forum, feel free to add more details about your setup:\r\nWhat are column names and types of your dataset ?\r\nHow was the dataset constructed ?\r\nIs the dataset shuffled ?\r\nIs the dataset tokenized ?\r\nAre you on a SSD or an HDD ?\r\n\r\nI'm sure we can figure something out.\r\nFor example on my laptop I can access the 6 millions articles from wikipedia in less than a minute.","Thanks @lhoestq, I've [posted on the forum](https:\/\/discuss.huggingface.co\/t\/fetching-rows-of-a-large-dataset-by-index\/4271?u=abisee).","Fixed by #2122."],"created_at":1612177703000,"updated_at":1628100661000,"closed_at":1628100642000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"After some experiments with bookcorpus I noticed that querying examples from big datasets is slower than small datasets.\r\nFor example\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nb1 = load_dataset(\"bookcorpus\", split=\"train[:1%]\")\r\nb50 = load_dataset(\"bookcorpus\", split=\"train[:50%]\")\r\nb100 = load_dataset(\"bookcorpus\", split=\"train[:100%]\")\r\n\r\n%timeit _ = b1[-1] \r\n# 12.2 \u00b5s \u00b1 70.4 ns per loop (mean \u00b1 std. dev. of 7 runs, 100000 loops each)\r\n\r\n%timeit _ = b50[-1] \r\n# 92.5 \u00b5s \u00b1 1.24 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 10000 loops each)\r\n\r\n%timeit _ = b100[-1] \r\n# 177 \u00b5s \u00b1 3.13 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 10000 loops each)\r\n\r\n```\r\n\r\nIt looks like the time to fetch the example increases with the size of the dataset.\r\n\r\nThis is maybe due to the use of the Arrow streaming format to store the data on disk. I guess pyarrow needs to iterate through the file as a stream to find the queried sample.\r\n\r\nMaybe switching to the Arrow IPC file format could help fixing this issue.\r\n\r\nIndeed according to the [documentation](https:\/\/arrow.apache.org\/docs\/format\/Columnar.html?highlight=arrow1#ipc-file-format), it's identical to the streaming format except that it contains the memory offsets of each sample, which could fix the issue:\r\n> We define a \u201cfile format\u201d supporting random access that is build with the stream format. The file starts and ends with a magic string ARROW1 (plus padding). What follows in the file is identical to the stream format. At the end of the file, we write a footer containing a redundant copy of the schema (which is a part of the streaming format) plus memory offsets and sizes for each of the data blocks in the file. This enables random access any record batch in the file. 
See File.fbs for the precise details of the file footer.\r\n\r\ncc @gaceladri since it can help speed up your training when this one is fixed.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1803\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1802","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1802\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1802\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1802\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1802","id":797924468,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY0ODE4NDIy","number":1802,"title":"add github of contributors","user":{"login":"vasudevgupta7","id":53136577,"node_id":"MDQ6VXNlcjUzMTM2NTc3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/53136577?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vasudevgupta7","html_url":"https:\/\/github.com\/vasudevgupta7","followers_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/followers","following_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/orgs","repos_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/repos","events_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq Can you confirm if this format is fine? 
I will update cards based on your feedback.","On HuggingFace side we also have a mapping of hf user => github user (GitHub info used to be required when signing up until not long ago \u2013 cc @gary149 @beurkinger) so we can also add a link to HF profile","All the dataset cards have been updated with GitHub ids :)"],"created_at":1612151359000,"updated_at":1612346992000,"closed_at":1612346790000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1802","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1802","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1802.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1802.patch"},"body":"This PR will add contributors GitHub id at the end of every dataset cards.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1802\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1801","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1801\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1801\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1801\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1801","id":797814275,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY0NzMwODYw","number":1801,"title":"[GEM] Updated the source link of the data to update correct tokenized version.","user":{"login":"mounicam","id":11708999,"node_id":"MDQ6VXNlcjExNzA4OTk5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11708999?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mounicam","html_url":"https:\/\/github.com\/mounicam","followers_url":"https:\/\/api.github.com\/users\/mounicam\/followers","following_url":"https:\/\/api.github.com\/users\/mounicam\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mounicam\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mounicam\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mounicam\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mounicam\/orgs","repos_url":"https:\/\/api.github.com\/users\/mounicam\/repos","events_url":"https:\/\/api.github.com\/users\/mounicam\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mounicam\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@mounicam we'll keep the original version in the Turk dataset proper, and use the updated file in the GEM aggregated dataset which I'll add later today\r\n\r\n@lhoestq do not merge, I'll close when I've submitted the GEM dataset PR :) ","Closed by 
https:\/\/github.com\/huggingface\/datasets\/pull\/1807"],"created_at":1612127839000,"updated_at":1612271858000,"closed_at":1612271848000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1801","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1801","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1801.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1801.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1801\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1800","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1800\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1800\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1800\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1800","id":797798689,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY0NzE5MjA3","number":1800,"title":"Add DuoRC Dataset","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for approving @lhoestq!\r\nWill apply these changes for the other datasets I've added too."],"created_at":1612123319000,"updated_at":1612328505000,"closed_at":1612306166000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1800","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1800","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1800.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1800.patch"},"body":"Hi,\r\n\r\nDuoRC SelfRC is one type of the [DuoRC Dataset](https:\/\/duorc.github.io\/). DuoRC SelfRC is a crowdsourced Abstractive\/Extractive Question-Answering dataset based on Wikipedia movie plots. It contains examples that may have answers in the movie plot, synthesized answers which are not present in the movie plot, or no answers. 
I have also added ParaphraseRC - the other type of DuoRC dataset where questions are based on Wikipedia movie plots and answers are based on corresponding IMDb movie plots.\r\n\r\nPaper : [https:\/\/arxiv.org\/abs\/1804.07927](https:\/\/arxiv.org\/abs\/1804.07927)\r\n\r\nI want to add this to \ud83e\udd17 datasets to make it more accessible to the community. I have added all the details that I could find. Please let me know if anything else is needed from my end.\r\n\r\nThanks,\r\nGunjan\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1800\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1799","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1799\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1799\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1799\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1799","id":797789439,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY0NzEyMzUy","number":1799,"title":"Update: SWDA - Fixed code to use all metadata features. Added comments and cleaned c\u2026","user":{"login":"gmihaila","id":22454783,"node_id":"MDQ6VXNlcjIyNDU0Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22454783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gmihaila","html_url":"https:\/\/github.com\/gmihaila","followers_url":"https:\/\/api.github.com\/users\/gmihaila\/followers","following_url":"https:\/\/api.github.com\/users\/gmihaila\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gmihaila\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gmihaila\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gmihaila\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gmihaila\/orgs","repos_url":"https:\/\/api.github.com\/users\/gmihaila\/repos","events_url":"https:\/\/api.github.com\/users\/gmihaila\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gmihaila\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@yjernite Pushed all the changes you recommended. 
Thank you for your help!"],"created_at":1612120735000,"updated_at":1612908373000,"closed_at":1612885798000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1799","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1799","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1799.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1799.patch"},"body":"This is a dataset I currently use my research and I realized some features are not being returned.\r\n\r\nPrevious code was not using all available metadata and was kind of messy\r\n\r\nI fixed code to use all metadata and made some modification to be more efficient and better formatted.\r\n\r\n\r\nPlease let me know if I need to make any changes.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1799\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1798","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1798\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1798\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1798\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1798","id":797766818,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY0Njk2NjE1","number":1798,"title":"Add Arabic sarcasm dataset","user":{"login":"mapmeld","id":643918,"node_id":"MDQ6VXNlcjY0MzkxOA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/643918?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mapmeld","html_url":"https:\/\/github.com\/mapmeld","followers_url":"https:\/\/api.github.com\/users\/mapmeld\/followers","following_url":"https:\/\/api.github.com\/users\/mapmeld\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mapmeld\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mapmeld\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mapmeld\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mapmeld\/orgs","repos_url":"https:\/\/api.github.com\/users\/mapmeld\/repos","events_url":"https:\/\/api.github.com\/users\/mapmeld\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mapmeld\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq thanks for the comments - I believe these are now addressed. 
I re-generated the datasets_info.json and dummy data"],"created_at":1612114735000,"updated_at":1612989553000,"closed_at":1612348554000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1798","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1798","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1798.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1798.patch"},"body":"This MIT license dataset: https:\/\/github.com\/iabufarha\/ArSarcasm\r\n\r\nVia https:\/\/sites.google.com\/view\/ar-sarcasm-sentiment-detection\/","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1798\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1797","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1797\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1797\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1797\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1797","id":797357901,"node_id":"MDU6SXNzdWU3OTczNTc5MDE=","number":1797,"title":"Connection error","user":{"login":"smile0925","id":46243662,"node_id":"MDQ6VXNlcjQ2MjQzNjYy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/46243662?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/smile0925","html_url":"https:\/\/github.com\/smile0925","followers_url":"https:\/\/api.github.com\/users\/smile0925\/followers","following_url":"https:\/\/api.github.com\/users\/smile0925\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/smile0925\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/smile0925\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/smile0925\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/smile0925\/orgs","repos_url":"https:\/\/api.github.com\/users\/smile0925\/repos","events_url":"https:\/\/api.github.com\/users\/smile0925\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/smile0925\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! 
For future references let me add a link to our discussion here : https:\/\/github.com\/huggingface\/datasets\/issues\/759#issuecomment-770684693\r\n\r\nLet me know if you manage to fix your proxy issue or if we can do something on our end to help you :)"],"created_at":1611991965000,"updated_at":1628100577000,"closed_at":1628100577000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI am hitting to the error, help me and thanks.\r\n\r\n`train_data = datasets.load_dataset(\"xsum\", split=\"train\")`\r\n`ConnectionError: Couldn't reach https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.0.2\/datasets\/xsum\/xsum.py`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1797\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1796","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1796\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1796\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1796\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1796","id":797329905,"node_id":"MDU6SXNzdWU3OTczMjk5MDU=","number":1796,"title":"Filter on dataset too much slowww","user":{"login":"ayubSubhaniya","id":20911334,"node_id":"MDQ6VXNlcjIwOTExMzM0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20911334?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ayubSubhaniya","html_url":"https:\/\/github.com\/ayubSubhaniya","followers_url":"https:\/\/api.github.com\/users\/ayubSubhaniya\/followers","following_url":"https:\/\/api.github.com\/users\/ayubSubhaniya\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ayubSubhaniya\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ayubSubhaniya\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ayubSubhaniya\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ayubSubhaniya\/orgs","repos_url":"https:\/\/api.github.com\/users\/ayubSubhaniya\/repos","events_url":"https:\/\/api.github.com\/users\/ayubSubhaniya\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ayubSubhaniya\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["When I use the filter on the arrow table directly, it works like butter. But I can't find a way to update the table in `Dataset` object.\r\n\r\n```\r\nds_table = dataset.data.filter(mask=dataset['flag'])\r\n```","@thomwolf @lhoestq can you guys please take a look and recommend some solution.","Hi ! Currently the filter method reads the dataset batch by batch to write a new, filtered, arrow file on disk. Therefore all the reading + writing can take some time.\r\nUsing a mask directly on the arrow table doesn't do any read or write operation therefore it's way quicker.\r\n\r\nReplacing the old table by the new one should do the job:\r\n```python\r\ndataset._data = dataset._data.filter(...)\r\n```\r\n\r\nNote: this is a **workaround** and in general users shouldn't have to do that. 
In particular if you did some `shuffle` or `select` before that then it would not work correctly since the indices mapping (index from `__getitem__` -> index in the table) would not be valid anymore. But if you haven't done any `shuffle`, `select`, `shard`, `train_test_split` etc. then it should work.\r\n\r\nIdeally it would be awesome to update the filter function to allow masking this way !\r\nIf you would like to give it a shot I will be happy to help :) ","Yes, would be happy to contribute. Thanks","Hi @lhoestq @ayubSubhaniya,\r\n\r\nIf there's no progress on this one, can I try working on it?\r\n\r\nThanks,\r\nGunjan","Sure @gchhablani feel free to start working on it, this would be very appreciated :)\r\nThis feature is would be really awesome, especially since arrow allows to mask really quickly and without having to rewrite the dataset on disk"],"created_at":1611979759000,"updated_at":1613668164000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I have a dataset with 50M rows.\r\nFor pre-processing, I need to tokenize this and filter rows with the large sequence.\r\n\r\nMy tokenization took roughly 12mins. I used `map()` with batch size 1024 and multi-process with 96 processes.\r\n\r\nWhen I applied the `filter()` function it is taking too much time. I need to filter sequences based on a boolean column.\r\nBelow are the variants I tried.\r\n1. filter() with batch size 1024, single process (takes roughly 3 hr)\r\n2. filter() with batch size 1024, 96 processes (takes 5-6 hrs \u00af\\\\\\_(\u30c4)\\_\/\u00af)\r\n3. filter() with loading all data in memory, only a single boolean column (never ends).\r\n\r\nCan someone please help?\r\n\r\nBelow is a sample code for small dataset.\r\n\r\n```\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('glue', 'mrpc', split='train')\r\ndataset = dataset.map(lambda x: {'flag': random.randint(0,1)==1})\r\n\r\ndef _amplify(data):\r\n return data\r\n\r\ndataset = dataset.filter(_amplify, batch_size=1024, keep_in_memory=False, input_columns=['flag'])\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1796\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1795","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1795\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1795\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1795\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1795","id":797021730,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY0MDk5OTUz","number":1795,"title":"Custom formatting for lazy map + arrow data extraction 
refactor","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This PR is amazing!!!\r\n\r\nI only looked at `arrow_dataset.py` and `formatting\/formatting.py` but those look good to me.\r\n\r\nMy only (tiny) concern is the name of the function: I don't think it's self-evident that `set_format` applies a generic transformation, and some people might not look too far into the doc.\r\n\r\nMaybe we could have an `apply_transform` or `process_columns` method which is called by `set_format` (to keep backward compatibility)?","What about something like `.set_format` and `.set_transform` ?\r\n- set_format would be the same as right now, i.e. defined by a format type.\r\n- set_transform would define the transformation that is applied on output batches on-the-fly.\r\n\r\nI was also thinking about `._with_format` and `.with_transform`. It could be their equivalent but would create a **new** dataset with the corresponding format or transform ? I know @sgugger was interested in something like that.","Yup, I think that would make all of these options very clear!","I like all those options as well (as long as the `_` in `_with_format` is a typo ;-) )","Yes it's a typo indeed ;)\r\n\r\nAlright I'll do the changes !","I took all your suggestions into account, thanks :)\r\nLet me know if you have more comments"],"created_at":1611938153000,"updated_at":1612518847000,"closed_at":1612518846000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1795","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1795","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1795.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1795.patch"},"body":"Hi !\r\n\r\nThis PR refactors the way data are extracted from pyarrow tables to extend it to the use of custom formatting functions.\r\n\r\nWhile the internal storage of the dataset is always the Apache Arrow format, by setting a specific format on a dataset, you can cast the output of `datasets.Dataset.__getitem__` in NumPy\/pandas\/PyTorch\/TensorFlow, on-the-fly.\r\n\r\nA specific format can be activated with `datasets.Dataset.set_format`. For example: `dataset.set_format(type='torch', columns=['label'])`.\r\n\r\n### What's new:\r\n\r\nYou can now also define your own formatting function that is applied on-the-fly. 
To do so you can pass your formatting function in the `transform` parameter of `datasets.Dataset.set_format`, and keep `type` to `None`.\r\nA formatting function is a callable that takes a batch (as a dict, formatted as python) as input and returns a batch.\r\n\r\nHere is an example to tokenize and pad tokens on-the-fly when accessing the samples:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nfrom transformers import BertTokenizer\r\n\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\ndef encode(batch):\r\n return tokenizer(batch[\"sentence1\"], padding=\"longest\", truncation=True, max_length=512, return_tensors=\"pt\")\r\n\r\ndataset = load_dataset(\"glue\", \"mrpc\", split=\"train\")\r\ndataset.set_format(transform=encode)\r\ndataset.format\r\n# {'type': 'custom', 'format_kwargs': {'transform': <function __main__.encode(batch)>}, 'columns': ['idx', 'label', 'sentence1', 'sentence2'], 'output_all_columns': False}\r\ndataset[:2]\r\n# {'input_ids': tensor([[ 101, 2572, 3217, ... 102]]), 'token_type_ids': tensor([[0, 0, 0, ... 0]]), 'attention_mask': tensor([[1, 1, 1, ... 1]])}\r\n```\r\n\r\nLet me know what you think of this API !\r\nWe can still change it if we want to.\r\n\r\nEspecially @sgugger since this may be useful when using `datasets` to train models.\r\n\r\nEDIT: this was changed to `dataset.set_transform(encode)`\r\n\r\n-------------------\r\n\r\nNote:\r\n\r\nI had to refactor the way data are extracted and formatted from pyarrow tables and I made it more robust and flexible. In particular I modularized it to be able to unit-test it properly. This was very helpful since I detected some bugs in the previous implementation and was able to fix them.\r\n\r\nSome bugs I found and fixed:\r\n- certain slices\/ranges were not supported because negative ids were passed to pyarrow\r\n- formatting as numpy\/torch\/tensorflow a column would make it lose its precision information (for example a column as `Value(\"float32\")`) would be returned as a tensor of float64 (default behavior for numpy)\r\n- on windows integers formatted as numpy\/torch\/tensorflow were not always int64 tensors by default but were int32 \r\n\r\nThe unit tests for those are now really extensive :)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1795\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1794","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1794\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1794\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1794\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1794","id":796975588,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY0MDYyMTkw","number":1794,"title":"Move silicone 
directory","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1611934395000,"updated_at":1611937899000,"closed_at":1611937898000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1794","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1794","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1794.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1794.patch"},"body":"The dataset was added in #1761 but not in the right directory. I'm moving it to \/datasets","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1794\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1793","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1793\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1793\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1793\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1793","id":796940299,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY0MDMzMjk0","number":1793,"title":"Minor fix the docstring of 
load_metric","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1611931655000,"updated_at":1611939212000,"closed_at":1611939212000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1793","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1793","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1793.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1793.patch"},"body":"Minor fix:\r\n- duplicated attributes\r\n- format fix","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1793\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1792","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1792\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1792\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1792\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1792","id":796934627,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY0MDI4NTk1","number":1792,"title":"Allow loading dataset 
in-memory","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I am wondering how to test their difference...","> ring how to test their difference...\r\n\r\nHmm I don't think pyarrow exposes an API to check if a Table comes from a file that is memory-mapped. In particular since all the buffer\/memory logic is in the C++ part of pyarrow.\r\n\r\nOtherwise we can still check the difference of RAM used when loading a big chunk of data.","> Hmm I don't think pyarrow exposes an API to check if a Table comes from a file that is memory-mapped. In particular since all the buffer\/memory logic is in the C++ part of pyarrow.\r\n> \r\n> Otherwise we can still check the difference of RAM used when loading a big chunk of data.\r\n\r\n@lhoestq I think I found a way: `pa.total_allocated_bytes()` :smirk:"],"created_at":1611931190000,"updated_at":1613139208000,"closed_at":1613139208000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1792","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1792","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1792.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1792.patch"},"body":"Allow loading datasets either from:\r\n- memory-mapped file (current implementation)\r\n- from file descriptor, copying data to physical memory\r\n\r\nClose #708","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1792\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1791","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1791\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1791\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1791\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1791","id":796924519,"node_id":"MDExOlB1bGxSZXF1ZXN0NTY0MDE5OTk3","number":1791,"title":"Small fix with corrected logging of train 
vectors","user":{"login":"TezRomacH","id":7549587,"node_id":"MDQ6VXNlcjc1NDk1ODc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7549587?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/TezRomacH","html_url":"https:\/\/github.com\/TezRomacH","followers_url":"https:\/\/api.github.com\/users\/TezRomacH\/followers","following_url":"https:\/\/api.github.com\/users\/TezRomacH\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/TezRomacH\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/TezRomacH\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/TezRomacH\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/TezRomacH\/orgs","repos_url":"https:\/\/api.github.com\/users\/TezRomacH\/repos","events_url":"https:\/\/api.github.com\/users\/TezRomacH\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/TezRomacH\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1611930366000,"updated_at":1611946270000,"closed_at":1611939907000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1791","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1791","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1791.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1791.patch"},"body":"Now you can set `train_size` to the whole dataset size via `train_size = -1` and login writes not `Training the index with the first -1 vectors` but (for example) `Training the index with the first 16123 vectors`. And maybe more than dataset length. 
Logging will be correct","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1791\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1790","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1790\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1790\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1790\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1790","id":796678157,"node_id":"MDU6SXNzdWU3OTY2NzgxNTc=","number":1790,"title":"ModuleNotFoundError: No module named 'apache_beam', when specific languages.","user":{"login":"miyamonz","id":6331508,"node_id":"MDQ6VXNlcjYzMzE1MDg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6331508?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/miyamonz","html_url":"https:\/\/github.com\/miyamonz","followers_url":"https:\/\/api.github.com\/users\/miyamonz\/followers","following_url":"https:\/\/api.github.com\/users\/miyamonz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/miyamonz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/miyamonz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/miyamonz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/miyamonz\/orgs","repos_url":"https:\/\/api.github.com\/users\/miyamonz\/repos","events_url":"https:\/\/api.github.com\/users\/miyamonz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/miyamonz\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi !\r\n\r\nApache Beam is a framework used to define data transformation pipelines. These pipeline can then be run in many runtimes: DataFlow, Spark, Flink, etc. There also exist a local runner called the DirectRunner.\r\nWikipedia is a dataset that requires some parsing, so to allow the processing to be run on this kind of runtime we're using Apache Beam.\r\n\r\nAt Hugging Face we've already processed certain versions of wikipedia (the `20200501.en` one for example) so that users can directly download the processed version instead of using Apache Beam to process it.\r\nHowever for the japanese language we haven't processed it so you'll have to run the processing on your side.\r\nSo you do need Apache Beam to process `20200501.ja`.\r\n\r\nYou can install Apache Beam with\r\n```\r\npip install apache-beam\r\n```\r\n\r\nI think we can probably improve the error message to let users know of this subtlety.\r\nWhat #498 implied is that Apache Beam is not needed when you process a dataset that doesn't use Apache Beam.","Thanks for your reply! \r\nI understood.\r\n\r\nI tried again with installing apache-beam, add ` beam_runner=\"DirectRunner\"` and an anther `mwparserfromhell` is also required so I installed it.\r\nbut, it also failed. 
It exited 1 without error message.\r\n\r\n```py\r\nimport datasets\r\n# BTW, 20200501.ja doesn't exist at wikipedia, so I specified date argument\r\nwiki = datasets.load_dataset(\"wikipedia\", language=\"ja\", date=\"20210120\", cache_dir=\".\/datasets\", beam_runner=\"DirectRunner\")\r\nprint(wiki)\r\n```\r\nand its log is below\r\n```\r\nUsing custom data configuration 20210120.ja\r\nDownloading and preparing dataset wikipedia\/20210120.ja-date=20210120,language=ja (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to .\/datasets\/wikipedia\/20210120.ja-date=20210120,language=ja\/0.0.0\/4021357e28509391eab2f8300d9b689e7e8f3a877ebb3d354b01577d497ebc63...\r\nKilled\r\n```\r\n\r\nI also tried on another machine because it may caused by insufficient resources.\r\n```\r\n$ python main.py\r\nUsing custom data configuration 20210120.ja\r\nDownloading and preparing dataset wikipedia\/20210120.ja-date=20210120,language=ja (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to .\/datasets\/wikipedia\/20210120.ja-date=20210120,language=ja\/0.0.0\/4021357e28509391eab2f8300d9b689e7e8f3a877ebb3d354b01577d497ebc63...\r\n\r\nTraceback (most recent call last):\r\n File \"main.py\", line 3, in <module>\r\n wiki = datasets.load_dataset(\"wikipedia\", language=\"ja\", date=\"20210120\", cache_dir=\".\/datasets\", beam_runner=\"DirectRunner\")\r\n File \"\/home\/miyamonz\/.cache\/pypoetry\/virtualenvs\/try-datasets-4t4JWXxu-py3.8\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 609, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/home\/miyamonz\/.cache\/pypoetry\/virtualenvs\/try-datasets-4t4JWXxu-py3.8\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 526, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/home\/miyamonz\/.cache\/pypoetry\/virtualenvs\/try-datasets-4t4JWXxu-py3.8\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 1069, in _download_and_prepare\r\n pipeline_results = pipeline.run()\r\n File \"\/home\/miyamonz\/.cache\/pypoetry\/virtualenvs\/try-datasets-4t4JWXxu-py3.8\/lib\/python3.8\/site-packages\/apache_beam\/pipeline.py\", line 561, in run\r\n return self.runner.run_pipeline(self, self._options)\r\n File \"\/home\/miyamonz\/.cache\/pypoetry\/virtualenvs\/try-datasets-4t4JWXxu-py3.8\/lib\/python3.8\/site-packages\/apache_beam\/runners\/direct\/direct_runner.py\", line 126, in run_pipeline\r\n return runner.run_pipeline(pipeline, options)\r\n File \"\/home\/miyamonz\/.cache\/pypoetry\/virtualenvs\/try-datasets-4t4JWXxu-py3.8\/lib\/python3.8\/site-packages\/apache_beam\/runners\/portability\/fn_api_runner\/fn_runner.py\", line 182, in run_pipeline\r\n self._latest_run_result = self.run_via_runner_api(\r\n File \"\/home\/miyamonz\/.cache\/pypoetry\/virtualenvs\/try-datasets-4t4JWXxu-py3.8\/lib\/python3.8\/site-packages\/apache_beam\/runners\/portability\/fn_api_runner\/fn_runner.py\", line 193, in run_via_runner_api\r\n return self.run_stages(stage_context, stages)\r\n File \"\/home\/miyamonz\/.cache\/pypoetry\/virtualenvs\/try-datasets-4t4JWXxu-py3.8\/lib\/python3.8\/site-packages\/apache_beam\/runners\/portability\/fn_api_runner\/fn_runner.py\", line 358, in run_stages\r\n stage_results = self._run_stage(\r\n File \"\/home\/miyamonz\/.cache\/pypoetry\/virtualenvs\/try-datasets-4t4JWXxu-py3.8\/lib\/python3.8\/site-packages\/apache_beam\/runners\/portability\/fn_api_runner\/fn_runner.py\", line 549, in _run_stage\r\n 
last_result, deferred_inputs, fired_timers = self._run_bundle(\r\n File \"\/home\/miyamonz\/.cache\/pypoetry\/virtualenvs\/try-datasets-4t4JWXxu-py3.8\/lib\/python3.8\/site-packages\/apache_beam\/runners\/portability\/fn_api_runner\/fn_runner.py\", line 595, in _run_bundle\r\n result, splits = bundle_manager.process_bundle(\r\n File \"\/home\/miyamonz\/.cache\/pypoetry\/virtualenvs\/try-datasets-4t4JWXxu-py3.8\/lib\/python3.8\/site-packages\/apache_beam\/runners\/portability\/fn_api_runner\/fn_runner.py\", line 888, in process_bundle\r\n self._send_input_to_worker(process_bundle_id, transform_id, elements)\r\n File \"\/home\/miyamonz\/.cache\/pypoetry\/virtualenvs\/try-datasets-4t4JWXxu-py3.8\/lib\/python3.8\/site-packages\/apache_beam\/runners\/portability\/fn_api_runner\/fn_runner.py\", line 765, in _send_input_to_worker\r\n data_out.write(byte_stream)\r\n File \"apache_beam\/coders\/stream.pyx\", line 42, in apache_beam.coders.stream.OutputStream.write\r\n File \"apache_beam\/coders\/stream.pyx\", line 47, in apache_beam.coders.stream.OutputStream.write\r\n File \"apache_beam\/coders\/stream.pyx\", line 109, in apache_beam.coders.stream.OutputStream.extend\r\nAssertionError: OutputStream realloc failed.\r\n```\r\n\r\n","Hi @miyamonz,\r\n\r\nI tried replicating this issue using the same snippet used by you. I am able to download the dataset without any issues, although I stopped it in the middle because the dataset is huge.\r\n\r\nBased on a similar issue [here](https:\/\/github.com\/google-research\/fixmatch\/issues\/23), it could be related to your environment setup, although I am just guessing here. Can you share these details?","thanks for your reply and sorry for my late response.\r\n\r\n## environment\r\nmy local machine environment info\r\n- Ubuntu on WSL2\r\n\r\n`lsb_release -a`\r\n```\r\nNo LSB modules are available.\r\nDistributor ID: Ubuntu\r\nDescription: Ubuntu 20.04.2 LTS\r\nRelease: 20.04\r\nCodename: focal\r\n```\r\n\r\nRTX 2070 super\r\nInside WSL, there is no nvidia-msi command. I don't know why.\r\nBut, `torch.cuda.is_available()` is true and when I start something ML training code GPU usage is growing up, so I think it works.\r\n\r\nFrom PowerShell, there is nvidia-smi.exe and result is below.\r\n```\r\n+-----------------------------------------------------------------------------+\r\n| NVIDIA-SMI 470.05 Driver Version: 470.05 CUDA Version: 11.3 |\r\n|-------------------------------+----------------------+----------------------+\r\n| GPU Name TCC\/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |\r\n| Fan Temp Perf Pwr:Usage\/Cap| Memory-Usage | GPU-Util Compute M. |\r\n| | | MIG M. |\r\n|===============================+======================+======================|\r\n| 0 NVIDIA GeForce ... 
WDDM | 00000000:09:00.0 On | N\/A |\r\n| 0% 30C P8 19W \/ 175W | 523MiB \/ 8192MiB | 3% Default |\r\n| | | N\/A |\r\n+-------------------------------+----------------------+----------------------+\r\n\r\n+-----------------------------------------------------------------------------+\r\n| Processes: |\r\n| GPU GI CI PID Type Process name GPU Memory |\r\n| ID ID Usage |\r\n|=============================================================================|\r\n| 0 N\/A N\/A 1728 C+G Insufficient Permissions N\/A |\r\n| 0 N\/A N\/A 3672 C+G ...ekyb3d8bbwe\\YourPhone.exe N\/A |\r\n| 0 N\/A N\/A 6304 C+G ...2txyewy\\TextInputHost.exe N\/A |\r\n| 0 N\/A N\/A 8648 C+G C:\\Windows\\explorer.exe N\/A |\r\n| 0 N\/A N\/A 9536 C+G ...y\\ShellExperienceHost.exe N\/A |\r\n| 0 N\/A N\/A 10668 C+G ...5n1h2txyewy\\SearchApp.exe N\/A |\r\n| 0 N\/A N\/A 10948 C+G ...artMenuExperienceHost.exe N\/A |\r\n| 0 N\/A N\/A 11988 C+G ...8wekyb3d8bbwe\\Cortana.exe N\/A |\r\n| 0 N\/A N\/A 12464 C+G ...cw5n1h2txyewy\\LockApp.exe N\/A |\r\n| 0 N\/A N\/A 13280 C+G ...upport\\CEF\\Max Helper.exe N\/A |\r\n| 0 N\/A N\/A 15948 C+G ...t\\GoogleIMEJaRenderer.exe N\/A |\r\n| 0 N\/A N\/A 16128 C+G ...ram Files\\Slack\\Slack.exe N\/A |\r\n| 0 N\/A N\/A 19096 C+G ...8bbwe\\WindowsTerminal.exe N\/A |\r\n+-----------------------------------------------------------------------------+\r\n```\r\n\r\nI don't know what should I show in such a case. If it's not enough, please tell me some commands.\r\n\r\n---\r\n## what I did\r\nI surveyed more and I found 2 issues.\r\n\r\nAbout the first one, I wrote it as a new issue.\r\nhttps:\/\/github.com\/huggingface\/datasets\/issues\/2031\r\n\r\nThe error I mentioned in the previous comment above, which occurred on my local machine, is no longer occurring.\r\n\r\nBut, it still failed. In the previous comment, I wrote `AssertionError: OutputStream realloc failed.` happen on another machine. 
It also happens on my local machine.\r\n\r\nHere's what I've tried.\r\n\r\nthe wikipedia.py downloads these xml.bz2 files based on dumpstatus.json\r\nIn Japanese Wikipedia dataset that I specified, it will download these 6 files.\r\n\r\n\r\n`https:\/\/dumps.wikimedia.org\/jawiki\/20210120\/dumpstatus.json`\r\nand filtered json based on wikipedia.py is below.\r\n```json\r\n {\r\n \"jobs\": {\r\n \"articlesmultistreamdump\": {\r\n \"files\": {\r\n \"jawiki-20210120-pages-articles-multistream1.xml-p1p114794.bz2\": {\r\n \"url\": \"\/jawiki\/20210120\/jawiki-20210120-pages-articles-multistream1.xml-p1p114794.bz2\"\r\n },\r\n \"jawiki-20210120-pages-articles-multistream2.xml-p114795p390428.bz2\": {\r\n \"url\": \"\/jawiki\/20210120\/jawiki-20210120-pages-articles-multistream2.xml-p114795p390428.bz2\"\r\n },\r\n \"jawiki-20210120-pages-articles-multistream3.xml-p390429p902407.bz2\": {\r\n \"url\": \"\/jawiki\/20210120\/jawiki-20210120-pages-articles-multistream3.xml-p390429p902407.bz2\"\r\n },\r\n \"jawiki-20210120-pages-articles-multistream4.xml-p902408p1721646.bz2\": {\r\n \"url\": \"\/jawiki\/20210120\/jawiki-20210120-pages-articles-multistream4.xml-p902408p1721646.bz2\"\r\n },\r\n \"jawiki-20210120-pages-articles-multistream5.xml-p1721647p2807947.bz2\": {\r\n \"url\": \"\/jawiki\/20210120\/jawiki-20210120-pages-articles-multistream5.xml-p1721647p2807947.bz2\"\r\n },\r\n \"jawiki-20210120-pages-articles-multistream6.xml-p2807948p4290013.bz2\": {\r\n \"url\": \"\/jawiki\/20210120\/jawiki-20210120-pages-articles-multistream6.xml-p2807948p4290013.bz2\"\r\n }\r\n }\r\n }\r\n }\r\n }\r\n```\r\n\r\nSo, I tried running with fewer resources by modifying this line.\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/13a5b7db992ad5cf77895e4c0f76595314390418\/datasets\/wikipedia\/wikipedia.py#L524\r\nI changed it like this. just change filepaths list.\r\n` | \"Initialize\" >> beam.Create(filepaths[:1])`\r\n\r\nand I added a print line inside for the loop of _extract_content.\r\nlike this `if(i % 100000 == 0): print(i)`\r\n\r\nfirst, without modification, it always stops after all _extract_content is done.\r\n\r\n- `filepaths[:1]` then it succeeded.\r\n- `filepaths[:2]` then it failed.\r\nI don't try all patterns because each pattern takes a long time.\r\n\r\n### my opinion\r\nIt seems it's successful when the entire file size is small.\r\n \r\nso, at least it doesn't file-specific issue.\r\n\r\n\r\nI don't know it's true but I think when beam_writter writes into a file, it consumes memory depends on its entire file.\r\nbut It's correct Apache Beam's behavior? I'm not familiar with this library.\r\n","I don't know if this is related, but there is this issue on the wikipedia processing that you reported at #2031 (open PR is at #2037 ) .\r\nDoes the fix your proposed at #2037 helps in your case ?\r\n\r\nAnd for information, the DirectRunner of Apache Beam is not optimized for memory intensive tasks, so you must be right when you say that it uses the memory for the entire file.","the #2037 doesn't solve my problem directly, but I found the point!\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/349ac4398a3bcae6356f14c5754483383a60e8a4\/datasets\/wikipedia\/wikipedia.py#L523\r\nthis `beam.transforms.Reshuffle()` cause the memory error.\r\n\r\nit makes sense if I consider the shuffle means. 
Beam's reshuffle seems need put all data in memory.\r\nPreviously I doubt that this line causes error, but at that time another bug showed in #2037 made error, so I can't found it.\r\n\r\nAnyway, I comment out this line, and run load_dataset, then it works!\r\n\r\n```python\r\nwiki = datasets.load_dataset(\r\n \".\/wikipedia.py\",\r\n cache_dir=\".\/datasets\",\r\n beam_runner=\"DirectRunner\",\r\n language=\"ja\",\r\n date=\"20210120\",\r\n)[\"train\"]\r\n```\r\n![image](https:\/\/user-images.githubusercontent.com\/6331508\/112283369-6a9f3300-8ccb-11eb-82e5-827bf7fddfb9.png)\r\n\r\nDataset has already shuffle function. https:\/\/github.com\/huggingface\/datasets\/blob\/349ac4398a3bcae6356f14c5754483383a60e8a4\/src\/datasets\/arrow_dataset.py#L2069\r\nSo, though I don't know it's difference correctly, but I think Beam's reshuffle isn't be needed. How do you think?","The reshuffle is needed when you use parallelism.\r\nThe objective is to redistribute the articles evenly on the workers, since the `_extract_content` step generated many articles per file. By using reshuffle, we can split the processing of the articles of one file into several workers. Without reshuffle, all the articles of one file would be processed on the same worker that read the file, making the whole process take a very long time.","Maybe the reshuffle step can be added only if the runner is not a DirectRunner ?"],"created_at":1611908244000,"updated_at":1616674251000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"```py\r\nimport datasets\r\nwiki = datasets.load_dataset('wikipedia', '20200501.ja', cache_dir='.\/datasets')\r\n```\r\nthen `ModuleNotFoundError: No module named 'apache_beam'` happend.\r\n\r\nThe error doesn't appear when it's '20200501.en'.\r\nI don't know Apache Beam, but according to #498 it isn't necessary when it's saved to local. 
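Following the suggestion above about only adding the reshuffle step when the runner is not the DirectRunner, a minimal hedged sketch is shown below; the `runner_name` argument, the helper callables, and the pipeline wiring are illustrative assumptions, not the wikipedia script's actual code:

```python
import apache_beam as beam

def build_wiki_pipeline(filepaths, runner_name, extract_content, clean_content):
    """Illustrative sketch: skip the Reshuffle step on the in-memory DirectRunner."""
    def _build(pipeline):
        articles = (
            pipeline
            | "Initialize" >> beam.Create(filepaths)
            | "Extract content" >> beam.FlatMap(extract_content)
        )
        # Reshuffle redistributes articles across workers for parallel runners;
        # on the DirectRunner it mainly adds memory pressure, so skip it there.
        if runner_name != "DirectRunner":
            articles = articles | "Reshuffle" >> beam.transforms.Reshuffle()
        return articles | "Clean content" >> beam.Map(clean_content)
    return _build
```

As discussed above, the trade-off is that without the reshuffle every article from one file stays on the worker that read that file.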
is it correct?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1790\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1789","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1789\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1789\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1789\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1789","id":796229721,"node_id":"MDExOlB1bGxSZXF1ZXN0NTYzNDQyMTc2","number":1789,"title":"[BUG FIX] typo in the import path for metrics","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1611856897000,"updated_at":1611857636000,"closed_at":1611857636000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1789","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1789","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1789.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1789.patch"},"body":"This tiny PR fixes a typo introduced in https:\/\/github.com\/huggingface\/datasets\/pull\/1726 which prevents loading new metrics","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1789\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1788","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1788\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1788\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1788\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1788","id":795544422,"node_id":"MDExOlB1bGxSZXF1ZXN0NTYyODc1NzA2","number":1788,"title":"Doc2dial 
rc","user":{"login":"songfeng","id":2062185,"node_id":"MDQ6VXNlcjIwNjIxODU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2062185?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/songfeng","html_url":"https:\/\/github.com\/songfeng","followers_url":"https:\/\/api.github.com\/users\/songfeng\/followers","following_url":"https:\/\/api.github.com\/users\/songfeng\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/songfeng\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/songfeng\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/songfeng\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/songfeng\/orgs","repos_url":"https:\/\/api.github.com\/users\/songfeng\/repos","events_url":"https:\/\/api.github.com\/users\/songfeng\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/songfeng\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1611791460000,"updated_at":1611859573000,"closed_at":1611859573000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1788","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1788","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1788.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1788.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1788\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1787","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1787\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1787\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1787\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1787","id":795485842,"node_id":"MDExOlB1bGxSZXF1ZXN0NTYyODI1NTI3","number":1787,"title":"Update the CommonGen citation 
information","user":{"login":"yuchenlin","id":10104354,"node_id":"MDQ6VXNlcjEwMTA0MzU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10104354?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yuchenlin","html_url":"https:\/\/github.com\/yuchenlin","followers_url":"https:\/\/api.github.com\/users\/yuchenlin\/followers","following_url":"https:\/\/api.github.com\/users\/yuchenlin\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yuchenlin\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yuchenlin\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yuchenlin\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yuchenlin\/orgs","repos_url":"https:\/\/api.github.com\/users\/yuchenlin\/repos","events_url":"https:\/\/api.github.com\/users\/yuchenlin\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yuchenlin\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1611785567000,"updated_at":1611842189000,"closed_at":1611842189000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1787","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1787","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1787.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1787.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1787\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1786","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1786\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1786\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1786\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1786","id":795462816,"node_id":"MDU6SXNzdWU3OTU0NjI4MTY=","number":1786,"title":"How to use split dataset ","user":{"login":"kkhan188","id":78090287,"node_id":"MDQ6VXNlcjc4MDkwMjg3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/78090287?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/kkhan188","html_url":"https:\/\/github.com\/kkhan188","followers_url":"https:\/\/api.github.com\/users\/kkhan188\/followers","following_url":"https:\/\/api.github.com\/users\/kkhan188\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/kkhan188\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/kkhan188\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/kkhan188\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/kkhan188\/orgs","repos_url":"https:\/\/api.github.com\/users\/kkhan188\/repos","events_url":"https:\/\/api.github.com\/users\/kkhan188\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/kkhan188\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892912,"node_id":"MDU6TGFiZWwxOTM1ODkyOTEy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/question","name":"question","color":"d876e3","default":true,"description":"Further information is 
requested"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["By default, all 3 splits will be loaded if you run the following:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"lambada\")\r\nprint(dataset[\"train\"])\r\nprint(dataset[\"valid\"])\r\n\r\n```\r\n\r\nIf you wanted to do load this manually, you could do this:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndata_files = {\r\n \"train\": \"data\/lambada\/train.txt\",\r\n \"valid\": \"data\/lambada\/valid.txt\",\r\n \"test\": \"data\/lambada\/test.txt\",\r\n}\r\nds = load_dataset(\"text\", data_files=data_files)\r\n```","Thank you for the quick response! "],"created_at":1611783467000,"updated_at":1619191059000,"closed_at":1619191059000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"![Capture1](https:\/\/user-images.githubusercontent.com\/78090287\/106057436-cb6a1f00-6111-11eb-8c9c-3658065b1fdf.PNG)\r\n\r\nHey,\r\nI want to split the lambada dataset into corpus, test, train and valid txt files (like penn treebank) but I am not able to achieve this. What I am doing is, executing the lambada.py file in my project but its not giving desired results. Any help will be appreciated!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1786\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1785","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1785\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1785\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1785\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1785","id":795458856,"node_id":"MDU6SXNzdWU3OTU0NTg4NTY=","number":1785,"title":"Not enough disk space (Needed: Unknown size) when caching on a cluster","user":{"login":"olinguyen","id":4341867,"node_id":"MDQ6VXNlcjQzNDE4Njc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4341867?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/olinguyen","html_url":"https:\/\/github.com\/olinguyen","followers_url":"https:\/\/api.github.com\/users\/olinguyen\/followers","following_url":"https:\/\/api.github.com\/users\/olinguyen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/olinguyen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/olinguyen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/olinguyen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/olinguyen\/orgs","repos_url":"https:\/\/api.github.com\/users\/olinguyen\/repos","events_url":"https:\/\/api.github.com\/users\/olinguyen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/olinguyen\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! \r\n\r\nWhat do you mean by \"disk_usage(\".\").free` can't compute on the cluster's shared disk\" exactly ?\r\nDoes it return 0 ?","Yes, that's right. It shows 0 free space even though there is. 
I suspect it might have to do with permissions on the shared disk.\r\n\r\n```python\r\n>>> disk_usage(\".\")\r\nusage(total=999999, used=999999, free=0)\r\n```","That's an interesting behavior...\r\nDo you know any other way to get the free space that works in your case ?\r\nAlso if it's a permission issue could you try fix the permissions and let mus know if that helped ?","I think its an issue on the clusters end (unclear exactly why -- maybe something with docker containers?), will close the issue"],"created_at":1611783059000,"updated_at":1611968876000,"closed_at":1611968876000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I'm running some experiments where I'm caching datasets on a cluster and accessing it through multiple compute nodes. However, I get an error when loading the cached dataset from the shared disk.\r\n\r\nThe exact error thrown:\r\n\r\n```bash\r\n>>> load_dataset(dataset, cache_dir=\"\/path\/to\/cluster\/shared\/path\")\r\nOSError: Not enough disk space. Needed: Unknown size (download: Unknown size, generated: Unknown size, post-processed: Unknown size)\r\n```\r\n\r\n\r\n[`utils.has_sufficient_disk_space`](https:\/\/github.com\/huggingface\/datasets\/blob\/8a03ab7d123a76ee744304f21ce868c75f411214\/src\/datasets\/utils\/py_utils.py#L332) fails on each job because of how the cluster system is designed (`disk_usage(\".\").free` can't compute on the cluster's shared disk).\r\n\r\n\r\nThis is exactly where the error gets thrown:\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/builder.py#L502\r\n\r\n```python\r\nif not utils.has_sufficient_disk_space(self.info.size_in_bytes or 0, directory=self._cache_dir_root):\r\n raise IOError(\r\n \"Not enough disk space. Needed: {} (download: {}, generated: {}, post-processed: {})\".format(\r\n utils.size_str(self.info.size_in_bytes or 0),\r\n utils.size_str(self.info.download_size or 0),\r\n utils.size_str(self.info.dataset_size or 0),\r\n utils.size_str(self.info.post_processing_size or 0),\r\n )\r\n )\r\n\r\n```\r\n\r\nWhat would be a good way to circumvent this? my current fix is to manually comment out that part, but that is not ideal. 
\r\nWould it be possible to pass a flag to skip this check on disk space?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1785\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1784","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1784\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1784\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1784\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1784","id":794659174,"node_id":"MDU6SXNzdWU3OTQ2NTkxNzQ=","number":1784,"title":"JSONDecodeError on JSON with multiple lines","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi !\r\n\r\nThe `json` dataset script does support this format. For example loading a dataset with this format works on my side:\r\n```json\r\n{\"key1\":11, \"key2\":12, \"key3\":13}\r\n{\"key1\":21, \"key2\":22, \"key3\":23}\r\n```\r\n\r\nCan you show the full stacktrace please ? Also which version of datasets and pyarrow are you using ?\r\n\r\n","Hi Quentin!\r\n\r\nI apologize for bothering you. There was some issue with my pyarrow version as far as I understand. I don't remember the exact version I was using as I didn't check it.\r\n\r\nI repeated it with `datasets 1.2.1` and `pyarrow 2.0.0` and it worked.\r\n\r\nClosing this issue. Again, sorry for the bother.\r\n\r\nThanks,\r\nGunjan"],"created_at":1611706762000,"updated_at":1612082838000,"closed_at":1612082838000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hello :),\r\n\r\nI have been trying to load data using a JSON file. Based on the [docs](https:\/\/huggingface.co\/docs\/datasets\/loading_datasets.html#json-files), the following format is supported:\r\n\r\n```json\r\n{\"key1\":11, \"key2\":12, \"key3\":13}\r\n{\"key1\":21, \"key2\":22, \"key3\":23}\r\n```\r\n But, when I try loading a dataset with the same format, I get a JSONDecodeError : `JSONDecodeError: Extra data: line 2 column 1 (char 7142)`. Now, this is expected when using `json` to load a JSON file. 
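For reference, newline-delimited JSON like the snippet quoted above is normally read through the `json` loader rather than Python's `json` module; a minimal sketch, with the file name purely illustrative:

```python
from datasets import load_dataset

# data.jsonl holds one JSON object per line, e.g.
# {"key1": 11, "key2": 12, "key3": 13}
ds = load_dataset("json", data_files="data.jsonl")
print(ds["train"][0])
```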
But I was wondering if there are any special arguments to pass when using `load_dataset` as the docs suggest that this format is supported.\r\n\r\nWhen I convert the JSON file to a list of dictionaries format, I get AttributeError: `AttributeError: 'list' object has no attribute 'keys'`. So, I can't convert them to list of dictionaries either.\r\n\r\nPlease let me know :)\r\n\r\nThanks,\r\nGunjan","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1784\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1783","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1783\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1783\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1783\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1783","id":794544495,"node_id":"MDU6SXNzdWU3OTQ1NDQ0OTU=","number":1783,"title":"Dataset Examples Explorer","user":{"login":"ChewKokWah","id":30875246,"node_id":"MDQ6VXNlcjMwODc1MjQ2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/30875246?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ChewKokWah","html_url":"https:\/\/github.com\/ChewKokWah","followers_url":"https:\/\/api.github.com\/users\/ChewKokWah\/followers","following_url":"https:\/\/api.github.com\/users\/ChewKokWah\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ChewKokWah\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ChewKokWah\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ChewKokWah\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ChewKokWah\/orgs","repos_url":"https:\/\/api.github.com\/users\/ChewKokWah\/repos","events_url":"https:\/\/api.github.com\/users\/ChewKokWah\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ChewKokWah\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @ChewKokWah,\r\n\r\nWe're working on it! In the meantime, you can still find the dataset explorer at the following URL: https:\/\/huggingface.co\/datasets\/viewer\/","Glad to see that it still exist, this existing one is more than good enough for me, it is feature rich, simple to use and concise. 
\r\nHope similar feature can be retain in the future version."],"created_at":1611693542000,"updated_at":1612187924000,"closed_at":1612187924000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"In the Older version of the Dataset, there are a useful Dataset Explorer that allow user to visualize the examples (training, test and validation) of a particular dataset, it is no longer there in current version.\r\n\r\nHope HuggingFace can re-enable the feature that at least allow viewing of the first 20 examples of a particular dataset, or alternatively can extract 20 examples for each datasets and make those part of the Dataset Card Documentation.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1783\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1782","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1782\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1782\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1782\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1782","id":794167920,"node_id":"MDExOlB1bGxSZXF1ZXN0NTYxNzI5OTc3","number":1782,"title":"Update pyarrow import warning","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1611661631000,"updated_at":1611669050000,"closed_at":1611669049000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1782","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1782","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1782.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1782.patch"},"body":"Update the minimum version to >=0.17.1 in the pyarrow version check and update the message.\r\n\r\nI also moved the check at the top of the __init__.py","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1782\/timeline","performed_via_github_app":null,"is_pull_request":true} 
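PR 1782 above moves the pyarrow version check to the top of the package's `__init__.py`; a minimal sketch of what such an import-time guard can look like, with the wording and version parsing assumed rather than copied from the library:

```python
import pyarrow

# Hedged sketch of an import-time version guard (not the library's exact code).
# Assumes an X.Y.Z-style version string.
_MIN_PYARROW = (0, 17, 1)
_installed = tuple(int(part) for part in pyarrow.__version__.split(".")[:3])

if _installed < _MIN_PYARROW:
    raise ImportWarning(
        "datasets requires pyarrow>=0.17.1, but pyarrow "
        f"{pyarrow.__version__} is installed. On Colab, upgrade pyarrow and "
        "restart the runtime so the new version is actually loaded."
    )
```

Raising the warning at import time, before any dataset code runs, is what avoids the late `AttributeError: module 'pyarrow' has no attribute 'PyExtensionType'` reported in the next issue.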
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1781","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1781\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1781\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1781\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1781","id":793914556,"node_id":"MDU6SXNzdWU3OTM5MTQ1NTY=","number":1781,"title":"AttributeError: module 'pyarrow' has no attribute 'PyExtensionType' during import ","user":{"login":"PalaashAgrawal","id":45964869,"node_id":"MDQ6VXNlcjQ1OTY0ODY5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/45964869?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/PalaashAgrawal","html_url":"https:\/\/github.com\/PalaashAgrawal","followers_url":"https:\/\/api.github.com\/users\/PalaashAgrawal\/followers","following_url":"https:\/\/api.github.com\/users\/PalaashAgrawal\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/PalaashAgrawal\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/PalaashAgrawal\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/PalaashAgrawal\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/PalaashAgrawal\/orgs","repos_url":"https:\/\/api.github.com\/users\/PalaashAgrawal\/repos","events_url":"https:\/\/api.github.com\/users\/PalaashAgrawal\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/PalaashAgrawal\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! I'm not able to reproduce the issue. Can you try restarting your runtime ?\r\n\r\nThe PyExtensionType is available in pyarrow starting 0.17.1 iirc. If restarting your runtime doesn't fix this, can you try updating pyarrow ?\r\n```\r\npip install pyarrow --upgrade\r\n```","We should bump up the version test of pyarrow maybe no?\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/__init__.py#L60","Yes indeed.\r\n\r\nAlso it looks like Pyarrow 3.0.0 got released on pypi 10 hours ago. This might be related to the bug, I'll investigate\r\nEDIT: looks like the 3.0.0 release doesn't have unexpected breaking changes for us, so I don't think the issue comes from that","Maybe colab moved to pyarrow 0.16 by default (instead of 0.14 before)?","Installing datasets installs pyarrow>=0.17.1 so in theory it doesn't matter which version of pyarrow colab has by default (which is currently pyarrow 0.14.1).\r\n\r\nAlso now the colab runtime refresh the pyarrow version automatically after the update from pip (previously you needed to restart your runtime).\r\n\r\nI guess what happened is that Colab didn't refresh pyarrow for some reason, and the AttributeError was raised *before* the pyarrow version check from `datasets` at https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/__init__.py#L60","Yes colab doesn\u2019t reload preloaded library unless you restart the instance. Maybe we should move the check on top of the init ","Yes I'll do that :)","I updated the pyarrow version check in #1782"],"created_at":1611634715000,"updated_at":1611661656000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I'm using Colab. 
And suddenly this morning, there is this error. Have a look below!\r\n\r\n![screenshot-colab research google com-2021 01 26-08-15-36](https:\/\/user-images.githubusercontent.com\/45964869\/105799890-fdaf3b80-5fae-11eb-8f06-11b65cdccc30.png)\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1781\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1780","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1780\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1780\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1780\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1780","id":793882132,"node_id":"MDExOlB1bGxSZXF1ZXN0NTYxNDkxNTgy","number":1780,"title":"Update SciFact URL","user":{"login":"dwadden","id":3091916,"node_id":"MDQ6VXNlcjMwOTE5MTY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3091916?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dwadden","html_url":"https:\/\/github.com\/dwadden","followers_url":"https:\/\/api.github.com\/users\/dwadden\/followers","following_url":"https:\/\/api.github.com\/users\/dwadden\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dwadden\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dwadden\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dwadden\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dwadden\/orgs","repos_url":"https:\/\/api.github.com\/users\/dwadden\/repos","events_url":"https:\/\/api.github.com\/users\/dwadden\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dwadden\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! The error you get is the result of some verifications the library is doing when loading a dataset that already has some metadata in the dataset_infos.json. You can ignore the verifications with \r\n```\r\npython datasets-cli test datasets\/scifact --save_infos --all_configs --ignore_verifications\r\n```\r\nThis will update the dataset_infos.json :)","Nice, I ran that command and `dataset_infos` seems to have been updated appropriately; I added this to the PR. But when I try to load the dataset it still seems like it's getting a path to the old URL somehow. I `pip install -e`'d my fork of the repo, so I'm not sure why `load_dataset` is still looking for the old version of the file. 
Stack trace below.\r\n\r\n```\r\nIn [1]: import datasets\r\n\r\nIn [2]: ds = datasets.load_dataset(\"scifact\", \"claims\")\r\nDownloading: 7.34kB [00:00, 2.58MB\/s]\r\nDownloading: 3.38kB [00:00, 1.36MB\/s]\r\nDownloading and preparing dataset scifact\/claims (download: 2.72 MiB, generated: 258.64 KiB, post-processed: Unknown size, total: 2.97 MiB) to \/Users\/dwadden\/.cache\/huggingface\/datasets\/scifact\/claims\/1.0.0\/2bb675b2003716a061a4d8ce27fab32ab7f6d010016bab08ffaccea3c14ec6e7...\r\n---------------------------------------------------------------------------\r\nConnectionError Traceback (most recent call last)\r\n<ipython-input-2-9a50b954d89a> in <module>\r\n----> 1 ds = datasets.load_dataset(\"scifact\", \"claims\")\r\n\r\n~\/proj\/datasets\/src\/datasets\/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)\r\n 672\r\n 673 # Download and prepare data\r\n--> 674 builder_instance.download_and_prepare(\r\n 675 download_config=download_config,\r\n 676 download_mode=download_mode,\r\n\r\n~\/proj\/datasets\/src\/datasets\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 560 logger.warning(\"HF google storage unreachable. Downloading and preparing it from source\")\r\n 561 if not downloaded_from_gcs:\r\n--> 562 self._download_and_prepare(\r\n 563 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 564 )\r\n\r\n~\/proj\/datasets\/src\/datasets\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 616 split_dict = SplitDict(dataset_name=self.name)\r\n 617 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 618 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 619\r\n 620 # Checksums verification\r\n\r\n~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/scifact\/2bb675b2003716a061a4d8ce27fab32ab7f6d010016bab08ffaccea3c14ec6e7\/scifact.py in _split_generators(self, dl_manager)\r\n 92 # dl_manager is a datasets.download.DownloadManager that can be used to\r\n 93 # download and extract URLs\r\n---> 94 dl_dir = dl_manager.download_and_extract(_URL)\r\n 95\r\n 96 if self.config.name == \"corpus\":\r\n\r\n~\/proj\/datasets\/src\/datasets\/utils\/download_manager.py in download_and_extract(self, url_or_urls)\r\n 256 extracted_path(s): `str`, extracted paths of given URL(s).\r\n 257 \"\"\"\r\n--> 258 return self.extract(self.download(url_or_urls))\r\n 259\r\n 260 def get_recorded_sizes_checksums(self):\r\n\r\n~\/proj\/datasets\/src\/datasets\/utils\/download_manager.py in download(self, url_or_urls)\r\n 177\r\n 178 start_time = datetime.now()\r\n--> 179 downloaded_path_or_paths = map_nested(\r\n 180 download_func,\r\n 181 url_or_urls,\r\n\r\n~\/proj\/datasets\/src\/datasets\/utils\/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types)\r\n 223 # Singleton\r\n 224 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\r\n--> 225 return function(data_struct)\r\n 226\r\n 227 disable_tqdm = bool(logger.getEffectiveLevel() > INFO)\r\n\r\n~\/proj\/datasets\/src\/datasets\/utils\/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)\r\n 348 if is_remote_url(url_or_filename):\r\n 349 # URL, so get it from the 
cache (downloading if necessary)\r\n--> 350 output_path = get_from_cache(\r\n 351 url_or_filename,\r\n 352 cache_dir=cache_dir,\r\n\r\n~\/proj\/datasets\/src\/datasets\/utils\/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries)\r\n 631 elif response is not None and response.status_code == 404:\r\n 632 raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\n--> 633 raise ConnectionError(\"Couldn't reach {}\".format(url))\r\n 634\r\n 635 # Try a second time\r\n\r\nConnectionError: Couldn't reach https:\/\/ai2-s2-scifact.s3-us-west-2.amazonaws.com\/release\/2020-05-01\/data.tar.gz\r\n```","Hi ! This may be because you need to point `load_dataset` to the path of the dataset script that has the updated url:\r\n```python\r\nload_dataset(\".\/datasets\/scifact\", \"claims\")\r\n```\r\n\r\nIf you don't use a path to the updated script, then the old one is used by deffault","Nice, I did\r\n```\r\nload_dataset(\".\/datasets\/scifact\", \"claims\")\r\n```\r\nand it worked. ","One more question about the way the code is being preprocessed. The way I've formatted the data, each entry is a claim, which may be associated with multiple evidence documents (similar to FEVER):\r\n```\r\n# My way\r\n{'id': 70,\r\n 'claim': 'Activation of PPM1D suppresses p53 function.',\r\n 'evidence': {'5956380': [{'sentences': [5, 6], 'label': 'SUPPORT'}],\r\n '4414547': [{'sentences': [5], 'label': 'SUPPORT'}]},\r\n 'cited_doc_ids': [5956380, 4414547]}\r\n```\r\n\r\nIn the Hugginface data, each entry is a single claim \/ evidence document pair. So, the above entry is converted into two separate entries, like so:\r\n```\r\n# huggingface\r\n[{'cited_doc_ids': [5956380, 4414547],\r\n 'claim': 'Activation of PPM1D suppresses p53 function.',\r\n 'evidence_doc_id': '5956380',\r\n 'evidence_label': 'SUPPORT',\r\n 'evidence_sentences': [5, 6],\r\n 'id': 70},\r\n {'cited_doc_ids': [5956380, 4414547],\r\n 'claim': 'Activation of PPM1D suppresses p53 function.',\r\n 'evidence_doc_id': '4414547',\r\n 'evidence_label': 'SUPPORT',\r\n 'evidence_sentences': [5],\r\n 'id': 70}]\r\n```\r\n\r\nWas this done by design? If not, would you mind if I modify the Huggingface code so that it more closely matches the format that people will get if they download the data from the SciFact repo?","Yes if you think the format is not convenient for training or evaluation we can change it.\r\nAlso I think we're doing something similar for FEVER: one example = one (claim, sentence) pair.\r\n\r\nLet's merge this PR first and then feel free to open a new PR to change the format :) ","Thanks for merging!\r\n\r\nI don't have super-strong feelings one way or the other in terms of the data, I think it's probably fine. I may revisit later."],"created_at":1611629346000,"updated_at":1611859680000,"closed_at":1611829185000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1780","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1780","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1780.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1780.patch"},"body":"Hi,\r\n\r\nI'm following up this [issue](https:\/\/github.com\/huggingface\/datasets\/issues\/1717). I'm the SciFact dataset creator, and I'm trying to update the SciFact data url in your repo. 
Thanks again for adding the dataset!\r\n\r\nBasically, I'd just like to change the `_URL` to `\"https:\/\/scifact.s3-us-west-2.amazonaws.com\/release\/latest\/data.tar.gz\"`. I changed `scifact.py` appropriately and tried running\r\n\r\n```\r\npython datasets-cli test datasets\/scifact --save_infos --all_configs\r\n```\r\nwhich I was hoping would update the `dataset_infos.json` for SciFact. But for some reason the code still seems to be looking for the old version of the dataset. Full stack trace below. I've tried to clear all my Huggingface-related caches, and I've `git grep`'d to make sure that the old path to the dataset isn't floating around somewhere. So I'm not sure why this is happening?\r\n\r\nCan you help me switch the download URL?\r\n\r\n```\r\n(datasets) $ python datasets-cli test datasets\/scifact --save_infos --all_configs\r\nChecking datasets\/scifact\/scifact.py for additional imports.\r\nFound main folder for dataset datasets\/scifact\/scifact.py at \/Users\/dwadden\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/scifact\r\nFound specific version folder for dataset datasets\/scifact\/scifact.py at \/Users\/dwadden\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/scifact\/2b43b4e125ce3369da7d6353961d9d315e6593f24cc7bbe9ede5e5c911d11534\r\nFound script file from datasets\/scifact\/scifact.py to \/Users\/dwadden\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/scifact\/2b43b4e125ce3369da7d6353961d9d315e6593f24cc7bbe9ede5e5c911d11534\/scifact.py\r\nFound dataset infos file from datasets\/scifact\/dataset_infos.json to \/Users\/dwadden\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/scifact\/2b43b4e125ce3369da7d6353961d9d315e6593f24cc7bbe9ede5e5c911d11534\/dataset_infos.json\r\nFound metadata file for dataset datasets\/scifact\/scifact.py at \/Users\/dwadden\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/scifact\/2b43b4e125ce3369da7d6353961d9d315e6593f24cc7bbe9ede5e5c911d11534\/scifact.json\r\nLoading Dataset Infos from \/Users\/dwadden\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/scifact\/2b43b4e125ce3369da7d6353961d9d315e6593f24cc7bbe9ede5e5c911d11534\r\nTesting builder 'corpus' (1\/2)\r\nGenerating dataset scifact (\/Users\/dwadden\/.cache\/huggingface\/datasets\/scifact\/corpus\/1.0.0\/2b43b4e125ce3369da7d6353961d9d315e6593f24cc7bbe9ede5e5c911d11534)\r\nDownloading and preparing dataset scifact\/corpus (download: 2.72 MiB, generated: 7.63 MiB, post-processed: Unknown size, total: 10.35 MiB) to \/Users\/dwadden\/.cache\/huggingface\/datasets\/scifact\/corpus\/1.0.0\/2b43b4e125ce3369da7d6353961d9d315e6593f24cc7bbe9ede5e5c911d11534...\r\nDownloading took 0.0 min\r\nChecksum Computation took 0.0 min\r\nTraceback (most recent call last):\r\n File \"\/Users\/dwadden\/proj\/datasets\/datasets-cli\", line 36, in <module>\r\n service.run()\r\n File \"\/Users\/dwadden\/proj\/datasets\/src\/datasets\/commands\/test.py\", line 139, in run\r\n builder.download_and_prepare(\r\n File \"\/Users\/dwadden\/proj\/datasets\/src\/datasets\/builder.py\", line 562, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/Users\/dwadden\/proj\/datasets\/src\/datasets\/builder.py\", line 622, in _download_and_prepare\r\n verify_checksums(\r\n File \"\/Users\/dwadden\/proj\/datasets\/src\/datasets\/utils\/info_utils.py\", line 32, in verify_checksums\r\n raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))\r\ndatasets.utils.info_utils.ExpectedMoreDownloadedFiles: 
{'https:\/\/ai2-s2-scifact.s3-us-west-2.amazonaws.com\/release\/2020-05-01\/data.tar.gz'}\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1780\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1779","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1779\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1779\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1779\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1779","id":793539703,"node_id":"MDExOlB1bGxSZXF1ZXN0NTYxMjEwNjI5","number":1779,"title":"Ignore definition line number of functions for caching","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1611592949000,"updated_at":1611656420000,"closed_at":1611656419000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1779","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1779","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1779.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1779.patch"},"body":"As noticed in #1718 , when a function used for processing with `map` is moved inside its python file, then the change of line number causes the caching mechanism to consider it as a different function. 
Therefore in this case, it recomputes everything.\r\n\r\nThis is because we were not ignoring the line number definition for such functions (even though we're doing it for lambda functions).\r\n\r\nFor example this code currently prints False:\r\n```python\r\nfrom datasets.fingerprint import Hasher\r\n\r\n# define once\r\ndef foo(x):\r\n return x\r\n\r\nh = Hasher.hash(foo)\r\n\r\n# define a second time elsewhere\r\ndef foo(x):\r\n return x\r\n\r\nprint(h == Hasher.hash(foo))\r\n```\r\n\r\nI changed this by ignoring the line number for all functions.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1779\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1778","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1778\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1778\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1778\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1778","id":793474507,"node_id":"MDExOlB1bGxSZXF1ZXN0NTYxMTU2Mzk1","number":1778,"title":"Narrative QA Manual","user":{"login":"rsanjaykamath","id":18527321,"node_id":"MDQ6VXNlcjE4NTI3MzIx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/18527321?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rsanjaykamath","html_url":"https:\/\/github.com\/rsanjaykamath","followers_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/followers","following_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/orgs","repos_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/repos","events_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq sorry I opened a new pull request because of some issues with the previous code base. This pull request is originally from #1364","Excellent comments. Thanks for those valuable suggestions. I changed everything as you have pointed out :) ","I've copied the same template as NarrativeQA now. Please let me know if this is fine. ","> Awesome thank you !!\r\n> This looks all good :)\r\n> \r\n> Just before we merge, I was wondering if you knew why the number of examples in the train set went from 1102 to 32747 in your last commit ? I can't see why the changes in the code would cause such a big difference\r\n\r\nOk the change was the way I presented the data. \r\nIn my previous code, I presented a story with a list of questions-answers related to the story per sample. So the total 1102 was the number of stories (not questions) in the train set. \r\n\r\nIn the case of `NarrativeQA`, the code presented each sample data with one single question. So the story gets replicated as many times based on number of questions per story. 
I felt this was not really memory efficient so I had coded the way I did earlier. \r\n\r\nBut since this would be inconsistent as you pointed out, I modified my code to suit the `NarrativeQA` approach. Hope it's clear now :) ","Ok I see ! that makes sense","Thanks for your time and helping me with all this :) Really appreciate the hardwork you guys do. "],"created_at":1611588151000,"updated_at":1611912914000,"closed_at":1611912891000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1778","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1778","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1778.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1778.patch"},"body":"Submitting the manual version of Narrative QA script which requires a manual download from the original repository","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1778\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1777","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1777\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1777\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1777\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1777","id":793273770,"node_id":"MDU6SXNzdWU3OTMyNzM3NzA=","number":1777,"title":"GPT2 MNLI training using run_glue.py","user":{"login":"nlp-student","id":76427077,"node_id":"MDQ6VXNlcjc2NDI3MDc3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/76427077?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nlp-student","html_url":"https:\/\/github.com\/nlp-student","followers_url":"https:\/\/api.github.com\/users\/nlp-student\/followers","following_url":"https:\/\/api.github.com\/users\/nlp-student\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nlp-student\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nlp-student\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nlp-student\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nlp-student\/orgs","repos_url":"https:\/\/api.github.com\/users\/nlp-student\/repos","events_url":"https:\/\/api.github.com\/users\/nlp-student\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nlp-student\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1611572032000,"updated_at":1611573173000,"closed_at":1611573173000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Edit: I'm closing this because I actually meant to post this in `transformers `not `datasets`\r\n\r\nRunning this on Google Colab,\r\n\r\n```\r\n!python run_glue.py \\\r\n --model_name_or_path gpt2 \\\r\n --task_name mnli \\\r\n --do_train \\\r\n --do_eval \\\r\n --max_seq_length 128 \\\r\n --per_gpu_train_batch_size 10 \\\r\n --gradient_accumulation_steps 32\\\r\n --learning_rate 2e-5 \\\r\n --num_train_epochs 3.0 \\\r\n --output_dir models\/gpt2\/mnli\/\r\n```\r\n\r\nI get the following error,\r\n\r\n```\r\n \"Asking to pad but the tokenizer does not have 
a padding token. \"\r\nValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`.\r\n```\r\n\r\nDo I need to modify the trainer to work with GPT2 ?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1777\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1776","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1776\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1776\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1776\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1776","id":792755249,"node_id":"MDU6SXNzdWU3OTI3NTUyNDk=","number":1776,"title":"[Question & Bug Report] Can we preprocess a dataset on the fly?","user":{"login":"shuaihuaiyi","id":14048129,"node_id":"MDQ6VXNlcjE0MDQ4MTI5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14048129?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/shuaihuaiyi","html_url":"https:\/\/github.com\/shuaihuaiyi","followers_url":"https:\/\/api.github.com\/users\/shuaihuaiyi\/followers","following_url":"https:\/\/api.github.com\/users\/shuaihuaiyi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/shuaihuaiyi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/shuaihuaiyi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/shuaihuaiyi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/shuaihuaiyi\/orgs","repos_url":"https:\/\/api.github.com\/users\/shuaihuaiyi\/repos","events_url":"https:\/\/api.github.com\/users\/shuaihuaiyi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/shuaihuaiyi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["We are very actively working on this. How does your dataset look like in practice (number\/size\/type of files)?","It's a text file with many lines (about 1B) of Chinese sentences. I use it to train language model using https:\/\/github.com\/huggingface\/transformers\/blob\/master\/examples\/language-modeling\/run_mlm_wwm.py","Indeed I will submit a PR in a fez days to enable processing on-the-fly :)\r\nThis can be useful in language modeling for tokenization, padding etc.\r\n","any update on this issue? ...really look forward to use it ","Hi @acul3,\r\n\r\nPlease look at the discussion on a related Issue #1825. I think using `set_transform` after building from source should do.","@gchhablani thank you so much\r\n\r\nwill try look at it"],"created_at":1611480504000,"updated_at":1621484158000,"closed_at":1621484158000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I know we can use `Datasets.map` to preprocess a dataset, but I'm using it with very large corpus which generates huge cache file (several TB cache from a 400 GB text file). I have no disk large enough to save it. Can we preprocess a dataset on the fly without generating cache?\r\n\r\nBTW, I tried raising `writer_batch_size`. 
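A minimal sketch of the `set_transform` approach suggested above for on-the-fly preprocessing, assuming a line-by-line text file and a hypothetical tokenizer; the transform runs lazily on access, so no processed cache file is written to disk:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Hypothetical model name and file path; adjust to your setup.
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
dataset = load_dataset("text", data_files={"train": "train.txt"})["train"]

def tokenize(batch):
    # Applied lazily to each accessed batch instead of materializing a cache.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=512)

dataset.set_transform(tokenize)
print(dataset[0])  # tokenized on the fly
```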
Seems that argument doesn't have any effect when it's larger than `batch_size`, because you are saving all the batch instantly after it's processed. Please check the following code:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/0281f9d881f3a55c89aeaa642f1ba23444b64083\/src\/datasets\/arrow_dataset.py#L1532","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1776\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1775","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1775\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1775\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1775\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1775","id":792742120,"node_id":"MDU6SXNzdWU3OTI3NDIxMjA=","number":1775,"title":"Efficient ways to iterate the dataset","user":{"login":"zhongpeixiang","id":11826803,"node_id":"MDQ6VXNlcjExODI2ODAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11826803?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/zhongpeixiang","html_url":"https:\/\/github.com\/zhongpeixiang","followers_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/followers","following_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/orgs","repos_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/repos","events_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["It seems that selecting a subset of colums directly from the dataset, i.e., dataset[\"column\"], is slow.","I was wrong, ```dataset[\"column\"]``` is fast."],"created_at":1611474871000,"updated_at":1611481839000,"closed_at":1611481839000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"For a large dataset that does not fits the memory, how can I select only a subset of features from each example?\r\n\r\nIf I iterate over the dataset and then select the subset of features one by one, the resulted memory usage will be huge. 
Any ways to solve this?\r\n\r\nThanks","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1775\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1774","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1774\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1774\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1774\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1774","id":792730559,"node_id":"MDU6SXNzdWU3OTI3MzA1NTk=","number":1774,"title":"is it possible to make slice to be more compatible like python list and numpy?","user":{"login":"world2vec","id":7607120,"node_id":"MDQ6VXNlcjc2MDcxMjA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7607120?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/world2vec","html_url":"https:\/\/github.com\/world2vec","followers_url":"https:\/\/api.github.com\/users\/world2vec\/followers","following_url":"https:\/\/api.github.com\/users\/world2vec\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/world2vec\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/world2vec\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/world2vec\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/world2vec\/orgs","repos_url":"https:\/\/api.github.com\/users\/world2vec\/repos","events_url":"https:\/\/api.github.com\/users\/world2vec\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/world2vec\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Thanks for reporting.\r\nI am working on changes in the way data are sliced from arrow. 
I can probably fix your issue with the changes I'm doing.\r\nIf you have some code to reproduce the issue it would be nice so I can make sure that this case will be supported :)\r\nI'll make a PR in a few days ","Good if you can take care at your side.\r\nHere is the [colab notebook](https:\/\/colab.research.google.com\/drive\/19c-abm87RTRYgW9G1D8ktfwRW95zDYBZ?usp=sharing)"],"created_at":1611468952000,"updated_at":1611531378000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\nsee below error:\r\n```\r\nAssertionError: Requested slice [:10000000000000000] incompatible with 20 examples.\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1774\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1773","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1773\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1773\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1773\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1773","id":792708160,"node_id":"MDU6SXNzdWU3OTI3MDgxNjA=","number":1773,"title":"bug in loading datasets ","user":{"login":"ghost","id":10137,"node_id":"MDQ6VXNlcjEwMTM3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10137?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ghost","html_url":"https:\/\/github.com\/ghost","followers_url":"https:\/\/api.github.com\/users\/ghost\/followers","following_url":"https:\/\/api.github.com\/users\/ghost\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ghost\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ghost\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ghost\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ghost\/orgs","repos_url":"https:\/\/api.github.com\/users\/ghost\/repos","events_url":"https:\/\/api.github.com\/users\/ghost\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ghost\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Looks like an issue with your csv file. Did you use the right delimiter ?\r\nApparently at line 37 the CSV reader from pandas reads 2 fields instead of 1.","Note that you can pass any argument you would pass to `pandas.read_csv` as kwargs to `load_dataset`. 
For example you can do\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', data_files=data_files, sep=\"\\t\")\r\n```\r\n\r\nfor example to use a tab separator.\r\n\r\nYou can see the full list of arguments here: https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/packaged_modules\/csv\/csv.py\r\n\r\n(I've not found the list in the documentation though, we definitely must add them !)","You can try to convert the file to (CSV UTF-8)"],"created_at":1611456825000,"updated_at":1630918486000,"closed_at":1628100781000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\nI need to load a dataset, I use these commands:\r\n\r\n```\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', data_files={'train': 'sick\/train.csv',\r\n 'test': 'sick\/test.csv',\r\n 'validation': 'sick\/validation.csv'})\r\nprint(dataset['validation'])\r\n```\r\nthe dataset in sick\/train.csv are simple csv files representing the data. I am getting this error, do you have an idea how I can solve this? thank you @lhoestq \r\n\r\n \r\n```\r\nUsing custom data configuration default\r\nDownloading and preparing dataset csv\/default-61468fc71a743ec1 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/julia\/cache_home_2\/datasets\/csv\/default-61468fc71a743ec1\/0.0.0\/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2...\r\nTraceback (most recent call last):\r\n File \"\/julia\/libs\/anaconda3\/envs\/success\/lib\/python3.7\/site-packages\/datasets-1.2.0-py3.7.egg\/datasets\/builder.py\", line 485, in incomplete_dir\r\n yield tmp_dir\r\n File \"\/julia\/libs\/anaconda3\/envs\/success\/lib\/python3.7\/site-packages\/datasets-1.2.0-py3.7.egg\/datasets\/builder.py\", line 527, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/julia\/libs\/anaconda3\/envs\/success\/lib\/python3.7\/site-packages\/datasets-1.2.0-py3.7.egg\/datasets\/builder.py\", line 604, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/julia\/libs\/anaconda3\/envs\/success\/lib\/python3.7\/site-packages\/datasets-1.2.0-py3.7.egg\/datasets\/builder.py\", line 959, in _prepare_split\r\n for key, table in utils.tqdm(generator, unit=\" tables\", leave=False, disable=not_verbose):\r\n File \"\/julia\/libs\/anaconda3\/envs\/success\/lib\/python3.7\/site-packages\/tqdm-4.49.0-py3.7.egg\/tqdm\/std.py\", line 1133, in __iter__\r\n for obj in iterable:\r\n File \"\/julia\/cache_home_2\/modules\/datasets_modules\/datasets\/csv\/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2\/csv.py\", line 129, in _generate_tables\r\n for batch_idx, df in enumerate(csv_file_reader):\r\n File \"\/julia\/libs\/anaconda3\/envs\/success\/lib\/python3.7\/site-packages\/pandas-1.2.0-py3.7-linux-x86_64.egg\/pandas\/io\/parsers.py\", line 1029, in __next__\r\n return self.get_chunk()\r\n File \"\/julia\/libs\/anaconda3\/envs\/success\/lib\/python3.7\/site-packages\/pandas-1.2.0-py3.7-linux-x86_64.egg\/pandas\/io\/parsers.py\", line 1079, in get_chunk\r\n return self.read(nrows=size)\r\n File \"\/julia\/libs\/anaconda3\/envs\/success\/lib\/python3.7\/site-packages\/pandas-1.2.0-py3.7-linux-x86_64.egg\/pandas\/io\/parsers.py\", line 1052, in read\r\n index, columns, col_dict = self._engine.read(nrows)\r\n File 
\"\/julia\/libs\/anaconda3\/envs\/success\/lib\/python3.7\/site-packages\/pandas-1.2.0-py3.7-linux-x86_64.egg\/pandas\/io\/parsers.py\", line 2056, in read\r\n data = self._reader.read(nrows)\r\n File \"pandas\/_libs\/parsers.pyx\", line 756, in pandas._libs.parsers.TextReader.read\r\n File \"pandas\/_libs\/parsers.pyx\", line 783, in pandas._libs.parsers.TextReader._read_low_memory\r\n File \"pandas\/_libs\/parsers.pyx\", line 827, in pandas._libs.parsers.TextReader._read_rows\r\n File \"pandas\/_libs\/parsers.pyx\", line 814, in pandas._libs.parsers.TextReader._tokenize_rows\r\n File \"pandas\/_libs\/parsers.pyx\", line 1951, in pandas._libs.parsers.raise_parser_error\r\npandas.errors.ParserError: Error tokenizing data. C error: Expected 1 fields in line 37, saw 2\r\n\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"write_sick.py\", line 19, in <module>\r\n 'validation': 'sick\/validation.csv'})\r\n File \"\/julia\/libs\/anaconda3\/envs\/success\/lib\/python3.7\/site-packages\/datasets-1.2.0-py3.7.egg\/datasets\/load.py\", line 612, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"\/julia\/libs\/anaconda3\/envs\/success\/lib\/python3.7\/site-packages\/datasets-1.2.0-py3.7.egg\/datasets\/builder.py\", line 534, in download_and_prepare\r\n self._save_info()\r\n File \"\/julia\/libs\/anaconda3\/envs\/success\/lib\/python3.7\/contextlib.py\", line 130, in __exit__\r\n self.gen.throw(type, value, traceback)\r\n File \"\/julia\/libs\/anaconda3\/envs\/success\/lib\/python3.7\/site-packages\/datasets-1.2.0-py3.7.egg\/datasets\/builder.py\", line 491, in incomplete_dir\r\n shutil.rmtree(tmp_dir)\r\n File \"\/julia\/libs\/anaconda3\/envs\/success\/lib\/python3.7\/shutil.py\", line 498, in rmtree\r\n onerror(os.rmdir, path, sys.exc_info())\r\n File \"\/julia\/libs\/anaconda3\/envs\/success\/lib\/python3.7\/shutil.py\", line 496, in rmtree\r\n os.rmdir(path)\r\nOSError: [Errno 39] Directory not empty: '\/julia\/cache_home_2\/datasets\/csv\/default-61468fc71a743ec1\/0.0.0\/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2.incomplete'\r\n```\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1773\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1772","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1772\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1772\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1772\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1772","id":792703797,"node_id":"MDU6SXNzdWU3OTI3MDM3OTc=","number":1772,"title":"Adding SICK 
dataset","user":{"login":"ghost","id":10137,"node_id":"MDQ6VXNlcjEwMTM3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10137?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ghost","html_url":"https:\/\/github.com\/ghost","followers_url":"https:\/\/api.github.com\/users\/ghost\/followers","following_url":"https:\/\/api.github.com\/users\/ghost\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ghost\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ghost\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ghost\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ghost\/orgs","repos_url":"https:\/\/api.github.com\/users\/ghost\/repos","events_url":"https:\/\/api.github.com\/users\/ghost\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ghost\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1611454531000,"updated_at":1612540165000,"closed_at":1612540165000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nIt would be great to include SICK dataset.\r\n\r\n## Adding a Dataset\r\n- **Name:** SICK\r\n- **Description:** a well known entailment dataset \r\n- **Paper:** http:\/\/marcobaroni.org\/composes\/sick.html\r\n- **Data:** http:\/\/marcobaroni.org\/composes\/sick.html\r\n- **Motivation:** this is an important NLI benchmark\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n\r\n\r\n\r\nthanks","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1772\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1771","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1771\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1771\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1771\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1771","id":792701276,"node_id":"MDU6SXNzdWU3OTI3MDEyNzY=","number":1771,"title":"Couldn't reach 
https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.2.1\/datasets\/csv\/csv.py","user":{"login":"world2vec","id":7607120,"node_id":"MDQ6VXNlcjc2MDcxMjA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7607120?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/world2vec","html_url":"https:\/\/github.com\/world2vec","followers_url":"https:\/\/api.github.com\/users\/world2vec\/followers","following_url":"https:\/\/api.github.com\/users\/world2vec\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/world2vec\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/world2vec\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/world2vec\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/world2vec\/orgs","repos_url":"https:\/\/api.github.com\/users\/world2vec\/repos","events_url":"https:\/\/api.github.com\/users\/world2vec\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/world2vec\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I temporary manually download csv.py as custom dataset loading script","Indeed in 1.2.1 the script to process csv file is downloaded. Starting from the next release though we include the csv processing directly in the library.\r\nSee PR #1726 \r\nWe'll do a new release soon :)","Thanks."],"created_at":1611453232000,"updated_at":1611529589000,"closed_at":1611529589000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\nWhen I load_dataset from local csv files, below error happened, looks raw.githubusercontent.com was blocked by the chinese government. But why it need to download csv.py? 
should it include when pip install the dataset?\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"\/home\/tom\/pyenv\/pystory\/lib\/python3.6\/site-packages\/datasets\/load.py\", line 267, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"\/home\/tom\/pyenv\/pystory\/lib\/python3.6\/site-packages\/datasets\/utils\/file_utils.py\", line 343, in cached_path\r\n max_retries=download_config.max_retries,\r\n File \"\/home\/tom\/pyenv\/pystory\/lib\/python3.6\/site-packages\/datasets\/utils\/file_utils.py\", line 617, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.2.1\/datasets\/csv\/csv.py\r\n\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1771\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1770","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1770\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1770\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1770\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1770","id":792698148,"node_id":"MDU6SXNzdWU3OTI2OTgxNDg=","number":1770,"title":"how can I combine 2 dataset with different\/same features?","user":{"login":"world2vec","id":7607120,"node_id":"MDQ6VXNlcjc2MDcxMjA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7607120?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/world2vec","html_url":"https:\/\/github.com\/world2vec","followers_url":"https:\/\/api.github.com\/users\/world2vec\/followers","following_url":"https:\/\/api.github.com\/users\/world2vec\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/world2vec\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/world2vec\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/world2vec\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/world2vec\/orgs","repos_url":"https:\/\/api.github.com\/users\/world2vec\/repos","events_url":"https:\/\/api.github.com\/users\/world2vec\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/world2vec\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Currently we don't have a way to `zip` datasets but we plan to add this soon :)\r\nFor now you'll need to use `map` to add the fields from one dataset to the other. 
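Until a built-in `zip` lands, a minimal sketch of the `map`-based merge described above, assuming both datasets are row-aligned and of equal length; the file names and the `text` column are hypothetical:

```python
from datasets import load_dataset

# Hypothetical inputs: ds1 holds source sentences, ds2 target sentences.
ds1 = load_dataset("csv", data_files={"train": "src.csv"})["train"]
ds2 = load_dataset("csv", data_files={"train": "tgt.csv"})["train"]

# with_indices=True exposes the row index, so each ds1 row can pull
# its counterpart from ds2.
combined = ds1.map(
    lambda example, idx: {"src": example["text"], "tgt": ds2[idx]["text"]},
    with_indices=True,
)
print(combined[0])  # original columns plus the added 'src' and 'tgt'
```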
See the comment here for more info : https:\/\/github.com\/huggingface\/datasets\/issues\/853#issuecomment-727872188","Good to hear.\r\nCurrently I did not use map , just fetch src and tgt from the 2 dataset and merge them.\r\nIt will be a release if you can deal with it at the backend.\r\nThanks."],"created_at":1611451566000,"updated_at":1611531834000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"to combine 2 dataset by one-one map like ds = zip(ds1, ds2):\r\nds1: {'text'}, ds2: {'text'}, combine ds:{'src', 'tgt'} \r\nor different feature:\r\nds1: {'src'}, ds2: {'tgt'}, combine ds:{'src', 'tgt'}","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1770\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1769","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1769\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1769\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1769\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1769","id":792523284,"node_id":"MDU6SXNzdWU3OTI1MjMyODQ=","number":1769,"title":"_pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union when calling datasets.map with num_proc=2","user":{"login":"shuaihuaiyi","id":14048129,"node_id":"MDQ6VXNlcjE0MDQ4MTI5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14048129?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/shuaihuaiyi","html_url":"https:\/\/github.com\/shuaihuaiyi","followers_url":"https:\/\/api.github.com\/users\/shuaihuaiyi\/followers","following_url":"https:\/\/api.github.com\/users\/shuaihuaiyi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/shuaihuaiyi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/shuaihuaiyi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/shuaihuaiyi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/shuaihuaiyi\/orgs","repos_url":"https:\/\/api.github.com\/users\/shuaihuaiyi\/repos","events_url":"https:\/\/api.github.com\/users\/shuaihuaiyi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/shuaihuaiyi\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["More information: `run_mlm.py` will raise same error when `data_args.line_by_line==True`\r\n\r\nhttps:\/\/github.com\/huggingface\/transformers\/blob\/9152f16023b59d262b51573714b40325c8e49370\/examples\/language-modeling\/run_mlm.py#L300\r\n","Hi ! What version of python and datasets do you have ? And also what version of dill and pickle ?","> Hi ! What version of python and datasets do you have ? And also what version of dill and pickle ?\r\n\r\npython==3.6.10\r\ndatasets==1.2.1\r\ndill==0.3.2\r\npickle.format_version==4.0","Multiprocessing in python require all the functions to be picklable. 
More specifically, functions need to be picklable with `dill`.\r\n\r\nHowever objects like `typing.Union[str, NoneType]` are not picklable in python <3.7.\r\nCan you try to update your python version to python>=3.7 ?\r\n"],"created_at":1611396780000,"updated_at":1611570237000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"It may be a bug of multiprocessing with Datasets, when I disable the multiprocessing by set num_proc to None, everything works fine.\r\n\r\nThe script I use is https:\/\/github.com\/huggingface\/transformers\/blob\/master\/examples\/language-modeling\/run_mlm_wwm.py\r\n\r\nScript args:\r\n\r\n```\r\n--model_name_or_path\r\n..\/..\/..\/model\/chinese-roberta-wwm-ext\r\n--train_file\r\n\/nfs\/volume-377-2\/bert\/data\/test\/train.txt\r\n--output_dir\r\ntest\r\n--do_train\r\n--per_device_train_batch_size\r\n2\r\n--gradient_accumulation_steps\r\n2\r\n--learning_rate\r\n1e-4\r\n--max_steps\r\n1000\r\n--warmup_steps\r\n10\r\n--save_steps\r\n1000\r\n--save_total_limit\r\n1\r\n--seed\r\n23333\r\n--max_seq_length\r\n512\r\n--preprocessing_num_workers\r\n2\r\n--cache_dir\r\n\/nfs\/volume-377-2\/bert\/data\/test\/cache\r\n```\r\n\r\nWhere the `\/nfs\/volume-377-2\/bert\/data\/test\/train.txt` is just a toy example with 10000 lines of random string, you should be able to reproduce this error esaily.\r\n\r\nFull Traceback:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"\/nfs\/volume-377-2\/bert\/transformers\/examples\/language-modeling\/run_mlm_wwm.py\", line 398, in <module>\r\n main()\r\n File \"\/nfs\/volume-377-2\/bert\/transformers\/examples\/language-modeling\/run_mlm_wwm.py\", line 325, in main\r\n load_from_cache_file=not data_args.overwrite_cache,\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/site-packages\/datasets\/dataset_dict.py\", line 303, in map\r\n for k, dataset in self.items()\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/site-packages\/datasets\/dataset_dict.py\", line 303, in <dictcomp>\r\n for k, dataset in self.items()\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/site-packages\/datasets\/arrow_dataset.py\", line 1318, in map\r\n transformed_shards = [r.get() for r in results]\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/site-packages\/datasets\/arrow_dataset.py\", line 1318, in <listcomp>\r\n transformed_shards = [r.get() for r in results]\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/site-packages\/multiprocess\/pool.py\", line 644, in get\r\n raise self._value\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/site-packages\/multiprocess\/pool.py\", line 424, in _handle_tasks\r\n put(task)\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/site-packages\/multiprocess\/connection.py\", line 209, in send\r\n self._send_bytes(_ForkingPickler.dumps(obj))\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/site-packages\/multiprocess\/reduction.py\", line 54, in dumps\r\n cls(buf, protocol, *args, **kwds).dump(obj)\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/site-packages\/dill\/_dill.py\", line 446, in dump\r\n StockPickler.dump(self, obj)\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/pickle.py\", line 409, in dump\r\n self.save(obj)\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/pickle.py\", line 476, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File 
\"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/pickle.py\", line 751, in save_tuple\r\n save(element)\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/pickle.py\", line 476, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/site-packages\/dill\/_dill.py\", line 933, in save_module_dict\r\n StockPickler.save_dict(pickler, obj)\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/pickle.py\", line 821, in save_dict\r\n self._batch_setitems(obj.items())\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/pickle.py\", line 847, in _batch_setitems\r\n save(v)\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/pickle.py\", line 476, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/site-packages\/dill\/_dill.py\", line 1438, in save_function\r\n obj.__dict__, fkwdefaults), obj=obj)\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/pickle.py\", line 610, in save_reduce\r\n save(args)\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/pickle.py\", line 476, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/pickle.py\", line 751, in save_tuple\r\n save(element)\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/pickle.py\", line 476, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/pickle.py\", line 736, in save_tuple\r\n save(element)\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/pickle.py\", line 476, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/site-packages\/dill\/_dill.py\", line 1170, in save_cell\r\n pickler.save_reduce(_create_cell, (f,), obj=obj)\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/pickle.py\", line 610, in save_reduce\r\n save(args)\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/pickle.py\", line 476, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/pickle.py\", line 736, in save_tuple\r\n save(element)\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/pickle.py\", line 521, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/pickle.py\", line 605, in save_reduce\r\n save(cls)\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/pickle.py\", line 476, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/site-packages\/dill\/_dill.py\", line 1365, in save_type\r\n obj.__bases__, _dict), obj=obj)\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/pickle.py\", line 610, in save_reduce\r\n save(args)\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/pickle.py\", line 476, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/pickle.py\", line 751, in save_tuple\r\n save(element)\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/pickle.py\", line 476, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File 
\"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/site-packages\/dill\/_dill.py\", line 933, in save_module_dict\r\n StockPickler.save_dict(pickler, obj)\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/pickle.py\", line 821, in save_dict\r\n self._batch_setitems(obj.items())\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/pickle.py\", line 847, in _batch_setitems\r\n save(v)\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/pickle.py\", line 476, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/site-packages\/dill\/_dill.py\", line 933, in save_module_dict\r\n StockPickler.save_dict(pickler, obj)\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/pickle.py\", line 821, in save_dict\r\n self._batch_setitems(obj.items())\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/pickle.py\", line 847, in _batch_setitems\r\n save(v)\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/pickle.py\", line 507, in save\r\n self.save_global(obj, rv)\r\n File \"\/home\/luban\/miniconda3\/envs\/py36\/lib\/python3.6\/pickle.py\", line 927, in save_global\r\n (obj, module_name, name))\r\n_pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1769\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1768","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1768\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1768\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1768\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1768","id":792150745,"node_id":"MDExOlB1bGxSZXF1ZXN0NTYwMDgyNzIx","number":1768,"title":"Mention kwargs in the Dataset Formatting 
docs","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1611333800000,"updated_at":1612096390000,"closed_at":1611566099000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1768","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1768","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1768.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1768.patch"},"body":"Hi,\r\n\r\nThis was discussed in Issue #1762 where the docs didn't mention that keyword arguments to `datasets.Dataset.set_format()` are allowed. \r\nTo prevent people from having to check the code\/method docs, I just added a couple of lines in the docs.\r\n\r\nPlease let me know your thoughts on this.\r\n\r\nThanks,\r\nGunjan\r\n\r\n@lhoestq ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1768\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1767","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1767\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1767\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1767\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1767","id":792068497,"node_id":"MDExOlB1bGxSZXF1ZXN0NTYwMDE2MzE2","number":1767,"title":"Add Librispeech 
ASR","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> Awesome thank you !\r\n> \r\n> The dummy data are quite big but it was expected given that the raw files are flac files.\r\n> Given that the script doesn't even read the flac files I think we can remove them. Or maybe use empty flac files (see [here](https:\/\/hydrogenaud.io\/index.php?topic=118685.0) for example). What do you think ?\r\n> \r\n> We'll find a better solution to be able to have bigger dummy_data (max 1MB instead of a few KB, maybe using git LFS.\r\n\r\nHmm, I already made the dummy data as small as possible (a single flac filie per split only). I'd like to keep them at least to have complete dummy data and don't think 500KB for all datasets together is a problem (the long-range summarization datasets are similarly heavy). 
The moment we allow dummy data to be loaded directly for testing, we need the flac files IMO.\r\n\r\nBut I agree that longterm, we need a better solution for the dummy data (maybe stop hosting it on github to not make the repo too heavy)"],"created_at":1611327277000,"updated_at":1611607087000,"closed_at":1611607062000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1767","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1767","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1767.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1767.patch"},"body":"This PR adds the librispeech asr dataset: https:\/\/www.tensorflow.org\/datasets\/catalog\/librispeech\r\n\r\nThere are 2 configs: \"clean\" and \"other\" whereas there are two \"train\" datasets for \"clean\", hence the name \"train.100\" and \"train.360\".\r\n\r\nAs suggested by @lhoestq, due to the enormous size of the dataset in `.arrow` format, the speech files are not directly prepared to a float32-array, but instead just the path to the array file is stored.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1767\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1766","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1766\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1766\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1766\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1766","id":792044105,"node_id":"MDU6SXNzdWU3OTIwNDQxMDU=","number":1766,"title":"Issues when run two programs compute the same metrics","user":{"login":"lamthuy","id":8089862,"node_id":"MDQ6VXNlcjgwODk4NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8089862?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lamthuy","html_url":"https:\/\/github.com\/lamthuy","followers_url":"https:\/\/api.github.com\/users\/lamthuy\/followers","following_url":"https:\/\/api.github.com\/users\/lamthuy\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lamthuy\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lamthuy\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lamthuy\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lamthuy\/orgs","repos_url":"https:\/\/api.github.com\/users\/lamthuy\/repos","events_url":"https:\/\/api.github.com\/users\/lamthuy\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lamthuy\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! To avoid collisions you can specify a `experiment_id` when instantiating your metric using `load_metric`. It will replace \"default_experiment\" with the experiment id that you provide in the arrow filename. \r\n\r\nAlso when two `experiment_id` collide we're supposed to detect it using our locking mechanism. Not sure why it didn't work in your case. Could you share some code that reproduces the issue ? This would help us investigate.","Thank you for your response. 
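A minimal sketch of the `experiment_id` workaround mentioned above; the ids are arbitrary and only need to differ between the two programs so that each writes its own cache file:

```python
from datasets import load_metric

# Program 1: the cache file becomes <experiment_id>-1-0.arrow
# instead of the shared default_experiment-1-0.arrow.
metric = load_metric("sacrebleu", experiment_id="run_1")

# Program 2 would instead use:
# metric = load_metric("sacrebleu", experiment_id="run_2")
```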
I fixed the issue by set \"keep_in_memory=True\" when load_metric. \r\nI cannot share the entire source code but below is the wrapper I wrote:\r\n\r\n```python\r\nclass Evaluation:\r\n def __init__(self, metric='sacrebleu'):\r\n # self.metric = load_metric(metric, keep_in_memory=True)\r\n self.metric = load_metric(metric)\r\n\r\n def add(self, predictions, references):\r\n self.metric.add_batch(predictions=predictions, references=references)\r\n\r\n def compute(self):\r\n return self.metric.compute()['score']\r\n```\r\n\r\nThen call the given wrapper as follows:\r\n\r\n```python\r\neval = Evaluation(metric='sacrebleu')\r\nfor query, candidates, labels in tqdm(dataset):\r\n predictions = net.generate(query)\r\n references = [[s] for s in labels]\r\n eval.add(predictions, references)\r\n if n % 100 == 0:\r\n bleu += eval.compute()\r\n eval = Evaluation(metric='sacrebleu')"],"created_at":1611325375000,"updated_at":1612262286000,"closed_at":1612262286000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I got the following error when running two different programs that both compute sacreblue metrics. It seems that both read\/and\/write to the same location (.cache\/huggingface\/metrics\/sacrebleu\/default\/default_experiment-1-0.arrow) where it caches the batches:\r\n\r\n```\r\nFile \"train_matching_min.py\", line 160, in <module>ch_9_label\r\n avg_loss = valid(epoch, args.batch, args.validation, args.with_label)\r\n File \"train_matching_min.py\", line 93, in valid\r\n bleu += eval.compute()\r\n File \"\/u\/tlhoang\/projects\/seal\/match\/models\/eval.py\", line 23, in compute\r\n return self.metric.compute()['score']\r\n File \"\/dccstor\/know\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/metric.py\", line 387, in compute\r\n self._finalize()\r\n File \"\/dccstor\/know\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/metric.py\", line 355, in _finalize\r\n self.data = Dataset(**reader.read_files([{\"filename\": f} for f in file_paths]))\r\n File \"\/dccstor\/know\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/arrow_reader.py\", line 231, in read_files\r\n pa_table = self._read_files(files)\r\n File \"\/dccstor\/know\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/arrow_reader.py\", line 170, in _read_files\r\n pa_table: pa.Table = self._get_dataset_from_filename(f_dict)\r\n File \"\/dccstor\/know\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/arrow_reader.py\", line 299, in _get_dataset_from_filename\r\n pa_table = f.read_all()\r\n File \"pyarrow\/ipc.pxi\", line 481, in pyarrow.lib.RecordBatchReader.read_all\r\n File \"pyarrow\/error.pxi\", line 84, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Expected to read 1819307375 metadata bytes, but only read 454396\r\n``` ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1766\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1765","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1765\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1765\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1765\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1765","id":791553065,"node_id":"MDU6SXNzdWU3OTE1NTMwNjU=","number":1765,"title":"Error iterating over Dataset 
with DataLoader","user":{"login":"EvanZ","id":1295082,"node_id":"MDQ6VXNlcjEyOTUwODI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1295082?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/EvanZ","html_url":"https:\/\/github.com\/EvanZ","followers_url":"https:\/\/api.github.com\/users\/EvanZ\/followers","following_url":"https:\/\/api.github.com\/users\/EvanZ\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/EvanZ\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/EvanZ\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/EvanZ\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/EvanZ\/orgs","repos_url":"https:\/\/api.github.com\/users\/EvanZ\/repos","events_url":"https:\/\/api.github.com\/users\/EvanZ\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/EvanZ\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Instead of:\r\n```python\r\ndataloader = torch.utils.data.DataLoader(encoded_dataset, batch_sampler=32)\r\n```\r\nIt should be:\r\n```python\r\ndataloader = torch.utils.data.DataLoader(encoded_dataset, batch_size=32)\r\n```\r\n\r\n`batch_sampler` accepts a Sampler object or an Iterable, so you get an error.","@mariosasko I thought that would fix it, but now I'm getting a different error:\r\n\r\n```\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/arrow_dataset.py:851: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at \/pytorch\/torch\/csrc\/utils\/tensor_numpy.cpp:141.)\r\n return torch.tensor(x, **format_kwargs)\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\n<ipython-input-20-3af1d82bf93a> in <module>()\r\n 1 dataloader = torch.utils.data.DataLoader(encoded_dataset, batch_size=32)\r\n----> 2 next(iter(dataloader))\r\n\r\n5 frames\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/torch\/utils\/data\/_utils\/collate.py in default_collate(batch)\r\n 53 storage = elem.storage()._new_shared(numel)\r\n 54 out = elem.new(storage)\r\n---> 55 return torch.stack(batch, 0, out=out)\r\n 56 elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \\\r\n 57 and elem_type.__name__ != 'string_':\r\n\r\nRuntimeError: stack expects each tensor to be equal size, but got [7] at entry 0 and [10] at entry 1\r\n```\r\n\r\nAny thoughts what this means?I Do I need padding?","Yes, padding is an answer. \r\n\r\nThis can be solved easily by passing a callable to the collate_fn arg of DataLoader that adds padding. 
","Padding was the fix, thanks!"],"created_at":1611269805000,"updated_at":1611400941000,"closed_at":1611373454000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I have a Dataset that I've mapped a tokenizer over:\r\n\r\n```\r\nencoded_dataset.set_format(type='torch',columns=['attention_mask','input_ids','token_type_ids'])\r\nencoded_dataset[:1]\r\n```\r\n```\r\n{'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]),\r\n 'input_ids': tensor([[ 101, 178, 1198, 1400, 1714, 22233, 21365, 4515, 8618, 1113,\r\n 102]]),\r\n 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])}\r\n```\r\n\r\nWhen I try to iterate as in the docs, I get errors:\r\n\r\n```\r\ndataloader = torch.utils.data.DataLoader(encoded_dataset, batch_sampler=32)\r\nnext(iter(dataloader))\r\n```\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-45-05180ba8aa35> in <module>()\r\n 1 dataloader = torch.utils.data.DataLoader(encoded_dataset, batch_sampler=32)\r\n----> 2 next(iter(dataloader))\r\n\r\n3 frames\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/torch\/utils\/data\/dataloader.py in __init__(self, loader)\r\n 411 self._timeout = loader.timeout\r\n 412 self._collate_fn = loader.collate_fn\r\n--> 413 self._sampler_iter = iter(self._index_sampler)\r\n 414 self._base_seed = torch.empty((), dtype=torch.int64).random_(generator=loader.generator).item()\r\n 415 self._persistent_workers = loader.persistent_workers\r\n\r\nTypeError: 'int' object is not iterable\r\n\r\n\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1765\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1764","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1764\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1764\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1764\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1764","id":791486860,"node_id":"MDU6SXNzdWU3OTE0ODY4NjA=","number":1764,"title":"Connection Issues","user":{"login":"SaeedNajafi","id":12455298,"node_id":"MDQ6VXNlcjEyNDU1Mjk4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12455298?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SaeedNajafi","html_url":"https:\/\/github.com\/SaeedNajafi","followers_url":"https:\/\/api.github.com\/users\/SaeedNajafi\/followers","following_url":"https:\/\/api.github.com\/users\/SaeedNajafi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SaeedNajafi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SaeedNajafi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SaeedNajafi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SaeedNajafi\/orgs","repos_url":"https:\/\/api.github.com\/users\/SaeedNajafi\/repos","events_url":"https:\/\/api.github.com\/users\/SaeedNajafi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SaeedNajafi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Academic WIFI was 
blocking."],"created_at":1611262569000,"updated_at":1611262819000,"closed_at":1611262802000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Today, I am getting connection issues while loading a dataset and the metric.\r\n```\r\nTraceback (most recent call last):\r\n File \"src\/train.py\", line 180, in <module>\r\n train_dataset, dev_dataset, test_dataset = create_race_dataset()\r\n File \"src\/train.py\", line 130, in create_race_dataset\r\n train_dataset = load_dataset(\"race\", \"all\", split=\"train\")\r\n File \"\/Users\/saeed\/Desktop\/codes\/repos\/dreamscape-qa\/env\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 591, in load_dataset\r\n path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True\r\n File \"\/Users\/saeed\/Desktop\/codes\/repos\/dreamscape-qa\/env\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 267, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"\/Users\/saeed\/Desktop\/codes\/repos\/dreamscape-qa\/env\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 343, in cached_path\r\n max_retries=download_config.max_retries,\r\n File \"\/Users\/saeed\/Desktop\/codes\/repos\/dreamscape-qa\/env\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 617, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.2.1\/datasets\/race\/race.py\r\n```\r\n\r\nOr\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"src\/train.py\", line 105, in <module>\r\n rouge = datasets.load_metric(\"rouge\")\r\n File \"\/Users\/saeed\/Desktop\/codes\/repos\/dreamscape-qa\/env\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 500, in load_metric\r\n dataset=False,\r\n File \"\/Users\/saeed\/Desktop\/codes\/repos\/dreamscape-qa\/env\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 267, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"\/Users\/saeed\/Desktop\/codes\/repos\/dreamscape-qa\/env\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 343, in cached_path\r\n max_retries=download_config.max_retries,\r\n File \"\/Users\/saeed\/Desktop\/codes\/repos\/dreamscape-qa\/env\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 617, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.2.1\/metrics\/rouge\/rouge.py\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1764\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1763","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1763\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1763\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1763\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1763","id":791389763,"node_id":"MDExOlB1bGxSZXF1ZXN0NTU5NDU3MTY1","number":1763,"title":"PAWS-X: Fix csv Dictreader splitting data on 
quotes","user":{"login":"gowtham1997","id":9641196,"node_id":"MDQ6VXNlcjk2NDExOTY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9641196?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gowtham1997","html_url":"https:\/\/github.com\/gowtham1997","followers_url":"https:\/\/api.github.com\/users\/gowtham1997\/followers","following_url":"https:\/\/api.github.com\/users\/gowtham1997\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gowtham1997\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gowtham1997\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gowtham1997\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gowtham1997\/orgs","repos_url":"https:\/\/api.github.com\/users\/gowtham1997\/repos","events_url":"https:\/\/api.github.com\/users\/gowtham1997\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gowtham1997\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1611253261000,"updated_at":1611310473000,"closed_at":1611310425000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1763","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1763","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1763.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1763.patch"},"body":"\r\n```python\r\nfrom datasets import load_dataset\r\n# load english paws-x dataset \r\ndatasets = load_dataset('paws-x', 'en')\r\nprint(len(datasets['train'])) # outputs 49202 but official dataset has 49401 pairs\r\nprint(datasets['train'].unique('label')) # outputs [1, 0, -1] but labels are binary [0,1]\r\n```\r\n\r\nchanged `data = csv.DictReader(f, delimiter=\"\\t\")` to `data = csv.DictReader(f, delimiter=\"\\t\", quoting=csv.QUOTE_NONE)` in the dataloader to make csv module not split by quotes.\r\n\r\nThe results are as expected for all languages after the change.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1763\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1762","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1762\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1762\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1762\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1762","id":791226007,"node_id":"MDU6SXNzdWU3OTEyMjYwMDc=","number":1762,"title":"Unable to format dataset to CUDA 
Tensors","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! You can get CUDA tensors with\r\n\r\n```python\r\ndataset.set_format(\"torch\", columns=columns, device=\"cuda\")\r\n```\r\n\r\nIndeed `set_format` passes the `**kwargs` to `torch.tensor`","Hi @lhoestq,\r\n\r\nThanks a lot. Is this true for all format types?\r\n\r\nAs in, for 'torch', I can have `**kwargs` to `torch.tensor` and for 'tf' those args are passed to `tf.Tensor`, and the same for 'numpy' and 'pandas'?","Yes the keywords arguments are passed to the convert function like `np.array`, `torch.tensor` or `tensorflow.ragged.constant`.\r\nWe don't support the kwargs for pandas on the other hand.","Thanks @lhoestq,\r\nWould it be okay if I added this to the docs and made a PR?","Sure ! Feel free to open a PR to improve the documentation :) ","Closing this issue as it has been resolved."],"created_at":1611243083000,"updated_at":1612250002000,"closed_at":1612250002000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\n\r\nI came across this [link](https:\/\/huggingface.co\/docs\/datasets\/torch_tensorflow.html) where the docs show show to convert a dataset to a particular format. 
I see that there is an option to convert it to tensors, but I don't see any option to convert it to CUDA tensors.\r\n\r\nI tried this, but Dataset doesn't support assignment:\r\n```\r\n columns=['input_ids', 'token_type_ids', 'attention_mask', 'start_positions','end_positions']\r\n\r\n samples.set_format(type='torch', columns = columns)\r\n for column in columns:\r\n samples[column].to(torch.device(self.config.device))\r\n```\r\nThere should be an option to do so, or if there is already a way to do this, please let me know.\r\n\r\nThanks,\r\nGunjan","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1762\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1761","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1761\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1761\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1761\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1761","id":791150858,"node_id":"MDExOlB1bGxSZXF1ZXN0NTU5MjUyMzEw","number":1761,"title":"Add SILICONE benchmark","user":{"login":"eusip","id":1551356,"node_id":"MDQ6VXNlcjE1NTEzNTY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1551356?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/eusip","html_url":"https:\/\/github.com\/eusip","followers_url":"https:\/\/api.github.com\/users\/eusip\/followers","following_url":"https:\/\/api.github.com\/users\/eusip\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/eusip\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/eusip\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/eusip\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/eusip\/orgs","repos_url":"https:\/\/api.github.com\/users\/eusip\/repos","events_url":"https:\/\/api.github.com\/users\/eusip\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/eusip\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for the feedback. All your comments have been addressed!","Thank you for your constructive feedback! I now know how to best format future datasets that our team plans to publish in the near future :)","Awesome ! Looking forward to it :) ","Hi @lhoestq ! One last question. Our research team would like to distribute a link to this dataset amongst the spoken dialogue research community but the dataset does not show in the dropdown menu at huggingface.co. Is there anything else we must do in order to find the dataset there ?\r\n\r\nOnce the dataset does show in the dropdown menu, how can I affiliate it with the Telecom Paris organization that I already created at the website ?","The files are not located in the right place in the repo. 
Let me move them","I created a PR at https:\/\/github.com\/huggingface\/datasets\/pull\/1794","I just merged the change @eusip, now the dataset page is available at the url:\r\nhttps:\/\/huggingface.co\/datasets\/silicone","Thank you for moving the folder for me :)"],"created_at":1611239352000,"updated_at":1612449168000,"closed_at":1611669031000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1761","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1761","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1761.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1761.patch"},"body":"My collaborators and I within the Affective Computing team at Telecom Paris would like to re-submit our spoken dialogue dataset for publication.\r\n\r\nThis is a new pull request relative to the [previously closed request](https:\/\/github.com\/huggingface\/datasets\/pull\/1712) which was reviewed by @lhoestq.\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1761\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1760","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1760\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1760\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1760\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1760","id":791110857,"node_id":"MDExOlB1bGxSZXF1ZXN0NTU5MjE3MjY0","number":1760,"title":"More tags","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Conll has `multilingual` but is only tagged as `en`","good catch, that was a bad copy paste x)"],"created_at":1611237010000,"updated_at":1611308401000,"closed_at":1611308400000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1760","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1760","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1760.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1760.patch"},"body":"Since the hub v2 is going to be released soon I figured it would be great to add the missing tags at least for some of the 
datasets of reference listed [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md#write-the-loadingprocessing-code)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1760\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1759","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1759\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1759\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1759\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1759","id":790992226,"node_id":"MDU6SXNzdWU3OTA5OTIyMjY=","number":1759,"title":"wikipedia dataset incomplete","user":{"login":"ChrisChross","id":19912393,"node_id":"MDQ6VXNlcjE5OTEyMzkz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19912393?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ChrisChross","html_url":"https:\/\/github.com\/ChrisChross","followers_url":"https:\/\/api.github.com\/users\/ChrisChross\/followers","following_url":"https:\/\/api.github.com\/users\/ChrisChross\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ChrisChross\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ChrisChross\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ChrisChross\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ChrisChross\/orgs","repos_url":"https:\/\/api.github.com\/users\/ChrisChross\/repos","events_url":"https:\/\/api.github.com\/users\/ChrisChross\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ChrisChross\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi !\r\nFrom what pickle file fo you get this ?\r\nI guess you mean the dataset loaded using `load_dataset` ?","yes sorry, I used the `load_dataset`function and saved the data to a pickle file so I don't always have to reload it and are able to work offline. ","The wikipedia articles are processed using the `mwparserfromhell` library. Even if it works well in most cases, such issues can happen unfortunately. You can find the repo here: https:\/\/github.com\/earwig\/mwparserfromhell\r\n\r\nThere also exist other datasets based on wikipedia that were processed differently (and are often cleaner) such as `wiki40b`.\r\n\r\n","ok great. Thank you, @lhoestq. 
"],"created_at":1611229635000,"updated_at":1611249731000,"closed_at":1611249666000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hey guys,\r\n\r\nI am using the https:\/\/github.com\/huggingface\/datasets\/tree\/master\/datasets\/wikipedia dataset.\r\nUnfortunately, I found out that there is an incompleteness for the German dataset.\r\n For reasons unknown to me, the number of inhabitants has been removed from many pages:\r\nThorey-sur-Ouche has 128 inhabitants according to the webpage (https:\/\/de.wikipedia.org\/wiki\/Thorey-sur-Ouche).\r\nThe pickle file however shows: franz\u00f6sische Gemeinde mit Einwohnern (Stand).\r\n Is it possible to fix this?\r\n\r\nBest regards \r\nChris\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1759\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1758","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1758\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1758\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1758\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1758","id":790626116,"node_id":"MDU6SXNzdWU3OTA2MjYxMTY=","number":1758,"title":"dataset.search() (elastic) cannot reliably retrieve search results","user":{"login":"afogarty85","id":49048309,"node_id":"MDQ6VXNlcjQ5MDQ4MzA5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/49048309?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/afogarty85","html_url":"https:\/\/github.com\/afogarty85","followers_url":"https:\/\/api.github.com\/users\/afogarty85\/followers","following_url":"https:\/\/api.github.com\/users\/afogarty85\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/afogarty85\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/afogarty85\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/afogarty85\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/afogarty85\/orgs","repos_url":"https:\/\/api.github.com\/users\/afogarty85\/repos","events_url":"https:\/\/api.github.com\/users\/afogarty85\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/afogarty85\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi !\r\nI tried your code on my side and I was able to workaround this issue by waiting a few seconds before querying the index.\r\nMaybe this is because the index is not updated yet on the ElasticSearch side ?","Thanks for the feedback! I added a 30 second \"sleep\" and that seemed to work well!"],"created_at":1611195997000,"updated_at":1611275150000,"closed_at":1611275150000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I am trying to use elastic search to retrieve the indices of items in the dataset in their precise order, given shuffled training indices.\r\n\r\nThe problem I have is that I cannot retrieve reliable results with my data on my first search. 
I have to run the search **twice** to get the right answer.\r\n\r\nI am indexing data that looks like the following from the HF SQuAD 2.0 data set:\r\n\r\n```\r\n['57318658e6313a140071d02b',\r\n '56f7165e3d8e2e1400e3733a',\r\n '570e2f6e0b85d914000d7d21',\r\n '5727e58aff5b5019007d97d0',\r\n '5a3b5a503ff257001ab8441f',\r\n '57262fab271a42140099d725']\r\n```\r\n\r\n\r\n\r\nTo reproduce the issue, try:\r\n\r\n```\r\nfrom datasets import load_dataset, load_metric\r\nfrom transformers import BertTokenizerFast, BertForQuestionAnswering\r\nfrom elasticsearch import Elasticsearch\r\nimport numpy as np\r\nimport collections\r\nfrom tqdm.auto import tqdm\r\nimport torch\r\n\r\n# from https:\/\/colab.research.google.com\/github\/huggingface\/notebooks\/blob\/master\/examples\/question_answering.ipynb#scrollTo=941LPhDWeYv-\r\ntokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')\r\nmax_length = 384 # The maximum length of a feature (question and context)\r\ndoc_stride = 128 # The authorized overlap between two part of the context when splitting it is needed.\r\npad_on_right = tokenizer.padding_side == \"right\"\r\nsquad_v2 = True\r\n\r\n# from https:\/\/colab.research.google.com\/github\/huggingface\/notebooks\/blob\/master\/examples\/question_answering.ipynb#scrollTo=941LPhDWeYv-\r\ndef prepare_validation_features(examples):\r\n # Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results\r\n # in one example possible giving several features when a context is long, each of those features having a\r\n # context that overlaps a bit the context of the previous feature.\r\n tokenized_examples = tokenizer(\r\n examples[\"question\" if pad_on_right else \"context\"],\r\n examples[\"context\" if pad_on_right else \"question\"],\r\n truncation=\"only_second\" if pad_on_right else \"only_first\",\r\n max_length=max_length,\r\n stride=doc_stride,\r\n return_overflowing_tokens=True,\r\n return_offsets_mapping=True,\r\n padding=\"max_length\",\r\n )\r\n\r\n # Since one example might give us several features if it has a long context, we need a map from a feature to\r\n # its corresponding example. 
This key gives us just that.\r\n sample_mapping = tokenized_examples.pop(\"overflow_to_sample_mapping\")\r\n\r\n # We keep the example_id that gave us this feature and we will store the offset mappings.\r\n tokenized_examples[\"example_id\"] = []\r\n\r\n for i in range(len(tokenized_examples[\"input_ids\"])):\r\n # Grab the sequence corresponding to that example (to know what is the context and what is the question).\r\n sequence_ids = tokenized_examples.sequence_ids(i)\r\n context_index = 1 if pad_on_right else 0\r\n\r\n # One example can give several spans, this is the index of the example containing this span of text.\r\n sample_index = sample_mapping[i]\r\n tokenized_examples[\"example_id\"].append(examples[\"id\"][sample_index])\r\n\r\n # Set to None the offset_mapping that are not part of the context so it's easy to determine if a token\r\n # position is part of the context or not.\r\n tokenized_examples[\"offset_mapping\"][i] = [\r\n (list(o) if sequence_ids[k] == context_index else None)\r\n for k, o in enumerate(tokenized_examples[\"offset_mapping\"][i])\r\n ]\r\n\r\n return tokenized_examples\r\n\r\n\r\n\r\n\r\n\r\n# build base examples, features set of training data\r\nshuffled_idx = pd.read_csv('https:\/\/raw.githubusercontent.com\/afogarty85\/temp\/main\/idx.csv')['idx'].to_list()\r\nexamples = load_dataset(\"squad_v2\").shuffle(seed=1)['train']\r\nfeatures = load_dataset(\"squad_v2\").shuffle(seed=1)['train'].map(\r\n prepare_validation_features,\r\n batched=True,\r\n remove_columns=['answers', 'context', 'id', 'question', 'title'])\r\n# reorder features by the training process\r\nfeatures = features.select(indices=shuffled_idx)\r\n# get the example ids to match with the \"example\" data; get unique entries\r\nid_list = list(dict.fromkeys(features['example_id']))\r\n# now search for their index positions in the examples data set; load elastic search\r\nes = Elasticsearch([{'host': 'localhost'}]).ping()\r\n# add an index to the id column for the examples\r\nexamples.add_elasticsearch_index(column='id')\r\n# retrieve the example index\r\nexample_idx_k1 = [examples.search(index_name='id', query=i, k=1).indices for i in id_list]\r\nexample_idx_k1 = [item for sublist in example_idx_k1 for item in sublist]\r\n\r\nexample_idx_k2 = [examples.search(index_name='id', query=i, k=3).indices for i in id_list]\r\nexample_idx_k2 = [item for sublist in example_idx_k2 for item in sublist]\r\n\r\nlen(example_idx_k1) # should be 130319\r\nlen(example_idx_k2) # should be 130319\r\n\r\n#trial 1 lengths:\r\n# k=1: 130314\r\n# k=3: 130319\r\n\r\n# trial 2:\r\n# just run k=3 first: 130310\r\n# try k=1 after k=3: 130319\r\n```\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1758\/timeline","performed_via_github_app":null,"is_pull_request":false} 
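For the ElasticSearch retrieval issue above (#1758), a minimal sketch of the workaround described in its comments: pause after building the index so ElasticSearch has finished refreshing before the first query. The 30-second sleep and the sample query id come from the report; a local ElasticSearch instance is assumed to be running.

```python
import time
from datasets import load_dataset

examples = load_dataset("squad_v2")["train"]
examples.add_elasticsearch_index(column="id")

# Give ElasticSearch time to finish refreshing the new index before querying;
# querying immediately was what produced incomplete results on the first search.
time.sleep(30)

result = examples.search(index_name="id", query="57318658e6313a140071d02b", k=1)
print(result.indices)
```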
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1757","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1757\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1757\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1757\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1757","id":790466509,"node_id":"MDU6SXNzdWU3OTA0NjY1MDk=","number":1757,"title":"FewRel","user":{"login":"dspoka","id":6183050,"node_id":"MDQ6VXNlcjYxODMwNTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6183050?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dspoka","html_url":"https:\/\/github.com\/dspoka","followers_url":"https:\/\/api.github.com\/users\/dspoka\/followers","following_url":"https:\/\/api.github.com\/users\/dspoka\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dspoka\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dspoka\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dspoka\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dspoka\/orgs","repos_url":"https:\/\/api.github.com\/users\/dspoka\/repos","events_url":"https:\/\/api.github.com\/users\/dspoka\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dspoka\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["+1","@dspoka Please check the following link : https:\/\/github.com\/thunlp\/FewRel\r\nThis link mentions two versions of the datasets. Also, this one seems to be the official link.\r\n\r\nI am assuming this is the correct link and implementing based on the same.","Hi @lhoestq,\r\n\r\nThis issue can be closed, I guess.","Yes :) closing\r\nThanks again for adding FewRel !","Thanks for adding this @gchhablani ! 
Sorry didn't see the email notifications sooner!"],"created_at":1611186963000,"updated_at":1615258325000,"closed_at":1615214092000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** FewRel\r\n- **Description:** Large-Scale Supervised Few-Shot Relation Classification Dataset\r\n- **Paper:** @inproceedings{han2018fewrel,\r\n title={FewRel:A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation},\r\n author={Han, Xu and Zhu, Hao and Yu, Pengfei and Wang, Ziyun and Yao, Yuan and Liu, Zhiyuan and Sun, Maosong},\r\n booktitle={EMNLP},\r\n year={2018}}\r\n- **Data:** https:\/\/github.com\/ProKil\/FewRel\r\n- **Motivation:** relationship extraction dataset that's been used by some state of the art systems that should be incorporated.\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1757\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1756","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1756\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1756\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1756\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1756","id":790380028,"node_id":"MDU6SXNzdWU3OTAzODAwMjg=","number":1756,"title":"Ccaligned multilingual translation dataset","user":{"login":"flozi00","id":47894090,"node_id":"MDQ6VXNlcjQ3ODk0MDkw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47894090?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/flozi00","html_url":"https:\/\/github.com\/flozi00","followers_url":"https:\/\/api.github.com\/users\/flozi00\/followers","following_url":"https:\/\/api.github.com\/users\/flozi00\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/flozi00\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/flozi00\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/flozi00\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/flozi00\/orgs","repos_url":"https:\/\/api.github.com\/users\/flozi00\/repos","events_url":"https:\/\/api.github.com\/users\/flozi00\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/flozi00\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1611181124000,"updated_at":1614594981000,"closed_at":1614594981000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** *name of the dataset*\r\n- **Description:** *short description of the dataset (or link to social media or blog post)*\r\n- CCAligned consists of parallel or comparable web-document pairs in 137 languages aligned with English. 
These web-document pairs were constructed by performing language identification on raw web-documents, and ensuring corresponding language codes were corresponding in the URLs of web documents. This pattern matching approach yielded more than 100 million aligned documents paired with English. Recognizing that each English document was often aligned to mulitple documents in different target language, we can join on English documents to obtain aligned documents that directly pair two non-English documents (e.g., Arabic-French).\r\n- **Paper:** *link to the dataset paper if available*\r\n- https:\/\/www.aclweb.org\/anthology\/2020.emnlp-main.480.pdf\r\n- **Data:** *link to the Github repository or current dataset location*\r\n- http:\/\/www.statmt.org\/cc-aligned\/\r\n- **Motivation:** *what are some good reasons to have this dataset*\r\n- The authors says it's an high quality dataset.\r\n- it's pretty large and includes many language pairs. It could be interesting training mt5 on this task.\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1756\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1755","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1755\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1755\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1755\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1755","id":790324734,"node_id":"MDU6SXNzdWU3OTAzMjQ3MzQ=","number":1755,"title":"Using select\/reordering datasets slows operations down immensely","user":{"login":"afogarty85","id":49048309,"node_id":"MDQ6VXNlcjQ5MDQ4MzA5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/49048309?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/afogarty85","html_url":"https:\/\/github.com\/afogarty85","followers_url":"https:\/\/api.github.com\/users\/afogarty85\/followers","following_url":"https:\/\/api.github.com\/users\/afogarty85\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/afogarty85\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/afogarty85\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/afogarty85\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/afogarty85\/orgs","repos_url":"https:\/\/api.github.com\/users\/afogarty85\/repos","events_url":"https:\/\/api.github.com\/users\/afogarty85\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/afogarty85\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["You can use `Dataset.flatten_indices()` to make it fast after a select or shuffle.","Thanks for the input! 
I gave that a try by adding this after my selection \/ reordering operations, but before the big computation task of `score_squad`\r\n\r\n```\r\nexamples = examples.flatten_indices()\r\nfeatures = features.flatten_indices()\r\n```\r\n\r\nThat helped quite a bit!"],"created_at":1611177132000,"updated_at":1611180219000,"closed_at":1611180219000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I am using portions of HF's helpful work in preparing \/ scoring the SQuAD 2.0 data. The problem I have is that after using `select` to re-ordering the dataset, computations slow down immensely where the total scoring process on 131k training examples would take maybe 3 minutes, now take over an hour.\r\n\r\nThe below example should be reproducible and I have ran myself down this path because I want to use HF's scoring functions and helpful data preparation, but use my own trainer. The training process uses shuffle and therefore the order I trained on no longer matches the original data set order. So, to score my results correctly, the original data set needs to match the order of the training. This requires that I: (1) collect the index for each row of data emitted during training, and (2) use this index information to re-order the datasets correctly so the orders match when I go to score.\r\n\r\n\r\nThe problem is, the dataset class starts performing very poorly as soon as you start manipulating its order by immense magnitudes.\r\n\r\n\r\n\r\n```\r\nfrom datasets import load_dataset, load_metric\r\nfrom transformers import BertTokenizerFast, BertForQuestionAnswering\r\nfrom elasticsearch import Elasticsearch\r\nimport numpy as np\r\nimport collections\r\nfrom tqdm.auto import tqdm\r\nimport torch\r\n\r\n# from https:\/\/colab.research.google.com\/github\/huggingface\/notebooks\/blob\/master\/examples\/question_answering.ipynb#scrollTo=941LPhDWeYv-\r\ntokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')\r\nmax_length = 384 # The maximum length of a feature (question and context)\r\ndoc_stride = 128 # The authorized overlap between two part of the context when splitting it is needed.\r\npad_on_right = tokenizer.padding_side == \"right\"\r\nsquad_v2 = True\r\n\r\n# from https:\/\/colab.research.google.com\/github\/huggingface\/notebooks\/blob\/master\/examples\/question_answering.ipynb#scrollTo=941LPhDWeYv-\r\ndef prepare_validation_features(examples):\r\n # Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results\r\n # in one example possible giving several features when a context is long, each of those features having a\r\n # context that overlaps a bit the context of the previous feature.\r\n tokenized_examples = tokenizer(\r\n examples[\"question\" if pad_on_right else \"context\"],\r\n examples[\"context\" if pad_on_right else \"question\"],\r\n truncation=\"only_second\" if pad_on_right else \"only_first\",\r\n max_length=max_length,\r\n stride=doc_stride,\r\n return_overflowing_tokens=True,\r\n return_offsets_mapping=True,\r\n padding=\"max_length\",\r\n )\r\n\r\n # Since one example might give us several features if it has a long context, we need a map from a feature to\r\n # its corresponding example. 
This key gives us just that.\r\n sample_mapping = tokenized_examples.pop(\"overflow_to_sample_mapping\")\r\n\r\n # We keep the example_id that gave us this feature and we will store the offset mappings.\r\n tokenized_examples[\"example_id\"] = []\r\n\r\n for i in range(len(tokenized_examples[\"input_ids\"])):\r\n # Grab the sequence corresponding to that example (to know what is the context and what is the question).\r\n sequence_ids = tokenized_examples.sequence_ids(i)\r\n context_index = 1 if pad_on_right else 0\r\n\r\n # One example can give several spans, this is the index of the example containing this span of text.\r\n sample_index = sample_mapping[i]\r\n tokenized_examples[\"example_id\"].append(examples[\"id\"][sample_index])\r\n\r\n # Set to None the offset_mapping that are not part of the context so it's easy to determine if a token\r\n # position is part of the context or not.\r\n tokenized_examples[\"offset_mapping\"][i] = [\r\n (list(o) if sequence_ids[k] == context_index else None)\r\n for k, o in enumerate(tokenized_examples[\"offset_mapping\"][i])\r\n ]\r\n\r\n return tokenized_examples\r\n\r\n# from https:\/\/colab.research.google.com\/github\/huggingface\/notebooks\/blob\/master\/examples\/question_answering.ipynb#scrollTo=941LPhDWeYv-\r\ndef postprocess_qa_predictions(examples, features, starting_logits, ending_logits, n_best_size = 20, max_answer_length = 30):\r\n all_start_logits, all_end_logits = starting_logits, ending_logits\r\n # Build a map example to its corresponding features.\r\n example_id_to_index = {k: i for i, k in enumerate(examples[\"id\"])}\r\n features_per_example = collections.defaultdict(list)\r\n\r\n for i, feature in enumerate(features):\r\n features_per_example[example_id_to_index[feature[\"example_id\"]]].append(i)\r\n\r\n # The dictionaries we have to fill.\r\n predictions = collections.OrderedDict()\r\n\r\n # Logging.\r\n print(f\"Post-processing {len(examples)} example predictions split into {len(features)} features.\")\r\n\r\n # Let's loop over all the examples!\r\n for example_index, example in enumerate(tqdm(examples)):\r\n # Those are the indices of the features associated to the current example.\r\n feature_indices = features_per_example[example_index]\r\n\r\n min_null_score = None # Only used if squad_v2 is True.\r\n valid_answers = []\r\n\r\n context = example[\"context\"]\r\n # Looping through all the features associated to the current example.\r\n for feature_index in feature_indices:\r\n\r\n # We grab the predictions of the model for this feature.\r\n start_logits = all_start_logits[feature_index]\r\n end_logits = all_end_logits[feature_index]\r\n # This is what will allow us to map some the positions in our logits to span of texts in the original\r\n # context.\r\n offset_mapping = features[feature_index][\"offset_mapping\"]\r\n\r\n # Update minimum null prediction.\r\n cls_index = features[feature_index][\"input_ids\"].index(tokenizer.cls_token_id)\r\n feature_null_score = start_logits[cls_index] + end_logits[cls_index]\r\n if min_null_score is None or min_null_score < feature_null_score:\r\n min_null_score = feature_null_score\r\n\r\n # Go through all possibilities for the `n_best_size` greater start and end logits.\r\n start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist()\r\n end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist()\r\n for start_index in start_indexes:\r\n for end_index in end_indexes:\r\n # Don't consider out-of-scope answers, either because the indices are out of bounds 
or correspond\r\n # to part of the input_ids that are not in the context.\r\n if (\r\n start_index >= len(offset_mapping)\r\n or end_index >= len(offset_mapping)\r\n or offset_mapping[start_index] is None\r\n or offset_mapping[end_index] is None\r\n ):\r\n continue\r\n # Don't consider answers with a length that is either < 0 or > max_answer_length.\r\n if end_index < start_index or end_index - start_index + 1 > max_answer_length:\r\n continue\r\n\r\n start_char = offset_mapping[start_index][0]\r\n end_char = offset_mapping[end_index][1]\r\n valid_answers.append(\r\n {\r\n \"score\": start_logits[start_index] + end_logits[end_index],\r\n \"text\": context[start_char: end_char]\r\n }\r\n )\r\n\r\n\r\n if len(valid_answers) > 0:\r\n best_answer = sorted(valid_answers, key=lambda x: x[\"score\"], reverse=True)[0]\r\n else:\r\n # In the very rare edge case we have not a single non-null prediction, we create a fake prediction to avoid\r\n # failure.\r\n best_answer = {\"text\": \"\", \"score\": 0.0}\r\n\r\n # Let's pick our final answer: the best one or the null answer (only for squad_v2)\r\n if not squad_v2:\r\n predictions[example[\"id\"]] = best_answer[\"text\"]\r\n else:\r\n answer = best_answer[\"text\"] if best_answer[\"score\"] > min_null_score else \"\"\r\n predictions[example[\"id\"]] = answer\r\n\r\n return predictions\r\n\r\n\r\n\r\n# build base examples, features from training data\r\nexamples = load_dataset(\"squad_v2\").shuffle(seed=5)['train']\r\nfeatures = load_dataset(\"squad_v2\").shuffle(seed=5)['train'].map(\r\n prepare_validation_features,\r\n batched=True,\r\n remove_columns=['answers', 'context', 'id', 'question', 'title'])\r\n\r\n# sim some shuffled training indices that we want to use to re-order the data to compare how we did\r\nshuffle_idx = np.arange(0, 131754)\r\nnp.random.shuffle(shuffle_idx)\r\n# create a new dataset with rows selected following the training shuffle\r\nfeatures = features.select(indices=shuffle_idx)\r\n# get unique example ids to match with the \"example\" data\r\nid_list = list(dict.fromkeys(features['example_id']))\r\n# now search for their index positions; load elastic search\r\nes = Elasticsearch([{'host': 'localhost'}]).ping()\r\n# add an index to the id column for the examples\r\nexamples.add_elasticsearch_index(column='id')\r\n# search the examples for their index position\r\nexample_idx = [examples.search(index_name='id', query=i, k=1).indices for i in id_list]\r\n# drop the elastic search\r\nexamples.drop_index(index_name='id')\r\n# put examples in the right order\r\nexamples = examples.select(indices=example_idx)\r\n\r\n# generate some fake data\r\nlogits = {'starting_logits': torch.randn(131754, 384), 'ending_logits': torch.randn(131754, 384)}\r\n\r\n\r\ndef score_squad(logits, n_best_size, max_answer):\r\n # proceed with QA calculation\r\n final_predictions = postprocess_qa_predictions(examples=examples,\r\n features=features,\r\n starting_logits=logits['starting_logits'],\r\n ending_logits=logits['ending_logits'],\r\n n_best_size=20,\r\n max_answer_length=30)\r\n metric = load_metric(\"squad_v2\")\r\n formatted_predictions = [{\"id\": k, \"prediction_text\": v, \"no_answer_probability\": 0.0} for k, v in final_predictions.items()]\r\n references = [{\"id\": ex[\"id\"], \"answers\": ex[\"answers\"]} for ex in examples]\r\n metrics = metric.compute(predictions=formatted_predictions, references=references)\r\n return metrics\r\n\r\nmetrics = score_squad(logits, n_best_size=20, 
max_answer=30)\r\n```\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1755\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1754","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1754\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1754\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1754\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1754","id":789881730,"node_id":"MDExOlB1bGxSZXF1ZXN0NTU4MTU5NjEw","number":1754,"title":"Use a config id in the cache directory names for custom configs","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1611141060000,"updated_at":1611565927000,"closed_at":1611565926000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1754","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1754","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1754.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1754.patch"},"body":"As noticed by @JetRunner there was some issues when trying to generate a dataset using a custom config that is based on an existing config.\r\n\r\nFor example in the following code the `mnli_custom` would reuse the cache used to create `mnli` instead of generating a new dataset with the new label classes:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nmnli = load_dataset(\"glue\", \"mnli\")\r\nmnli_custom = load_dataset(\"glue\", \"mnli\", label_classes=[\"contradiction\", \"entailment\", \"neutral\"])\r\n```\r\n\r\nI fixed that by extending the cache directory definition of a dataset that is being generated.\r\nInstead of using the config name in the cache directory name, I switched to using a `config_id`.\r\n\r\nBy default it is equal to the config name.\r\nHowever the name of a config is not sufficent to have a unique identifier for the dataset being generated since it doesn't take into account:\r\n- the config kwargs that can be used to overwrite attributes\r\n- the custom features used to write the dataset\r\n- the data_files for json\/text\/csv\/pandas datasets\r\n\r\nTherefore the config id is just the 
config name with an optional suffix based on these.\r\n\r\nIn particular taking into account the config kwargs fixes the issue with the `label_classes` above.\r\n\r\nI completed the current test cases by adding the case that was missing: overwriting an already existing config.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1754\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1753","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1753\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1753\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1753\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1753","id":789867685,"node_id":"MDExOlB1bGxSZXF1ZXN0NTU4MTQ3Njkx","number":1753,"title":"fix comet citations","user":{"login":"ricardorei","id":17256847,"node_id":"MDQ6VXNlcjE3MjU2ODQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17256847?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ricardorei","html_url":"https:\/\/github.com\/ricardorei","followers_url":"https:\/\/api.github.com\/users\/ricardorei\/followers","following_url":"https:\/\/api.github.com\/users\/ricardorei\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ricardorei\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ricardorei\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ricardorei\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ricardorei\/orgs","repos_url":"https:\/\/api.github.com\/users\/ricardorei\/repos","events_url":"https:\/\/api.github.com\/users\/ricardorei\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ricardorei\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1611139958000,"updated_at":1611153570000,"closed_at":1611153570000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1753","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1753","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1753.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1753.patch"},"body":"I realized COMET citations were not showing in the hugging face metrics page:\r\n\r\n<img width=\"814\" alt=\"Screenshot 2021-01-20 at 09 48 44\" src=\"https:\/\/user-images.githubusercontent.com\/17256847\/105164848-8b9da900-5b0d-11eb-9e20-a38f559d2037.png\">\r\n\r\nThis pull request is intended to fix that.\r\n\r\nThanks!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1753\/timeline","performed_via_github_app":null,"is_pull_request":true} 
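To illustrate the behaviour the config-id change in #1754 above is meant to guarantee, a small sketch (not taken from the PR itself): the custom `label_classes` config is generated and cached separately from the stock `mnli` config, so the second call no longer silently reuses the first one's cache.

```python
from datasets import load_dataset

mnli = load_dataset("glue", "mnli")
mnli_custom = load_dataset("glue", "mnli", label_classes=["contradiction", "entailment", "neutral"])

# With the fix, the custom config gets its own cache directory,
# so the overridden label feature is actually reflected in the generated dataset.
print(mnli["train"].features["label"])
print(mnli_custom["train"].features["label"])
```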
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1752","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1752\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1752\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1752\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1752","id":789822459,"node_id":"MDExOlB1bGxSZXF1ZXN0NTU4MTA5NTA5","number":1752,"title":"COMET metric citation","user":{"login":"ricardorei","id":17256847,"node_id":"MDQ6VXNlcjE3MjU2ODQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17256847?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ricardorei","html_url":"https:\/\/github.com\/ricardorei","followers_url":"https:\/\/api.github.com\/users\/ricardorei\/followers","following_url":"https:\/\/api.github.com\/users\/ricardorei\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ricardorei\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ricardorei\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ricardorei\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ricardorei\/orgs","repos_url":"https:\/\/api.github.com\/users\/ricardorei\/repos","events_url":"https:\/\/api.github.com\/users\/ricardorei\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ricardorei\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I think its better to create a new branch with this fix. I forgot I was still using the old branch."],"created_at":1611136483000,"updated_at":1611138427000,"closed_at":1611138302000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1752","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1752","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1752.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1752.patch"},"body":"In my last pull request to add COMET metric, the citations where not following the usual \"format\". 
Because of that they where not correctly displayed on the website: \r\n\r\n<img width=\"814\" alt=\"Screenshot 2021-01-20 at 09 48 44\" src=\"https:\/\/user-images.githubusercontent.com\/17256847\/105158000-686efb80-5b05-11eb-8bb0-9c85fdac2938.png\">\r\n\r\nThis pull request is only intended to fix that.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1752\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1751","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1751\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1751\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1751\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1751","id":789232980,"node_id":"MDExOlB1bGxSZXF1ZXN0NTU3NjA1ODE2","number":1751,"title":"Updated README for the Social Bias Frames dataset","user":{"login":"mcmillanmajora","id":26722925,"node_id":"MDQ6VXNlcjI2NzIyOTI1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26722925?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mcmillanmajora","html_url":"https:\/\/github.com\/mcmillanmajora","followers_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/followers","following_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/orgs","repos_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/repos","events_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1611078780000,"updated_at":1611154612000,"closed_at":1611154612000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1751","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1751","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1751.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1751.patch"},"body":"See the updated card at https:\/\/github.com\/mcmillanmajora\/datasets\/tree\/add-SBIC-card\/datasets\/social_bias_frames. 
I incorporated information from the [SBIC data statement](https:\/\/homes.cs.washington.edu\/~msap\/social-bias-frames\/DATASTATEMENT.html), paper, and the corpus README file included with the dataset download.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1751\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1750","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1750\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1750\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1750\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1750","id":788668085,"node_id":"MDExOlB1bGxSZXF1ZXN0NTU3MTM1MzM1","number":1750,"title":"Fix typo in README.md of cnn_dailymail","user":{"login":"forest1988","id":2755894,"node_id":"MDQ6VXNlcjI3NTU4OTQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2755894?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/forest1988","html_url":"https:\/\/github.com\/forest1988","followers_url":"https:\/\/api.github.com\/users\/forest1988\/followers","following_url":"https:\/\/api.github.com\/users\/forest1988\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/forest1988\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/forest1988\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/forest1988\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/forest1988\/orgs","repos_url":"https:\/\/api.github.com\/users\/forest1988\/repos","events_url":"https:\/\/api.github.com\/users\/forest1988\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/forest1988\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Good catch, thanks!","Thank you for merging!"],"created_at":1611025565000,"updated_at":1611054449000,"closed_at":1611049723000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1750","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1750","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1750.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1750.patch"},"body":"When I read the README.md of `CNN\/DailyMail Dataset`, there seems to be a typo `CCN`.\r\n\r\nI am afraid this is a trivial matter, but I would like to make a suggestion for revision.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1750\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1749","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1749\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1749\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1749\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1749","id":788476639,"node_id":"MDExOlB1bGxSZXF1ZXN0NTU2OTgxMDc5","number":1749,"title":"Added metadata and 
correct splits for swda.","user":{"login":"gmihaila","id":22454783,"node_id":"MDQ6VXNlcjIyNDU0Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22454783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gmihaila","html_url":"https:\/\/github.com\/gmihaila","followers_url":"https:\/\/api.github.com\/users\/gmihaila\/followers","following_url":"https:\/\/api.github.com\/users\/gmihaila\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gmihaila\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gmihaila\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gmihaila\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gmihaila\/orgs","repos_url":"https:\/\/api.github.com\/users\/gmihaila\/repos","events_url":"https:\/\/api.github.com\/users\/gmihaila\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gmihaila\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I will push updates tomorrow.","@lhoestq thank you for your comments! I went ahead and fixed the code \ud83d\ude03. Please let me know if I missed anything."],"created_at":1610994992000,"updated_at":1611948952000,"closed_at":1611945488000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1749","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1749","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1749.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1749.patch"},"body":"Switchboard Dialog Act Corpus\r\n\r\nI made some changes following @bhavitvyamalik recommendation in #1678:\r\n\r\n* Contains all metadata.\r\n* Used official implementation from the [\/swda](https:\/\/github.com\/cgpotts\/swda) repo.\r\n* Add official train and test splits used in [Stolcke et al. 
(2000)](https:\/\/web.stanford.edu\/~jurafsky\/ws97) and validation split used in [Probabilistic-RNN-DA-Classifier](https:\/\/github.com\/NathanDuran\/Probabilistic-RNN-DA-Classifier).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1749\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1748","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1748\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1748\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1748\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1748","id":788431642,"node_id":"MDExOlB1bGxSZXF1ZXN0NTU2OTQ0NDEx","number":1748,"title":"add Stuctured Argument Extraction for Korean dataset","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1610990059000,"updated_at":1631897598000,"closed_at":1611055618000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1748","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1748","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1748.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1748.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1748\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1747","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1747\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1747\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1747\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1747","id":788299775,"node_id":"MDU6SXNzdWU3ODgyOTk3NzU=","number":1747,"title":"datasets slicing with seed 
","user":{"login":"ghost","id":10137,"node_id":"MDQ6VXNlcjEwMTM3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10137?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ghost","html_url":"https:\/\/github.com\/ghost","followers_url":"https:\/\/api.github.com\/users\/ghost\/followers","following_url":"https:\/\/api.github.com\/users\/ghost\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ghost\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ghost\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ghost\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ghost\/orgs","repos_url":"https:\/\/api.github.com\/users\/ghost\/repos","events_url":"https:\/\/api.github.com\/users\/ghost\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ghost\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi :) \r\nThe slicing API from https:\/\/huggingface.co\/docs\/datasets\/splits.html doesn't shuffle the data.\r\nYou can shuffle and then take a subset of your dataset with\r\n```python\r\n# shuffle and take the first 100 examples\r\ndataset = dataset.shuffle(seed=42).select(range(100))\r\n```\r\n\r\nYou can find more information about shuffling and selecting rows in the documentation: https:\/\/huggingface.co\/docs\/datasets\/processing.html#selecting-sorting-shuffling-splitting-rows","thank you so much\n\nOn Mon, Jan 18, 2021 at 3:17 PM Quentin Lhoest <notifications@github.com>\nwrote:\n\n> Hi :)\n> The slicing API doesn't shuffle the data.\n> You can shuffle and then take a subset of your dataset with\n>\n> # shuffle and take the first 100 examplesdataset = dataset.shuffle(seed=42).select(range(100))\n>\n> You can find more information about shuffling and selecting rows in the\n> documentation:\n> https:\/\/huggingface.co\/docs\/datasets\/processing.html#selecting-sorting-shuffling-splitting-rows\n>\n> \u2014\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https:\/\/github.com\/huggingface\/datasets\/issues\/1747#issuecomment-762278134>,\n> or unsubscribe\n> <https:\/\/github.com\/notifications\/unsubscribe-auth\/AM3GZM5D5MDPLJGI4IG3UADS2Q7GPANCNFSM4WHLOZJQ>\n> .\n>\n"],"created_at":1610978935000,"updated_at":1610981134000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI need to slice a dataset with random seed, I looked into documentation here https:\/\/huggingface.co\/docs\/datasets\/splits.html \r\nI could not find a seed option, could you assist me please how I can get a slice for different seeds?\r\nthank you.\r\n@lhoestq ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1747\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1746","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1746\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1746\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1746\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1746","id":788188184,"node_id":"MDExOlB1bGxSZXF1ZXN0NTU2NzQxMjIw","number":1746,"title":"Fix release conda worflow","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1610969350000,"updated_at":1610969484000,"closed_at":1610969483000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1746","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1746","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1746.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1746.patch"},"body":"The current workflow yaml file is not valid according to https:\/\/github.com\/huggingface\/datasets\/actions\/runs\/487638110","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1746\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1745","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1745\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1745\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1745\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1745","id":787838256,"node_id":"MDU6SXNzdWU3ODc4MzgyNTY=","number":1745,"title":"difference between wsc and wsc.fixed for 
superglue","user":{"login":"ghost","id":10137,"node_id":"MDQ6VXNlcjEwMTM3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10137?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ghost","html_url":"https:\/\/github.com\/ghost","followers_url":"https:\/\/api.github.com\/users\/ghost\/followers","following_url":"https:\/\/api.github.com\/users\/ghost\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ghost\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ghost\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ghost\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ghost\/orgs","repos_url":"https:\/\/api.github.com\/users\/ghost\/repos","events_url":"https:\/\/api.github.com\/users\/ghost\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ghost\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["From the description given in the dataset script for `wsc.fixed`:\r\n```\r\nThis version fixes issues where the spans are not actually substrings of the text.\r\n```"],"created_at":1610931019000,"updated_at":1610967763000,"closed_at":1610931574000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI see two versions of wsc in superglue, and I am not sure what is the differences and which one is the original one. could you help to discuss the differences? thanks @lhoestq ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1745\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1744","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1744\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1744\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1744\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1744","id":787649811,"node_id":"MDExOlB1bGxSZXF1ZXN0NTU2MzA0MjU4","number":1744,"title":"Add missing \"brief\" entries to reuters","user":{"login":"jbragg","id":2238344,"node_id":"MDQ6VXNlcjIyMzgzNDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2238344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jbragg","html_url":"https:\/\/github.com\/jbragg","followers_url":"https:\/\/api.github.com\/users\/jbragg\/followers","following_url":"https:\/\/api.github.com\/users\/jbragg\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jbragg\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jbragg\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jbragg\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jbragg\/orgs","repos_url":"https:\/\/api.github.com\/users\/jbragg\/repos","events_url":"https:\/\/api.github.com\/users\/jbragg\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jbragg\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq I ran `make style` but CI code quality still failing and I don't have access to logs","It's also likely that due to the previous placement of 
the field initialization, much of the data about topics etc was simply wrong and carried over from previous entries. Model scores seem to improve significantly with this PR."],"created_at":1610870329000,"updated_at":1610969169000,"closed_at":1610969169000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1744","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1744","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1744.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1744.patch"},"body":"This brings the number of examples for ModApte to match the stated `Training set (9,603 docs)...Test Set (3,299 docs)`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1744\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1743","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1743\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1743\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1743\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1743","id":787631412,"node_id":"MDU6SXNzdWU3ODc2MzE0MTI=","number":1743,"title":"Issue while Creating Custom Metric","user":{"login":"gchhablani","id":29076344,"node_id":"MDQ6VXNlcjI5MDc2MzQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29076344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gchhablani","html_url":"https:\/\/github.com\/gchhablani","followers_url":"https:\/\/api.github.com\/users\/gchhablani\/followers","following_url":"https:\/\/api.github.com\/users\/gchhablani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gchhablani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gchhablani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gchhablani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gchhablani\/orgs","repos_url":"https:\/\/api.github.com\/users\/gchhablani\/repos","events_url":"https:\/\/api.github.com\/users\/gchhablani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gchhablani\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Currently it's only possible to define the features for the two columns `references` and `predictions`.\r\nThe data for these columns can then be passed to `metric.add_batch` and `metric.compute`.\r\nInstead of defining more columns `text`, `offset_mapping` and `ground` you must include them in either references and predictions.\r\n\r\nFor example \r\n```python\r\nfeatures = datasets.Features({\r\n 'predictions':datasets.Sequence(datasets.Value(\"int32\")),\r\n \"references\": datasets.Sequence({\r\n \"references_ids\": datasets.Value(\"int32\"),\r\n \"offset_mapping\": datasets.Value(\"int32\"),\r\n 'text': datasets.Value('string'),\r\n \"ground\": datasets.Value(\"int32\")\r\n }),\r\n})\r\n```\r\n\r\nAnother option would be to simply have the two features like \r\n```python\r\nfeatures = datasets.Features({\r\n 'predictions':datasets.Sequence(datasets.Value(\"int32\")),\r\n \"references\": 
datasets.Sequence(datasets.Value(\"int32\")),\r\n})\r\n```\r\nand keep `offset_mapping`, `text` and `ground` as as parameters for the computation (i.e. kwargs when calling `metric.compute`).\r\n\r\n\r\nWhat is the metric you would like to implement ?\r\n\r\nI'm asking since we consider allowing additional fields as requested in the `Comet` metric (see PR and discussion [here](https:\/\/github.com\/huggingface\/datasets\/pull\/1577)) and I'd like to know if it's something that can be interesting for users.\r\n\r\nWhat do you think ?","Hi @lhoestq,\r\n\r\nI am doing text segmentation and the metric is effectively dice score on character offsets. So I need to pass the actual spans and I want to be able to get the spans based on predictions using offset_mapping.\r\n\r\nIncluding them in references seems like a good idea. I'll try it out and get back to you. If there's a better way to write a metric function for the same, please let me know."],"created_at":1610866874000,"updated_at":1611333900000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi Team,\r\n\r\nI am trying to create a custom metric for my training as follows, where f1 is my own metric:\r\n\r\n```python\r\n def _info(self):\r\n # TODO: Specifies the datasets.MetricInfo object\r\n return datasets.MetricInfo(\r\n # This is the description that will appear on the metrics page.\r\n description=_DESCRIPTION,\r\n citation=_CITATION,\r\n inputs_description=_KWARGS_DESCRIPTION,\r\n # This defines the format of each prediction and reference\r\n features = datasets.Features({'predictions':datasets.Sequence(datasets.Value(\"int32\")), \"references\": datasets.Sequence(datasets.Value(\"int32\")),\"offset_mapping\":datasets.Sequence(datasets.Value(\"int32\")),'text':datasets.Sequence(datasets.Value('string')),\"ground\":datasets.Sequence(datasets.Value(\"int32\")),}),\r\n # Homepage of the metric for documentation\r\n homepage=\"http:\/\/metric.homepage\",\r\n # Additional links to the codebase or references\r\n codebase_urls=[\"http:\/\/github.com\/path\/to\/codebase\/of\/new_metric\"],\r\n reference_urls=[\"http:\/\/path.to.reference.url\/new_metric\"]\r\n )\r\n\r\n def _compute(self,predictions,references,text,offset_mapping,spans):\r\n\r\n pred_spans = []\r\n\r\n for i,preds in enumerate(predictions):\r\n current_preds = []\r\n for j,token_preds in enumerate(preds):\r\n if (preds>0.5):\r\n current_preds+=list(range(offset_mapping[i][j][0],offset_mapping[i][j][1]))\r\n pred_spans.append(current_spans)\r\n \r\n return {\r\n \"Token Wise F1\": f1_score(references,predictions,labels=[0,1]),\r\n \"Offset Wise F1\": np.mean([f1(preds,gold) for preds,fold in zip(pred_spans,ground)])\r\n }\r\n\r\n```\r\n\r\nI believe this is not correct. But that's not the issue I am facing right now. 
I get this error :\r\n```python\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-144-ed7349b50821> in <module>()\r\n----> 1 new_metric.compute(predictions=inputs[\"labels\"],references=inputs[\"labels\"], text=inputs[\"text\"], offset_mapping=inputs[\"offset_mapping\"],ground=inputs[\"ground\"] )\r\n\r\n2 frames\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/features.py in encode_batch(self, batch)\r\n 802 encoded_batch = {}\r\n 803 if set(batch) != set(self):\r\n--> 804 print(batch)\r\n 805 print(self)\r\n 806 raise ValueError(\"Column mismatch between batch {} and features {}\".format(set(batch), set(self)))\r\n\r\nValueError: Column mismatch between batch {'references', 'predictions'} and features {'ground', 'predictions', 'offset_mapping', 'text', 'references'}\r\n```\r\nOn checking the features.py file, I see the call is made from add_batch() in metrics.py which only takes in predictions and references.\r\n\r\nHow do I make my custom metric work? Will it work with a trainer even if I am able to make this metric work?\r\n\r\nThanks,\r\nGunjan","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1743\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1742","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1742\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1742\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1742\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1742","id":787623640,"node_id":"MDExOlB1bGxSZXF1ZXN0NTU2MjgyMDYw","number":1742,"title":"Add GLUE Compat (compatible with transformers<3.5.0)","user":{"login":"JetRunner","id":22514219,"node_id":"MDQ6VXNlcjIyNTE0MjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22514219?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JetRunner","html_url":"https:\/\/github.com\/JetRunner","followers_url":"https:\/\/api.github.com\/users\/JetRunner\/followers","following_url":"https:\/\/api.github.com\/users\/JetRunner\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JetRunner\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JetRunner\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JetRunner\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JetRunner\/orgs","repos_url":"https:\/\/api.github.com\/users\/JetRunner\/repos","events_url":"https:\/\/api.github.com\/users\/JetRunner\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JetRunner\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Maybe it would be simpler to just overwrite the order of the label classes of the `glue` dataset ?\r\n```python\r\nmnli = load_dataset(\"glue\", \"mnli\", label_classes=[\"contradiction\", \"entailment\", \"neutral\"])\r\n```","Sounds good. 
Will close the issue if that works."],"created_at":1610862865000,"updated_at":1617021810000,"closed_at":1617021810000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1742","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1742","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1742.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1742.patch"},"body":"Link to our discussion on Slack (HF internal)\r\nhttps:\/\/huggingface.slack.com\/archives\/C014N4749J9\/p1609668119337400\r\n\r\nThe next step is to add a compatible option in the new `run_glue.py`\r\n\r\nI duplicated `glue` and made the following changes:\r\n1. Change the name to `glue_compat`.\r\n2. Change the label assignments for MNLI and AX.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1742\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1741","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1741\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1741\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1741\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1741","id":787327060,"node_id":"MDU6SXNzdWU3ODczMjcwNjA=","number":1741,"title":"error when run fine_tuning on text_classification","user":{"login":"XiaoYang66","id":43234824,"node_id":"MDQ6VXNlcjQzMjM0ODI0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/43234824?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/XiaoYang66","html_url":"https:\/\/github.com\/XiaoYang66","followers_url":"https:\/\/api.github.com\/users\/XiaoYang66\/followers","following_url":"https:\/\/api.github.com\/users\/XiaoYang66\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/XiaoYang66\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/XiaoYang66\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/XiaoYang66\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/XiaoYang66\/orgs","repos_url":"https:\/\/api.github.com\/users\/XiaoYang66\/repos","events_url":"https:\/\/api.github.com\/users\/XiaoYang66\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/XiaoYang66\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["none"],"created_at":1610763799000,"updated_at":1610764768000,"closed_at":1610764758000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"dataset:sem_eval_2014_task_1\r\npretrained_model:bert-base-uncased\r\n\r\nerror description:\r\nwhen i use these resoruce to train fine_tuning a text_classification on sem_eval_2014_task_1,there always be some problem(when i use other dataset ,there exist the error too). 
And i followed the colab code (url:https:\/\/colab.research.google.com\/github\/huggingface\/notebooks\/blob\/master\/examples\/text_classification.ipynb#scrollTo=TlqNaB8jIrJW).\r\n\r\n\r\nthe error is like this :\r\n`File \"train.py\", line 69, in <module>\r\n trainer.train()\r\n File \"\/home\/projects\/anaconda3\/envs\/calibration\/lib\/python3.7\/site-packages\/transformers\/trainer.py\", line 784, in train\r\n for step, inputs in enumerate(epoch_iterator):\r\n File \"\/home\/projects\/anaconda3\/envs\/calibration\/lib\/python3.7\/site-packages\/torch\/utils\/data\/dataloader.py\", line 435, in __next__\r\n data = self._next_data()\r\n File \"\/home\/projects\/anaconda3\/envs\/calibration\/lib\/python3.7\/site-packages\/torch\/utils\/data\/dataloader.py\", line 475, in _next_data\r\n data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n File \"\/home\/projects\/anaconda3\/envs\/calibration\/lib\/python3.7\/site-packages\/torch\/utils\/data\/_utils\/fetch.py\", line 44, in fetch\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"\/home\/projects\/anaconda3\/envs\/calibration\/lib\/python3.7\/site-packages\/torch\/utils\/data\/_utils\/fetch.py\", line 44, in <listcomp>\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\nKeyError: 2`\r\n\r\nthis is my code :\r\n```dataset_name = 'sem_eval_2014_task_1'\r\nnum_labels_size = 3\r\nbatch_size = 4\r\nmodel_checkpoint = 'bert-base-uncased'\r\nnumber_train_epoch = 5\r\n\r\ndef tokenize(batch):\r\nreturn tokenizer(batch['premise'], batch['hypothesis'], truncation=True, )\r\n\r\ndef compute_metrics(pred):\r\nlabels = pred.label_ids\r\npreds = pred.predictions.argmax(-1)\r\nprecision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='micro')\r\nacc = accuracy_score(labels, preds)\r\nreturn {\r\n'accuracy': acc,\r\n'f1': f1,\r\n'precision': precision,\r\n'recall': recall\r\n}\r\n\r\nmodel = BertForSequenceClassification.from_pretrained(model_checkpoint, num_labels=num_labels_size)\r\ntokenizer = BertTokenizerFast.from_pretrained(model_checkpoint, use_fast=True)\r\n\r\ntrain_dataset = load_dataset(dataset_name, split='train')\r\ntest_dataset = load_dataset(dataset_name, split='test')\r\n\r\ntrain_encoded_dataset = train_dataset.map(tokenize, batched=True)\r\ntest_encoded_dataset = test_dataset.map(tokenize, batched=True)\r\n\r\nargs = TrainingArguments(\r\noutput_dir='.\/results',\r\nevaluation_strategy=\"epoch\",\r\nlearning_rate=2e-5,\r\nper_device_train_batch_size=batch_size,\r\nper_device_eval_batch_size=batch_size,\r\nnum_train_epochs=number_train_epoch,\r\nweight_decay=0.01,\r\ndo_predict=True,\r\n)\r\ntrainer = Trainer(\r\nmodel=model,\r\nargs=args,\r\ncompute_metrics=compute_metrics,\r\ntrain_dataset=train_encoded_dataset,\r\neval_dataset=test_encoded_dataset,\r\ntokenizer=tokenizer\r\n)\r\n\r\ntrainer.train()\r\ntrainer.evaluate()\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1741\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1740","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1740\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1740\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1740\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1740","id":787264605,"node_id":"MDExOlB1bGxSZXF1ZXN0NTU2MDA5NjM1","number":1740,"title":"add id_liputan6 dataset","user":{"login":"cahya-wirawan","id":7669893,"node_id":"MDQ6VXNlcjc2Njk4OTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7669893?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cahya-wirawan","html_url":"https:\/\/github.com\/cahya-wirawan","followers_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/followers","following_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/orgs","repos_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/repos","events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1610751514000,"updated_at":1611150086000,"closed_at":1611150086000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1740","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1740","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1740.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1740.patch"},"body":"id_liputan6 is a large-scale Indonesian summarization dataset. 
The articles were harvested from an online news portal, and obtain 215,827 document-summary pairs: https:\/\/arxiv.org\/abs\/2011.00679","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1740\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1739","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1739\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1739\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1739\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1739","id":787219138,"node_id":"MDExOlB1bGxSZXF1ZXN0NTU1OTY5Njgx","number":1739,"title":"fixes and improvements for the WebNLG loader","user":{"login":"Shimorina","id":9607332,"node_id":"MDQ6VXNlcjk2MDczMzI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9607332?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Shimorina","html_url":"https:\/\/github.com\/Shimorina","followers_url":"https:\/\/api.github.com\/users\/Shimorina\/followers","following_url":"https:\/\/api.github.com\/users\/Shimorina\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Shimorina\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Shimorina\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Shimorina\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Shimorina\/orgs","repos_url":"https:\/\/api.github.com\/users\/Shimorina\/repos","events_url":"https:\/\/api.github.com\/users\/Shimorina\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Shimorina\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The dataset card is fantastic!\r\n\r\nLooks good to me! 
Did you check that this still passes the slow tests with the existing dummy data?","Yes, I ran and passed all the tests specified in [this guide](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md#automatically-add-code-metadata), including the slow ones.","I just added the `from pathlib import Path` at the top to fix the script","I ran the tests locally and they all pass, merging","Thank you for the review!"],"created_at":1610747123000,"updated_at":1611930846000,"closed_at":1611917583000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1739","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1739","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1739.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1739.patch"},"body":"- fixes test sets loading in v3.0\r\n- adds additional fields for v3.0_ru\r\n- adds info to the WebNLG data card","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1739\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1738","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1738\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1738\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1738\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1738","id":786068440,"node_id":"MDExOlB1bGxSZXF1ZXN0NTU0OTk2NDU4","number":1738,"title":"Conda support","user":{"login":"LysandreJik","id":30755778,"node_id":"MDQ6VXNlcjMwNzU1Nzc4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/30755778?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/LysandreJik","html_url":"https:\/\/github.com\/LysandreJik","followers_url":"https:\/\/api.github.com\/users\/LysandreJik\/followers","following_url":"https:\/\/api.github.com\/users\/LysandreJik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/LysandreJik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/LysandreJik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/LysandreJik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/LysandreJik\/orgs","repos_url":"https:\/\/api.github.com\/users\/LysandreJik\/repos","events_url":"https:\/\/api.github.com\/users\/LysandreJik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/LysandreJik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Nice thanks :) \r\nNote that in `datasets` the tags are simply the version without the `v`. 
For example `1.2.1`.","Do you push tags only for versions?","Yes I've always used tags only for versions"],"created_at":1610637085000,"updated_at":1610705300000,"closed_at":1610705299000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1738","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1738","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1738.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1738.patch"},"body":"Will push a new version on anaconda cloud every time a tag starting with `v` is pushed (like `v1.2.2`).\r\n\r\nWill appear here: https:\/\/anaconda.org\/huggingface\/datasets\r\n\r\nDepends on `conda-forge` for now, so the following is required for installation:\r\n\r\n```\r\nconda install -c huggingface -c conda-forge datasets\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1738\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1737","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1737\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1737\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1737\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1737","id":785606286,"node_id":"MDExOlB1bGxSZXF1ZXN0NTU0NjA2ODg5","number":1737,"title":"update link in TLC to be github links","user":{"login":"chameleonTK","id":6429850,"node_id":"MDQ6VXNlcjY0Mjk4NTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6429850?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/chameleonTK","html_url":"https:\/\/github.com\/chameleonTK","followers_url":"https:\/\/api.github.com\/users\/chameleonTK\/followers","following_url":"https:\/\/api.github.com\/users\/chameleonTK\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/chameleonTK\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/chameleonTK\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/chameleonTK\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/chameleonTK\/orgs","repos_url":"https:\/\/api.github.com\/users\/chameleonTK\/repos","events_url":"https:\/\/api.github.com\/users\/chameleonTK\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/chameleonTK\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for updating this!"],"created_at":1610592561000,"updated_at":1610619924000,"closed_at":1610619924000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1737","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1737","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1737.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1737.patch"},"body":"Base on this issue https:\/\/github.com\/huggingface\/datasets\/issues\/1064, I can now use the official 
links.\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1737\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1736","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1736\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1736\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1736\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1736","id":785433854,"node_id":"MDExOlB1bGxSZXF1ZXN0NTU0NDYyNjYw","number":1736,"title":"Adjust BrWaC dataset features name","user":{"login":"jonatasgrosman","id":5097052,"node_id":"MDQ6VXNlcjUwOTcwNTI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5097052?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jonatasgrosman","html_url":"https:\/\/github.com\/jonatasgrosman","followers_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/followers","following_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/orgs","repos_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/repos","events_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1610570344000,"updated_at":1610620178000,"closed_at":1610620178000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1736","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1736","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1736.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1736.patch"},"body":"I added this dataset some days ago, and today I used it to train some models and realized that the names of the features aren't so good.\r\n\r\nLooking at the current features hierarchy, we have \"paragraphs\" with a list of \"sentences\" with a list of \"sentences?!\". But the actual hierarchy is a \"text\" with a list of \"paragraphs\" with a list of \"sentences\".\r\n\r\nI confused myself trying to use the dataset with these names. 
So I think it's better to change it.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1736\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1735","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1735\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1735\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1735\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1735","id":785184740,"node_id":"MDExOlB1bGxSZXF1ZXN0NTU0MjUzMDcw","number":1735,"title":"Update add new dataset template","user":{"login":"sgugger","id":35901082,"node_id":"MDQ6VXNlcjM1OTAxMDgy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35901082?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sgugger","html_url":"https:\/\/github.com\/sgugger","followers_url":"https:\/\/api.github.com\/users\/sgugger\/followers","following_url":"https:\/\/api.github.com\/users\/sgugger\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sgugger\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sgugger\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sgugger\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sgugger\/orgs","repos_url":"https:\/\/api.github.com\/users\/sgugger\/repos","events_url":"https:\/\/api.github.com\/users\/sgugger\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sgugger\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Add new \"dataset\"? 
;)","Lol, too used to Transformers ;-)"],"created_at":1610550489000,"updated_at":1610637361000,"closed_at":1610637360000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1735","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1735","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1735.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1735.patch"},"body":"This PR fixes a few typos in the \"Add new dataset template\" and clarifies a bit what to do for the dummy data creation when the `auto_generate` flag can't work.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1735\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1734","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1734\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1734\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1734\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1734","id":784956707,"node_id":"MDExOlB1bGxSZXF1ZXN0NTU0MDYxMzMz","number":1734,"title":"Fix empty token bug for `thainer` and `lst20`","user":{"login":"cstorm125","id":15519308,"node_id":"MDQ6VXNlcjE1NTE5MzA4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15519308?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cstorm125","html_url":"https:\/\/github.com\/cstorm125","followers_url":"https:\/\/api.github.com\/users\/cstorm125\/followers","following_url":"https:\/\/api.github.com\/users\/cstorm125\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cstorm125\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cstorm125\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cstorm125\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cstorm125\/orgs","repos_url":"https:\/\/api.github.com\/users\/cstorm125\/repos","events_url":"https:\/\/api.github.com\/users\/cstorm125\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cstorm125\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1610531709000,"updated_at":1610620938000,"closed_at":1610620938000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1734","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1734","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1734.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1734.patch"},"body":"add a condition to check if tokens exist before yielding in `thainer` and `lst20`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1734\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1733","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1733\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1733\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1733\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1733","id":784903002,"node_id":"MDU6SXNzdWU3ODQ5MDMwMDI=","number":1733,"title":"connection issue with glue, what is the data url for glue? ","user":{"login":"ghost","id":10137,"node_id":"MDQ6VXNlcjEwMTM3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10137?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ghost","html_url":"https:\/\/github.com\/ghost","followers_url":"https:\/\/api.github.com\/users\/ghost\/followers","following_url":"https:\/\/api.github.com\/users\/ghost\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ghost\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ghost\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ghost\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ghost\/orgs","repos_url":"https:\/\/api.github.com\/users\/ghost\/repos","events_url":"https:\/\/api.github.com\/users\/ghost\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ghost\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hello @juliahane, which config of GLUE causes you trouble?\r\nThe URLs are defined in the dataset script source code: https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/glue\/glue.py"],"created_at":1610527060000,"updated_at":1628100835000,"closed_at":1628100835000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nmy codes sometimes fails due to connection issue with glue, could you tell me how I can have the URL datasets library is trying to read GLUE from to test the machines I am working on if there is an issue on my side or not\r\nthanks ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1733\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1732","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1732\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1732\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1732\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1732","id":784874490,"node_id":"MDExOlB1bGxSZXF1ZXN0NTUzOTkzNTAx","number":1732,"title":"[GEM Dataset] Added TurkCorpus, an evaluation dataset for sentence 
simplification.","user":{"login":"mounicam","id":11708999,"node_id":"MDQ6VXNlcjExNzA4OTk5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11708999?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mounicam","html_url":"https:\/\/github.com\/mounicam","followers_url":"https:\/\/api.github.com\/users\/mounicam\/followers","following_url":"https:\/\/api.github.com\/users\/mounicam\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mounicam\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mounicam\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mounicam\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mounicam\/orgs","repos_url":"https:\/\/api.github.com\/users\/mounicam\/repos","events_url":"https:\/\/api.github.com\/users\/mounicam\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mounicam\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thank you for the feedback! I updated the code. "],"created_at":1610524219000,"updated_at":1610619581000,"closed_at":1610619581000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1732","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1732","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1732.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1732.patch"},"body":"We want to use TurkCorpus for validation and testing of the sentence simplification task. ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1732\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1731","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1731\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1731\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1731\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1731","id":784744674,"node_id":"MDU6SXNzdWU3ODQ3NDQ2NzQ=","number":1731,"title":"Couldn't reach swda.py","user":{"login":"yangp725","id":13365326,"node_id":"MDQ6VXNlcjEzMzY1MzI2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13365326?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yangp725","html_url":"https:\/\/github.com\/yangp725","followers_url":"https:\/\/api.github.com\/users\/yangp725\/followers","following_url":"https:\/\/api.github.com\/users\/yangp725\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yangp725\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yangp725\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yangp725\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yangp725\/orgs","repos_url":"https:\/\/api.github.com\/users\/yangp725\/repos","events_url":"https:\/\/api.github.com\/users\/yangp725\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yangp725\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi 
@yangp725,\r\nThe SWDA has been added very recently and has not been released yet, thus it is not available in the `1.2.0` version of \ud83e\udd17`datasets`.\r\nYou can still access it by installing the latest version of the library (master branch), by following instructions in [this issue](https:\/\/github.com\/huggingface\/datasets\/issues\/1641#issuecomment-751571471).\r\nLet me know if this helps !","Thanks @SBrandeis ,\r\nProblem solved by downloading and installing the latest version datasets."],"created_at":1610506660000,"updated_at":1610536660000,"closed_at":1610536660000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"ConnectionError: Couldn't reach https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.2.0\/datasets\/swda\/swda.py\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1731\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1730","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1730\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1730\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1730\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1730","id":784617525,"node_id":"MDExOlB1bGxSZXF1ZXN0NTUzNzgxMDY0","number":1730,"title":"Add MNIST dataset","user":{"login":"sgugger","id":35901082,"node_id":"MDQ6VXNlcjM1OTAxMDgy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35901082?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sgugger","html_url":"https:\/\/github.com\/sgugger","followers_url":"https:\/\/api.github.com\/users\/sgugger\/followers","following_url":"https:\/\/api.github.com\/users\/sgugger\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sgugger\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sgugger\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sgugger\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sgugger\/orgs","repos_url":"https:\/\/api.github.com\/users\/sgugger\/repos","events_url":"https:\/\/api.github.com\/users\/sgugger\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sgugger\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1610488082000,"updated_at":1610533187000,"closed_at":1610533186000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1730","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1730","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1730.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1730.patch"},"body":"This PR adds the MNIST dataset to the library.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1730\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1729","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1729\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1729\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1729\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1729","id":784565898,"node_id":"MDU6SXNzdWU3ODQ1NjU4OTg=","number":1729,"title":"Is there support for Deep learning datasets?","user":{"login":"pablodz","id":28235457,"node_id":"MDQ6VXNlcjI4MjM1NDU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28235457?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/pablodz","html_url":"https:\/\/github.com\/pablodz","followers_url":"https:\/\/api.github.com\/users\/pablodz\/followers","following_url":"https:\/\/api.github.com\/users\/pablodz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/pablodz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/pablodz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/pablodz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/pablodz\/orgs","repos_url":"https:\/\/api.github.com\/users\/pablodz\/repos","events_url":"https:\/\/api.github.com\/users\/pablodz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/pablodz\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @ZurMaD!\r\nThanks for your interest in \ud83e\udd17 `datasets`. Support for image datasets is at an early stage, with CIFAR-10 added in #1617 \r\nMNIST is also on the way: #1730 \r\n\r\nIf you feel like adding another image dataset, I would advise starting by reading the [ADD_NEW_DATASET.md](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md) guide. New datasets are always very much appreciated \ud83d\ude80\r\n"],"created_at":1610482961000,"updated_at":1617164647000,"closed_at":1617164647000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I looked around this repository and looking the datasets I think that there's no support for images-datasets. Or am I missing something? 
For example to add a repo like this https:\/\/github.com\/DZPeru\/fish-datasets","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1729\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1728","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1728\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1728\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1728\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1728","id":784458342,"node_id":"MDU6SXNzdWU3ODQ0NTgzNDI=","number":1728,"title":"Add an entry to an arrow dataset","user":{"login":"ameet-1997","id":18645407,"node_id":"MDQ6VXNlcjE4NjQ1NDA3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/18645407?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ameet-1997","html_url":"https:\/\/github.com\/ameet-1997","followers_url":"https:\/\/api.github.com\/users\/ameet-1997\/followers","following_url":"https:\/\/api.github.com\/users\/ameet-1997\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ameet-1997\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ameet-1997\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ameet-1997\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ameet-1997\/orgs","repos_url":"https:\/\/api.github.com\/users\/ameet-1997\/repos","events_url":"https:\/\/api.github.com\/users\/ameet-1997\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ameet-1997\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @ameet-1997,\r\nI think what you are looking for is the `concatenate_datasets` function: https:\/\/huggingface.co\/docs\/datasets\/processing.html?highlight=concatenate#concatenate-several-datasets\r\n\r\nFor your use case, I would use the [`map` method](https:\/\/huggingface.co\/docs\/datasets\/processing.html?highlight=concatenate#processing-data-with-map) to transform the SQuAD sentences and the `concatenate` the original and mapped dataset.\r\n\r\nLet me know If this helps!","That's a great idea! Thank you so much!\r\n\r\nWhen I try that solution, I get the following error when I try to concatenate `datasets` and `modified_dataset`. I have also attached the output I get when I print out those two variables. Am I missing something?\r\n\r\nCode:\r\n``` python\r\ncombined_dataset = concatenate_datasets([datasets, modified_dataset])\r\n```\r\n\r\nError:\r\n```\r\nAttributeError: 'DatasetDict' object has no attribute 'features'\r\n```\r\n\r\nOutput:\r\n```\r\n(Pdb) datasets\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['attention_mask', 'input_ids', 'special_tokens_mask'],\r\n num_rows: 493\r\n })\r\n})\r\n(Pdb) modified_dataset\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['attention_mask', 'input_ids', 'special_tokens_mask'],\r\n num_rows: 493\r\n })\r\n})\r\n```\r\n\r\nThe error is stemming from the fact that the attribute `datasets.features` does not exist. Would it not be possible to use `concatenate_datasets` in such a case? 
Is there an alternate solution?","You should do `combined_dataset = concatenate_datasets([datasets['train'], modified_dataset['train']])`\r\n\r\nDidn't we talk about returning a Dataset instead of a DatasetDict with load_dataset and no split provided @lhoestq? Not sure it's the way to go but I'm wondering if it's not simpler for some use-cases.","> Didn't we talk about returning a Dataset instead of a DatasetDict with load_dataset and no split provided @lhoestq? Not sure it's the way to go but I'm wondering if it's not simpler for some use-cases.\r\n\r\nMy opinion is that users should always know in advance what type of objects they're going to get. Otherwise the development workflow on their side is going to be pretty chaotic with sometimes unexpected behaviors.\r\nFor instance is `split=` is not specified it's currently always returning a DatasetDict. And if `split=\"train\"` is given for example it's always returning a Dataset.","Thanks @thomwolf. Your solution worked!"],"created_at":1610474507000,"updated_at":1610997332000,"closed_at":1610997332000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Is it possible to add an entry to a dataset object?\r\n\r\n**Motivation: I want to transform the sentences in the dataset and add them to the original dataset**\r\n\r\nFor example, say we have the following code:\r\n\r\n``` python\r\nfrom datasets import load_dataset\r\n\r\n# Load a dataset and print the first examples in the training set\r\nsquad_dataset = load_dataset('squad')\r\nprint(squad_dataset['train'][0])\r\n```\r\n\r\nIs it possible to add an entry to `squad_dataset`? Something like the following?\r\n\r\n``` python\r\nsquad_dataset.append({'text': \"This is a new sentence\"})\r\n```\r\n\r\nThe motivation for doing this is that I want to transform the sentences in the squad dataset and add them to the original dataset.\r\n\r\nIf the above doesn't work, is there any other way of achieving the motivation mentioned above? 
Perhaps by creating a new arrow dataset by using the older one and the transformer sentences?\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1728\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1727","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1727\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1727\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1727\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1727","id":784435131,"node_id":"MDU6SXNzdWU3ODQ0MzUxMzE=","number":1727,"title":"BLEURT score calculation raises UnrecognizedFlagError","user":{"login":"nadavo","id":6603920,"node_id":"MDQ6VXNlcjY2MDM5MjA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6603920?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nadavo","html_url":"https:\/\/github.com\/nadavo","followers_url":"https:\/\/api.github.com\/users\/nadavo\/followers","following_url":"https:\/\/api.github.com\/users\/nadavo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nadavo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nadavo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nadavo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nadavo\/orgs","repos_url":"https:\/\/api.github.com\/users\/nadavo\/repos","events_url":"https:\/\/api.github.com\/users\/nadavo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nadavo\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Upgrading tensorflow to version 2.4.0 solved the issue.","I still have the same error even with TF 2.4.0.","And I have the same error with TF 2.4.1. I believe this issue should be reopened. Any ideas?!","I'm seeing the same issue with TF 2.4.1 when running the following in https:\/\/colab.research.google.com\/github\/huggingface\/datasets\/blob\/master\/notebooks\/Overview.ipynb:\r\n```\r\n!pip install git+https:\/\/github.com\/google-research\/bleurt.git\r\nreferences = [\"foo bar baz\", \"one two three\"]\r\nbleurt_metric = load_metric('bleurt')\r\npredictions = [\"foo bar\", \"four five six\"]\r\nbleurt_metric.compute(predictions=predictions, references=references)\r\n```","@aleSuglia @oscartackstrom - Are you getting the error when running your code in a Jupyter notebook ?\r\n\r\nI tried reproducing this error again, and was unable to do so from the python command line console in a virtual environment similar to the one I originally used (and unfortunately no longer have access to) when I first got the error. \r\nHowever, I've managed to reproduce the error by running the same code in a Jupyter notebook running a kernel from the same virtual environment.\r\nThis made me suspect that the problem is somehow related to the Jupyter notebook.\r\n\r\nMore environment details:\r\n```\r\nOS: Ubuntu Linux 18.04\r\nconda==4.8.3\r\npython==3.8.5\r\ndatasets==1.3.0\r\ntensorflow==2.4.0\r\nBLEURT==0.0.1\r\nnotebook==6.2.0\r\n```","This happens when running the notebook on colab. 
The issue seems to be that colab populates sys.argv with arguments not handled by bleurt.\r\n\r\nRunning this before calling bleurt fixes it:\r\n```\r\nimport sys\r\nsys.argv = sys.argv[:1]\r\n```\r\n\r\nNot the most elegant solution. Perhaps it needs to be fixed in the bleurt code itself rather than huggingface?\r\n\r\nThis is the output of `print(sys.argv)` when running on colab:\r\n```\r\n['\/usr\/local\/lib\/python3.7\/dist-packages\/ipykernel_launcher.py', '-f', '\/root\/.local\/share\/jupyter\/runtime\/kernel-a857a78c-44d6-4b9d-b18a-030b858ee327.json']\r\n```","I got the error when running it from the command line. It looks more like an error that should be fixed in the BLEURT codebase.","Seems to be a known issue in the bleurt codebase: https:\/\/github.com\/google-research\/bleurt\/issues\/24.","Hi, the problem should be solved now."],"created_at":1610472422000,"updated_at":1618266101000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Calling the `compute` method for **bleurt** metric fails with an `UnrecognizedFlagError` for `FLAGS.bleurt_batch_size`. \r\n\r\nMy environment:\r\n```\r\npython==3.8.5\r\ndatasets==1.2.0\r\ntensorflow==2.3.1\r\ncudatoolkit==11.0.221\r\n```\r\n\r\nTest code for reproducing the error:\r\n```\r\nfrom datasets import load_metric\r\nbleurt = load_metric('bleurt')\r\ngen_text = \"I am walking on the promenade today\"\r\nref_text = \"I am walking along the promenade on this sunny day\"\r\nbleurt.compute(predictions=[test_text], references=[test_text])\r\n```\r\n\r\nError Output:\r\n```\r\nUsing default BLEURT-Base checkpoint for sequence maximum length 128. You can use a bigger model for better results with e.g.: datasets.load_metric('bleurt', 'bleurt-large-512').\r\nINFO:tensorflow:Reading checkpoint \/home\/ubuntu\/.cache\/huggingface\/metrics\/bleurt\/default\/downloads\/extracted\/9aee35580225730ac5422599f35c4986e4c49cafd08082123342b1019720dac4\/bleurt-base-128.\r\nINFO:tensorflow:Config file found, reading.\r\nINFO:tensorflow:Will load checkpoint bert_custom\r\nINFO:tensorflow:Performs basic checks...\r\nINFO:tensorflow:... name:bert_custom\r\nINFO:tensorflow:... vocab_file:vocab.txt\r\nINFO:tensorflow:... bert_config_file:bert_config.json\r\nINFO:tensorflow:... do_lower_case:True\r\nINFO:tensorflow:... 
max_seq_length:128\r\nINFO:tensorflow:Creating BLEURT scorer.\r\nINFO:tensorflow:Loading model...\r\nINFO:tensorflow:BLEURT initialized.\r\n---------------------------------------------------------------------------\r\nUnrecognizedFlagError Traceback (most recent call last)\r\n<ipython-input-12-8b3f4322318a> in <module>\r\n 2 gen_text = \"I am walking on the promenade today\"\r\n 3 ref_text = \"I am walking along the promenade on this sunny day\"\r\n----> 4 bleurt.compute(predictions=[gen_text], references=[ref_text])\r\n\r\n~\/anaconda3\/envs\/noved\/lib\/python3.8\/site-packages\/datasets\/metric.py in compute(self, *args, **kwargs)\r\n 396 references = self.data[\"references\"]\r\n 397 with temp_seed(self.seed):\r\n--> 398 output = self._compute(predictions=predictions, references=references, **kwargs)\r\n 399 \r\n 400 if self.buf_writer is not None:\r\n\r\n~\/.cache\/huggingface\/modules\/datasets_modules\/metrics\/bleurt\/b1de33e1cbbcb1dbe276c887efa1fad68c6aff913885108078fa1ad408908778\/bleurt.py in _compute(self, predictions, references)\r\n 103 \r\n 104 def _compute(self, predictions, references):\r\n--> 105 scores = self.scorer.score(references=references, candidates=predictions)\r\n 106 return {\"scores\": scores}\r\n\r\n~\/anaconda3\/envs\/noved\/lib\/python3.8\/site-packages\/bleurt\/score.py in score(self, references, candidates, batch_size)\r\n 164 \"\"\"\r\n 165 if not batch_size:\r\n--> 166 batch_size = FLAGS.bleurt_batch_size\r\n 167 \r\n 168 candidates, references = list(candidates), list(references)\r\n\r\n~\/anaconda3\/envs\/noved\/lib\/python3.8\/site-packages\/tensorflow\/python\/platform\/flags.py in __getattr__(self, name)\r\n 83 # a flag.\r\n 84 if not wrapped.is_parsed():\r\n---> 85 wrapped(_sys.argv)\r\n 86 return wrapped.__getattr__(name)\r\n 87 \r\n\r\n~\/anaconda3\/envs\/noved\/lib\/python3.8\/site-packages\/absl\/flags\/_flagvalues.py in __call__(self, argv, known_only)\r\n 643 for name, value in unknown_flags:\r\n 644 suggestions = _helpers.get_flag_suggestions(name, list(self))\r\n--> 645 raise _exceptions.UnrecognizedFlagError(\r\n 646 name, value, suggestions=suggestions)\r\n 647 \r\n\r\nUnrecognizedFlagError: Unknown command line flag 'f'\r\n```\r\n\r\nPossible Fix:\r\nModify `_compute` method https:\/\/github.com\/huggingface\/datasets\/blob\/7e64851a12263dc74d41c668167918484c8000ab\/metrics\/bleurt\/bleurt.py#L104\r\nto receive a `batch_size` argument, for example:\r\n```\r\ndef _compute(self, predictions, references, batch_size=1):\r\n scores = self.scorer.score(references=references, candidates=predictions, batch_size=batch_size)\r\n return {\"scores\": scores}\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1727\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1726","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1726\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1726\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1726\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1726","id":784336370,"node_id":"MDExOlB1bGxSZXF1ZXN0NTUzNTQ0ODg4","number":1726,"title":"Offline 
loading","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["It's maybe a bit annoying to add but could we maybe have as well a version of the local data loading scripts in the package?\r\nThe `text`, `json`, `csv`. Thinking about people like in #1725 who are expecting to be able to work with local data without downloading anything.\r\n\r\nMaybe we can add them to package_data or something?","Yes I mentioned this in #824 as well. I'm looking into it","Alright now `csv`, `json`, `text` and `pandas` are \"packaged datasets\", i.e. they're part of the `datasets` package, which makes them available in offline mode without any change in terms of API:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nd = load_dataset(\"csv\", data_files=[\"path\/to\/data.csv\"])\r\n```\r\n\r\nInstead of loading the dataset script from the module cache, it's loaded from inside the `datasets` package.\r\n\r\nI updated the test to still be able to fetch the dummy data files for those datasets from `datasets\/{text|csv|pandas|json}\/dummy` in the repo.","Alright now all test pass :)\r\n(I don't thank you windows)","LGTM! Since you're getting the local script's last modification date anyways do you think it might be a good idea to show it in the warning?","> LGTM! Since you're getting the local script's last modification date anyways do you think it might be a good idea to show it in the warning?\r\n\r\nYep good idea. I added the date in the warning. 
For example `(last modified on Mon Nov 30 11:01:56 2020)`"],"created_at":1610464917000,"updated_at":1611857122000,"closed_at":1611074552000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1726","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1726","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1726.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1726.patch"},"body":"As discussed in #824 it would be cool to make the library work in offline mode.\r\nCurrently if there's not internet connection then modules (datasets or metrics) that have already been loaded in the past can't be loaded and it raises a ConnectionError.\r\nThis is because `prepare_module` fetches online for the latest version of the module.\r\n\r\nTo make it work in offline mode one suggestion was to reload the latest local version of the module.\r\nI implemented that and I also raise a warning saying that the module that is loaded is the latest local version.\r\n```python\r\nlogger.warning(\r\n f\"Using the latest cached version of the module from {cached_module_path} since it \"\r\n f\"couldn't be found locally at {input_path} or remotely ({error_type_that_prevented_reaching_out_remote_stuff}).\"\r\n)\r\n```\r\n\r\nI added tests to make sure it works as expected and I needed to do a few changes in the code to be able to test things properly. In particular I added a parameter `hf_modules_cache` to `init_dynamic_modules` for testing purposes. It makes it possible to have temporary modules caches for testing.\r\n\r\nI also added a `offline` context utility that allows to test part of the code by making all the requests fail as if there was no internet.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1726\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1725","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1725\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1725\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1725\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1725","id":784182273,"node_id":"MDU6SXNzdWU3ODQxODIyNzM=","number":1725,"title":"load the local 
dataset","user":{"login":"xinjicong","id":41193842,"node_id":"MDQ6VXNlcjQxMTkzODQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/41193842?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/xinjicong","html_url":"https:\/\/github.com\/xinjicong","followers_url":"https:\/\/api.github.com\/users\/xinjicong\/followers","following_url":"https:\/\/api.github.com\/users\/xinjicong\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/xinjicong\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/xinjicong\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/xinjicong\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/xinjicong\/orgs","repos_url":"https:\/\/api.github.com\/users\/xinjicong\/repos","events_url":"https:\/\/api.github.com\/users\/xinjicong\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/xinjicong\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["You should rephrase your question or give more examples and details on what you want to do.\r\n\r\nit\u2019s not possible to understand it and help you with only this information.","sorry for that.\r\ni want to know how could i load the train set and the test set from the local ,which api or function should i use .\r\n","Did you try to follow the instructions in the documentation?\r\nHere: https:\/\/huggingface.co\/docs\/datasets\/loading_datasets.html#from-local-files","thanks a lot \r\ni find that the problem is i dont use vpn...\r\nso i have to keep my net work even if i want to load the local data ?","We will solve this soon (cf #1724)","thanks a lot"],"created_at":1610453575000,"updated_at":1614768943000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"your guidebook's example is like\r\n>>>from datasets import load_dataset\r\n>>> dataset = load_dataset('json', data_files='my_file.json')\r\nbut the first arg is path...\r\nso how should i do if i want to load the local dataset for model training?\r\ni will be grateful if you can help me handle this problem!\r\nthanks a lot!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1725\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1723","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1723\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1723\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1723\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1723","id":783982100,"node_id":"MDExOlB1bGxSZXF1ZXN0NTUzMjQ4MzU1","number":1723,"title":"ADD S3 support for downloading and uploading processed 
datasets","user":{"login":"philschmid","id":32632186,"node_id":"MDQ6VXNlcjMyNjMyMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32632186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/philschmid","html_url":"https:\/\/github.com\/philschmid","followers_url":"https:\/\/api.github.com\/users\/philschmid\/followers","following_url":"https:\/\/api.github.com\/users\/philschmid\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/philschmid\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/philschmid\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/philschmid\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/philschmid\/orgs","repos_url":"https:\/\/api.github.com\/users\/philschmid\/repos","events_url":"https:\/\/api.github.com\/users\/philschmid\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/philschmid\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I created the documentation for `FileSystem Integration for cloud storage` with loading and saving datasets to\/from a filesystem with an example of using `datasets.filesystem.S3Filesystem`. I added a note on the `Saving a processed dataset on disk and reload` saying that it is also possible to use other filesystems and cloud storages such as S3 with a link to the newly created documentation page from me. \r\nI Attach a screenshot of it here. \r\n![screencapture-localhost-5500-docs-build-html-filesystems-html-2021-01-19-17_16_10](https:\/\/user-images.githubusercontent.com\/32632186\/105062131-8d6a5c80-5a7a-11eb-90b0-f6128b758605.png)\r\n"],"created_at":1610435854000,"updated_at":1611680528000,"closed_at":1611680528000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1723","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1723","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1723.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1723.patch"},"body":"# What does this PR do?\r\n\r\nThis PR adds the functionality to load and save `datasets` from and to s3. \r\nYou can save `datasets` with either `Dataset.save_to_disk()` or `DatasetDict.save_to_disk`. \r\nYou can load `datasets` with either `load_from_disk` or `Dataset.load_from_disk()`, `DatasetDict.load_from_disk()`. \r\n\r\nLoading `csv` or `json` datasets from s3 is not implemented. \r\n\r\nTo save\/load datasets to s3 you either need to provide an `aws_profile`, which is set up on your machine, per default it uses the `default` profile or you have to pass an `aws_access_key_id` and `aws_secret_access_key`. 
\r\n\r\nThe implementation was done with the `fsspec` and `boto3`.\r\n\r\n\r\n### Example `aws_profile` :\r\n\r\n<details>\r\n\r\n```python\r\ndataset.save_to_disk(\"s3:\/\/moto-mock-s3-bucket\/datasets\/sdk\", aws_profile=\"hf-sm\")\r\n\r\nload_from_disk(\"s3:\/\/moto-mock-s3-bucket\/datasets\/sdk\", aws_profile=\"hf-sm\")\r\n```\r\n\r\n<\/details>\r\n\r\n\r\n### Example `aws_access_key_id` and `aws_secret_access_key` :\r\n\r\n<details>\r\n\r\n```python\r\ndataset.save_to_disk(\"s3:\/\/moto-mock-s3-bucket\/datasets\/sdk\",\r\n aws_access_key_id=\"fake_access_key\", \r\n aws_secret_access_key=\"fake_secret_key\"\r\n )\r\n\r\nload_from_disk(\"s3:\/\/moto-mock-s3-bucket\/datasets\/sdk\",\r\n aws_access_key_id=\"fake_access_key\", \r\n aws_secret_access_key=\"fake_secret_key\"\r\n )\r\n```\r\n\r\n<\/details>\r\n\r\nIf you want to load a dataset from a public s3 bucket you can pass `anon=True` \r\n\r\n### Example `anon=True` :\r\n\r\n<details>\r\n\r\n```python\r\ndataset.save_to_disk(\"s3:\/\/moto-mock-s3-bucket\/datasets\/sdk\", aws_profile=\"hf-sm\")\r\n\r\nload_from_disk(\"s3:\/\/moto-mock-s3-bucketdatasets\/sdk\",anon=True)\r\n```\r\n\r\n<\/details>\r\n\r\n### Full Example\r\n\r\n```python\r\nimport datasets\r\n\r\ndataset = datasets.load_dataset(\"imdb\")\r\nprint(f\"DatasetDict contains {len(dataset)} datasets\")\r\nprint(f\"train Dataset has the size of: {len(dataset['train'])}\")\r\n\r\ndataset.save_to_disk(\"s3:\/\/moto-mock-s3-bucket\/datasets\/sdk\", aws_profile=\"hf-sm\")\r\n\r\nremote_dataset = datasets.load_from_disk(\"s3:\/\/moto-mock-s3-bucket\/datasets\/sdk\", aws_profile=\"hf-sm\")\r\nprint(f\"DatasetDict contains {len(remote_dataset)} datasets\")\r\nprint(f\"train Dataset has the size of: {len(remote_dataset['train'])}\")\r\n```\r\n\r\nRelated to #878 \r\n\r\n\r\nI would also adjust the documentation after the code would be reviewed, as long as I leave the PR in \"draft\" status. 
Something that we can consider is renaming the functions and changing the `_disk` maybe to `_filesystem` \r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1723\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1724","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1724\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1724\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1724\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1724","id":784023338,"node_id":"MDU6SXNzdWU3ODQwMjMzMzg=","number":1724,"title":"could not run models on a offline server successfully","user":{"login":"lkcao","id":49967236,"node_id":"MDQ6VXNlcjQ5OTY3MjM2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/49967236?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lkcao","html_url":"https:\/\/github.com\/lkcao","followers_url":"https:\/\/api.github.com\/users\/lkcao\/followers","following_url":"https:\/\/api.github.com\/users\/lkcao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lkcao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lkcao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lkcao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lkcao\/orgs","repos_url":"https:\/\/api.github.com\/users\/lkcao\/repos","events_url":"https:\/\/api.github.com\/users\/lkcao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lkcao\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Transferred to `datasets` based on the stack trace.","Hi @lkcao !\r\nYour issue is indeed related to `datasets`. In addition to installing the package manually, you will need to download the `text.py` script on your server. You'll find it (under `datasets\/datasets\/text`: https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/text\/text.py.\r\nThen you can change the line 221 of `run_mlm_new.py` into:\r\n```python\r\n datasets = load_dataset('\/path\/to\/text.py', data_files=data_files)\r\n```\r\nWhere `\/path\/to\/text.py` is the path on the server where you saved the `text.py` script.","We're working on including the local dataset builders (csv, text, json etc.) 
directly in the `datasets` package so that they can be used offline","The local dataset builders (csv, text , json and pandas) are now part of the `datasets` package since #1726 :)\r\nYou can now use them offline\r\n```python\r\ndatasets = load_dataset('text', data_files=data_files)\r\n```\r\n\r\nWe'll do a new release soon","> The local dataset builders (csv, text , json and pandas) are now part of the `datasets` package since #1726 :)\r\n> You can now use them offline\r\n> \r\n> ```python\r\n> datasets = load_dataset('text', data_files=data_files)\r\n> ```\r\n> \r\n> We'll do a new release soon\r\n\r\nso the new version release now?","Yes it's been available since datasets 1.3.0 !"],"created_at":1610431686000,"updated_at":1614785549000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi, I really need your help about this.\r\nI am trying to fine-tuning a RoBERTa on a remote server, which is strictly banning internet. I try to install all the packages by hand and try to run run_mlm.py on the server. It works well on colab, but when I try to run it on this offline server, it shows:\r\n![image](https:\/\/user-images.githubusercontent.com\/49967236\/104276256-25a88600-546a-11eb-9776-8ec695dfa24e.png)\r\n\r\nis there anything I can do? Is it possible to download all the things in cache and upload it to the server? Please help me out...","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1724\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1722","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1722\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1722\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1722\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1722","id":783921679,"node_id":"MDExOlB1bGxSZXF1ZXN0NTUzMTk3MTg4","number":1722,"title":"Added unfiltered versions of the Wiki-Auto training data for the GEM simplification task.","user":{"login":"mounicam","id":11708999,"node_id":"MDQ6VXNlcjExNzA4OTk5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11708999?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mounicam","html_url":"https:\/\/github.com\/mounicam","followers_url":"https:\/\/api.github.com\/users\/mounicam\/followers","following_url":"https:\/\/api.github.com\/users\/mounicam\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mounicam\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mounicam\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mounicam\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mounicam\/orgs","repos_url":"https:\/\/api.github.com\/users\/mounicam\/repos","events_url":"https:\/\/api.github.com\/users\/mounicam\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mounicam\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The current version of Wiki-Auto dataset contains a filtered version of the aligned dataset. 
The commit adds unfiltered versions of the data that can be useful the GEM task participants."],"created_at":1610429164000,"updated_at":1610475293000,"closed_at":1610472957000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1722","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1722","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1722.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1722.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1722\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1721","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1721\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1721\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1721\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1721","id":783828428,"node_id":"MDExOlB1bGxSZXF1ZXN0NTUzMTIyODQ5","number":1721,"title":"[Scientific papers] Mirror datasets zip","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> Nice !\r\n> \r\n> Could you try to reduce the size of the dummy_data.zip files ? they're quite big (300KB)\r\n\r\nYes, I think it might make sense to enhance the tool a tiny bit to prevent this automatically","That's the lightest I can make it...it's long-range summarization so a single sample has ~11000 tokens. 
","Ok thanks :)","Awesome good to merge for me :-) "],"created_at":1610414140000,"updated_at":1610452155000,"closed_at":1610451707000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1721","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1721","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1721.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1721.patch"},"body":"Datasets were uploading to https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/scientific_papers\/1.1.1\/arxiv-dataset.zip and https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/scientific_papers\/1.1.1\/pubmed-dataset.zip respectively to escape google drive quota and enable faster download. ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1721\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1720","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1720\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1720\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1720\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1720","id":783721833,"node_id":"MDExOlB1bGxSZXF1ZXN0NTUzMDM0MzYx","number":1720,"title":"Adding the NorNE dataset for NER","user":{"login":"versae","id":173537,"node_id":"MDQ6VXNlcjE3MzUzNw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/173537?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/versae","html_url":"https:\/\/github.com\/versae","followers_url":"https:\/\/api.github.com\/users\/versae\/followers","following_url":"https:\/\/api.github.com\/users\/versae\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/versae\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/versae\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/versae\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/versae\/orgs","repos_url":"https:\/\/api.github.com\/users\/versae\/repos","events_url":"https:\/\/api.github.com\/users\/versae\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/versae\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Quick question, @lhoestq. In this specific dataset, two special types `GPE_LOC` and `GPE_ORG` can easily be altered depending on the task, choosing either the more general `GPE` tag or the more specific `LOC`\/`ORG` tags, conflating them with the other annotations of the same type. However, I have not found an easy way to implement that. Using splits or configs does not seem appropriate.\r\n","About the `GPE_LOC` and `GPE_ORG`. The original NorNE paper in which they published the dataset, does an evaluation on three different NER tag sets, one considering `GPE_LOC` and `GPE_ORG` as they are, another changing them to be just `GPE`, and another one by changing it to become `LOC` and `ORG`. The called these sets, `norne-full`, `norne-7`, and `norne-9`. 
What I would like is to provide a way for the user of this dataset to get `norne-7` and `norne-9` without having to duplicate the code.","Ok I see !\r\nI guess you can have three configurations `norne-full`, `norne-7` and `norne-9`.\r\nEach config can have different feature types. You can simply check for the `self.config.name` in the `_info(self)` method and pick the right ClassLabel names accordingly. And then in `_generate_examples` as well you can check for `self.config.name` to know how to process the labels to yield either GPE_LOC\/GPE_ORG, GPE or LOC\/ORG","But I'm already using the configurations for the different language\nvarieties. So you propose having something like `bokmaal`, `bokmaal-7`,\netc? Would there be a different way? If not, I'd be fine the corpus as it\nis until we come up with a solution. Thanks in any case.\n\n--\nSent using a cell-phone, so sorry for the typos and wrong auto-corrections.\n\nOn Tue, Jan 19, 2021, 4:56 PM Quentin Lhoest <notifications@github.com>\nwrote:\n\n> Ok I see !\n> I guess you can have three configurations norne-full, norne-7 and norne-9.\n> Each config can have different feature types. You can simply check for the\n> self.config.name in the _info(self) method and pick the right ClassLabel\n> names accordingly. And then in _generate_examples as well you can check\n> for self.config.name to know how to process the labels to yield either\n> GPE_LOC\/GPE_ORG, GPE or LOC\/ORG\n>\n> \u2014\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https:\/\/github.com\/huggingface\/datasets\/pull\/1720#issuecomment-762936612>,\n> or unsubscribe\n> <https:\/\/github.com\/notifications\/unsubscribe-auth\/AABKLYOWNDBD76WZPJHFCWLS2WTTHANCNFSM4V6GSUQA>\n> .\n>\n","The first option about having configurations like `bokmaal-7`, `bokmaal-9` etc. would definitely work.\r\n\r\nA second option would be to add a parameter `ner_tags_set` to `NorneConfig` and then one could load them with\r\n```python\r\nbokmaal_full = load_dataset(\"norne\", \"bokmaal\", ner_tags_set=\"norne-full\")\r\n```\r\nfor example.\r\n\r\nWhat do you think ?","Hi @versae have you had a chance to consider one of the two options for the config ?\r\nI think both are ok but I have a small preference for the first one since it's simpler to implement.\r\n\r\nFeel free to ping me if you have questions or if I can help :) ","Hi @lhoestq. Agree, option 1 seems easier to implement. Just haven't had bandwidth to get to it yet. Hopefully starting next week I'll be able to update the PR.","Hi @versae ! Did you manage to add the configurations ? Let me know if we can help you on this","Hi @lhoestq, I do actually have to code ready, just need to generate the dummy data for it. 
","One thing I don't know how to do is to make `_info(self)` return the different NER tags in its `DatasetInfo` object depending on the specific config.","OK, I think it's ready now.","Closing this one and opening a new one with a cleaner commit log.","All set now in #2154."],"created_at":1610400853000,"updated_at":1617200629000,"closed_at":1617199997000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1720","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1720","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1720.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1720.patch"},"body":"NorNE is a manually annotated corpus of named entities which extends the annotation of the existing Norwegian Dependency Treebank. Comprising both of the official standards of written Norwegian (Bokm\u00e5l and Nynorsk), the corpus contains around 600,000 tokens and annotates a rich set of entity types including persons, organizations, locations, geo-political entities, products, and events, in addition to a class corresponding to nominals derived from names.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1720\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1719","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1719\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1719\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1719\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1719","id":783557542,"node_id":"MDExOlB1bGxSZXF1ZXN0NTUyODk3MzY4","number":1719,"title":"Fix column list comparison in transmit format","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1610385836000,"updated_at":1610390703000,"closed_at":1610390702000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1719","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1719","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1719.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1719.patch"},"body":"As noticed in #1718 the cache 
might not reload the cache files when new columns were added.\r\nThis is because of an issue in `transmit_format` where the column list comparison fails because the order was not deterministic. This causes the `transmit_format` to apply an unnecessary `set_format` transform with shuffled column names.\r\n\r\nI fixed that by sorting the columns for the comparison and added a test.\r\n\r\nTo properly test that I added a third column `col_3` to the dummy_dataset used for tests.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1719\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1718","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1718\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1718\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1718\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1718","id":783474753,"node_id":"MDU6SXNzdWU3ODM0NzQ3NTM=","number":1718,"title":"Possible cache miss in datasets","user":{"login":"ofirzaf","id":18296312,"node_id":"MDQ6VXNlcjE4Mjk2MzEy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/18296312?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ofirzaf","html_url":"https:\/\/github.com\/ofirzaf","followers_url":"https:\/\/api.github.com\/users\/ofirzaf\/followers","following_url":"https:\/\/api.github.com\/users\/ofirzaf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ofirzaf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ofirzaf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ofirzaf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ofirzaf\/orgs","repos_url":"https:\/\/api.github.com\/users\/ofirzaf\/repos","events_url":"https:\/\/api.github.com\/users\/ofirzaf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ofirzaf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting !\r\nI was able to reproduce thanks to your code and find the origin of the bug.\r\nThe cache was not reusing the same file because one object was not deterministic. 
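A minimal, self-contained sketch of the non-determinism described here (not the library's actual fingerprinting code): converting a `set` of column names to a `list` gives no guaranteed order across Python sessions, so any hash or comparison built on that list can change between runs, while sorting first keeps it reproducible.

```python
# Illustration only: why an unsorted set-to-list conversion breaks reproducible
# comparisons and fingerprints across Python processes.
columns = {"input_ids", "attention_mask", "special_tokens_mask"}

unordered = list(columns)        # iteration order may differ between Python processes
deterministic = sorted(columns)  # always the same order

previous_columns = ["attention_mask", "input_ids", "special_tokens_mask"]
print(unordered == previous_columns)                   # may be False even though the sets match
print(sorted(unordered) == sorted(previous_columns))   # reliable comparison
```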
It comes from a conversion from `set` to `list` in the `datasets.arrrow_dataset.transmit_format` function, where the resulting list would not always be in the same order and therefore the function that computes the hash used by the cache would not always return the same result.\r\nI'm opening a PR to fix this.\r\n\r\nAlso we plan to do a new release in the coming days so you can expect the fix to be available soon.\r\nNote that you can still specify `cache_file_name=` in the second `map()` call to name the cache file yourself if you want to.","Thanks for the fast reply, waiting for the fix :)\r\n\r\nI tried to use `cache_file_names` and wasn't sure how, I tried to give it the following:\r\n```\r\ntokenized_datasets = tokenized_datasets.map(\r\n group_texts,\r\n batched=True,\r\n num_proc=60,\r\n load_from_cache_file=True,\r\n cache_file_names={k: f'.cache\/{str(k)}' for k in tokenized_datasets}\r\n)\r\n```\r\n\r\nand got an error:\r\n```\r\nmultiprocess.pool.RemoteTraceback:\r\n\"\"\"\r\nTraceback (most recent call last):\r\n File \"\/venv\/lib\/python3.6\/site-packages\/multiprocess\/pool.py\", line 119, in worker\r\n result = (True, func(*args, **kwds))\r\n File \"\/venv\/lib\/python3.6\/site-packages\/datasets\/arrow_dataset.py\", line 157, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"\/venv\/lib\/python3.6\/site-packages\/datasets\/fingerprint.py\", line 163, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"\/venv\/lib\/python3.6\/site-packages\/datasets\/arrow_dataset.py\", line 1491, in _map_single\r\n tmp_file = tempfile.NamedTemporaryFile(\"wb\", dir=os.path.dirname(cache_file_name), delete=False)\r\n File \"\/usr\/lib\/python3.6\/tempfile.py\", line 690, in NamedTemporaryFile\r\n (fd, name) = _mkstemp_inner(dir, prefix, suffix, flags, output_type)\r\n File \"\/usr\/lib\/python3.6\/tempfile.py\", line 401, in _mkstemp_inner\r\n fd = _os.open(file, flags, 0o600)\r\nFileNotFoundError: [Errno 2] No such file or directory: '_00000_of_00060.cache\/tmpsvszxtop'\r\n\"\"\"\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"test.py\", line 48, in <module>\r\n cache_file_names={k: f'.cache\/{str(k)}' for k in tokenized_datasets}\r\n File \"\/venv\/lib\/python3.6\/site-packages\/datasets\/dataset_dict.py\", line 303, in map\r\n for k, dataset in self.items()\r\n File \"\/venv\/lib\/python3.6\/site-packages\/datasets\/dataset_dict.py\", line 303, in <dictcomp>\r\n for k, dataset in self.items()\r\n File \"\/venv\/lib\/python3.6\/site-packages\/datasets\/arrow_dataset.py\", line 1317, in map\r\n transformed_shards = [r.get() for r in results]\r\n File \"\/venv\/lib\/python3.6\/site-packages\/datasets\/arrow_dataset.py\", line 1317, in <listcomp>\r\n transformed_shards = [r.get() for r in results]\r\n File \"\/venv\/lib\/python3.6\/site-packages\/multiprocess\/pool.py\", line 644, in get\r\n raise self._value\r\nFileNotFoundError: [Errno 2] No such file or directory: '_00000_of_00060.cache\/tmpsvszxtop'\r\n```\r\n","The documentation says\r\n```\r\ncache_file_names (`Optional[Dict[str, str]]`, defaults to `None`): Provide the name of a cache file to use to store the\r\n results of the computation instead of the automatically generated cache file name.\r\n You have to provide one :obj:`cache_file_name` per dataset in the dataset dictionary.\r\n```\r\nWhat is expected is simply the name of a file, not a path. 
The file will be located in the cache directory of the `wikitext` dataset. You can try again with something like\r\n```python\r\ncache_file_names = {k: f'tokenized_and_grouped_{str(k)}' for k in tokenized_datasets}\r\n```","Managed to get `cache_file_names` working and caching works well with it\r\nHad to make a small modification for it to work:\r\n```\r\ncache_file_names = {k: f'tokenized_and_grouped_{str(k)}.arrow' for k in tokenized_datasets}\r\n```","Another comment on `cache_file_names`, it doesn't save the produced cached files in the dataset's cache folder, it requires to give a path to an existing directory for it to work.\r\nI can confirm that this is how it works in `datasets==1.1.3`","Oh yes indeed ! Maybe we need to update the docstring to mention that it is a path","I fixed the docstring. Hopefully this is less confusing now: https:\/\/github.com\/huggingface\/datasets\/commit\/42ccc0012ba8864e6db1392430100f350236183a","I upgraded to the latest version and I encountered some strange behaviour, the script I posted in the OP doesn't trigger recalculation, however, if I add the following change it does trigger partial recalculation, I am not sure if its something wrong on my machine or a bug:\r\n```\r\nfrom datasets import load_dataset\r\nfrom transformers import AutoTokenizer\r\n\r\ndatasets = load_dataset('wikitext', 'wikitext-103-raw-v1')\r\ntokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', use_fast=True)\r\n\r\ncolumn_names = datasets[\"train\"].column_names\r\ntext_column_name = \"text\" if \"text\" in column_names else column_names[0]\r\ndef tokenize_function(examples):\r\n return tokenizer(examples[text_column_name], return_special_tokens_mask=True)\r\n# CHANGE\r\nprint('hello')\r\n# CHANGE\r\n\r\ntokenized_datasets = datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n...\r\n```\r\nI am using datasets in the `run_mlm.py` script in the transformers examples and I found that if I change the script without touching any of the preprocessing. it still triggers recalculation which is very weird\r\n\r\nEdit: accidently clicked the close issue button ","This is because the `group_texts` line definition changes (it is defined 3 lines later than in the previous call). Currently if a function is moved elsewhere in a script we consider it to be different.\r\n\r\nNot sure this is actually a good idea to keep this behavior though. We had this as a security in the early development of the lib but now the recursive hashing of objects is robust so we can probably remove that.\r\nMoreover we're already ignoring the line definition for lambda functions.","I opened a PR to change this, let me know what you think.","Sounds great, thank you for your quick responses and help! Looking forward for the next release.","I am having a similar issue where only the grouped files are loaded from cache while the tokenized ones aren't. I can confirm both datasets are being stored to file, but only the grouped version is loaded from cache. Not sure what might be going on. But I've tried to remove all kinds of non deterministic behaviour, but still no luck. 
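As a side note on the line-definition behaviour mentioned above, here is a small illustration, under the assumption that the fingerprint takes the function's definition line into account, of why merely moving a function a few lines down can look like a change; this is a sketch, not the library's hashing code.

```python
# Illustration: a function's compiled code object records the line it was
# defined on, so the same body defined at a different line looks different
# to anything that includes co_firstlineno in its hash.
def make_fn(padding_lines):
    src = "\n" * padding_lines + "def group_texts(examples):\n    return examples\n"
    namespace = {}
    exec(compile(src, "script.py", "exec"), namespace)
    return namespace["group_texts"]

fn_a = make_fn(0)   # defined on line 1
fn_b = make_fn(3)   # same body, defined three lines later

print(fn_a.__code__.co_firstlineno)  # 1
print(fn_b.__code__.co_firstlineno)  # 4
```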
Thanks for the help!\r\n\r\n\r\n```python\r\n # Datasets\r\n train = sorted(glob(args.data_dir + '*.{}'.format(args.ext)))\r\n if args.dev_split >= len(train):\r\n raise ValueError(\"Not enough dev files\")\r\n dev = []\r\n state = random.Random(1001)\r\n for _ in range(args.dev_split):\r\n dev.append(train.pop(state.randint(0, len(train) - 1)))\r\n\r\n max_seq_length = min(args.max_seq_length, tokenizer.model_max_length)\r\n\r\n def tokenize_function(examples):\r\n return tokenizer(examples['text'], return_special_tokens_mask=True)\r\n\r\n def group_texts(examples):\r\n # Concatenate all texts from our dataset and generate chunks of max_seq_length\r\n concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}\r\n total_length = len(concatenated_examples[list(examples.keys())[0]])\r\n # Truncate (not implementing padding)\r\n total_length = (total_length \/\/ max_seq_length) * max_seq_length\r\n # Split by chunks of max_seq_length\r\n result = {\r\n k: [t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)]\r\n for k, t in concatenated_examples.items()\r\n }\r\n return result\r\n\r\n datasets = load_dataset(\r\n 'text', name='DBNL', data_files={'train': train[:10], 'dev': dev[:5]}, \r\n cache_dir=args.data_cache_dir)\r\n datasets = datasets.map(tokenize_function, \r\n batched=True, remove_columns=['text'], \r\n cache_file_names={k: os.path.join(args.data_cache_dir, f'{k}-tokenized') for k in datasets},\r\n load_from_cache_file=not args.overwrite_cache)\r\n datasets = datasets.map(group_texts, \r\n batched=True,\r\n cache_file_names={k: os.path.join(args.data_cache_dir, f'{k}-grouped') for k in datasets},\r\n load_from_cache_file=not args.overwrite_cache)\r\n```\r\n\r\nAnd this is the log\r\n\r\n```\r\n04\/26\/2021 10:26:59 - WARNING - datasets.builder - Using custom data configuration DBNL-f8d988ad33ccf2c1\r\n04\/26\/2021 10:26:59 - WARNING - datasets.builder - Reusing dataset text (\/home\/manjavacasema\/data\/.cache\/text\/DBNL-f8d988ad33ccf2c1\/0.0.0\/e16f44aa1b321ece1f87b07977cc5d70be93d69b20486d6dacd62e12cf25c9a5)\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 13\/13 [00:00<00:00, 
21.07ba\/s]\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 40\/40 [00:01<00:00, 24.28ba\/s]\r\n04\/26\/2021 10:27:01 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at \/home\/manjavacasema\/data\/.cache\/train-grouped\r\n04\/26\/2021 10:27:01 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at \/home\/manjavacasema\/data\/.cache\/dev-grouped\r\n```\r\n","Hi ! What tokenizer are you using ?","It's the ByteLevelBPETokenizer"],"created_at":1610379451000,"updated_at":1619591723000,"closed_at":1611629279000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\n\r\nI am using the datasets package and even though I run the same data processing functions, datasets always recomputes the function instead of using cache.\r\nI have attached an example script that for me reproduces the problem.\r\nIn the attached example the second map function always recomputes instead of loading from cache.\r\nIs this a bug or am I doing something wrong?\r\nIs there a way for fix this and avoid all the recomputation?\r\n\r\nThanks\r\n\r\nEdit:\r\ntransformers==3.5.1\r\ndatasets==1.2.0\r\n\r\n```\r\nfrom datasets import load_dataset\r\nfrom transformers import AutoTokenizer\r\n\r\ndatasets = load_dataset('wikitext', 'wikitext-103-raw-v1')\r\ntokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', use_fast=True)\r\n\r\n\r\ncolumn_names = datasets[\"train\"].column_names\r\ntext_column_name = \"text\" if \"text\" in column_names else column_names[0]\r\ndef tokenize_function(examples):\r\n return tokenizer(examples[text_column_name], return_special_tokens_mask=True)\r\n\r\ntokenized_datasets = datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n num_proc=60,\r\n remove_columns=[text_column_name],\r\n load_from_cache_file=True,\r\n)\r\nmax_seq_length = tokenizer.model_max_length\r\ndef group_texts(examples):\r\n # Concatenate all texts.\r\n concatenated_examples = {\r\n k: sum(examples[k], []) for k in examples.keys()}\r\n total_length = len(concatenated_examples[list(examples.keys())[0]])\r\n # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can\r\n # customize this part to your needs.\r\n total_length = (total_length \/\/ max_seq_length) * max_seq_length\r\n # Split by chunks of max_len.\r\n result = {\r\n k: [t[i: i + max_seq_length]\r\n for i in range(0, total_length, max_seq_length)]\r\n for k, t in concatenated_examples.items()\r\n }\r\n return result\r\n\r\ntokenized_datasets = tokenized_datasets.map(\r\n group_texts,\r\n batched=True,\r\n num_proc=60,\r\n 
load_from_cache_file=True,\r\n)\r\nprint(tokenized_datasets)\r\n\r\nprint('finished')\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1718\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1717","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1717\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1717\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1717\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1717","id":783074255,"node_id":"MDU6SXNzdWU3ODMwNzQyNTU=","number":1717,"title":"SciFact dataset - minor changes","user":{"login":"dwadden","id":3091916,"node_id":"MDQ6VXNlcjMwOTE5MTY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3091916?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dwadden","html_url":"https:\/\/github.com\/dwadden","followers_url":"https:\/\/api.github.com\/users\/dwadden\/followers","following_url":"https:\/\/api.github.com\/users\/dwadden\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dwadden\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dwadden\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dwadden\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dwadden\/orgs","repos_url":"https:\/\/api.github.com\/users\/dwadden\/repos","events_url":"https:\/\/api.github.com\/users\/dwadden\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dwadden\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi Dave,\r\nYou are more than welcome to open a PR to make these changes! \ud83e\udd17\r\nYou will find the relevant information about opening a PR in the [contributing guide](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/CONTRIBUTING.md) and in the [dataset addition guide](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n\r\nPinging also @lhoestq for the Google cloud matter.","> I'd like to make a few minor changes, including the citation information and the `_URL` from which to download the dataset. Can I submit a PR for this?\r\n\r\nSure ! Also feel free to ping us for reviews or if we can help :)\r\n\r\n> It also looks like the dataset is being downloaded directly from Huggingface's Google cloud account rather than via the `_URL` in [scifact.py](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/scifact\/scifact.py). Can you help me update the version on gcloud?\r\n\r\nWhat makes you think that ?\r\nAfaik there's no scifact on our google storage\r\n","\r\n\r\n> > I'd like to make a few minor changes, including the citation information and the `_URL` from which to download the dataset. Can I submit a PR for this?\r\n> \r\n> Sure ! Also feel free to ping us for reviews or if we can help :)\r\n> \r\nOK! We're organizing a [shared task](https:\/\/sdproc.org\/2021\/sharedtasks.html#sciver) based on the dataset, and I made some updates and changed the download URL - so the current code points to a dead URL. 
I'll update appropriately once the task is finalized and make a PR.\r\n\r\n> > It also looks like the dataset is being downloaded directly from Huggingface's Google cloud account rather than via the `_URL` in [scifact.py](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/scifact\/scifact.py). Can you help me update the version on gcloud?\r\n> \r\n> What makes you think that ?\r\n> Afaik there's no scifact on our google storage\r\n\r\nYou're right, I had the data cached on my machine somewhere. \r\n\r\n","I opened a PR about this: https:\/\/github.com\/huggingface\/datasets\/pull\/1780. Closing this issue, will continue there."],"created_at":1610342800000,"updated_at":1611629537000,"closed_at":1611629537000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\n\r\nSciFact dataset creator here. First of all, thanks for adding the dataset to Huggingface, much appreciated!\r\n\r\nI'd like to make a few minor changes, including the citation information and the `_URL` from which to download the dataset. Can I submit a PR for this?\r\n\r\nIt also looks like the dataset is being downloaded directly from Huggingface's Google cloud account rather than via the `_URL` in [scifact.py](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/scifact\/scifact.py). Can you help me update the version on gcloud?\r\n\r\nThanks,\r\n\r\nDave","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1717\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1716","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1716\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1716\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1716\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1716","id":782819006,"node_id":"MDExOlB1bGxSZXF1ZXN0NTUyMjgzNzE5","number":1716,"title":"Add Hatexplain 
Dataset","user":{"login":"kushal2000","id":48222101,"node_id":"MDQ6VXNlcjQ4MjIyMTAx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/48222101?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/kushal2000","html_url":"https:\/\/github.com\/kushal2000","followers_url":"https:\/\/api.github.com\/users\/kushal2000\/followers","following_url":"https:\/\/api.github.com\/users\/kushal2000\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/kushal2000\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/kushal2000\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/kushal2000\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/kushal2000\/orgs","repos_url":"https:\/\/api.github.com\/users\/kushal2000\/repos","events_url":"https:\/\/api.github.com\/users\/kushal2000\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/kushal2000\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1610285401000,"updated_at":1610979702000,"closed_at":1610979702000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1716","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1716","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1716.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1716.patch"},"body":"Adding Hatexplain - the first benchmark hate speech dataset covering multiple aspects of the issue","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1716\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1715","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1715\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1715\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1715\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1715","id":782754441,"node_id":"MDExOlB1bGxSZXF1ZXN0NTUyMjM2NDA5","number":1715,"title":"add Korean intonation-aided intention identification 
dataset","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1610260144000,"updated_at":1631897653000,"closed_at":1610471673000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1715","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1715","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1715.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1715.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1715\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1714","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1714\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1714\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1714\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1714","id":782416276,"node_id":"MDExOlB1bGxSZXF1ZXN0NTUxOTc3MDA0","number":1714,"title":"Adding adversarialQA dataset","user":{"login":"maxbartolo","id":15869827,"node_id":"MDQ6VXNlcjE1ODY5ODI3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15869827?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/maxbartolo","html_url":"https:\/\/github.com\/maxbartolo","followers_url":"https:\/\/api.github.com\/users\/maxbartolo\/followers","following_url":"https:\/\/api.github.com\/users\/maxbartolo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/maxbartolo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/maxbartolo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/maxbartolo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/maxbartolo\/orgs","repos_url":"https:\/\/api.github.com\/users\/maxbartolo\/repos","events_url":"https:\/\/api.github.com\/users\/maxbartolo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/maxbartolo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Oh that's a really cool one, we'll review\/merge it soon!\r\n\r\nIn the meantime, do you have any specific 
positive\/negative feedback on the process of adding a datasets Max?\r\nDid you follow the instruction in the [detailed step-by-step](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md)?","Thanks Thom, been a while, hope all is well!\r\n\r\nYes, I followed the step by step instructions and found them pretty straightforward. The only things I wasn't sure of were what should go into the YAML tags field for the dataset card, and whether there was a list of options somewhere (maybe akin to the metrics?) of the possible supported tasks. I found the rest very intuitive and the automated metadata and dummy data generation very handy. Thanks!","Good point! pinging @yjernite here so he can improve this part!","@maxbartolo cool addition!\r\n\r\nFor the YAML tag, you should use the tagging app we provide to choose from a drop-down menu:\r\nhttps:\/\/github.com\/huggingface\/datasets-tagging\r\n\r\nThe process is described toward the end of the [step-by-step guide](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card), do you have any suggestions for making it easier to find?\r\n\r\nOtherwise, the dataset card is really cool, thanks for making it so complete!\r\n","@yjernite\r\n\r\nThanks, YAML tags added. I think my main issue was with the flow of the [step-by-step guide](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md). For example, the [card creator](https:\/\/huggingface.co\/datasets\/card-creator\/) is introduced in Step 4, right after creating an empty directory for your dataset. The first field it requires are the YAML tags, which (at least for me) was the last step of the process.\r\n\r\nI'd suggest having the guide structured in the same order as the creation process. 
For me it was something like:\r\n- Step 1: Preparing your env\r\n- Step 2: Write the loading\/processing code\r\n- Step 3: Automatically generate dummy data and `dataset_infos.json`\r\n- Step 4: Tag the dataset\r\n- Step 5: Write the dataset card using the [card creator](https:\/\/huggingface.co\/datasets\/card-creator\/)\r\n- Step 6: Open a Pull Request on the main HuggingFace repo and share your work!!\r\n\r\nThanks again!"],"created_at":1610142369000,"updated_at":1610553924000,"closed_at":1610553924000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1714","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1714","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1714.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1714.patch"},"body":"Adding the adversarialQA dataset (https:\/\/adversarialqa.github.io\/) from Beat the AI (https:\/\/arxiv.org\/abs\/2002.00293)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1714\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1713","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1713\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1713\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1713\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1713","id":782337723,"node_id":"MDU6SXNzdWU3ODIzMzc3MjM=","number":1713,"title":"Installation using conda","user":{"login":"pranav-s","id":9393002,"node_id":"MDQ6VXNlcjkzOTMwMDI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9393002?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/pranav-s","html_url":"https:\/\/github.com\/pranav-s","followers_url":"https:\/\/api.github.com\/users\/pranav-s\/followers","following_url":"https:\/\/api.github.com\/users\/pranav-s\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/pranav-s\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/pranav-s\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/pranav-s\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/pranav-s\/orgs","repos_url":"https:\/\/api.github.com\/users\/pranav-s\/repos","events_url":"https:\/\/api.github.com\/users\/pranav-s\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/pranav-s\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Yes indeed the idea is to have the next release on conda cc @LysandreJik ","Great! Did you guys have a timeframe in mind for the next release?\r\n\r\nThank you for all the great work in developing this library.","I think we can have `datasets` on conda by next week. Will see what I can do!","Thank you. 
Looking forward to it.","`datasets` has been added to the huggingface channel thanks to @LysandreJik :)\r\nIt depends on conda-forge though\r\n\r\n```\r\nconda install -c huggingface -c conda-forge datasets\r\n```"],"created_at":1610133135000,"updated_at":1631882860000,"closed_at":1631882860000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Will a conda package for installing datasets be added to the huggingface conda channel? I have installed transformers using conda and would like to use the datasets library to use some of the scripts in the transformers\/examples folder but am unable to do so at the moment as datasets can only be installed using pip and using pip in a conda environment is generally a bad idea in my experience.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1713\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1712","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1712\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1712\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1712\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1712","id":782313097,"node_id":"MDExOlB1bGxSZXF1ZXN0NTUxODkxMDk4","number":1712,"title":"Silicone","user":{"login":"eusip","id":1551356,"node_id":"MDQ6VXNlcjE1NTEzNTY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1551356?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/eusip","html_url":"https:\/\/github.com\/eusip","followers_url":"https:\/\/api.github.com\/users\/eusip\/followers","following_url":"https:\/\/api.github.com\/users\/eusip\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/eusip\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/eusip\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/eusip\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/eusip\/orgs","repos_url":"https:\/\/api.github.com\/users\/eusip\/repos","events_url":"https:\/\/api.github.com\/users\/eusip\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/eusip\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["When should we expect to see our dataset appear in the search dropdown at huggingface.co?","Hi @eusip,\r\n\r\n> When should we expect to see our dataset appear in the search dropdown at huggingface.co?\r\n\r\nwhen this PR is merged.","Thanks!","I've implemented all the changes requested by @lhoestq but I made the mistake of trying to change the remote branch name. \r\n\r\nHopefully the changes are seen on your end as both branches `silicone` and `main` should be up-to-date.","It looks like the PR includes changes about many other files than the ones for Silicone (+30,000 line changes)\r\n\r\nMaybe you can try to create another branch and another PR ?","> It looks like the PR includes changes about many other files than the ones for Silicone (+30,000 line changes)\r\n> \r\n> Maybe you can try to create another branch and another PR ?\r\n\r\nSure. 
I will make a new pull request."],"created_at":1610130258000,"updated_at":1611238357000,"closed_at":1611225071000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1712","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1712","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1712.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1712.patch"},"body":"My collaborators and I within the Affective Computing team at Telecom Paris would like to push our spoken dialogue dataset for publication.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1712\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1711","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1711\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1711\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1711\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1711","id":782129083,"node_id":"MDExOlB1bGxSZXF1ZXN0NTUxNzQxODA2","number":1711,"title":"Fix windows path scheme in cached path","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1610113556000,"updated_at":1610357000000,"closed_at":1610356999000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1711","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1711","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1711.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1711.patch"},"body":"As noticed in #807 there's currently an issue with `cached_path` not raising `FileNotFoundError` on windows for absolute paths. This is due to the way we check for a path to be local or not. 
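For the Windows path issue in #1711, a quick hedged illustration of how a scheme-based local/remote check can misfire: Python's `urlparse` reports the drive letter of an absolute Windows path as if it were a URL scheme. The `is_remote_url` helper below is an illustrative assumption, not the function used in `datasets`.

```python
# Illustration of the pitfall: a drive letter parses as a URL scheme, so a naive
# "has a scheme -> remote" test misclassifies absolute Windows paths.
from urllib.parse import urlparse

print(urlparse("https://example.com/file.txt").scheme)   # 'https'
print(urlparse(r"C:\Users\me\data.txt").scheme)          # 'c' -> looks like a scheme

def is_remote_url(path):
    # Hypothetical helper: only treat known remote schemes as remote.
    return urlparse(path).scheme in ("http", "https", "ftp", "s3")

print(is_remote_url(r"C:\Users\me\data.txt"))  # False: handled as a local path
```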
The check on the scheme using urlparse was incomplete.\r\n\r\nI fixed this and added tests","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1711\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1710","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1710\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1710\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1710\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1710","id":781914951,"node_id":"MDU6SXNzdWU3ODE5MTQ5NTE=","number":1710,"title":"IsADirectoryError when trying to download C4","user":{"login":"fredriko","id":5771366,"node_id":"MDQ6VXNlcjU3NzEzNjY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5771366?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/fredriko","html_url":"https:\/\/github.com\/fredriko","followers_url":"https:\/\/api.github.com\/users\/fredriko\/followers","following_url":"https:\/\/api.github.com\/users\/fredriko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/fredriko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/fredriko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/fredriko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/fredriko\/orgs","repos_url":"https:\/\/api.github.com\/users\/fredriko\/repos","events_url":"https:\/\/api.github.com\/users\/fredriko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/fredriko\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I haven't tested C4 on my side so there so there may be a few bugs in the code\/adjustments to make.\r\nHere it looks like in c4.py, line 190 one of the `files_to_download` is `'\/'` which is invalid.\r\nValid files are paths to local files or URLs to remote files."],"created_at":1610091090000,"updated_at":1610531053000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"**TLDR**:\r\n\r\nI fail to download C4 and see a stacktrace originating in `IsADirectoryError` as an explanation for failure.\r\n\r\nHow can the problem be fixed? 
\r\n\r\n**VERBOSE**:\r\n\r\nI use Python version 3.7 and have the following dependencies listed in my project:\r\n\r\n```\r\ndatasets==1.2.0\r\napache-beam==2.26.0\r\n```\r\n\r\nWhen running the following code, where `\/data\/huggingface\/unpacked\/` contains a single unzipped `wet.paths` file manually downloaded as per the instructions for C4:\r\n\r\n```\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"c4\", \"en\", data_dir=\"\/data\/huggingface\/unpacked\", beam_runner='DirectRunner')\r\n```\r\n\r\nI get the following stacktrace:\r\n\r\n```\r\n\/Users\/fredriko\/venv\/misc\/bin\/python \/Users\/fredriko\/source\/misc\/main.py\r\nDownloading and preparing dataset c4\/en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/Users\/fredriko\/.cache\/huggingface\/datasets\/c4\/en\/2.3.0\/8304cf264cc42bdebcb13fca4b9cb36368a96f557d36f9dc969bebbe2568b283...\r\nTraceback (most recent call last):\r\n File \"\/Users\/fredriko\/source\/misc\/main.py\", line 3, in <module>\r\n load_dataset(\"c4\", \"en\", data_dir=\"\/data\/huggingface\/unpacked\", beam_runner='DirectRunner')\r\n File \"\/Users\/fredriko\/venv\/misc\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 612, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"\/Users\/fredriko\/venv\/misc\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 527, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/Users\/fredriko\/venv\/misc\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 1066, in _download_and_prepare\r\n pipeline=pipeline,\r\n File \"\/Users\/fredriko\/venv\/misc\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 582, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"\/Users\/fredriko\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/c4\/8304cf264cc42bdebcb13fca4b9cb36368a96f557d36f9dc969bebbe2568b283\/c4.py\", line 190, in _split_generators\r\n file_paths = dl_manager.download_and_extract(files_to_download)\r\n File \"\/Users\/fredriko\/venv\/misc\/lib\/python3.7\/site-packages\/datasets\/utils\/download_manager.py\", line 258, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"\/Users\/fredriko\/venv\/misc\/lib\/python3.7\/site-packages\/datasets\/utils\/download_manager.py\", line 189, in download\r\n self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths)\r\n File \"\/Users\/fredriko\/venv\/misc\/lib\/python3.7\/site-packages\/datasets\/utils\/download_manager.py\", line 117, in _record_sizes_checksums\r\n self._recorded_sizes_checksums[str(url)] = get_size_checksum_dict(path)\r\n File \"\/Users\/fredriko\/venv\/misc\/lib\/python3.7\/site-packages\/datasets\/utils\/info_utils.py\", line 80, in get_size_checksum_dict\r\n with open(path, \"rb\") as f:\r\nIsADirectoryError: [Errno 21] Is a directory: '\/'\r\n\r\nProcess finished with exit code 1\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1710\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1709","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1709\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1709\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1709\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1709","id":781875640,"node_id":"MDU6SXNzdWU3ODE4NzU2NDA=","number":1709,"title":"Databases","user":{"login":"JimmyJim1","id":68724553,"node_id":"MDQ6VXNlcjY4NzI0NTUz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/68724553?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JimmyJim1","html_url":"https:\/\/github.com\/JimmyJim1","followers_url":"https:\/\/api.github.com\/users\/JimmyJim1\/followers","following_url":"https:\/\/api.github.com\/users\/JimmyJim1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JimmyJim1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JimmyJim1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JimmyJim1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JimmyJim1\/orgs","repos_url":"https:\/\/api.github.com\/users\/JimmyJim1\/repos","events_url":"https:\/\/api.github.com\/users\/JimmyJim1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JimmyJim1\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1610086443000,"updated_at":1610096408000,"closed_at":1610096408000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\n- **Name:** *name of the dataset*\n- **Description:** *short description of the dataset (or link to social media or blog post)*\n- **Paper:** *link to the dataset paper if available*\n- **Data:** *link to the Github repository or current dataset location*\n- **Motivation:** *what are some good reasons to have this dataset*\n\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1709\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1708","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1708\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1708\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1708\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1708","id":781631455,"node_id":"MDU6SXNzdWU3ODE2MzE0NTU=","number":1708,"title":"<html dir=\"ltr\" lang=\"en\" class=\"focus-outline-visible\"><head><meta http-equiv=\"Content-Type\" content=\"text\/html; 
charset=UTF-8\">","user":{"login":"Louiejay54","id":77126849,"node_id":"MDQ6VXNlcjc3MTI2ODQ5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/77126849?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Louiejay54","html_url":"https:\/\/github.com\/Louiejay54","followers_url":"https:\/\/api.github.com\/users\/Louiejay54\/followers","following_url":"https:\/\/api.github.com\/users\/Louiejay54\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Louiejay54\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Louiejay54\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Louiejay54\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Louiejay54\/orgs","repos_url":"https:\/\/api.github.com\/users\/Louiejay54\/repos","events_url":"https:\/\/api.github.com\/users\/Louiejay54\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Louiejay54\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1610055924000,"updated_at":1610096401000,"closed_at":1610096401000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\n- **Name:** *name of the dataset*\n- **Description:** *short description of the dataset (or link to social media or blog post)*\n- **Paper:** *link to the dataset paper if available*\n- **Data:** *link to the Github repository or current dataset location*\n- **Motivation:** *what are some good reasons to have this dataset*\n\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1708\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1707","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1707\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1707\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1707\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1707","id":781507545,"node_id":"MDExOlB1bGxSZXF1ZXN0NTUxMjE5MDk2","number":1707,"title":"Added generated READMEs for datasets that were missing 
one.","user":{"login":"madlag","id":272253,"node_id":"MDQ6VXNlcjI3MjI1Mw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/272253?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/madlag","html_url":"https:\/\/github.com\/madlag","followers_url":"https:\/\/api.github.com\/users\/madlag\/followers","following_url":"https:\/\/api.github.com\/users\/madlag\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/madlag\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/madlag\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/madlag\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/madlag\/orgs","repos_url":"https:\/\/api.github.com\/users\/madlag\/repos","events_url":"https:\/\/api.github.com\/users\/madlag\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/madlag\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Looks like we need to trim the ones with too many configs, will look into it tomorrow!"],"created_at":1610043006000,"updated_at":1610980353000,"closed_at":1610980353000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1707","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1707","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1707.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1707.patch"},"body":"This is it: we worked on a generator with Yacine @yjernite , and we generated dataset cards for all missing ones (161), with all the information we could gather from datasets repository, and using dummy_data to generate examples when possible.\r\n\r\nCode is available here for the moment: https:\/\/github.com\/madlag\/datasets_readme_generator .\r\nWe will move it to a Hugging Face repository and to https:\/\/huggingface.co\/datasets\/card-creator\/ later.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1707\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1706","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1706\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1706\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1706\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1706","id":781494476,"node_id":"MDU6SXNzdWU3ODE0OTQ0NzY=","number":1706,"title":"Error when downloading a large dataset on slow 
connection.","user":{"login":"lucadiliello","id":23355969,"node_id":"MDQ6VXNlcjIzMzU1OTY5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23355969?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lucadiliello","html_url":"https:\/\/github.com\/lucadiliello","followers_url":"https:\/\/api.github.com\/users\/lucadiliello\/followers","following_url":"https:\/\/api.github.com\/users\/lucadiliello\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lucadiliello\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lucadiliello\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lucadiliello\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lucadiliello\/orgs","repos_url":"https:\/\/api.github.com\/users\/lucadiliello\/repos","events_url":"https:\/\/api.github.com\/users\/lucadiliello\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lucadiliello\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Is this an issue you have with `openwebtext` specifically or also with other datasets ?\r\n\r\nIt looks like the downloaded file is corrupted and can't be extracted using `tarfile`.\r\nCould you try loading it again with \r\n```python\r\nimport datasets\r\ndatasets.load_dataset(\"openwebtext\", download_mode=\"force_redownload\")\r\n```"],"created_at":1610041695000,"updated_at":1610534102000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I receive the following error after about an hour trying to download the `openwebtext` dataset.\r\n\r\nThe code used is:\r\n```python\r\nimport datasets\r\ndatasets.load_dataset(\"openwebtext\")\r\n```\r\n\r\n> Traceback (most recent call last): [4\/28]\r\n> File \"<stdin>\", line 1, in <module>\r\n> File \"\/home\/lucadiliello\/anaconda3\/envs\/nlp\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 610, in load_dataset\r\n> ignore_verifications=ignore_verifications,\r\n> File \"\/home\/lucadiliello\/anaconda3\/envs\/nlp\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 515, in download_and_prepare\r\n> dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n> File \"\/home\/lucadiliello\/anaconda3\/envs\/nlp\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 570, in _download_and_prepare\r\n> split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n> File \"\/home\/lucadiliello\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/openwebtext\/5c636399c7155da97c982d0d70ecdce30fbca66a4eb4fc768ad91f8331edac02\/openwebtext.py\", line 62, in _split_generators\r\n> dl_dir = dl_manager.download_and_extract(_URL)\r\n> File \"\/home\/lucadiliello\/anaconda3\/envs\/nlp\/lib\/python3.7\/site-packages\/datasets\/utils\/download_manager.py\", line 254, in download_and_extract\r\n> return self.extract(self.download(url_or_urls))\r\n> File \"\/home\/lucadiliello\/anaconda3\/envs\/nlp\/lib\/python3.7\/site-packages\/datasets\/utils\/download_manager.py\", line 235, in extract\r\n> num_proc=num_proc,\r\n> File \"\/home\/lucadiliello\/anaconda3\/envs\/nlp\/lib\/python3.7\/site-packages\/datasets\/utils\/py_utils.py\", line 225, in map_nested\r\n> return function(data_struct)\r\n> File \"\/home\/lucadiliello\/anaconda3\/envs\/nlp\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 343, in 
cached_path\r\n> tar_file.extractall(output_path_extracted)\r\n> File \"\/home\/lucadiliello\/anaconda3\/envs\/nlp\/lib\/python3.7\/tarfile.py\", line 2000, in extractall\r\n> numeric_owner=numeric_owner)\r\n> File \"\/home\/lucadiliello\/anaconda3\/envs\/nlp\/lib\/python3.7\/tarfile.py\", line 2042, in extract\r\n> numeric_owner=numeric_owner)\r\n> File \"\/home\/lucadiliello\/anaconda3\/envs\/nlp\/lib\/python3.7\/tarfile.py\", line 2112, in _extract_member\r\n> self.makefile(tarinfo, targetpath)\r\n> File \"\/home\/lucadiliello\/anaconda3\/envs\/nlp\/lib\/python3.7\/tarfile.py\", line 2161, in makefile\r\n> copyfileobj(source, target, tarinfo.size, ReadError, bufsize)\r\n> File \"\/home\/lucadiliello\/anaconda3\/envs\/nlp\/lib\/python3.7\/tarfile.py\", line 253, in copyfileobj\r\n> buf = src.read(remainder)\r\n> File \"\/home\/lucadiliello\/anaconda3\/envs\/nlp\/lib\/python3.7\/lzma.py\", line 200, in read\r\n> return self._buffer.read(size)\r\n> File \"\/home\/lucadiliello\/anaconda3\/envs\/nlp\/lib\/python3.7\/_compression.py\", line 68, in readinto\r\n> data = self.read(len(byte_view))\r\n> File \"\/home\/lucadiliello\/anaconda3\/envs\/nlp\/lib\/python3.7\/_compression.py\", line 99, in read\r\n> raise EOFError(\"Compressed file ended before the \"\r\n> EOFError: Compressed file ended before the end-of-stream marker was reached\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1706\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1705","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1705\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1705\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1705\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1705","id":781474949,"node_id":"MDExOlB1bGxSZXF1ZXN0NTUxMTkyMTc4","number":1705,"title":"Add information about caching and verifications in \"Load a Dataset\" docs","user":{"login":"SBrandeis","id":33657802,"node_id":"MDQ6VXNlcjMzNjU3ODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33657802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SBrandeis","html_url":"https:\/\/github.com\/SBrandeis","followers_url":"https:\/\/api.github.com\/users\/SBrandeis\/followers","following_url":"https:\/\/api.github.com\/users\/SBrandeis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SBrandeis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SBrandeis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SBrandeis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SBrandeis\/orgs","repos_url":"https:\/\/api.github.com\/users\/SBrandeis\/repos","events_url":"https:\/\/api.github.com\/users\/SBrandeis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SBrandeis\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892861,"node_id":"MDU6TGFiZWwxOTM1ODkyODYx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/documentation","name":"documentation","color":"0075ca","default":true,"description":"Improvements or additions to 
documentation"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1610039924000,"updated_at":1610460481000,"closed_at":1610460481000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1705","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1705","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1705.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1705.patch"},"body":"Related to #215.\r\n\r\nMissing improvements from @lhoestq's #1703.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1705\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1704","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1704\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1704\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1704\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1704","id":781402757,"node_id":"MDExOlB1bGxSZXF1ZXN0NTUxMTMyNDI1","number":1704,"title":"Update XSUM Factuality DatasetCard","user":{"login":"vineeths96","id":50873201,"node_id":"MDQ6VXNlcjUwODczMjAx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/50873201?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vineeths96","html_url":"https:\/\/github.com\/vineeths96","followers_url":"https:\/\/api.github.com\/users\/vineeths96\/followers","following_url":"https:\/\/api.github.com\/users\/vineeths96\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vineeths96\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vineeths96\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vineeths96\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vineeths96\/orgs","repos_url":"https:\/\/api.github.com\/users\/vineeths96\/repos","events_url":"https:\/\/api.github.com\/users\/vineeths96\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vineeths96\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1610033834000,"updated_at":1610458204000,"closed_at":1610458204000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1704","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1704","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1704.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1704.patch"},"body":"Update XSUM Factuality DatasetCard","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1704\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1703","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1703\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1703\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1703\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1703","id":781395146,"node_id":"MDExOlB1bGxSZXF1ZXN0NTUxMTI2MjA5","number":1703,"title":"Improvements regarding caching and fingerprinting","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I few comments here for discussion:\r\n- I'm not convinced yet the end user should really have to understand the difference between \"caching\" and 'fingerprinting\", what do you think? I think fingerprinting should probably stay as an internal thing. Is there a case where we want cahing without fingerprinting or vice-versa?\r\n- while I think the random fingerprint mechanism is smart, I have one question: when we disable caching or fingerprinting we also probably don't want the disk usage to grow so we should then try to keep only one cache file. Is it the case currently?\r\n- the warning should be emitted only once per session if possible (we have a mechanism to do that in transformers, you should ask Lysandre\/Sylvain)\r\n\r\n","About your points:\r\n- Yes I agree, I just wanted to bring the discussion on this point. Until now fingerprinting hasn't been blocking for user experience. I'll probably remove the enable\/disable fingerprinting function to keep things simple from the user's perspective.\r\n- Right now every time a not in-place transform (i.e. map, filter) is applied, a new cache file is created. It is the case even if caching is disabled since disabling it only means that the cache file won't be reloaded. Therefore you're right that it might end up filling the disk with files that won't be reused. I like the idea of keeping only one cache file. Currently all the cache files are kept on disk until the user clears the cache. To be able to keep only one, we need to know if a dataset that has been transformed is still loaded or not. For example\r\n```python\r\n# case 1 - keep both cache files (dataset1 and dataset2)\r\ndataset2 = dataset1.map(...)\r\n# case 2 - keep only the new cache file\r\ndataset1 = dataset1.map(...)\r\n```\r\nIn python it doesn't seem trivial to detect such changes. 
One thing that we can actually do on the other hand is store the cache files in a temporary directory that is cleared when the session closes. I think that's a good a simple solution for this problem.\r\n- Yes good idea ! I don't like spam either :) ","> * To be able to keep only one, we need to know if a dataset that has been transformed is still loaded or not. For example\r\n> \r\n> ```python\r\n> # case 1 - keep both cache files (dataset1 and dataset2)\r\n> dataset2 = dataset1.map(...)\r\n> # case 2 - keep only the new cache file\r\n> dataset1 = dataset1.map(...)\r\n> ```\r\n\r\nI see what you mean. It's a tricky question. One option would be that if caching is deactivated we have a single memory mapped file and have copy act as a copy by reference instead of a copy by value. We will then probably want a `copy()` or `deepcopy()` functionality. Maybe we should think a little bit about it though.","- I like the idea of using a temporary directory per session!\r\n- If the default behavior when caching is disabled is to re-use the same file, I'm a little worried about people making mistakes and having to re-download and process from scratch.\r\n- So we already have a keyword argument for `dataset1 = dataset1.map(..., in_place=True)`?","> * If the default behavior when caching is disabled is to re-use the same file, I'm a little worried about people making mistakes and having to re-download and process from scratch.\r\n\r\nWe should distinguish between the caching from load_dataset (base dataset cache files) and the caching after dataset transforms such as map or filter (transformed dataset cache files). When disabling caching only the second type (for map and filter) doesn't reload from cache files.\r\nTherefore nothing is re-downloaded. To re-download the dataset entirely the argument `download_mode=\"force_redownload\"` must be used in `load_dataset`.\r\nDo we have to think more about the naming to make things less confusing in your opinion ?\r\n\r\n> * So we already have a keyword argument for `dataset1 = dataset1.map(..., in_place=True)`?\r\n\r\nThere's no such `in_place` parameter in map, what do you mean exactly ?","I updated the PR:\r\n- I removed the enable\/disable fingerprinting function\r\n- if caching is disabled arrow files are written in a temporary directory that is deleted when session closes\r\n- the warning that is showed when hashing a transform fails is only showed once\r\n- I added the `set_caching_enabled` function to the docs and explained the caching mechanism and its relation with fingerprinting\r\n\r\nI would love to have some feedback :) ","> > * So we already have a keyword argument for `dataset1 = dataset1.map(..., in_place=True)`?\r\n> \r\n> There's no such `in_place` parameter in map, what do you mean exactly ?\r\n\r\nSorry, that wasn't clear at all. I was responding to your previous comment about case 1 \/ case 2. I don't think the behavior should depend on the command, but we could have:\r\n\r\n```\r\n# case 1 - keep both cache files (dataset1 and dataset2)\r\ndataset2 = dataset1.map(...)\r\n# case 2 - keep only the new cache file\r\ndataset1 = dataset1.map(..., in_place=True)\r\n```\r\n\r\nCase 1 returns a new reference using the new cache file, case 2 returns the same reference","> Sorry, that wasn't clear at all. I was responding to your previous comment about case 1 \/ case 2. 
I don't think the behavior should depend on the command, but we could have:\r\n> \r\n> ```\r\n> # case 1 - keep both cache files (dataset1 and dataset2)\r\n> dataset2 = dataset1.map(...)\r\n> # case 2 - keep only the new cache file\r\n> dataset1 = dataset1.map(..., in_place=True)\r\n> ```\r\n> \r\n> Case 1 returns a new reference using the new cache file, case 2 returns the same reference\r\n\r\nOk I see !\r\n`in_place` is a parameter that is used in general to designate a transform so I would name that differently (maybe `overwrite` or something like that).\r\nNot sure if it's possible to update an already existing arrow file that is memory-mapped, let me check real quick.\r\nAlso it's possible to call `dataset2.cleanup_cache_files()` to delete the other cache files if we create a new one after the transform. Or even to get the cache file with `dataset1.cache_files` and let the user remove them by hand.\r\n\r\nEDIT: updating an arrow file in place is not part of the current API of pyarrow, so we would have to make new files.\r\n"],"created_at":1610033189000,"updated_at":1611077531000,"closed_at":1611077530000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1703","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1703","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1703.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1703.patch"},"body":"This PR adds these features:\r\n- Enable\/disable caching\r\n If disabled, the library will no longer reload cached datasets files when applying transforms to the datasets.\r\n It is equivalent to setting `load_from_cache` to `False` in dataset transforms.\r\n```python\r\nfrom datasets import set_caching_enabled\r\n\r\nset_caching_enabled(False)\r\n```\r\n- Allow unpicklable functions in `map`\r\n If an unpicklable function is used, then it's not possible to hash it to update the dataset fingerprint that is used to name cache files. To workaround that, a random fingerprint is generated instead and a warning is raised.\r\n```python\r\nlogger.warning(\r\n f\"Transform {transform} couldn't be hashed properly, a random hash was used instead. \"\r\n \"Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. 
\"\r\n \"If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything.\"\r\n)\r\n```\r\n\r\nand also (open to discussion, EDIT: actually NOT included):\r\n- Enable\/disable fingerprinting\r\n Fingerprinting allows to have one deterministic fingerprint per dataset state.\r\n A dataset fingerprint is updated after each transform.\r\n Re-running the same transforms on a dataset in a different session results in the same fingerprint.\r\n Disabling the fingerprinting mechanism makes all the fingerprints random.\r\n Since the caching mechanism uses fingerprints to name the cache files, then cache file names will be different.\r\n Therefore disabling fingerprinting will prevent the caching mechanism from reloading datasets files that have already been computed.\r\n Disabling fingerprinting may speed up the lib for users that don't care about this feature and don't want to use caching.\r\n```python\r\nfrom datasets import set_fingerprinting_enabled\r\n\r\nset_fingerprinting_enabled(False)\r\n```\r\n\r\nOther details:\r\n- I renamed the `fingerprint` decorator to `fingerprint_transform` since the name was clearly not explicit. This decorator is used on dataset transform functions to allow them to update fingerprints.\r\n- I added some `ignore_kwargs` when decorating transforms with `fingerprint_transform`, to make the fingerprint update not sensible to kwargs like `load_from_cache` or `cache_file_name`.\r\n\r\nTodo: tests for set_fingerprinting_enabled + documentation for all the above features","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1703\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1702","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1702\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1702\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1702\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1702","id":781383277,"node_id":"MDExOlB1bGxSZXF1ZXN0NTUxMTE2NDc0","number":1702,"title":"Fix importlib metdata import in 
py38","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1610032230000,"updated_at":1610102835000,"closed_at":1610102835000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1702","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1702","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1702.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1702.patch"},"body":"In Python 3.8 there's no need to install `importlib_metadata` since it already exists as `importlib.metadata` in the standard lib.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1702\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1701","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1701\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1701\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1701\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1701","id":781345717,"node_id":"MDU6SXNzdWU3ODEzNDU3MTc=","number":1701,"title":"Some datasets miss dataset_infos.json or dummy_data.zip","user":{"login":"madlag","id":272253,"node_id":"MDQ6VXNlcjI3MjI1Mw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/272253?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/madlag","html_url":"https:\/\/github.com\/madlag","followers_url":"https:\/\/api.github.com\/users\/madlag\/followers","following_url":"https:\/\/api.github.com\/users\/madlag\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/madlag\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/madlag\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/madlag\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/madlag\/orgs","repos_url":"https:\/\/api.github.com\/users\/madlag\/repos","events_url":"https:\/\/api.github.com\/users\/madlag\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/madlag\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting.\r\nWe should indeed 
add all the missing dummy_data.zip and also the dataset_infos.json at least for lm1b, reclor and wikihow.\r\n\r\nFor c4 I haven't tested the script and I think we'll require some optimizations regarding beam datasets before processing it.\r\n"],"created_at":1610029033000,"updated_at":1610458846000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"While working on dataset REAME generation script at https:\/\/github.com\/madlag\/datasets_readme_generator , I noticed that some datasets miss a dataset_infos.json : \r\n\r\n```\r\nc4\r\nlm1b\r\nreclor\r\nwikihow\r\n```\r\n\r\nAnd some does not have a dummy_data.zip : \r\n\r\n```\r\nkor_nli\r\nmath_dataset\r\nmlqa\r\nms_marco\r\nnewsgroup\r\nqa4mre\r\nqangaroo\r\nreddit_tifu\r\nsuper_glue\r\ntrivia_qa\r\nweb_of_science\r\nwmt14\r\nwmt15\r\nwmt16\r\nwmt17\r\nwmt18\r\nwmt19\r\nxtreme\r\n```\r\n\r\nBut it seems that some of those last do have a \"dummy\" directory .\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1701\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1700","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1700\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1700\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1700\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1700","id":781333589,"node_id":"MDExOlB1bGxSZXF1ZXN0NTUxMDc1NTg2","number":1700,"title":"Update Curiosity dialogs DatasetCard","user":{"login":"vineeths96","id":50873201,"node_id":"MDQ6VXNlcjUwODczMjAx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/50873201?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vineeths96","html_url":"https:\/\/github.com\/vineeths96","followers_url":"https:\/\/api.github.com\/users\/vineeths96\/followers","following_url":"https:\/\/api.github.com\/users\/vineeths96\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vineeths96\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vineeths96\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vineeths96\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vineeths96\/orgs","repos_url":"https:\/\/api.github.com\/users\/vineeths96\/repos","events_url":"https:\/\/api.github.com\/users\/vineeths96\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vineeths96\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1610027967000,"updated_at":1610477492000,"closed_at":1610477492000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1700","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1700","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1700.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1700.patch"},"body":"Update Curiosity dialogs DatasetCard\r\n\r\nThere are some entries in the data fields section yet to be filled. 
There is little information regarding those fields.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1700\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1699","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1699\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1699\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1699\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1699","id":781271558,"node_id":"MDExOlB1bGxSZXF1ZXN0NTUxMDIzODE5","number":1699,"title":"Update DBRD dataset card and download URL","user":{"login":"benjaminvdb","id":8875786,"node_id":"MDQ6VXNlcjg4NzU3ODY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8875786?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/benjaminvdb","html_url":"https:\/\/github.com\/benjaminvdb","followers_url":"https:\/\/api.github.com\/users\/benjaminvdb\/followers","following_url":"https:\/\/api.github.com\/users\/benjaminvdb\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/benjaminvdb\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/benjaminvdb\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/benjaminvdb\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/benjaminvdb\/orgs","repos_url":"https:\/\/api.github.com\/users\/benjaminvdb\/repos","events_url":"https:\/\/api.github.com\/users\/benjaminvdb\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/benjaminvdb\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["not sure why the CI was not triggered though"],"created_at":1610021803000,"updated_at":1610026899000,"closed_at":1610026859000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1699","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1699","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1699.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1699.patch"},"body":"I've added the Dutch Bood Review Dataset (DBRD) during the recent sprint. This pull request makes two minor changes:\r\n\r\n1. I'm changing the download URL from Google Drive to the dataset's GitHub release package. This is now possible because of PR #1316.\r\n2. I've updated the dataset card.\r\n\r\nCheers! 
\ud83d\ude04","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1699\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1698","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1698\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1698\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1698\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1698","id":781152561,"node_id":"MDExOlB1bGxSZXF1ZXN0NTUwOTI0ODQ3","number":1698,"title":"Update Coached Conv Pref DatasetCard","user":{"login":"vineeths96","id":50873201,"node_id":"MDQ6VXNlcjUwODczMjAx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/50873201?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vineeths96","html_url":"https:\/\/github.com\/vineeths96","followers_url":"https:\/\/api.github.com\/users\/vineeths96\/followers","following_url":"https:\/\/api.github.com\/users\/vineeths96\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vineeths96\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vineeths96\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vineeths96\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vineeths96\/orgs","repos_url":"https:\/\/api.github.com\/users\/vineeths96\/repos","events_url":"https:\/\/api.github.com\/users\/vineeths96\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vineeths96\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Really cool!\r\n\r\nCan you add some task tags for `dialogue-modeling` (under `sequence-modeling`) and `parsing` (under `structured-prediction`)?"],"created_at":1610010436000,"updated_at":1610125473000,"closed_at":1610125472000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1698","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1698","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1698.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1698.patch"},"body":"Update Coached Conversation Preferance DatasetCard","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1698\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1697","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1697\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1697\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1697\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1697","id":781126579,"node_id":"MDExOlB1bGxSZXF1ZXN0NTUwOTAzNzI5","number":1697,"title":"Update DialogRE 
DatasetCard","user":{"login":"vineeths96","id":50873201,"node_id":"MDQ6VXNlcjUwODczMjAx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/50873201?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vineeths96","html_url":"https:\/\/github.com\/vineeths96","followers_url":"https:\/\/api.github.com\/users\/vineeths96\/followers","following_url":"https:\/\/api.github.com\/users\/vineeths96\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vineeths96\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vineeths96\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vineeths96\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vineeths96\/orgs","repos_url":"https:\/\/api.github.com\/users\/vineeths96\/repos","events_url":"https:\/\/api.github.com\/users\/vineeths96\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vineeths96\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Same as #1698, can you add a task tag for dialogue-modeling (under sequence-modeling) :) ?"],"created_at":1610007753000,"updated_at":1610026468000,"closed_at":1610026468000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1697","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1697","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1697.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1697.patch"},"body":"Update the information in the dataset card for the Dialog RE dataset. ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1697\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1696","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1696\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1696\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1696\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1696","id":781096918,"node_id":"MDU6SXNzdWU3ODEwOTY5MTg=","number":1696,"title":"Unable to install 
datasets","user":{"login":"glee2429","id":12635475,"node_id":"MDQ6VXNlcjEyNjM1NDc1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12635475?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/glee2429","html_url":"https:\/\/github.com\/glee2429","followers_url":"https:\/\/api.github.com\/users\/glee2429\/followers","following_url":"https:\/\/api.github.com\/users\/glee2429\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/glee2429\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/glee2429\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/glee2429\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/glee2429\/orgs","repos_url":"https:\/\/api.github.com\/users\/glee2429\/repos","events_url":"https:\/\/api.github.com\/users\/glee2429\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/glee2429\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Maybe try to create a virtual env with python 3.8 or 3.7","Thanks, @thomwolf! I fixed the issue by downgrading python to 3.7. ","Damn sorry","Damn sorry"],"created_at":1610004277000,"updated_at":1610065985000,"closed_at":1610057165000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"** Edit **\r\nI believe there's a bug with the package when you're installing it with Python 3.9. I recommend sticking with previous versions. Thanks, @thomwolf for the insight! \r\n\r\n**Short description**\r\n\r\nI followed the instructions for installing datasets (https:\/\/huggingface.co\/docs\/datasets\/installation.html). However, while I tried to download datasets using `pip install datasets` I got a massive error message after getting stuck at \"Installing build dependencies...\" \r\n\r\nI was wondering if this problem can be fixed by creating a virtual environment, but it didn't help. Can anyone offer some advice on how to fix this issue? 
\r\n\r\nHere's an error message: \r\n\r\n`(env) Gas-MacBook-Pro:Downloads destiny$ pip install datasets\r\nCollecting datasets\r\n Using cached datasets-1.2.0-py3-none-any.whl (159 kB)\r\nCollecting numpy>=1.17\r\n Using cached numpy-1.19.5-cp39-cp39-macosx_10_9_x86_64.whl (15.6 MB)\r\nCollecting pyarrow>=0.17.1\r\n Using cached pyarrow-2.0.0.tar.gz (58.9 MB)\r\n....\r\n\r\n _configtest.c:9:5: warning: incompatible redeclaration of library function 'ceilf' [-Wincompatible-library-redeclaration]\r\n int ceilf (void);\r\n ^\r\n _configtest.c:9:5: note: 'ceilf' is a builtin with type 'float (float)'\r\n _configtest.c:10:5: warning: incompatible redeclaration of library function 'rintf' [-Wincompatible-library-redeclaration]\r\n int rintf (void);\r\n ^\r\n _configtest.c:10:5: note: 'rintf' is a builtin with type 'float (float)'\r\n _configtest.c:11:5: warning: incompatible redeclaration of library function 'truncf' [-Wincompatible-library-redeclaration]\r\n int truncf (void);\r\n ^\r\n _configtest.c:11:5: note: 'truncf' is a builtin with type 'float (float)'\r\n _configtest.c:12:5: warning: incompatible redeclaration of library function 'sqrtf' [-Wincompatible-library-redeclaration]\r\n int sqrtf (void);\r\n ^\r\n _configtest.c:12:5: note: 'sqrtf' is a builtin with type 'float (float)'\r\n _configtest.c:13:5: warning: incompatible redeclaration of library function 'log10f' [-Wincompatible-library-redeclaration]\r\n int log10f (void);\r\n ^\r\n _configtest.c:13:5: note: 'log10f' is a builtin with type 'float (float)'\r\n _configtest.c:14:5: warning: incompatible redeclaration of library function 'logf' [-Wincompatible-library-redeclaration]\r\n int logf (void);\r\n ^\r\n _configtest.c:14:5: note: 'logf' is a builtin with type 'float (float)'\r\n _configtest.c:15:5: warning: incompatible redeclaration of library function 'log1pf' [-Wincompatible-library-redeclaration]\r\n int log1pf (void);\r\n ^\r\n _configtest.c:15:5: note: 'log1pf' is a builtin with type 'float (float)'\r\n _configtest.c:16:5: warning: incompatible redeclaration of library function 'expf' [-Wincompatible-library-redeclaration]\r\n int expf (void);\r\n ^\r\n _configtest.c:16:5: note: 'expf' is a builtin with type 'float (float)'\r\n _configtest.c:17:5: warning: incompatible redeclaration of library function 'expm1f' [-Wincompatible-library-redeclaration]\r\n int expm1f (void);\r\n ^\r\n _configtest.c:17:5: note: 'expm1f' is a builtin with type 'float (float)'\r\n _configtest.c:18:5: warning: incompatible redeclaration of library function 'asinf' [-Wincompatible-library-redeclaration]\r\n int asinf (void);\r\n ^\r\n _configtest.c:18:5: note: 'asinf' is a builtin with type 'float (float)'\r\n _configtest.c:19:5: warning: incompatible redeclaration of library function 'acosf' [-Wincompatible-library-redeclaration]\r\n int acosf (void);\r\n ^\r\n _configtest.c:19:5: note: 'acosf' is a builtin with type 'float (float)'\r\n _configtest.c:20:5: warning: incompatible redeclaration of library function 'atanf' [-Wincompatible-library-redeclaration]\r\n int atanf (void);\r\n ^\r\n _configtest.c:20:5: note: 'atanf' is a builtin with type 'float (float)'\r\n _configtest.c:21:5: warning: incompatible redeclaration of library function 'asinhf' [-Wincompatible-library-redeclaration]\r\n int asinhf (void);\r\n ^\r\n _configtest.c:21:5: note: 'asinhf' is a builtin with type 'float (float)'\r\n _configtest.c:22:5: warning: incompatible redeclaration of library function 'acoshf' [-Wincompatible-library-redeclaration]\r\n int acoshf (void);\r\n 
^\r\n _configtest.c:22:5: note: 'acoshf' is a builtin with type 'float (float)'\r\n _configtest.c:23:5: warning: incompatible redeclaration of library function 'atanhf' [-Wincompatible-library-redeclaration]\r\n int atanhf (void);\r\n ^\r\n _configtest.c:23:5: note: 'atanhf' is a builtin with type 'float (float)'\r\n _configtest.c:24:5: warning: incompatible redeclaration of library function 'hypotf' [-Wincompatible-library-redeclaration]\r\n int hypotf (void);\r\n ^\r\n _configtest.c:24:5: note: 'hypotf' is a builtin with type 'float (float, float)'\r\n _configtest.c:25:5: warning: incompatible redeclaration of library function 'atan2f' [-Wincompatible-library-redeclaration]\r\n int atan2f (void);\r\n ^\r\n _configtest.c:25:5: note: 'atan2f' is a builtin with type 'float (float, float)'\r\n _configtest.c:26:5: warning: incompatible redeclaration of library function 'powf' [-Wincompatible-library-redeclaration]\r\n int powf (void);\r\n ^\r\n _configtest.c:26:5: note: 'powf' is a builtin with type 'float (float, float)'\r\n _configtest.c:27:5: warning: incompatible redeclaration of library function 'fmodf' [-Wincompatible-library-redeclaration]\r\n int fmodf (void);\r\n ^\r\n _configtest.c:27:5: note: 'fmodf' is a builtin with type 'float (float, float)'\r\n _configtest.c:28:5: warning: incompatible redeclaration of library function 'modff' [-Wincompatible-library-redeclaration]\r\n int modff (void);\r\n ^\r\n _configtest.c:28:5: note: 'modff' is a builtin with type 'float (float, float *)'\r\n _configtest.c:29:5: warning: incompatible redeclaration of library function 'frexpf' [-Wincompatible-library-redeclaration]\r\n int frexpf (void);\r\n ^\r\n _configtest.c:29:5: note: 'frexpf' is a builtin with type 'float (float, int *)'\r\n _configtest.c:30:5: warning: incompatible redeclaration of library function 'ldexpf' [-Wincompatible-library-redeclaration]\r\n int ldexpf (void);\r\n ^\r\n _configtest.c:30:5: note: 'ldexpf' is a builtin with type 'float (float, int)'\r\n _configtest.c:31:5: warning: incompatible redeclaration of library function 'exp2f' [-Wincompatible-library-redeclaration]\r\n int exp2f (void);\r\n ^\r\n _configtest.c:31:5: note: 'exp2f' is a builtin with type 'float (float)'\r\n _configtest.c:32:5: warning: incompatible redeclaration of library function 'log2f' [-Wincompatible-library-redeclaration]\r\n int log2f (void);\r\n ^\r\n _configtest.c:32:5: note: 'log2f' is a builtin with type 'float (float)'\r\n _configtest.c:33:5: warning: incompatible redeclaration of library function 'copysignf' [-Wincompatible-library-redeclaration]\r\n int copysignf (void);\r\n ^\r\n _configtest.c:33:5: note: 'copysignf' is a builtin with type 'float (float, float)'\r\n _configtest.c:34:5: warning: incompatible redeclaration of library function 'nextafterf' [-Wincompatible-library-redeclaration]\r\n int nextafterf (void);\r\n ^\r\n _configtest.c:34:5: note: 'nextafterf' is a builtin with type 'float (float, float)'\r\n _configtest.c:35:5: warning: incompatible redeclaration of library function 'cbrtf' [-Wincompatible-library-redeclaration]\r\n int cbrtf (void);\r\n ^\r\n _configtest.c:35:5: note: 'cbrtf' is a builtin with type 'float (float)'\r\n 35 warnings generated.\r\n clang _configtest.o -o _configtest\r\n success!\r\n removing: _configtest.c _configtest.o _configtest.o.d _configtest\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk 
-I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n compile options: '-Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -c'\r\n clang: _configtest.c\r\n _configtest.c:1:5: warning: incompatible redeclaration of library function 'sinl' [-Wincompatible-library-redeclaration]\r\n int sinl (void);\r\n ^\r\n _configtest.c:1:5: note: 'sinl' is a builtin with type 'long double (long double)'\r\n _configtest.c:2:5: warning: incompatible redeclaration of library function 'cosl' [-Wincompatible-library-redeclaration]\r\n int cosl (void);\r\n ^\r\n _configtest.c:2:5: note: 'cosl' is a builtin with type 'long double (long double)'\r\n _configtest.c:3:5: warning: incompatible redeclaration of library function 'tanl' [-Wincompatible-library-redeclaration]\r\n int tanl (void);\r\n ^\r\n _configtest.c:3:5: note: 'tanl' is a builtin with type 'long double (long double)'\r\n _configtest.c:4:5: warning: incompatible redeclaration of library function 'sinhl' [-Wincompatible-library-redeclaration]\r\n int sinhl (void);\r\n ^\r\n _configtest.c:4:5: note: 'sinhl' is a builtin with type 'long double (long double)'\r\n _configtest.c:5:5: warning: incompatible redeclaration of library function 'coshl' [-Wincompatible-library-redeclaration]\r\n int coshl (void);\r\n ^\r\n _configtest.c:5:5: note: 'coshl' is a builtin with type 'long double (long double)'\r\n _configtest.c:6:5: warning: incompatible redeclaration of library function 'tanhl' [-Wincompatible-library-redeclaration]\r\n int tanhl (void);\r\n ^\r\n _configtest.c:6:5: note: 'tanhl' is a builtin with type 'long double (long double)'\r\n _configtest.c:7:5: warning: incompatible redeclaration of library function 'fabsl' [-Wincompatible-library-redeclaration]\r\n int fabsl (void);\r\n ^\r\n _configtest.c:7:5: note: 'fabsl' is a builtin with type 'long double (long double)'\r\n _configtest.c:8:5: warning: incompatible redeclaration of library function 'floorl' [-Wincompatible-library-redeclaration]\r\n int floorl (void);\r\n ^\r\n _configtest.c:8:5: note: 'floorl' is a builtin with type 'long double (long double)'\r\n _configtest.c:9:5: warning: incompatible redeclaration of library function 'ceill' [-Wincompatible-library-redeclaration]\r\n int ceill (void);\r\n ^\r\n _configtest.c:9:5: note: 'ceill' is a builtin with type 'long double (long double)'\r\n _configtest.c:10:5: warning: incompatible redeclaration of library function 'rintl' [-Wincompatible-library-redeclaration]\r\n int rintl (void);\r\n ^\r\n _configtest.c:10:5: note: 'rintl' is a builtin with type 'long double (long double)'\r\n _configtest.c:11:5: warning: incompatible redeclaration of library function 'truncl' [-Wincompatible-library-redeclaration]\r\n int truncl (void);\r\n ^\r\n _configtest.c:11:5: note: 'truncl' is a builtin with type 'long double (long double)'\r\n _configtest.c:12:5: warning: incompatible redeclaration of library function 'sqrtl' [-Wincompatible-library-redeclaration]\r\n int sqrtl (void);\r\n ^\r\n _configtest.c:12:5: note: 'sqrtl' is a 
builtin with type 'long double (long double)'\r\n _configtest.c:13:5: warning: incompatible redeclaration of library function 'log10l' [-Wincompatible-library-redeclaration]\r\n int log10l (void);\r\n ^\r\n _configtest.c:13:5: note: 'log10l' is a builtin with type 'long double (long double)'\r\n _configtest.c:14:5: warning: incompatible redeclaration of library function 'logl' [-Wincompatible-library-redeclaration]\r\n int logl (void);\r\n ^\r\n _configtest.c:14:5: note: 'logl' is a builtin with type 'long double (long double)'\r\n _configtest.c:15:5: warning: incompatible redeclaration of library function 'log1pl' [-Wincompatible-library-redeclaration]\r\n int log1pl (void);\r\n ^\r\n _configtest.c:15:5: note: 'log1pl' is a builtin with type 'long double (long double)'\r\n _configtest.c:16:5: warning: incompatible redeclaration of library function 'expl' [-Wincompatible-library-redeclaration]\r\n int expl (void);\r\n ^\r\n _configtest.c:16:5: note: 'expl' is a builtin with type 'long double (long double)'\r\n _configtest.c:17:5: warning: incompatible redeclaration of library function 'expm1l' [-Wincompatible-library-redeclaration]\r\n int expm1l (void);\r\n ^\r\n _configtest.c:17:5: note: 'expm1l' is a builtin with type 'long double (long double)'\r\n _configtest.c:18:5: warning: incompatible redeclaration of library function 'asinl' [-Wincompatible-library-redeclaration]\r\n int asinl (void);\r\n ^\r\n _configtest.c:18:5: note: 'asinl' is a builtin with type 'long double (long double)'\r\n _configtest.c:19:5: warning: incompatible redeclaration of library function 'acosl' [-Wincompatible-library-redeclaration]\r\n int acosl (void);\r\n ^\r\n _configtest.c:19:5: note: 'acosl' is a builtin with type 'long double (long double)'\r\n _configtest.c:20:5: warning: incompatible redeclaration of library function 'atanl' [-Wincompatible-library-redeclaration]\r\n int atanl (void);\r\n ^\r\n _configtest.c:20:5: note: 'atanl' is a builtin with type 'long double (long double)'\r\n _configtest.c:21:5: warning: incompatible redeclaration of library function 'asinhl' [-Wincompatible-library-redeclaration]\r\n int asinhl (void);\r\n ^\r\n _configtest.c:21:5: note: 'asinhl' is a builtin with type 'long double (long double)'\r\n _configtest.c:22:5: warning: incompatible redeclaration of library function 'acoshl' [-Wincompatible-library-redeclaration]\r\n int acoshl (void);\r\n ^\r\n _configtest.c:22:5: note: 'acoshl' is a builtin with type 'long double (long double)'\r\n _configtest.c:23:5: warning: incompatible redeclaration of library function 'atanhl' [-Wincompatible-library-redeclaration]\r\n int atanhl (void);\r\n ^\r\n _configtest.c:23:5: note: 'atanhl' is a builtin with type 'long double (long double)'\r\n _configtest.c:24:5: warning: incompatible redeclaration of library function 'hypotl' [-Wincompatible-library-redeclaration]\r\n int hypotl (void);\r\n ^\r\n _configtest.c:24:5: note: 'hypotl' is a builtin with type 'long double (long double, long double)'\r\n _configtest.c:25:5: warning: incompatible redeclaration of library function 'atan2l' [-Wincompatible-library-redeclaration]\r\n int atan2l (void);\r\n ^\r\n _configtest.c:25:5: note: 'atan2l' is a builtin with type 'long double (long double, long double)'\r\n _configtest.c:26:5: warning: incompatible redeclaration of library function 'powl' [-Wincompatible-library-redeclaration]\r\n int powl (void);\r\n ^\r\n _configtest.c:26:5: note: 'powl' is a builtin with type 'long double (long double, long double)'\r\n _configtest.c:27:5: warning: 
incompatible redeclaration of library function 'fmodl' [-Wincompatible-library-redeclaration]\r\n int fmodl (void);\r\n ^\r\n _configtest.c:27:5: note: 'fmodl' is a builtin with type 'long double (long double, long double)'\r\n _configtest.c:28:5: warning: incompatible redeclaration of library function 'modfl' [-Wincompatible-library-redeclaration]\r\n int modfl (void);\r\n ^\r\n _configtest.c:28:5: note: 'modfl' is a builtin with type 'long double (long double, long double *)'\r\n _configtest.c:29:5: warning: incompatible redeclaration of library function 'frexpl' [-Wincompatible-library-redeclaration]\r\n int frexpl (void);\r\n ^\r\n _configtest.c:29:5: note: 'frexpl' is a builtin with type 'long double (long double, int *)'\r\n _configtest.c:30:5: warning: incompatible redeclaration of library function 'ldexpl' [-Wincompatible-library-redeclaration]\r\n int ldexpl (void);\r\n ^\r\n _configtest.c:30:5: note: 'ldexpl' is a builtin with type 'long double (long double, int)'\r\n _configtest.c:31:5: warning: incompatible redeclaration of library function 'exp2l' [-Wincompatible-library-redeclaration]\r\n int exp2l (void);\r\n ^\r\n _configtest.c:31:5: note: 'exp2l' is a builtin with type 'long double (long double)'\r\n _configtest.c:32:5: warning: incompatible redeclaration of library function 'log2l' [-Wincompatible-library-redeclaration]\r\n int log2l (void);\r\n ^\r\n _configtest.c:32:5: note: 'log2l' is a builtin with type 'long double (long double)'\r\n _configtest.c:33:5: warning: incompatible redeclaration of library function 'copysignl' [-Wincompatible-library-redeclaration]\r\n int copysignl (void);\r\n ^\r\n _configtest.c:33:5: note: 'copysignl' is a builtin with type 'long double (long double, long double)'\r\n _configtest.c:34:5: warning: incompatible redeclaration of library function 'nextafterl' [-Wincompatible-library-redeclaration]\r\n int nextafterl (void);\r\n ^\r\n _configtest.c:34:5: note: 'nextafterl' is a builtin with type 'long double (long double, long double)'\r\n _configtest.c:35:5: warning: incompatible redeclaration of library function 'cbrtl' [-Wincompatible-library-redeclaration]\r\n int cbrtl (void);\r\n ^\r\n _configtest.c:35:5: note: 'cbrtl' is a builtin with type 'long double (long double)'\r\n 35 warnings generated.\r\n clang _configtest.o -o _configtest\r\n success!\r\n removing: _configtest.c _configtest.o _configtest.o.d _configtest\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n compile options: '-Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -c'\r\n clang: _configtest.c\r\n success!\r\n removing: _configtest.c _configtest.o _configtest.o.d\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot 
\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n compile options: '-Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -c'\r\n clang: _configtest.c\r\n success!\r\n removing: _configtest.c _configtest.o _configtest.o.d\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n compile options: '-Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -c'\r\n clang: _configtest.c\r\n success!\r\n removing: _configtest.c _configtest.o _configtest.o.d\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n compile options: '-Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -c'\r\n clang: _configtest.c\r\n success!\r\n removing: _configtest.c _configtest.o _configtest.o.d\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n compile options: '-Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include 
-I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -c'\r\n clang: _configtest.c\r\n _configtest.c:8:12: error: use of undeclared identifier 'HAVE_DECL_SIGNBIT'\r\n (void) HAVE_DECL_SIGNBIT;\r\n ^\r\n 1 error generated.\r\n failure.\r\n removing: _configtest.c _configtest.o\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n compile options: '-Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -c'\r\n clang: _configtest.c\r\n success!\r\n removing: _configtest.c _configtest.o _configtest.o.d\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n compile options: '-Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -c'\r\n clang: _configtest.c\r\n success!\r\n removing: _configtest.c _configtest.o _configtest.o.d\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n compile options: '-Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -c'\r\n clang: _configtest.c\r\n success!\r\n removing: _configtest.c _configtest.o _configtest.o.d\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include 
-I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n compile options: '-Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -c'\r\n clang: _configtest.c\r\n success!\r\n removing: _configtest.c _configtest.o _configtest.o.d\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n compile options: '-Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -c'\r\n clang: _configtest.c\r\n removing: _configtest.c _configtest.o _configtest.o.d\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n compile options: '-Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -c'\r\n clang: _configtest.c\r\n removing: _configtest.c _configtest.o _configtest.o.d\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n compile options: '-Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -c'\r\n clang: _configtest.c\r\n removing: _configtest.c _configtest.o _configtest.o.d\r\n C compiler: 
clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n compile options: '-Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -c'\r\n clang: _configtest.c\r\n _configtest.c:1:5: warning: incompatible redeclaration of library function 'cabs' [-Wincompatible-library-redeclaration]\r\n int cabs (void);\r\n ^\r\n _configtest.c:1:5: note: 'cabs' is a builtin with type 'double (_Complex double)'\r\n _configtest.c:2:5: warning: incompatible redeclaration of library function 'cacos' [-Wincompatible-library-redeclaration]\r\n int cacos (void);\r\n ^\r\n _configtest.c:2:5: note: 'cacos' is a builtin with type '_Complex double (_Complex double)'\r\n _configtest.c:3:5: warning: incompatible redeclaration of library function 'cacosh' [-Wincompatible-library-redeclaration]\r\n int cacosh (void);\r\n ^\r\n _configtest.c:3:5: note: 'cacosh' is a builtin with type '_Complex double (_Complex double)'\r\n _configtest.c:4:5: warning: incompatible redeclaration of library function 'carg' [-Wincompatible-library-redeclaration]\r\n int carg (void);\r\n ^\r\n _configtest.c:4:5: note: 'carg' is a builtin with type 'double (_Complex double)'\r\n _configtest.c:5:5: warning: incompatible redeclaration of library function 'casin' [-Wincompatible-library-redeclaration]\r\n int casin (void);\r\n ^\r\n _configtest.c:5:5: note: 'casin' is a builtin with type '_Complex double (_Complex double)'\r\n _configtest.c:6:5: warning: incompatible redeclaration of library function 'casinh' [-Wincompatible-library-redeclaration]\r\n int casinh (void);\r\n ^\r\n _configtest.c:6:5: note: 'casinh' is a builtin with type '_Complex double (_Complex double)'\r\n _configtest.c:7:5: warning: incompatible redeclaration of library function 'catan' [-Wincompatible-library-redeclaration]\r\n int catan (void);\r\n ^\r\n _configtest.c:7:5: note: 'catan' is a builtin with type '_Complex double (_Complex double)'\r\n _configtest.c:8:5: warning: incompatible redeclaration of library function 'catanh' [-Wincompatible-library-redeclaration]\r\n int catanh (void);\r\n ^\r\n _configtest.c:8:5: note: 'catanh' is a builtin with type '_Complex double (_Complex double)'\r\n _configtest.c:9:5: warning: incompatible redeclaration of library function 'ccos' [-Wincompatible-library-redeclaration]\r\n int ccos (void);\r\n ^\r\n _configtest.c:9:5: note: 'ccos' is a builtin with type '_Complex double (_Complex double)'\r\n _configtest.c:10:5: warning: incompatible redeclaration of library function 'ccosh' [-Wincompatible-library-redeclaration]\r\n int ccosh (void);\r\n ^\r\n _configtest.c:10:5: note: 'ccosh' is a builtin with type '_Complex double (_Complex double)'\r\n _configtest.c:11:5: warning: incompatible redeclaration of library function 'cexp' [-Wincompatible-library-redeclaration]\r\n int cexp (void);\r\n ^\r\n _configtest.c:11:5: note: 'cexp' is a 
builtin with type '_Complex double (_Complex double)'\r\n _configtest.c:12:5: warning: incompatible redeclaration of library function 'cimag' [-Wincompatible-library-redeclaration]\r\n int cimag (void);\r\n ^\r\n _configtest.c:12:5: note: 'cimag' is a builtin with type 'double (_Complex double)'\r\n _configtest.c:13:5: warning: incompatible redeclaration of library function 'clog' [-Wincompatible-library-redeclaration]\r\n int clog (void);\r\n ^\r\n _configtest.c:13:5: note: 'clog' is a builtin with type '_Complex double (_Complex double)'\r\n _configtest.c:14:5: warning: incompatible redeclaration of library function 'conj' [-Wincompatible-library-redeclaration]\r\n int conj (void);\r\n ^\r\n _configtest.c:14:5: note: 'conj' is a builtin with type '_Complex double (_Complex double)'\r\n _configtest.c:15:5: warning: incompatible redeclaration of library function 'cpow' [-Wincompatible-library-redeclaration]\r\n int cpow (void);\r\n ^\r\n _configtest.c:15:5: note: 'cpow' is a builtin with type '_Complex double (_Complex double, _Complex double)'\r\n _configtest.c:16:5: warning: incompatible redeclaration of library function 'cproj' [-Wincompatible-library-redeclaration]\r\n int cproj (void);\r\n ^\r\n _configtest.c:16:5: note: 'cproj' is a builtin with type '_Complex double (_Complex double)'\r\n _configtest.c:17:5: warning: incompatible redeclaration of library function 'creal' [-Wincompatible-library-redeclaration]\r\n int creal (void);\r\n ^\r\n _configtest.c:17:5: note: 'creal' is a builtin with type 'double (_Complex double)'\r\n _configtest.c:18:5: warning: incompatible redeclaration of library function 'csin' [-Wincompatible-library-redeclaration]\r\n int csin (void);\r\n ^\r\n _configtest.c:18:5: note: 'csin' is a builtin with type '_Complex double (_Complex double)'\r\n _configtest.c:19:5: warning: incompatible redeclaration of library function 'csinh' [-Wincompatible-library-redeclaration]\r\n int csinh (void);\r\n ^\r\n _configtest.c:19:5: note: 'csinh' is a builtin with type '_Complex double (_Complex double)'\r\n _configtest.c:20:5: warning: incompatible redeclaration of library function 'csqrt' [-Wincompatible-library-redeclaration]\r\n int csqrt (void);\r\n ^\r\n _configtest.c:20:5: note: 'csqrt' is a builtin with type '_Complex double (_Complex double)'\r\n _configtest.c:21:5: warning: incompatible redeclaration of library function 'ctan' [-Wincompatible-library-redeclaration]\r\n int ctan (void);\r\n ^\r\n _configtest.c:21:5: note: 'ctan' is a builtin with type '_Complex double (_Complex double)'\r\n _configtest.c:22:5: warning: incompatible redeclaration of library function 'ctanh' [-Wincompatible-library-redeclaration]\r\n int ctanh (void);\r\n ^\r\n _configtest.c:22:5: note: 'ctanh' is a builtin with type '_Complex double (_Complex double)'\r\n 22 warnings generated.\r\n clang _configtest.o -o _configtest\r\n success!\r\n removing: _configtest.c _configtest.o _configtest.o.d _configtest\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n compile options: '-Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath 
-Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -c'\r\n clang: _configtest.c\r\n _configtest.c:1:5: warning: incompatible redeclaration of library function 'cabsf' [-Wincompatible-library-redeclaration]\r\n int cabsf (void);\r\n ^\r\n _configtest.c:1:5: note: 'cabsf' is a builtin with type 'float (_Complex float)'\r\n _configtest.c:2:5: warning: incompatible redeclaration of library function 'cacosf' [-Wincompatible-library-redeclaration]\r\n int cacosf (void);\r\n ^\r\n _configtest.c:2:5: note: 'cacosf' is a builtin with type '_Complex float (_Complex float)'\r\n _configtest.c:3:5: warning: incompatible redeclaration of library function 'cacoshf' [-Wincompatible-library-redeclaration]\r\n int cacoshf (void);\r\n ^\r\n _configtest.c:3:5: note: 'cacoshf' is a builtin with type '_Complex float (_Complex float)'\r\n _configtest.c:4:5: warning: incompatible redeclaration of library function 'cargf' [-Wincompatible-library-redeclaration]\r\n int cargf (void);\r\n ^\r\n _configtest.c:4:5: note: 'cargf' is a builtin with type 'float (_Complex float)'\r\n _configtest.c:5:5: warning: incompatible redeclaration of library function 'casinf' [-Wincompatible-library-redeclaration]\r\n int casinf (void);\r\n ^\r\n _configtest.c:5:5: note: 'casinf' is a builtin with type '_Complex float (_Complex float)'\r\n _configtest.c:6:5: warning: incompatible redeclaration of library function 'casinhf' [-Wincompatible-library-redeclaration]\r\n int casinhf (void);\r\n ^\r\n _configtest.c:6:5: note: 'casinhf' is a builtin with type '_Complex float (_Complex float)'\r\n _configtest.c:7:5: warning: incompatible redeclaration of library function 'catanf' [-Wincompatible-library-redeclaration]\r\n int catanf (void);\r\n ^\r\n _configtest.c:7:5: note: 'catanf' is a builtin with type '_Complex float (_Complex float)'\r\n _configtest.c:8:5: warning: incompatible redeclaration of library function 'catanhf' [-Wincompatible-library-redeclaration]\r\n int catanhf (void);\r\n ^\r\n _configtest.c:8:5: note: 'catanhf' is a builtin with type '_Complex float (_Complex float)'\r\n _configtest.c:9:5: warning: incompatible redeclaration of library function 'ccosf' [-Wincompatible-library-redeclaration]\r\n int ccosf (void);\r\n ^\r\n _configtest.c:9:5: note: 'ccosf' is a builtin with type '_Complex float (_Complex float)'\r\n _configtest.c:10:5: warning: incompatible redeclaration of library function 'ccoshf' [-Wincompatible-library-redeclaration]\r\n int ccoshf (void);\r\n ^\r\n _configtest.c:10:5: note: 'ccoshf' is a builtin with type '_Complex float (_Complex float)'\r\n _configtest.c:11:5: warning: incompatible redeclaration of library function 'cexpf' [-Wincompatible-library-redeclaration]\r\n int cexpf (void);\r\n ^\r\n _configtest.c:11:5: note: 'cexpf' is a builtin with type '_Complex float (_Complex float)'\r\n _configtest.c:12:5: warning: incompatible redeclaration of library function 'cimagf' [-Wincompatible-library-redeclaration]\r\n int cimagf (void);\r\n ^\r\n _configtest.c:12:5: note: 'cimagf' is a builtin with type 'float (_Complex float)'\r\n _configtest.c:13:5: warning: incompatible redeclaration of library function 'clogf' [-Wincompatible-library-redeclaration]\r\n int clogf (void);\r\n ^\r\n _configtest.c:13:5: note: 'clogf' is a builtin with type '_Complex float (_Complex float)'\r\n 
_configtest.c:14:5: warning: incompatible redeclaration of library function 'conjf' [-Wincompatible-library-redeclaration]\r\n int conjf (void);\r\n ^\r\n _configtest.c:14:5: note: 'conjf' is a builtin with type '_Complex float (_Complex float)'\r\n _configtest.c:15:5: warning: incompatible redeclaration of library function 'cpowf' [-Wincompatible-library-redeclaration]\r\n int cpowf (void);\r\n ^\r\n _configtest.c:15:5: note: 'cpowf' is a builtin with type '_Complex float (_Complex float, _Complex float)'\r\n _configtest.c:16:5: warning: incompatible redeclaration of library function 'cprojf' [-Wincompatible-library-redeclaration]\r\n int cprojf (void);\r\n ^\r\n _configtest.c:16:5: note: 'cprojf' is a builtin with type '_Complex float (_Complex float)'\r\n _configtest.c:17:5: warning: incompatible redeclaration of library function 'crealf' [-Wincompatible-library-redeclaration]\r\n int crealf (void);\r\n ^\r\n _configtest.c:17:5: note: 'crealf' is a builtin with type 'float (_Complex float)'\r\n _configtest.c:18:5: warning: incompatible redeclaration of library function 'csinf' [-Wincompatible-library-redeclaration]\r\n int csinf (void);\r\n ^\r\n _configtest.c:18:5: note: 'csinf' is a builtin with type '_Complex float (_Complex float)'\r\n _configtest.c:19:5: warning: incompatible redeclaration of library function 'csinhf' [-Wincompatible-library-redeclaration]\r\n int csinhf (void);\r\n ^\r\n _configtest.c:19:5: note: 'csinhf' is a builtin with type '_Complex float (_Complex float)'\r\n _configtest.c:20:5: warning: incompatible redeclaration of library function 'csqrtf' [-Wincompatible-library-redeclaration]\r\n int csqrtf (void);\r\n ^\r\n _configtest.c:20:5: note: 'csqrtf' is a builtin with type '_Complex float (_Complex float)'\r\n _configtest.c:21:5: warning: incompatible redeclaration of library function 'ctanf' [-Wincompatible-library-redeclaration]\r\n int ctanf (void);\r\n ^\r\n _configtest.c:21:5: note: 'ctanf' is a builtin with type '_Complex float (_Complex float)'\r\n _configtest.c:22:5: warning: incompatible redeclaration of library function 'ctanhf' [-Wincompatible-library-redeclaration]\r\n int ctanhf (void);\r\n ^\r\n _configtest.c:22:5: note: 'ctanhf' is a builtin with type '_Complex float (_Complex float)'\r\n 22 warnings generated.\r\n clang _configtest.o -o _configtest\r\n success!\r\n removing: _configtest.c _configtest.o _configtest.o.d _configtest\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n compile options: '-Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -c'\r\n clang: _configtest.c\r\n _configtest.c:1:5: warning: incompatible redeclaration of library function 'cabsl' [-Wincompatible-library-redeclaration]\r\n int cabsl (void);\r\n ^\r\n _configtest.c:1:5: note: 'cabsl' is a builtin with type 'long double (_Complex long 
double)'\r\n _configtest.c:2:5: warning: incompatible redeclaration of library function 'cacosl' [-Wincompatible-library-redeclaration]\r\n int cacosl (void);\r\n ^\r\n _configtest.c:2:5: note: 'cacosl' is a builtin with type '_Complex long double (_Complex long double)'\r\n _configtest.c:3:5: warning: incompatible redeclaration of library function 'cacoshl' [-Wincompatible-library-redeclaration]\r\n int cacoshl (void);\r\n ^\r\n _configtest.c:3:5: note: 'cacoshl' is a builtin with type '_Complex long double (_Complex long double)'\r\n _configtest.c:4:5: warning: incompatible redeclaration of library function 'cargl' [-Wincompatible-library-redeclaration]\r\n int cargl (void);\r\n ^\r\n _configtest.c:4:5: note: 'cargl' is a builtin with type 'long double (_Complex long double)'\r\n _configtest.c:5:5: warning: incompatible redeclaration of library function 'casinl' [-Wincompatible-library-redeclaration]\r\n int casinl (void);\r\n ^\r\n _configtest.c:5:5: note: 'casinl' is a builtin with type '_Complex long double (_Complex long double)'\r\n _configtest.c:6:5: warning: incompatible redeclaration of library function 'casinhl' [-Wincompatible-library-redeclaration]\r\n int casinhl (void);\r\n ^\r\n _configtest.c:6:5: note: 'casinhl' is a builtin with type '_Complex long double (_Complex long double)'\r\n _configtest.c:7:5: warning: incompatible redeclaration of library function 'catanl' [-Wincompatible-library-redeclaration]\r\n int catanl (void);\r\n ^\r\n _configtest.c:7:5: note: 'catanl' is a builtin with type '_Complex long double (_Complex long double)'\r\n _configtest.c:8:5: warning: incompatible redeclaration of library function 'catanhl' [-Wincompatible-library-redeclaration]\r\n int catanhl (void);\r\n ^\r\n _configtest.c:8:5: note: 'catanhl' is a builtin with type '_Complex long double (_Complex long double)'\r\n _configtest.c:9:5: warning: incompatible redeclaration of library function 'ccosl' [-Wincompatible-library-redeclaration]\r\n int ccosl (void);\r\n ^\r\n _configtest.c:9:5: note: 'ccosl' is a builtin with type '_Complex long double (_Complex long double)'\r\n _configtest.c:10:5: warning: incompatible redeclaration of library function 'ccoshl' [-Wincompatible-library-redeclaration]\r\n int ccoshl (void);\r\n ^\r\n _configtest.c:10:5: note: 'ccoshl' is a builtin with type '_Complex long double (_Complex long double)'\r\n _configtest.c:11:5: warning: incompatible redeclaration of library function 'cexpl' [-Wincompatible-library-redeclaration]\r\n int cexpl (void);\r\n ^\r\n _configtest.c:11:5: note: 'cexpl' is a builtin with type '_Complex long double (_Complex long double)'\r\n _configtest.c:12:5: warning: incompatible redeclaration of library function 'cimagl' [-Wincompatible-library-redeclaration]\r\n int cimagl (void);\r\n ^\r\n _configtest.c:12:5: note: 'cimagl' is a builtin with type 'long double (_Complex long double)'\r\n _configtest.c:13:5: warning: incompatible redeclaration of library function 'clogl' [-Wincompatible-library-redeclaration]\r\n int clogl (void);\r\n ^\r\n _configtest.c:13:5: note: 'clogl' is a builtin with type '_Complex long double (_Complex long double)'\r\n _configtest.c:14:5: warning: incompatible redeclaration of library function 'conjl' [-Wincompatible-library-redeclaration]\r\n int conjl (void);\r\n ^\r\n _configtest.c:14:5: note: 'conjl' is a builtin with type '_Complex long double (_Complex long double)'\r\n _configtest.c:15:5: warning: incompatible redeclaration of library function 'cpowl' [-Wincompatible-library-redeclaration]\r\n int cpowl 
(void);\r\n ^\r\n _configtest.c:15:5: note: 'cpowl' is a builtin with type '_Complex long double (_Complex long double, _Complex long double)'\r\n _configtest.c:16:5: warning: incompatible redeclaration of library function 'cprojl' [-Wincompatible-library-redeclaration]\r\n int cprojl (void);\r\n ^\r\n _configtest.c:16:5: note: 'cprojl' is a builtin with type '_Complex long double (_Complex long double)'\r\n _configtest.c:17:5: warning: incompatible redeclaration of library function 'creall' [-Wincompatible-library-redeclaration]\r\n int creall (void);\r\n ^\r\n _configtest.c:17:5: note: 'creall' is a builtin with type 'long double (_Complex long double)'\r\n _configtest.c:18:5: warning: incompatible redeclaration of library function 'csinl' [-Wincompatible-library-redeclaration]\r\n int csinl (void);\r\n ^\r\n _configtest.c:18:5: note: 'csinl' is a builtin with type '_Complex long double (_Complex long double)'\r\n _configtest.c:19:5: warning: incompatible redeclaration of library function 'csinhl' [-Wincompatible-library-redeclaration]\r\n int csinhl (void);\r\n ^\r\n _configtest.c:19:5: note: 'csinhl' is a builtin with type '_Complex long double (_Complex long double)'\r\n _configtest.c:20:5: warning: incompatible redeclaration of library function 'csqrtl' [-Wincompatible-library-redeclaration]\r\n int csqrtl (void);\r\n ^\r\n _configtest.c:20:5: note: 'csqrtl' is a builtin with type '_Complex long double (_Complex long double)'\r\n _configtest.c:21:5: warning: incompatible redeclaration of library function 'ctanl' [-Wincompatible-library-redeclaration]\r\n int ctanl (void);\r\n ^\r\n _configtest.c:21:5: note: 'ctanl' is a builtin with type '_Complex long double (_Complex long double)'\r\n _configtest.c:22:5: warning: incompatible redeclaration of library function 'ctanhl' [-Wincompatible-library-redeclaration]\r\n int ctanhl (void);\r\n ^\r\n _configtest.c:22:5: note: 'ctanhl' is a builtin with type '_Complex long double (_Complex long double)'\r\n 22 warnings generated.\r\n clang _configtest.o -o _configtest\r\n success!\r\n removing: _configtest.c _configtest.o _configtest.o.d _configtest\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n compile options: '-Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -c'\r\n clang: _configtest.c\r\n _configtest.c:2:12: warning: unused function 'static_func' [-Wunused-function]\r\n static int static_func (char * restrict a)\r\n ^\r\n 1 warning generated.\r\n success!\r\n removing: _configtest.c _configtest.o _configtest.o.d\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include 
-I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n compile options: '-Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -c'\r\n clang: _configtest.c\r\n _configtest.c:3:19: warning: unused function 'static_func' [-Wunused-function]\r\n static inline int static_func (void)\r\n ^\r\n 1 warning generated.\r\n success!\r\n removing: _configtest.c _configtest.o _configtest.o.d\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n compile options: '-Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -c'\r\n clang: _configtest.c\r\n removing: _configtest.c _configtest.o _configtest.o.d\r\n File: build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/include\/numpy\/config.h\r\n #define SIZEOF_PY_INTPTR_T 8\r\n #define SIZEOF_OFF_T 8\r\n #define SIZEOF_PY_LONG_LONG 8\r\n #define MATHLIB\r\n #define HAVE_SIN 1\r\n #define HAVE_COS 1\r\n #define HAVE_TAN 1\r\n #define HAVE_SINH 1\r\n #define HAVE_COSH 1\r\n #define HAVE_TANH 1\r\n #define HAVE_FABS 1\r\n #define HAVE_FLOOR 1\r\n #define HAVE_CEIL 1\r\n #define HAVE_SQRT 1\r\n #define HAVE_LOG10 1\r\n #define HAVE_LOG 1\r\n #define HAVE_EXP 1\r\n #define HAVE_ASIN 1\r\n #define HAVE_ACOS 1\r\n #define HAVE_ATAN 1\r\n #define HAVE_FMOD 1\r\n #define HAVE_MODF 1\r\n #define HAVE_FREXP 1\r\n #define HAVE_LDEXP 1\r\n #define HAVE_RINT 1\r\n #define HAVE_TRUNC 1\r\n #define HAVE_EXP2 1\r\n #define HAVE_LOG2 1\r\n #define HAVE_ATAN2 1\r\n #define HAVE_POW 1\r\n #define HAVE_NEXTAFTER 1\r\n #define HAVE_STRTOLL 1\r\n #define HAVE_STRTOULL 1\r\n #define HAVE_CBRT 1\r\n #define HAVE_STRTOLD_L 1\r\n #define HAVE_BACKTRACE 1\r\n #define HAVE_MADVISE 1\r\n #define HAVE_XMMINTRIN_H 1\r\n #define HAVE_EMMINTRIN_H 1\r\n #define HAVE_XLOCALE_H 1\r\n #define HAVE_DLFCN_H 1\r\n #define HAVE_SYS_MMAN_H 1\r\n #define HAVE___BUILTIN_ISNAN 1\r\n #define HAVE___BUILTIN_ISINF 1\r\n #define HAVE___BUILTIN_ISFINITE 1\r\n #define HAVE___BUILTIN_BSWAP32 1\r\n #define HAVE___BUILTIN_BSWAP64 1\r\n #define HAVE___BUILTIN_EXPECT 1\r\n #define HAVE___BUILTIN_MUL_OVERFLOW 1\r\n #define HAVE___BUILTIN_CPU_SUPPORTS 1\r\n #define HAVE__M_FROM_INT64 1\r\n #define HAVE__MM_LOAD_PS 1\r\n #define HAVE__MM_PREFETCH 1\r\n #define HAVE__MM_LOAD_PD 1\r\n #define HAVE___BUILTIN_PREFETCH 1\r\n #define HAVE_LINK_AVX 1\r\n #define HAVE_LINK_AVX2 1\r\n #define HAVE_XGETBV 1\r\n #define HAVE_ATTRIBUTE_NONNULL 1\r\n #define 
HAVE_ATTRIBUTE_TARGET_AVX 1\r\n #define HAVE_ATTRIBUTE_TARGET_AVX2 1\r\n #define HAVE___THREAD 1\r\n #define HAVE_SINF 1\r\n #define HAVE_COSF 1\r\n #define HAVE_TANF 1\r\n #define HAVE_SINHF 1\r\n #define HAVE_COSHF 1\r\n #define HAVE_TANHF 1\r\n #define HAVE_FABSF 1\r\n #define HAVE_FLOORF 1\r\n #define HAVE_CEILF 1\r\n #define HAVE_RINTF 1\r\n #define HAVE_TRUNCF 1\r\n #define HAVE_SQRTF 1\r\n #define HAVE_LOG10F 1\r\n #define HAVE_LOGF 1\r\n #define HAVE_LOG1PF 1\r\n #define HAVE_EXPF 1\r\n #define HAVE_EXPM1F 1\r\n #define HAVE_ASINF 1\r\n #define HAVE_ACOSF 1\r\n #define HAVE_ATANF 1\r\n #define HAVE_ASINHF 1\r\n #define HAVE_ACOSHF 1\r\n #define HAVE_ATANHF 1\r\n #define HAVE_HYPOTF 1\r\n #define HAVE_ATAN2F 1\r\n #define HAVE_POWF 1\r\n #define HAVE_FMODF 1\r\n #define HAVE_MODFF 1\r\n #define HAVE_FREXPF 1\r\n #define HAVE_LDEXPF 1\r\n #define HAVE_EXP2F 1\r\n #define HAVE_LOG2F 1\r\n #define HAVE_COPYSIGNF 1\r\n #define HAVE_NEXTAFTERF 1\r\n #define HAVE_CBRTF 1\r\n #define HAVE_SINL 1\r\n #define HAVE_COSL 1\r\n #define HAVE_TANL 1\r\n #define HAVE_SINHL 1\r\n #define HAVE_COSHL 1\r\n #define HAVE_TANHL 1\r\n #define HAVE_FABSL 1\r\n #define HAVE_FLOORL 1\r\n #define HAVE_CEILL 1\r\n #define HAVE_RINTL 1\r\n #define HAVE_TRUNCL 1\r\n #define HAVE_SQRTL 1\r\n #define HAVE_LOG10L 1\r\n #define HAVE_LOGL 1\r\n #define HAVE_LOG1PL 1\r\n #define HAVE_EXPL 1\r\n #define HAVE_EXPM1L 1\r\n #define HAVE_ASINL 1\r\n #define HAVE_ACOSL 1\r\n #define HAVE_ATANL 1\r\n #define HAVE_ASINHL 1\r\n #define HAVE_ACOSHL 1\r\n #define HAVE_ATANHL 1\r\n #define HAVE_HYPOTL 1\r\n #define HAVE_ATAN2L 1\r\n #define HAVE_POWL 1\r\n #define HAVE_FMODL 1\r\n #define HAVE_MODFL 1\r\n #define HAVE_FREXPL 1\r\n #define HAVE_LDEXPL 1\r\n #define HAVE_EXP2L 1\r\n #define HAVE_LOG2L 1\r\n #define HAVE_COPYSIGNL 1\r\n #define HAVE_NEXTAFTERL 1\r\n #define HAVE_CBRTL 1\r\n #define HAVE_DECL_SIGNBIT\r\n #define HAVE_COMPLEX_H 1\r\n #define HAVE_CABS 1\r\n #define HAVE_CACOS 1\r\n #define HAVE_CACOSH 1\r\n #define HAVE_CARG 1\r\n #define HAVE_CASIN 1\r\n #define HAVE_CASINH 1\r\n #define HAVE_CATAN 1\r\n #define HAVE_CATANH 1\r\n #define HAVE_CCOS 1\r\n #define HAVE_CCOSH 1\r\n #define HAVE_CEXP 1\r\n #define HAVE_CIMAG 1\r\n #define HAVE_CLOG 1\r\n #define HAVE_CONJ 1\r\n #define HAVE_CPOW 1\r\n #define HAVE_CPROJ 1\r\n #define HAVE_CREAL 1\r\n #define HAVE_CSIN 1\r\n #define HAVE_CSINH 1\r\n #define HAVE_CSQRT 1\r\n #define HAVE_CTAN 1\r\n #define HAVE_CTANH 1\r\n #define HAVE_CABSF 1\r\n #define HAVE_CACOSF 1\r\n #define HAVE_CACOSHF 1\r\n #define HAVE_CARGF 1\r\n #define HAVE_CASINF 1\r\n #define HAVE_CASINHF 1\r\n #define HAVE_CATANF 1\r\n #define HAVE_CATANHF 1\r\n #define HAVE_CCOSF 1\r\n #define HAVE_CCOSHF 1\r\n #define HAVE_CEXPF 1\r\n #define HAVE_CIMAGF 1\r\n #define HAVE_CLOGF 1\r\n #define HAVE_CONJF 1\r\n #define HAVE_CPOWF 1\r\n #define HAVE_CPROJF 1\r\n #define HAVE_CREALF 1\r\n #define HAVE_CSINF 1\r\n #define HAVE_CSINHF 1\r\n #define HAVE_CSQRTF 1\r\n #define HAVE_CTANF 1\r\n #define HAVE_CTANHF 1\r\n #define HAVE_CABSL 1\r\n #define HAVE_CACOSL 1\r\n #define HAVE_CACOSHL 1\r\n #define HAVE_CARGL 1\r\n #define HAVE_CASINL 1\r\n #define HAVE_CASINHL 1\r\n #define HAVE_CATANL 1\r\n #define HAVE_CATANHL 1\r\n #define HAVE_CCOSL 1\r\n #define HAVE_CCOSHL 1\r\n #define HAVE_CEXPL 1\r\n #define HAVE_CIMAGL 1\r\n #define HAVE_CLOGL 1\r\n #define HAVE_CONJL 1\r\n #define HAVE_CPOWL 1\r\n #define HAVE_CPROJL 1\r\n #define HAVE_CREALL 1\r\n #define HAVE_CSINL 1\r\n #define HAVE_CSINHL 1\r\n #define 
HAVE_CSQRTL 1\r\n #define HAVE_CTANL 1\r\n #define HAVE_CTANHL 1\r\n #define NPY_RESTRICT restrict\r\n #define NPY_RELAXED_STRIDES_CHECKING 1\r\n #define HAVE_LDOUBLE_INTEL_EXTENDED_16_BYTES_LE 1\r\n #define NPY_PY3K 1\r\n #ifndef __cplusplus\r\n \/* #undef inline *\/\r\n #endif\r\n \r\n #ifndef _NPY_NPY_CONFIG_H_\r\n #error config.h should never be included directly, include npy_config.h instead\r\n #endif\r\n \r\n EOF\r\n adding 'build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/include\/numpy\/config.h' to sources.\r\n Generating build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/include\/numpy\/_numpyconfig.h\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n compile options: '-Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -c'\r\n clang: _configtest.c\r\n _configtest.c:1:5: warning: incompatible redeclaration of library function 'exp' [-Wincompatible-library-redeclaration]\r\n int exp (void);\r\n ^\r\n _configtest.c:1:5: note: 'exp' is a builtin with type 'double (double)'\r\n 1 warning generated.\r\n clang _configtest.o -o _configtest\r\n success!\r\n removing: _configtest.c _configtest.o _configtest.o.d _configtest\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n compile options: '-Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -c'\r\n clang: _configtest.c\r\n success!\r\n removing: _configtest.c _configtest.o _configtest.o.d\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n compile options: '-Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include 
-I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -c'\r\n clang: _configtest.c\r\n success!\r\n removing: _configtest.c _configtest.o _configtest.o.d\r\n File: build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/include\/numpy\/_numpyconfig.h\r\n #define NPY_SIZEOF_SHORT SIZEOF_SHORT\r\n #define NPY_SIZEOF_INT SIZEOF_INT\r\n #define NPY_SIZEOF_LONG SIZEOF_LONG\r\n #define NPY_SIZEOF_FLOAT 4\r\n #define NPY_SIZEOF_COMPLEX_FLOAT 8\r\n #define NPY_SIZEOF_DOUBLE 8\r\n #define NPY_SIZEOF_COMPLEX_DOUBLE 16\r\n #define NPY_SIZEOF_LONGDOUBLE 16\r\n #define NPY_SIZEOF_COMPLEX_LONGDOUBLE 32\r\n #define NPY_SIZEOF_PY_INTPTR_T 8\r\n #define NPY_SIZEOF_OFF_T 8\r\n #define NPY_SIZEOF_PY_LONG_LONG 8\r\n #define NPY_SIZEOF_LONGLONG 8\r\n #define NPY_NO_SMP 0\r\n #define NPY_HAVE_DECL_ISNAN\r\n #define NPY_HAVE_DECL_ISINF\r\n #define NPY_HAVE_DECL_ISFINITE\r\n #define NPY_HAVE_DECL_SIGNBIT\r\n #define NPY_USE_C99_COMPLEX 1\r\n #define NPY_HAVE_COMPLEX_DOUBLE 1\r\n #define NPY_HAVE_COMPLEX_FLOAT 1\r\n #define NPY_HAVE_COMPLEX_LONG_DOUBLE 1\r\n #define NPY_RELAXED_STRIDES_CHECKING 1\r\n #define NPY_USE_C99_FORMATS 1\r\n #define NPY_VISIBILITY_HIDDEN __attribute__((visibility(\"hidden\")))\r\n #define NPY_ABI_VERSION 0x01000009\r\n #define NPY_API_VERSION 0x0000000D\r\n \r\n #ifndef __STDC_FORMAT_MACROS\r\n #define __STDC_FORMAT_MACROS 1\r\n #endif\r\n \r\n EOF\r\n adding 'build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/include\/numpy\/_numpyconfig.h' to sources.\r\n executing numpy\/core\/code_generators\/generate_numpy_api.py\r\n adding 'build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/include\/numpy\/__multiarray_api.h' to sources.\r\n numpy.core - nothing done with h_files = ['build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/include\/numpy\/config.h', 'build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/include\/numpy\/_numpyconfig.h', 'build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/include\/numpy\/__multiarray_api.h']\r\n building extension \"numpy.core._multiarray_tests\" sources\r\n creating build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\r\n conv_template:> build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/_multiarray_tests.c\r\n building extension \"numpy.core._multiarray_umath\" sources\r\n adding 'build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/include\/numpy\/config.h' to sources.\r\n adding 'build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/include\/numpy\/_numpyconfig.h' to sources.\r\n executing numpy\/core\/code_generators\/generate_numpy_api.py\r\n adding 'build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/include\/numpy\/__multiarray_api.h' to sources.\r\n executing numpy\/core\/code_generators\/generate_ufunc_api.py\r\n adding 'build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/include\/numpy\/__ufunc_api.h' to sources.\r\n conv_template:> build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/arraytypes.c\r\n conv_template:> build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/einsum.c\r\n conv_template:> build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/lowlevel_strided_loops.c\r\n conv_template:> build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/nditer_templ.c\r\n conv_template:> build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/scalartypes.c\r\n creating build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\r\n conv_template:> 
build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/funcs.inc\r\n adding 'build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath' to include_dirs.\r\n conv_template:> build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/simd.inc\r\n conv_template:> build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/loops.h\r\n conv_template:> build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/loops.c\r\n conv_template:> build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/matmul.h\r\n conv_template:> build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/matmul.c\r\n conv_template:> build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/scalarmath.c\r\n adding 'build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath' to include_dirs.\r\n conv_template:> build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common\/templ_common.h\r\n adding 'build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common' to include_dirs.\r\n numpy.core - nothing done with h_files = ['build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/funcs.inc', 'build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/simd.inc', 'build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/loops.h', 'build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/matmul.h', 'build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath\/npy_math_internal.h', 'build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common\/templ_common.h', 'build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/include\/numpy\/config.h', 'build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/include\/numpy\/_numpyconfig.h', 'build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/include\/numpy\/__multiarray_api.h', 'build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/include\/numpy\/__ufunc_api.h']\r\n building extension \"numpy.core._umath_tests\" sources\r\n conv_template:> build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/_umath_tests.c\r\n building extension \"numpy.core._rational_tests\" sources\r\n conv_template:> build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/_rational_tests.c\r\n building extension \"numpy.core._struct_ufunc_tests\" sources\r\n conv_template:> build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/_struct_ufunc_tests.c\r\n building extension \"numpy.core._operand_flag_tests\" sources\r\n conv_template:> build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/_operand_flag_tests.c\r\n building extension \"numpy.fft.fftpack_lite\" sources\r\n building extension \"numpy.linalg.lapack_lite\" sources\r\n creating build\/src.macosx-10.15-x86_64-3.9\/numpy\/linalg\r\n adding 'numpy\/linalg\/lapack_lite\/python_xerbla.c' to sources.\r\n building extension \"numpy.linalg._umath_linalg\" sources\r\n adding 'numpy\/linalg\/lapack_lite\/python_xerbla.c' to sources.\r\n conv_template:> build\/src.macosx-10.15-x86_64-3.9\/numpy\/linalg\/umath_linalg.c\r\n building extension \"numpy.random.mtrand\" sources\r\n creating build\/src.macosx-10.15-x86_64-3.9\/numpy\/random\r\n building data_files sources\r\n build_src: building npy-pkg config files\r\n running build_py\r\n creating build\/lib.macosx-10.15-x86_64-3.9\r\n creating build\/lib.macosx-10.15-x86_64-3.9\/numpy\r\n copying numpy\/conftest.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\r\n copying numpy\/version.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\r\n copying numpy\/_globals.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\r\n copying numpy\/__init__.py -> 
build\/lib.macosx-10.15-x86_64-3.9\/numpy\r\n copying numpy\/dual.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\r\n copying numpy\/_distributor_init.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\r\n copying numpy\/setup.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\r\n copying numpy\/ctypeslib.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\r\n copying numpy\/matlib.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\r\n copying numpy\/_pytesttester.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\r\n copying build\/src.macosx-10.15-x86_64-3.9\/numpy\/__config__.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\r\n creating build\/lib.macosx-10.15-x86_64-3.9\/numpy\/compat\r\n copying numpy\/compat\/py3k.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/compat\r\n copying numpy\/compat\/__init__.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/compat\r\n copying numpy\/compat\/setup.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/compat\r\n copying numpy\/compat\/_inspect.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/compat\r\n creating build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\r\n copying numpy\/core\/umath.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\r\n copying numpy\/core\/fromnumeric.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\r\n copying numpy\/core\/_dtype.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\r\n copying numpy\/core\/_add_newdocs.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\r\n copying numpy\/core\/_methods.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\r\n copying numpy\/core\/_internal.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\r\n copying numpy\/core\/_string_helpers.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\r\n copying numpy\/core\/multiarray.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\r\n copying numpy\/core\/records.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\r\n copying numpy\/core\/__init__.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\r\n copying numpy\/core\/setup_common.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\r\n copying numpy\/core\/_aliased_types.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\r\n copying numpy\/core\/memmap.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\r\n copying numpy\/core\/overrides.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\r\n copying numpy\/core\/getlimits.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\r\n copying numpy\/core\/_dtype_ctypes.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\r\n copying numpy\/core\/defchararray.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\r\n copying numpy\/core\/shape_base.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\r\n copying numpy\/core\/machar.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\r\n copying numpy\/core\/setup.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\r\n copying numpy\/core\/numeric.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\r\n copying numpy\/core\/function_base.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\r\n copying numpy\/core\/einsumfunc.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\r\n copying numpy\/core\/umath_tests.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\r\n copying numpy\/core\/info.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\r\n copying numpy\/core\/numerictypes.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\r\n copying numpy\/core\/_type_aliases.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\r\n copying 
numpy\/core\/cversions.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\r\n copying numpy\/core\/arrayprint.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\r\n copying numpy\/core\/code_generators\/generate_numpy_api.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\r\n creating build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\r\n copying numpy\/distutils\/unixccompiler.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\r\n copying numpy\/distutils\/numpy_distribution.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\r\n copying numpy\/distutils\/conv_template.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\r\n copying numpy\/distutils\/cpuinfo.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\r\n copying numpy\/distutils\/ccompiler.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\r\n copying numpy\/distutils\/msvc9compiler.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\r\n copying numpy\/distutils\/npy_pkg_config.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\r\n copying numpy\/distutils\/compat.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\r\n copying numpy\/distutils\/misc_util.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\r\n copying numpy\/distutils\/log.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\r\n copying numpy\/distutils\/line_endings.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\r\n copying numpy\/distutils\/lib2def.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\r\n copying numpy\/distutils\/pathccompiler.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\r\n copying numpy\/distutils\/system_info.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\r\n copying numpy\/distutils\/__init__.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\r\n copying numpy\/distutils\/core.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\r\n copying numpy\/distutils\/__version__.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\r\n copying numpy\/distutils\/exec_command.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\r\n copying numpy\/distutils\/from_template.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\r\n copying numpy\/distutils\/mingw32ccompiler.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\r\n copying numpy\/distutils\/setup.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\r\n copying numpy\/distutils\/extension.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\r\n copying numpy\/distutils\/msvccompiler.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\r\n copying numpy\/distutils\/intelccompiler.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\r\n copying numpy\/distutils\/info.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\r\n copying build\/src.macosx-10.15-x86_64-3.9\/numpy\/distutils\/__config__.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\r\n creating build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/command\r\n copying numpy\/distutils\/command\/build.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/command\r\n copying numpy\/distutils\/command\/config_compiler.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/command\r\n copying numpy\/distutils\/command\/build_ext.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/command\r\n copying numpy\/distutils\/command\/config.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/command\r\n copying 
numpy\/distutils\/command\/install_headers.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/command\r\n copying numpy\/distutils\/command\/build_py.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/command\r\n copying numpy\/distutils\/command\/build_src.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/command\r\n copying numpy\/distutils\/command\/__init__.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/command\r\n copying numpy\/distutils\/command\/sdist.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/command\r\n copying numpy\/distutils\/command\/build_scripts.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/command\r\n copying numpy\/distutils\/command\/bdist_rpm.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/command\r\n copying numpy\/distutils\/command\/install_clib.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/command\r\n copying numpy\/distutils\/command\/build_clib.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/command\r\n copying numpy\/distutils\/command\/autodist.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/command\r\n copying numpy\/distutils\/command\/egg_info.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/command\r\n copying numpy\/distutils\/command\/install.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/command\r\n copying numpy\/distutils\/command\/develop.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/command\r\n copying numpy\/distutils\/command\/install_data.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/command\r\n creating build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/fcompiler\r\n copying numpy\/distutils\/fcompiler\/gnu.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/fcompiler\r\n copying numpy\/distutils\/fcompiler\/compaq.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/fcompiler\r\n copying numpy\/distutils\/fcompiler\/intel.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/fcompiler\r\n copying numpy\/distutils\/fcompiler\/none.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/fcompiler\r\n copying numpy\/distutils\/fcompiler\/nag.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/fcompiler\r\n copying numpy\/distutils\/fcompiler\/pg.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/fcompiler\r\n copying numpy\/distutils\/fcompiler\/ibm.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/fcompiler\r\n copying numpy\/distutils\/fcompiler\/sun.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/fcompiler\r\n copying numpy\/distutils\/fcompiler\/lahey.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/fcompiler\r\n copying numpy\/distutils\/fcompiler\/__init__.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/fcompiler\r\n copying numpy\/distutils\/fcompiler\/g95.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/fcompiler\r\n copying numpy\/distutils\/fcompiler\/mips.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/fcompiler\r\n copying numpy\/distutils\/fcompiler\/hpux.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/fcompiler\r\n copying numpy\/distutils\/fcompiler\/environment.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/fcompiler\r\n copying numpy\/distutils\/fcompiler\/pathf95.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/fcompiler\r\n copying numpy\/distutils\/fcompiler\/absoft.py -> 
build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/fcompiler\r\n copying numpy\/distutils\/fcompiler\/vast.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/distutils\/fcompiler\r\n creating build\/lib.macosx-10.15-x86_64-3.9\/numpy\/doc\r\n copying numpy\/doc\/misc.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/doc\r\n copying numpy\/doc\/internals.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/doc\r\n copying numpy\/doc\/creation.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/doc\r\n copying numpy\/doc\/constants.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/doc\r\n copying numpy\/doc\/ufuncs.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/doc\r\n copying numpy\/doc\/__init__.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/doc\r\n copying numpy\/doc\/broadcasting.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/doc\r\n copying numpy\/doc\/basics.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/doc\r\n copying numpy\/doc\/subclassing.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/doc\r\n copying numpy\/doc\/indexing.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/doc\r\n copying numpy\/doc\/byteswapping.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/doc\r\n copying numpy\/doc\/structured_arrays.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/doc\r\n copying numpy\/doc\/glossary.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/doc\r\n creating build\/lib.macosx-10.15-x86_64-3.9\/numpy\/f2py\r\n copying numpy\/f2py\/cfuncs.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/f2py\r\n copying numpy\/f2py\/common_rules.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/f2py\r\n copying numpy\/f2py\/crackfortran.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/f2py\r\n copying numpy\/f2py\/cb_rules.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/f2py\r\n copying numpy\/f2py\/__init__.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/f2py\r\n copying numpy\/f2py\/rules.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/f2py\r\n copying numpy\/f2py\/f2py2e.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/f2py\r\n copying numpy\/f2py\/func2subr.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/f2py\r\n copying numpy\/f2py\/__version__.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/f2py\r\n copying numpy\/f2py\/diagnose.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/f2py\r\n copying numpy\/f2py\/setup.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/f2py\r\n copying numpy\/f2py\/capi_maps.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/f2py\r\n copying numpy\/f2py\/f90mod_rules.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/f2py\r\n copying numpy\/f2py\/f2py_testing.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/f2py\r\n copying numpy\/f2py\/use_rules.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/f2py\r\n copying numpy\/f2py\/info.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/f2py\r\n copying numpy\/f2py\/auxfuncs.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/f2py\r\n copying numpy\/f2py\/__main__.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/f2py\r\n creating build\/lib.macosx-10.15-x86_64-3.9\/numpy\/fft\r\n copying numpy\/fft\/__init__.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/fft\r\n copying numpy\/fft\/setup.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/fft\r\n copying numpy\/fft\/helper.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/fft\r\n copying numpy\/fft\/fftpack.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/fft\r\n copying numpy\/fft\/info.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/fft\r\n creating 
build\/lib.macosx-10.15-x86_64-3.9\/numpy\/lib\r\n copying numpy\/lib\/_iotools.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/lib\r\n copying numpy\/lib\/mixins.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/lib\r\n copying numpy\/lib\/nanfunctions.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/lib\r\n copying numpy\/lib\/recfunctions.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/lib\r\n copying numpy\/lib\/histograms.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/lib\r\n copying numpy\/lib\/scimath.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/lib\r\n copying numpy\/lib\/_version.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/lib\r\n copying numpy\/lib\/user_array.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/lib\r\n copying numpy\/lib\/__init__.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/lib\r\n copying numpy\/lib\/format.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/lib\r\n copying numpy\/lib\/twodim_base.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/lib\r\n copying numpy\/lib\/financial.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/lib\r\n copying numpy\/lib\/index_tricks.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/lib\r\n copying numpy\/lib\/npyio.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/lib\r\n copying numpy\/lib\/shape_base.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/lib\r\n copying numpy\/lib\/setup.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/lib\r\n copying numpy\/lib\/stride_tricks.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/lib\r\n copying numpy\/lib\/utils.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/lib\r\n copying numpy\/lib\/arrayterator.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/lib\r\n copying numpy\/lib\/function_base.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/lib\r\n copying numpy\/lib\/arraysetops.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/lib\r\n copying numpy\/lib\/arraypad.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/lib\r\n copying numpy\/lib\/type_check.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/lib\r\n copying numpy\/lib\/info.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/lib\r\n copying numpy\/lib\/polynomial.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/lib\r\n copying numpy\/lib\/_datasource.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/lib\r\n copying numpy\/lib\/ufunclike.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/lib\r\n creating build\/lib.macosx-10.15-x86_64-3.9\/numpy\/linalg\r\n copying numpy\/linalg\/__init__.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/linalg\r\n copying numpy\/linalg\/setup.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/linalg\r\n copying numpy\/linalg\/linalg.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/linalg\r\n copying numpy\/linalg\/info.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/linalg\r\n creating build\/lib.macosx-10.15-x86_64-3.9\/numpy\/ma\r\n copying numpy\/ma\/extras.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/ma\r\n copying numpy\/ma\/version.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/ma\r\n copying numpy\/ma\/testutils.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/ma\r\n copying numpy\/ma\/__init__.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/ma\r\n copying numpy\/ma\/core.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/ma\r\n copying numpy\/ma\/bench.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/ma\r\n copying numpy\/ma\/setup.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/ma\r\n copying numpy\/ma\/timer_comparison.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/ma\r\n 
copying numpy\/ma\/mrecords.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/ma\r\n creating build\/lib.macosx-10.15-x86_64-3.9\/numpy\/matrixlib\r\n copying numpy\/matrixlib\/__init__.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/matrixlib\r\n copying numpy\/matrixlib\/setup.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/matrixlib\r\n copying numpy\/matrixlib\/defmatrix.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/matrixlib\r\n creating build\/lib.macosx-10.15-x86_64-3.9\/numpy\/polynomial\r\n copying numpy\/polynomial\/laguerre.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/polynomial\r\n copying numpy\/polynomial\/_polybase.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/polynomial\r\n copying numpy\/polynomial\/polyutils.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/polynomial\r\n copying numpy\/polynomial\/__init__.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/polynomial\r\n copying numpy\/polynomial\/setup.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/polynomial\r\n copying numpy\/polynomial\/hermite_e.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/polynomial\r\n copying numpy\/polynomial\/chebyshev.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/polynomial\r\n copying numpy\/polynomial\/polynomial.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/polynomial\r\n copying numpy\/polynomial\/legendre.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/polynomial\r\n copying numpy\/polynomial\/hermite.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/polynomial\r\n creating build\/lib.macosx-10.15-x86_64-3.9\/numpy\/random\r\n copying numpy\/random\/__init__.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/random\r\n copying numpy\/random\/setup.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/random\r\n copying numpy\/random\/info.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/random\r\n creating build\/lib.macosx-10.15-x86_64-3.9\/numpy\/testing\r\n copying numpy\/testing\/nosetester.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/testing\r\n copying numpy\/testing\/__init__.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/testing\r\n copying numpy\/testing\/noseclasses.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/testing\r\n copying numpy\/testing\/setup.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/testing\r\n copying numpy\/testing\/utils.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/testing\r\n copying numpy\/testing\/print_coercion_tables.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/testing\r\n copying numpy\/testing\/decorators.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/testing\r\n creating build\/lib.macosx-10.15-x86_64-3.9\/numpy\/testing\/_private\r\n copying numpy\/testing\/_private\/nosetester.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/testing\/_private\r\n copying numpy\/testing\/_private\/__init__.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/testing\/_private\r\n copying numpy\/testing\/_private\/noseclasses.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/testing\/_private\r\n copying numpy\/testing\/_private\/utils.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/testing\/_private\r\n copying numpy\/testing\/_private\/parameterized.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/testing\/_private\r\n copying numpy\/testing\/_private\/decorators.py -> build\/lib.macosx-10.15-x86_64-3.9\/numpy\/testing\/_private\r\n running build_clib\r\n customize UnixCCompiler\r\n customize UnixCCompiler using build_clib\r\n building 'npymath' library\r\n compiling C sources\r\n C compiler: clang -Wno-unused-result -Wsign-compare 
-Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n creating build\/temp.macosx-10.15-x86_64-3.9\r\n creating build\/temp.macosx-10.15-x86_64-3.9\/numpy\r\n creating build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\r\n creating build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\r\n creating build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath\r\n creating build\/temp.macosx-10.15-x86_64-3.9\/build\r\n creating build\/temp.macosx-10.15-x86_64-3.9\/build\/src.macosx-10.15-x86_64-3.9\r\n creating build\/temp.macosx-10.15-x86_64-3.9\/build\/src.macosx-10.15-x86_64-3.9\/numpy\r\n creating build\/temp.macosx-10.15-x86_64-3.9\/build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\r\n creating build\/temp.macosx-10.15-x86_64-3.9\/build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\r\n creating build\/temp.macosx-10.15-x86_64-3.9\/build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath\r\n compile options: '-Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath -Inumpy\/core\/include -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/include\/numpy -Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath -c'\r\n clang: numpy\/core\/src\/npymath\/npy_math.c\r\n clang: build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath\/npy_math_complex.c\r\n clang: build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath\/ieee754.c\r\n clang: numpy\/core\/src\/npymath\/halffloat.c\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:48:33: warning: unused variable 'tiny' [-Wunused-const-variable]\r\n static const volatile npy_float tiny = 3.9443045e-31f;\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:67:25: warning: unused variable 'c_halff' [-Wunused-const-variable]\r\n static const npy_cfloat c_halff = {0.5F, 0.0};\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:68:25: warning: unused variable 'c_if' [-Wunused-const-variable]\r\n static const npy_cfloat c_if = {0.0, 1.0F};\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:69:25: warning: unused variable 'c_ihalff' [-Wunused-const-variable]\r\n static const npy_cfloat c_ihalff = {0.0, 0.5F};\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:79:1: warning: unused function 'caddf' [-Wunused-function]\r\n caddf(npy_cfloat a, npy_cfloat b)\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:87:1: warning: unused function 'csubf' [-Wunused-function]\r\n csubf(npy_cfloat a, npy_cfloat b)\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:137:1: warning: unused function 'cnegf' [-Wunused-function]\r\n cnegf(npy_cfloat a)\r\n ^\r\n 
numpy\/core\/src\/npymath\/npy_math_complex.c.src:144:1: warning: unused function 'cmulif' [-Wunused-function]\r\n cmulif(npy_cfloat a)\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:67:26: warning: unused variable 'c_half' [-Wunused-const-variable]\r\n static const npy_cdouble c_half = {0.5, 0.0};\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:68:26: warning: unused variable 'c_i' [-Wunused-const-variable]\r\n static const npy_cdouble c_i = {0.0, 1.0};\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:69:26: warning: unused variable 'c_ihalf' [-Wunused-const-variable]\r\n static const npy_cdouble c_ihalf = {0.0, 0.5};\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:79:1: warning: unused function 'cadd' [-Wunused-function]\r\n cadd(npy_cdouble a, npy_cdouble b)\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:87:1: warning: unused function 'csub' [-Wunused-function]\r\n csub(npy_cdouble a, npy_cdouble b)\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:137:1: warning: unused function 'cneg' [-Wunused-function]\r\n cneg(npy_cdouble a)\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:144:1: warning: unused function 'cmuli' [-Wunused-function]\r\n cmuli(npy_cdouble a)\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:67:30: warning: unused variable 'c_halfl' [-Wunused-const-variable]\r\n static const npy_clongdouble c_halfl = {0.5L, 0.0};\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:68:30: warning: unused variable 'c_il' [-Wunused-const-variable]\r\n static const npy_clongdouble c_il = {0.0, 1.0L};\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:69:30: warning: unused variable 'c_ihalfl' [-Wunused-const-variable]\r\n static const npy_clongdouble c_ihalfl = {0.0, 0.5L};\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:79:1: warning: unused function 'caddl' [-Wunused-function]\r\n caddl(npy_clongdouble a, npy_clongdouble b)\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:87:1: warning: unused function 'csubl' [-Wunused-function]\r\n csubl(npy_clongdouble a, npy_clongdouble b)\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:137:1: warning: unused function 'cnegl' [-Wunused-function]\r\n cnegl(npy_clongdouble a)\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:144:1: warning: unused function 'cmulil' [-Wunused-function]\r\n cmulil(npy_clongdouble a)\r\n ^\r\n 22 warnings generated.\r\n ar: adding 4 object files to build\/temp.macosx-10.15-x86_64-3.9\/libnpymath.a\r\n ranlib:@ build\/temp.macosx-10.15-x86_64-3.9\/libnpymath.a\r\n building 'npysort' library\r\n compiling C sources\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n creating build\/temp.macosx-10.15-x86_64-3.9\/build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npysort\r\n compile options: '-Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common -Inumpy\/core\/include -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/include\/numpy -Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort 
-I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath -c'\r\n clang: build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npysort\/quicksort.c\r\n clang: build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npysort\/mergesort.c\r\n clang: build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npysort\/heapsort.c\r\n clang: build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npysort\/selection.c\r\n clang: build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npysort\/binsearch.c\r\n numpy\/core\/src\/npysort\/selection.c.src:328:9: warning: code will never be executed [-Wunreachable-code]\r\n npy_intp k;\r\n ^~~~~~~~~~~\r\n numpy\/core\/src\/npysort\/selection.c.src:326:14: note: silence by adding parentheses to mark code as explicitly dead\r\n else if (0 && kth == num - 1) {\r\n ^\r\n \/* DISABLES CODE *\/ ( )\r\n numpy\/core\/src\/npysort\/selection.c.src:328:9: warning: code will never be executed [-Wunreachable-code]\r\n npy_intp k;\r\n ^~~~~~~~~~~\r\n numpy\/core\/src\/npysort\/selection.c.src:326:14: note: silence by adding parentheses to mark code as explicitly dead\r\n else if (0 && kth == num - 1) {\r\n ^\r\n \/* DISABLES CODE *\/ ( )\r\n numpy\/core\/src\/npysort\/selection.c.src:328:9: warning: code will never be executed [-Wunreachable-code]\r\n npy_intp k;\r\n ^~~~~~~~~~~\r\n numpy\/core\/src\/npysort\/selection.c.src:326:14: note: silence by adding parentheses to mark code as explicitly dead\r\n else if (0 && kth == num - 1) {\r\n ^\r\n \/* DISABLES CODE *\/ ( )\r\n numpy\/core\/src\/npysort\/selection.c.src:328:9: warning: code will never be executed [-Wunreachable-code]\r\n npy_intp k;\r\n ^~~~~~~~~~~\r\n numpy\/core\/src\/npysort\/selection.c.src:326:14: note: silence by adding parentheses to mark code as explicitly dead\r\n else if (0 && kth == num - 1) {\r\n ^\r\n \/* DISABLES CODE *\/ ( )\r\n numpy\/core\/src\/npysort\/selection.c.src:328:9: warning: code will never be executed [-Wunreachable-code]\r\n npy_intp k;\r\n ^~~~~~~~~~~\r\n numpy\/core\/src\/npysort\/selection.c.src:326:14: note: silence by adding parentheses to mark code as explicitly dead\r\n else if (0 && kth == num - 1) {\r\n ^\r\n \/* DISABLES CODE *\/ ( )\r\n numpy\/core\/src\/npysort\/selection.c.src:328:9: warning: code will never be executed [-Wunreachable-code]\r\n npy_intp k;\r\n ^~~~~~~~~~~\r\n numpy\/core\/src\/npysort\/selection.c.src:326:14: note: silence by adding parentheses to mark code as explicitly dead\r\n else if (0 && kth == num - 1) {\r\n ^\r\n \/* DISABLES CODE *\/ ( )\r\n numpy\/core\/src\/npysort\/selection.c.src:328:9: warning: code will never be executed [-Wunreachable-code]\r\n npy_intp k;\r\n ^~~~~~~~~~~\r\n numpy\/core\/src\/npysort\/selection.c.src:326:14: note: silence by adding parentheses to mark code as explicitly dead\r\n else if (0 && kth == num - 1) {\r\n ^\r\n \/* DISABLES CODE *\/ ( )\r\n numpy\/core\/src\/npysort\/selection.c.src:328:9: warning: code will never be executed [-Wunreachable-code]\r\n npy_intp k;\r\n ^~~~~~~~~~~\r\n numpy\/core\/src\/npysort\/selection.c.src:326:14: note: silence by adding parentheses to 
mark code as explicitly dead\r\n else if (0 && kth == num - 1) {\r\n ^\r\n \/* DISABLES CODE *\/ ( )\r\n numpy\/core\/src\/npysort\/selection.c.src:328:9: warning: code will never be executed [-Wunreachable-code]\r\n npy_intp k;\r\n ^~~~~~~~~~~\r\n numpy\/core\/src\/npysort\/selection.c.src:326:14: note: silence by adding parentheses to mark code as explicitly dead\r\n else if (0 && kth == num - 1) {\r\n ^\r\n \/* DISABLES CODE *\/ ( )\r\n numpy\/core\/src\/npysort\/selection.c.src:328:9: warning: code will never be executed [-Wunreachable-code]\r\n npy_intp k;\r\n ^~~~~~~~~~~\r\n numpy\/core\/src\/npysort\/selection.c.src:326:14: note: silence by adding parentheses to mark code as explicitly dead\r\n else if (0 && kth == num - 1) {\r\n ^\r\n \/* DISABLES CODE *\/ ( )\r\n numpy\/core\/src\/npysort\/selection.c.src:328:9: warning: code will never be executed [-Wunreachable-code]\r\n npy_intp k;\r\n ^~~~~~~~~~~\r\n numpy\/core\/src\/npysort\/selection.c.src:326:14: note: silence by adding parentheses to mark code as explicitly dead\r\n else if (0 && kth == num - 1) {\r\n ^\r\n \/* DISABLES CODE *\/ ( )\r\n numpy\/core\/src\/npysort\/selection.c.src:328:9: warning: code will never be executed [-Wunreachable-code]\r\n npy_intp k;\r\n ^~~~~~~~~~~\r\n numpy\/core\/src\/npysort\/selection.c.src:326:14: note: silence by adding parentheses to mark code as explicitly dead\r\n else if (0 && kth == num - 1) {\r\n ^\r\n \/* DISABLES CODE *\/ ( )\r\n numpy\/core\/src\/npysort\/selection.c.src:328:9: warning: code will never be executed [-Wunreachable-code]\r\n npy_intp k;\r\n ^~~~~~~~~~~\r\n numpy\/core\/src\/npysort\/selection.c.src:326:14: note: silence by adding parentheses to mark code as explicitly dead\r\n else if (0 && kth == num - 1) {\r\n ^\r\n \/* DISABLES CODE *\/ ( )\r\n numpy\/core\/src\/npysort\/selection.c.src:328:9: warning: code will never be executed [-Wunreachable-code]\r\n npy_intp k;\r\n ^~~~~~~~~~~\r\n numpy\/core\/src\/npysort\/selection.c.src:326:14: note: silence by adding parentheses to mark code as explicitly dead\r\n else if (0 && kth == num - 1) {\r\n ^\r\n \/* DISABLES CODE *\/ ( )\r\n numpy\/core\/src\/npysort\/selection.c.src:328:9: warning: code will never be executed [-Wunreachable-code]\r\n npy_intp k;\r\n ^~~~~~~~~~~\r\n numpy\/core\/src\/npysort\/selection.c.src:326:14: note: silence by adding parentheses to mark code as explicitly dead\r\n else if (0 && kth == num - 1) {\r\n ^\r\n \/* DISABLES CODE *\/ ( )\r\n numpy\/core\/src\/npysort\/selection.c.src:328:9: warning: code will never be executed [-Wunreachable-code]\r\n npy_intp k;\r\n ^~~~~~~~~~~\r\n numpy\/core\/src\/npysort\/selection.c.src:326:14: note: silence by adding parentheses to mark code as explicitly dead\r\n else if (0 && kth == num - 1) {\r\n ^\r\n \/* DISABLES CODE *\/ ( )\r\n numpy\/core\/src\/npysort\/selection.c.src:328:9: warning: code will never be executed [-Wunreachable-code]\r\n npy_intp k;\r\n ^~~~~~~~~~~\r\n numpy\/core\/src\/npysort\/selection.c.src:326:14: note: silence by adding parentheses to mark code as explicitly dead\r\n else if (0 && kth == num - 1) {\r\n ^\r\n \/* DISABLES CODE *\/ ( )\r\n numpy\/core\/src\/npysort\/selection.c.src:328:9: warning: code will never be executed [-Wunreachable-code]\r\n npy_intp k;\r\n ^~~~~~~~~~~\r\n numpy\/core\/src\/npysort\/selection.c.src:326:14: note: silence by adding parentheses to mark code as explicitly dead\r\n else if (0 && kth == num - 1) {\r\n ^\r\n \/* DISABLES CODE *\/ ( )\r\n numpy\/core\/src\/npysort\/selection.c.src:328:9: warning: 
code will never be executed [-Wunreachable-code]\r\n npy_intp k;\r\n ^~~~~~~~~~~\r\n numpy\/core\/src\/npysort\/selection.c.src:326:14: note: silence by adding parentheses to mark code as explicitly dead\r\n else if (0 && kth == num - 1) {\r\n ^\r\n \/* DISABLES CODE *\/ ( )\r\n numpy\/core\/src\/npysort\/selection.c.src:328:9: warning: code will never be executed [-Wunreachable-code]\r\n npy_intp k;\r\n ^~~~~~~~~~~\r\n numpy\/core\/src\/npysort\/selection.c.src:326:14: note: silence by adding parentheses to mark code as explicitly dead\r\n else if (0 && kth == num - 1) {\r\n ^\r\n \/* DISABLES CODE *\/ ( )\r\n numpy\/core\/src\/npysort\/selection.c.src:328:9: warning: code will never be executed [-Wunreachable-code]\r\n npy_intp k;\r\n ^~~~~~~~~~~\r\n numpy\/core\/src\/npysort\/selection.c.src:326:14: note: silence by adding parentheses to mark code as explicitly dead\r\n else if (0 && kth == num - 1) {\r\n ^\r\n \/* DISABLES CODE *\/ ( )\r\n numpy\/core\/src\/npysort\/selection.c.src:328:9: warning: code will never be executed [-Wunreachable-code]\r\n npy_intp k;\r\n ^~~~~~~~~~~\r\n numpy\/core\/src\/npysort\/selection.c.src:326:14: note: silence by adding parentheses to mark code as explicitly dead\r\n else if (0 && kth == num - 1) {\r\n ^\r\n \/* DISABLES CODE *\/ ( )\r\n 22 warnings generated.\r\n ar: adding 5 object files to build\/temp.macosx-10.15-x86_64-3.9\/libnpysort.a\r\n ranlib:@ build\/temp.macosx-10.15-x86_64-3.9\/libnpysort.a\r\n running build_ext\r\n customize UnixCCompiler\r\n customize UnixCCompiler using build_ext\r\n building 'numpy.core._dummy' extension\r\n compiling C sources\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy\/core\/include -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/include\/numpy -Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath -c'\r\n clang: numpy\/core\/src\/dummymodule.c\r\n clang -bundle -undefined dynamic_lookup -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/dummymodule.o -L\/usr\/local\/lib -L\/usr\/local\/opt\/openssl@1.1\/lib -L\/usr\/local\/opt\/sqlite\/lib -Lbuild\/temp.macosx-10.15-x86_64-3.9 -o build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\/_dummy.cpython-39-darwin.so\r\n building 'numpy.core._multiarray_tests' extension\r\n compiling C sources\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common 
-dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n creating build\/temp.macosx-10.15-x86_64-3.9\/build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\r\n creating build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common\r\n compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy\/core\/include -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/include\/numpy -Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath -c'\r\n clang: build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/_multiarray_tests.c\r\n clang: numpy\/core\/src\/common\/mem_overlap.c\r\n clang -bundle -undefined dynamic_lookup -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk build\/temp.macosx-10.15-x86_64-3.9\/build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/_multiarray_tests.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common\/mem_overlap.o -L\/usr\/local\/lib -L\/usr\/local\/opt\/openssl@1.1\/lib -L\/usr\/local\/opt\/sqlite\/lib -Lbuild\/temp.macosx-10.15-x86_64-3.9 -lnpymath -o build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\/_multiarray_tests.cpython-39-darwin.so\r\n building 'numpy.core._multiarray_umath' extension\r\n compiling C sources\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n creating build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\r\n creating build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\r\n creating build\/temp.macosx-10.15-x86_64-3.9\/build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\r\n creating build\/temp.macosx-10.15-x86_64-3.9\/private\r\n creating build\/temp.macosx-10.15-x86_64-3.9\/private\/var\r\n creating build\/temp.macosx-10.15-x86_64-3.9\/private\/var\/folders\r\n creating build\/temp.macosx-10.15-x86_64-3.9\/private\/var\/folders\/fz\r\n creating build\/temp.macosx-10.15-x86_64-3.9\/private\/var\/folders\/fz\/0j719tys48x7jlnjnwc69smr0000gn\r\n creating build\/temp.macosx-10.15-x86_64-3.9\/private\/var\/folders\/fz\/0j719tys48x7jlnjnwc69smr0000gn\/T\r\n creating build\/temp.macosx-10.15-x86_64-3.9\/private\/var\/folders\/fz\/0j719tys48x7jlnjnwc69smr0000gn\/T\/pip-install-ufzck51l\r\n creating 
build\/temp.macosx-10.15-x86_64-3.9\/private\/var\/folders\/fz\/0j719tys48x7jlnjnwc69smr0000gn\/T\/pip-install-ufzck51l\/numpy_b0e8a3953a1d4b46801f12bcea55536e\r\n creating build\/temp.macosx-10.15-x86_64-3.9\/private\/var\/folders\/fz\/0j719tys48x7jlnjnwc69smr0000gn\/T\/pip-install-ufzck51l\/numpy_b0e8a3953a1d4b46801f12bcea55536e\/numpy\r\n creating build\/temp.macosx-10.15-x86_64-3.9\/private\/var\/folders\/fz\/0j719tys48x7jlnjnwc69smr0000gn\/T\/pip-install-ufzck51l\/numpy_b0e8a3953a1d4b46801f12bcea55536e\/numpy\/_build_utils\r\n creating build\/temp.macosx-10.15-x86_64-3.9\/private\/var\/folders\/fz\/0j719tys48x7jlnjnwc69smr0000gn\/T\/pip-install-ufzck51l\/numpy_b0e8a3953a1d4b46801f12bcea55536e\/numpy\/_build_utils\/src\r\n compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -DNO_ATLAS_INFO=3 -DHAVE_CBLAS -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common -Inumpy\/core\/include -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/include\/numpy -Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath -c'\r\n extra options: '-msse3 -I\/System\/Library\/Frameworks\/vecLib.framework\/Headers'\r\n clang: numpy\/core\/src\/multiarray\/alloc.c\r\n clang: numpy\/core\/src\/multiarray\/calculation.cclang: numpy\/core\/src\/multiarray\/array_assign_scalar.c\r\n clang: numpy\/core\/src\/multiarray\/convert.c\r\n \r\n clang: numpy\/core\/src\/multiarray\/ctors.c\r\n clang: numpy\/core\/src\/multiarray\/datetime_busday.c\r\n clang: numpy\/core\/src\/multiarray\/dragon4.cclang: numpy\/core\/src\/multiarray\/flagsobject.c\r\n \r\n numpy\/core\/src\/multiarray\/ctors.c:2261:36: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]\r\n if (!(PyUString_Check(name) && PyUString_GET_SIZE(name) == 0)) {\r\n ^\r\n numpy\/core\/include\/numpy\/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'\r\n #define PyUString_GET_SIZE PyUnicode_GET_SIZE\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n PyUnicode_WSTR_LENGTH(op) : \\\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'\r\n #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3)\r\n ^\r\n 
\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/ctors.c:2261:36: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]\r\n if (!(PyUString_Check(name) && PyUString_GET_SIZE(name) == 0)) {\r\n ^\r\n numpy\/core\/include\/numpy\/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'\r\n #define PyUString_GET_SIZE PyUnicode_GET_SIZE\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n ((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\\\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/ctors.c:2261:36: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]\r\n if (!(PyUString_Check(name) && PyUString_GET_SIZE(name) == 0)) {\r\n ^\r\n numpy\/core\/include\/numpy\/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'\r\n #define PyUString_GET_SIZE PyUnicode_GET_SIZE\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n PyUnicode_WSTR_LENGTH(op)))\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'\r\n #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n clang: numpy\/core\/src\/multiarray\/arrayobject.c\r\n clang: numpy\/core\/src\/multiarray\/array_assign_array.c\r\n clang: numpy\/core\/src\/multiarray\/convert_datatype.c\r\n clang: numpy\/core\/src\/multiarray\/getset.c\r\n clang: numpy\/core\/src\/multiarray\/datetime_busdaycal.c\r\n clang: numpy\/core\/src\/multiarray\/buffer.c\r\n clang: numpy\/core\/src\/multiarray\/compiled_base.c\r\n clang: numpy\/core\/src\/multiarray\/hashdescr.c\r\n clang: numpy\/core\/src\/multiarray\/descriptor.c\r\n numpy\/core\/src\/multiarray\/descriptor.c:453:13: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]\r\n if (PyUString_GET_SIZE(name) == 0) {\r\n ^\r\n numpy\/core\/include\/numpy\/npy_3kcompat.h:110:28: note: expanded from macro 
'PyUString_GET_SIZE'\r\n #define PyUString_GET_SIZE PyUnicode_GET_SIZE\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n PyUnicode_WSTR_LENGTH(op) : \\\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'\r\n #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/descriptor.c:453:13: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]\r\n if (PyUString_GET_SIZE(name) == 0) {\r\n ^\r\n numpy\/core\/include\/numpy\/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'\r\n #define PyUString_GET_SIZE PyUnicode_GET_SIZE\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n ((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\\\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/descriptor.c:453:13: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]\r\n if (PyUString_GET_SIZE(name) == 0) {\r\n ^\r\n numpy\/core\/include\/numpy\/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'\r\n #define PyUString_GET_SIZE PyUnicode_GET_SIZE\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n PyUnicode_WSTR_LENGTH(op)))\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'\r\n #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) 
__attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/descriptor.c:460:48: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]\r\n else if (PyUString_Check(title) && PyUString_GET_SIZE(title) > 0) {\r\n ^\r\n numpy\/core\/include\/numpy\/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'\r\n #define PyUString_GET_SIZE PyUnicode_GET_SIZE\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n PyUnicode_WSTR_LENGTH(op) : \\\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'\r\n #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/descriptor.c:460:48: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]\r\n else if (PyUString_Check(title) && PyUString_GET_SIZE(title) > 0) {\r\n ^\r\n numpy\/core\/include\/numpy\/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'\r\n #define PyUString_GET_SIZE PyUnicode_GET_SIZE\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n ((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\\\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/descriptor.c:460:48: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]\r\n else if (PyUString_Check(title) && PyUString_GET_SIZE(title) > 0) {\r\n ^\r\n numpy\/core\/include\/numpy\/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'\r\n #define PyUString_GET_SIZE PyUnicode_GET_SIZE\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n PyUnicode_WSTR_LENGTH(op)))\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'\r\n #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)\r\n ^\r\n 
\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n clang: numpy\/core\/src\/multiarray\/conversion_utils.c\r\n clang: numpy\/core\/src\/multiarray\/item_selection.c\r\n clang: numpy\/core\/src\/multiarray\/dtype_transfer.c\r\n clang: numpy\/core\/src\/multiarray\/mapping.c\r\n clang: build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/arraytypes.c\r\n clang: build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/nditer_templ.c\r\n 3 warnings generated.\r\n clang: numpy\/core\/src\/multiarray\/datetime.c\r\n numpy\/core\/src\/multiarray\/arraytypes.c.src:477:11: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]\r\n ptr = PyUnicode_AS_UNICODE(temp);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:279:7: note: expanded from macro 'PyUnicode_AS_UNICODE'\r\n PyUnicode_AsUnicode(_PyObject_CAST(op)))\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/arraytypes.c.src:482:15: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]\r\n datalen = PyUnicode_GET_DATA_SIZE(temp);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'\r\n (PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n PyUnicode_WSTR_LENGTH(op) : \\\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'\r\n #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/arraytypes.c.src:482:15: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]\r\n datalen = 
PyUnicode_GET_DATA_SIZE(temp);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'\r\n (PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n ((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\\\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/arraytypes.c.src:482:15: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]\r\n datalen = PyUnicode_GET_DATA_SIZE(temp);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'\r\n (PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n PyUnicode_WSTR_LENGTH(op)))\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'\r\n #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n clang: numpy\/core\/src\/multiarray\/common.c\r\n numpy\/core\/src\/multiarray\/common.c:187:28: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]\r\n itemsize = PyUnicode_GET_DATA_SIZE(temp);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'\r\n (PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n PyUnicode_WSTR_LENGTH(op) : \\\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'\r\n #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)\r\n ^\r\n 
\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/common.c:187:28: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]\r\n itemsize = PyUnicode_GET_DATA_SIZE(temp);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'\r\n (PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n ((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\\\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/common.c:187:28: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]\r\n itemsize = PyUnicode_GET_DATA_SIZE(temp);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'\r\n (PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n PyUnicode_WSTR_LENGTH(op)))\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'\r\n #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/common.c:239:28: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]\r\n itemsize = PyUnicode_GET_DATA_SIZE(temp);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'\r\n 
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n PyUnicode_WSTR_LENGTH(op) : \\\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'\r\n #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/common.c:239:28: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]\r\n itemsize = PyUnicode_GET_DATA_SIZE(temp);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'\r\n (PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n ((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\\\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/common.c:239:28: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]\r\n itemsize = PyUnicode_GET_DATA_SIZE(temp);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'\r\n (PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n PyUnicode_WSTR_LENGTH(op)))\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'\r\n #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3)\r\n ^\r\n 
\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/common.c:282:24: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]\r\n int itemsize = PyUnicode_GET_DATA_SIZE(obj);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'\r\n (PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n PyUnicode_WSTR_LENGTH(op) : \\\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'\r\n #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/common.c:282:24: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]\r\n int itemsize = PyUnicode_GET_DATA_SIZE(obj);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'\r\n (PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n ((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\\\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/common.c:282:24: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]\r\n int itemsize = PyUnicode_GET_DATA_SIZE(obj);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'\r\n (PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n 
PyUnicode_WSTR_LENGTH(op)))\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'\r\n #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n 6 warnings generated.\r\n clang: numpy\/core\/src\/multiarray\/nditer_pywrap.c\r\n 9 warnings generated.\r\n clang: numpy\/core\/src\/multiarray\/sequence.c\r\n clang: numpy\/core\/src\/multiarray\/shape.c\r\n clang: build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/einsum.c\r\n clang: numpy\/core\/src\/multiarray\/methods.c\r\n clang: numpy\/core\/src\/multiarray\/iterators.c\r\n clang: numpy\/core\/src\/multiarray\/datetime_strings.c\r\n clang: numpy\/core\/src\/multiarray\/number.c\r\n clang: numpy\/core\/src\/multiarray\/scalarapi.c\r\n clang: build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/scalartypes.c\r\n numpy\/core\/src\/multiarray\/scalarapi.c:74:28: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]\r\n return (void *)PyUnicode_AS_DATA(scalar);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:283:21: note: expanded from macro 'PyUnicode_AS_DATA'\r\n ((const char *)(PyUnicode_AS_UNICODE(op)))\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:279:7: note: expanded from macro 'PyUnicode_AS_UNICODE'\r\n PyUnicode_AsUnicode(_PyObject_CAST(op)))\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/scalarapi.c:135:28: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]\r\n return (void *)PyUnicode_AS_DATA(scalar);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:283:21: note: expanded from macro 'PyUnicode_AS_DATA'\r\n ((const char *)(PyUnicode_AS_UNICODE(op)))\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:279:7: note: expanded from macro 'PyUnicode_AS_UNICODE'\r\n PyUnicode_AsUnicode(_PyObject_CAST(op)))\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated 
here\r\n Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/scalarapi.c:568:29: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]\r\n descr->elsize = PyUnicode_GET_DATA_SIZE(sc);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'\r\n (PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n PyUnicode_WSTR_LENGTH(op) : \\\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'\r\n #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/scalarapi.c:568:29: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]\r\n descr->elsize = PyUnicode_GET_DATA_SIZE(sc);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'\r\n (PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n ((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\\\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/scalarapi.c:568:29: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]\r\n descr->elsize = PyUnicode_GET_DATA_SIZE(sc);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'\r\n (PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)\r\n ^\r\n 
\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n PyUnicode_WSTR_LENGTH(op)))\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'\r\n #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/scalartypes.c.src:475:17: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]\r\n ip = dptr = PyUnicode_AS_UNICODE(self);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:279:7: note: expanded from macro 'PyUnicode_AS_UNICODE'\r\n PyUnicode_AsUnicode(_PyObject_CAST(op)))\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/scalartypes.c.src:476:11: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]\r\n len = PyUnicode_GET_SIZE(self);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n PyUnicode_WSTR_LENGTH(op) : \\\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'\r\n #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/scalartypes.c.src:476:11: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]\r\n len = PyUnicode_GET_SIZE(self);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n 
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\\\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/scalartypes.c.src:476:11: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]\r\n len = PyUnicode_GET_SIZE(self);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n PyUnicode_WSTR_LENGTH(op)))\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'\r\n #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/scalartypes.c.src:481:11: warning: 'PyUnicode_FromUnicode' is deprecated [-Wdeprecated-declarations]\r\n new = PyUnicode_FromUnicode(ip, len);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:551:1: note: 'PyUnicode_FromUnicode' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3) PyAPI_FUNC(PyObject*) PyUnicode_FromUnicode(\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/scalartypes.c.src:475:17: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]\r\n ip = dptr = PyUnicode_AS_UNICODE(self);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:279:7: note: expanded from macro 'PyUnicode_AS_UNICODE'\r\n PyUnicode_AsUnicode(_PyObject_CAST(op)))\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/scalartypes.c.src:476:11: warning: 
'_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]\r\n len = PyUnicode_GET_SIZE(self);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n PyUnicode_WSTR_LENGTH(op) : \\\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'\r\n #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/scalartypes.c.src:476:11: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]\r\n len = PyUnicode_GET_SIZE(self);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n ((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\\\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/scalartypes.c.src:476:11: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]\r\n len = PyUnicode_GET_SIZE(self);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n PyUnicode_WSTR_LENGTH(op)))\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'\r\n #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/scalartypes.c.src:481:11: warning: 'PyUnicode_FromUnicode' is deprecated [-Wdeprecated-declarations]\r\n new = PyUnicode_FromUnicode(ip, len);\r\n ^\r\n 
\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:551:1: note: 'PyUnicode_FromUnicode' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3) PyAPI_FUNC(PyObject*) PyUnicode_FromUnicode(\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/scalartypes.c.src:1849:18: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]\r\n buffer = PyUnicode_AS_DATA(self);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:283:21: note: expanded from macro 'PyUnicode_AS_DATA'\r\n ((const char *)(PyUnicode_AS_UNICODE(op)))\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:279:7: note: expanded from macro 'PyUnicode_AS_UNICODE'\r\n PyUnicode_AsUnicode(_PyObject_CAST(op)))\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/scalartypes.c.src:1850:18: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]\r\n buflen = PyUnicode_GET_DATA_SIZE(self);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'\r\n (PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n PyUnicode_WSTR_LENGTH(op) : \\\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'\r\n #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/scalartypes.c.src:1850:18: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]\r\n buflen = PyUnicode_GET_DATA_SIZE(self);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:268:6: note: expanded from macro 
'PyUnicode_GET_DATA_SIZE'\r\n (PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n ((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\\\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/scalartypes.c.src:1850:18: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]\r\n buflen = PyUnicode_GET_DATA_SIZE(self);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'\r\n (PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n PyUnicode_WSTR_LENGTH(op)))\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'\r\n #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n 5 warnings generated.\r\n clang: numpy\/core\/src\/multiarray\/typeinfo.c\r\n clang: numpy\/core\/src\/multiarray\/refcount.c\r\n clang: numpy\/core\/src\/multiarray\/usertypes.c\r\n clang: numpy\/core\/src\/multiarray\/multiarraymodule.c\r\n clang: build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/lowlevel_strided_loops.c\r\n clang: numpy\/core\/src\/multiarray\/vdot.c\r\n clang: numpy\/core\/src\/umath\/umathmodule.c\r\n clang: build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/matmul.c\r\n clang: numpy\/core\/src\/umath\/reduction.c\r\n clang: build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/loops.c\r\n clang: numpy\/core\/src\/multiarray\/nditer_api.c\r\n 14 warnings generated.\r\n clang: numpy\/core\/src\/multiarray\/strfuncs.c\r\n numpy\/core\/src\/umath\/loops.c.src:655:18: warning: 'PyEval_CallObjectWithKeywords' is deprecated [-Wdeprecated-declarations]\r\n result = PyEval_CallObject(tocall, arglist);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/ceval.h:24:5: note: expanded from macro 'PyEval_CallObject'\r\n PyEval_CallObjectWithKeywords(callable, arg, (PyObject *)NULL)\r\n ^\r\n 
\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/ceval.h:17:1: note: 'PyEval_CallObjectWithKeywords' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.9) PyAPI_FUNC(PyObject *) PyEval_CallObjectWithKeywords(\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/strfuncs.c:178:13: warning: 'PyEval_CallObjectWithKeywords' is deprecated [-Wdeprecated-declarations]\r\n s = PyEval_CallObject(PyArray_ReprFunction, arglist);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/ceval.h:24:5: note: expanded from macro 'PyEval_CallObject'\r\n PyEval_CallObjectWithKeywords(callable, arg, (PyObject *)NULL)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/ceval.h:17:1: note: 'PyEval_CallObjectWithKeywords' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.9) PyAPI_FUNC(PyObject *) PyEval_CallObjectWithKeywords(\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/core\/src\/multiarray\/strfuncs.c:195:13: warning: 'PyEval_CallObjectWithKeywords' is deprecated [-Wdeprecated-declarations]\r\n s = PyEval_CallObject(PyArray_StrFunction, arglist);\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/ceval.h:24:5: note: expanded from macro 'PyEval_CallObject'\r\n PyEval_CallObjectWithKeywords(callable, arg, (PyObject *)NULL)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/ceval.h:17:1: note: 'PyEval_CallObjectWithKeywords' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.9) PyAPI_FUNC(PyObject *) PyEval_CallObjectWithKeywords(\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n 2 warnings generated.\r\n clang: numpy\/core\/src\/multiarray\/temp_elide.c\r\n clang: numpy\/core\/src\/umath\/cpuid.c\r\n clang: build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/scalarmath.c\r\n clang: numpy\/core\/src\/umath\/ufunc_object.c\r\n numpy\/core\/src\/umath\/scalarmath.c.src:1449:1: warning: unused function 'byte_long' [-Wunused-function]\r\n byte_long(PyObject *obj)\r\n ^\r\n numpy\/core\/src\/umath\/scalarmath.c.src:1449:1: warning: unused function 'ubyte_long' [-Wunused-function]\r\n ubyte_long(PyObject *obj)\r\n ^\r\n numpy\/core\/src\/umath\/scalarmath.c.src:1449:1: warning: unused function 'short_long' [-Wunused-function]\r\n short_long(PyObject *obj)\r\n ^\r\n numpy\/core\/src\/umath\/scalarmath.c.src:1449:1: warning: unused function 'ushort_long' [-Wunused-function]\r\n ushort_long(PyObject *obj)\r\n ^\r\n numpy\/core\/src\/umath\/scalarmath.c.src:1449:1: warning: unused function 'int_long' [-Wunused-function]\r\n int_long(PyObject *obj)\r\n ^\r\n 
numpy\/core\/src\/umath\/scalarmath.c.src:1449:1: warning: unused function 'uint_long' [-Wunused-function]\r\n uint_long(PyObject *obj)\r\n ^\r\n numpy\/core\/src\/umath\/scalarmath.c.src:1449:1: warning: unused function 'long_long' [-Wunused-function]\r\n long_long(PyObject *obj)\r\n ^\r\n numpy\/core\/src\/umath\/scalarmath.c.src:1449:1: warning: unused function 'ulong_long' [-Wunused-function]\r\n ulong_long(PyObject *obj)\r\n ^\r\n numpy\/core\/src\/umath\/scalarmath.c.src:1449:1: warning: unused function 'longlong_long' [-Wunused-function]\r\n longlong_long(PyObject *obj)\r\n ^\r\n numpy\/core\/src\/umath\/scalarmath.c.src:1449:1: warning: unused function 'ulonglong_long' [-Wunused-function]\r\n ulonglong_long(PyObject *obj)\r\n ^\r\n numpy\/core\/src\/umath\/scalarmath.c.src:1449:1: warning: unused function 'half_long' [-Wunused-function]\r\n half_long(PyObject *obj)\r\n ^\r\n numpy\/core\/src\/umath\/scalarmath.c.src:1449:1: warning: unused function 'float_long' [-Wunused-function]\r\n float_long(PyObject *obj)\r\n ^\r\n numpy\/core\/src\/umath\/scalarmath.c.src:1449:1: warning: unused function 'double_long' [-Wunused-function]\r\n double_long(PyObject *obj)\r\n ^\r\n numpy\/core\/src\/umath\/scalarmath.c.src:1449:1: warning: unused function 'longdouble_long' [-Wunused-function]\r\n longdouble_long(PyObject *obj)\r\n ^\r\n numpy\/core\/src\/umath\/scalarmath.c.src:1449:1: warning: unused function 'cfloat_long' [-Wunused-function]\r\n cfloat_long(PyObject *obj)\r\n ^\r\n numpy\/core\/src\/umath\/scalarmath.c.src:1449:1: warning: unused function 'cdouble_long' [-Wunused-function]\r\n cdouble_long(PyObject *obj)\r\n ^\r\n numpy\/core\/src\/umath\/scalarmath.c.src:1449:1: warning: unused function 'clongdouble_long' [-Wunused-function]\r\n clongdouble_long(PyObject *obj)\r\n ^\r\n clang: numpy\/core\/src\/multiarray\/nditer_constr.c\r\n numpy\/core\/src\/umath\/ufunc_object.c:657:19: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]\r\n for (i = 0; i < len; i++) {\r\n ~ ^ ~~~\r\n clang: numpy\/core\/src\/umath\/override.c\r\n clang: numpy\/core\/src\/npymath\/npy_math.c\r\n clang: build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath\/ieee754.c\r\n numpy\/core\/src\/umath\/loops.c.src:2527:22: warning: code will never be executed [-Wunreachable-code]\r\n npy_intp n = dimensions[0];\r\n ^~~~~~~~~~\r\n numpy\/core\/src\/umath\/loops.c.src:2526:29: note: silence by adding parentheses to mark code as explicitly dead\r\n if (IS_BINARY_REDUCE && 0) {\r\n ^\r\n \/* DISABLES CODE *\/ ( )\r\n numpy\/core\/src\/umath\/loops.c.src:2527:22: warning: code will never be executed [-Wunreachable-code]\r\n npy_intp n = dimensions[0];\r\n ^~~~~~~~~~\r\n numpy\/core\/src\/umath\/loops.c.src:2526:29: note: silence by adding parentheses to mark code as explicitly dead\r\n if (IS_BINARY_REDUCE && 0) {\r\n ^\r\n \/* DISABLES CODE *\/ ( )\r\n numpy\/core\/src\/umath\/loops.c.src:2527:22: warning: code will never be executed [-Wunreachable-code]\r\n npy_intp n = dimensions[0];\r\n ^~~~~~~~~~\r\n numpy\/core\/src\/umath\/loops.c.src:2526:29: note: silence by adding parentheses to mark code as explicitly dead\r\n if (IS_BINARY_REDUCE && 0) {\r\n ^\r\n \/* DISABLES CODE *\/ ( )\r\n clang: build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath\/npy_math_complex.c\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:48:33: warning: unused variable 'tiny' [-Wunused-const-variable]\r\n static const volatile npy_float tiny = 3.9443045e-31f;\r\n 
^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:67:25: warning: unused variable 'c_halff' [-Wunused-const-variable]\r\n static const npy_cfloat c_halff = {0.5F, 0.0};\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:68:25: warning: unused variable 'c_if' [-Wunused-const-variable]\r\n static const npy_cfloat c_if = {0.0, 1.0F};\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:69:25: warning: unused variable 'c_ihalff' [-Wunused-const-variable]\r\n static const npy_cfloat c_ihalff = {0.0, 0.5F};\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:79:1: warning: unused function 'caddf' [-Wunused-function]\r\n caddf(npy_cfloat a, npy_cfloat b)\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:87:1: warning: unused function 'csubf' [-Wunused-function]\r\n csubf(npy_cfloat a, npy_cfloat b)\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:137:1: warning: unused function 'cnegf' [-Wunused-function]\r\n cnegf(npy_cfloat a)\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:144:1: warning: unused function 'cmulif' [-Wunused-function]\r\n cmulif(npy_cfloat a)\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:67:26: warning: unused variable 'c_half' [-Wunused-const-variable]\r\n static const npy_cdouble c_half = {0.5, 0.0};\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:68:26: warning: unused variable 'c_i' [-Wunused-const-variable]\r\n static const npy_cdouble c_i = {0.0, 1.0};\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:69:26: warning: unused variable 'c_ihalf' [-Wunused-const-variable]\r\n static const npy_cdouble c_ihalf = {0.0, 0.5};\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:79:1: warning: unused function 'cadd' [-Wunused-function]\r\n cadd(npy_cdouble a, npy_cdouble b)\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:87:1: warning: unused function 'csub' [-Wunused-function]\r\n csub(npy_cdouble a, npy_cdouble b)\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:137:1: warning: unused function 'cneg' [-Wunused-function]\r\n cneg(npy_cdouble a)\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:144:1: warning: unused function 'cmuli' [-Wunused-function]\r\n cmuli(npy_cdouble a)\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:67:30: warning: unused variable 'c_halfl' [-Wunused-const-variable]\r\n static const npy_clongdouble c_halfl = {0.5L, 0.0};\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:68:30: warning: unused variable 'c_il' [-Wunused-const-variable]\r\n static const npy_clongdouble c_il = {0.0, 1.0L};\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:69:30: warning: unused variable 'c_ihalfl' [-Wunused-const-variable]\r\n static const npy_clongdouble c_ihalfl = {0.0, 0.5L};\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:79:1: warning: unused function 'caddl' [-Wunused-function]\r\n caddl(npy_clongdouble a, npy_clongdouble b)\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:87:1: warning: unused function 'csubl' [-Wunused-function]\r\n csubl(npy_clongdouble a, npy_clongdouble b)\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:137:1: warning: unused function 'cnegl' [-Wunused-function]\r\n cnegl(npy_clongdouble a)\r\n ^\r\n numpy\/core\/src\/npymath\/npy_math_complex.c.src:144:1: warning: unused function 'cmulil' [-Wunused-function]\r\n cmulil(npy_clongdouble a)\r\n ^\r\n 22 warnings generated.\r\n clang: numpy\/core\/src\/common\/mem_overlap.c\r\n clang: 
numpy\/core\/src\/npymath\/halffloat.c\r\n clang: numpy\/core\/src\/common\/array_assign.c\r\n clang: numpy\/core\/src\/common\/ufunc_override.c\r\n clang: numpy\/core\/src\/common\/npy_longdouble.c\r\n clang: numpy\/core\/src\/common\/numpyos.c\r\n clang: numpy\/core\/src\/common\/ucsnarrow.c\r\n 1 warning generated.\r\n clang: numpy\/core\/src\/umath\/extobj.c\r\n numpy\/core\/src\/common\/ucsnarrow.c:139:34: warning: 'PyUnicode_FromUnicode' is deprecated [-Wdeprecated-declarations]\r\n ret = (PyUnicodeObject *)PyUnicode_FromUnicode((Py_UNICODE*)buf,\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:551:1: note: 'PyUnicode_FromUnicode' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3) PyAPI_FUNC(PyObject*) PyUnicode_FromUnicode(\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n 1 warning generated.\r\n clang: numpy\/core\/src\/common\/python_xerbla.c\r\n clang: numpy\/core\/src\/common\/cblasfuncs.c\r\n clang: \/private\/var\/folders\/fz\/0j719tys48x7jlnjnwc69smr0000gn\/T\/pip-install-ufzck51l\/numpy_b0e8a3953a1d4b46801f12bcea55536e\/numpy\/_build_utils\/src\/apple_sgemv_fix.c\r\n In file included from \/private\/var\/folders\/fz\/0j719tys48x7jlnjnwc69smr0000gn\/T\/pip-install-ufzck51l\/numpy_b0e8a3953a1d4b46801f12bcea55536e\/numpy\/_build_utils\/src\/apple_sgemv_fix.c:26:\r\n In file included from numpy\/core\/include\/numpy\/arrayobject.h:4:\r\n In file included from numpy\/core\/include\/numpy\/ndarrayobject.h:21:\r\n build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/include\/numpy\/__multiarray_api.h:1463:1: warning: unused function '_import_array' [-Wunused-function]\r\n _import_array(void)\r\n ^\r\n 1 warning generated.\r\n 17 warnings generated.\r\n clang: numpy\/core\/src\/umath\/ufunc_type_resolution.c\r\n 4 warnings generated.\r\n 4 warnings generated.\r\n clang -bundle -undefined dynamic_lookup -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/alloc.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/arrayobject.o build\/temp.macosx-10.15-x86_64-3.9\/build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/arraytypes.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/array_assign_scalar.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/array_assign_array.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/buffer.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/calculation.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/compiled_base.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/common.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/convert.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/convert_datatype.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/conversion_utils.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/ctors.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/datetime.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/datetime_strings.o 
build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/datetime_busday.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/datetime_busdaycal.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/descriptor.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/dragon4.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/dtype_transfer.o build\/temp.macosx-10.15-x86_64-3.9\/build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/einsum.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/flagsobject.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/getset.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/hashdescr.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/item_selection.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/iterators.o build\/temp.macosx-10.15-x86_64-3.9\/build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/lowlevel_strided_loops.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/mapping.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/methods.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/multiarraymodule.o build\/temp.macosx-10.15-x86_64-3.9\/build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/nditer_templ.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/nditer_api.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/nditer_constr.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/nditer_pywrap.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/number.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/refcount.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/sequence.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/shape.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/scalarapi.o build\/temp.macosx-10.15-x86_64-3.9\/build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/scalartypes.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/strfuncs.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/temp_elide.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/typeinfo.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/usertypes.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/multiarray\/vdot.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/umathmodule.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/reduction.o build\/temp.macosx-10.15-x86_64-3.9\/build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/loops.o build\/temp.macosx-10.15-x86_64-3.9\/build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/matmul.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/ufunc_object.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/extobj.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/cpuid.o build\/temp.macosx-10.15-x86_64-3.9\/build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/scalarmath.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/ufunc_type_resolution.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/override.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath\/npy_math.o 
build\/temp.macosx-10.15-x86_64-3.9\/build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath\/ieee754.o build\/temp.macosx-10.15-x86_64-3.9\/build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath\/npy_math_complex.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath\/halffloat.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common\/array_assign.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common\/mem_overlap.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common\/npy_longdouble.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common\/ucsnarrow.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common\/ufunc_override.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common\/numpyos.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common\/cblasfuncs.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common\/python_xerbla.o build\/temp.macosx-10.15-x86_64-3.9\/private\/var\/folders\/fz\/0j719tys48x7jlnjnwc69smr0000gn\/T\/pip-install-ufzck51l\/numpy_b0e8a3953a1d4b46801f12bcea55536e\/numpy\/_build_utils\/src\/apple_sgemv_fix.o -L\/usr\/local\/lib -L\/usr\/local\/opt\/openssl@1.1\/lib -L\/usr\/local\/opt\/sqlite\/lib -Lbuild\/temp.macosx-10.15-x86_64-3.9 -lnpymath -lnpysort -o build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\/_multiarray_umath.cpython-39-darwin.so -Wl,-framework -Wl,Accelerate\r\n building 'numpy.core._umath_tests' extension\r\n compiling C sources\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy\/core\/include -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/include\/numpy -Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath -c'\r\n clang: build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/_umath_tests.c\r\n clang -bundle -undefined dynamic_lookup -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk build\/temp.macosx-10.15-x86_64-3.9\/build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/_umath_tests.o -L\/usr\/local\/lib -L\/usr\/local\/opt\/openssl@1.1\/lib -L\/usr\/local\/opt\/sqlite\/lib -Lbuild\/temp.macosx-10.15-x86_64-3.9 -o build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\/_umath_tests.cpython-39-darwin.so\r\n building 'numpy.core._rational_tests' extension\r\n compiling C sources\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot 
\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy\/core\/include -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/include\/numpy -Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath -c'\r\n clang: build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/_rational_tests.c\r\n clang -bundle -undefined dynamic_lookup -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk build\/temp.macosx-10.15-x86_64-3.9\/build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/_rational_tests.o -L\/usr\/local\/lib -L\/usr\/local\/opt\/openssl@1.1\/lib -L\/usr\/local\/opt\/sqlite\/lib -Lbuild\/temp.macosx-10.15-x86_64-3.9 -o build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\/_rational_tests.cpython-39-darwin.so\r\n building 'numpy.core._struct_ufunc_tests' extension\r\n compiling C sources\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy\/core\/include -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/include\/numpy -Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath -c'\r\n clang: build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/_struct_ufunc_tests.c\r\n clang -bundle -undefined dynamic_lookup -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk build\/temp.macosx-10.15-x86_64-3.9\/build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/_struct_ufunc_tests.o -L\/usr\/local\/lib -L\/usr\/local\/opt\/openssl@1.1\/lib -L\/usr\/local\/opt\/sqlite\/lib 
-Lbuild\/temp.macosx-10.15-x86_64-3.9 -o build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\/_struct_ufunc_tests.cpython-39-darwin.so\r\n building 'numpy.core._operand_flag_tests' extension\r\n compiling C sources\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy\/core\/include -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/include\/numpy -Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath -c'\r\n clang: build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/_operand_flag_tests.c\r\n clang -bundle -undefined dynamic_lookup -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk build\/temp.macosx-10.15-x86_64-3.9\/build\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/umath\/_operand_flag_tests.o -L\/usr\/local\/lib -L\/usr\/local\/opt\/openssl@1.1\/lib -L\/usr\/local\/opt\/sqlite\/lib -Lbuild\/temp.macosx-10.15-x86_64-3.9 -o build\/lib.macosx-10.15-x86_64-3.9\/numpy\/core\/_operand_flag_tests.cpython-39-darwin.so\r\n building 'numpy.fft.fftpack_lite' extension\r\n compiling C sources\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n creating build\/temp.macosx-10.15-x86_64-3.9\/numpy\/fft\r\n compile options: '-Inumpy\/core\/include -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/include\/numpy -Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath -c'\r\n clang: numpy\/fft\/fftpack_litemodule.c\r\n clang: numpy\/fft\/fftpack.c\r\n clang -bundle -undefined dynamic_lookup 
-isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk build\/temp.macosx-10.15-x86_64-3.9\/numpy\/fft\/fftpack_litemodule.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/fft\/fftpack.o -L\/usr\/local\/lib -L\/usr\/local\/opt\/openssl@1.1\/lib -L\/usr\/local\/opt\/sqlite\/lib -Lbuild\/temp.macosx-10.15-x86_64-3.9 -o build\/lib.macosx-10.15-x86_64-3.9\/numpy\/fft\/fftpack_lite.cpython-39-darwin.so\r\n building 'numpy.linalg.lapack_lite' extension\r\n compiling C sources\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n creating build\/temp.macosx-10.15-x86_64-3.9\/numpy\/linalg\r\n creating build\/temp.macosx-10.15-x86_64-3.9\/numpy\/linalg\/lapack_lite\r\n compile options: '-DNO_ATLAS_INFO=3 -DHAVE_CBLAS -Inumpy\/core\/include -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/include\/numpy -Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath -c'\r\n extra options: '-msse3 -I\/System\/Library\/Frameworks\/vecLib.framework\/Headers'\r\n clang: numpy\/linalg\/lapack_litemodule.c\r\n clang: numpy\/linalg\/lapack_lite\/python_xerbla.c\r\n clang -bundle -undefined dynamic_lookup -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk build\/temp.macosx-10.15-x86_64-3.9\/numpy\/linalg\/lapack_litemodule.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/linalg\/lapack_lite\/python_xerbla.o -L\/usr\/local\/lib -L\/usr\/local\/opt\/openssl@1.1\/lib -L\/usr\/local\/opt\/sqlite\/lib -Lbuild\/temp.macosx-10.15-x86_64-3.9 -o build\/lib.macosx-10.15-x86_64-3.9\/numpy\/linalg\/lapack_lite.cpython-39-darwin.so -Wl,-framework -Wl,Accelerate\r\n building 'numpy.linalg._umath_linalg' extension\r\n compiling C sources\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n creating build\/temp.macosx-10.15-x86_64-3.9\/build\/src.macosx-10.15-x86_64-3.9\/numpy\/linalg\r\n compile options: '-DNO_ATLAS_INFO=3 -DHAVE_CBLAS -Inumpy\/core\/include -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/include\/numpy -Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include 
-I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath -c'\r\n extra options: '-msse3 -I\/System\/Library\/Frameworks\/vecLib.framework\/Headers'\r\n clang: build\/src.macosx-10.15-x86_64-3.9\/numpy\/linalg\/umath_linalg.c\r\n numpy\/linalg\/umath_linalg.c.src:735:32: warning: unknown warning group '-Wmaybe-uninitialized', ignored [-Wunknown-warning-option]\r\n #pragma GCC diagnostic ignored \"-Wmaybe-uninitialized\"\r\n ^\r\n numpy\/linalg\/umath_linalg.c.src:541:1: warning: unused function 'dump_ufunc_object' [-Wunused-function]\r\n dump_ufunc_object(PyUFuncObject* ufunc)\r\n ^\r\n numpy\/linalg\/umath_linalg.c.src:566:1: warning: unused function 'dump_linearize_data' [-Wunused-function]\r\n dump_linearize_data(const char* name, const LINEARIZE_DATA_t* params)\r\n ^\r\n numpy\/linalg\/umath_linalg.c.src:602:1: warning: unused function 'dump_FLOAT_matrix' [-Wunused-function]\r\n dump_FLOAT_matrix(const char* name,\r\n ^\r\n numpy\/linalg\/umath_linalg.c.src:602:1: warning: unused function 'dump_DOUBLE_matrix' [-Wunused-function]\r\n dump_DOUBLE_matrix(const char* name,\r\n ^\r\n numpy\/linalg\/umath_linalg.c.src:602:1: warning: unused function 'dump_CFLOAT_matrix' [-Wunused-function]\r\n dump_CFLOAT_matrix(const char* name,\r\n ^\r\n numpy\/linalg\/umath_linalg.c.src:602:1: warning: unused function 'dump_CDOUBLE_matrix' [-Wunused-function]\r\n dump_CDOUBLE_matrix(const char* name,\r\n ^\r\n numpy\/linalg\/umath_linalg.c.src:865:1: warning: unused function 'zero_FLOAT_matrix' [-Wunused-function]\r\n zero_FLOAT_matrix(void *dst_in, const LINEARIZE_DATA_t* data)\r\n ^\r\n numpy\/linalg\/umath_linalg.c.src:865:1: warning: unused function 'zero_DOUBLE_matrix' [-Wunused-function]\r\n zero_DOUBLE_matrix(void *dst_in, const LINEARIZE_DATA_t* data)\r\n ^\r\n numpy\/linalg\/umath_linalg.c.src:865:1: warning: unused function 'zero_CFLOAT_matrix' [-Wunused-function]\r\n zero_CFLOAT_matrix(void *dst_in, const LINEARIZE_DATA_t* data)\r\n ^\r\n numpy\/linalg\/umath_linalg.c.src:865:1: warning: unused function 'zero_CDOUBLE_matrix' [-Wunused-function]\r\n zero_CDOUBLE_matrix(void *dst_in, const LINEARIZE_DATA_t* data)\r\n ^\r\n numpy\/linalg\/umath_linalg.c.src:1862:1: warning: unused function 'dump_geev_params' [-Wunused-function]\r\n dump_geev_params(const char *name, GEEV_PARAMS_t* params)\r\n ^\r\n numpy\/linalg\/umath_linalg.c.src:2132:1: warning: unused function 'init_cgeev' [-Wunused-function]\r\n init_cgeev(GEEV_PARAMS_t* params,\r\n ^\r\n numpy\/linalg\/umath_linalg.c.src:2213:1: warning: unused function 'process_cgeev_results' [-Wunused-function]\r\n process_cgeev_results(GEEV_PARAMS_t *NPY_UNUSED(params))\r\n ^\r\n numpy\/linalg\/umath_linalg.c.src:2376:1: warning: unused function 'dump_gesdd_params' [-Wunused-function]\r\n dump_gesdd_params(const char *name,\r\n ^\r\n numpy\/linalg\/umath_linalg.c.src:2864:1: warning: unused function 'dump_gelsd_params' [-Wunused-function]\r\n dump_gelsd_params(const char *name,\r\n ^\r\n 16 warnings generated.\r\n clang -bundle -undefined dynamic_lookup -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk 
build\/temp.macosx-10.15-x86_64-3.9\/build\/src.macosx-10.15-x86_64-3.9\/numpy\/linalg\/umath_linalg.o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/linalg\/lapack_lite\/python_xerbla.o -L\/usr\/local\/lib -L\/usr\/local\/opt\/openssl@1.1\/lib -L\/usr\/local\/opt\/sqlite\/lib -Lbuild\/temp.macosx-10.15-x86_64-3.9 -lnpymath -o build\/lib.macosx-10.15-x86_64-3.9\/numpy\/linalg\/_umath_linalg.cpython-39-darwin.so -Wl,-framework -Wl,Accelerate\r\n building 'numpy.random.mtrand' extension\r\n compiling C sources\r\n C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers\r\n \r\n creating build\/temp.macosx-10.15-x86_64-3.9\/numpy\/random\r\n creating build\/temp.macosx-10.15-x86_64-3.9\/numpy\/random\/mtrand\r\n compile options: '-D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy\/core\/include -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/include\/numpy -Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath -c'\r\n clang: numpy\/random\/mtrand\/mtrand.c\r\n clang: numpy\/random\/mtrand\/initarray.cclang: numpy\/random\/mtrand\/randomkit.c\r\n \r\n clang: numpy\/random\/mtrand\/distributions.c\r\n numpy\/random\/mtrand\/mtrand.c:40400:34: error: no member named 'tp_print' in 'struct _typeobject'\r\n __pyx_type_6mtrand_RandomState.tp_print = 0;\r\n ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^\r\n numpy\/random\/mtrand\/mtrand.c:42673:22: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]\r\n (PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 
1 :\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n PyUnicode_WSTR_LENGTH(op) : \\\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'\r\n #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/random\/mtrand\/mtrand.c:42673:22: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]\r\n (PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n ((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\\\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/random\/mtrand\/mtrand.c:42673:22: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]\r\n (PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n PyUnicode_WSTR_LENGTH(op)))\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'\r\n #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/random\/mtrand\/mtrand.c:42673:52: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]\r\n (PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 
1 :\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n PyUnicode_WSTR_LENGTH(op) : \\\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'\r\n #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/random\/mtrand\/mtrand.c:42673:52: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]\r\n (PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n ((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\\\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/random\/mtrand\/mtrand.c:42673:52: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]\r\n (PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n PyUnicode_WSTR_LENGTH(op)))\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'\r\n #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/random\/mtrand\/mtrand.c:42689:26: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]\r\n (PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 
1 :\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n PyUnicode_WSTR_LENGTH(op) : \\\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'\r\n #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/random\/mtrand\/mtrand.c:42689:26: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]\r\n (PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n ((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\\\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/random\/mtrand\/mtrand.c:42689:26: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]\r\n (PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n PyUnicode_WSTR_LENGTH(op)))\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'\r\n #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/random\/mtrand\/mtrand.c:42689:59: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]\r\n (PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 
1 :\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n PyUnicode_WSTR_LENGTH(op) : \\\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'\r\n #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/random\/mtrand\/mtrand.c:42689:59: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]\r\n (PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n ((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\\\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n numpy\/random\/mtrand\/mtrand.c:42689:59: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]\r\n (PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 
1 :\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'\r\n PyUnicode_WSTR_LENGTH(op)))\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'\r\n #define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/cpython\/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here\r\n Py_DEPRECATED(3.3)\r\n ^\r\n \/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9\/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'\r\n #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))\r\n ^\r\n 12 warnings and 1 error generated.\r\n error: Command \"clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot \/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/usr\/include -I\/Library\/Developer\/CommandLineTools\/SDKs\/MacOSX10.15.sdk\/System\/Library\/Frameworks\/Tk.framework\/Versions\/8.5\/Headers -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy\/core\/include -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/include\/numpy -Inumpy\/core\/src\/common -Inumpy\/core\/src -Inumpy\/core -Inumpy\/core\/src\/npymath -Inumpy\/core\/src\/multiarray -Inumpy\/core\/src\/umath -Inumpy\/core\/src\/npysort -I\/usr\/local\/include -I\/usr\/local\/opt\/openssl@1.1\/include -I\/usr\/local\/opt\/sqlite\/include -I\/Users\/destiny\/Downloads\/env\/include -I\/usr\/local\/Cellar\/python@3.9\/3.9.0_1\/Frameworks\/Python.framework\/Versions\/3.9\/include\/python3.9 -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/common -Ibuild\/src.macosx-10.15-x86_64-3.9\/numpy\/core\/src\/npymath -c numpy\/random\/mtrand\/mtrand.c -o build\/temp.macosx-10.15-x86_64-3.9\/numpy\/random\/mtrand\/mtrand.o -MMD -MF build\/temp.macosx-10.15-x86_64-3.9\/numpy\/random\/mtrand\/mtrand.o.d\" failed with exit status 1","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1696\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1695","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1695\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1695\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1695\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1695","id":780971987,"node_id":"MDExOlB1bGxSZXF1ZXN0NTUwNzc1OTU4","number":1695,"title":"fix ner_tag bugs in 
thainer","user":{"login":"cstorm125","id":15519308,"node_id":"MDQ6VXNlcjE1NTE5MzA4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15519308?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cstorm125","html_url":"https:\/\/github.com\/cstorm125","followers_url":"https:\/\/api.github.com\/users\/cstorm125\/followers","following_url":"https:\/\/api.github.com\/users\/cstorm125\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cstorm125\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cstorm125\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cstorm125\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cstorm125\/orgs","repos_url":"https:\/\/api.github.com\/users\/cstorm125\/repos","events_url":"https:\/\/api.github.com\/users\/cstorm125\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cstorm125\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> Thanks :)\r\n> \r\n> Apparently the dummy_data.zip got removed. Is this expected ?\r\n> Also can you remove the `data-pos.conll` file that you added ?\r\n\r\nNot expected. I forgot to remove the `dummy_data` folder used to create `dummy_data.zip`. \r\nChanged to only `dummy_data.zip`."],"created_at":1609985553000,"updated_at":1610030625000,"closed_at":1610030608000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1695","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1695","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1695.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1695.patch"},"body":"fix bug that results in `ner_tag` always equal to 'O'.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1695\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1694","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1694\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1694\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1694\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1694","id":780429080,"node_id":"MDExOlB1bGxSZXF1ZXN0NTUwMzI0Mjcx","number":1694,"title":"Add 
OSCAR","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq, on the OSCAR dataset, the document boundaries are defined by an empty line. Are there any chances to keep this empty line or explicitly group the sentences of a document? I'm asking for this 'cause I need to know if some sentences belong to the same document on my current OSCAR dataset usage.","Indeed currently it yields one example per line and ignore the empty lines.\r\nMaybe the best is to group them by paragraph then, and yield one example when an empty line is found.\r\nWhat do you think ?","I think to group them is the best choice indeed, I actually did this on [brwac](https:\/\/github.com\/huggingface\/datasets\/tree\/master\/datasets\/brwac) dataset too, it's another huge textual dataset.","Ok I just launched the computation of the dataset_infos.json again by grouping lines in paragraphs.\r\nThe new _generate_examples is\r\n```python\r\n def _generate_examples(self, filepaths):\r\n \"\"\"This function returns the examples in the raw (text) form.\"\"\"\r\n id_ = 0\r\n current_lines = []\r\n for filepath in filepaths:\r\n logging.info(\"generating examples from = %s\", filepath)\r\n with gzip.open(filepath, \"rt\") as f:\r\n for line in f:\r\n if len(line.strip()) > 0:\r\n current_lines.append(line)\r\n else:\r\n feature = id_, {\"id\": id_, \"text\": \"\".join(current_lines)}\r\n yield feature\r\n id_ += 1\r\n current_lines = []\r\n # last paragraph\r\n if current_lines:\r\n feature = id_, {\"id\": id_, \"text\": \"\".join(current_lines)}\r\n yield feature\r\n```","Is there any chance to also keep the sentences raw (without the `\"\".join()`)?. This is useful if you wanna train models where one of the tasks you perform is document sentence permutation... that's my case :)","They are raw in the sense that nothing is changed from the raw file for each paragraph.\r\nYou can split sentences on new lines `\\n` for example.\r\n\r\nThe first example for the unshuffled deduplicated english is going to be \r\n> Mtendere Village was inspired by the vision of Chief Napoleon Dzombe, which he shared with John Blanchard during his first visit to Malawi. Chief Napoleon conveyed the desperate need for a program to intervene and care for the orphans and vulnerable children (OVC) in Malawi, and John committed to help.\r\n> Established in honor of John & Lindy\u2019s son, Christopher Blanchard, this particular program is very dear to the Blanchard family. 
Dana Blanchard, or Mama Dana as she is more commonly referred to at Mtendere, lived on site during the initial development, and she returns each summer to spend the season with her Malawian family. The heart of the program is to be His hands and feet by caring for the children at Mtendere, and meeting their spiritual, physical, academic, and emotional needs.\r\n> [...]\r\n> 100X Development Foundation, Inc. is registered 503 (c)(3) nonprofit organization. Donations are deductable to the full extent allowable under IRS regulations.","I thought the line reader would omit the `\\n` character. I can easily split the sentences as you suggested. Thanks @lhoestq! \ud83d\ude03 ","The recomputation of the metadata finished a few days ago, I'll update the PR soon :) ","Let me know if you have comments @pjox @jonatasgrosman :) \r\n\r\nOtherwise we can merge it","Everything seems fine to me \ud83d\ude04 "],"created_at":1609928468000,"updated_at":1611565833000,"closed_at":1611565832000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1694","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1694","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1694.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1694.patch"},"body":"Continuation of #348 \r\nThe files have been moved to S3 and only the unshuffled version is available.\r\nBoth original and deduplicated versions of each language are available.\r\n\r\nExample of usage:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\noscar_dedup_en = load_dataset(\"oscar\", \"unshuffled_deduplicated_en\", split=\"train\")\r\noscar_orig_fr = load_dataset(\"oscar\", \"unshuffled_original_fr\", split=\"train\")\r\n```\r\n\r\ncc @pjox @jonatasgrosman \r\n\r\n-------------\r\n\r\nTo make the metadata generation work in parallel I did a few changes in the `datasets-cli test` command to add the `num_proc` and `proc_rank` arguments. 
This way you can run multiple processes for the metadata computation.\r\n\r\n```\r\ndatasets-cli test .\/datasets\/oscar --save_infos --all_configs --num_proc 4 --proc_rank 0 --clear_cache --cache_dir tmp0\r\n```\r\n\r\n-------------\r\n\r\nToDo: add the dummy_data","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1694\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1693","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1693\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1693\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1693\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1693","id":780268595,"node_id":"MDExOlB1bGxSZXF1ZXN0NTUwMTc3MDEx","number":1693,"title":"Fix reuters metadata parsing errors","user":{"login":"jbragg","id":2238344,"node_id":"MDQ6VXNlcjIyMzgzNDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2238344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jbragg","html_url":"https:\/\/github.com\/jbragg","followers_url":"https:\/\/api.github.com\/users\/jbragg\/followers","following_url":"https:\/\/api.github.com\/users\/jbragg\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jbragg\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jbragg\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jbragg\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jbragg\/orgs","repos_url":"https:\/\/api.github.com\/users\/jbragg\/repos","events_url":"https:\/\/api.github.com\/users\/jbragg\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jbragg\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1609921563000,"updated_at":1610063627000,"closed_at":1610028082000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1693","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1693","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1693.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1693.patch"},"body":"Was missing the last entry in each metadata category","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1693\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1691","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1691\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1691\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1691\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1691","id":779882271,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ5ODE3NTM0","number":1691,"title":"Updated HuggingFace Datasets README (fix 
typos)","user":{"login":"8bitmp3","id":19637339,"node_id":"MDQ6VXNlcjE5NjM3MzM5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19637339?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/8bitmp3","html_url":"https:\/\/github.com\/8bitmp3","followers_url":"https:\/\/api.github.com\/users\/8bitmp3\/followers","following_url":"https:\/\/api.github.com\/users\/8bitmp3\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/8bitmp3\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/8bitmp3\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/8bitmp3\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/8bitmp3\/orgs","repos_url":"https:\/\/api.github.com\/users\/8bitmp3\/repos","events_url":"https:\/\/api.github.com\/users\/8bitmp3\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/8bitmp3\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1609899278000,"updated_at":1610839847000,"closed_at":1610013992000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1691","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1691","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1691.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1691.patch"},"body":"Awesome work on \ud83e\udd17 Datasets. I found a couple of small typos in the README. Hope this helps.\r\n\r\n\r\n\r\n![](https:\/\/emojipedia-us.s3.dualstack.us-west-1.amazonaws.com\/thumbs\/160\/google\/56\/hugging-face_1f917.png)\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1691\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1690","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1690\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1690\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1690\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1690","id":779441631,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ5NDEwOTgw","number":1690,"title":"Fast start 
up","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1609873673000,"updated_at":1609942859000,"closed_at":1609942858000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1690","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1690","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1690.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1690.patch"},"body":"Currently if optional dependencies such as tensorflow, torch, apache_beam, faiss and elasticsearch are installed, then it takes a long time to do `import datasets` since it imports all of these heavy dependencies.\r\n\r\nTo make a fast start up for `datasets` I changed that so that they are not imported when `datasets` is being imported. On my side it changed the import time of `datasets` from 5sec to 0.5sec, which is enjoyable.\r\n\r\nTo be able to check if optional dependencies are available without importing them I'm using `importlib_metadata`, which is part of the standard lib in python>=3.8 and was backported. 
The difference with `importlib` is that it also enables to get the versions of the libraries without importing them.\r\nI added this dependency in `setup.py`.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1690\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1689","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1689\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1689\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1689\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1689","id":779107313,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ5MTEwMDgw","number":1689,"title":"Fix ade_corpus_v2 config names","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1609857208000,"updated_at":1609858509000,"closed_at":1609858508000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1689","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1689","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1689.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1689.patch"},"body":"There are currently some typos in the config names of the `ade_corpus_v2` dataset, I fixed them:\r\n\r\n- Ade_corpos_v2_classificaion -> Ade_corpus_v2_classification\r\n- Ade_corpos_v2_drug_ade_relation -> Ade_corpus_v2_drug_ade_relation\r\n- Ade_corpos_v2_drug_dosage_relation -> Ade_corpus_v2_drug_dosage_relation","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1689\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1688","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1688\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1688\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1688\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1688","id":779029685,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ5MDM5ODg0","number":1688,"title":"Fix DaNE last 
example","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1609853377000,"updated_at":1609855215000,"closed_at":1609855213000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1688","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1688","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1688.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1688.patch"},"body":"The last example from the DaNE dataset is empty.\r\n\r\nFix #1686 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1688\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1687","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1687\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1687\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1687\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1687","id":779004894,"node_id":"MDU6SXNzdWU3NzkwMDQ4OTQ=","number":1687,"title":"Question: Shouldn't .info be a part of 
DatasetDict?","user":{"login":"KennethEnevoldsen","id":23721977,"node_id":"MDQ6VXNlcjIzNzIxOTc3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23721977?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/KennethEnevoldsen","html_url":"https:\/\/github.com\/KennethEnevoldsen","followers_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/followers","following_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/orgs","repos_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/repos","events_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["We could do something. There is a part of `.info` which is split specific (cache files, split instructions) but maybe if could be made to work.","Yes this was kinda the idea I was going for. DatasetDict.info would be the shared info amongs the datasets (maybe even some info on how they differ). "],"created_at":1609852121000,"updated_at":1610014686000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Currently, only `Dataset` contains the .info or .features, but as many datasets contains standard splits (train, test) and thus the underlying information is the same (or at least should be) across the datasets. \r\n\r\nFor instance:\r\n```\r\n>>> ds = datasets.load_dataset(\"conll2002\", \"es\")\r\n>>> ds.info\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nAttributeError: 'DatasetDict' object has no attribute 'info'\r\n```\r\n\r\nI could imagine that this wouldn't work for datasets dicts which hold entirely different datasets (multimodal datasets), but it seems odd that splits of the same dataset is treated the same as what is essentially different datasets. \r\n\r\nIntuitively it would also make sense that if a dataset is supplied via. 
the load_dataset that is have a common .info which covers the entire dataset.\r\n\r\nIt is entirely possible that I am missing another perspective","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1687\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1686","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1686\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1686\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1686\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1686","id":778921684,"node_id":"MDU6SXNzdWU3Nzg5MjE2ODQ=","number":1686,"title":"Dataset Error: DaNE contains empty samples at the end","user":{"login":"KennethEnevoldsen","id":23721977,"node_id":"MDQ6VXNlcjIzNzIxOTc3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23721977?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/KennethEnevoldsen","html_url":"https:\/\/github.com\/KennethEnevoldsen","followers_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/followers","following_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/orgs","repos_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/repos","events_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting, I opened a PR to fix that","One the PR is merged the fix will be available in the next release of `datasets`.\r\n\r\nIf you don't want to wait the next release you can still load the script from the master branch with\r\n\r\n```python\r\nload_dataset(\"dane\", script_version=\"master\")\r\n```","If you have other questions feel free to reopen :) "],"created_at":1609847666000,"updated_at":1609855269000,"closed_at":1609855213000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"The dataset DaNE, contains empty samples at the end. 
It is naturally easy to remove using a filter but should probably not be there, to begin with as it can cause errors.\r\n\r\n```python\r\n>>> import datasets\r\n[...]\r\n>>> dataset = datasets.load_dataset(\"dane\")\r\n[...]\r\n>>> dataset[\"test\"][-1]\r\n{'dep_ids': [], 'dep_labels': [], 'lemmas': [], 'morph_tags': [], 'ner_tags': [], 'pos_tags': [], 'sent_id': '', 'text': '', 'tok_ids': [], 'tokens': []}\r\n>>> dataset[\"train\"][-1]\r\n{'dep_ids': [], 'dep_labels': [], 'lemmas': [], 'morph_tags': [], 'ner_tags': [], 'pos_tags': [], 'sent_id': '', 'text': '', 'tok_ids': [], 'tokens': []}\r\n```\r\n\r\nBest,\r\nKenneth","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1686\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1685","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1685\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1685\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1685\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1685","id":778914431,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ4OTM1MzY2","number":1685,"title":"Update README.md of covid-tweets-japanese","user":{"login":"forest1988","id":2755894,"node_id":"MDQ6VXNlcjI3NTU4OTQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2755894?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/forest1988","html_url":"https:\/\/github.com\/forest1988","followers_url":"https:\/\/api.github.com\/users\/forest1988\/followers","following_url":"https:\/\/api.github.com\/users\/forest1988\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/forest1988\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/forest1988\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/forest1988\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/forest1988\/orgs","repos_url":"https:\/\/api.github.com\/users\/forest1988\/repos","events_url":"https:\/\/api.github.com\/users\/forest1988\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/forest1988\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reviewing and merging!"],"created_at":1609847247000,"updated_at":1609928832000,"closed_at":1609925470000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1685","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1685","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1685.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1685.patch"},"body":"Update README.md of covid-tweets-japanese added by PR https:\/\/github.com\/huggingface\/datasets\/pull\/1367 and https:\/\/github.com\/huggingface\/datasets\/pull\/1402.\r\n\r\n- Update \"Data Splits\" to be more precise that no information is provided for now.\r\n - old: [More Information Needed]\r\n - new: No information about data splits is provided for now.\r\n\r\n- The automatic generation of links seemed not working properly, so I added a space before and after the URL to make the links work 
correctly.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1685\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1684","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1684\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1684\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1684\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1684","id":778356196,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ4NDU3NDY1","number":1684,"title":"Add CANER Corpus","user":{"login":"KMFODA","id":35491698,"node_id":"MDQ6VXNlcjM1NDkxNjk4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35491698?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/KMFODA","html_url":"https:\/\/github.com\/KMFODA","followers_url":"https:\/\/api.github.com\/users\/KMFODA\/followers","following_url":"https:\/\/api.github.com\/users\/KMFODA\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/KMFODA\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/KMFODA\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/KMFODA\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/KMFODA\/orgs","repos_url":"https:\/\/api.github.com\/users\/KMFODA\/repos","events_url":"https:\/\/api.github.com\/users\/KMFODA\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/KMFODA\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1609793351000,"updated_at":1611565760000,"closed_at":1611565760000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1684","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1684","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1684.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1684.patch"},"body":"What does this PR do?\r\n\r\nAdds the following dataset:\r\n\r\nhttps:\/\/github.com\/RamziSalah\/Classical-Arabic-Named-Entity-Recognition-Corpus\r\n\r\nWho can review?\r\n\r\n@lhoestq","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1684\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1683","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1683\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1683\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1683\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1683","id":778287612,"node_id":"MDU6SXNzdWU3NzgyODc2MTI=","number":1683,"title":"`ArrowInvalid` occurs while running `Dataset.map()` function for 
DPRContext","user":{"login":"abarbosa94","id":6608232,"node_id":"MDQ6VXNlcjY2MDgyMzI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6608232?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abarbosa94","html_url":"https:\/\/github.com\/abarbosa94","followers_url":"https:\/\/api.github.com\/users\/abarbosa94\/followers","following_url":"https:\/\/api.github.com\/users\/abarbosa94\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abarbosa94\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abarbosa94\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abarbosa94\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abarbosa94\/orgs","repos_url":"https:\/\/api.github.com\/users\/abarbosa94\/repos","events_url":"https:\/\/api.github.com\/users\/abarbosa94\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abarbosa94\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Looks like the mapping function returns a dictionary with a 768-dim array in the `embeddings` field. Since the map is batched, we actually expect the `embeddings` field to be an array of shape (batch_size, 768) to have one embedding per example in the batch.\r\n\r\nTo fix that can you try to remove one of the `[0]` ? In my opinion you only need one of them, not two.","It makes sense :D\r\n\r\nIt seems to work! Thanks a lot :))\r\n\r\nClosing the issue"],"created_at":1609786073000,"updated_at":1609787085000,"closed_at":1609787085000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"It seems to fail the final batch ):\r\n\r\nsteps to reproduce:\r\n```\r\nfrom datasets import load_dataset\r\nfrom elasticsearch import Elasticsearch\r\nimport torch\r\nfrom transformers import file_utils, set_seed\r\nfrom transformers import DPRContextEncoder, DPRContextEncoderTokenizerFast\r\nMAX_SEQ_LENGTH = 256\r\nctx_encoder = DPRContextEncoder.from_pretrained(\"facebook\/dpr-ctx_encoder-single-nq-base\", cache_dir=\"..\/datasets\/\")\r\nctx_tokenizer = DPRContextEncoderTokenizerFast.from_pretrained(\r\n \"facebook\/dpr-ctx_encoder-single-nq-base\", \r\n cache_dir=\"..datasets\/\"\r\n)\r\n\r\ndataset = load_dataset('text', \r\n data_files='data\/raw\/ARC_Corpus.txt',\r\n cache_dir='..\/datasets')\r\n\r\ntorch.set_grad_enabled(False)\r\nds_with_embeddings = dataset.map(\r\n lambda example: {\r\n 'embeddings': ctx_encoder(\r\n **ctx_tokenizer(\r\n example[\"text\"], \r\n padding='max_length', \r\n truncation=True, \r\n max_length=MAX_SEQ_LENGTH,\r\n return_tensors=\"pt\"\r\n )\r\n )[0][0].numpy(),\r\n },\r\n batched=True,\r\n load_from_cache_file=False,\r\n batch_size=1000\r\n)\r\n```\r\nARC Corpus can be obtained from [here](https:\/\/ai2-datasets.s3-us-west-2.amazonaws.com\/arc\/ARC-V1-Feb2018.zip)\r\n\r\nAnd then the error:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nArrowInvalid Traceback (most recent call last)\r\n<ipython-input-13-67d139bb2ed3> in <module>\r\n 14 batched=True,\r\n 15 load_from_cache_file=False,\r\n---> 16 batch_size=1000\r\n 17 )\r\n\r\n~\/.cache\/pypoetry\/virtualenvs\/masters-utTTC0p8-py3.7\/lib\/python3.7\/site-packages\/datasets\/dataset_dict.py in map(self, function, with_indices, input_columns, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, 
writer_batch_size, features, disable_nullable, fn_kwargs, num_proc)\r\n 301 num_proc=num_proc,\r\n 302 )\r\n--> 303 for k, dataset in self.items()\r\n 304 }\r\n 305 )\r\n\r\n~\/.cache\/pypoetry\/virtualenvs\/masters-utTTC0p8-py3.7\/lib\/python3.7\/site-packages\/datasets\/dataset_dict.py in <dictcomp>(.0)\r\n 301 num_proc=num_proc,\r\n 302 )\r\n--> 303 for k, dataset in self.items()\r\n 304 }\r\n 305 )\r\n\r\n~\/.cache\/pypoetry\/virtualenvs\/masters-utTTC0p8-py3.7\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)\r\n 1257 fn_kwargs=fn_kwargs,\r\n 1258 new_fingerprint=new_fingerprint,\r\n-> 1259 update_data=update_data,\r\n 1260 )\r\n 1261 else:\r\n\r\n~\/.cache\/pypoetry\/virtualenvs\/masters-utTTC0p8-py3.7\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py in wrapper(*args, **kwargs)\r\n 155 }\r\n 156 # apply actual function\r\n--> 157 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n 158 datasets: List[\"Dataset\"] = list(out.values()) if isinstance(out, dict) else [out]\r\n 159 # re-apply format to the output\r\n\r\n~\/.cache\/pypoetry\/virtualenvs\/masters-utTTC0p8-py3.7\/lib\/python3.7\/site-packages\/datasets\/fingerprint.py in wrapper(*args, **kwargs)\r\n 161 # Call actual function\r\n 162 \r\n--> 163 out = func(self, *args, **kwargs)\r\n 164 \r\n 165 # Update fingerprint of in-place transforms + update in-place history of transforms\r\n\r\n~\/.cache\/pypoetry\/virtualenvs\/masters-utTTC0p8-py3.7\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, update_data)\r\n 1526 if update_data:\r\n 1527 batch = cast_to_python_objects(batch)\r\n-> 1528 writer.write_batch(batch)\r\n 1529 if update_data:\r\n 1530 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file\r\n\r\n~\/.cache\/pypoetry\/virtualenvs\/masters-utTTC0p8-py3.7\/lib\/python3.7\/site-packages\/datasets\/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)\r\n 276 typed_sequence = TypedSequence(batch_examples[col], type=col_type, try_type=col_try_type)\r\n 277 typed_sequence_examples[col] = typed_sequence\r\n--> 278 pa_table = pa.Table.from_pydict(typed_sequence_examples)\r\n 279 self.write_table(pa_table)\r\n 280 \r\n\r\n~\/.cache\/pypoetry\/virtualenvs\/masters-utTTC0p8-py3.7\/lib\/python3.7\/site-packages\/pyarrow\/table.pxi in pyarrow.lib.Table.from_pydict()\r\n\r\n~\/.cache\/pypoetry\/virtualenvs\/masters-utTTC0p8-py3.7\/lib\/python3.7\/site-packages\/pyarrow\/table.pxi in pyarrow.lib.Table.from_arrays()\r\n\r\n~\/.cache\/pypoetry\/virtualenvs\/masters-utTTC0p8-py3.7\/lib\/python3.7\/site-packages\/pyarrow\/table.pxi in pyarrow.lib.Table.validate()\r\n\r\n~\/.cache\/pypoetry\/virtualenvs\/masters-utTTC0p8-py3.7\/lib\/python3.7\/site-packages\/pyarrow\/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowInvalid: Column 1 named text expected length 768 but got length 
1000\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1683\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1682","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1682\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1682\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1682\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1682","id":778268156,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ4Mzg1NTk1","number":1682,"title":"Don't use xlrd for xlsx files","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1609783910000,"updated_at":1609783994000,"closed_at":1609783993000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1682","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1682","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1682.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1682.patch"},"body":"Since the latest release of `xlrd` (2.0), the support for xlsx files stopped.\r\nTherefore we needed to use something else.\r\nA good alternative is `openpyxl` which has also an integration with pandas si we can still call `pd.read_excel`.\r\n\r\nI left the unused import of `openpyxl` in the dataset scripts to show users that this is a required dependency to use the scripts.\r\n\r\nI tested the different datasets using `datasets-cli test` and the tests are successful (no missing examples).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1682\/timeline","performed_via_github_app":null,"is_pull_request":true} 
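The record above (issue 1683) closes with the maintainer's suggestion to drop one of the two `[0]` indexes so that the batched `map` returns one embedding per example instead of a single 768-dim vector per batch. A minimal sketch of that corrected call, assuming the same model names and the local `ARC_Corpus.txt` path from the issue (the path is a placeholder, not something verified here):

```python
# Sketch of the fix discussed in issue 1683: with batched=True the mapping
# function must return one row per example, i.e. an array of shape
# (batch_size, 768), so only a single [0] (the pooler output) is indexed.
import torch
from datasets import load_dataset
from transformers import DPRContextEncoder, DPRContextEncoderTokenizerFast

MAX_SEQ_LENGTH = 256
ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
ctx_tokenizer = DPRContextEncoderTokenizerFast.from_pretrained(
    "facebook/dpr-ctx_encoder-single-nq-base"
)

# Placeholder corpus path, as in the original report.
dataset = load_dataset("text", data_files="data/raw/ARC_Corpus.txt")

torch.set_grad_enabled(False)
ds_with_embeddings = dataset.map(
    lambda batch: {
        "embeddings": ctx_encoder(
            **ctx_tokenizer(
                batch["text"],
                padding="max_length",
                truncation=True,
                max_length=MAX_SEQ_LENGTH,
                return_tensors="pt",
            )
        )[0].numpy(),  # shape (batch_size, 768): one embedding per example
    },
    batched=True,
    batch_size=1000,
)
```

With `batched=True`, Arrow expects every returned column to contain exactly `batch_size` rows, which is why the original single 768-dim vector triggered the "expected length 768 but got length 1000" mismatch.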
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1681","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1681\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1681\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1681\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1681","id":777644163,"node_id":"MDU6SXNzdWU3Nzc2NDQxNjM=","number":1681,"title":"Dataset \"dane\" missing","user":{"login":"KennethEnevoldsen","id":23721977,"node_id":"MDQ6VXNlcjIzNzIxOTc3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23721977?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/KennethEnevoldsen","html_url":"https:\/\/github.com\/KennethEnevoldsen","followers_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/followers","following_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/orgs","repos_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/repos","events_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/KennethEnevoldsen\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @KennethEnevoldsen ,\r\nI think the issue might be that this dataset was added during the community sprint and has not been released yet. It will be available with the v2 of datasets.\r\nFor now, you should be able to load the datasets after installing the latest (master) version of datasets using pip:\r\npip install git+https:\/\/github.com\/huggingface\/datasets.git@master","The `dane` dataset was added recently, that's why it wasn't available yet. We did an intermediate release today just before the v2.0.\r\n\r\nTo load it you can just update `datasets`\r\n```\r\npip install --upgrade datasets\r\n```\r\n\r\nand then you can load `dane` with\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"dane\")\r\n```","Thanks. 
Solved the problem."],"created_at":1609682583000,"updated_at":1609835735000,"closed_at":1609835713000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"the `dane` dataset appear to be missing in the latest version (1.1.3).\r\n\r\n```python\r\n>>> import datasets\r\n>>> datasets.__version__\r\n'1.1.3'\r\n>>> \"dane\" in datasets.list_datasets()\r\nTrue\r\n```\r\n\r\nAs we can see it should be present, but doesn't seem to be findable when using `load_dataset`.\r\n\r\n```python\r\n>>> datasets.load_dataset(\"dane\")\r\nTraceback (most recent call last):\r\n File \"\/home\/kenneth\/.Envs\/EDP\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 267, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"\/home\/kenneth\/.Envs\/EDP\/lib\/python3.8\/site-packages\/datasets\/utils\/file_utils.py\", line 300, in cached_path\r\n output_path = get_from_cache(\r\n File \"\/home\/kenneth\/.Envs\/EDP\/lib\/python3.8\/site-packages\/datasets\/utils\/file_utils.py\", line 486, in get_from_cache\r\n raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\nFileNotFoundError: Couldn't find file at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.3\/datasets\/dane\/dane.py\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"\/home\/kenneth\/.Envs\/EDP\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 278, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"\/home\/kenneth\/.Envs\/EDP\/lib\/python3.8\/site-packages\/datasets\/utils\/file_utils.py\", line 300, in cached_path\r\n output_path = get_from_cache(\r\n File \"\/home\/kenneth\/.Envs\/EDP\/lib\/python3.8\/site-packages\/datasets\/utils\/file_utils.py\", line 486, in get_from_cache\r\n raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\nFileNotFoundError: Couldn't find file at https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/datasets\/datasets\/dane\/dane.py\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/home\/kenneth\/.Envs\/EDP\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 588, in load_dataset\r\n module_path, hash = prepare_module(\r\n File \"\/home\/kenneth\/.Envs\/EDP\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 280, in prepare_module\r\n raise FileNotFoundError(\r\nFileNotFoundError: Couldn't find file locally at dane\/dane.py, or remotely at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.3\/datasets\/dane\/dane.py or https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/datasets\/datasets\/dane\/dane.py\r\n```\r\n\r\nThis issue might be relevant to @ophelielacroix from the Alexandra Institut whom created the data.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1681\/timeline","performed_via_github_app":null,"is_pull_request":false} 
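As the comments in #1681 explain, `dane` was merged after the last pinned release, so the script lookup against the 1.1.3 URLs fails even though `datasets.list_datasets()` reports the dataset. Besides `pip install --upgrade datasets`, the 1.x `load_dataset` API shown in the tracebacks above also accepts a `script_version` argument; a hedged sketch, assuming a datasets 1.x install with network access:

```python
# Sketch: on datasets 1.x, fetch the loading script from the master branch
# instead of the copy pinned to the installed library version.
from datasets import load_dataset

dane = load_dataset("dane", script_version="master")
print(dane)
```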
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1680","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1680\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1680\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1680\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1680","id":777623053,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ3ODY4MjEw","number":1680,"title":"added TurkishProductReviews dataset","user":{"login":"basakbuluz","id":41359672,"node_id":"MDQ6VXNlcjQxMzU5Njcy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/41359672?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/basakbuluz","html_url":"https:\/\/github.com\/basakbuluz","followers_url":"https:\/\/api.github.com\/users\/basakbuluz\/followers","following_url":"https:\/\/api.github.com\/users\/basakbuluz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/basakbuluz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/basakbuluz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/basakbuluz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/basakbuluz\/orgs","repos_url":"https:\/\/api.github.com\/users\/basakbuluz\/repos","events_url":"https:\/\/api.github.com\/users\/basakbuluz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/basakbuluz\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq, can you please review this PR?","Thanks for the suggestions. Updates were made and dataset_infos.json file was created again."],"created_at":1609674779000,"updated_at":1609784135000,"closed_at":1609784135000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1680","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1680","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1680.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1680.patch"},"body":"This PR added **Turkish Product Reviews Dataset contains 235.165 product reviews collected online. 
There are 220.284 positive, 14881 negative reviews**.\r\n\r\n- **Repository:** [turkish-text-data](https:\/\/github.com\/fthbrmnby\/turkish-text-data)\r\n- **Point of Contact:** Fatih Barmanbay - @fthbrmnby","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1680\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1679","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1679\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1679\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1679\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1679","id":777587792,"node_id":"MDU6SXNzdWU3Nzc1ODc3OTI=","number":1679,"title":"Can't import cc100 dataset","user":{"login":"alighofrani95","id":14968123,"node_id":"MDQ6VXNlcjE0OTY4MTIz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14968123?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/alighofrani95","html_url":"https:\/\/github.com\/alighofrani95","followers_url":"https:\/\/api.github.com\/users\/alighofrani95\/followers","following_url":"https:\/\/api.github.com\/users\/alighofrani95\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/alighofrani95\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/alighofrani95\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/alighofrani95\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/alighofrani95\/orgs","repos_url":"https:\/\/api.github.com\/users\/alighofrani95\/repos","events_url":"https:\/\/api.github.com\/users\/alighofrani95\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/alighofrani95\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["cc100 was added recently, that's why it wasn't available yet.\r\n\r\nTo load it you can just update `datasets`\r\n```\r\npip install --upgrade datasets\r\n```\r\n\r\nand then you can load `cc100` with\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nlang = \"en\"\r\ndataset = load_dataset(\"cc100\", lang=lang, split=\"train\")\r\n```"],"created_at":1609657976000,"updated_at":1609785698000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"There is some issue to import cc100 dataset.\r\n\r\n```\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"cc100\")\r\n```\r\n\r\nFileNotFoundError: Couldn't find file at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.3\/datasets\/cc100\/cc100.py\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nFileNotFoundError Traceback (most recent call last)\r\nFileNotFoundError: Couldn't find file at https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/datasets\/datasets\/cc100\/cc100.py\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nFileNotFoundError Traceback (most recent call last)\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs)\r\n 280 raise FileNotFoundError(\r\n 281 \"Couldn't find file locally at 
{}, or remotely at {} or {}\".format(\r\n--> 282 combined_path, github_file_path, file_path\r\n 283 )\r\n 284 )\r\n\r\nFileNotFoundError: Couldn't find file locally at cc100\/cc100.py, or remotely at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.3\/datasets\/cc100\/cc100.py or https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/datasets\/datasets\/cc100\/cc100.py","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1679\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1678","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1678\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1678\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1678\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1678","id":777567920,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ3ODI4MTMy","number":1678,"title":"Switchboard Dialog Act Corpus added under `datasets\/swda`","user":{"login":"gmihaila","id":22454783,"node_id":"MDQ6VXNlcjIyNDU0Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22454783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gmihaila","html_url":"https:\/\/github.com\/gmihaila","followers_url":"https:\/\/api.github.com\/users\/gmihaila\/followers","following_url":"https:\/\/api.github.com\/users\/gmihaila\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gmihaila\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gmihaila\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gmihaila\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gmihaila\/orgs","repos_url":"https:\/\/api.github.com\/users\/gmihaila\/repos","events_url":"https:\/\/api.github.com\/users\/gmihaila\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gmihaila\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq Thank you for your detailed comments! I fixed everything you suggested.\r\n\r\nPlease let me know if I'm missing anything else.","It looks like the Transcript and Utterance objects are missing, maybe we can mention it in the README ? Or just add them ? @gmihaila @bhavitvyamalik ","Hi @lhoestq,\r\nI'm working on this to add the full dataset","> It looks like the Transcript and Utterance objects are missing, maybe we can mention it in the README ? Or just add them ? @gmihaila @bhavitvyamalik\r\n\r\n@lhoestq Any info on how to add them?","@gmihaila, instead of using the current repo you should look into [this](https:\/\/github.com\/cgpotts\/swda). You can use the `csv` files uploaded in this repo (`swda.zip`) to access other fields and include them in this dataset. It has one dependency too, `swda.py`, you can download that separately and include it in your dataset's folder to be imported while reading the `csv` files.\r\n\r\nAlmost all the attributes of `Transcript` and `Utterance` objects are of the type str, int, or list. As far as `trees` attribute is concerned in utterance object you can simply parse it as string and user can maybe later convert it to nltk.tree object","@bhavitvyamalik Thank you for the clarification! 
\r\n\r\nI didn't use [that](https:\/\/github.com\/cgpotts\/swda) because it doesn't have the splits. I think in combination with [what I used](https:\/\/github.com\/NathanDuran\/Switchboard-Corpus) would help.\r\n\r\nLet me know if I can help! I can make those changes if you don't have the time.","I'm a bit busy for the next 2 weeks. I'll be able to complete it by end of January only. Maybe you can start with it and I'll help you?\r\nAlso, I looked into the official train\/val\/test splits and not all the files are there in the repo I used so I think either we'll have to skip them or put all of that into just train","Yes, I can start working on it and ask you to do a code review.\r\n\r\nYes, not all files are there. I'll try to find papers that have the correct and full splits, if not, I'll do like you suggested.\r\n\r\nThank you again for your help @bhavitvyamalik !"],"created_at":1609646021000,"updated_at":1610129361000,"closed_at":1609841195000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1678","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1678","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1678.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1678.patch"},"body":"Switchboard Dialog Act Corpus\r\n\r\nIntro:\r\nThe Switchboard Dialog Act Corpus (SwDA) extends the Switchboard-1 Telephone Speech Corpus, Release 2,\r\nwith turn\/utterance-level dialog-act tags. The tags summarize syntactic, semantic, and pragmatic information\r\nabout the associated turn. The SwDA project was undertaken at UC Boulder in the late 1990s.\r\n\r\nDetails:\r\n[homepage](http:\/\/compprag.christopherpotts.net\/swda.html)\r\n[repo](https:\/\/github.com\/NathanDuran\/Switchboard-Corpus\/raw\/master\/swda_data\/)\r\n\r\nI believe this is an important dataset to have since there is no dataset related to dialogue act added.\r\n\r\nI didn't find any formatting for pull request. I hope all this information is enough.\r\n\r\nFor any support please contact me. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1678\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1677","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1677\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1677\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1677\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1677","id":777553383,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ3ODE3ODI1","number":1677,"title":"Switchboard Dialog Act Corpus added under `datasets\/swda`","user":{"login":"gmihaila","id":22454783,"node_id":"MDQ6VXNlcjIyNDU0Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22454783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gmihaila","html_url":"https:\/\/github.com\/gmihaila","followers_url":"https:\/\/api.github.com\/users\/gmihaila\/followers","following_url":"https:\/\/api.github.com\/users\/gmihaila\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gmihaila\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gmihaila\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gmihaila\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gmihaila\/orgs","repos_url":"https:\/\/api.github.com\/users\/gmihaila\/repos","events_url":"https:\/\/api.github.com\/users\/gmihaila\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gmihaila\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Need to fix code formatting."],"created_at":1609636602000,"updated_at":1609642557000,"closed_at":1609642556000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1677","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1677","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1677.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1677.patch"},"body":"Pleased to announced that I added my first dataset **Switchboard Dialog Act Corpus**.\r\n\r\n\r\nI think this is an important datasets to be added since it is the only one related to dialogue act classification. \r\n\r\nHope the pull request is ok. Wasn't able to see any special formatting for the pull request form.\r\n\r\n\r\nThe Switchboard Dialog Act Corpus (SwDA) extends the Switchboard-1 Telephone Speech Corpus, Release 2,\r\nwith turn\/utterance-level dialog-act tags. The tags summarize syntactic, semantic, and pragmatic information\r\nabout the associated turn. 
The SwDA project was undertaken at UC Boulder in the late 1990s.\r\n\r\n\r\n[webpage](http:\/\/compprag.christopherpotts.net\/swda.html)\r\n\r\n[repo](https:\/\/github.com\/NathanDuran\/Switchboard-Corpus\/raw\/master\/swda_data\/)\r\n\r\nPlease contact me for any support!\r\n\r\nAll tests passed and followed all steps in the contribution guide!\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1677\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1676","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1676\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1676\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1676\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1676","id":777477645,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ3NzY1OTY3","number":1676,"title":"new version of Ted Talks IWSLT (WIT3)","user":{"login":"skyprince999","id":9033954,"node_id":"MDQ6VXNlcjkwMzM5NTQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9033954?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/skyprince999","html_url":"https:\/\/github.com\/skyprince999","followers_url":"https:\/\/api.github.com\/users\/skyprince999\/followers","following_url":"https:\/\/api.github.com\/users\/skyprince999\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/skyprince999\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/skyprince999\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/skyprince999\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/skyprince999\/orgs","repos_url":"https:\/\/api.github.com\/users\/skyprince999\/repos","events_url":"https:\/\/api.github.com\/users\/skyprince999\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/skyprince999\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> Nice thank you ! Actually as it is a translation dataset we should probably have one configuration = one language pair no ?\r\n> \r\n> Could you use the same trick for this dataset ?\r\n\r\nI was looking for this input, infact I had written a long post on the Slack channel,...(_but unfortunately due to the holidays didn;t get a respones_). Initially I had tried with language pairs and then with specific language configs. \r\n\r\nI'll have a look at the `opus-gnomes` dataset\r\n","Oh sorry I must have missed your message then :\/\r\nI was off a few days during the holidays\r\n\r\nHopefully this trick can enable the use of any language pair (+ year ?) combination and also simplify a lot the dummy data creation since it will only require a few configs.","Updated it as per the comments. 
But couldn't figure out why the dummy tests are failing >> \r\n```\r\n$RUN_SLOW=1 pytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_ted_talks_iwslt\r\n.....\r\n....\r\ntests\/test_dataset_common.py:198: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```"],"created_at":1609601403000,"updated_at":1610619019000,"closed_at":1610619019000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1676","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1676","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1676.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1676.patch"},"body":"In the previous iteration #1608 I had used language pairs. Which created 21,582 configs (109*108) !!! \r\n\r\nNow, TED talks in _each language_ is a separate config. So it's more cleaner with _just 109 configs_ (one for each language). Dummy files were created manually. \r\n\r\nLocally I was able to clear the `python datasets-cli test datasets\/......` . Which created the `dataset_info.json` file . The test for the dummy files was also cleared. However couldn't figure out how to specify the local data folder for the real dataset\r\n\r\n\r\n**Note: that this requires manual download of the dataset.** \r\n**Note2: The high number of _Files changed (112)_ is because of the large number of dummy files\/configs!**","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1676\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1675","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1675\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1675\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1675\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1675","id":777367320,"node_id":"MDU6SXNzdWU3NzczNjczMjA=","number":1675,"title":"Add the 800GB Pile dataset?","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new 
dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The pile dataset would be very nice.\r\nBenchmarks show that pile trained models achieve better results than most of actually trained models","The pile can very easily be added and adapted using this [tfds implementation](https:\/\/github.com\/EleutherAI\/The-Pile\/blob\/master\/the_pile\/tfds_pile.py) from the repo. \r\n\r\nHowever, the question is whether you'd be ok with 800GB+ cached in your local disk, since the tfds implementation was designed to offload the storage to Google Cloud Storage.","With the dataset streaming feature (see #2375) it will be more convenient to play with such big datasets :)\r\nI'm currently adding C4 (see #2511 ) but I can probably start working on this afterwards","Hi folks! Just wanted to follow up on this -- would be really nice to get the Pile on HF Datasets... unclear if it would be easy to also add partitions of the Pile subject to the original 22 datasets used, but that would be nice too!","Hi folks, thanks to some awesome work by @lhoestq and @albertvillanova you can now stream the Pile as follows:\r\n\r\n```python\r\n# Install master branch of `datasets`\r\npip install git+https:\/\/github.com\/huggingface\/datasets.git#egg=datasets[streaming]\r\npip install zstandard\r\n\r\nfrom datasets import load_dataset\r\n\r\ndset = load_dataset(\"json\", data_files=\"https:\/\/the-eye.eu\/public\/AI\/pile\/train\/00.jsonl.zst\", streaming=True, split=\"train\")\r\nnext(iter(dset))\r\n# {'meta': {'pile_set_name': 'Pile-CC'},\r\n# 'text': 'It is done, and submitted. You can play \u201cSurvival of the Tastiest\u201d on Android, and on the web ... '}\r\n```\r\n\r\nNext step is to add the Pile as a \"canonical\" dataset that can be streamed without specifying the file names explicitly :)","> Hi folks! Just wanted to follow up on this -- would be really nice to get the Pile on HF Datasets... unclear if it would be easy to also add partitions of the Pile subject to the original 22 datasets used, but that would be nice too!\r\n\r\nHi @siddk thanks to a tip from @richarddwang it seems we can access some of the partitions that EleutherAI created for the Pile [here](https:\/\/the-eye.eu\/public\/AI\/pile_preliminary_components\/). 
What's missing are links to the preprocessed versions of pre-existing datasets like DeepMind Mathematics and OpenSubtitles, but worst case we do the processing ourselves and host these components on the Hub.\r\n\r\nMy current idea is that we could provide 23 configs: one for each of the 22 datasets and an `all` config that links to the train \/ dev \/ test splits that EleutherAI released [here](https:\/\/the-eye.eu\/public\/AI\/pile\/), e.g.\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# Load a single component\r\nyoutube_subtitles = load_dataset(\"the_pile\", \"youtube_subtitles\")\r\n# Load the train \/ dev \/ test splits of the whole corpus\r\ndset = load_dataset(\"the_pile\", \"all\")\r\n```\r\n\r\nIdeally we'd like everything to be compatible with the streaming API and there's ongoing work by @albertvillanova to make this happen for the various compression algorithms.\r\n\r\ncc @lhoestq ","Ah I just saw that @lhoestq is already thinking about the specifying of one or more subsets in [this PR](https:\/\/github.com\/huggingface\/datasets\/pull\/2817#issuecomment-901874049) :)"],"created_at":1609541892000,"updated_at":1629392205000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** The Pile\r\n- **Description:** The Pile is a 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality datasets combined together. See [here](https:\/\/twitter.com\/nabla_theta\/status\/1345130408170541056?s=20) for the Twitter announcement\r\n- **Paper:** https:\/\/pile.eleuther.ai\/paper.pdf\r\n- **Data:** https:\/\/pile.eleuther.ai\/\r\n- **Motivation:** Enables hardcore (GPT-3 scale!) language modelling\r\n\r\n## Remarks\r\nGiven the extreme size of this dataset, I'm not sure how feasible this will be to include in `datasets` \ud83e\udd2f . 
I'm also unsure how many `datasets` users are pretraining LMs, so the usage of this dataset may not warrant the effort to integrate it.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1675\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1674","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1674\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1674\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1674\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1674","id":777321840,"node_id":"MDU6SXNzdWU3NzczMjE4NDA=","number":1674,"title":"dutch_social can't be loaded","user":{"login":"koenvandenberge","id":10134844,"node_id":"MDQ6VXNlcjEwMTM0ODQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10134844?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/koenvandenberge","html_url":"https:\/\/github.com\/koenvandenberge","followers_url":"https:\/\/api.github.com\/users\/koenvandenberge\/followers","following_url":"https:\/\/api.github.com\/users\/koenvandenberge\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/koenvandenberge\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/koenvandenberge\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/koenvandenberge\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/koenvandenberge\/orgs","repos_url":"https:\/\/api.github.com\/users\/koenvandenberge\/repos","events_url":"https:\/\/api.github.com\/users\/koenvandenberge\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/koenvandenberge\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["exactly the same issue in some other datasets.\r\nDid you find any solution??\r\n","Hi @koenvandenberge and @alighofrani95!\r\nThe datasets you're experiencing issues with were most likely added recently to the `datasets` library, meaning they have not been released yet. They will be released with the v2 of the library.\r\nMeanwhile, you can still load the datasets using one of the techniques described in this issue: #1641 \r\nLet me know if this helps!","Maybe we should do a small release on Monday in the meantime @lhoestq ?","Yes sure !","I just did the release :)\r\n\r\nTo load it you can just update `datasets`\r\n```\r\npip install --upgrade datasets\r\n```\r\n\r\nand then you can load `dutch_social` with\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"dutch_social\")\r\n```","@lhoestq could you also shed light on the Hindi Wikipedia Dataset for issue number #1673. Will this also be available in the new release that you committed recently?","The issue is different for this one, let me give more details in the issue","Okay. Could you comment on the #1673 thread? Actually @thomwolf had commented that if i use datasets library from source, it would allow me to download the Hindi Wikipedia Dataset but even the version 1.1.3 gave me the same issue. 
The details are there in the issue #1673 thread."],"created_at":1609522628000,"updated_at":1609841821000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi all,\r\n\r\nI'm trying to import the `dutch_social` dataset described [here](https:\/\/huggingface.co\/datasets\/dutch_social).\r\n\r\nHowever, the code that should load the data doesn't seem to be working, in particular because the corresponding files can't be found at the provided links.\r\n\r\n```\r\n(base) Koens-MacBook-Pro:~ koenvandenberge$ python\r\nPython 3.7.4 (default, Aug 13 2019, 15:17:50) \r\n[Clang 4.0.1 (tags\/RELEASE_401\/final)] :: Anaconda, Inc. on darwin\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> from datasets import load_dataset\r\ndataset = load_dataset(\r\n 'dutch_social')\r\n>>> dataset = load_dataset(\r\n... 'dutch_social')\r\nTraceback (most recent call last):\r\n File \"\/Users\/koenvandenberge\/opt\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 267, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"\/Users\/koenvandenberge\/opt\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 308, in cached_path\r\n use_etag=download_config.use_etag,\r\n File \"\/Users\/koenvandenberge\/opt\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 486, in get_from_cache\r\n raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\nFileNotFoundError: Couldn't find file at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.3\/datasets\/dutch_social\/dutch_social.py\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"\/Users\/koenvandenberge\/opt\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 278, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"\/Users\/koenvandenberge\/opt\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 308, in cached_path\r\n use_etag=download_config.use_etag,\r\n File \"\/Users\/koenvandenberge\/opt\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 486, in get_from_cache\r\n raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\nFileNotFoundError: Couldn't find file at https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/datasets\/datasets\/dutch_social\/dutch_social.py\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 2, in <module>\r\n File \"\/Users\/koenvandenberge\/opt\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 589, in load_dataset\r\n path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True\r\n File \"\/Users\/koenvandenberge\/opt\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 282, in prepare_module\r\n combined_path, github_file_path, file_path\r\nFileNotFoundError: Couldn't find file locally at dutch_social\/dutch_social.py, or remotely at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.3\/datasets\/dutch_social\/dutch_social.py or 
https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/datasets\/datasets\/dutch_social\/dutch_social.py\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1674\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1673","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1673\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1673\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1673\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1673","id":777263651,"node_id":"MDU6SXNzdWU3NzcyNjM2NTE=","number":1673,"title":"Unable to Download Hindi Wikipedia Dataset","user":{"login":"aditya3498","id":30871963,"node_id":"MDQ6VXNlcjMwODcxOTYz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/30871963?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/aditya3498","html_url":"https:\/\/github.com\/aditya3498","followers_url":"https:\/\/api.github.com\/users\/aditya3498\/followers","following_url":"https:\/\/api.github.com\/users\/aditya3498\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/aditya3498\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/aditya3498\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/aditya3498\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/aditya3498\/orgs","repos_url":"https:\/\/api.github.com\/users\/aditya3498\/repos","events_url":"https:\/\/api.github.com\/users\/aditya3498\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/aditya3498\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Currently this dataset is only available when the library is installed from source since it was added after the last release.\r\n\r\nWe pin the dataset version with the library version so that people can have a reproducible dataset and processing when pinning the library.\r\n\r\nWe'll see if we can provide access to newer datasets with a warning that they are newer than your library version, that would help in cases like yours.","So for now, should i try and install the library from source and then try out the same piece of code? Will it work then, considering both the versions will match then?","Yes","Hey, so i tried installing the library from source using the commands : **git clone https:\/\/github.com\/huggingface\/datasets**, **cd datasets** and then **pip3 install -e .**. But i still am facing the same error that file is not found. 
Please advise.\r\n\r\nThe Datasets library version now is 1.1.3 by installing from source as compared to the earlier 1.0.3 that i had loaded using pip command but I am still getting same error\r\n\r\n![Error](https:\/\/user-images.githubusercontent.com\/30871963\/103479005-69f3b080-4df0-11eb-83ae-58d7bb56a90e.png)\r\n","Looks like the wikipedia dump for hindi at the date of 05\/05\/2020 is not available anymore.\r\nYou can try to load a more recent version of wikipedia\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nd = load_dataset(\"wikipedia\", language=\"hi\", date=\"20210101\", split=\"train\", beam_runner=\"DirectRunner\")\r\n```","Okay, thank you so much"],"created_at":1609498373000,"updated_at":1609842132000,"closed_at":1609842132000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I used the Dataset Library in Python to load the wikipedia dataset with the Hindi Config 20200501.hi along with something called beam_runner='DirectRunner' and it keeps giving me the error that the file is not found. I have attached the screenshot of the error and the code both. Please help me to understand how to resolve this issue.\r\n\r\n![Code](https:\/\/user-images.githubusercontent.com\/30871963\/103437466-1f3a3300-4c4e-11eb-9d54-fc9601abfeec.png)\r\n\r\n![Error](https:\/\/user-images.githubusercontent.com\/30871963\/103437407-7ee40e80-4c4d-11eb-8151-a86eb664e6be.png)\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1673\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1672","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1672\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1672\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1672\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1672","id":777258941,"node_id":"MDU6SXNzdWU3NzcyNTg5NDE=","number":1672,"title":"load_dataset hang on file_lock","user":{"login":"tomacai","id":69860107,"node_id":"MDQ6VXNlcjY5ODYwMTA3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/69860107?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tomacai","html_url":"https:\/\/github.com\/tomacai","followers_url":"https:\/\/api.github.com\/users\/tomacai\/followers","following_url":"https:\/\/api.github.com\/users\/tomacai\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tomacai\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tomacai\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tomacai\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tomacai\/orgs","repos_url":"https:\/\/api.github.com\/users\/tomacai\/repos","events_url":"https:\/\/api.github.com\/users\/tomacai\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tomacai\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Can you try to upgrade to a more recent version of datasets?","Thank, upgrading to 1.1.3 resolved the issue.","Having the same issue with `datasets 1.1.3` of `1.5.0` (both tracebacks look the same) and `kilt_wikipedia`, Ubuntu 20.04\r\n\r\n```py\r\nIn [1]: from datasets import 
load_dataset \r\n\r\nIn [2]: wikipedia = load_dataset('kilt_wikipedia')['full'] \r\nDownloading: 7.37kB [00:00, 2.74MB\/s] \r\nDownloading: 3.33kB [00:00, 1.44MB\/s] \r\n^C---------------------------------------------------------------------------\r\nOSError Traceback (most recent call last)\r\n~\/anaconda3\/envs\/transformers2\/lib\/python3.7\/site-packages\/datasets\/utils\/filelock.py in _acquire(self)\r\n 380 try:\r\n--> 381 fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)\r\n 382 except (IOError, OSError):\r\n\r\nOSError: [Errno 37] No locks available\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nKeyboardInterrupt Traceback (most recent call last)\r\n<ipython-input-2-f412d3d46ec9> in <module>\r\n----> 1 wikipedia = load_dataset('kilt_wikipedia')['full']\r\n\r\n~\/anaconda3\/envs\/transformers2\/lib\/python3.7\/site-packages\/datasets\/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, sav\r\ne_infos, script_version, **config_kwargs)\r\n 601 hash=hash,\r\n 602 features=features,\r\n--> 603 **config_kwargs,\r\n 604 )\r\n 605 \r\n\r\n~\/anaconda3\/envs\/transformers2\/lib\/python3.7\/site-packages\/datasets\/builder.py in __init__(self, *args, **kwargs)\r\n 841 def __init__(self, *args, **kwargs):\r\n 842 self._writer_batch_size = kwargs.pop(\"writer_batch_size\", self._writer_batch_size)\r\n--> 843 super(GeneratorBasedBuilder, self).__init__(*args, **kwargs)\r\n 844 \r\n 845 @abc.abstractmethod\r\n\r\n~\/anaconda3\/envs\/transformers2\/lib\/python3.7\/site-packages\/datasets\/builder.py in __init__(self, cache_dir, name, hash, features, **config_kwargs)\r\n 174 os.makedirs(self._cache_dir_root, exist_ok=True)\r\n 175 lock_path = os.path.join(self._cache_dir_root, self._cache_dir.replace(os.sep, \"_\") + \".lock\")\r\n--> 176 with FileLock(lock_path):\r\n 177 if os.path.exists(self._cache_dir): # check if data exist\r\n 178 if len(os.listdir(self._cache_dir)) > 0:\r\n\r\n~\/anaconda3\/envs\/transformers2\/lib\/python3.7\/site-packages\/datasets\/utils\/filelock.py in __enter__(self)\r\n 312 \r\n 313 def __enter__(self):\r\n--> 314 self.acquire()\r\n 315 return self\r\n 316 \r\n\r\n~\/anaconda3\/envs\/transformers2\/lib\/python3.7\/site-packages\/datasets\/utils\/filelock.py in acquire(self, timeout, poll_intervall)\r\n 261 if not self.is_locked:\r\n 262 logger().debug(\"Attempting to acquire lock %s on %s\", lock_id, lock_filename)\r\n--> 263 self._acquire()\r\n 264 \r\n 265 if self.is_locked:\r\n\r\n~\/anaconda3\/envs\/transformers2\/lib\/python3.7\/site-packages\/datasets\/utils\/filelock.py in _acquire(self)\r\n 379 \r\n 380 try:\r\n--> 381 fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)\r\n 382 except (IOError, OSError):\r\n 383 os.close(fd)\r\n\r\nKeyboardInterrupt: \r\n\r\n```"],"created_at":1609496707000,"updated_at":1617207853000,"closed_at":1609501656000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I am trying to load the squad dataset. 
Fails on Windows 10 but succeeds in Colab.\r\nTransformers: 3.3.1\r\nDatasets: 1.0.2\r\nWindows 10 (also tested in WSL)\r\n\r\n```\r\ndatasets.logging.set_verbosity_debug()\r\ndatasets.\r\ntrain_dataset = load_dataset('squad', split='train')\r\nvalid_dataset = load_dataset('squad', split='validation')\r\n\r\ntrain_dataset.features\r\n```\r\n\r\n```\r\nhttps:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.0.2\/datasets\/squad\/squad.py not found in cache or force_download set to True, downloading to C:\\Users\\simpl\\.cache\\huggingface\\datasets\\tmpzj_o_6u7\r\nDownloading:\r\n5.24k\/? [00:00<00:00, 134kB\/s]\r\nstoring https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.0.2\/datasets\/squad\/squad.py in cache at C:\\Users\\simpl\\.cache\\huggingface\\datasets\\f6877c8d2e01e8fcb60dc101be28b54a7522feac756deb9ac5c39c6d8ebef1ce.85f43de978b9b25921cb78d7a2f2b350c04acdbaedb9ecb5f7101cd7c0950e68.py\r\ncreating metadata file for C:\\Users\\simpl\\.cache\\huggingface\\datasets\\f6877c8d2e01e8fcb60dc101be28b54a7522feac756deb9ac5c39c6d8ebef1ce.85f43de978b9b25921cb78d7a2f2b350c04acdbaedb9ecb5f7101cd7c0950e68.py\r\n\r\nChecking C:\\Users\\simpl\\.cache\\huggingface\\datasets\\f6877c8d2e01e8fcb60dc101be28b54a7522feac756deb9ac5c39c6d8ebef1ce.85f43de978b9b25921cb78d7a2f2b350c04acdbaedb9ecb5f7101cd7c0950e68.py for additional imports.\r\nFound main folder for dataset https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.0.2\/datasets\/squad\/squad.py at C:\\Users\\simpl\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\squad\r\nFound specific version folder for dataset https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.0.2\/datasets\/squad\/squad.py at C:\\Users\\simpl\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\squad\\1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41\r\nFound script file from https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.0.2\/datasets\/squad\/squad.py to C:\\Users\\simpl\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\squad\\1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41\\squad.py\r\nCouldn't find dataset infos file at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.0.2\/datasets\/squad\\dataset_infos.json\r\nFound metadata file for dataset https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.0.2\/datasets\/squad\/squad.py at C:\\Users\\simpl\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\squad\\1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41\\squad.json\r\nNo config specified, defaulting to first: squad\/plain_text\r\n```\r\n\r\nInterrupting the jupyter kernel we are in a file lock.\r\n\r\nIn Google Colab the download is ok. In contrast to a local run in colab dataset_infos.json is downloaded\r\n```\r\nhttps:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.0.2\/datasets\/squad\/dataset_infos.json not found in cache or force_download set to True, downloading to \/root\/.cache\/huggingface\/datasets\/tmptl9ha_ad\r\n\r\nDownloading:\r\n2.19k\/? 
[00:00<00:00, 26.2kB\/s]\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1672\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1671","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1671\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1671\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1671\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1671","id":776652193,"node_id":"MDU6SXNzdWU3NzY2NTIxOTM=","number":1671,"title":"connection issue ","user":{"login":"rabeehkarimimahabadi","id":73364383,"node_id":"MDQ6VXNlcjczMzY0Mzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/73364383?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi","html_url":"https:\/\/github.com\/rabeehkarimimahabadi","followers_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/followers","following_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/orgs","repos_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/repos","events_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Also, mayjor issue for me is the format issue, even if I go through changing the whole code to use load_from_disk, then if I do \r\n\r\nd = datasets.load_from_disk(\"imdb\")\r\nd = d[\"train\"][:10] => the format of this is no more in datasets format\r\nthis is different from you call load_datasets(\"train[10]\")\r\n\r\ncould you tell me how I can make the two datastes the same format @lhoestq \r\n\r\n","> `\r\nrequests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: \/datasets.huggingface.co\/datasets\/datasets\/glue\/glue.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7ff6d6c60a20>, 'Connection to s3.amazonaws.com timed out. (connect timeout=10)'))`\r\n\r\nDo you have an internet connection on the machine ? Is there a proxy that might block requests to aws ?\r\n\r\n> I tried to do read the data, save it to a path and then set HF_HOME, which does not work and this is still not reading from the old set path, could you assist me how to save the datasets in a path, and let dataset library read from this path to avoid connection issue. 
thanks\r\n\r\nHF_HOME is used to specify the directory for the cache files of this library.\r\nYou can use save_to_disk and load_from_disk without changing the HF_HOME:\r\n```python\r\nimdb = datasets.load_dataset(\"imdb\")\r\nimdb.save_to_disk(\"\/idiap\/temp\/rkarimi\/hf_datasets\/imdb\")\r\nimdb = datasets.load_from_disk(\"\/idiap\/temp\/rkarimi\/hf_datasets\/imdb\")\r\n```\r\n\r\n> could you tell me how I can make the two datastes the same format\r\n\r\nIndeed they returns different things:\r\n- `load_dataset` returns a `Dataset` object if the split is specified, or a `DatasetDict` if no split is given. Therefore `load_datasets(\"imdb\", split=\"train[10]\")` returns a `Dataset` object containing 10 elements.\r\n- doing `d[\"train\"][:10]` on a DatasetDict \"d\" gets the train split `d[\"train\"]` as a `Dataset` object and then gets the first 10 elements as a dictionary"],"created_at":1609365380000,"updated_at":1609754391000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI am getting this connection issue, resulting in large failure on cloud, @lhoestq I appreciate your help on this.\r\n\r\nIf I want to keep the codes the same, so not using save_to_disk, load_from_disk, but save the datastes in the way load_dataset reads from and copy the files in the same folder the datasets library reads from, could you assist me how this can be done, thanks\r\n\r\nI tried to do read the data, save it to a path and then set HF_HOME, which does not work and this is still not reading from the old set path, could you assist me how to save the datasets in a path, and let dataset library read from this path to avoid connection issue. thanks\r\n\r\n```\r\nimdb = datasets.load_dataset(\"imdb\")\r\nimdb.save_to_disk(\"\/idiap\/temp\/rkarimi\/hf_datasets\/imdb\")\r\n>>> os.environ[\"HF_HOME\"]=\"\/idiap\/temp\/rkarimi\/hf_datasets\/\"\r\n>>> imdb = datasets.load_dataset(\"imdb\")\r\nReusing dataset imdb (\/idiap\/temp\/rkarimi\/cache_home_2\/datasets\/imdb\/plain_text\/1.0.0\/90099cb476936b753383ba2ae6ab2eae419b2e87f71cd5189cb9c8e5814d12a3)\r\n```\r\n\r\nI tried afterwards to set HF_HOME in bash, this makes it read from it, but it cannot let dataset library load from the saved path and still downloading data. 
could you tell me how to fix this issue @lhoestq thanks \r\n\r\nAlso this is on cloud, so I save them in a path, copy it to \"another machine\" to load the data\r\n\r\n### Error stack\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \".\/finetune_t5_trainer.py\", line 344, in <module>\r\n main()\r\n File \".\/finetune_t5_trainer.py\", line 232, in main\r\n for task in data_args.eval_tasks} if training_args.do_test else None\r\n File \".\/finetune_t5_trainer.py\", line 232, in <dictcomp>\r\n for task in data_args.eval_tasks} if training_args.do_test else None\r\n File \"\/workdir\/seq2seq\/data\/tasks.py\", line 136, in get_dataset\r\n split = self.get_sampled_split(split, n_obs)\r\n File \"\/workdir\/seq2seq\/data\/tasks.py\", line 64, in get_sampled_split\r\n dataset = self.load_dataset(split)\r\n File \"\/workdir\/seq2seq\/data\/tasks.py\", line 454, in load_dataset\r\n split=split, script_version=\"master\")\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/load.py\", line 589, in load_dataset\r\n path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/load.py\", line 263, in prepare_module\r\n head_hf_s3(path, filename=name, dataset=dataset)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/utils\/file_utils.py\", line 200, in head_hf_s3\r\n return http_head(hf_bucket_url(identifier=identifier, filename=filename, use_cdn=use_cdn, dataset=dataset))\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/utils\/file_utils.py\", line 403, in http_head\r\n url, proxies=proxies, headers=headers, cookies=cookies, allow_redirects=allow_redirects, timeout=timeout\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/requests\/api.py\", line 104, in head\r\n return request('head', url, **kwargs)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/requests\/api.py\", line 61, in request\r\n return session.request(method=method, url=url, **kwargs)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/requests\/sessions.py\", line 542, in request\r\n resp = self.send(prep, **send_kwargs)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/requests\/sessions.py\", line 655, in send\r\n r = adapter.send(request, **kwargs)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/requests\/adapters.py\", line 504, in send\r\n raise ConnectTimeout(e, request=request)\r\nrequests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: \/datasets.huggingface.co\/datasets\/datasets\/glue\/glue.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7ff6d6c60a20>, 'Connection to s3.amazonaws.com timed out. 
(connect timeout=10)'))\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1671\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1670","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1670\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1670\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1670\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1670","id":776608579,"node_id":"MDU6SXNzdWU3NzY2MDg1Nzk=","number":1670,"title":"wiki_dpr pre-processing performance","user":{"login":"dbarnhart","id":753898,"node_id":"MDQ6VXNlcjc1Mzg5OA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/753898?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dbarnhart","html_url":"https:\/\/github.com\/dbarnhart","followers_url":"https:\/\/api.github.com\/users\/dbarnhart\/followers","following_url":"https:\/\/api.github.com\/users\/dbarnhart\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dbarnhart\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dbarnhart\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dbarnhart\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dbarnhart\/orgs","repos_url":"https:\/\/api.github.com\/users\/dbarnhart\/repos","events_url":"https:\/\/api.github.com\/users\/dbarnhart\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dbarnhart\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"},{"id":2067401494,"node_id":"MDU6TGFiZWwyMDY3NDAxNDk0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/Dataset%20discussion","name":"Dataset discussion","color":"72f99f","default":false,"description":"Discussions on the datasets"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! And thanks for the tips :) \r\n\r\nIndeed currently `wiki_dpr` takes some time to be processed.\r\nMultiprocessing for dataset generation is definitely going to speed up things.\r\n\r\nRegarding the index, note that for the default configurations the index is downloaded instead of being built, which avoids spending time on constructing the index. However, in other cases it would be awesome to make the construction faster.\r\n\r\nAny contribution that can help make things faster is welcome. In particular, if you have some code that can build a wiki_dpr IVF PQ index in a sharded GPU setup and would like to share it, we can add it to an `examples` folder, especially since faiss is becoming the reference library for dataset indexing for tasks like Open Domain Question Answering.\r\n\r\n","I'd be happy to contribute something when I get the time, probably adding multiprocessing and \/ or cython support to wiki_dpr. I've written cythonized apache beam code before as well.\r\n\r\nFor sharded index building, I used the FAISS example code for indexing 1 billion vectors as a start. 
I'm sure you're aware that the documentation isn't great, but the source code is fairly easy to follow.","Nice thanks ! That would be awesome to make its construction faster :) "],"created_at":1609357303000,"updated_at":1611826896000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I've been working with wiki_dpr and noticed that the dataset processing performance is seriously impaired [1]. It takes about 12h to process the entire dataset. Most of this time is simply loading and processing the data, but the actual indexing is also quite slow (3h).\r\n\r\nI won't repeat the concerns around multiprocessing as they are addressed in other issues (#786), but this is the first obvious thing to do. Using cython to speed up the text manipulation may also help. Loading and processing a dataset of this size in under 15 minutes does not seem unreasonable on a modern multi-core machine. I have hit such targets myself on similar tasks. Would love to see this improve.\r\n\r\nThe other issue is that it takes 3h to construct the FAISS index. If only we could use GPUs with HNSW, but we can't. My sharded GPU indexing code can build an IVF + PQ index in 10 minutes on 20 million vectors. Still, 3h seems slow even for the CPU.\r\n\r\nIt looks like HF is adding only 1000 vectors at a time by default [2], whereas the faiss benchmark adds 1 million vectors at a time (effectively) [3]. It's possible the runtime could be reduced with a larger batch. Also, it looks like project dependencies ultimately use OpenBLAS, but this is known to have issues when combined with OpenMP, which HNSW uses [4]. A workaround is to set the environment variable `OMP_WAIT_POLICY=PASSIVE` via `os.environ` or similar.\r\n\r\nReferences:\r\n[1] https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/wiki_dpr\/wiki_dpr.py\r\n[2] https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/search.py\r\n[3] https:\/\/github.com\/facebookresearch\/faiss\/blob\/master\/benchs\/bench_hnsw.py\r\n[4] https:\/\/github.com\/facebookresearch\/faiss\/issues\/422","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1670\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1669","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1669\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1669\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1669\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1669","id":776608386,"node_id":"MDU6SXNzdWU3NzY2MDgzODY=","number":1669,"title":"wiki_dpr dataset pre-processesing 
performance","user":{"login":"dbarnhart","id":753898,"node_id":"MDQ6VXNlcjc1Mzg5OA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/753898?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dbarnhart","html_url":"https:\/\/github.com\/dbarnhart","followers_url":"https:\/\/api.github.com\/users\/dbarnhart\/followers","following_url":"https:\/\/api.github.com\/users\/dbarnhart\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dbarnhart\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dbarnhart\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dbarnhart\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dbarnhart\/orgs","repos_url":"https:\/\/api.github.com\/users\/dbarnhart\/repos","events_url":"https:\/\/api.github.com\/users\/dbarnhart\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dbarnhart\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Sorry, double posted."],"created_at":1609357269000,"updated_at":1609357345000,"closed_at":1609357345000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I've been working with wiki_dpr and noticed that the dataset processing is seriously impaired in performance [1]. It takes about 12h to process the entire dataset. Most of this time is simply loading and processing the data, but the actual indexing is also quite slow (3h).\r\n\r\nI won't repeat the concerns around multiprocessing as they are addressed in other issues (#786), but this is the first obvious thing to do. Using cython to speed up the text manipulation may be also help. Loading and processing a dataset of this size in under 15 minutes does not seem unreasonable on a modern multi-core machine. I have hit such targets myself on similar tasks. Would love to see this improve.\r\n\r\nThe other issue is that it takes 3h to construct the FAISS index. If only we could use GPUs with HNSW, but we can't. My sharded GPU indexing code can build an IVF + PQ index in 10 minutes on 20 million vectors. Still, 3h seems slow even for the CPU.\r\n\r\nIt looks like HF is adding only 1000 vectors at a time by default [2], whereas the faiss benchmarks adds 1 million vectors at a time (effectively) [3]. It's possible the runtime could be reduced with a larger batch. Also, it looks like project dependencies ultimately use OpenBLAS, but this is known to have issues when combined with OpenMP, which HNSW does [3]. 
A workaround is to set the environment variable `OMP_WAIT_POLICY=PASSIVE` via `os.environ` or similar.\r\n\r\nReferences:\r\n[1] https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/wiki_dpr\/wiki_dpr.py\r\n[2] https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/search.py\r\n[3] https:\/\/github.com\/facebookresearch\/faiss\/blob\/master\/benchs\/bench_hnsw.py\r\n[4] https:\/\/github.com\/facebookresearch\/faiss\/issues\/422","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1669\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1668","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1668\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1668\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1668\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1668","id":776552854,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ3MDIxODI0","number":1668,"title":"xed_en_fi dataset Cleanup","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1609348278000,"updated_at":1609348964000,"closed_at":1609348963000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1668","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1668","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1668.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1668.patch"},"body":"Fix ClassLabel feature type and minor mistakes in the dataset card","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1668\/timeline","performed_via_github_app":null,"is_pull_request":true} 
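The two workarounds discussed in #1670 / #1669 above (setting `OMP_WAIT_POLICY=PASSIVE` and adding vectors to the FAISS index in larger batches) can be combined roughly as in the sketch below. This is an illustrative example, not code from the issue thread: the embedding dimension, HNSW parameter, batch size and random placeholder data are assumptions for the sake of the example.

```python
import os

# The OpenMP wait policy must be set before faiss (and its OpenMP runtime) is imported,
# otherwise the OpenBLAS/OpenMP interaction mentioned in the issue can slow HNSW down.
os.environ["OMP_WAIT_POLICY"] = "PASSIVE"

import faiss
import numpy as np

dim = 768                             # DPR embedding size
index = faiss.IndexHNSWFlat(dim, 32)  # HNSW index with 32 links per node (example value)

# Placeholder embeddings; in practice these would come from the wiki_dpr passages.
embeddings = np.random.rand(200_000, dim).astype("float32")

# Add vectors in large batches instead of the 1000-vector default mentioned above.
batch_size = 50_000
for start in range(0, len(embeddings), batch_size):
    index.add(embeddings[start:start + batch_size])
```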
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1667","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1667\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1667\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1667\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1667","id":776446658,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ2OTM4MjAy","number":1667,"title":"Fix NER metric example in Overview notebook","user":{"login":"jungwhank","id":53588015,"node_id":"MDQ6VXNlcjUzNTg4MDE1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/53588015?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jungwhank","html_url":"https:\/\/github.com\/jungwhank","followers_url":"https:\/\/api.github.com\/users\/jungwhank\/followers","following_url":"https:\/\/api.github.com\/users\/jungwhank\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jungwhank\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jungwhank\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jungwhank\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jungwhank\/orgs","repos_url":"https:\/\/api.github.com\/users\/jungwhank\/repos","events_url":"https:\/\/api.github.com\/users\/jungwhank\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jungwhank\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1609333519000,"updated_at":1609377128000,"closed_at":1609348911000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1667","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1667","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1667.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1667.patch"},"body":"Fix errors in `NER metric example` section in `Overview.ipynb`.\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nImportError Traceback (most recent call last)\r\n<ipython-input-37-ee559b166e25> in <module>()\r\n----> 1 ner_metric = load_metric('seqeval')\r\n 2 references = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]\r\n 3 predictions = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]\r\n 4 ner_metric.compute(predictions, references)\r\n\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs)\r\n 340 if needs_to_be_installed:\r\n 341 raise ImportError(\r\n--> 342 f\"To be able to use this {module_type}, you need to install the following dependencies\"\r\n 343 f\"{[lib_name for lib_name, lib_path in needs_to_be_installed]} using 'pip install \"\r\n 344 f\"{' '.join([lib_path for lib_name, lib_path in needs_to_be_installed])}' for instance'\"\r\n\r\nImportError: To be able to use this metric, you need to install the following dependencies['seqeval'] using 'pip install seqeval' for instance'\r\n```\r\n\r\n```\r\nValueError Traceback (most recent call last)\r\n<ipython-input-39-ee559b166e25> in 
<module>()\r\n 2 references = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]\r\n 3 predictions = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]\r\n----> 4 ner_metric.compute(predictions, references)\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/metric.py in compute(self, *args, **kwargs)\r\n 378 \"\"\"\r\n 379 if args:\r\n--> 380 raise ValueError(\"Please call `compute` using keyword arguments.\")\r\n 381 \r\n 382 predictions = kwargs.pop(\"predictions\", None)\r\n\r\nValueError: Please call `compute` using keyword arguments.\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1667\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1666","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1666\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1666\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1666\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1666","id":776432006,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ2OTI2MzQw","number":1666,"title":"Add language to dataset card for Makhzan dataset.","user":{"login":"arkhalid","id":14899066,"node_id":"MDQ6VXNlcjE0ODk5MDY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14899066?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/arkhalid","html_url":"https:\/\/github.com\/arkhalid","followers_url":"https:\/\/api.github.com\/users\/arkhalid\/followers","following_url":"https:\/\/api.github.com\/users\/arkhalid\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/arkhalid\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/arkhalid\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/arkhalid\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/arkhalid\/orgs","repos_url":"https:\/\/api.github.com\/users\/arkhalid\/repos","events_url":"https:\/\/api.github.com\/users\/arkhalid\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/arkhalid\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1609331152000,"updated_at":1609348835000,"closed_at":1609348835000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1666","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1666","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1666.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1666.patch"},"body":"Add language to dataset card.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1666\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1665","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1665\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1665\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1665\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1665","id":776431087,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ2OTI1NTgw","number":1665,"title":"Add language to dataset card for Counter dataset.","user":{"login":"arkhalid","id":14899066,"node_id":"MDQ6VXNlcjE0ODk5MDY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14899066?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/arkhalid","html_url":"https:\/\/github.com\/arkhalid","followers_url":"https:\/\/api.github.com\/users\/arkhalid\/followers","following_url":"https:\/\/api.github.com\/users\/arkhalid\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/arkhalid\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/arkhalid\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/arkhalid\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/arkhalid\/orgs","repos_url":"https:\/\/api.github.com\/users\/arkhalid\/repos","events_url":"https:\/\/api.github.com\/users\/arkhalid\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/arkhalid\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1609331000000,"updated_at":1609348820000,"closed_at":1609348820000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1665","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1665","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1665.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1665.patch"},"body":"Add language.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1665\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1664","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1664\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1664\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1664\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1664","id":775956441,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ2NTM1NDcy","number":1664,"title":"removed \\n in 
labels","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1609256503000,"updated_at":1609348729000,"closed_at":1609348729000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1664","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1664","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1664.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1664.patch"},"body":"updated social_i_qa labels as per #1633 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1664\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1663","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1663\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1663\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1663\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1663","id":775914320,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ2NTAzMjg5","number":1663,"title":"update saving and loading methods for faiss index so to accept path l\u2026","user":{"login":"tslott","id":11614798,"node_id":"MDQ6VXNlcjExNjE0Nzk4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11614798?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tslott","html_url":"https:\/\/github.com\/tslott","followers_url":"https:\/\/api.github.com\/users\/tslott\/followers","following_url":"https:\/\/api.github.com\/users\/tslott\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tslott\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tslott\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tslott\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tslott\/orgs","repos_url":"https:\/\/api.github.com\/users\/tslott\/repos","events_url":"https:\/\/api.github.com\/users\/tslott\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tslott\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Seems ok 
to me, what do you think @lhoestq ?"],"created_at":1609251337000,"updated_at":1610962043000,"closed_at":1610962043000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1663","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1663","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1663.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1663.patch"},"body":"- Update saving and loading methods for the faiss index so as to accept path-like objects from pathlib\r\n\r\nThe current code only supports using a string type to save and load a faiss index. This change makes it possible to use a string type OR a Path from [pathlib](https:\/\/docs.python.org\/3\/library\/pathlib.html). The code becomes more intuitive this way, I think.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1663\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1662","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1662\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1662\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1662\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1662","id":775890154,"node_id":"MDU6SXNzdWU3NzU4OTAxNTQ=","number":1662,"title":"Arrow file is too large when saving vector data","user":{"login":"weiwangthu","id":22360336,"node_id":"MDQ6VXNlcjIyMzYwMzM2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22360336?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/weiwangthu","html_url":"https:\/\/github.com\/weiwangthu","followers_url":"https:\/\/api.github.com\/users\/weiwangthu\/followers","following_url":"https:\/\/api.github.com\/users\/weiwangthu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/weiwangthu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/weiwangthu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/weiwangthu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/weiwangthu\/orgs","repos_url":"https:\/\/api.github.com\/users\/weiwangthu\/repos","events_url":"https:\/\/api.github.com\/users\/weiwangthu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/weiwangthu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi !\r\nThe arrow file size is due to the embeddings. Indeed if they're stored as float32 then the total size of the embeddings is\r\n\r\n20 000 000 vectors * 768 dimensions * 4 bytes per dimension ~= 60GB\r\n\r\nIf you want to reduce the size you can consider using quantization for example, or maybe using dimension reduction techniques.\r\n","Thanks for your reply @lhoestq.\r\nI want to save the original embeddings for these sentences for subsequent calculations. So does arrow have a way to save in a compressed format to reduce the size of the file?","Arrow doesn't have compression since it is designed to have no serialization overhead","I see. 
Thank you."],"created_at":1609248192000,"updated_at":1611238359000,"closed_at":1611238359000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I computed the sentence embedding of each sentence of bookcorpus data using bert base and saved them to disk. I used 20M sentences and the obtained arrow file is about 59GB while the original text file is only about 1.3GB. Are there any ways to reduce the size of the arrow file?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1662\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1661","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1661\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1661\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1661\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1661","id":775840801,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ2NDQzNjYx","number":1661,"title":"updated dataset cards","user":{"login":"Nilanshrajput","id":28673745,"node_id":"MDQ6VXNlcjI4NjczNzQ1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28673745?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Nilanshrajput","html_url":"https:\/\/github.com\/Nilanshrajput","followers_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/followers","following_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/orgs","repos_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/repos","events_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1609240840000,"updated_at":1609348516000,"closed_at":1609348516000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1661","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1661","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1661.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1661.patch"},"body":"added dataset instance in the card.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1661\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1660","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1660\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1660\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1660\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1660","id":775831423,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ2NDM2MDg1","number":1660,"title":"add dataset info","user":{"login":"harshalmittal4","id":24206326,"node_id":"MDQ6VXNlcjI0MjA2MzI2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24206326?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/harshalmittal4","html_url":"https:\/\/github.com\/harshalmittal4","followers_url":"https:\/\/api.github.com\/users\/harshalmittal4\/followers","following_url":"https:\/\/api.github.com\/users\/harshalmittal4\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/harshalmittal4\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/harshalmittal4\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/harshalmittal4\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/harshalmittal4\/orgs","repos_url":"https:\/\/api.github.com\/users\/harshalmittal4\/repos","events_url":"https:\/\/api.github.com\/users\/harshalmittal4\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/harshalmittal4\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1609239499000,"updated_at":1609347870000,"closed_at":1609347870000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1660","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1660","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1660.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1660.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1660\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1659","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1659\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1659\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1659\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1659","id":775831288,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ2NDM1OTcy","number":1659,"title":"update dataset 
info","user":{"login":"harshalmittal4","id":24206326,"node_id":"MDQ6VXNlcjI0MjA2MzI2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24206326?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/harshalmittal4","html_url":"https:\/\/github.com\/harshalmittal4","followers_url":"https:\/\/api.github.com\/users\/harshalmittal4\/followers","following_url":"https:\/\/api.github.com\/users\/harshalmittal4\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/harshalmittal4\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/harshalmittal4\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/harshalmittal4\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/harshalmittal4\/orgs","repos_url":"https:\/\/api.github.com\/users\/harshalmittal4\/repos","events_url":"https:\/\/api.github.com\/users\/harshalmittal4\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/harshalmittal4\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1609239481000,"updated_at":1609347307000,"closed_at":1609347307000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1659","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1659","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1659.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1659.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1659\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1658","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1658\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1658\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1658\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1658","id":775651085,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ2Mjg4Njg4","number":1658,"title":"brwac dataset: add instances and data splits 
info","user":{"login":"jonatasgrosman","id":5097052,"node_id":"MDQ6VXNlcjUwOTcwNTI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5097052?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jonatasgrosman","html_url":"https:\/\/github.com\/jonatasgrosman","followers_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/followers","following_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/orgs","repos_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/repos","events_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1609205085000,"updated_at":1609347266000,"closed_at":1609347266000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1658","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1658","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1658.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1658.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1658\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1657","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1657\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1657\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1657\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1657","id":775647000,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ2Mjg1NjU2","number":1657,"title":"mac_morpho dataset: add data splits 
info","user":{"login":"jonatasgrosman","id":5097052,"node_id":"MDQ6VXNlcjUwOTcwNTI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5097052?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jonatasgrosman","html_url":"https:\/\/github.com\/jonatasgrosman","followers_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/followers","following_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/orgs","repos_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/repos","events_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1609203921000,"updated_at":1609347084000,"closed_at":1609347084000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1657","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1657","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1657.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1657.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1657\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1656","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1656\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1656\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1656\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1656","id":775645356,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ2Mjg0NDI3","number":1656,"title":"assin 2 dataset: add instances and data splits 
info","user":{"login":"jonatasgrosman","id":5097052,"node_id":"MDQ6VXNlcjUwOTcwNTI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5097052?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jonatasgrosman","html_url":"https:\/\/github.com\/jonatasgrosman","followers_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/followers","following_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/orgs","repos_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/repos","events_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1609203471000,"updated_at":1609347056000,"closed_at":1609347056000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1656","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1656","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1656.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1656.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1656\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1655","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1655\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1655\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1655\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1655","id":775643418,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ2MjgyOTM4","number":1655,"title":"assin dataset: add instances and data splits 
info","user":{"login":"jonatasgrosman","id":5097052,"node_id":"MDQ6VXNlcjUwOTcwNTI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5097052?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jonatasgrosman","html_url":"https:\/\/github.com\/jonatasgrosman","followers_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/followers","following_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/orgs","repos_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/repos","events_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1609202876000,"updated_at":1609347023000,"closed_at":1609347023000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1655","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1655","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1655.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1655.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1655\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1654","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1654\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1654\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1654\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1654","id":775640729,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ2MjgwODIy","number":1654,"title":"lener_br dataset: add instances and data splits 
info","user":{"login":"jonatasgrosman","id":5097052,"node_id":"MDQ6VXNlcjUwOTcwNTI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5097052?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jonatasgrosman","html_url":"https:\/\/github.com\/jonatasgrosman","followers_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/followers","following_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/orgs","repos_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/repos","events_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1609202112000,"updated_at":1609346972000,"closed_at":1609346972000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1654","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1654","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1654.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1654.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1654\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1653","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1653\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1653\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1653\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1653","id":775632945,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ2Mjc0Njc0","number":1653,"title":"harem dataset: add data splits 
info","user":{"login":"jonatasgrosman","id":5097052,"node_id":"MDQ6VXNlcjUwOTcwNTI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5097052?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jonatasgrosman","html_url":"https:\/\/github.com\/jonatasgrosman","followers_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/followers","following_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/orgs","repos_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/repos","events_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1609199900000,"updated_at":1609346943000,"closed_at":1609346943000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1653","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1653","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1653.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1653.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1653\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1652","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1652\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1652\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1652\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1652","id":775571813,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ2MjI1NTM1","number":1652,"title":"Update dataset cards from previous 
sprint","user":{"login":"j-chim","id":22435209,"node_id":"MDQ6VXNlcjIyNDM1MjA5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22435209?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/j-chim","html_url":"https:\/\/github.com\/j-chim","followers_url":"https:\/\/api.github.com\/users\/j-chim\/followers","following_url":"https:\/\/api.github.com\/users\/j-chim\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/j-chim\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/j-chim\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/j-chim\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/j-chim\/orgs","repos_url":"https:\/\/api.github.com\/users\/j-chim\/repos","events_url":"https:\/\/api.github.com\/users\/j-chim\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/j-chim\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1609186847000,"updated_at":1609346884000,"closed_at":1609346884000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1652","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1652","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1652.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1652.patch"},"body":"This PR updates the dataset cards\/readmes for the 4 approved PRs I submitted in the previous sprint.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1652\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1651","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1651\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1651\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1651\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1651","id":775554319,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ2MjExMjQw","number":1651,"title":"Add twi wordsim353","user":{"login":"dadelani","id":23586676,"node_id":"MDQ6VXNlcjIzNTg2Njc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23586676?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dadelani","html_url":"https:\/\/github.com\/dadelani","followers_url":"https:\/\/api.github.com\/users\/dadelani\/followers","following_url":"https:\/\/api.github.com\/users\/dadelani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dadelani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dadelani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dadelani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dadelani\/orgs","repos_url":"https:\/\/api.github.com\/users\/dadelani\/repos","events_url":"https:\/\/api.github.com\/users\/dadelani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dadelani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Well actually it looks like it was already added in #1428 \r\n\r\nMaybe we can 
close this one ? Or you wanted to make changes to this dataset ?","Thank you, it's just a modification of Readme. I added the missing citation.","Indeed thanks"],"created_at":1609183915000,"updated_at":1609753179000,"closed_at":1609753178000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1651","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1651","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1651.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1651.patch"},"body":"Added the citation information to the README file","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1651\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1650","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1650\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1650\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1650\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1650","id":775545912,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ2MjA0MzYy","number":1650,"title":"Update README.md","user":{"login":"MisbahKhan789","id":15351802,"node_id":"MDQ6VXNlcjE1MzUxODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15351802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/MisbahKhan789","html_url":"https:\/\/github.com\/MisbahKhan789","followers_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/followers","following_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/orgs","repos_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/repos","events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1609182545000,"updated_at":1609238594000,"closed_at":1609238594000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1650","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1650","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1650.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1650.patch"},"body":"added dataset summary","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1650\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1649","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1649\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1649\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1649\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1649","id":775544487,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ2MjAzMjE1","number":1649,"title":"Update README.md","user":{"login":"MisbahKhan789","id":15351802,"node_id":"MDQ6VXNlcjE1MzUxODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15351802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/MisbahKhan789","html_url":"https:\/\/github.com\/MisbahKhan789","followers_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/followers","following_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/orgs","repos_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/repos","events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1609182300000,"updated_at":1609239058000,"closed_at":1609238583000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1649","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1649","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1649.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1649.patch"},"body":"Added information in the dataset card","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1649\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1648","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1648\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1648\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1648\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1648","id":775542360,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ2MjAxNTQ0","number":1648,"title":"Update 
README.md","user":{"login":"MisbahKhan789","id":15351802,"node_id":"MDQ6VXNlcjE1MzUxODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15351802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/MisbahKhan789","html_url":"https:\/\/github.com\/MisbahKhan789","followers_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/followers","following_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/orgs","repos_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/repos","events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1609181946000,"updated_at":1609238354000,"closed_at":1609238354000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1648","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1648","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1648.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1648.patch"},"body":"added dataset summary","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1648\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1647","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1647\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1647\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1647\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1647","id":775525799,"node_id":"MDU6SXNzdWU3NzU1MjU3OTk=","number":1647,"title":"NarrativeQA fails to load with 
`load_dataset`","user":{"login":"eric-mitchell","id":56408839,"node_id":"MDQ6VXNlcjU2NDA4ODM5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/56408839?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/eric-mitchell","html_url":"https:\/\/github.com\/eric-mitchell","followers_url":"https:\/\/api.github.com\/users\/eric-mitchell\/followers","following_url":"https:\/\/api.github.com\/users\/eric-mitchell\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/eric-mitchell\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/eric-mitchell\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/eric-mitchell\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/eric-mitchell\/orgs","repos_url":"https:\/\/api.github.com\/users\/eric-mitchell\/repos","events_url":"https:\/\/api.github.com\/users\/eric-mitchell\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/eric-mitchell\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @eric-mitchell,\r\nI think the issue might be that this dataset was added during the community sprint and has not been released yet. It will be available with the v2 of `datasets`.\r\nFor now, you should be able to load the datasets after installing the latest (master) version of `datasets` using pip:\r\n`pip install git+https:\/\/github.com\/huggingface\/datasets.git@master`","@bhavitvyamalik Great, thanks for this! Confirmed that the problem is resolved on master at [cbbda53](https:\/\/github.com\/huggingface\/datasets\/commit\/cbbda53ac1520b01f0f67ed6017003936c41ec59).","Update: HuggingFace did an intermediate release yesterday just before the v2.0.\r\n\r\nTo load it you can just update `datasets`\r\n\r\n`pip install --upgrade datasets`"],"created_at":1609179369000,"updated_at":1609848308000,"closed_at":1609696685000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"When loading the NarrativeQA dataset with `load_dataset('narrativeqa')` as given in the documentation [here](https:\/\/huggingface.co\/datasets\/narrativeqa), I receive a cascade of exceptions, ending with\r\n\r\n FileNotFoundError: Couldn't find file locally at narrativeqa\/narrativeqa.py, or remotely at \r\n https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.3\/datasets\/narrativeqa\/narrativeqa.py or \r\n https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/datasets\/datasets\/narrativeqa\/narrativeqa.py\r\n\r\nWorkaround: manually copy the `narrativeqa.py` builder into my local directory with \r\n\r\n curl https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/master\/datasets\/narrativeqa\/narrativeqa.py -o narrativeqa.py\r\n\r\nand load the dataset as `load_dataset('narrativeqa.py')` everything works fine. 
I'm on datasets v1.1.3 using Python 3.6.10.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1647\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1646","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1646\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1646\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1646\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1646","id":775499344,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ2MTY4MTk3","number":1646,"title":"Add missing homepage in some dataset cards","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1609175388000,"updated_at":1609769337000,"closed_at":1609769336000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1646","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1646","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1646.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1646.patch"},"body":"In some dataset cards the homepage field in the `Dataset Description` section was missing\/empty","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1646\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1645","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1645\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1645\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1645\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1645","id":775473106,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ2MTQ4OTUx","number":1645,"title":"Rename \"part-of-speech-tagging\" tag in some dataset 
cards","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1609171749000,"updated_at":1610014094000,"closed_at":1610014093000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1645","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1645","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1645.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1645.patch"},"body":"`part-of-speech-tagging` was not part of the tagging taxonomy under `structure-prediction`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1645\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1644","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1644\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1644\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1644\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1644","id":775375880,"node_id":"MDU6SXNzdWU3NzUzNzU4ODA=","number":1644,"title":"HoVeR dataset fails to load","user":{"login":"urikz","id":1473778,"node_id":"MDQ6VXNlcjE0NzM3Nzg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1473778?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/urikz","html_url":"https:\/\/github.com\/urikz","followers_url":"https:\/\/api.github.com\/users\/urikz\/followers","following_url":"https:\/\/api.github.com\/users\/urikz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/urikz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/urikz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/urikz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/urikz\/orgs","repos_url":"https:\/\/api.github.com\/users\/urikz\/repos","events_url":"https:\/\/api.github.com\/users\/urikz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/urikz\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hover was added recently, that's why it wasn't available yet.\r\n\r\nTo load it you can just update 
`datasets`\r\n```\r\npip install --upgrade datasets\r\n```\r\n\r\nand then you can load `hover` with\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"hover\")\r\n```"],"created_at":1609158427000,"updated_at":1609785991000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi! I'm getting an error when trying to load **HoVeR** dataset. Another one (**SQuAD**) does work for me. I'm using the latest (1.1.3) version of the library.\r\n\r\nSteps to reproduce the error:\r\n\r\n```python\r\n>>> from datasets import load_dataset\r\n>>> dataset = load_dataset(\"hover\")\r\nTraceback (most recent call last):\r\n File \"\/Users\/urikz\/anaconda\/envs\/mentionmemory\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 267, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"\/Users\/urikz\/anaconda\/envs\/mentionmemory\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 308, in cached_path\r\n use_etag=download_config.use_etag,\r\n File \"\/Users\/urikz\/anaconda\/envs\/mentionmemory\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 486, in get_from_cache\r\n raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\nFileNotFoundError: Couldn't find file at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.3\/datasets\/hover\/hover.py\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"\/Users\/urikz\/anaconda\/envs\/mentionmemory\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 278, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"\/Users\/urikz\/anaconda\/envs\/mentionmemory\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 308, in cached_path\r\n use_etag=download_config.use_etag,\r\n File \"\/Users\/urikz\/anaconda\/envs\/mentionmemory\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 486, in get_from_cache\r\n raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\nFileNotFoundError: Couldn't find file at https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/datasets\/datasets\/hover\/hover.py\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/Users\/urikz\/anaconda\/envs\/mentionmemory\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 589, in load_dataset\r\n path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True\r\n File \"\/Users\/urikz\/anaconda\/envs\/mentionmemory\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 282, in prepare_module\r\n combined_path, github_file_path, file_path\r\nFileNotFoundError: Couldn't find file locally at hover\/hover.py, or remotely at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.3\/datasets\/hover\/hover.py or https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/datasets\/datasets\/hover\/hover.py\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1644\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1643","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1643\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1643\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1643\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1643","id":775280046,"node_id":"MDU6SXNzdWU3NzUyODAwNDY=","number":1643,"title":"Dataset social_bias_frames 404","user":{"login":"atemate","id":7501517,"node_id":"MDQ6VXNlcjc1MDE1MTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7501517?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/atemate","html_url":"https:\/\/github.com\/atemate","followers_url":"https:\/\/api.github.com\/users\/atemate\/followers","following_url":"https:\/\/api.github.com\/users\/atemate\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/atemate\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/atemate\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/atemate\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/atemate\/orgs","repos_url":"https:\/\/api.github.com\/users\/atemate\/repos","events_url":"https:\/\/api.github.com\/users\/atemate\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/atemate\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I see, master is already fixed in https:\/\/github.com\/huggingface\/datasets\/commit\/9e058f098a0919efd03a136b9b9c3dec5076f626"],"created_at":1609144534000,"updated_at":1609144687000,"closed_at":1609144687000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"```\r\n>>> from datasets import load_dataset\r\n>>> dataset = load_dataset(\"social_bias_frames\")\r\n...\r\nDownloading and preparing dataset social_bias_frames\/default\r\n...\r\n~\/.pyenv\/versions\/3.7.6\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag)\r\n 484 )\r\n 485 elif response is not None and response.status_code == 404:\r\n--> 486 raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\n 487 raise ConnectionError(\"Couldn't reach {}\".format(url))\r\n 488 \r\n\r\nFileNotFoundError: Couldn't find file at https:\/\/homes.cs.washington.edu\/~msap\/social-bias-frames\/SocialBiasFrames_v2.tgz\r\n```\r\n[Here](https:\/\/homes.cs.washington.edu\/~msap\/social-bias-frames\/) we find button `Download data` with the correct URL for the data: https:\/\/homes.cs.washington.edu\/~msap\/social-bias-frames\/SBIC.v2.tgz","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1643\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1642","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1642\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1642\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1642\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1642","id":775159568,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ1ODk1MzY1","number":1642,"title":"Ollie dataset","user":{"login":"ontocord","id":8900094,"node_id":"MDQ6VXNlcjg5MDAwOTQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8900094?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ontocord","html_url":"https:\/\/github.com\/ontocord","followers_url":"https:\/\/api.github.com\/users\/ontocord\/followers","following_url":"https:\/\/api.github.com\/users\/ontocord\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ontocord\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ontocord\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ontocord\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ontocord\/orgs","repos_url":"https:\/\/api.github.com\/users\/ontocord\/repos","events_url":"https:\/\/api.github.com\/users\/ontocord\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ontocord\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1609123417000,"updated_at":1609767325000,"closed_at":1609767324000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1642","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1642","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1642.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1642.patch"},"body":"This is the dataset used to train the Ollie open information extraction algorithm. It has over 21M sentences. 
See http:\/\/knowitall.github.io\/ollie\/ for more details.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1642\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1641","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1641\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1641\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1641\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1641","id":775110872,"node_id":"MDU6SXNzdWU3NzUxMTA4NzI=","number":1641,"title":"muchocine dataset cannot be dowloaded","user":{"login":"mrm8488","id":3653789,"node_id":"MDQ6VXNlcjM2NTM3ODk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3653789?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mrm8488","html_url":"https:\/\/github.com\/mrm8488","followers_url":"https:\/\/api.github.com\/users\/mrm8488\/followers","following_url":"https:\/\/api.github.com\/users\/mrm8488\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mrm8488\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mrm8488\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mrm8488\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mrm8488\/orgs","repos_url":"https:\/\/api.github.com\/users\/mrm8488\/repos","events_url":"https:\/\/api.github.com\/users\/mrm8488\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mrm8488\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892913,"node_id":"MDU6TGFiZWwxOTM1ODkyOTEz","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/wontfix","name":"wontfix","color":"ffffff","default":true,"description":"This will not be worked on"},{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I have encountered the same error with `v1.0.1` and `v1.0.2` on both Windows and Linux environments. However, cloning the repo and using the path to the dataset's root directory worked for me. Even after having the dataset cached - passing the path is the only way (for now) to load the dataset.\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"squad\") # Works\r\ndataset = load_dataset(\"code_search_net\", \"python\") # Error\r\ndataset = load_dataset(\"covid_qa_deepset\") # Error\r\n\r\npath = \"\/huggingface\/datasets\/datasets\/{}\/\"\r\ndataset = load_dataset(path.format(\"code_search_net\"), \"python\") # Works\r\ndataset = load_dataset(path.format(\"covid_qa_deepset\")) # Works\r\n```\r\n\r\n","Hi @mrm8488 and @amoux!\r\n The datasets you are trying to load have been added to the library during the community sprint for v2 last month. They will be available with the v2 release!\r\nFor now, there are still a couple of solutions to load the datasets:\r\n1. As suggested by @amoux, you can clone the git repo and pass the local path to the script\r\n2. 
You can also install the latest (master) version of `datasets` using pip: `pip install git+https:\/\/github.com\/huggingface\/datasets.git@master`","If you don't want to clone entire `datasets` repo, just download the `muchocine` directory and pass the local path to the directory. Cheers!","Muchocine was added recently, that's why it wasn't available yet.\r\n\r\nTo load it you can just update `datasets`\r\n```\r\npip install --upgrade datasets\r\n```\r\n\r\nand then you can load `muchocine` with\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"muchocine\", split=\"train\")\r\n```","Thanks @lhoestq "],"created_at":1609104388000,"updated_at":1627967249000,"closed_at":1627967249000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"```python\r\n---------------------------------------------------------------------------\r\nFileNotFoundError Traceback (most recent call last)\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs)\r\n 267 try:\r\n--> 268 local_path = cached_path(file_path, download_config=download_config)\r\n 269 except FileNotFoundError:\r\n\r\n7 frames\r\nFileNotFoundError: Couldn't find file at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.0.2\/datasets\/muchocine\/muchocine.py\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nFileNotFoundError Traceback (most recent call last)\r\nFileNotFoundError: Couldn't find file at https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/datasets\/datasets\/muchocine\/muchocine.py\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nFileNotFoundError Traceback (most recent call last)\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs)\r\n 281 raise FileNotFoundError(\r\n 282 \"Couldn't find file locally at {}, or remotely at {} or {}\".format(\r\n--> 283 combined_path, github_file_path, file_path\r\n 284 )\r\n 285 )\r\n\r\nFileNotFoundError: Couldn't find file locally at muchocine\/muchocine.py, or remotely at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.0.2\/datasets\/muchocine\/muchocine.py or https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/datasets\/datasets\/muchocine\/muchocine.py\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1641\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1640","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1640\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1640\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1640\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1640","id":774921836,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ1NzI2NzY4","number":1640,"title":"Fix \"'BertTokenizerFast' object has no attribute 
'max_len'\"","user":{"login":"mflis","id":15031715,"node_id":"MDQ6VXNlcjE1MDMxNzE1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15031715?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mflis","html_url":"https:\/\/github.com\/mflis","followers_url":"https:\/\/api.github.com\/users\/mflis\/followers","following_url":"https:\/\/api.github.com\/users\/mflis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mflis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mflis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mflis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mflis\/orgs","repos_url":"https:\/\/api.github.com\/users\/mflis\/repos","events_url":"https:\/\/api.github.com\/users\/mflis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mflis\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1609010741000,"updated_at":1609176395000,"closed_at":1609176395000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1640","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1640","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1640.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1640.patch"},"body":"Tensorflow 2.3.0 gives:\r\n FutureWarning: The `max_len` attribute has been deprecated and will be removed in a future version, use `model_max_length` instead.\r\n\r\nTensorflow 2.4.0 gives:\r\nAttributeError 'BertTokenizerFast' object has no attribute 'max_len'","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1640\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1639","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1639\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1639\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1639\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1639","id":774903472,"node_id":"MDU6SXNzdWU3NzQ5MDM0NzI=","number":1639,"title":"bug with sst2 in glue 
","user":{"login":"ghost","id":10137,"node_id":"MDQ6VXNlcjEwMTM3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10137?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ghost","html_url":"https:\/\/github.com\/ghost","followers_url":"https:\/\/api.github.com\/users\/ghost\/followers","following_url":"https:\/\/api.github.com\/users\/ghost\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ghost\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ghost\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ghost\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ghost\/orgs","repos_url":"https:\/\/api.github.com\/users\/ghost\/repos","events_url":"https:\/\/api.github.com\/users\/ghost\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ghost\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Maybe you can use nltk's treebank detokenizer ?\r\n```python\r\nfrom nltk.tokenize.treebank import TreebankWordDetokenizer\r\n\r\nTreebankWordDetokenizer().detokenize(\"it 's a charming and often affecting journey . \".split())\r\n# \"it's a charming and often affecting journey.\"\r\n```","I am looking for alternative file URL here instead of adding extra processing code: https:\/\/github.com\/huggingface\/datasets\/blob\/171f2bba9dd8b92006b13cf076a5bf31d67d3e69\/datasets\/glue\/glue.py#L174","I don't know if there exists a detokenized version somewhere. Even the version on kaggle is tokenized"],"created_at":1609001843000,"updated_at":1630076603000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI am getting very low accuracy on SST2 I investigate this and observe that for this dataset sentences are tokenized, while this is correct for the other datasets in GLUE, please see below.\r\nIs there any alternatives I could get untokenized sentences? I am unfortunately under time pressure to report some results on this dataset. thank you for your help. @lhoestq \r\n \r\n```\r\n>>> a = datasets.load_dataset('glue', 'sst2', split=\"validation\", script_version=\"master\")\r\nReusing dataset glue (\/julia\/datasets\/glue\/sst2\/1.0.0\/7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4)\r\n>>> a[:10]\r\n{'idx': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9], 'label': [1, 0, 1, 1, 0, 1, 0, 0, 1, 0], 'sentence': [\"it 's a charming and often affecting journey . \", 'unflinchingly bleak and desperate ', 'allows us to hope that nolan is poised to embark a major career as a commercial yet inventive filmmaker . ', \"the acting , costumes , music , cinematography and sound are all astounding given the production 's austere locales . \", \"it 's slow -- very , very slow . \", 'although laced with humor and a few fanciful touches , the film is a refreshingly serious look at young women . ', 'a sometimes tedious film . ', \"or doing last year 's taxes with your ex-wife . \", \"you do n't have to know about music to appreciate the film 's easygoing blend of comedy and romance . \", \"in exactly 89 minutes , most of which passed as slowly as if i 'd been sitting naked on an igloo , formula 51 sank from quirky to jerky to utter turkey . 
\"]}\r\n\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1639\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1638","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1638\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1638\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1638\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1638","id":774869184,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ1Njg5ODQ5","number":1638,"title":"Add id_puisi dataset","user":{"login":"ilhamfp","id":31740013,"node_id":"MDQ6VXNlcjMxNzQwMDEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/31740013?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ilhamfp","html_url":"https:\/\/github.com\/ilhamfp","followers_url":"https:\/\/api.github.com\/users\/ilhamfp\/followers","following_url":"https:\/\/api.github.com\/users\/ilhamfp\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ilhamfp\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ilhamfp\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ilhamfp\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ilhamfp\/orgs","repos_url":"https:\/\/api.github.com\/users\/ilhamfp\/repos","events_url":"https:\/\/api.github.com\/users\/ilhamfp\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ilhamfp\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1608986515000,"updated_at":1609346057000,"closed_at":1609346057000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1638","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1638","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1638.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1638.patch"},"body":"Puisi (poem) is an Indonesian poetic form. The dataset contains 7223 Indonesian puisi with its title and author. 
:)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1638\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1637","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1637\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1637\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1637\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1637","id":774710014,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ1NTc1NTMw","number":1637,"title":"Added `pn_summary` dataset","user":{"login":"m3hrdadfi","id":2601833,"node_id":"MDQ6VXNlcjI2MDE4MzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2601833?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/m3hrdadfi","html_url":"https:\/\/github.com\/m3hrdadfi","followers_url":"https:\/\/api.github.com\/users\/m3hrdadfi\/followers","following_url":"https:\/\/api.github.com\/users\/m3hrdadfi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/m3hrdadfi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/m3hrdadfi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/m3hrdadfi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/m3hrdadfi\/orgs","repos_url":"https:\/\/api.github.com\/users\/m3hrdadfi\/repos","events_url":"https:\/\/api.github.com\/users\/m3hrdadfi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/m3hrdadfi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["As always, I got stuck in the correct order of imports \ud83d\ude05\r\n@lhoestq, It's finished!","@lhoestq, It's done! Is there anything else that needs changes?"],"created_at":1608894084000,"updated_at":1609767799000,"closed_at":1609767799000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1637","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1637","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1637.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1637.patch"},"body":"#1635 \r\n\r\nYou did a great job with the fluent procedure regarding adding a dataset. I took the chance to add the dataset on my own. 
Thank you for your awesome job, and I hope this dataset found the researchers happy, specifically those interested in Persian Language (Farsi)!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1637\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1636","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1636\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1636\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1636\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1636","id":774574378,"node_id":"MDU6SXNzdWU3NzQ1NzQzNzg=","number":1636,"title":"winogrande cannot be dowloaded ","user":{"login":"ghost","id":10137,"node_id":"MDQ6VXNlcjEwMTM3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10137?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ghost","html_url":"https:\/\/github.com\/ghost","followers_url":"https:\/\/api.github.com\/users\/ghost\/followers","following_url":"https:\/\/api.github.com\/users\/ghost\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ghost\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ghost\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ghost\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ghost\/orgs","repos_url":"https:\/\/api.github.com\/users\/ghost\/repos","events_url":"https:\/\/api.github.com\/users\/ghost\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ghost\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I have same issue for other datasets (`myanmar_news` in my case).\r\n\r\nA version of `datasets` runs correctly on my local machine (**without GPU**) which looking for the dataset at \r\n```\r\nhttps:\/\/raw.githubusercontent.com\/huggingface\/datasets\/master\/datasets\/myanmar_news\/myanmar_news.py\r\n```\r\n\r\nMeanwhile, other version runs on Colab (**with GPU**) failed to download the dataset. It try to find the dataset at `1.1.3` instead of `master` . If I disable GPU on my Colab, the code can load the dataset without any problem.\r\n\r\nMaybe there is some version missmatch with the GPU and CPU version of code for these datasets?","It looks like they're two different issues\r\n\r\n----------\r\n\r\nFirst for `myanmar_news`: \r\n\r\nIt must come from the way you installed `datasets`.\r\nIf you install `datasets` from source, then the `myanmar_news` script will be loaded from `master`.\r\nHowever if you install from `pip` it will get it using the version of the lib (here `1.1.3`) and `myanmar_news` is not available in `1.1.3`.\r\n\r\nThe difference between your GPU and CPU executions must be the environment, one seems to have installed `datasets` from source and not the other.\r\n\r\n----------\r\n\r\nThen for `winogrande`:\r\n\r\nThe errors says that the url https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.3\/datasets\/winogrande\/winogrande.py is not reachable.\r\nHowever it works fine on my side.\r\n\r\nDoes your machine have an internet connection ? 
Are connections to github blocked by some sort of proxy ?\r\nCan you also try again in case github had issues when you tried the first time ?\r\n"],"created_at":1608848902000,"updated_at":1609163629000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\nI am getting this error when trying to run the codes on the cloud. Thank you for any suggestion and help on this @lhoestq \r\n\r\n```\r\n File \".\/finetune_trainer.py\", line 318, in <module>\r\n main()\r\n File \".\/finetune_trainer.py\", line 148, in main\r\n for task in data_args.tasks]\r\n File \".\/finetune_trainer.py\", line 148, in <listcomp>\r\n for task in data_args.tasks]\r\n File \"\/workdir\/seq2seq\/data\/tasks.py\", line 65, in get_dataset\r\n dataset = self.load_dataset(split=split)\r\n File \"\/workdir\/seq2seq\/data\/tasks.py\", line 466, in load_dataset\r\n return datasets.load_dataset('winogrande', 'winogrande_l', split=split)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/load.py\", line 589, in load_dataset\r\n path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/load.py\", line 267, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/utils\/file_utils.py\", line 308, in cached_path\r\n use_etag=download_config.use_etag,\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/utils\/file_utils.py\", line 487, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.3\/datasets\/winogrande\/winogrande.py\r\nyo\/0 I1224 14:17:46.419031 31226 main shadow.py:122 > Traceback (most recent call last):\r\n File \"\/usr\/lib\/python3.6\/runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"\/usr\/lib\/python3.6\/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/torch\/distributed\/launch.py\", line 260, in <module>\r\n main()\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/torch\/distributed\/launch.py\", line 256, in main\r\n cmd=cmd)\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1636\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1635","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1635\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1635\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1635\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1635","id":774524492,"node_id":"MDU6SXNzdWU3NzQ1MjQ0OTI=","number":1635,"title":"Persian Abstractive\/Extractive Text 
Summarization","user":{"login":"m3hrdadfi","id":2601833,"node_id":"MDQ6VXNlcjI2MDE4MzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2601833?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/m3hrdadfi","html_url":"https:\/\/github.com\/m3hrdadfi","followers_url":"https:\/\/api.github.com\/users\/m3hrdadfi\/followers","following_url":"https:\/\/api.github.com\/users\/m3hrdadfi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/m3hrdadfi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/m3hrdadfi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/m3hrdadfi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/m3hrdadfi\/orgs","repos_url":"https:\/\/api.github.com\/users\/m3hrdadfi\/repos","events_url":"https:\/\/api.github.com\/users\/m3hrdadfi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/m3hrdadfi\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1608832032000,"updated_at":1609773064000,"closed_at":1609773064000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Assembling datasets tailored to different tasks and languages is a precious target. This would be great to have this dataset included.\r\n\r\n## Adding a Dataset\r\n- **Name:** *pn-summary*\r\n- **Description:** *A well-structured summarization dataset for the Persian language consists of 93,207 records. It is prepared for Abstractive\/Extractive tasks (like cnn_dailymail for English). 
It can also be used in other scopes like Text Generation, Title Generation, and News Category Classification.*\r\n- **Paper:** *https:\/\/arxiv.org\/abs\/2012.11204*\r\n- **Data:** *https:\/\/github.com\/hooshvare\/pn-summary\/#download*\r\n- **Motivation:** *It is the first Persian abstractive\/extractive Text summarization dataset (like cnn_dailymail for English)!*\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1635\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1634","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1634\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1634\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1634\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1634","id":774487934,"node_id":"MDU6SXNzdWU3NzQ0ODc5MzQ=","number":1634,"title":"Inspecting datasets per category","user":{"login":"ghost","id":10137,"node_id":"MDQ6VXNlcjEwMTM3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10137?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ghost","html_url":"https:\/\/github.com\/ghost","followers_url":"https:\/\/api.github.com\/users\/ghost\/followers","following_url":"https:\/\/api.github.com\/users\/ghost\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ghost\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ghost\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ghost\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ghost\/orgs","repos_url":"https:\/\/api.github.com\/users\/ghost\/repos","events_url":"https:\/\/api.github.com\/users\/ghost\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ghost\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["That's interesting, can you tell me what you think would be useful to access to inspect a dataset?\r\n\r\nYou can filter them in the hub with the search by the way: https:\/\/huggingface.co\/datasets have you seen it?","Hi @thomwolf \r\nthank you, I was not aware of this, I was looking into the data viewer linked into readme page. \r\n\r\nThis is exactly what I was looking for, but this does not work currently, please see the attached \r\nI am selecting to see all nli datasets in english and it retrieves none. thanks\r\n\r\n![5tarDHn9CP6ngeM](https:\/\/user-images.githubusercontent.com\/53898419\/103107612-1509aa80-4638-11eb-85b5-0c995a189969.png)\r\n\r\n\r\n\r\n","I see 4 results for NLI in English but indeed some are not tagged yet and missing (GLUE), we will focus on that in January (cc @yjernite): https:\/\/huggingface.co\/datasets?filter=task_ids:natural-language-inference,languages:en"],"created_at":1608823594000,"updated_at":1610098084000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nIs there a way I could get all NLI datasets\/all QA datasets to get some understanding of available datasets per category? 
this is hard for me to inspect the datasets one by one in the webpage, thanks for the suggestions @lhoestq ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1634\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1633","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1633\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1633\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1633\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1633","id":774422603,"node_id":"MDU6SXNzdWU3NzQ0MjI2MDM=","number":1633,"title":"social_i_qa wrong format of labels","user":{"login":"ghost","id":10137,"node_id":"MDQ6VXNlcjEwMTM3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10137?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ghost","html_url":"https:\/\/github.com\/ghost","followers_url":"https:\/\/api.github.com\/users\/ghost\/followers","following_url":"https:\/\/api.github.com\/users\/ghost\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ghost\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ghost\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ghost\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ghost\/orgs","repos_url":"https:\/\/api.github.com\/users\/ghost\/repos","events_url":"https:\/\/api.github.com\/users\/ghost\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ghost\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq, should I raise a PR for this? Just a minor change while reading labels text file","Sure feel free to open a PR thanks !"],"created_at":1608815514000,"updated_at":1609348729000,"closed_at":1609348729000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\nthere is extra \"\\n\" in labels of social_i_qa datasets, no big deal, but I was wondering if you could remove it to make it consistent.\r\nso label is 'label': '1\\n', not '1'\r\nthanks\r\n\r\n```\r\n>>> import datasets \r\n>>> from datasets import load_dataset\r\n>>> dataset = load_dataset(\r\n... 
'social_i_qa')\r\ncahce dir \/julia\/cache\/datasets\r\nDownloading: 4.72kB [00:00, 3.52MB\/s] \r\ncahce dir \/julia\/cache\/datasets\r\nDownloading: 2.19kB [00:00, 1.81MB\/s] \r\nUsing custom data configuration default\r\nReusing dataset social_i_qa (\/julia\/datasets\/social_i_qa\/default\/0.1.0\/4a4190cc2d2482d43416c2167c0c5dccdd769d4482e84893614bd069e5c3ba06)\r\n>>> dataset['train'][0]\r\n{'answerA': 'like attending', 'answerB': 'like staying home', 'answerC': 'a good friend to have', 'context': 'Cameron decided to have a barbecue and gathered her friends together.', 'label': '1\\n', 'question': 'How would Others feel as a result?'}\r\n\r\n```\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1633\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1632","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1632\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1632\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1632\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1632","id":774388625,"node_id":"MDU6SXNzdWU3NzQzODg2MjU=","number":1632,"title":"SICK dataset ","user":{"login":"rabeehk","id":6278280,"node_id":"MDQ6VXNlcjYyNzgyODA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6278280?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rabeehk","html_url":"https:\/\/github.com\/rabeehk","followers_url":"https:\/\/api.github.com\/users\/rabeehk\/followers","following_url":"https:\/\/api.github.com\/users\/rabeehk\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rabeehk\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rabeehk\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rabeehk\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rabeehk\/orgs","repos_url":"https:\/\/api.github.com\/users\/rabeehk\/repos","events_url":"https:\/\/api.github.com\/users\/rabeehk\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rabeehk\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1608813614000,"updated_at":1612540165000,"closed_at":1612540165000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi, this would be great to have this dataset included. I might be missing something, but I could not find it in the list of already included datasets. Thank you. \r\n\r\n## Adding a Dataset\r\n- **Name:** SICK\r\n- **Description:** SICK consists of about 10,000 English sentence pairs that include many examples of the lexical, syntactic, and semantic phenomena. 
\r\n- **Paper:** https:\/\/www.aclweb.org\/anthology\/L14-1314\/\r\n- **Data:** http:\/\/marcobaroni.org\/composes\/sick.html\r\n- **Motivation:** This dataset is well-known in the NLP community used for recognizing entailment between sentences.\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1632\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1631","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1631\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1631\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1631\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1631","id":774349222,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ1Mjc5MTE2","number":1631,"title":"Update README.md","user":{"login":"savasy","id":6584825,"node_id":"MDQ6VXNlcjY1ODQ4MjU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6584825?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/savasy","html_url":"https:\/\/github.com\/savasy","followers_url":"https:\/\/api.github.com\/users\/savasy\/followers","following_url":"https:\/\/api.github.com\/users\/savasy\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/savasy\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/savasy\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/savasy\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/savasy\/orgs","repos_url":"https:\/\/api.github.com\/users\/savasy\/repos","events_url":"https:\/\/api.github.com\/users\/savasy\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/savasy\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1608810352000,"updated_at":1609176941000,"closed_at":1609175764000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1631","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1631","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1631.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1631.patch"},"body":"I made small change for citation","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1631\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1630","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1630\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1630\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1630\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1630","id":774332129,"node_id":"MDU6SXNzdWU3NzQzMzIxMjk=","number":1630,"title":"Adding UKP Argument Aspect Similarity 
Corpus","user":{"login":"rabeehk","id":6278280,"node_id":"MDQ6VXNlcjYyNzgyODA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6278280?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rabeehk","html_url":"https:\/\/github.com\/rabeehk","followers_url":"https:\/\/api.github.com\/users\/rabeehk\/followers","following_url":"https:\/\/api.github.com\/users\/rabeehk\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rabeehk\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rabeehk\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rabeehk\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rabeehk\/orgs","repos_url":"https:\/\/api.github.com\/users\/rabeehk\/repos","events_url":"https:\/\/api.github.com\/users\/rabeehk\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rabeehk\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Adding a link to the guide on adding a dataset if someone want to give it a try: https:\/\/github.com\/huggingface\/datasets#add-a-new-dataset-to-the-hub\r\n\r\nwe should add this guide to the issue template @lhoestq ","thanks @thomwolf , this is added now. The template is correct, sorry my mistake not to include it. "],"created_at":1608807691000,"updated_at":1608809418000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi, this would be great to have this dataset included.\r\n\r\n## Adding a Dataset\r\n- **Name:** UKP Argument Aspect Similarity Corpus\r\n- **Description:** The UKP Argument Aspect Similarity Corpus (UKP ASPECT) includes 3,595 sentence pairs over 28 controversial topics. 
Each sentence pair was annotated via crowdsourcing as either \u201chigh similarity\u201d, \u201csome similarity\u201d, \u201cno similarity\u201d or \u201cnot related\u201d with respect to the topic.\r\n- **Paper:** https:\/\/www.aclweb.org\/anthology\/P19-1054\/\r\n- **Data:** https:\/\/tudatalib.ulb.tu-darmstadt.de\/handle\/tudatalib\/1998\r\n- **Motivation:** this is one of the datasets currently used frequently in recent adapter papers like https:\/\/arxiv.org\/pdf\/2005.00247.pdf \r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).\r\n\r\nThank you","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1630\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1629","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1629\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1629\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1629\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1629","id":774255716,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ1MjAwNTQ3","number":1629,"title":"add wongnai_reviews test set labels","user":{"login":"cstorm125","id":15519308,"node_id":"MDQ6VXNlcjE1NTE5MzA4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15519308?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cstorm125","html_url":"https:\/\/github.com\/cstorm125","followers_url":"https:\/\/api.github.com\/users\/cstorm125\/followers","following_url":"https:\/\/api.github.com\/users\/cstorm125\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cstorm125\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cstorm125\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cstorm125\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cstorm125\/orgs","repos_url":"https:\/\/api.github.com\/users\/cstorm125\/repos","events_url":"https:\/\/api.github.com\/users\/cstorm125\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cstorm125\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1608796951000,"updated_at":1609176219000,"closed_at":1609176219000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1629","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1629","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1629.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1629.patch"},"body":"- add test set labels provided by @ekapolc\r\n- refactor `star_rating` to a `datasets.features.ClassLabel` field","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1629\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1628","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1628\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1628\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1628\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1628","id":774091411,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ1MDY5NTAy","number":1628,"title":"made suggested changes to hate-speech-and-offensive-language","user":{"login":"MisbahKhan789","id":15351802,"node_id":"MDQ6VXNlcjE1MzUxODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15351802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/MisbahKhan789","html_url":"https:\/\/github.com\/MisbahKhan789","followers_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/followers","following_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/orgs","repos_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/repos","events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1608765932000,"updated_at":1609150280000,"closed_at":1609150280000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1628","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1628","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1628.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1628.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1628\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1627","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1627\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1627\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1627\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1627","id":773960255,"node_id":"MDU6SXNzdWU3NzM5NjAyNTU=","number":1627,"title":"`Dataset.map` disable progress 
bar","user":{"login":"Nickil21","id":8767964,"node_id":"MDQ6VXNlcjg3Njc5NjQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8767964?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Nickil21","html_url":"https:\/\/github.com\/Nickil21","followers_url":"https:\/\/api.github.com\/users\/Nickil21\/followers","following_url":"https:\/\/api.github.com\/users\/Nickil21\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Nickil21\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Nickil21\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Nickil21\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Nickil21\/orgs","repos_url":"https:\/\/api.github.com\/users\/Nickil21\/repos","events_url":"https:\/\/api.github.com\/users\/Nickil21\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Nickil21\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Progress bar can be disabled like this:\r\n```python\r\nfrom datasets.utils.logging import set_verbosity_error\r\nset_verbosity_error()\r\n```\r\n\r\nThere is this line in `Dataset.map`:\r\n```python\r\nnot_verbose = bool(logger.getEffectiveLevel() > WARNING)\r\n```\r\n\r\nSo any logging level higher than `WARNING` turns off the progress bar."],"created_at":1608746022000,"updated_at":1609012656000,"closed_at":1609012637000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I can't find anything to turn off the `tqdm` progress bars while running a preprocessing function using `Dataset.map`. I want to do akin to `disable_tqdm=True` in the case of `transformers`. 
Is there something like that?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1627\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1626","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1626\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1626\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1626\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1626","id":773840368,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ0ODYxMDE4","number":1626,"title":"Fix dataset_dict.shuffle with single seed","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1608734016000,"updated_at":1609754404000,"closed_at":1609754403000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1626","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1626","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1626.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1626.patch"},"body":"Fix #1610 \r\n\r\nI added support for single integer used in `DatasetDict.shuffle`. Previously only a dictionary of seed was allowed.\r\nMoreover I added the missing `seed` parameter. 
Previously only `seeds` was allowed.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1626\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1625","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1625\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1625\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1625\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1625","id":773771596,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ0Nzk4MDM1","number":1625,"title":"Fixed bug in the shape property","user":{"login":"noaonoszko","id":47183162,"node_id":"MDQ6VXNlcjQ3MTgzMTYy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47183162?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/noaonoszko","html_url":"https:\/\/github.com\/noaonoszko","followers_url":"https:\/\/api.github.com\/users\/noaonoszko\/followers","following_url":"https:\/\/api.github.com\/users\/noaonoszko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/noaonoszko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/noaonoszko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/noaonoszko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/noaonoszko\/orgs","repos_url":"https:\/\/api.github.com\/users\/noaonoszko\/repos","events_url":"https:\/\/api.github.com\/users\/noaonoszko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/noaonoszko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1608730401000,"updated_at":1609629772000,"closed_at":1608732793000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1625","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1625","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1625.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1625.patch"},"body":"Fix to the bug reported in issue #1622. 
Just replaced `return tuple(self._indices.num_rows, self._data.num_columns)` by `return (self._indices.num_rows, self._data.num_columns)`.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1625\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1624","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1624\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1624\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1624\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1624","id":773669700,"node_id":"MDU6SXNzdWU3NzM2Njk3MDA=","number":1624,"title":"Cannot download ade_corpus_v2","user":{"login":"him1411","id":20259310,"node_id":"MDQ6VXNlcjIwMjU5MzEw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20259310?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/him1411","html_url":"https:\/\/github.com\/him1411","followers_url":"https:\/\/api.github.com\/users\/him1411\/followers","following_url":"https:\/\/api.github.com\/users\/him1411\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/him1411\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/him1411\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/him1411\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/him1411\/orgs","repos_url":"https:\/\/api.github.com\/users\/him1411\/repos","events_url":"https:\/\/api.github.com\/users\/him1411\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/him1411\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @him1411, the dataset you are trying to load has been added during the community sprint and has not been released yet. 
It will be available with the v2 of `datasets`.\r\nFor now, you should be able to load the datasets after installing the latest (master) version of `datasets` using pip:\r\n`pip install git+https:\/\/github.com\/huggingface\/datasets.git@master`","`ade_corpus_v2` was added recently, that's why it wasn't available yet.\r\n\r\nTo load it you can just update `datasets`\r\n```\r\npip install --upgrade datasets\r\n```\r\n\r\nand then you can load `ade_corpus_v2` with\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"ade_corpus_v2\", \"Ade_corpos_v2_drug_ade_relation\")\r\n```\r\n\r\n(looks like there is a typo in the configuration name, we'll fix it for the v2.0 release of `datasets` soon)"],"created_at":1608721094000,"updated_at":1627967334000,"closed_at":1627967334000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I tried this to get the dataset following this url : https:\/\/huggingface.co\/datasets\/ade_corpus_v2\r\n\r\nbut received this error : \r\n\r\n`Traceback (most recent call last):\r\n File \"\/opt\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 267, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"\/opt\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 308, in cached_path\r\n use_etag=download_config.use_etag,\r\n File \"\/opt\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 486, in get_from_cache\r\n raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\nFileNotFoundError: Couldn't find file at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.3\/datasets\/ade_corpus_v2\/ade_corpus_v2.py\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"\/opt\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 278, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"\/opt\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 308, in cached_path\r\n use_etag=download_config.use_etag,\r\n File \"\/opt\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 486, in get_from_cache\r\n raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\nFileNotFoundError: Couldn't find file at https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/datasets\/datasets\/ade_corpus_v2\/ade_corpus_v2.py\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/opt\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 589, in load_dataset\r\n path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True\r\n File \"\/opt\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 282, in prepare_module\r\n combined_path, github_file_path, file_path\r\nFileNotFoundError: Couldn't find file locally at ade_corpus_v2\/ade_corpus_v2.py, or remotely at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.3\/datasets\/ade_corpus_v2\/ade_corpus_v2.py or 
https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/datasets\/datasets\/ade_corpus_v2\/ade_corpus_v2.py`\r\n\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1624\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1623","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1623\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1623\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1623\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1623","id":772950710,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ0MTI2ODQ4","number":1623,"title":"Add CLIMATE-FEVER dataset","user":{"login":"tdiggelm","id":1658969,"node_id":"MDQ6VXNlcjE2NTg5Njk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1658969?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tdiggelm","html_url":"https:\/\/github.com\/tdiggelm","followers_url":"https:\/\/api.github.com\/users\/tdiggelm\/followers","following_url":"https:\/\/api.github.com\/users\/tdiggelm\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tdiggelm\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tdiggelm\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tdiggelm\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tdiggelm\/orgs","repos_url":"https:\/\/api.github.com\/users\/tdiggelm\/repos","events_url":"https:\/\/api.github.com\/users\/tdiggelm\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tdiggelm\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thank you @lhoestq for your comments! \ud83d\ude04 I added your suggested changes, ran the tests and regenerated `dataset_infos.json` and `dummy_data`."],"created_at":1608644045000,"updated_at":1608659633000,"closed_at":1608659633000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1623","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1623","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1623.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1623.patch"},"body":"As suggested by @SBrandeis , fresh PR that adds CLIMATE-FEVER. Replaces PR #1579.\r\n\r\n---\r\n\r\nA dataset adopting the FEVER methodology that consists of 1,535 real-world claims regarding climate-change collected on the internet. Each claim is accompanied by five manually annotated evidence sentences retrieved from the English Wikipedia that support, refute or do not give enough information to validate the claim totalling in 7,675 claim-evidence pairs. 
The dataset features challenging claims that relate multiple facets and disputed cases of claims where both supporting and refuting evidence are present.\r\n\r\nMore information can be found at:\r\n\r\n* Homepage: http:\/\/climatefever.ai\r\n* Paper: https:\/\/arxiv.org\/abs\/2012.00614","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1623\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1622","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1622\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1622\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1622\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1622","id":772940768,"node_id":"MDU6SXNzdWU3NzI5NDA3Njg=","number":1622,"title":"Can't call shape on the output of select()","user":{"login":"noaonoszko","id":47183162,"node_id":"MDQ6VXNlcjQ3MTgzMTYy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47183162?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/noaonoszko","html_url":"https:\/\/github.com\/noaonoszko","followers_url":"https:\/\/api.github.com\/users\/noaonoszko\/followers","following_url":"https:\/\/api.github.com\/users\/noaonoszko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/noaonoszko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/noaonoszko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/noaonoszko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/noaonoszko\/orgs","repos_url":"https:\/\/api.github.com\/users\/noaonoszko\/repos","events_url":"https:\/\/api.github.com\/users\/noaonoszko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/noaonoszko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Indeed that's a typo, do you want to open a PR to fix it?","Yes, created a PR"],"created_at":1608643120000,"updated_at":1608730633000,"closed_at":1608730632000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I get the error `TypeError: tuple expected at most 1 argument, got 2` when calling `shape` on the output of `select()`.\r\nIt's line 531 in shape in arrow_dataset.py that causes the problem:\r\n``return tuple(self._indices.num_rows, self._data.num_columns)``\r\nThis makes sense, since `tuple(num1, num2)` is not a valid call.\r\n \r\nFull code to reproduce:\r\n\r\n```python\r\ndataset = load_dataset(\"cnn_dailymail\", \"3.0.0\")\r\ntrain_set = dataset[\"train\"]\r\nt = train_set.select(range(10))\r\nprint(t.shape)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1622\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1621","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1621\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1621\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1621\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1621","id":772940417,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQ0MTE4MTAz","number":1621,"title":"updated dutch_social.py for loading jsonl (lines instead of list) files","user":{"login":"skyprince999","id":9033954,"node_id":"MDQ6VXNlcjkwMzM5NTQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9033954?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/skyprince999","html_url":"https:\/\/github.com\/skyprince999","followers_url":"https:\/\/api.github.com\/users\/skyprince999\/followers","following_url":"https:\/\/api.github.com\/users\/skyprince999\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/skyprince999\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/skyprince999\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/skyprince999\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/skyprince999\/orgs","repos_url":"https:\/\/api.github.com\/users\/skyprince999\/repos","events_url":"https:\/\/api.github.com\/users\/skyprince999\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/skyprince999\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1608643091000,"updated_at":1608724311000,"closed_at":1608724311000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1621","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1621","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1621.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1621.patch"},"body":"the data_loader is modified to load files on the fly. 
Earlier it was reading the entire file and then processing the records\r\n\r\nPls refer to previous PR #1321 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1621\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1620","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1620\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1620\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1620\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1620","id":772620056,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQzODUxNTY3","number":1620,"title":"Adding myPOS2017 dataset","user":{"login":"hungluumfc","id":69781878,"node_id":"MDQ6VXNlcjY5NzgxODc4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/69781878?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hungluumfc","html_url":"https:\/\/github.com\/hungluumfc","followers_url":"https:\/\/api.github.com\/users\/hungluumfc\/followers","following_url":"https:\/\/api.github.com\/users\/hungluumfc\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hungluumfc\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hungluumfc\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hungluumfc\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hungluumfc\/orgs","repos_url":"https:\/\/api.github.com\/users\/hungluumfc\/repos","events_url":"https:\/\/api.github.com\/users\/hungluumfc\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hungluumfc\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I've updated the code and Readme to reflect your comments.\r\nThank you very much,","looks like this PR includes changes about many other files than the ones for myPOS2017\r\n\r\nCould you open another branch and another PR please ?\r\n(or fix this branch)","Hi @hungluumfc ! Have you had a chance to fix this PR so that it only includes the changes for `mypos` ? 
\r\n\r\nFeel free to ping me if you have questions or if I can help :) "],"created_at":1608609895000,"updated_at":1611915817000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1620","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1620","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1620.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1620.patch"},"body":"myPOS Corpus (Myanmar Part-of-Speech Corpus) for Myanmar language NLP Research and Developments","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1620\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1619","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1619\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1619\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1619\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1619","id":772508558,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQzNzYyMTUw","number":1619,"title":"data loader for reading comprehension task","user":{"login":"songfeng","id":2062185,"node_id":"MDQ6VXNlcjIwNjIxODU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2062185?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/songfeng","html_url":"https:\/\/github.com\/songfeng","followers_url":"https:\/\/api.github.com\/users\/songfeng\/followers","following_url":"https:\/\/api.github.com\/users\/songfeng\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/songfeng\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/songfeng\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/songfeng\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/songfeng\/orgs","repos_url":"https:\/\/api.github.com\/users\/songfeng\/repos","events_url":"https:\/\/api.github.com\/users\/songfeng\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/songfeng\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thank you for all the feedback! I have updated the dummy data with a zip under 30KB, which needs to include at least one data instance from both document domain and dialog domain. Please let me know if it is still too big. Thanks!","Thank you again for the feedback! I am not too sure what the preferable style for data instance in readme, but still added my edits. 
Thanks!"],"created_at":1608590434000,"updated_at":1609151573000,"closed_at":1609151573000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1619","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1619","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1619.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1619.patch"},"body":"added doc2dial data loader and dummy data for reading comprehension task.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1619\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1618","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1618\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1618\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1618\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1618","id":772248730,"node_id":"MDU6SXNzdWU3NzIyNDg3MzA=","number":1618,"title":"Can't filter language:EN on https:\/\/huggingface.co\/datasets","user":{"login":"davidefiocco","id":4547987,"node_id":"MDQ6VXNlcjQ1NDc5ODc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4547987?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/davidefiocco","html_url":"https:\/\/github.com\/davidefiocco","followers_url":"https:\/\/api.github.com\/users\/davidefiocco\/followers","following_url":"https:\/\/api.github.com\/users\/davidefiocco\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/davidefiocco\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/davidefiocco\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/davidefiocco\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/davidefiocco\/orgs","repos_url":"https:\/\/api.github.com\/users\/davidefiocco\/repos","events_url":"https:\/\/api.github.com\/users\/davidefiocco\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/davidefiocco\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["cc'ing @mapmeld ","Full language list is now deployed to https:\/\/huggingface.co\/datasets ! Recommend close","Cool @mapmeld ! My 2 cents (for a next iteration), it would be cool to have a small search widget in the filter dropdown as you have a ton of languages now here! Closing this in the meantime."],"created_at":1608564203000,"updated_at":1608657420000,"closed_at":1608657369000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"When visiting https:\/\/huggingface.co\/datasets, I don't see an obvious way to filter only English datasets. This is unexpected for me, am I missing something? I'd expect English to be selectable in the language widget. 
This problem reproduced on Mozilla Firefox and MS Edge:\r\n\r\n![screenshot](https:\/\/user-images.githubusercontent.com\/4547987\/102792244-892e1f00-43a8-11eb-9e89-4826ca201a87.png)\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1618\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1617","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1617\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1617\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1617\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1617","id":772084764,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQzNDE5MTM5","number":1617,"title":"cifar10 initial commit","user":{"login":"czabo","id":75574105,"node_id":"MDQ6VXNlcjc1NTc0MTA1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/75574105?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/czabo","html_url":"https:\/\/github.com\/czabo","followers_url":"https:\/\/api.github.com\/users\/czabo\/followers","following_url":"https:\/\/api.github.com\/users\/czabo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/czabo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/czabo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/czabo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/czabo\/orgs","repos_url":"https:\/\/api.github.com\/users\/czabo\/repos","events_url":"https:\/\/api.github.com\/users\/czabo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/czabo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Yee a Computer Vision dataset!","Yep, the first one ! Thank @czabo "],"created_at":1608549530000,"updated_at":1608632285000,"closed_at":1608631888000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1617","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1617","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1617.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1617.patch"},"body":"CIFAR-10 dataset. 
Didn't add the tagging since there are no vision related tags.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1617\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1616","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1616\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1616\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1616\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1616","id":772074229,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQzNDEwNDc1","number":1616,"title":"added TurkishMovieSentiment dataset","user":{"login":"yavuzKomecoglu","id":5150963,"node_id":"MDQ6VXNlcjUxNTA5NjM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5150963?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yavuzKomecoglu","html_url":"https:\/\/github.com\/yavuzKomecoglu","followers_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/followers","following_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/orgs","repos_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/repos","events_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> I just generated the dataset_infos.json file\r\n> \r\n> Thanks for adding this one !\r\n\r\nThank you very much for your support."],"created_at":1608548596000,"updated_at":1608793721000,"closed_at":1608742206000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1616","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1616","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1616.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1616.patch"},"body":"This PR adds the **TurkishMovieSentiment: This dataset contains turkish movie reviews.**\r\n\r\n- **Homepage:** [https:\/\/www.kaggle.com\/mustfkeskin\/turkish-movie-sentiment-analysis-dataset\/tasks](https:\/\/www.kaggle.com\/mustfkeskin\/turkish-movie-sentiment-analysis-dataset\/tasks)\r\n- **Point of Contact:** [Mustafa Keskin](https:\/\/www.linkedin.com\/in\/mustfkeskin\/)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1616\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1615","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1615\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1615\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1615\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1615","id":771641088,"node_id":"MDU6SXNzdWU3NzE2NDEwODg=","number":1615,"title":"Bug: Can't download TriviaQA with `load_dataset` - custom `cache_dir`","user":{"login":"SapirWeissbuch","id":44585792,"node_id":"MDQ6VXNlcjQ0NTg1Nzky","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/44585792?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SapirWeissbuch","html_url":"https:\/\/github.com\/SapirWeissbuch","followers_url":"https:\/\/api.github.com\/users\/SapirWeissbuch\/followers","following_url":"https:\/\/api.github.com\/users\/SapirWeissbuch\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SapirWeissbuch\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SapirWeissbuch\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SapirWeissbuch\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SapirWeissbuch\/orgs","repos_url":"https:\/\/api.github.com\/users\/SapirWeissbuch\/repos","events_url":"https:\/\/api.github.com\/users\/SapirWeissbuch\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SapirWeissbuch\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @SapirWeissbuch,\r\nWhen you are saying it freezes, at that time it is unzipping the file from the zip file it downloaded. Since it's a very heavy file it'll take some time. It was taking ~11GB after unzipping when it started reading examples for me. Hope that helps!\r\n![Screenshot 2020-12-21 at 23 40 52](https:\/\/user-images.githubusercontent.com\/19718818\/102808355-3b380c00-43e6-11eb-81ab-c31019ae6322.png)\r\n","Hi @bhavitvyamalik \r\nThanks for the reply!\r\nActually I let it run for 30 minutes before I killed the process. In this time, 30GB were extracted (much more than 11GB), I checked the size of the destination directory.\r\n\r\nWhat version of Datasets are you using?\r\n","I'm using datasets version: 1.1.3. I think you should drop `cache_dir` and use only\r\n`dataset = datasets.load_dataset(\"trivia_qa\", \"rc\")`\r\n\r\nTried that on colab and it's working there too\r\n![image](https:\/\/user-images.githubusercontent.com\/19718818\/102814269-4db74300-43f0-11eb-8f26-ecfcf4632002.png)\r\n","Train, Validation, and Test splits contain 138384, 18669, and 17210 samples respectively. It takes some time to read the samples. Even in your colab notebook it was reading the samples before you killed the process. 
Let me know if it works now!","Hi, it works on colab but it still doesn't work on my computer, same problem as before - overly large and long extraction process.\r\nI have to use a custom 'cache_dir' because I don't have any space left in my home directory where it is defaulted, maybe this could be the issue?","I tried running this again - More details of the problem:\r\nCode:\r\n```\r\ndatasets.load_dataset(\"trivia_qa\", \"rc\", cache_dir=\"\/path\/to\/cache\")\r\n```\r\n\r\nThe output:\r\n```\r\nDownloading and preparing dataset trivia_qa\/rc (download: 2.48 GiB, generated: 14.92 GiB, post-processed: Unknown size, total: 17.40 GiB) to path\/to\/cache\/trivia_qa\/rc\/1.1.0\/e734e28133f4d9a353af322aa52b9f266f6f27cbf2f072690a1694e577546b0d... \r\nDownloading: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2.67G\/2.67G [03:38<00:00, 12.2MB\/s]\r\n\r\n```\r\nThe process continues (no progress bar is visible).\r\nI tried `du -sh .` in `path\/to\/cache`, and the size keeps increasing, reached 35G before I killed the process.\r\n\r\nGoogle Colab with custom `cache_dir` has same issue.\r\nhttps:\/\/colab.research.google.com\/drive\/1nn1Lw02GhfGFylzbS2j6yksGjPo7kkN-?usp=sharing#scrollTo=2G2O0AeNIXan","1) You can clear the huggingface folder in your `.cache` directory to use default directory for datasets. Speed of extraction and loading of samples depends a lot on your machine's configurations too.\r\n\r\n2) I tried on colab `dataset = datasets.load_dataset(\"trivia_qa\", \"rc\", cache_dir = \".\/datasets\")`. After memory usage reached around 42GB (starting from 32GB used already), the dataset was loaded in the memory. Even Your colab notebook shows \r\n![image](https:\/\/user-images.githubusercontent.com\/19718818\/102852229-c7c4e780-4443-11eb-91d6-bf21024358a3.png)\r\nwhich means it's loaded now.","Facing the same issue.\r\nI am able to download datasets without `cache_dir`, however, when I specify the `cache_dir`, the process hangs indefinitely after partial download. \r\nTried for `data = load_dataset(\"cnn_dailymail\", \"3.0.0\")`","Hi @ashutoshml,\r\nI tried this and it worked for me:\r\n`data = load_dataset(\"cnn_dailymail\", \"3.0.0\", cache_dir=\".\/dummy\")`\r\n\r\nI'm using datasets==1.8.0. It took around 3-4 mins for dataset to unpack and start loading examples.","Ok. I waited for 20-30 mins, and it still is stuck.\r\nI am using datasets==1.8.0.\r\n\r\nIs there anyway to check what is happening? 
like a` --verbose` flag?\r\n\r\n![Screenshot 2021-06-25 at 6 37 43 PM](https:\/\/user-images.githubusercontent.com\/2375919\/123429653-cdfb7280-d5e4-11eb-9fa7-ff295800cc86.png)\r\n"],"created_at":1608485258000,"updated_at":1624626693000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hello,\r\nI'm having issue downloading TriviaQA dataset with `load_dataset`.\r\n\r\n## Environment info\r\n- `datasets` version: 1.1.3\r\n- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1\r\n- Python version: 3.7.3\r\n\r\n## The code I'm running:\r\n```python\r\nimport datasets\r\ndataset = datasets.load_dataset(\"trivia_qa\", \"rc\", cache_dir = \".\/datasets\")\r\n```\r\n\r\n## The output:\r\n1. Download begins:\r\n```\r\nDownloading and preparing dataset trivia_qa\/rc (download: 2.48 GiB, generated: 14.92 GiB, post-processed: Unknown size, total: 17.40 GiB) to \/cs\/labs\/gabis\/sapirweissbuch\/tr\r\nivia_qa\/rc\/1.1.0\/e734e28133f4d9a353af322aa52b9f266f6f27cbf2f072690a1694e577546b0d... \r\nDownloading: 17%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2589 | 446M\/2.67G [00:37<04:45, 7.77MB\/s]\r\n```\r\n2. 100% is reached\r\n3. It got stuck here for about an hour, and added additional 30G of data to \".\/datasets\" directory. I killed the process eventually.\r\n\r\nA similar issue can be observed in Google Colab:\r\n\r\nhttps:\/\/colab.research.google.com\/drive\/1nn1Lw02GhfGFylzbS2j6yksGjPo7kkN-?usp=sharing\r\n\r\n## Expected behaviour:\r\nThe dataset \"TriviaQA\" should be successfully downloaded.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1615\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1613","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1613\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1613\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1613\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1613","id":771577050,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQzMDYwNzEx","number":1613,"title":"Add 
id_clickbait","user":{"login":"cahya-wirawan","id":7669893,"node_id":"MDQ6VXNlcjc2Njk4OTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7669893?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cahya-wirawan","html_url":"https:\/\/github.com\/cahya-wirawan","followers_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/followers","following_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/orgs","repos_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/repos","events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1608467089000,"updated_at":1608659127000,"closed_at":1608659127000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1613","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1613","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1613.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1613.patch"},"body":"This is the CLICK-ID dataset, a collection of annotated clickbait Indonesian news headlines that was collected from 12 local online news ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1613\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1612","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1612\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1612\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1612\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1612","id":771558160,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQzMDQ3NjQ1","number":1612,"title":"Adding wiki asp dataset as new 
PR","user":{"login":"katnoria","id":7674948,"node_id":"MDQ6VXNlcjc2NzQ5NDg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7674948?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/katnoria","html_url":"https:\/\/github.com\/katnoria","followers_url":"https:\/\/api.github.com\/users\/katnoria\/followers","following_url":"https:\/\/api.github.com\/users\/katnoria\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/katnoria\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/katnoria\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/katnoria\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/katnoria\/orgs","repos_url":"https:\/\/api.github.com\/users\/katnoria\/repos","events_url":"https:\/\/api.github.com\/users\/katnoria\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/katnoria\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1608459908000,"updated_at":1608560013000,"closed_at":1608560013000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1612","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1612","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1612.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1612.patch"},"body":"Hi @lhoestq, Adding wiki asp as new branch because #1539 has other commits. This version has dummy data for each domain <20\/30KB.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1612\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1611","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1611\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1611\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1611\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1611","id":771486456,"node_id":"MDU6SXNzdWU3NzE0ODY0NTY=","number":1611,"title":"shuffle with torch generator 
","user":{"login":"rabeehkarimimahabadi","id":73364383,"node_id":"MDQ6VXNlcjczMzY0Mzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/73364383?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi","html_url":"https:\/\/github.com\/rabeehkarimimahabadi","followers_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/followers","following_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/orgs","repos_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/repos","events_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Is there a way one can convert the two generator? not sure overall what alternatives I could have to shuffle the datasets with a torch generator, thanks ","@lhoestq let me please expalin in more details, maybe you could help me suggesting an alternative to solve the issue for now, I have multiple large datasets using huggingface library, then I need to define a distributed sampler on top of it, for this I need to shard the datasets and give each shard to each core, but before sharding I need to shuffle the dataset, if you are familiar with distributed sampler in pytorch, this needs to be done based on seed+epoch generator to make it consistent across the cores they do it through defining a torch generator, I was wondering if you could tell me how I can shuffle the data for now, I am unfortunately blocked by this and have a limited time left, and I greatly appreciate your help on this. thanks ","@lhoestq Is there a way I could shuffle the datasets from this library with a custom defined shuffle function? thanks for your help on this. ","Right now the shuffle method only accepts the `seed` (optional int) or `generator` (optional `np.random.Generator`) parameters.\r\n\r\nHere is a suggestion to shuffle the data using your own shuffle method using `select`.\r\n`select` can be used to re-order the dataset samples or simply pick a few ones if you want.\r\nIt's what is used under the hood when you call `dataset.shuffle`.\r\n\r\nTo use `select` you must have the list of re-ordered indices of your samples.\r\n\r\nLet's say you have a `shuffle` methods that you want to use. Then you can first build your shuffled list of indices:\r\n```python\r\nshuffled_indices = shuffle(range(len(dataset)))\r\n```\r\n\r\nThen you can shuffle your dataset using the shuffled indices with \r\n```python\r\nshuffled_dataset = dataset.select(shuffled_indices)\r\n```\r\n\r\nHope that helps","thank you @lhoestq thank you very much for responding to my question, this greatly helped me and remove the blocking for continuing my work, thanks. 
","@lhoestq could you confirm the method proposed does not bring the whole data into memory? thanks ","Yes the dataset is not loaded into memory","great. thanks a lot."],"created_at":1608425834000,"updated_at":1608574339000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI need to shuffle mutliple large datasets with `generator = torch.Generator()` for a distributed sampler which needs to make sure datasets are consistent across different cores, for this, this is really necessary for me to use torch generator, based on documentation this generator is not supported with datasets, I really need to make shuffle work with this generator and I was wondering what I can do about this issue, thanks for your help \r\n\r\n@lhoestq ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1611\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1610","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1610\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1610\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1610\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1610","id":771453599,"node_id":"MDU6SXNzdWU3NzE0NTM1OTk=","number":1610,"title":"shuffle does not accept seed ","user":{"login":"rabeehk","id":6278280,"node_id":"MDQ6VXNlcjYyNzgyODA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6278280?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rabeehk","html_url":"https:\/\/github.com\/rabeehk","followers_url":"https:\/\/api.github.com\/users\/rabeehk\/followers","following_url":"https:\/\/api.github.com\/users\/rabeehk\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rabeehk\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rabeehk\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rabeehk\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rabeehk\/orgs","repos_url":"https:\/\/api.github.com\/users\/rabeehk\/repos","events_url":"https:\/\/api.github.com\/users\/rabeehk\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rabeehk\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi, did you check the doc on `shuffle`?\r\nhttps:\/\/huggingface.co\/docs\/datasets\/package_reference\/main_classes.html?datasets.Dataset.shuffle#datasets.Dataset.shuffle","Hi Thomas\r\nthanks for reponse, yes, I did checked it, but this does not work for me please see \r\n\r\n```\r\n(internship) rkarimi@italix17:\/idiap\/user\/rkarimi\/dev$ python \r\nPython 3.7.9 (default, Aug 31 2020, 12:42:55) \r\n[GCC 7.3.0] :: Anaconda, Inc. 
on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import datasets \r\n2020-12-20 01:48:50.766004: W tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\r\n2020-12-20 01:48:50.766029: I tensorflow\/stream_executor\/cuda\/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\n>>> data = datasets.load_dataset(\"scitail\", \"snli_format\")\r\ncahce dir \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\r\ncahce dir \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\r\nReusing dataset scitail (\/idiap\/temp\/rkarimi\/cache_home_1\/datasets\/scitail\/snli_format\/1.1.0\/fd8ccdfc3134ce86eb4ef10ba7f21ee2a125c946e26bb1dd3625fe74f48d3b90)\r\n>>> data.shuffle(seed=2)\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nTypeError: shuffle() got an unexpected keyword argument 'seed'\r\n\r\n```\r\n\r\ndatasets version\r\n`datasets 1.1.2 <pip>\r\n`\r\n","Thanks for reporting ! \r\n\r\nIndeed it looks like an issue with `suffle` on `DatasetDict`. We're going to fix that.\r\nIn the meantime you can shuffle each split (train, validation, test) separately:\r\n```python\r\nshuffled_train_dataset = data[\"train\"].shuffle(seed=42)\r\n```\r\n"],"created_at":1608411579000,"updated_at":1609754403000,"closed_at":1609754403000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI need to shuffle the dataset, but this needs to be based on epoch+seed to be consistent across the cores, when I pass seed to shuffle, this does not accept seed, could you assist me with this? thanks @lhoestq\r\n ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1610\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1609","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1609\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1609\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1609\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1609","id":771421881,"node_id":"MDU6SXNzdWU3NzE0MjE4ODE=","number":1609,"title":"Not able to use 'jigsaw_toxicity_pred' 
dataset","user":{"login":"jassimran","id":7424133,"node_id":"MDQ6VXNlcjc0MjQxMzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7424133?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jassimran","html_url":"https:\/\/github.com\/jassimran","followers_url":"https:\/\/api.github.com\/users\/jassimran\/followers","following_url":"https:\/\/api.github.com\/users\/jassimran\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jassimran\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jassimran\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jassimran\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jassimran\/orgs","repos_url":"https:\/\/api.github.com\/users\/jassimran\/repos","events_url":"https:\/\/api.github.com\/users\/jassimran\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jassimran\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @jassimran,\r\nThe `jigsaw_toxicity_pred` dataset has not been released yet, it will be available with version 2 of `datasets`, coming soon.\r\nYou can still access it by installing the master (unreleased) version of datasets directly :\r\n`pip install git+https:\/\/github.com\/huggingface\/datasets.git@master`\r\nPlease let me know if this helps","Thanks.That works for now."],"created_at":1608399348000,"updated_at":1608655344000,"closed_at":1608655343000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":" When trying to use jigsaw_toxicity_pred dataset, like this in a [colab](https:\/\/colab.research.google.com\/drive\/1LwO2A5M2X5dvhkAFYE4D2CUT3WUdWnkn?usp=sharing):\r\n```\r\nfrom datasets import list_datasets, list_metrics, load_dataset, load_metric\r\n\r\nds = load_dataset(\"jigsaw_toxicity_pred\")\r\n```\r\n \r\nI see below error:\r\n\r\n> FileNotFoundError: Couldn't find file at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.3\/datasets\/jigsaw_toxicity_pred\/jigsaw_toxicity_pred.py\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nFileNotFoundError Traceback (most recent call last)\r\nFileNotFoundError: Couldn't find file at https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/datasets\/datasets\/jigsaw_toxicity_pred\/jigsaw_toxicity_pred.py\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nFileNotFoundError Traceback (most recent call last)\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs)\r\n 280 raise FileNotFoundError(\r\n 281 \"Couldn't find file locally at {}, or remotely at {} or {}\".format(\r\n--> 282 combined_path, github_file_path, file_path\r\n 283 )\r\n 284 )\r\n\r\nFileNotFoundError: Couldn't find file locally at jigsaw_toxicity_pred\/jigsaw_toxicity_pred.py, or remotely at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.3\/datasets\/jigsaw_toxicity_pred\/jigsaw_toxicity_pred.py or https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/datasets\/datasets\/jigsaw_toxicity_pred\/jigsaw_toxicity_pred.py","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1609\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1608","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1608\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1608\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1608\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1608","id":771329434,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQyODkyMTQ4","number":1608,"title":"adding ted_talks_iwslt","user":{"login":"skyprince999","id":9033954,"node_id":"MDQ6VXNlcjkwMzM5NTQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9033954?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/skyprince999","html_url":"https:\/\/github.com\/skyprince999","followers_url":"https:\/\/api.github.com\/users\/skyprince999\/followers","following_url":"https:\/\/api.github.com\/users\/skyprince999\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/skyprince999\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/skyprince999\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/skyprince999\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/skyprince999\/orgs","repos_url":"https:\/\/api.github.com\/users\/skyprince999\/repos","events_url":"https:\/\/api.github.com\/users\/skyprince999\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/skyprince999\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Closing this with reference to the new approach #1676 "],"created_at":1608363401000,"updated_at":1609602252000,"closed_at":1609602251000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1608","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1608","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1608.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1608.patch"},"body":"UPDATE2: (2nd Jan) Wrote a long writeup on the slack channel. I don't think this approach is correct. Basically this created language pairs (109*108) \r\nRunning the `pytest `went for more than 40+ hours and it was still running! \r\nSo working on a different approach, such that the number of configs = number of languages. Will make a new pull request with that. 
\r\n\r\nUPDATE: This requires manual download dataset\r\n\r\nThis is a draft version ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1608\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1607","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1607\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1607\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1607\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1607","id":771325852,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQyODg5OTky","number":1607,"title":"modified tweets hate speech detection","user":{"login":"darshan-gandhi","id":44197177,"node_id":"MDQ6VXNlcjQ0MTk3MTc3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/44197177?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/darshan-gandhi","html_url":"https:\/\/github.com\/darshan-gandhi","followers_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/followers","following_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/orgs","repos_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/repos","events_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1608362020000,"updated_at":1608566928000,"closed_at":1608566928000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1607","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1607","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1607.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1607.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1607\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1606","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1606\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1606\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1606\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1606","id":771116455,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQyNzMwNTEw","number":1606,"title":"added Semantic Scholar Open Research 
Corpus","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I think we\u2019ll need complete dataset_infos.json to create YAML tags. I ran the script again with 100 files after going through your comments and it was occupying ~16 GB space. So in total it should take ~960GB and I don\u2019t have this much memory available with me. Also, I'll have to download the whole dataset for generating dummy data, right?"],"created_at":1608319284000,"updated_at":1612344659000,"closed_at":1612344659000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1606","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1606","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1606.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1606.patch"},"body":"I picked up this dataset [Semantic Scholar Open Research Corpus](https:\/\/allenai.org\/data\/s2orc) but it contains 6000 files to be downloaded. I tried the current code with 100 files and it worked fine (took ~15GB space). For 6000 files it would occupy ~900GB space which I don\u2019t have. 
Can someone from the HF team with that much of disk space help me with generate dataset_infos and dummy_data?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1606\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1605","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1605\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1605\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1605\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1605","id":770979620,"node_id":"MDU6SXNzdWU3NzA5Nzk2MjA=","number":1605,"title":"Navigation version breaking","user":{"login":"mttk","id":3007947,"node_id":"MDQ6VXNlcjMwMDc5NDc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3007947?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mttk","html_url":"https:\/\/github.com\/mttk","followers_url":"https:\/\/api.github.com\/users\/mttk\/followers","following_url":"https:\/\/api.github.com\/users\/mttk\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mttk\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mttk\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mttk\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mttk\/orgs","repos_url":"https:\/\/api.github.com\/users\/mttk\/repos","events_url":"https:\/\/api.github.com\/users\/mttk\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mttk\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1608305784000,"updated_at":1608306112000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi, \r\n\r\nwhen navigating docs (Chrome, Ubuntu) (e.g. on this page: https:\/\/huggingface.co\/docs\/datasets\/loading_metrics.html#using-a-custom-metric-script) the version control dropdown has the wrong string displayed as the current version: \r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/3007947\/102632187-02cad080-414f-11eb-813b-28f3c8d80def.png)\r\n\r\n**Edit:** this actually happens _only_ if you open a link to a concrete subsection.\r\n\r\nIMO, the best way to fix this without getting too deep into the intricacies of retrieving version numbers from the URL would be to change [this](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/docs\/source\/_static\/js\/custom.js#L112) line to:\r\n```\r\nlet label = (version in versionMapping) ? version : stableVersion\r\n```\r\nwhich delegates the check to the (already maintained) keys of the version mapping dictionary & should be more robust. There's a similar ternary expression [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/docs\/source\/_static\/js\/custom.js#L97) which should also fail in this case.\r\n\r\nI'd also suggest swapping this [block](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/docs\/source\/_static\/js\/custom.js#L80-L90) to `string.contains(version) for version in versionMapping` which might be more robust. I'd add a PR myself but I'm by no means competent in JS :)\r\n\r\nI also have a side question wrt. 
docs versioning: I'm trying to make docs for a project which are versioned alike to your dropdown versioning. I was wondering how do you handle storage of multiple doc versions on your server? Do you update what `https:\/\/huggingface.co\/docs\/datasets` points to for every stable release & manually create new folders for each released version?\r\nSo far I'm building & publishing (scping) the docs to the server with a github action which works well for a single version, but would ideally need to reorder the public files triggered on a new release.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1605\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1604","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1604\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1604\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1604\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1604","id":770862112,"node_id":"MDU6SXNzdWU3NzA4NjIxMTI=","number":1604,"title":"Add tests for the download functions ?","user":{"login":"SBrandeis","id":33657802,"node_id":"MDQ6VXNlcjMzNjU3ODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33657802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SBrandeis","html_url":"https:\/\/github.com\/SBrandeis","followers_url":"https:\/\/api.github.com\/users\/SBrandeis\/followers","following_url":"https:\/\/api.github.com\/users\/SBrandeis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SBrandeis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SBrandeis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SBrandeis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SBrandeis\/orgs","repos_url":"https:\/\/api.github.com\/users\/SBrandeis\/repos","events_url":"https:\/\/api.github.com\/users\/SBrandeis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SBrandeis\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1608295765000,"updated_at":1608295765000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"AFAIK the download functions in `DownloadManager` are not tested yet. 
It could be good to add some to ensure behavior is as expected.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1604\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1603","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1603\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1603\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1603\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1603","id":770857221,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQyNTIwNDkx","number":1603,"title":"Add retries to HTTP requests","user":{"login":"SBrandeis","id":33657802,"node_id":"MDQ6VXNlcjMzNjU3ODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33657802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SBrandeis","html_url":"https:\/\/github.com\/SBrandeis","followers_url":"https:\/\/api.github.com\/users\/SBrandeis\/followers","following_url":"https:\/\/api.github.com\/users\/SBrandeis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SBrandeis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SBrandeis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SBrandeis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SBrandeis\/orgs","repos_url":"https:\/\/api.github.com\/users\/SBrandeis\/repos","events_url":"https:\/\/api.github.com\/users\/SBrandeis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SBrandeis\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging this one then :) "],"created_at":1608295291000,"updated_at":1608651247000,"closed_at":1608651247000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1603","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1603","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1603.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1603.patch"},"body":"## What does this PR do ?\r\n\r\nAdding retries to HTTP GET & HEAD requests, when they fail with a `ConnectTimeout` exception.\r\n\r\nThe \"canonical\" way to do this is to use [urllib's Retry class](https:\/\/urllib3.readthedocs.io\/en\/latest\/reference\/urllib3.util.html#urllib3.util.Retry) and wrap it in a [HttpAdapter](https:\/\/requests.readthedocs.io\/en\/master\/api\/#requests.adapters.HTTPAdapter). Seems a bit overkill to me, plus it forces us to use the `requests.Session` object. I prefer this simpler implementation. 
I'm open to remarks and suggestions @lhoestq @yjernite \r\n\r\nFixes #1102 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1603\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1602","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1602\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1602\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1602\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1602","id":770841810,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQyNTA4NTM4","number":1602,"title":"second update of id_newspapers_2018","user":{"login":"cahya-wirawan","id":7669893,"node_id":"MDQ6VXNlcjc2Njk4OTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7669893?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cahya-wirawan","html_url":"https:\/\/github.com\/cahya-wirawan","followers_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/followers","following_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/orgs","repos_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/repos","events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1608293797000,"updated_at":1608633675000,"closed_at":1608633674000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1602","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1602","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1602.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1602.patch"},"body":"The feature \"url\" is currently set wrongly to data[\"date\"], this PR fix it to data[\"url\"].\r\nI added also an additional POC.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1602\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1601","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1601\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1601\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1601\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1601","id":770758914,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQyNDQzNDE3","number":1601,"title":"second update of the 
id_newspapers_2018","user":{"login":"cahya-wirawan","id":7669893,"node_id":"MDQ6VXNlcjc2Njk4OTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7669893?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cahya-wirawan","html_url":"https:\/\/github.com\/cahya-wirawan","followers_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/followers","following_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/orgs","repos_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/repos","events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I close this PR, since it based on 1 week old repo. And I will create a new one"],"created_at":1608286220000,"updated_at":1608293731000,"closed_at":1608293731000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1601","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1601","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1601.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1601.patch"},"body":"The feature \"url\" is currently set wrongly to data[\"date\"], this PR fix it to data[\"url\"].\r\nI added also an additional POC.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1601\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1600","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1600\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1600\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1600\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1600","id":770582960,"node_id":"MDU6SXNzdWU3NzA1ODI5NjA=","number":1600,"title":"AttributeError: 'DatasetDict' object has no attribute 
'train_test_split'","user":{"login":"david-waterworth","id":5028974,"node_id":"MDQ6VXNlcjUwMjg5NzQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5028974?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/david-waterworth","html_url":"https:\/\/github.com\/david-waterworth","followers_url":"https:\/\/api.github.com\/users\/david-waterworth\/followers","following_url":"https:\/\/api.github.com\/users\/david-waterworth\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/david-waterworth\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/david-waterworth\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/david-waterworth\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/david-waterworth\/orgs","repos_url":"https:\/\/api.github.com\/users\/david-waterworth\/repos","events_url":"https:\/\/api.github.com\/users\/david-waterworth\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/david-waterworth\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892912,"node_id":"MDU6TGFiZWwxOTM1ODkyOTEy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/question","name":"question","color":"d876e3","default":true,"description":"Further information is requested"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @david-waterworth!\r\n\r\nAs indicated in the error message, `load_dataset(\"csv\")` returns a `DatasetDict` object, which is mapping of `str` to `Dataset` objects. I believe in this case the behavior is to return a `train` split with all the data.\r\n`train_test_split` is a method of the `Dataset` object, so you will need to do something like this:\r\n```python\r\ndataset_dict = load_dataset(`'csv', data_files='data.txt')\r\ndataset = dataset_dict['split name, eg train']\r\ndataset.train_test_split(test_size=0.1)\r\n```\r\n\r\nPlease let me know if this helps. \ud83d\ude42 ","Thanks, that's working - the same issue also tripped me up with training. \r\n\r\nI also agree https:\/\/github.com\/huggingface\/datasets\/issues\/767 would be a useful addition. 
","Closing this now","> ```python\r\n> dataset_dict = load_dataset(`'csv', data_files='data.txt')\r\n> dataset = dataset_dict['split name, eg train']\r\n> dataset.train_test_split(test_size=0.1)\r\n> ```\r\n\r\nI am getting error like\r\nKeyError: 'split name, eg train'\r\nCould you please tell me how to solve this?","dataset = load_dataset('csv', data_files=['files\/datasets\/dataset.csv'])\r\ndataset = dataset['train']\r\ndataset = dataset.train_test_split(test_size=0.1)"],"created_at":1608269830000,"updated_at":1623756346000,"closed_at":1608536338000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"The following code fails with \"'DatasetDict' object has no attribute 'train_test_split'\" - am I doing something wrong?\r\n```\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', data_files='data.txt')\r\ndataset = dataset.train_test_split(test_size=0.1)\r\n```\r\n\r\n> AttributeError: 'DatasetDict' object has no attribute 'train_test_split'","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1600\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1599","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1599\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1599\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1599\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1599","id":770431389,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQyMTgwMzI4","number":1599,"title":"add Korean Sarcasm Dataset","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1608245396000,"updated_at":1631897672000,"closed_at":1608744359000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1599","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1599","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1599.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1599.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1599\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1598","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1598\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1598\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1598\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1598","id":770332440,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQyMDk2NTM4","number":1598,"title":"made suggested changes in fake-news-english","user":{"login":"MisbahKhan789","id":15351802,"node_id":"MDQ6VXNlcjE1MzUxODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15351802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/MisbahKhan789","html_url":"https:\/\/github.com\/MisbahKhan789","followers_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/followers","following_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/orgs","repos_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/repos","events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1608235589000,"updated_at":1608284638000,"closed_at":1608284637000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1598","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1598","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1598.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1598.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1598\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1597","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1597\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1597\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1597\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1597","id":770276140,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQyMDUwMTc5","number":1597,"title":"adding 
hate-speech-and-offensive-language","user":{"login":"MisbahKhan789","id":15351802,"node_id":"MDQ6VXNlcjE1MzUxODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15351802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/MisbahKhan789","html_url":"https:\/\/github.com\/MisbahKhan789","followers_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/followers","following_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/orgs","repos_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/repos","events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["made suggested changes and opened PR https:\/\/github.com\/huggingface\/datasets\/pull\/1628"],"created_at":1608230115000,"updated_at":1608766037000,"closed_at":1608766036000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1597","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1597","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1597.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1597.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1597\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1596","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1596\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1596\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1596\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1596","id":770260531,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQyMDM3NTU0","number":1596,"title":"made suggested changes to 
hate-speech-and-offensive-language","user":{"login":"MisbahKhan789","id":15351802,"node_id":"MDQ6VXNlcjE1MzUxODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15351802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/MisbahKhan789","html_url":"https:\/\/github.com\/MisbahKhan789","followers_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/followers","following_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/orgs","repos_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/repos","events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1608228566000,"updated_at":1608230162000,"closed_at":1608230153000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1596","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1596","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1596.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1596.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1596\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1595","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1595\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1595\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1595\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1595","id":770153693,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQxOTUwNDk4","number":1595,"title":"Logiqa en","user":{"login":"aclifton314","id":53267795,"node_id":"MDQ6VXNlcjUzMjY3Nzk1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/53267795?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/aclifton314","html_url":"https:\/\/github.com\/aclifton314","followers_url":"https:\/\/api.github.com\/users\/aclifton314\/followers","following_url":"https:\/\/api.github.com\/users\/aclifton314\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/aclifton314\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/aclifton314\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/aclifton314\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/aclifton314\/orgs","repos_url":"https:\/\/api.github.com\/users\/aclifton314\/repos","events_url":"https:\/\/api.github.com\/users\/aclifton314\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/aclifton314\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I'm getting an error when I try to create 
the dummy data:\r\n```python\r\naclifton@pop-os:~\/data\/hf_datasets_sprint\/datasets$ python datasets-cli dummy_data .\/datasets\/logiqa_en\/ --auto_generate \r\n2021-01-07 10:50:12.024791: W tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory\r\n2021-01-07 10:50:12.024814: I tensorflow\/stream_executor\/cuda\/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\nUsing custom data configuration default\r\nCouldn't generate dummy file 'datasets\/dummy\/1.1.0\/dummy_data\/master.zip\/LogiQA-dataset-master\/README.md'. Ignore that if this file is not useful for dummy data.\r\nDummy data generation done but dummy data test failed since splits ['train', 'test', 'validation'] have 0 examples for config 'default''.\r\nAutomatic dummy data generation failed for some configs of '.\/datasets\/logiqa_en\/'\r\n```","Hi ! Sorry for the delay\r\n\r\nTo fix your issue for the dummy data you must increase the number of lines that will be kept to generate the dummy files. By default it's 5, and as you need at least 8 lines here to yield one example you must increase this.\r\n\r\nYou can increase the number of lines to 32 for example by doing\r\n```\r\ndatasets-cli dummy_data .\/datasets\/logica_en --auto_generate --n_lines 32\r\n```\r\n\r\nAlso it looks like there are changes about other datasets in this PR (imppres). Can you fix that ? You may need to create another branch and another PR.","To fix the branch issue, I went ahead and made a backup of the dataset then deleted my local copy of my fork of `datasets`. I then followed the [detailed guide](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md) from the beginning to reclone the fork and start a new branch. \r\n\r\nHowever, when it came time to create the dummy data I got the following error:\r\n```python\r\naclifton@pop-os:~\/data\/hf_datasets_sprint\/datasets$ datasets-cli dummy_data .\/datasets\/logiqa_en --auto_generate --n_lines 32\r\n2021-02-03 11:23:23.145885: W tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory\r\n2021-02-03 11:23:23.145914: I tensorflow\/stream_executor\/cuda\/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\nUsing custom data configuration default\r\nCouldn't generate dummy file 'datasets\/logiqa_en\/dummy\/1.1.0\/dummy_data\/master.zip\/LogiQA-dataset-master\/README.md'. 
Ignore that if this file is not useful for dummy data.\r\nTraceback (most recent call last):\r\n File \"\/home\/aclifton\/anaconda3\/bin\/datasets-cli\", line 36, in <module>\r\n service.run()\r\n File \"\/home\/aclifton\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/commands\/dummy_data.py\", line 317, in run\r\n keep_uncompressed=self._keep_uncompressed,\r\n File \"\/home\/aclifton\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/commands\/dummy_data.py\", line 355, in _autogenerate_dummy_data\r\n dataset_builder._prepare_split(split_generator)\r\n File \"\/home\/aclifton\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 905, in _prepare_split\r\n example = self.info.features.encode_example(record)\r\n File \"\/home\/aclifton\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/features.py\", line 799, in encode_example\r\n return encode_nested_example(self, example)\r\n File \"\/home\/aclifton\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/features.py\", line 710, in encode_nested_example\r\n (k, encode_nested_example(sub_schema, sub_obj)) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)\r\n File \"\/home\/aclifton\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/features.py\", line 710, in <genexpr>\r\n (k, encode_nested_example(sub_schema, sub_obj)) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)\r\n File \"\/home\/aclifton\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/features.py\", line 737, in encode_nested_example\r\n return schema.encode_example(obj)\r\n File \"\/home\/aclifton\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/features.py\", line 522, in encode_example\r\n example_data = self.str2int(example_data)\r\n File \"\/home\/aclifton\/anaconda3\/lib\/python3.7\/site-packages\/datasets\/features.py\", line 481, in str2int\r\n output.append(self._str2int[str(value)])\r\nKeyError: \"Some Cantonese don't like chili, so some southerners don't like chili.\"\r\n```","Hi ! The error happens when the script is verifying that the generated dummy data work fine with the dataset script.\r\nApparently it fails because the text `\"Some Cantonese don't like chili, so some southerners don't like chili.\"` was given in a field that is a ClassLabel feature (probably the `answer` field), while it actually expects \"a\", \"b\", \"c\" or \"d\". Can you fix the script so that it returns the expected labels for this field instead of the text ?\r\n\r\nAlso it would be awesome to rename this field `answerKey` instead of `answer` to have the same column names as the other multiple-choice-QA datasets in the library :) ","Ok getting closer! I got the dummy data to work. 
However I am now getting the following error:\r\n```python\r\naclifton@pop-os:~\/data\/hf_datasets_sprint\/datasets$ RUN_SLOW=1 pytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_logiqa_en\r\n===================================================================== test session starts ======================================================================\r\nplatform linux -- Python 3.7.6, pytest-5.3.5, py-1.8.1, pluggy-0.13.1\r\nrootdir: \/home\/aclifton\/data\/hf_datasets_sprint\/datasets\r\nplugins: astropy-header-0.1.2, xdist-2.1.0, doctestplus-0.5.0, forked-1.3.0, hypothesis-5.5.4, arraydiff-0.3, remotedata-0.3.2, openfiles-0.4.0\r\ncollected 0 items \/ 1 error \r\n\r\n============================================================================ ERRORS ============================================================================\r\n________________________________________________________ ERROR collecting tests\/test_dataset_common.py _________________________________________________________\r\nImportError while importing test module '\/home\/aclifton\/data\/hf_datasets_sprint\/datasets\/tests\/test_dataset_common.py'.\r\nHint: make sure your test modules\/packages have valid Python names.\r\nTraceback:\r\ntests\/test_dataset_common.py:42: in <module>\r\n from datasets.packaged_modules import _PACKAGED_DATASETS_MODULES\r\nE ModuleNotFoundError: No module named 'datasets.packaged_modules'\r\n----------------------------------------------------------------------- Captured stderr ------------------------------------------------------------------------\r\n2021-02-10 11:06:14.345510: W tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory\r\n2021-02-10 11:06:14.345551: I tensorflow\/stream_executor\/cuda\/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\n======================================================================= warnings summary =======================================================================\r\n\/home\/aclifton\/anaconda3\/lib\/python3.7\/site-packages\/tensorflow\/python\/autograph\/utils\/testing.py:21\r\n \/home\/aclifton\/anaconda3\/lib\/python3.7\/site-packages\/tensorflow\/python\/autograph\/utils\/testing.py:21: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses\r\n import imp\r\n\r\n\/home\/aclifton\/anaconda3\/lib\/python3.7\/site-packages\/apache_beam\/typehints\/typehints.py:693\r\n \/home\/aclifton\/anaconda3\/lib\/python3.7\/site-packages\/apache_beam\/typehints\/typehints.py:693: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3,and in 3.9 it will stop working\r\n if not isinstance(type_params, collections.Iterable):\r\n\r\n\/home\/aclifton\/anaconda3\/lib\/python3.7\/site-packages\/apache_beam\/typehints\/typehints.py:532\r\n \/home\/aclifton\/anaconda3\/lib\/python3.7\/site-packages\/apache_beam\/typehints\/typehints.py:532: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3,and in 3.9 it will stop working\r\n if not isinstance(type_params, (collections.Sequence, set)):\r\n\r\n\/home\/aclifton\/anaconda3\/lib\/python3.7\/site-packages\/elasticsearch\/compat.py:38\r\n 
\/home\/aclifton\/anaconda3\/lib\/python3.7\/site-packages\/elasticsearch\/compat.py:38: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3,and in 3.9 it will stop working\r\n from collections import Mapping\r\n\r\n-- Docs: https:\/\/docs.pytest.org\/en\/latest\/warnings.html\r\n================================================================= 4 warnings, 1 error in 2.74s =================================================================\r\nERROR: not found: \/home\/aclifton\/data\/hf_datasets_sprint\/datasets\/tests\/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_logiqa_en\r\n(no name '\/home\/aclifton\/data\/hf_datasets_sprint\/datasets\/tests\/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_logiqa_en' in any of [<Module test_dataset_common.py>])\r\n\r\n```","Hi ! It looks like the version of `datasets` that is installed in your environment doesn't match the version of `datasets` you're using for the tests. Can you try uninstalling datasets and reinstall it again ?\r\n```\r\npip uninstall datasets -y\r\npip install -e .\r\n```","Closer still!\r\n```python\r\naclifton@pop-os:~\/data\/hf_datasets_sprint\/datasets$ git commit\r\n[logiqa_en 2664fe7f] fixed several issues with logiqa_en.\r\n 4 files changed, 324 insertions(+)\r\n create mode 100644 datasets\/logiqa_en\/README.md\r\n create mode 100644 datasets\/logiqa_en\/dataset_infos.json\r\n create mode 100644 datasets\/logiqa_en\/dummy\/1.1.0\/dummy_data.zip\r\n create mode 100644 datasets\/logiqa_en\/logiqa_en.py\r\naclifton@pop-os:~\/data\/hf_datasets_sprint\/datasets$ git fetch upstream\r\nremote: Enumerating objects: 1, done.\r\nremote: Counting objects: 100% (1\/1), done.\r\nremote: Total 1 (delta 0), reused 0 (delta 0), pack-reused 0\r\nUnpacking objects: 100% (1\/1), 590 bytes | 590.00 KiB\/s, done.\r\nFrom https:\/\/github.com\/huggingface\/datasets\r\n 6e114a0c..318b09eb master -> upstream\/master\r\naclifton@pop-os:~\/data\/hf_datasets_sprint\/datasets$ git rebase upstream\/master \r\nerror: cannot rebase: You have unstaged changes.\r\nerror: Please commit or stash them.\r\naclifton@pop-os:~\/data\/hf_datasets_sprint\/datasets$ git push -u origin logiqa_en\r\nUsername for 'https:\/\/github.com': aclifton314\r\nPassword for 'https:\/\/aclifton314@github.com': \r\nTo https:\/\/github.com\/aclifton314\/datasets\r\n ! [rejected] logiqa_en -> logiqa_en (non-fast-forward)\r\nerror: failed to push some refs to 'https:\/\/github.com\/aclifton314\/datasets'\r\nhint: Updates were rejected because the tip of your current branch is behind\r\nhint: its remote counterpart. 
Integrate the remote changes (e.g.\r\nhint: 'git pull ...') before pushing again.\r\nhint: See the 'Note about fast-forwards' in 'git push --help' for details.\r\n```"],"created_at":1608219720000,"updated_at":1612981992000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1595","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1595","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1595.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1595.patch"},"body":"logiqa in english.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1595\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1594","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1594\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1594\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1594\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1594","id":769747767,"node_id":"MDU6SXNzdWU3Njk3NDc3Njc=","number":1594,"title":"connection error ","user":{"login":"rabeehkarimimahabadi","id":73364383,"node_id":"MDQ6VXNlcjczMzY0Mzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/73364383?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi","html_url":"https:\/\/github.com\/rabeehkarimimahabadi","followers_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/followers","following_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/orgs","repos_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/repos","events_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This happen quite often when they are too many concurrent requests to github.\r\n\r\ni can understand it\u2019s a bit cumbersome to handle on the user side. Maybe we should try a few times in the lib (eg with timeout) before failing, what do you think @lhoestq ?","Yes currently there's no retry afaik. 
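To make the retry idea above concrete, here is a rough, user-side sketch of retrying a flaky script download with a timeout before giving up. The helper name and backoff policy are assumptions, not the library's eventual implementation.

```python
import time
import requests

def fetch_with_retries(url: str, retries: int = 3, timeout: float = 180.0) -> bytes:
    """Hypothetical helper: retry a raw.githubusercontent.com fetch a few times
    (with a generous timeout) before surfacing the connection error."""
    for attempt in range(retries):
        try:
            response = requests.get(url, timeout=timeout)
            response.raise_for_status()
            return response.content
        except (requests.ConnectionError, requests.Timeout):
            if attempt == retries - 1:
                raise
            time.sleep(2 ** attempt)  # simple exponential backoff between attempts

script = fetch_with_retries(
    "https://raw.githubusercontent.com/huggingface/datasets/master/datasets/boolq/boolq.py"
)
```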
We should add retries","Retries were added in #1603 :) \r\nIt will be available in the next release","Hi @lhoestq thank you for the modification, I will use`script_version=\"master\"` for now :), to my experience, also setting timeout to a larger number like 3*60 which I normally use helps a lot on this.\r\n"],"created_at":1608196714000,"updated_at":1608850653000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI am hitting to this error, thanks \r\n\r\n```\r\n> Traceback (most recent call last):\r\n File \"finetune_t5_trainer.py\", line 379, in <module>\r\n main()\r\n File \"finetune_t5_trainer.py\", line 208, in main\r\n if training_args.do_eval or training_args.evaluation_strategy != EvaluationStrategy.NO\r\n File \"finetune_t5_trainer.py\", line 207, in <dictcomp>\r\n for task in data_args.eval_tasks}\r\n File \"\/workdir\/seq2seq\/data\/tasks.py\", line 70, in get_dataset\r\n dataset = self.load_dataset(split=split)\r\n File \"\/workdir\/seq2seq\/data\/tasks.py\", line 66, in load_dataset\r\n return datasets.load_dataset(self.task.name, split=split, script_version=\"master\")\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/load.py\", line 589, in load_dataset\r\n path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/load.py\", line 267, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/utils\/file_utils.py\", line 308, in cached_path\r\n use_etag=download_config.use_etag,\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/utils\/file_utils.py\", line 487, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/master\/datasets\/boolq\/boolq.py\r\nel\/0 I1217 01:11:33.898849 354161 main shadow.py:210 Current job status: FINISHED\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1594\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1593","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1593\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1593\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1593\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1593","id":769611386,"node_id":"MDU6SXNzdWU3Njk2MTEzODY=","number":1593,"title":"Access to key in DatasetDict 
map","user":{"login":"ZhaofengWu","id":11954789,"node_id":"MDQ6VXNlcjExOTU0Nzg5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11954789?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ZhaofengWu","html_url":"https:\/\/github.com\/ZhaofengWu","followers_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/followers","following_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/orgs","repos_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/repos","events_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Indeed that would be cool\r\n\r\nAlso FYI right now the easiest way to do this is\r\n```python\r\ndataset_dict[\"train\"] = dataset_dict[\"train\"].map(my_transform_for_the_train_set)\r\ndataset_dict[\"test\"] = dataset_dict[\"test\"].map(my_transform_for_the_test_set)\r\n```"],"created_at":1608188540000,"updated_at":1610534283000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"It is possible that we want to do different things in the `map` function (and possibly other functions too) of a `DatasetDict`, depending on the key. I understand that `DatasetDict.map` is a really thin wrapper of `Dataset.map`, so it is easy to directly implement this functionality in the client code. 
Still, it'd be nice if there can be a flag, similar to `with_indices`, that allows the callable to know the key inside `DatasetDict`.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1593\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1592","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1592\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1592\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1592\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1592","id":769529421,"node_id":"MDU6SXNzdWU3Njk1Mjk0MjE=","number":1592,"title":"Using datasets.Metric with Trainer()","user":{"login":"YipingNUS","id":5652584,"node_id":"MDQ6VXNlcjU2NTI1ODQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5652584?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/YipingNUS","html_url":"https:\/\/github.com\/YipingNUS","followers_url":"https:\/\/api.github.com\/users\/YipingNUS\/followers","following_url":"https:\/\/api.github.com\/users\/YipingNUS\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/YipingNUS\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/YipingNUS\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/YipingNUS\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/YipingNUS\/orgs","repos_url":"https:\/\/api.github.com\/users\/YipingNUS\/repos","events_url":"https:\/\/api.github.com\/users\/YipingNUS\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/YipingNUS\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["We are indeed working on the integration with `Trainer` :)"],"created_at":1608182224000,"updated_at":1608205744000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Using datasets.Metric with Trainer()\r\n\r\nHi team, I was quite surprised in the [Metric documentation](https:\/\/huggingface.co\/docs\/datasets\/using_metrics.html) I don't see how it can be used with `Trainer()`. That would be the most intuitive use case instead of having to iterate the batches and add predictions and references to the metric, then compute the metric manually. Ideally, any pre-built metrics can be added to `compute_metrics` argument of `Trainer()` and they will be calculated at an interval specified by `TrainingArguments.evaluation_strategy`. \r\n\r\nIs this option available but just not mentioned in the documentation or it's not possible at the moment? 
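For reference, the usual bridge between the two APIs is to wrap a pre-built metric inside the `compute_metrics` callable yourself. A hedged sketch follows; the metric choice and the trainer wiring are illustrative only.

```python
import numpy as np
from datasets import load_metric

metric = load_metric("accuracy")  # any pre-built metric works the same way

def compute_metrics(eval_pred):
    # Trainer hands over an EvalPrediction with `.predictions` and `.label_ids`.
    logits, labels = eval_pred.predictions, eval_pred.label_ids
    predictions = np.argmax(logits, axis=-1)
    return metric.compute(predictions=predictions, references=labels)

# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_ds, eval_dataset=eval_ds,
#                   compute_metrics=compute_metrics)
```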
I notice in the [Transformer | Training and fine-tuning](https:\/\/huggingface.co\/transformers\/training.html) tutorial, you are using custom scripts to calculate the accuracy, P\/R\/F, which are already in the pre-built metrics.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1592\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1591","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1591\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1591\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1591\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1591","id":769383714,"node_id":"MDU6SXNzdWU3NjkzODM3MTQ=","number":1591,"title":"IWSLT-17 Link Broken","user":{"login":"ZhaofengWu","id":11954789,"node_id":"MDQ6VXNlcjExOTU0Nzg5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11954789?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ZhaofengWu","html_url":"https:\/\/github.com\/ZhaofengWu","followers_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/followers","following_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/orgs","repos_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/repos","events_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ZhaofengWu\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892865,"node_id":"MDU6TGFiZWwxOTM1ODkyODY1","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/duplicate","name":"duplicate","color":"cfd3d7","default":true,"description":"This issue or pull request already exists"},{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Sorry, this is a duplicate of #1287. 
Not sure why it didn't come up when I searched `iwslt` in the issues list.","Closing this since its a duplicate"],"created_at":1608166002000,"updated_at":1608278796000,"closed_at":1608278728000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"```\r\nFileNotFoundError: Couldn't find file at https:\/\/wit3.fbk.eu\/archive\/2017-01-trnmted\/\/texts\/DeEnItNlRo\/DeEnItNlRo\/DeEnItNlRo-DeEnItNlRo.tgz\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1591\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1590","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1590\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1590\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1590\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1590","id":769242858,"node_id":"MDU6SXNzdWU3NjkyNDI4NTg=","number":1590,"title":"Add helper to resolve namespace collision","user":{"login":"jramapuram","id":8204807,"node_id":"MDQ6VXNlcjgyMDQ4MDc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8204807?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jramapuram","html_url":"https:\/\/github.com\/jramapuram","followers_url":"https:\/\/api.github.com\/users\/jramapuram\/followers","following_url":"https:\/\/api.github.com\/users\/jramapuram\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jramapuram\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jramapuram\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jramapuram\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jramapuram\/orgs","repos_url":"https:\/\/api.github.com\/users\/jramapuram\/repos","events_url":"https:\/\/api.github.com\/users\/jramapuram\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jramapuram\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Do you have an example?","I was thinking about using something like [importlib](https:\/\/docs.python.org\/3\/library\/importlib.html#importing-a-source-file-directly) to over-ride the collision. \r\n\r\n**Reason requested**: I use the [following template](https:\/\/github.com\/jramapuram\/ml_base\/) repo where I house all my datasets as a submodule.","Alternatively huggingface could consider some submodule type structure like:\r\n\r\n`import huggingface.datasets`\r\n`import huggingface.transformers`\r\n\r\n`datasets` is a very common module in ML and should be an end-user decision and not scope all of python \u00af\\_(\u30c4)_\/\u00af \r\n","That's a interesting option indeed. We'll think about it.","It also wasn't initially obvious to me that the samples which contain `import datasets` were in fact importing a huggingface library (in fact all the huggingface imports are very generic - transformers, tokenizers, datasets...)"],"created_at":1608149844000,"updated_at":1608349238000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Many projects use a module called `datasets`, however this is incompatible with huggingface datasets. 
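A rough illustration of the importlib idea floated in the comments above: load the local `datasets` package from an explicit path under a different name, so the installed library keeps the plain `datasets` import. The path and alias below are hypothetical.

```python
import importlib.util
import sys

def load_local_package(alias: str, init_path: str):
    """Hypothetical helper: import a local package under an alias so it neither
    shadows nor is shadowed by the installed `datasets` library."""
    spec = importlib.util.spec_from_file_location(alias, init_path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[alias] = module
    spec.loader.exec_module(module)
    return module

my_datasets = load_local_package("my_datasets", "/path/to/my_project/datasets/__init__.py")

import datasets  # now unambiguously the Hugging Face library
```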
It would be great if there if there was some helper or similar function to resolve such a common conflict. ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1590\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1589","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1589\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1589\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1589\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1589","id":769187141,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQxMzcwMTM0","number":1589,"title":"Update doc2dial.py","user":{"login":"songfeng","id":2062185,"node_id":"MDQ6VXNlcjIwNjIxODU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2062185?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/songfeng","html_url":"https:\/\/github.com\/songfeng","followers_url":"https:\/\/api.github.com\/users\/songfeng\/followers","following_url":"https:\/\/api.github.com\/users\/songfeng\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/songfeng\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/songfeng\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/songfeng\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/songfeng\/orgs","repos_url":"https:\/\/api.github.com\/users\/songfeng\/repos","events_url":"https:\/\/api.github.com\/users\/songfeng\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/songfeng\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for adding the `doc2dial_rc` config :) \r\n\r\nIt looks like you're missing the dummy data for this config though. 
Could you add them please ?\r\nAlso to fix the CI you'll need to format the code with `make style`"],"created_at":1608144656000,"updated_at":1608571111000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1589","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1589","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1589.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1589.patch"},"body":"Added data loader for machine reading comprehension tasks proposed in the Doc2Dial EMNLP 2020 paper.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1589\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1588","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1588\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1588\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1588\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1588","id":769068227,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQxMjg3OTcz","number":1588,"title":"Modified hind encorp","user":{"login":"rahul-art","id":56379013,"node_id":"MDQ6VXNlcjU2Mzc5MDEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/56379013?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rahul-art","html_url":"https:\/\/github.com\/rahul-art","followers_url":"https:\/\/api.github.com\/users\/rahul-art\/followers","following_url":"https:\/\/api.github.com\/users\/rahul-art\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rahul-art\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rahul-art\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rahul-art\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rahul-art\/orgs","repos_url":"https:\/\/api.github.com\/users\/rahul-art\/repos","events_url":"https:\/\/api.github.com\/users\/rahul-art\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rahul-art\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["welcome, awesome "],"created_at":1608136094000,"updated_at":1608158513000,"closed_at":1608139228000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1588","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1588","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1588.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1588.patch"},"body":"description added, unnecessary comments removed from .py and readme.md reformated \r\n@lhoestq for #1584","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1588\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1587","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1587\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1587\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1587\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1587","id":768929877,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQxMjAwMDk3","number":1587,"title":"Add nq_open question answering dataset ","user":{"login":"Nilanshrajput","id":28673745,"node_id":"MDQ6VXNlcjI4NjczNzQ1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28673745?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Nilanshrajput","html_url":"https:\/\/github.com\/Nilanshrajput","followers_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/followers","following_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/orgs","repos_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/repos","events_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@SBrandeis all checks passing"],"created_at":1608128528000,"updated_at":1608221230000,"closed_at":1608221230000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1587","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1587","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1587.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1587.patch"},"body":"this is pr is a copy of #1506 due to messed up git history in that pr.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1587\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1586","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1586\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1586\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1586\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1586","id":768864502,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQxMTY0MDc2","number":1586,"title":"added irc disentangle 
dataset","user":{"login":"dhruvjoshi1998","id":32560035,"node_id":"MDQ6VXNlcjMyNTYwMDM1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32560035?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dhruvjoshi1998","html_url":"https:\/\/github.com\/dhruvjoshi1998","followers_url":"https:\/\/api.github.com\/users\/dhruvjoshi1998\/followers","following_url":"https:\/\/api.github.com\/users\/dhruvjoshi1998\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dhruvjoshi1998\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dhruvjoshi1998\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dhruvjoshi1998\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dhruvjoshi1998\/orgs","repos_url":"https:\/\/api.github.com\/users\/dhruvjoshi1998\/repos","events_url":"https:\/\/api.github.com\/users\/dhruvjoshi1998\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dhruvjoshi1998\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq sorry, this was the only way I was able to fix the pull request ","@lhoestq Thank you for the feedback. I wondering whether I should be passing an 'id' field in the dictionary since the 'connections' reference the 'id' of the linked messages. This 'id' would just be the same as the id_ that is in the yielded tuple.","Yes indeed it would be cool to have the ids in the dictionary. This way the dataset can be shuffled and all without losing information about the connections. Can you add it if you don't mind ?","Thanks :) could you also add the ids in the dictionary since they're useful for the connection links ?","Thanks !\r\nAlso it looks like the dummy_data.zip were regenerated and are now back to being too big (300KB each).\r\nCan you reduce their sizes ? 
You can actually just revert to the ones you had before the last commit"],"created_at":1608125158000,"updated_at":1611916133000,"closed_at":1611916133000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1586","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1586","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1586.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1586.patch"},"body":"added irc disentanglement dataset","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1586\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1585","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1585\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1585\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1585\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1585","id":768831171,"node_id":"MDU6SXNzdWU3Njg4MzExNzE=","number":1585,"title":"FileNotFoundError for `amazon_polarity`","user":{"login":"phtephanx","id":24647404,"node_id":"MDQ6VXNlcjI0NjQ3NDA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24647404?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/phtephanx","html_url":"https:\/\/github.com\/phtephanx","followers_url":"https:\/\/api.github.com\/users\/phtephanx\/followers","following_url":"https:\/\/api.github.com\/users\/phtephanx\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/phtephanx\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/phtephanx\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/phtephanx\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/phtephanx\/orgs","repos_url":"https:\/\/api.github.com\/users\/phtephanx\/repos","events_url":"https:\/\/api.github.com\/users\/phtephanx\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/phtephanx\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @phtephanx , the `amazon_polarity` dataset has not been released yet. 
It will be available in the coming soon v2of `datasets` :) \r\n\r\nYou can still access it now if you want, but you will need to install datasets via the master branch:\r\n`pip install git+https:\/\/github.com\/huggingface\/datasets.git@master`"],"created_at":1608123065000,"updated_at":1608134576000,"closed_at":1608134576000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Version: `datasets==v1.1.3`\r\n\r\n### Reproduction\r\n```python\r\nfrom datasets import load_dataset\r\ndata = load_dataset(\"amazon_polarity\")\r\n```\r\ncrashes with\r\n```bash\r\nFileNotFoundError: Couldn't find file at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.3\/datasets\/amazon_polarity\/amazon_polarity.py\r\n```\r\nand \r\n```bash\r\nFileNotFoundError: Couldn't find file at https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/datasets\/datasets\/amazon_polarity\/amazon_polarity.py\r\n```\r\nand\r\n```bash\r\nFileNotFoundError: Couldn't find file locally at amazon_polarity\/amazon_polarity.py, or remotely at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.3\/datasets\/amazon_polarity\/amazon_polarity.py or https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/datasets\/datasets\/amazon_polarity\/amazon_polarity.py\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1585\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1584","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1584\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1584\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1584\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1584","id":768820406,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQxMTM2OTQ5","number":1584,"title":"Load hind 
encorp","user":{"login":"rahul-art","id":56379013,"node_id":"MDQ6VXNlcjU2Mzc5MDEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/56379013?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rahul-art","html_url":"https:\/\/github.com\/rahul-art","followers_url":"https:\/\/api.github.com\/users\/rahul-art\/followers","following_url":"https:\/\/api.github.com\/users\/rahul-art\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rahul-art\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rahul-art\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rahul-art\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rahul-art\/orgs","repos_url":"https:\/\/api.github.com\/users\/rahul-art\/repos","events_url":"https:\/\/api.github.com\/users\/rahul-art\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rahul-art\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1608122318000,"updated_at":1608258444000,"closed_at":1608258444000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1584","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1584","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1584.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1584.patch"},"body":"reformated well documented, yaml tags added, code","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1584\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1583","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1583\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1583\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1583\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1583","id":768795986,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQxMTIyODEz","number":1583,"title":"Update metrics 
docstrings.","user":{"login":"Fraser-Greenlee","id":8402500,"node_id":"MDQ6VXNlcjg0MDI1MDA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8402500?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Fraser-Greenlee","html_url":"https:\/\/github.com\/Fraser-Greenlee","followers_url":"https:\/\/api.github.com\/users\/Fraser-Greenlee\/followers","following_url":"https:\/\/api.github.com\/users\/Fraser-Greenlee\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Fraser-Greenlee\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Fraser-Greenlee\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Fraser-Greenlee\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Fraser-Greenlee\/orgs","repos_url":"https:\/\/api.github.com\/users\/Fraser-Greenlee\/repos","events_url":"https:\/\/api.github.com\/users\/Fraser-Greenlee\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Fraser-Greenlee\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1608120858000,"updated_at":1608316746000,"closed_at":1608316746000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1583","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1583","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1583.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1583.patch"},"body":"#1478 Correcting the argument descriptions for metrics.\r\n\r\nLet me know if there's any issues.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1583\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1582","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1582\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1582\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1582\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1582","id":768776617,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQxMTEwODU1","number":1582,"title":"Adding wiki lingua dataset as new 
branch","user":{"login":"katnoria","id":7674948,"node_id":"MDQ6VXNlcjc2NzQ5NDg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7674948?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/katnoria","html_url":"https:\/\/github.com\/katnoria","followers_url":"https:\/\/api.github.com\/users\/katnoria\/followers","following_url":"https:\/\/api.github.com\/users\/katnoria\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/katnoria\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/katnoria\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/katnoria\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/katnoria\/orgs","repos_url":"https:\/\/api.github.com\/users\/katnoria\/repos","events_url":"https:\/\/api.github.com\/users\/katnoria\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/katnoria\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1608119587000,"updated_at":1608228406000,"closed_at":1608228405000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1582","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1582","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1582.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1582.patch"},"body":"Adding the dataset as new branch as advised here: #1470\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1582\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1581","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1581\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1581\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1581\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1581","id":768320594,"node_id":"MDU6SXNzdWU3NjgzMjA1OTQ=","number":1581,"title":"Installing datasets and transformers in a tensorflow docker image throws Permission Error on 'import 
transformers'","user":{"login":"eduardofv","id":702586,"node_id":"MDQ6VXNlcjcwMjU4Ng==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/702586?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/eduardofv","html_url":"https:\/\/github.com\/eduardofv","followers_url":"https:\/\/api.github.com\/users\/eduardofv\/followers","following_url":"https:\/\/api.github.com\/users\/eduardofv\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/eduardofv\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/eduardofv\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/eduardofv\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/eduardofv\/orgs","repos_url":"https:\/\/api.github.com\/users\/eduardofv\/repos","events_url":"https:\/\/api.github.com\/users\/eduardofv\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/eduardofv\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting !\r\nYou can override the directory in which cache file are stored using for example\r\n```\r\nENV HF_HOME=\"\/root\/cache\/hf_cache_home\"\r\n```\r\n\r\nThis way both `transformers` and `datasets` will use this directory instead of the default `.cache`","Great, thanks. I didn't see documentation about than ENV variable, looks like an obvious solution. ","> Thanks for reporting !\r\n> You can override the directory in which cache file are stored using for example\r\n> \r\n> ```\r\n> ENV HF_HOME=\"\/root\/cache\/hf_cache_home\"\r\n> ```\r\n> \r\n> This way both `transformers` and `datasets` will use this directory instead of the default `.cache`\r\n\r\ncan we disable caching directly?","Hi ! Unfortunately no since we need this directory to load datasets.\r\nWhen you load a dataset, it downloads the raw data files in the cache directory inside <cache_dir>\/downloads. Then it builds the dataset and saves it as arrow data inside <cache_dir>\/<dataset_name>.\r\n\r\nHowever you can specify the directory of your choice, and it can be a temporary directory if you want to clean everything up at one point.","I'm closing this to keep issues a bit cleaner"],"created_at":1608076941000,"updated_at":1623944445000,"closed_at":1623944445000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I am using a docker container, based on latest tensorflow-gpu image, to run transformers and datasets (4.0.1 and 1.1.3 respectively - Dockerfile attached below). Importing transformers throws a Permission Error to access `\/.cache`:\r\n\r\n```\r\n$ docker run --gpus=all --rm -it -u $(id -u):$(id -g) -v $(pwd)\/data:\/root\/data -v $(pwd):\/root -v $(pwd)\/models\/:\/root\/models -v $(pwd)\/saved_models\/:\/root\/saved_models -e \"HOST_HOSTNAME=$(hostname)\" hf-error:latest \/bin\/bash\r\n\r\n________ _______________ \r\n___ __\/__________________________________ ____\/__ \/________ __\r\n__ \/ _ _ \\_ __ \\_ ___\/ __ \\_ ___\/_ \/_ __ \/_ __ \\_ | \/| \/ \/\r\n_ \/ \/ __\/ \/ \/ \/(__ )\/ \/_\/ \/ \/ _ __\/ _ \/ \/ \/_\/ \/_ |\/ |\/ \/ \r\n\/_\/ \\___\/\/_\/ \/_\/\/____\/ \\____\/\/_\/ \/_\/ \/_\/ \\____\/____\/|__\/\r\n\r\n\r\nYou are running this container as user with ID 1000 and group 1000,\r\nwhich should map to the ID and group for your user on the Docker host. 
Great!\r\n\r\ntf-docker \/root > python\r\nPython 3.6.9 (default, Oct 8 2020, 12:12:24) \r\n[GCC 8.4.0] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import transformers\r\n2020-12-15 23:53:21.165827: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/transformers\/__init__.py\", line 22, in <module>\r\n from .integrations import ( # isort:skip\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/transformers\/integrations.py\", line 5, in <module>\r\n from .trainer_utils import EvaluationStrategy\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/transformers\/trainer_utils.py\", line 25, in <module>\r\n from .file_utils import is_tf_available, is_torch_available, is_torch_tpu_available\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/transformers\/file_utils.py\", line 88, in <module>\r\n import datasets # noqa: F401\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/__init__.py\", line 26, in <module>\r\n from .arrow_dataset import Dataset, concatenate_datasets\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/arrow_dataset.py\", line 40, in <module>\r\n from .arrow_reader import ArrowReader\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/arrow_reader.py\", line 31, in <module>\r\n from .utils import cached_path, logging\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/utils\/__init__.py\", line 20, in <module>\r\n from .download_manager import DownloadManager, GenerateMode\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/utils\/download_manager.py\", line 25, in <module>\r\n from .file_utils import HF_DATASETS_CACHE, cached_path, get_from_cache, hash_url_to_filename\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/utils\/file_utils.py\", line 118, in <module>\r\n os.makedirs(HF_MODULES_CACHE, exist_ok=True)\r\n File \"\/usr\/lib\/python3.6\/os.py\", line 210, in makedirs\r\n makedirs(head, mode, exist_ok)\r\n File \"\/usr\/lib\/python3.6\/os.py\", line 210, in makedirs\r\n makedirs(head, mode, exist_ok)\r\n File \"\/usr\/lib\/python3.6\/os.py\", line 220, in makedirs\r\n mkdir(name, mode)\r\nPermissionError: [Errno 13] Permission denied: '\/.cache'\r\n```\r\nI've pinned the problem to `RUN pip install datasets`, and by commenting it you can actually import transformers correctly. 
Another workaround I've found is creating the directory and giving permissions to it directly on the Dockerfile.\r\n\r\n```\r\nFROM tensorflow\/tensorflow:latest-gpu-jupyter\r\nWORKDIR \/root\r\n\r\nEXPOSE 80\r\nEXPOSE 8888\r\nEXPOSE 6006\r\n\r\nENV SHELL \/bin\/bash\r\nENV PATH=\"\/root\/.local\/bin:${PATH}\"\r\n\r\nENV CUDA_CACHE_PATH=\"\/root\/cache\/cuda\"\r\nENV CUDA_CACHE_MAXSIZE=\"4294967296\"\r\n\r\nENV TFHUB_CACHE_DIR=\"\/root\/cache\/tfhub\"\r\n\r\nRUN pip install --upgrade pip\r\n\r\nRUN apt update -y && apt upgrade -y\r\n\r\nRUN pip install transformers\r\n\r\n#Installing datasets will throw the error, try commenting and rebuilding\r\nRUN pip install datasets\r\n\r\n#Another workaround is creating the directory and give permissions explicitly\r\n#RUN mkdir \/.cache\r\n#RUN chmod 777 \/.cache\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1581\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1580","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1580\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1580\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1580\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1580","id":768111377,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQwNjQxNDQ3","number":1580,"title":"made suggested changes in diplomacy_detection.py","user":{"login":"MisbahKhan789","id":15351802,"node_id":"MDQ6VXNlcjE1MzUxODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15351802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/MisbahKhan789","html_url":"https:\/\/github.com\/MisbahKhan789","followers_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/followers","following_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/orgs","repos_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/repos","events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1608061920000,"updated_at":1608114472000,"closed_at":1608114472000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1580","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1580","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1580.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1580.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1580\/timeline","performed_via_github_app":null,"is_pull_request":true} 
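Tying off the cache-permission thread above: the `HF_HOME` override suggested for the Dockerfile can also be applied from Python, as long as it happens before the libraries are imported. A small sketch, with the directory being an assumption:

```python
import os

# Assumed writable location, mirroring the ENV HF_HOME suggestion above.
# It must be set before importing datasets/transformers, because their cache
# directories are resolved when the modules are first imported.
os.environ["HF_HOME"] = "/root/cache/hf_cache_home"

import datasets       # no longer attempts to create '/.cache/huggingface'
import transformers
```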
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1579","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1579\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1579\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1579\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1579","id":767808465,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQwMzk5OTY5","number":1579,"title":"Adding CLIMATE-FEVER dataset","user":{"login":"tdiggelm","id":1658969,"node_id":"MDQ6VXNlcjE2NTg5Njk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1658969?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tdiggelm","html_url":"https:\/\/github.com\/tdiggelm","followers_url":"https:\/\/api.github.com\/users\/tdiggelm\/followers","following_url":"https:\/\/api.github.com\/users\/tdiggelm\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tdiggelm\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tdiggelm\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tdiggelm\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tdiggelm\/orgs","repos_url":"https:\/\/api.github.com\/users\/tdiggelm\/repos","events_url":"https:\/\/api.github.com\/users\/tdiggelm\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tdiggelm\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I `git rebase`ed my branch to `upstream\/master` as suggested in point 7 of <https:\/\/huggingface.co\/docs\/datasets\/share_dataset.html> and subsequently used `git pull` to be able to push to my remote branch. However, I think this messed up the history.\r\n\r\nPlease let me know if I should create a clean new PR with my changes.\r\n\r\nUpdate: I also fixed the dataset name in the Dataset Card.","Dear @SBrandeis , @lhoestq . I am not sure how to fix the PR with respect to the additional files that are currently included in the commits. Could you provide me with an example? Otherwise I would be happy to close\/re-open another PR. Please let me know if anything is missing for the review.","Hi @tdiggelm, thanks for the contribution! This dataset is really awesome.\r\nI believe creating a new branch from master and opening a new PR with your changes is the simplest option since no review has been done yet. Feel free to ping us when it's done.","> Hi @tdiggelm, thanks for the contribution! This dataset is really awesome.\r\n> I believe creating a new branch from master and opening a new PR with your changes is the simplest option since no review has been done yet. Feel free to ping us when it's done.\r\n\r\nThank you very much for your quick reply! 
Will do ASAP and ping you when done.","closing in favor of #1623"],"created_at":1608050962000,"updated_at":1608644596000,"closed_at":1608644595000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1579","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1579","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1579.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1579.patch"},"body":"This PR request the addition of the CLIMATE-FEVER dataset:\r\nA dataset adopting the FEVER methodology that consists of 1,535 real-world claims regarding climate-change collected on the internet. Each claim is accompanied by five manually annotated evidence sentences retrieved from the English Wikipedia that support, refute or do not give enough information to validate the claim totalling in 7,675 claim-evidence pairs. The dataset features challenging claims that relate multiple facets and disputed cases of claims where both supporting and refuting evidence are present.\r\n\r\nMore information can be found at:\r\n- Homepage: <http:\/\/climatefever.ai>\r\n- Paper: <https:\/\/arxiv.org\/abs\/2012.00614>\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1579\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1578","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1578\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1578\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1578\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1578","id":767760513,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQwMzY1NzYz","number":1578,"title":"update multiwozv22 
checksums","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1608048832000,"updated_at":1608051989000,"closed_at":1608051989000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1578","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1578","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1578.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1578.patch"},"body":"a file was updated on the GitHub repo for the dataset","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1578\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1577","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1577\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1577\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1577\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1577","id":767342432,"node_id":"MDExOlB1bGxSZXF1ZXN0NTQwMDg2MzY5","number":1577,"title":"Add comet metric","user":{"login":"ricardorei","id":17256847,"node_id":"MDQ6VXNlcjE3MjU2ODQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17256847?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ricardorei","html_url":"https:\/\/github.com\/ricardorei","followers_url":"https:\/\/api.github.com\/users\/ricardorei\/followers","following_url":"https:\/\/api.github.com\/users\/ricardorei\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ricardorei\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ricardorei\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ricardorei\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ricardorei\/orgs","repos_url":"https:\/\/api.github.com\/users\/ricardorei\/repos","events_url":"https:\/\/api.github.com\/users\/ricardorei\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ricardorei\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I also thought a bit about the fact that \"sources\" can't be added to the batch.. 
but changing that would require a lot more changes. And I agree that the idea of adding them as part of the references is not ideal. Conceptually they are not references.\r\n\r\nI would keep it like this for now.. And in the future, work on a more consistent batch interface."],"created_at":1608022560000,"updated_at":1610631190000,"closed_at":1610631190000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1577","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1577","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1577.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1577.patch"},"body":"Hey! I decided to add our new Crosslingual Optimized Metric for Evaluation of Translation (COMET) to the list of the available metrics.\r\n\r\nCOMET was [presented at EMNLP20](https:\/\/www.aclweb.org\/anthology\/2020.emnlp-main.213\/) and it is the highest performing metric, so far, on the WMT19 benchmark.\r\n\r\nWe also participated in the [WMT20 Metrics shared task ](http:\/\/www.statmt.org\/wmt20\/pdf\/2020.wmt-1.101.pdf) where once again COMET was validated as a top-performing metric. \r\n\r\n\r\nI hope that this metric will help researcher's and industry workers to better validate their MT systems in the future \ud83e\udd17 !\r\n\r\nCheers,\r\nRicardo\r\n\r\n\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1577\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1576","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1576\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1576\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1576\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1576","id":767080645,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM5OTE3MTA0","number":1576,"title":"Remove the contributors 
section","user":{"login":"clmnt","id":821155,"node_id":"MDQ6VXNlcjgyMTE1NQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/821155?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/clmnt","html_url":"https:\/\/github.com\/clmnt","followers_url":"https:\/\/api.github.com\/users\/clmnt\/followers","following_url":"https:\/\/api.github.com\/users\/clmnt\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/clmnt\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/clmnt\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/clmnt\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/clmnt\/orgs","repos_url":"https:\/\/api.github.com\/users\/clmnt\/repos","events_url":"https:\/\/api.github.com\/users\/clmnt\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/clmnt\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607996835000,"updated_at":1608036827000,"closed_at":1608036826000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1576","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1576","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1576.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1576.patch"},"body":"sourcerer is down","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1576\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1575","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1575\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1575\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1575\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1575","id":767076374,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM5OTEzNzgx","number":1575,"title":"Hind_Encorp all done","user":{"login":"rahul-art","id":56379013,"node_id":"MDQ6VXNlcjU2Mzc5MDEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/56379013?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rahul-art","html_url":"https:\/\/github.com\/rahul-art","followers_url":"https:\/\/api.github.com\/users\/rahul-art\/followers","following_url":"https:\/\/api.github.com\/users\/rahul-art\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rahul-art\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rahul-art\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rahul-art\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rahul-art\/orgs","repos_url":"https:\/\/api.github.com\/users\/rahul-art\/repos","events_url":"https:\/\/api.github.com\/users\/rahul-art\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rahul-art\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["ALL TEST PASSED locally @yjernite ","@rahul-art kindly run the following from the datasets folder \r\n\r\n```\r\nmake style \r\nflake8 
datasets\r\n\r\n```\r\n","@skyprince999 I did that before it says all done \r\n","I did that again it gives the same output all done and then I synchronized my changes with this branch ","@lhoestq i did all the changes you suggested but at the time of load_dataset it is giving me error\r\n`**`datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=76591256, num_examples=1, dataset_name='hind_encorp'), 'recorded': SplitInfo(name='train', num_bytes=78945714, num_examples=273885, dataset_name='hind_encorp')}]`**`","\r\n\r\n\r\nI cloned the branch and it seems to work fine at my end. try to clear the cache - \r\n\r\n```\r\nrm -rf \/home\/ubuntu\/.cache\/huggingface\/datasets\/\r\nrm -rf \/home\/ubuntu\/.cache\/huggingface\/modules\/datasets_modules\/\/datasets\/\r\n```\r\nBut the dataset has only one record. Is that correct? \r\n![image](https:\/\/user-images.githubusercontent.com\/9033954\/102331376-c7929b00-3fb0-11eb-8a6c-81b2cf47bc2a.png)\r\n","> @lhoestq i did all the changes you suggested but at the time of load_dataset it is giving me error\r\n> `**`datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=76591256, num_examples=1, dataset_name='hind_encorp'), 'recorded': SplitInfo(name='train', num_bytes=78945714, num_examples=273885, dataset_name='hind_encorp')}]`**`\r\n\r\nYou can ignore this error by adding `ignore_verifications=True` to `load_dataset`.\r\n\r\nThis error is raised because you're loading a dataset that you've already loaded once in the past. Therefore the library does some verifications to make sure it's generated the same way. \r\n\r\nHowever since you've done changes in the dataset script you should ignore these verifications.\r\n\r\nYou can regenerate the dataset_infos.json with\r\n```\r\ndatasets-cli test .\/datasets\/hindi_encorp --save_infos --all_configs --ignore_verifications\r\n```\r\n\r\n\r\n> I cloned the branch and it seems to work fine at my end. try to clear the cache -\r\n> \r\n> ```\r\n> rm -rf \/home\/ubuntu\/.cache\/huggingface\/datasets\/\r\n> rm -rf \/home\/ubuntu\/.cache\/huggingface\/modules\/datasets_modules\/\/datasets\/\r\n> ```\r\n> \r\n> But the dataset has only one record. Is that correct?\r\n> ![image](https:\/\/user-images.githubusercontent.com\/9033954\/102331376-c7929b00-3fb0-11eb-8a6c-81b2cf47bc2a.png)\r\n\r\nYes the current parsing is wrong, I've already given @rahul-art some suggestions and it looks like it works way better now (num_examples=273885).\r\n\r\nThanks for fixing the parsing @rahul-art !\r\nFeel free to commit and push your changes once it's ready :) ","i ran the command you provided datasets-cli test .\/datasets\/hindi_encorp --save_infos --all_configs --ignore_verifications \r\nbut now its giving this error @lhoestq \r\n\r\nFileNotFoundError: Couldn't find file locally at .\/datasets\/hindi_encorp\/hindi_encorp.py, or remotely at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/master\/datasets\/.\/datasets\/hindi_encorp\/hindi_encorp.py or https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/datasets\/datasets\/.\/datasets\/hindi_encorp\/hindi_encorp.py.\r\nIf the dataset was added recently, you may need to to pass script_version=\"master\" to find the loading script on the master branch.\r\n","whoops I meant `hind_encorp` instead of `hindi_encorp` sorry","@lhoestq all changes have done successfully in this PR #1584","Ok thanks ! 
closing this one in favor of #1584 "],"created_at":1607996162000,"updated_at":1608131717000,"closed_at":1608131717000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1575","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1575","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1575.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1575.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1575\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1574","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1574\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1574\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1574\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1574","id":767015317,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM5ODY1Mzcy","number":1574,"title":"Diplomacy detection 3","user":{"login":"MisbahKhan789","id":15351802,"node_id":"MDQ6VXNlcjE1MzUxODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15351802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/MisbahKhan789","html_url":"https:\/\/github.com\/MisbahKhan789","followers_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/followers","following_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/orgs","repos_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/repos","events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607988531000,"updated_at":1607988572000,"closed_at":1607988572000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1574","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1574","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1574.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1574.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1574\/timeline","performed_via_github_app":null,"is_pull_request":true} 
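The hind_encorp thread above describes a recurring workflow when editing a local dataset script: the first reload raises `NonMatchingSplitsSizesError` against the stale `dataset_infos.json`, so verifications are skipped and the info file is regenerated afterwards. A minimal sketch of that loop, using the local path, split name, and `ignore_verifications` kwarg exactly as quoted in the comments (the print statement is only illustrative):

```python
# Sketch of the fix-and-reload loop suggested in the thread above.
from datasets import load_dataset

dataset = load_dataset(
    "./datasets/hind_encorp",   # local dataset script being edited
    split="train",
    ignore_verifications=True,  # skip split-size checks against the stale dataset_infos.json
)
print(dataset.num_rows)

# Once the parsing looks correct, refresh the recorded split sizes with the
# CLI command quoted in the comments:
#   datasets-cli test ./datasets/hind_encorp --save_infos --all_configs --ignore_verifications
```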
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1573","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1573\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1573\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1573\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1573","id":767011938,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM5ODYyNjcx","number":1573,"title":"adding dataset for diplomacy detection-2","user":{"login":"MisbahKhan789","id":15351802,"node_id":"MDQ6VXNlcjE1MzUxODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15351802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/MisbahKhan789","html_url":"https:\/\/github.com\/MisbahKhan789","followers_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/followers","following_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/orgs","repos_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/repos","events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607988097000,"updated_at":1607989017000,"closed_at":1607989017000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1573","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1573","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1573.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1573.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1573\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1572","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1572\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1572\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1572\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1572","id":767008470,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM5ODU5OTgx","number":1572,"title":"add Gnad10 dataset 
","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607987702000,"updated_at":1631897677000,"closed_at":1608137550000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1572","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1572","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1572.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1572.patch"},"body":"reference [PR#1317](https:\/\/github.com\/huggingface\/datasets\/pull\/1317)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1572\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1571","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1571\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1571\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1571\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1571","id":766981721,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM5ODM5OTEw","number":1571,"title":"Fixing the KILT tasks to match our current 
standards","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607984772000,"updated_at":1607987261000,"closed_at":1607987261000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1571","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1571","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1571.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1571.patch"},"body":"This introduces a few changes to the Knowledge Intensive Learning task benchmark to bring it more in line with our current datasets, including adding the (minimal) dataset card and having one config per sub-task","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1571\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1570","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1570\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1570\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1570\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1570","id":766830545,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM5NzM1MDY2","number":1570,"title":"Documentation for loading CSV datasets misleads the 
user","user":{"login":"onurgu","id":56893,"node_id":"MDQ6VXNlcjU2ODkz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/56893?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/onurgu","html_url":"https:\/\/github.com\/onurgu","followers_url":"https:\/\/api.github.com\/users\/onurgu\/followers","following_url":"https:\/\/api.github.com\/users\/onurgu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/onurgu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/onurgu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/onurgu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/onurgu\/orgs","repos_url":"https:\/\/api.github.com\/users\/onurgu\/repos","events_url":"https:\/\/api.github.com\/users\/onurgu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/onurgu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607972677000,"updated_at":1608665412000,"closed_at":1608558429000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1570","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1570","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1570.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1570.patch"},"body":"Documentation for loading CSV datasets misleads the user into thinking setting `quote_char' to False will disable quoting.\r\n\r\nThere are two problems here:\r\n i) `quote_char' is misspelled, must be `quotechar'\r\n ii) the documentation should mention `quoting'","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1570\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1569","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1569\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1569\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1569\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1569","id":766758895,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM5NjkwMjc2","number":1569,"title":"added un_ga 
dataset","user":{"login":"param087","id":26374564,"node_id":"MDQ6VXNlcjI2Mzc0NTY0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26374564?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/param087","html_url":"https:\/\/github.com\/param087","followers_url":"https:\/\/api.github.com\/users\/param087\/followers","following_url":"https:\/\/api.github.com\/users\/param087\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/param087\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/param087\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/param087\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/param087\/orgs","repos_url":"https:\/\/api.github.com\/users\/param087\/repos","events_url":"https:\/\/api.github.com\/users\/param087\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/param087\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607967724000,"updated_at":1608046138000,"closed_at":1608046138000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1569","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1569","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1569.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1569.patch"},"body":"Hi :hugs:, This is a PR for [United nations general assembly resolutions: A six-language parallel corpus](http:\/\/opus.nlpl.eu\/UN.php) dataset.\r\nWith suggested changes in #1330 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1569\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1568","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1568\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1568\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1568\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1568","id":766722994,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM5NjY2ODg1","number":1568,"title":"Added the dataset clickbait_news_bg","user":{"login":"tsvm","id":1083319,"node_id":"MDQ6VXNlcjEwODMzMTk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1083319?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tsvm","html_url":"https:\/\/github.com\/tsvm","followers_url":"https:\/\/api.github.com\/users\/tsvm\/followers","following_url":"https:\/\/api.github.com\/users\/tsvm\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tsvm\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tsvm\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tsvm\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tsvm\/orgs","repos_url":"https:\/\/api.github.com\/users\/tsvm\/repos","events_url":"https:\/\/api.github.com\/users\/tsvm\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tsvm\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi 
@tsvm Great work! \r\nSince you have raised a clean PR could you close the earlier one - #1445 ? \r\n","> Hi @tsvm Great work!\r\n> Since you have raised a clean PR could you close the earlier one - #1445 ?\r\n\r\nDone."],"created_at":1607965380000,"updated_at":1608056936000,"closed_at":1608056936000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1568","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1568","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1568.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1568.patch"},"body":"There was a problem with my [previous PR 1445](https:\/\/github.com\/huggingface\/datasets\/pull\/1445) after rebasing, so I'm copying the dataset code into a new branch and submitting a new PR.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1568\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1567","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1567\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1567\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1567\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1567","id":766382609,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM5NDE3NzI5","number":1567,"title":"[wording] Update Readme.md","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607949292000,"updated_at":1608036847000,"closed_at":1608036846000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1567","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1567","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1567.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1567.patch"},"body":"Make the features of the library clearer.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1567\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1566","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1566\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1566\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1566\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1566","id":766354236,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM5Mzk5NTg4","number":1566,"title":"Add Microsoft Research Sequential Question Answering (SQA) Dataset","user":{"login":"mattbui","id":46804938,"node_id":"MDQ6VXNlcjQ2ODA0OTM4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/46804938?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mattbui","html_url":"https:\/\/github.com\/mattbui","followers_url":"https:\/\/api.github.com\/users\/mattbui\/followers","following_url":"https:\/\/api.github.com\/users\/mattbui\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mattbui\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mattbui\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mattbui\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mattbui\/orgs","repos_url":"https:\/\/api.github.com\/users\/mattbui\/repos","events_url":"https:\/\/api.github.com\/users\/mattbui\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mattbui\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I proposed something a few weeks ago in #898 (un-merged) but I think that the way that @mattbui added the dataset in the present PR is smarter and simpler should replace my PR #898.\r\n\r\n(Narrator voice: *And it was around that time that Thomas realized that the community was now a lot smarter than him and he should hand-over the library he had started with @lhoestq to the community and stop pretending he knew everything about it.*)"],"created_at":1607947350000,"updated_at":1608045862000,"closed_at":1608045862000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1566","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1566","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1566.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1566.patch"},"body":"For more information: https:\/\/msropendata.com\/datasets\/b25190ed-0f59-47b1-9211-5962858142c2","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1566\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1565","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1565\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1565\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1565\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1565","id":766333940,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM5Mzg2MzEx","number":1565,"title":"Create 
README.md","user":{"login":"ManuelFay","id":43467008,"node_id":"MDQ6VXNlcjQzNDY3MDA4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/43467008?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ManuelFay","html_url":"https:\/\/github.com\/ManuelFay","followers_url":"https:\/\/api.github.com\/users\/ManuelFay\/followers","following_url":"https:\/\/api.github.com\/users\/ManuelFay\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ManuelFay\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ManuelFay\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ManuelFay\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ManuelFay\/orgs","repos_url":"https:\/\/api.github.com\/users\/ManuelFay\/repos","events_url":"https:\/\/api.github.com\/users\/ManuelFay\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ManuelFay\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@ManuelFay thanks you so much for adding a dataset card, this is such a cool contribution!\r\n\r\nThis looks like it uses an old template for the card we've moved things around a bit and we have an app you should be using to get the tags and the structure of the Data Fields paragraph :) Would you mind moving your text to the newer format (we're also asking contributors to keep the full template structure, even if some sections still have [More Information Needed] for the time being)\r\n\r\nHere's the link to the instructions:\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card\r\n\r\nOut of curiosity, what was your landing point for filling out the card? Did you follow the \"Update on Github\" when navigating the datasets? Trying to make the instructions as clear as possible :) ","@yjernite \r\n\r\nPerfect, I'll follow the instructions when I have a bit more time tomorrow ! I was actually browsing the new contributions after the dataset sprint and realized most of the \"old\" datasets were not tagged, so I just copied and pasted the readme from another dataset and was not aware there was precise instructions... Will fix !\r\n\r\nBTW, amazing job with the retriBert work, I used the contrastive + in-batch negative quite a bit for various projects. Probably neither the time nor place to talk about that but I was curious as to why, in your original work, you prefered using a simple projection in the last layer to differentiate the question vs answer embedding, rather than allowing for bias in the dense layer or even just to fine-tune 2 different embedders for question + answer ? ","Cool! Looking forward to the next version!\r\n\r\nQuick answer for retriBERT is that I expected a simple projection to generalize better and more importantly only having to store the gradients for the proj means training with larger batches :) If you want to keep chatting about it, feel free to send me an email!","Hi @ManuelFay ! \r\nIf you're still interested in completing the FQuAD dataset card, note that we've generated one that is pre-filled.\r\nTherefore feel free to complete it with the content you already have in your README.md.\r\nThis would be awesome ! And thanks again for your contribution :)","Yo @lhoestq , just not sure about the tag table at the top, I used @yjernite eli5 template so hope it's okay ! 
Also want to signal the streamlit app for dataset tagging has a weird behavior with the size categories when filling in the form. \r\n\r\nThanks to you guys for doing that and sorry about the time it took, i completely forgot about it ! \r\n"],"created_at":1607946023000,"updated_at":1616680909000,"closed_at":1616680909000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1565","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1565","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1565.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1565.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1565\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1564","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1564\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1564\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1564\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1564","id":766266609,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM5MzQzMjAy","number":1564,"title":"added saudinewsnet","user":{"login":"abdulelahsm","id":28743265,"node_id":"MDQ6VXNlcjI4NzQzMjY1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28743265?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abdulelahsm","html_url":"https:\/\/github.com\/abdulelahsm","followers_url":"https:\/\/api.github.com\/users\/abdulelahsm\/followers","following_url":"https:\/\/api.github.com\/users\/abdulelahsm\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abdulelahsm\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abdulelahsm\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abdulelahsm\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abdulelahsm\/orgs","repos_url":"https:\/\/api.github.com\/users\/abdulelahsm\/repos","events_url":"https:\/\/api.github.com\/users\/abdulelahsm\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abdulelahsm\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @abdulelahsm - This is an interesting dataset! But there are multiple issues with the PR. Some of them are listed below: \r\n- default builder config is not defined. There should be atleast one builder config \r\n- URL is incorrectly constructed so the data files are not being downloaded \r\n- dataset_info.json file was not created\r\n\r\nPlease have a look at some existing merged datasets to get a reference on building the data loader. If you are still stuck, reach out. \r\n","@skyprince999 I totally agree. Thx for the feedback!","Hi @abdulelahsm ! Thanks for adding this one :) \r\nyou don't actually have to add builder configurations if you don't need them. It's fine as it is now\r\n\r\nAnd as @skyprince999 noticed, the current URLs don't work. 
to download files.\r\nYou can use this one for example for the first batch instead:\r\nhttps:\/\/github.com\/parallelfold\/SaudiNewsNet\/raw\/master\/dataset\/2015-07-21.zip\r\n\r\nFeel free to ping me if you have questions or if you're ready for a review :) ","@lhoestq Hey, I tried using the first batch instead, the data was downloaded but I got this error, not sure why it can't find the path?\r\n\r\nfor content, I ran ``` \".\/datasets\/saudinewsnet\/test.py\"```\r\n\r\nwhich is a local test I'm running for the dataset, it contains the following code\r\n\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndata = load_dataset(\".\/datasets\/saudinewsnet\", split= \"train\")\r\n\r\nprint(data)\r\n\r\nprint(data[1])\r\n```\r\n\r\nthis is the error I got \r\n\r\n```\r\n2020-12-18 21:45:39.403908: W tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory\r\n2020-12-18 21:45:39.403953: I tensorflow\/stream_executor\/cuda\/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\nUsing custom data configuration default\r\nDownloading and preparing dataset saudi_news_net\/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/home\/mesfas\/.cache\/huggingface\/datasets\/saudi_news_net\/default\/0.0.0\/62ece5ef0a991415352d4b1efac681d75b5b3404064fd4f6a1d659499dab18f4...\r\nDownloading: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3.42M\/3.42M [00:03<00:00, 1.03MB\/s]\r\nTraceback (most recent call last):\r\n File \"\/home\/mesfas\/opensource\/datasets\/src\/datasets\/builder.py\", line 604, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/home\/mesfas\/opensource\/datasets\/src\/datasets\/builder.py\", line 902, in _prepare_split\r\n for key, record in utils.tqdm(\r\n File \"\/home\/mesfas\/environments\/ar_res_reviews\/lib\/python3.8\/site-packages\/tqdm\/std.py\", line 1133, in __iter__\r\n for obj in iterable:\r\n File \"\/home\/mesfas\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/saudinewsnet\/62ece5ef0a991415352d4b1efac681d75b5b3404064fd4f6a1d659499dab18f4\/saudinewsnet.py\", line 108, in _generate_examples\r\n with open(filepath, encoding=\"utf-8\").read() as f:\r\nIsADirectoryError: [Errno 21] Is a directory: 
'\/home\/mesfas\/.cache\/huggingface\/datasets\/downloads\/extracted\/314fd983aa07d3dada9429911a805270c3285f48759d3584a1343c2d86260765'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \".\/datasets\/saudinewsnet\/test.py\", line 3, in <module>\r\n data = load_dataset(\".\/datasets\/saudinewsnet\", split= \"train\")\r\n File \"\/home\/mesfas\/opensource\/datasets\/src\/datasets\/load.py\", line 607, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/home\/mesfas\/opensource\/datasets\/src\/datasets\/builder.py\", line 526, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/home\/mesfas\/opensource\/datasets\/src\/datasets\/builder.py\", line 606, in _download_and_prepare\r\n raise OSError(\r\nOSError: Cannot find data file. \r\nOriginal error:\r\n[Errno 21] Is a directory: '\/home\/mesfas\/.cache\/huggingface\/datasets\/downloads\/extracted\/314fd983aa07d3dada9429911a805270c3285f48759d3584a1343c2d86260765'\r\n```\r\n\r\n\r\nthis is the split code \r\n\r\n```\r\n def _split_generators(self, dl_manager):\r\n \"\"\"Returns SplitGenerators.\"\"\"\r\n my_urls = _URL\r\n datadir = dl_manager.download_and_extract(my_urls)\r\n return [\r\n datasets.SplitGenerator(\r\n name=datasets.Split.TRAIN,\r\n # These kwargs will be passed to _generate_examples\r\n gen_kwargs={\r\n \"filepath\": datadir,\r\n \"split\": \"train\"\r\n },\r\n ),\r\n ]\r\n```\r\nand this is how I'm generating the examples\r\n\r\n```\r\n def _generate_examples(self, filepath, split):\r\n \r\n #logging.info(\"generating examples from = %s\", filepath)\r\n with open(filepath, encoding=\"utf-8\") as f:\r\n articles = json.load(f)\r\n for article in articles:\r\n title = article.get(\"title\", \"\").strip()\r\n source = article.get(\"source\", \"\").strip()\r\n date = article.get(\"date_extracted\", \"\").strip()\r\n link = article.get(\"url\", \"\").strip()\r\n author = article.get(\"author\", \"\").strip()\r\n content = article.get(\"content\", \"\").strip()\r\n\r\n yield id_, {\r\n \"title\": title,\r\n \"source\": source,\r\n \"date\": date,\r\n \"link\": link,\r\n \"author\": author,\r\n \"content\": content\r\n }\r\n```","What's `_URL` ?\r\n\r\nIt looks like you are downloading an archive.\r\nTherefore you may need to get to the file path using `filepath = os.path.join(datadir, \"actual_file_name_inside_the_downloaded_archive\")`","@lhoestq you were 100% right. Thank you. All fixed","@lhoestq ping!","@lhoestq added the remaining 17 batches and modified the readme.md to reflect that + resolved the camel case comment","merging since the CI is fixed on master"],"created_at":1607942109000,"updated_at":1608630664000,"closed_at":1608630664000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1564","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1564","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1564.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1564.patch"},"body":"I'm having issues in creating the dummy data. I'm still investigating how to fix it. 
I'll close the PR if I couldn't find a solution","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1564\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1563","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1563\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1563\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1563\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1563","id":766211931,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM5MzA4Mzg4","number":1563,"title":"adding tmu-gfm-dataset","user":{"login":"forest1988","id":2755894,"node_id":"MDQ6VXNlcjI3NTU4OTQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2755894?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/forest1988","html_url":"https:\/\/github.com\/forest1988","followers_url":"https:\/\/api.github.com\/users\/forest1988\/followers","following_url":"https:\/\/api.github.com\/users\/forest1988\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/forest1988\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/forest1988\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/forest1988\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/forest1988\/orgs","repos_url":"https:\/\/api.github.com\/users\/forest1988\/repos","events_url":"https:\/\/api.github.com\/users\/forest1988\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/forest1988\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq Thank you for your code review! I think I could do the necessary corrections. Could you please check it again when you have time?","Thank you for merging!"],"created_at":1607939130000,"updated_at":1608546064000,"closed_at":1608545233000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1563","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1563","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1563.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1563.patch"},"body":"Adding TMU-GFM-Dataset for Grammatical Error Correction.\r\n\r\nhttps:\/\/github.com\/tmu-nlp\/TMU-GFM-Dataset\r\n\r\nA dataset for GEC metrics with manual evaluations of grammaticality, fluency, and meaning preservation for system outputs.\r\nMore detail about the creation of the dataset can be found in [Yoshimura et al. 
(2020)](https:\/\/www.aclweb.org\/anthology\/2020.coling-main.573.pdf).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1563\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1562","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1562\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1562\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1562\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1562","id":765981749,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM5MTc5ODc3","number":1562,"title":"Add dataset COrpus of Urdu News TExt Reuse (COUNTER).","user":{"login":"arkhalid","id":14899066,"node_id":"MDQ6VXNlcjE0ODk5MDY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14899066?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/arkhalid","html_url":"https:\/\/github.com\/arkhalid","followers_url":"https:\/\/api.github.com\/users\/arkhalid\/followers","following_url":"https:\/\/api.github.com\/users\/arkhalid\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/arkhalid\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/arkhalid\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/arkhalid\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/arkhalid\/orgs","repos_url":"https:\/\/api.github.com\/users\/arkhalid\/repos","events_url":"https:\/\/api.github.com\/users\/arkhalid\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/arkhalid\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Just a small revision from simon's review: 20KB for the dummy_data.zip is fine, you can keep them this way.","Also the CI is failing because of an error `tests\/test_file_utils.py::TempSeedTest::test_tensorflow` that is not related to your dataset and is fixed on master. 
You can ignore it","merging since the Ci is fixed on master"],"created_at":1607927568000,"updated_at":1608556486000,"closed_at":1608556486000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1562","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1562","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1562.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1562.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1562\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1561","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1561\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1561\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1561\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1561","id":765831436,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM5MTAwNjAy","number":1561,"title":"Lama","user":{"login":"ontocord","id":8900094,"node_id":"MDQ6VXNlcjg5MDAwOTQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8900094?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ontocord","html_url":"https:\/\/github.com\/ontocord","followers_url":"https:\/\/api.github.com\/users\/ontocord\/followers","following_url":"https:\/\/api.github.com\/users\/ontocord\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ontocord\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ontocord\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ontocord\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ontocord\/orgs","repos_url":"https:\/\/api.github.com\/users\/ontocord\/repos","events_url":"https:\/\/api.github.com\/users\/ontocord\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ontocord\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Let me know why the pyarrow test is failing. For one of the config \"trex\", I had to load an initial datafile for a dictionary which is used to augment the rest of the datasets. In the dummy data, the dictionary file was truncated so I had to fudge that. I'm not sure if that is the issue.\r\n","@ontocord it just needs a rerun and it will be good to go.","THanks @tanmoyio. How do I do a rerun?","@ontocord contributor can\u2019t rerun it, the maintainers will rerun it, it may take lil bit of time as there are so many PRs left to be reviewed and merged ","@lhoestq not sure why it is failing. i've made all modifications. 
","merging since the CI is fixed on master"],"created_at":1607916430000,"updated_at":1609149107000,"closed_at":1609149107000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1561","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1561","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1561.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1561.patch"},"body":"This the LAMA dataset for probing facts and common sense from language models. \r\n\r\nSee https:\/\/github.com\/facebookresearch\/LAMA for more details.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1561\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1560","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1560\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1560\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1560\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1560","id":765814964,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM5MDkzMzky","number":1560,"title":"Adding the BrWaC dataset","user":{"login":"jonatasgrosman","id":5097052,"node_id":"MDQ6VXNlcjUwOTcwNTI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5097052?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jonatasgrosman","html_url":"https:\/\/github.com\/jonatasgrosman","followers_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/followers","following_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/orgs","repos_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/repos","events_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607915036000,"updated_at":1608307016000,"closed_at":1608307015000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1560","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1560","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1560.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1560.patch"},"body":"Adding the BrWaC dataset, a large corpus of Portuguese language texts","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1560\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1559","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1559\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1559\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1559\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1559","id":765714183,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM5MDQ5MTky","number":1559,"title":"adding dataset card information to CONTRIBUTING.md","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607904523000,"updated_at":1607968503000,"closed_at":1607968503000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1559","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1559","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1559.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1559.patch"},"body":"Added a documentation line and link to the full sprint guide in the \"How to add a dataset\" section, and a section on how to contribute to the dataset card of an existing dataset.\r\n\r\nAnd a thank you note at the end :hugs: ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1559\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1558","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1558\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1558\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1558\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1558","id":765707907,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM5MDQ2MzA4","number":1558,"title":"Adding Igbo NER data 
","user":{"login":"purvimisal","id":22298787,"node_id":"MDQ6VXNlcjIyMjk4Nzg3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22298787?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/purvimisal","html_url":"https:\/\/github.com\/purvimisal","followers_url":"https:\/\/api.github.com\/users\/purvimisal\/followers","following_url":"https:\/\/api.github.com\/users\/purvimisal\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/purvimisal\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/purvimisal\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/purvimisal\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/purvimisal\/orgs","repos_url":"https:\/\/api.github.com\/users\/purvimisal\/repos","events_url":"https:\/\/api.github.com\/users\/purvimisal\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/purvimisal\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for the PR @purvimisal. \r\n\r\nFew comments below. ","Hi, @lhoestq Thank you for the review. I have made all the changes. PTAL! ","the CI error is not related to your dataset, merging"],"created_at":1607903531000,"updated_at":1608561500000,"closed_at":1608561500000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1558","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1558","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1558.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1558.patch"},"body":"This PR adds the Igbo NER dataset.\r\nData: https:\/\/github.com\/IgnatiusEzeani\/IGBONLP\/tree\/master\/ig_ner ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1558\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1557","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1557\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1557\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1557\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1557","id":765693927,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM5MDM5MDY0","number":1557,"title":"HindEncorp again 
commited","user":{"login":"rahul-art","id":56379013,"node_id":"MDQ6VXNlcjU2Mzc5MDEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/56379013?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rahul-art","html_url":"https:\/\/github.com\/rahul-art","followers_url":"https:\/\/api.github.com\/users\/rahul-art\/followers","following_url":"https:\/\/api.github.com\/users\/rahul-art\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rahul-art\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rahul-art\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rahul-art\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rahul-art\/orgs","repos_url":"https:\/\/api.github.com\/users\/rahul-art\/repos","events_url":"https:\/\/api.github.com\/users\/rahul-art\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rahul-art\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Yes this has the right files!!!\r\n\r\nI'll close the previous one then :) \r\n\r\nNow to pass the tests, you will need to:\r\n- `make style` and run `flake8 datasets` from your repository root directory\r\n- fix the dummy data\r\n\r\nDid you generate the dummy data with the auto-generation tool (see the guide) or manually?","manually with the tool, it is not able to create","Cool, in that case you need to pay special attention to the directory structure given to you by the tool, most failures are because the files are in the wrong directory or at the wrong level :) \r\n\r\nAlso, make sure that the tests pass locally before pushing to the branch, it should help you get the structure right ;) ","yes I have give proper directory structure datasets\/hind_encorp\/dummy\/0.0.0\/dummy_data.zip but in my dummy_data.zip only 1 file hind_encorp.plaintext is present because the dataset I got has only 1 file with both English and Hindi languages on 1 file itself may be this is causing issue","Looks like the name of the file is the issue here: you have a file called `hindencorp05.plaintext`, but it should be called `hindencorp05.plaintext.gz%3Fsequence%3D3%26isAllowed%3Dy`. 
You just have to rename it to pass the test:\r\n```\r\ncd datasets\/hind_encorp\/dummy\/0.0.0\r\nrm -rf dummy_data\r\nunzip dummy_data.zip\r\nrm dummy_data.zip\r\nmv dummy_data\/hindencorp05.plaintext \"dummy_data\/hindencorp05.plaintext.gz%3Fsequence%3D3%26isAllowed%3Dy\"\r\nzip -r dummy_data.zip dummy_data \r\n```\r\n\r\nThen **once you pass the tests locally** you just have to remember to `make style` and `flake8 datasets` to pass the style checks, and you should be good to go :hugs: \r\n\r\nFor reference, here are the instructions given by the tool:\r\n```\r\n$ python datasets-cli dummy_data datasets\/hind_encorp\/\r\n2020-12-14 13:16:26.824828: W tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory\r\n2020-12-14 13:16:26.824846: I tensorflow\/stream_executor\/cuda\/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\nUsing custom data configuration default\r\n\r\n==============================DUMMY DATA INSTRUCTIONS==============================\r\n- In order to create the dummy data for , please go into the folder 'datasets\/hind_encorp\/dummy\/0.0.0' with `cd datasets\/hind_encorp\/dummy\/0.0.0` . \r\n\r\n- Please create the following dummy data files 'dummy_data\/hindencorp05.plaintext.gz%3Fsequence%3D3%26isAllowed%3Dy' from the folder 'datasets\/hind_encorp\/dummy\/0.0.0'\r\n\r\n- For each of the splits 'train', make sure that one or more of the dummy data files provide at least one example \r\n\r\n- If the method `_generate_examples(...)` includes multiple `open()` statements, you might have to create other files in addition to 'dummy_data\/hindencorp05.plaintext.gz%3Fsequence%3D3%26isAllowed%3Dy'. In this case please refer to the `_generate_examples(...)` method \r\n\r\n-After all dummy data files are created, they should be zipped recursively to 'dummy_data.zip' with the command `zip -r dummy_data.zip dummy_data\/` \r\n\r\n-You can now delete the folder 'dummy_data' with the command `rm -r dummy_data` \r\n\r\n- To get the folder 'dummy_data' back for further changes to the dummy data, simply unzip dummy_data.zip with the command `unzip dummy_data.zip` \r\n\r\n- Make sure you have created the file 'dummy_data.zip' in 'datasets\/hind_encorp\/dummy\/0.0.0' \r\n===================================================================================\r\n```\r\n","all test passed locally. 
my new PR #1575 ","Closing this one in favor of #1575 "],"created_at":1607900942000,"updated_at":1608028625000,"closed_at":1608028624000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1557","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1557","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1557.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1557.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1557\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1556","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1556\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1556\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1556\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1556","id":765689730,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM5MDM2OTYz","number":1556,"title":"add bswac","user":{"login":"IvanZidov","id":11391118,"node_id":"MDQ6VXNlcjExMzkxMTE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11391118?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/IvanZidov","html_url":"https:\/\/github.com\/IvanZidov","followers_url":"https:\/\/api.github.com\/users\/IvanZidov\/followers","following_url":"https:\/\/api.github.com\/users\/IvanZidov\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/IvanZidov\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/IvanZidov\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/IvanZidov\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/IvanZidov\/orgs","repos_url":"https:\/\/api.github.com\/users\/IvanZidov\/repos","events_url":"https:\/\/api.github.com\/users\/IvanZidov\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/IvanZidov\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on master"],"created_at":1607900135000,"updated_at":1608304468000,"closed_at":1608304467000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1556","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1556","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1556.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1556.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1556\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1555","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1555\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1555\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1555\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1555","id":765681607,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM5MDMzMzIw","number":1555,"title":"Added Opus TedTalks","user":{"login":"rkc007","id":22396042,"node_id":"MDQ6VXNlcjIyMzk2MDQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22396042?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rkc007","html_url":"https:\/\/github.com\/rkc007","followers_url":"https:\/\/api.github.com\/users\/rkc007\/followers","following_url":"https:\/\/api.github.com\/users\/rkc007\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rkc007\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rkc007\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rkc007\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rkc007\/orgs","repos_url":"https:\/\/api.github.com\/users\/rkc007\/repos","events_url":"https:\/\/api.github.com\/users\/rkc007\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rkc007\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq I saw some common changes you made on the other PR's (Similar Opus Datasets). I fixed those changes here. Can you please review it once ? 
\r\nThanks.","merging since the CI is fixed on master"],"created_at":1607898573000,"updated_at":1608284683000,"closed_at":1608284683000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1555","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1555","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1555.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1555.patch"},"body":"Dataset : http:\/\/opus.nlpl.eu\/TedTalks.php","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1555\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1554","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1554\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1554\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1554\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1554","id":765675148,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM5MDMwNDU2","number":1554,"title":"Opus CAPES added","user":{"login":"rkc007","id":22396042,"node_id":"MDQ6VXNlcjIyMzk2MDQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22396042?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rkc007","html_url":"https:\/\/github.com\/rkc007","followers_url":"https:\/\/api.github.com\/users\/rkc007\/followers","following_url":"https:\/\/api.github.com\/users\/rkc007\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rkc007\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rkc007\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rkc007\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rkc007\/orgs","repos_url":"https:\/\/api.github.com\/users\/rkc007\/repos","events_url":"https:\/\/api.github.com\/users\/rkc007\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rkc007\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq I saw some common changes you made on the other PR's (Similar Opus Datasets). I fixed those changes here. Can you please review it once ? 
\r\nThanks.","Hi @rkc007 , thanks for the contribution.\r\nUnfortunately, the CAPES dataset has already been added here: #1307\r\nI'm closing the PR ","@lhoestq FYI"],"created_at":1607897494000,"updated_at":1608285297000,"closed_at":1608281219000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1554","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1554","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1554.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1554.patch"},"body":"Dataset : http:\/\/opus.nlpl.eu\/CAPES.php","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1554\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1553","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1553\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1553\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1553\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1553","id":765670083,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM5MDI4MzM3","number":1553,"title":"added air_dialogue","user":{"login":"skyprince999","id":9033954,"node_id":"MDQ6VXNlcjkwMzM5NTQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9033954?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/skyprince999","html_url":"https:\/\/github.com\/skyprince999","followers_url":"https:\/\/api.github.com\/users\/skyprince999\/followers","following_url":"https:\/\/api.github.com\/users\/skyprince999\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/skyprince999\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/skyprince999\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/skyprince999\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/skyprince999\/orgs","repos_url":"https:\/\/api.github.com\/users\/skyprince999\/repos","events_url":"https:\/\/api.github.com\/users\/skyprince999\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/skyprince999\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607896742000,"updated_at":1608722440000,"closed_at":1608722439000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1553","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1553","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1553.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1553.patch"},"body":"UPDATE2 (3797ce5): Updated for multi-configs \r\n\r\nUPDATE (7018082): manually created the dummy_datasets. All tests were cleared locally. Pushed it to origin\/master\r\n\r\nDRAFT VERSION (57fdb20): (_no longer draft_)\r\nUploaded the air_dialogue database. \r\ndummy_data creation was failing in local, since the original downloaded file has some nested folders. Pushing it since the tests with real data was cleared. 
Will re-check & update via manually creating some dummy_data","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1553\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1552","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1552\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1552\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1552\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1552","id":765664411,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM5MDI2MzAx","number":1552,"title":"Added OPUS ParaCrawl","user":{"login":"rkc007","id":22396042,"node_id":"MDQ6VXNlcjIyMzk2MDQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22396042?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rkc007","html_url":"https:\/\/github.com\/rkc007","followers_url":"https:\/\/api.github.com\/users\/rkc007\/followers","following_url":"https:\/\/api.github.com\/users\/rkc007\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rkc007\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rkc007\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rkc007\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rkc007\/orgs","repos_url":"https:\/\/api.github.com\/users\/rkc007\/repos","events_url":"https:\/\/api.github.com\/users\/rkc007\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rkc007\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq I saw some common changes you made on the other PR's (Similar Opus Datasets). I fixed those changes here. Can you please review it once ? \r\nThanks.","@rkc007 @lhoestq just noticed a dataset named para_crawl has been added a long time ago: #91.","They're not exactly the same so it's ok to have both.\r\n\r\nEspecially the `para_crawl` that already exists only uses the text from the ParaCrawl release 4.","Could you regenerate the dataset_infos.json @rkc007 please ?\r\nIt looks like it has some issues due to the dataset class name change","@SBrandeis Thank you for suggesting changes. I made the changes you suggested. \r\n\r\n@lhoestq I generated `dataset_infos.json` again. I ran both tests(Dummy & Real data) and it passed. 
Can you please review it again?","merging since the CI is fixed on master"],"created_at":1607895869000,"updated_at":1608544226000,"closed_at":1608544225000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1552","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1552","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1552.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1552.patch"},"body":"Dataset : http:\/\/opus.nlpl.eu\/ParaCrawl.php","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1552\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1551","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1551\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1551\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1551\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1551","id":765621879,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM5MDEwNDAy","number":1551,"title":"Monero","user":{"login":"iliemihai","id":2815308,"node_id":"MDQ6VXNlcjI4MTUzMDg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2815308?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/iliemihai","html_url":"https:\/\/github.com\/iliemihai","followers_url":"https:\/\/api.github.com\/users\/iliemihai\/followers","following_url":"https:\/\/api.github.com\/users\/iliemihai\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/iliemihai\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/iliemihai\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/iliemihai\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/iliemihai\/orgs","repos_url":"https:\/\/api.github.com\/users\/iliemihai\/repos","events_url":"https:\/\/api.github.com\/users\/iliemihai\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/iliemihai\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @iliemihai - you need to add the Readme file! Otherwise seems good. \r\nAlso don't forget to run `make style` & `flake8 datasets` locally, from the datasets folder","@skyprince999 I will add the README.d for it. 
Thank you :D "],"created_at":1607889408000,"updated_at":1608302533000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1551","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1551","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1551.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1551.patch"},"body":"Biomedical Romanian dataset :)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1551\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1550","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1550\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1550\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1550\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1550","id":765620925,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM5MDEwMDY1","number":1550,"title":"Add offensive langauge dravidian dataset","user":{"login":"jamespaultg","id":7421838,"node_id":"MDQ6VXNlcjc0MjE4Mzg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7421838?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jamespaultg","html_url":"https:\/\/github.com\/jamespaultg","followers_url":"https:\/\/api.github.com\/users\/jamespaultg\/followers","following_url":"https:\/\/api.github.com\/users\/jamespaultg\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jamespaultg\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jamespaultg\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jamespaultg\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jamespaultg\/orgs","repos_url":"https:\/\/api.github.com\/users\/jamespaultg\/repos","events_url":"https:\/\/api.github.com\/users\/jamespaultg\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jamespaultg\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks much!"],"created_at":1607889259000,"updated_at":1608306769000,"closed_at":1608301530000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1550","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1550","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1550.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1550.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1550\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1549","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1549\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1549\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1549\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1549","id":765612905,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM5MDA3MTU4","number":1549,"title":"Generics kb new branch","user":{"login":"bpatidar","id":12439573,"node_id":"MDQ6VXNlcjEyNDM5NTcz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12439573?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bpatidar","html_url":"https:\/\/github.com\/bpatidar","followers_url":"https:\/\/api.github.com\/users\/bpatidar\/followers","following_url":"https:\/\/api.github.com\/users\/bpatidar\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bpatidar\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bpatidar\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bpatidar\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bpatidar\/orgs","repos_url":"https:\/\/api.github.com\/users\/bpatidar\/repos","events_url":"https:\/\/api.github.com\/users\/bpatidar\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bpatidar\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607887990000,"updated_at":1608558909000,"closed_at":1608558909000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1549","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1549","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1549.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1549.patch"},"body":"Datasets need manual downloads. Have thus created dummy data as well. But pytest on real and dummy data are failing.\r\nI have completed the readme , tags and other required things. 
I need to create the metadata json once tests get successful.\r\nOpening a PR while working with Yacine Jernite to resolve my pytest issues.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1549\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1548","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1548\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1548\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1548\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1548","id":765592336,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM5MDAwMjIy","number":1548,"title":"Fix `\ud83e\udd17Datasets` - `tfds` differences link + a few aesthetics","user":{"login":"VIVelev","id":22171622,"node_id":"MDQ6VXNlcjIyMTcxNjIy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22171622?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/VIVelev","html_url":"https:\/\/github.com\/VIVelev","followers_url":"https:\/\/api.github.com\/users\/VIVelev\/followers","following_url":"https:\/\/api.github.com\/users\/VIVelev\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/VIVelev\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/VIVelev\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/VIVelev\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/VIVelev\/orgs","repos_url":"https:\/\/api.github.com\/users\/VIVelev\/repos","events_url":"https:\/\/api.github.com\/users\/VIVelev\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/VIVelev\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607885301000,"updated_at":1608036927000,"closed_at":1608036927000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1548","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1548","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1548.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1548.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1548\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1547","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1547\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1547\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1547\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1547","id":765562792,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4OTkwOTMy","number":1547,"title":"Adding PolEval2019 Machine Translation Task 
dataset","user":{"login":"vrindaprabhu","id":16264631,"node_id":"MDQ6VXNlcjE2MjY0NjMx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16264631?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vrindaprabhu","html_url":"https:\/\/github.com\/vrindaprabhu","followers_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/followers","following_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/orgs","repos_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/repos","events_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["**NOTE:**\r\n\r\n- Train and Dev: Manually downloaded (auto download is repeatedly giving `ConnectionError` for one of the files), Test: Auto Download\r\n- Dummy test is passing\r\n- The json file has been created with hard-coded paths for the manual downloads _(hardcoding has been removed from the final uploaded script)_\r\n- datasets-cli is still **failing** . It is not picking the right directory for the config. For instance, my folder structure is as below:\r\n ```\r\n ~\/Downloads\/Data\/\r\n |--- English-to-Polish\r\n |--- (corresponding files) \r\n |--- Russian-Polish\r\n |--- (corresponding files) \r\n```\r\n\r\nWhen ru-pl is selected, ideally it has to search in Russian-Polish folder, but it is searching in '\/Downloads\/Data\/' folder and hence getting a FileNotFound error.\r\n\r\nThe command run is \r\n`python datasets-cli test datasets\/poleval2019_mt\/ --save_infos --all_configs --data_dir ~\/Downloads\/Data\/\r\n`\r\n","Hi !\r\nThanks for the changes :)\r\n\r\nThe only error left is the dummy data. Since we changed for standard downloads instead of manual downloads its structure changed. Fortunately you can auto-generate the dummy data with this command:\r\n\r\n```\r\ndatasets-cli dummy_data .\/datasets\/poleval2019_mt --auto_generate --match_text_files \"*\"\r\n```\r\n\r\nCan you regenerate the dummy data using this command please ?","Thank you for the help @lhoestq !! I was generating the dummy dataset in a wrong way! That _--match_text_files \"*\"_ did the trick! Now all the tests have passed! :-)"],"created_at":1607881803000,"updated_at":1613453276000,"closed_at":1608567201000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1547","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1547","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1547.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1547.patch"},"body":"Facing an error with pytest in training. 
Dummy data is passing.\r\nREADME has to be updated.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1547\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1546","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1546\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1546\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1546\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1546","id":765559923,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4OTkwMjgw","number":1546,"title":"Add persian ner dataset","user":{"login":"KMFODA","id":35491698,"node_id":"MDQ6VXNlcjM1NDkxNjk4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35491698?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/KMFODA","html_url":"https:\/\/github.com\/KMFODA","followers_url":"https:\/\/api.github.com\/users\/KMFODA\/followers","following_url":"https:\/\/api.github.com\/users\/KMFODA\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/KMFODA\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/KMFODA\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/KMFODA\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/KMFODA\/orgs","repos_url":"https:\/\/api.github.com\/users\/KMFODA\/repos","events_url":"https:\/\/api.github.com\/users\/KMFODA\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/KMFODA\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["HI @SBrandeis. Thanks for all the comments - very helpful. I realised that the tests had failed and had been trying to figure out what was causing them to do so. All the tests pass when I run the load_real_dataset test however when I run `RUN_SLOW=1 pytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_persian_ner` I get the below error. One thing to note is that the automated dummy data file generation failed when I tried to run it so I manually created the dummy data and ensured that the last line in the file was an empty line as per your comments. 
Would appreciate your thoughts on what might be causing this:\r\n\r\n```\r\n__________________________________________________ LocalDatasetTest.test_load_dataset_all_configs_persian_ner __________________________________________________\r\n\r\nself = <tests.test_dataset_common.LocalDatasetTest testMethod=test_load_dataset_all_configs_persian_ner>, dataset_name = 'persian_ner'\r\n\r\n @slow\r\n def test_load_dataset_all_configs(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True)\r\n\r\ntests\/test_dataset_common.py:237: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests\/test_dataset_common.py:198: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n--------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------\r\nDownloading and preparing dataset persian_ner\/fold1 (download: 1.00 MiB, generated: 1.00 MiB, post-processed: Unknown size, total: 2.00 MiB) to \/var\/folders\/nk\/yp5_m5c95cnc0cm_vbd7h7g80000gn\/T\/tmpzh495aac\/persian_ner\/fold1\/1.1.0...\r\nDataset persian_ner downloaded and prepared to \/var\/folders\/nk\/yp5_m5c95cnc0cm_vbd7h7g80000gn\/T\/tmpzh495aac\/persian_ner\/fold1\/1.1.0. Subsequent calls will reuse this data.\r\n--------------------------------------------------------------------- Captured stderr call ---------------------------------------------------------------------\r\n \r\n======================================================================= warnings summary =======================================================================\r\nenv\/lib\/python3.7\/site-packages\/tensorflow\/python\/autograph\/utils\/testing.py:21\r\n \/Users\/karimfoda\/Documents\/STUDIES\/PYTHON\/DATASETS\/env\/lib\/python3.7\/site-packages\/tensorflow\/python\/autograph\/utils\/testing.py:21: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses\r\n import imp\r\n\r\nenv\/lib\/python3.7\/site-packages\/apache_beam\/typehints\/typehints.py:693\r\n \/Users\/karimfoda\/Documents\/STUDIES\/PYTHON\/DATASETS\/env\/lib\/python3.7\/site-packages\/apache_beam\/typehints\/typehints.py:693: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working\r\n if not isinstance(type_params, collections.Iterable):\r\n\r\nenv\/lib\/python3.7\/site-packages\/apache_beam\/typehints\/typehints.py:532\r\n \/Users\/karimfoda\/Documents\/STUDIES\/PYTHON\/DATASETS\/env\/lib\/python3.7\/site-packages\/apache_beam\/typehints\/typehints.py:532: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working\r\n if not isinstance(type_params, (collections.Sequence, set)):\r\n\r\nenv\/lib\/python3.7\/site-packages\/elasticsearch\/compat.py:38\r\n \/Users\/karimfoda\/Documents\/STUDIES\/PYTHON\/DATASETS\/env\/lib\/python3.7\/site-packages\/elasticsearch\/compat.py:38: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working\r\n from collections import Mapping\r\n\r\n-- Docs: 
https:\/\/docs.pytest.org\/en\/stable\/warnings.html\r\n=================================================================== short test summary info ====================================================================\r\nFAILED tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_persian_ner - AssertionError: False is not true\r\n```","Thanks @SBrandeis. It turns out the error was because I had to manually increase the n_lines variable to get the dummy data generation to cover at least one example. Should all be working okay now.","Great, thanks!\r\nIt looks good to me, I'll let @lhoestq take over"],"created_at":1607881548000,"updated_at":1608717183000,"closed_at":1608717183000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1546","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1546","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1546.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1546.patch"},"body":"Adding the following dataset:\r\n\r\nhttps:\/\/github.com\/HaniehP\/PersianNER\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1546\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1545","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1545\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1545\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1545\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1545","id":765550283,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4OTg3OTY0","number":1545,"title":"add hrwac","user":{"login":"IvanZidov","id":11391118,"node_id":"MDQ6VXNlcjExMzkxMTE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11391118?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/IvanZidov","html_url":"https:\/\/github.com\/IvanZidov","followers_url":"https:\/\/api.github.com\/users\/IvanZidov\/followers","following_url":"https:\/\/api.github.com\/users\/IvanZidov\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/IvanZidov\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/IvanZidov\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/IvanZidov\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/IvanZidov\/orgs","repos_url":"https:\/\/api.github.com\/users\/IvanZidov\/repos","events_url":"https:\/\/api.github.com\/users\/IvanZidov\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/IvanZidov\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on 
master"],"created_at":1607880714000,"updated_at":1608298517000,"closed_at":1608298517000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1545","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1545","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1545.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1545.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1545\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1544","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1544\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1544\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1544\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1544","id":765514828,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4OTc5MjIz","number":1544,"title":"Added Wiki Summary Dataset","user":{"login":"tanmoyio","id":33005287,"node_id":"MDQ6VXNlcjMzMDA1Mjg3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33005287?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tanmoyio","html_url":"https:\/\/github.com\/tanmoyio","followers_url":"https:\/\/api.github.com\/users\/tanmoyio\/followers","following_url":"https:\/\/api.github.com\/users\/tanmoyio\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tanmoyio\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tanmoyio\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tanmoyio\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tanmoyio\/orgs","repos_url":"https:\/\/api.github.com\/users\/tanmoyio\/repos","events_url":"https:\/\/api.github.com\/users\/tanmoyio\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tanmoyio\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq why my tests are not running?","Maybe an issue with CircleCI, let me try to make them run","The CI error `tests\/test_file_utils.py::TempSeedTest::test_tensorflow` is not related to this dataset and is fixed on master, you can ignore it","what I need to do now","Now the delimiter of the csv reader is fixed, thanks :) \r\n\r\nI just added a comment suggesting to try using actual URLS instead of a manual download if possible.\r\nThis would make things more convenient for the users. 
Can you try using the `dl_manager` to download the train\/dev\/test csv files instead of requiring manual download ?","Also pinging @m3hrdadfi , since I just noticed that there's already a dataset script that was created 3 weeks ago for this dataset here: https:\/\/github.com\/m3hrdadfi\/wiki-summary\/tree\/master\/datasets\/wiki_summary_persian","@lhoestq I am getting this error while generating the dummy data\r\n![Screenshot (181)](https:\/\/user-images.githubusercontent.com\/33005287\/102628819-50a40080-4170-11eb-9e96-efce74b45ff4.png)\r\n","Can you try by adding the flag `--match_text_files \"*\"` ?","now it worked","@lhoestq pytest on dummy data passed, but on real data raising this issue\r\n![Screenshot (196)](https:\/\/user-images.githubusercontent.com\/33005287\/102630784-fa848c80-4172-11eb-9f7e-e5a58dcf7abe.png)\r\nhow to resolve it\r\n","I see ! This is because the library did some verification to make sure it downloads the same files as in the first time you ran the `datasets-cli test` command with `--save_infos`. Since we're now downloading files, the verification fails. \r\n\r\nTo fix that you just need to regenerate the dataset_infos.json file:\r\n```\r\ndatasets-cli test .\/datasets\/wiki_summary --save_infos --all_configs --ignore_verifications\r\n```","@lhoestq I have modified everything and It worked fine, dont know why it is not passing the tests ","Awesome thank you !\r\n\r\nThe CI error `tests\/test_file_utils.py::TempSeedTest::test_tensorflow` is not related to your dataset and is fixed on master.\r\nYou can ignore it :) ","@lhoestq anything left to do ?","The dataset script is all good now ! The dummy data and the dataset_infos.json file are good too :) ","@lhoestq yay, thanks for helping me out , ","merging since the CI is fixed on master","@tanmoyio @lhoestq \r\n\r\nThank you both!"],"created_at":1607877226000,"updated_at":1608308406000,"closed_at":1608308238000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1544","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1544","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1544.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1544.patch"},"body":"Wiki Summary: Dataset extracted from Persian Wikipedia into the form of articles and highlights.\r\nLink: https:\/\/github.com\/m3hrdadfi\/wiki-summary","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1544\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1543","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1543\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1543\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1543\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1543","id":765476196,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4OTcwOTU5","number":1543,"title":"adding 
HindEncorp","user":{"login":"rahul-art","id":56379013,"node_id":"MDQ6VXNlcjU2Mzc5MDEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/56379013?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rahul-art","html_url":"https:\/\/github.com\/rahul-art","followers_url":"https:\/\/api.github.com\/users\/rahul-art\/followers","following_url":"https:\/\/api.github.com\/users\/rahul-art\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rahul-art\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rahul-art\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rahul-art\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rahul-art\/orgs","repos_url":"https:\/\/api.github.com\/users\/rahul-art\/repos","events_url":"https:\/\/api.github.com\/users\/rahul-art\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rahul-art\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq I have created a new PR by reforking and creating a new branch ","@rahul-art unfortunately this didn't quite work, here's how you can try again:\r\n- `git checkout master` to go back to the main branch\r\n- `git pull upstream master` to make it up to date\r\n- `git checkout -b add_hind_encorp` to create a new branch\r\n\r\nThen add the dataset script, `README.md`, `dummy_data.zip`, and `dataset_infos.json` to the tracked files for the branch with `git add` (please add all of these files individually, NOT the whole directory as we don't want the other data files)\r\nThen after you have passed the style checks and the local tests, do:\r\n- `git commit . -m initial_commit`\r\n- `git push --set-upstream origin add_hind_encorp`\r\n\r\nThen you can go to this branch on the WebApp and open a new PR","@yjernite #1557 created new PR"],"created_at":1607873947000,"updated_at":1607902553000,"closed_at":1607902553000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1543","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1543","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1543.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1543.patch"},"body":"adding Hindi Wikipedia corpus","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1543\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1542","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1542\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1542\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1542\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1542","id":765439746,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4OTYyMjAx","number":1542,"title":"fix typo 
readme","user":{"login":"clmnt","id":821155,"node_id":"MDQ6VXNlcjgyMTE1NQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/821155?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/clmnt","html_url":"https:\/\/github.com\/clmnt","followers_url":"https:\/\/api.github.com\/users\/clmnt\/followers","following_url":"https:\/\/api.github.com\/users\/clmnt\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/clmnt\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/clmnt\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/clmnt\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/clmnt\/orgs","repos_url":"https:\/\/api.github.com\/users\/clmnt\/repos","events_url":"https:\/\/api.github.com\/users\/clmnt\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/clmnt\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607870482000,"updated_at":1607879801000,"closed_at":1607879800000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1542","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1542","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1542.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1542.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1542\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1541","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1541\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1541\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1541\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1541","id":765430586,"node_id":"MDU6SXNzdWU3NjU0MzA1ODY=","number":1541,"title":"connection issue while downloading data","user":{"login":"rabeehkarimimahabadi","id":73364383,"node_id":"MDQ6VXNlcjczMzY0Mzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/73364383?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi","html_url":"https:\/\/github.com\/rabeehkarimimahabadi","followers_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/followers","following_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/orgs","repos_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/repos","events_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["could you tell me how I can avoid download, 
by pre-downloading the data first, put them in a folder so the code does not try to redownload? could you tell me the path to put the downloaded data, and how to do it? thanks\r\n@lhoestq ","Does your instance have an internet connection ?\r\n\r\nIf you don't have an internet connection you'll need to have the dataset on the instance disk.\r\nTo do so first download the dataset on another machine using `load_dataset` and then you can save it in a folder using `my_dataset.save_to_disk(\"path\/to\/folder\")`. Once the folder is copied on your instance you can reload the dataset with `datasets.load_from_disk(\"path\/to\/folder\")`"],"created_at":1607869620000,"updated_at":1608028471000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI am running my codes on google cloud, and I am getting this error resulting in the failure of the codes when trying to download the data, could you assist me to solve this? also as a temporary solution, could you tell me how I can increase the number of retries and timeout to at least let the models run for now. thanks \r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"finetune_t5_trainer.py\", line 361, in <module>\r\n main()\r\n File \"finetune_t5_trainer.py\", line 269, in main\r\n add_prefix=False if training_args.train_adapters else True)\r\n File \"\/workdir\/seq2seq\/data\/tasks.py\", line 70, in get_dataset\r\n dataset = self.load_dataset(split=split)\r\n File \"\/workdir\/seq2seq\/data\/tasks.py\", line 306, in load_dataset\r\n return datasets.load_dataset('glue', 'cola', split=split)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/load.py\", line 589, in load_dataset\r\n path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/load.py\", line 263, in prepare_module\r\n head_hf_s3(path, filename=name, dataset=dataset)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/utils\/file_utils.py\", line 200, in head_hf_s3\r\n return http_head(hf_bucket_url(identifier=identifier, filename=filename, use_cdn=use_cdn, dataset=dataset))\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/utils\/file_utils.py\", line 403, in http_head\r\n url, proxies=proxies, headers=headers, cookies=cookies, allow_redirects=allow_redirects, timeout=timeout\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/requests\/api.py\", line 104, in head\r\n return request('head', url, **kwargs)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/requests\/api.py\", line 61, in request\r\n return session.request(method=method, url=url, **kwargs)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/requests\/sessions.py\", line 542, in request\r\n resp = self.send(prep, **send_kwargs)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/requests\/sessions.py\", line 655, in send\r\n r = adapter.send(request, **kwargs)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/requests\/adapters.py\", line 504, in send\r\n raise ConnectTimeout(e, request=request)\r\nrequests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: \/datasets.huggingface.co\/datasets\/datasets\/glue\/glue.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f47db511e80>, 'Connection to s3.amazonaws.com timed out. 
(connect timeout=10)'))\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1541\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1540","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1540\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1540\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1540\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1540","id":765357702,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4OTQ1NDc2","number":1540,"title":"added TTC4900: A Benchmark Data for Turkish Text Categorization dataset","user":{"login":"yavuzKomecoglu","id":5150963,"node_id":"MDQ6VXNlcjUxNTA5NjM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5150963?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yavuzKomecoglu","html_url":"https:\/\/github.com\/yavuzKomecoglu","followers_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/followers","following_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/orgs","repos_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/repos","events_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq, can you help with creating dummy_data?\r\n","Hi @yavuzKomecoglu did you manage to build the dummy data ?","> Hi @yavuzKomecoglu did you manage to build the dummy data ?\r\n\r\nHi, sorry for the return. I've created dummy_data.zip manually.","> Nice thank you !\r\n> \r\n> Before we merge can you fill the two sections of the dataset card I suggested ?\r\n> And also remove one remaining print statement\r\n\r\nI updated your suggestions. 
Thank you very much for your support.","I think you accidentally pushed the readme of another dataset (name_to_nation).\r\nI removed it so you have to `git pull`\r\n\r\nBecause of that I guess your changes about the ttc4900 was not included.\r\nFeel free to ping me once they're added\r\n\r\n\r\n","> I think you accidentally pushed the readme of another dataset (name_to_nation).\r\n> I removed it so you have to `git pull`\r\n> \r\n> Because of that I guess your changes about the ttc4900 was not included.\r\n> Feel free to ping me once they're added\r\n\r\nI did `git pull` and updated readme **ttc4900**.","merging since the Ci is fixed on master"],"created_at":1607863413000,"updated_at":1608286141000,"closed_at":1608286141000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1540","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1540","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1540.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1540.patch"},"body":"This PR adds the TTC4900 dataset which is a Turkish Text Categorization dataset by me and @basakbuluz. \r\n\r\nHomepage: [https:\/\/www.kaggle.com\/savasy\/ttc4900](https:\/\/www.kaggle.com\/savasy\/ttc4900)\r\nPoint of Contact: [Sava\u015f Y\u0131ld\u0131r\u0131m](mailto:savasy@gmail.com) \/ @savasy\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1540\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1539","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1539\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1539\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1539\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1539","id":765338910,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4OTQyMTU4","number":1539,"title":"Added Wiki Asp dataset","user":{"login":"katnoria","id":7674948,"node_id":"MDQ6VXNlcjc2NzQ5NDg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7674948?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/katnoria","html_url":"https:\/\/github.com\/katnoria","followers_url":"https:\/\/api.github.com\/users\/katnoria\/followers","following_url":"https:\/\/api.github.com\/users\/katnoria\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/katnoria\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/katnoria\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/katnoria\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/katnoria\/orgs","repos_url":"https:\/\/api.github.com\/users\/katnoria\/repos","events_url":"https:\/\/api.github.com\/users\/katnoria\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/katnoria\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> Awesome thank you !\r\n> \r\n> I just left one comment.\r\n> \r\n> Also it looks like the dummy_data.zip files are quite big (around 500KB each)\r\n> Can you try to reduce their sizes please ? 
Ideally they should be <20KB each\r\n> \r\n> To do so feel free to take a look inside them and in the jsonl files only keep 1 or 2 samples instead of 5 and also remove big chunks of text to only keep a few passages.\r\n\r\nThanks, I have updated the dummy data to keep each domain <20\/30KB.","> > Awesome thank you !\r\n> > I just left one comment.\r\n> > Also it looks like the dummy_data.zip files are quite big (around 500KB each)\r\n> > Can you try to reduce their sizes please ? Ideally they should be <20KB each\r\n> > To do so feel free to take a look inside them and in the jsonl files only keep 1 or 2 samples instead of 5 and also remove big chunks of text to only keep a few passages.\r\n> \r\n> Thanks, I have updated the dummy data to keep each domain <20\/30KB.\r\n\r\nLooks like this branch has other commits. I will open a new PR with suggested changes.","opened a new PR #1612 "],"created_at":1607861914000,"updated_at":1608632161000,"closed_at":1608632161000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1539","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1539","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1539.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1539.patch"},"body":"Hello,\r\n\r\nI have added Wiki Asp dataset. Please review the PR.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1539\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1538","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1538\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1538\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1538\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1538","id":765139739,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4ODkxOTE3","number":1538,"title":"tweets_hate_speech_detection","user":{"login":"darshan-gandhi","id":44197177,"node_id":"MDQ6VXNlcjQ0MTk3MTc3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/44197177?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/darshan-gandhi","html_url":"https:\/\/github.com\/darshan-gandhi","followers_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/followers","following_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/orgs","repos_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/repos","events_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq I have added this new dataset for tweet's hate speech detection. \r\n\r\nPlease if u could review it. 
\r\n\r\nThank you","Hi @darshan-gandhi have you add a chance to take a look at my suggestions ?\r\n\r\nFeel free to ping me when you're ready for the final review","Closing in favor of #1607"],"created_at":1607845073000,"updated_at":1608566068000,"closed_at":1608566067000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1538","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1538","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1538.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1538.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1538\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1537","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1537\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1537\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1537\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1537","id":765095210,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4ODY1NzIz","number":1537,"title":"added ohsumed ","user":{"login":"skyprince999","id":9033954,"node_id":"MDQ6VXNlcjkwMzM5NTQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9033954?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/skyprince999","html_url":"https:\/\/github.com\/skyprince999","followers_url":"https:\/\/api.github.com\/users\/skyprince999\/followers","following_url":"https:\/\/api.github.com\/users\/skyprince999\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/skyprince999\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/skyprince999\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/skyprince999\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/skyprince999\/orgs","repos_url":"https:\/\/api.github.com\/users\/skyprince999\/repos","events_url":"https:\/\/api.github.com\/users\/skyprince999\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/skyprince999\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607842703000,"updated_at":1608229696000,"closed_at":1608229696000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1537","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1537","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1537.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1537.patch"},"body":"UPDATE2: PR passed all tests. Now waiting for review.\r\n\r\nUPDATE: pushed a new version. cross fingers that it should complete all the tests! :) \r\n If it passes all tests then it's not a draft version. 
\r\n\r\nThis is a draft version ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1537\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1536","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1536\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1536\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1536\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1536","id":765043121,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4ODM2MDM3","number":1536,"title":"Add Hippocorpus Dataset","user":{"login":"manandey","id":6687858,"node_id":"MDQ6VXNlcjY2ODc4NTg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6687858?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/manandey","html_url":"https:\/\/github.com\/manandey","followers_url":"https:\/\/api.github.com\/users\/manandey\/followers","following_url":"https:\/\/api.github.com\/users\/manandey\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/manandey\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/manandey\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/manandey\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/manandey\/orgs","repos_url":"https:\/\/api.github.com\/users\/manandey\/repos","events_url":"https:\/\/api.github.com\/users\/manandey\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/manandey\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> Before we merge can you try to reduce the size of the dummy_data.zip file ?\r\n> \r\n> To do so feel free to only keep a few lines of the csv files ans also remove unnecessary chunks of texts (for example keep only the first sentences of a story).\r\n\r\nHi @lhoestq, I have reduced the size of the dummy_data.zip file by making the necessary changes you had suggested. 
","merging since the CI is fixed on master"],"created_at":1607839982000,"updated_at":1608039677000,"closed_at":1608039611000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1536","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1536","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1536.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1536.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1536\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1535","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1535\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1535\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1535\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1535","id":764977542,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4ODAwMDUw","number":1535,"title":"Adding Igbo monolingual dataset","user":{"login":"purvimisal","id":22298787,"node_id":"MDQ6VXNlcjIyMjk4Nzg3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22298787?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/purvimisal","html_url":"https:\/\/github.com\/purvimisal","followers_url":"https:\/\/api.github.com\/users\/purvimisal\/followers","following_url":"https:\/\/api.github.com\/users\/purvimisal\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/purvimisal\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/purvimisal\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/purvimisal\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/purvimisal\/orgs","repos_url":"https:\/\/api.github.com\/users\/purvimisal\/repos","events_url":"https:\/\/api.github.com\/users\/purvimisal\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/purvimisal\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq Thank you for the review. I have made all the changes you mentioned. PTAL! 
"],"created_at":1607836597000,"updated_at":1608561589000,"closed_at":1608561589000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1535","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1535","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1535.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1535.patch"},"body":"This PR adds the Igbo Monolingual dataset.\r\nData: https:\/\/github.com\/IgnatiusEzeani\/IGBONLP\/tree\/master\/ig_monoling\r\nPaper: https:\/\/arxiv.org\/abs\/2004.00648 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1535\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1534","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1534\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1534\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1534\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1534","id":764934681,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4Nzc1Njky","number":1534,"title":"adding dataset for diplomacy detection","user":{"login":"MisbahKhan789","id":15351802,"node_id":"MDQ6VXNlcjE1MzUxODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15351802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/MisbahKhan789","html_url":"https:\/\/github.com\/MisbahKhan789","followers_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/followers","following_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/orgs","repos_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/repos","events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Requested changes made and new PR submitted here: https:\/\/github.com\/huggingface\/datasets\/pull\/1580 "],"created_at":1607834323000,"updated_at":1608061972000,"closed_at":1608061945000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1534","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1534","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1534.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1534.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1534\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1533","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1533\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1533\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1533\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1533","id":764835913,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4NzE4MDAz","number":1533,"title":"add id_panl_bppt, a parallel corpus for en-id","user":{"login":"cahya-wirawan","id":7669893,"node_id":"MDQ6VXNlcjc2Njk4OTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7669893?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cahya-wirawan","html_url":"https:\/\/github.com\/cahya-wirawan","followers_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/followers","following_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/orgs","repos_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/repos","events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq, thanks for the review. 
I will have a look and update it accordingly.","Strange error message :-)\r\n\r\n```\r\n> tf_context = tf.python.context.context() # eager mode context\r\nE AttributeError: module 'tensorflow' has no attribute 'python'\r\n```\r\n"],"created_at":1607829087000,"updated_at":1608547236000,"closed_at":1608547236000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1533","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1533","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1533.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1533.patch"},"body":"Parallel Text Corpora for English - Indonesian","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1533\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1532","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1532\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1532\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1532\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1532","id":764772184,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4NjgxODcz","number":1532,"title":"adding hate-speech-and-offensive-language","user":{"login":"MisbahKhan789","id":15351802,"node_id":"MDQ6VXNlcjE1MzUxODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15351802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/MisbahKhan789","html_url":"https:\/\/github.com\/MisbahKhan789","followers_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/followers","following_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/orgs","repos_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/repos","events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["made suggested changes and a new PR created here : https:\/\/github.com\/huggingface\/datasets\/pull\/1597"],"created_at":1607825791000,"updated_at":1608230214000,"closed_at":1608228605000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1532","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1532","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1532.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1532.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1532\/timeline","performed_via_github_app":null,"is_pull_request":true} 
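The connection-timeout report above (issue 1541, `glue`/`cola` failing to download on a Google Cloud instance) is answered with a save-to-disk round trip: download once on a machine with internet access, copy the folder, then reload offline. A minimal sketch of that workflow, with an example path:

```python
from datasets import load_dataset, load_from_disk

# On a machine with internet access: download once and serialize to disk.
cola = load_dataset("glue", "cola")        # the dataset from the traceback above
cola.save_to_disk("/tmp/glue_cola")        # example path; copy this folder to the offline instance

# On the offline instance: reload without any network calls.
cola = load_from_disk("/tmp/glue_cola")
print(cola["train"][0])
```

`save_to_disk` / `load_from_disk` operate on the whole `DatasetDict`, so all splits (train/validation/test) travel together in the copied folder.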
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1531","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1531\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1531\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1531\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1531","id":764752882,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4NjcwNzcz","number":1531,"title":"adding hate-speech-and-offensive-language","user":{"login":"MisbahKhan789","id":15351802,"node_id":"MDQ6VXNlcjE1MzUxODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15351802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/MisbahKhan789","html_url":"https:\/\/github.com\/MisbahKhan789","followers_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/followers","following_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/orgs","repos_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/repos","events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607824747000,"updated_at":1607825822000,"closed_at":1607825822000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1531","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1531","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1531.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1531.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1531\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1530","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1530\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1530\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1530\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1530","id":764749507,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4NjY4ODI3","number":1530,"title":"add indonlu benchmark 
datasets","user":{"login":"yasirabd","id":6518504,"node_id":"MDQ6VXNlcjY1MTg1MDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6518504?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yasirabd","html_url":"https:\/\/github.com\/yasirabd","followers_url":"https:\/\/api.github.com\/users\/yasirabd\/followers","following_url":"https:\/\/api.github.com\/users\/yasirabd\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yasirabd\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yasirabd\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yasirabd\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yasirabd\/orgs","repos_url":"https:\/\/api.github.com\/users\/yasirabd\/repos","events_url":"https:\/\/api.github.com\/users\/yasirabd\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yasirabd\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607824569000,"updated_at":1608117103000,"closed_at":1608117103000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1530","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1530","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1530.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1530.patch"},"body":"The IndoNLU benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems for the Indonesian language. There are 12 datasets in IndoNLU.\r\n\r\nThis is a new clean PR from [#1322](https:\/\/github.com\/huggingface\/datasets\/pull\/1322)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1530\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1529","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1529\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1529\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1529\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1529","id":764748410,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4NjY4MjU4","number":1529,"title":"Ro 
sent","user":{"login":"iliemihai","id":2815308,"node_id":"MDQ6VXNlcjI4MTUzMDg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2815308?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/iliemihai","html_url":"https:\/\/github.com\/iliemihai","followers_url":"https:\/\/api.github.com\/users\/iliemihai\/followers","following_url":"https:\/\/api.github.com\/users\/iliemihai\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/iliemihai\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/iliemihai\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/iliemihai\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/iliemihai\/orgs","repos_url":"https:\/\/api.github.com\/users\/iliemihai\/repos","events_url":"https:\/\/api.github.com\/users\/iliemihai\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/iliemihai\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @iliemihai, it looks like this PR holds changes from your previous PR #1493 .\r\nWould you mind removing them from the branch please ?","@SBrandeis I am sorry. Yes I will remove them. Thank you :D ","Hi @lhoestq @SBrandeis @iliemihai\r\n\r\nIs this still in progress or can I take over this one?\r\n\r\nThanks,\r\nGunjan","Hi,\r\nWhile trying to add this dataset, I found some potential issues. \r\nThe homepage mentioned is : https:\/\/github.com\/katakonst\/sentiment-analysis-tensorflow\/tree\/master\/datasets\/ro\/, where the dataset is different from the URLs: https:\/\/raw.githubusercontent.com\/dumitrescustefan\/Romanian-Transformers\/examples\/examples\/sentiment_analysis\/ro\/train.csv. It is unclear which dataset is \"correct\". 
I checked the total examples (train+test) in both places and they do not match.","We should use the data from dumitrescustefan and set the homepage to his repo IMO, since he's first author of the paper of the dataset.","Hi @lhoestq,\r\n\r\nCool, I'll get working on it.\r\n\r\nThanks","Hi @lhoestq, \r\n\r\nThis PR can be closed.","Closing in favor of #2011 \r\nThanks again for adding it !"],"created_at":1607824502000,"updated_at":1616149963000,"closed_at":1616149962000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1529","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1529","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1529.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1529.patch"},"body":"Movies reviews dataset for Romanian language.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1529\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1528","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1528\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1528\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1528\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1528","id":764724035,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4NjU0ODU0","number":1528,"title":"initial commit for Common Crawl Domain Names","user":{"login":"Karthik-Bhaskar","id":13200370,"node_id":"MDQ6VXNlcjEzMjAwMzcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13200370?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar","html_url":"https:\/\/github.com\/Karthik-Bhaskar","followers_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/followers","following_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/orgs","repos_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/repos","events_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thank you :)"],"created_at":1607823169000,"updated_at":1608299498000,"closed_at":1608286952000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1528","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1528","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1528.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1528.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1528\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1527","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1527\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1527\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1527\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1527","id":764638504,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4NjA3MjQw","number":1527,"title":"Add : Conv AI 2 (Messed up original PR)","user":{"login":"rkc007","id":22396042,"node_id":"MDQ6VXNlcjIyMzk2MDQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22396042?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rkc007","html_url":"https:\/\/github.com\/rkc007","followers_url":"https:\/\/api.github.com\/users\/rkc007\/followers","following_url":"https:\/\/api.github.com\/users\/rkc007\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rkc007\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rkc007\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rkc007\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rkc007\/orgs","repos_url":"https:\/\/api.github.com\/users\/rkc007\/repos","events_url":"https:\/\/api.github.com\/users\/rkc007\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rkc007\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607818874000,"updated_at":1607886864000,"closed_at":1607886864000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1527","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1527","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1527.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1527.patch"},"body":"@lhoestq Sorry I messed up the previous 2 PR's -> https:\/\/github.com\/huggingface\/datasets\/pull\/1462 -> https:\/\/github.com\/huggingface\/datasets\/pull\/1383. So created a new one. Also, everything is fixed in this PR. Can you please review it ?\r\nThanks in advance. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1527\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1526","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1526\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1526\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1526\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1526","id":764591243,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4NTgxNDg4","number":1526,"title":"added Hebrew thisworld corpus","user":{"login":"imvladikon","id":10088963,"node_id":"MDQ6VXNlcjEwMDg4OTYz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10088963?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/imvladikon","html_url":"https:\/\/github.com\/imvladikon","followers_url":"https:\/\/api.github.com\/users\/imvladikon\/followers","following_url":"https:\/\/api.github.com\/users\/imvladikon\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/imvladikon\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/imvladikon\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/imvladikon\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/imvladikon\/orgs","repos_url":"https:\/\/api.github.com\/users\/imvladikon\/repos","events_url":"https:\/\/api.github.com\/users\/imvladikon\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/imvladikon\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on master"],"created_at":1607816572000,"updated_at":1608288450000,"closed_at":1608288450000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1526","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1526","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1526.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1526.patch"},"body":"added corpus from https:\/\/thisworld.online\/ , https:\/\/github.com\/thisworld1\/thisworld.online","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1526\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1525","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1525\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1525\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1525\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1525","id":764530582,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4NTUwMzI2","number":1525,"title":"Adding a second branch for Atomic to fix git 
errors","user":{"login":"ontocord","id":8900094,"node_id":"MDQ6VXNlcjg5MDAwOTQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8900094?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ontocord","html_url":"https:\/\/github.com\/ontocord","followers_url":"https:\/\/api.github.com\/users\/ontocord\/followers","following_url":"https:\/\/api.github.com\/users\/ontocord\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ontocord\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ontocord\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ontocord\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ontocord\/orgs","repos_url":"https:\/\/api.github.com\/users\/ontocord\/repos","events_url":"https:\/\/api.github.com\/users\/ontocord\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ontocord\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607813690000,"updated_at":1609170671000,"closed_at":1609170671000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1525","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1525","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1525.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1525.patch"},"body":"Adding the Atomic common sense dataset.\r\nSee https:\/\/homes.cs.washington.edu\/~msap\/atomic\/","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1525\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1524","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1524\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1524\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1524\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1524","id":764521672,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4NTQ2MjI0","number":1524,"title":"ADD: swahili dataset for language 
modeling","user":{"login":"akshayb7","id":29649801,"node_id":"MDQ6VXNlcjI5NjQ5ODAx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29649801?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/akshayb7","html_url":"https:\/\/github.com\/akshayb7","followers_url":"https:\/\/api.github.com\/users\/akshayb7\/followers","following_url":"https:\/\/api.github.com\/users\/akshayb7\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/akshayb7\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/akshayb7\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/akshayb7\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/akshayb7\/orgs","repos_url":"https:\/\/api.github.com\/users\/akshayb7\/repos","events_url":"https:\/\/api.github.com\/users\/akshayb7\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/akshayb7\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607813238000,"updated_at":1608223036000,"closed_at":1608223036000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1524","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1524","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1524.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1524.patch"},"body":"Add a corpus for Swahili language modelling. All tests passed locally. README updated with all information available.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1524\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1523","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1523\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1523\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1523\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1523","id":764359524,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4NDYyMTE4","number":1523,"title":"Add eHealth Knowledge Discovery 
dataset","user":{"login":"mariagrandury","id":57645283,"node_id":"MDQ6VXNlcjU3NjQ1Mjgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/57645283?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariagrandury","html_url":"https:\/\/github.com\/mariagrandury","followers_url":"https:\/\/api.github.com\/users\/mariagrandury\/followers","following_url":"https:\/\/api.github.com\/users\/mariagrandury\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariagrandury\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariagrandury\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariagrandury\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariagrandury\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariagrandury\/repos","events_url":"https:\/\/api.github.com\/users\/mariagrandury\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariagrandury\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thank you very much for your review @lewtun ! \r\n\r\nI've updated the script metadata, created the README and fixed the two details you commented.\r\n\r\nReady for another review! \ud83e\udd17 ","I've updated the task tag as we discussed and also added a couple of lines of code to make sure I include all the available examples.\r\n\r\nThank you again!"],"created_at":1607805858000,"updated_at":1608224561000,"closed_at":1608223736000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1523","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1523","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1523.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1523.patch"},"body":"This Spanish dataset can be used to mine knowledge from unstructured health texts. 
\r\n\r\nIn particular, for:\r\n- Entity recognition\r\n- Relation extraction\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1523\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1522","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1522\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1522\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1522\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1522","id":764341594,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4NDUzNjg4","number":1522,"title":"Add semeval 2020 task 11","user":{"login":"ZacharySBrown","id":7950786,"node_id":"MDQ6VXNlcjc5NTA3ODY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7950786?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ZacharySBrown","html_url":"https:\/\/github.com\/ZacharySBrown","followers_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/followers","following_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/orgs","repos_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/repos","events_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@SBrandeis : Thanks for the feedback! 
Just updated to use context manager for the `open`s and removed the placeholder text from the `README`!","Great, thanks @ZacharySBrown !\r\nFailing tests seem to be unrelated to your changes, merging the current master branch into yours should fix them.\r\n"],"created_at":1607805134000,"updated_at":1608050932000,"closed_at":1608050932000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1522","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1522","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1522.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1522.patch"},"body":"Adding in propaganda detection task (task 11) from Sem Eval 2020","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1522\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1521","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1521\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1521\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1521\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1521","id":764320841,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4NDQzOTgz","number":1521,"title":"Atomic","user":{"login":"ontocord","id":8900094,"node_id":"MDQ6VXNlcjg5MDAwOTQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8900094?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ontocord","html_url":"https:\/\/github.com\/ontocord","followers_url":"https:\/\/api.github.com\/users\/ontocord\/followers","following_url":"https:\/\/api.github.com\/users\/ontocord\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ontocord\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ontocord\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ontocord\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ontocord\/orgs","repos_url":"https:\/\/api.github.com\/users\/ontocord\/repos","events_url":"https:\/\/api.github.com\/users\/ontocord\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ontocord\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I had to create a new PR to fix git errors. See: https:\/\/github.com\/huggingface\/datasets\/pull\/1525\r\n\r\nI'm closing this PR. "],"created_at":1607804288000,"updated_at":1607813808000,"closed_at":1607813808000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1521","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1521","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1521.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1521.patch"},"body":"This is the ATOMIC common sense dataset. 
More info can be found here:\r\n\r\n* README.md still to be created.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1521\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1520","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1520\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1520\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1520\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1520","id":764140938,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4MzU5MTA5","number":1520,"title":"ru_reviews dataset adding","user":{"login":"darshan-gandhi","id":44197177,"node_id":"MDQ6VXNlcjQ0MTk3MTc3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/44197177?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/darshan-gandhi","html_url":"https:\/\/github.com\/darshan-gandhi","followers_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/followers","following_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/orgs","repos_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/repos","events_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq \r\n\r\nI have added the readme as well \r\n\r\nPlease do have a look at it when suitable ","Chatted with @darshan-gandhi on Slack about parsing examples into a separate text and sentiment field"],"created_at":1607796786000,"updated_at":1608025815000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1520","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1520","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1520.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1520.patch"},"body":"RuReviews: An Automatically Annotated Sentiment Analysis Dataset for Product Reviews in Russian","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1520\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1519","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1519\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1519\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1519\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1519","id":764107360,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4MzM5OTg5","number":1519,"title":"Initial commit for 
AQuaMuSe","user":{"login":"Karthik-Bhaskar","id":13200370,"node_id":"MDQ6VXNlcjEzMjAwMzcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13200370?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar","html_url":"https:\/\/github.com\/Karthik-Bhaskar","followers_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/followers","following_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/orgs","repos_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/repos","events_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@yjernite Thank you for your help, generating the dummy data \ud83e\udd17 Having that all the tests have passed \ud83d\udc4d\ud83c\udffb","merging since the CI is fixed on master","Thank you :)"],"created_at":1607795176000,"updated_at":1608299442000,"closed_at":1608224610000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1519","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1519","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1519.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1519.patch"},"body":"There is an issue in generation of dummy data. 
Tests on real data have passed locally.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1519\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1518","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1518\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1518\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1518\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1518","id":764045722,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4MzAyNzYy","number":1518,"title":"Add twi text","user":{"login":"dadelani","id":23586676,"node_id":"MDQ6VXNlcjIzNTg2Njc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23586676?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dadelani","html_url":"https:\/\/github.com\/dadelani","followers_url":"https:\/\/api.github.com\/users\/dadelani\/followers","following_url":"https:\/\/api.github.com\/users\/dadelani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dadelani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dadelani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dadelani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dadelani\/orgs","repos_url":"https:\/\/api.github.com\/users\/dadelani\/repos","events_url":"https:\/\/api.github.com\/users\/dadelani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dadelani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hii please follow me","thank you"],"created_at":1607791922000,"updated_at":1607885617000,"closed_at":1607885617000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1518","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1518","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1518.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1518.patch"},"body":"Add Twi texts","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1518\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1517","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1517\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1517\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1517\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1517","id":764045214,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4MzAyNDM1","number":1517,"title":"Kd conv 
smangrul","user":{"login":"pacman100","id":13534540,"node_id":"MDQ6VXNlcjEzNTM0NTQw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13534540?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/pacman100","html_url":"https:\/\/github.com\/pacman100","followers_url":"https:\/\/api.github.com\/users\/pacman100\/followers","following_url":"https:\/\/api.github.com\/users\/pacman100\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/pacman100\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/pacman100\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/pacman100\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/pacman100\/orgs","repos_url":"https:\/\/api.github.com\/users\/pacman100\/repos","events_url":"https:\/\/api.github.com\/users\/pacman100\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/pacman100\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hii please follow me","merging since the CI is fixed on master"],"created_at":1607791890000,"updated_at":1608130574000,"closed_at":1608130574000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1517","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1517","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1517.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1517.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1517\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1516","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1516\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1516\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1516\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1516","id":764032327,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4MjkzOTMw","number":1516,"title":"adding wrbsc","user":{"login":"kldarek","id":15803781,"node_id":"MDQ6VXNlcjE1ODAzNzgx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15803781?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/kldarek","html_url":"https:\/\/github.com\/kldarek","followers_url":"https:\/\/api.github.com\/users\/kldarek\/followers","following_url":"https:\/\/api.github.com\/users\/kldarek\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/kldarek\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/kldarek\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/kldarek\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/kldarek\/orgs","repos_url":"https:\/\/api.github.com\/users\/kldarek\/repos","events_url":"https:\/\/api.github.com\/users\/kldarek\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/kldarek\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq thanks for the comments! 
Should be fixed in the latest commit, I assume the CI errors are unrelated. ","merging since the CI is fixed on master"],"created_at":1607791120000,"updated_at":1608284493000,"closed_at":1608284493000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1516","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1516","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1516.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1516.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1516\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1515","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1515\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1515\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1515\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1515","id":764022753,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4Mjg3NDc0","number":1515,"title":"Add yoruba text","user":{"login":"dadelani","id":23586676,"node_id":"MDQ6VXNlcjIzNTg2Njc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23586676?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dadelani","html_url":"https:\/\/github.com\/dadelani","followers_url":"https:\/\/api.github.com\/users\/dadelani\/followers","following_url":"https:\/\/api.github.com\/users\/dadelani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dadelani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dadelani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dadelani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dadelani\/orgs","repos_url":"https:\/\/api.github.com\/users\/dadelani\/repos","events_url":"https:\/\/api.github.com\/users\/dadelani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dadelani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["closing since #1379 got merged"],"created_at":1607790570000,"updated_at":1607884678000,"closed_at":1607884678000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1515","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1515","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1515.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1515.patch"},"body":"Adding Yoruba text C3","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1515\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1514","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1514\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1514\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1514\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1514","id":764017148,"node_id":"MDU6SXNzdWU3NjQwMTcxNDg=","number":1514,"title":"how to get all the options of a property in datasets ","user":{"login":"rabeehk","id":6278280,"node_id":"MDQ6VXNlcjYyNzgyODA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6278280?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rabeehk","html_url":"https:\/\/github.com\/rabeehk","followers_url":"https:\/\/api.github.com\/users\/rabeehk\/followers","following_url":"https:\/\/api.github.com\/users\/rabeehk\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rabeehk\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rabeehk\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rabeehk\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rabeehk\/orgs","repos_url":"https:\/\/api.github.com\/users\/rabeehk\/repos","events_url":"https:\/\/api.github.com\/users\/rabeehk\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rabeehk\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892912,"node_id":"MDU6TGFiZWwxOTM1ODkyOTEy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/question","name":"question","color":"d876e3","default":true,"description":"Further information is requested"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["In a dataset, labels correspond to the `ClassLabel` feature that has the `names` property that returns string represenation of the integer classes (or `num_classes` to get the number of different classes).","I think the `features` attribute of the dataset object is what you are looking for:\r\n```\r\n>>> dataset.features\r\n{'sentence1': Value(dtype='string', id=None),\r\n 'sentence2': Value(dtype='string', id=None),\r\n 'label': ClassLabel(num_classes=2, names=['not_equivalent', 'equivalent'], names_file=None, id=None),\r\n 'idx': Value(dtype='int32', id=None)\r\n}\r\n>>> dataset.features[\"label\"].names\r\n['not_equivalent', 'equivalent']\r\n```\r\n\r\nFor reference: https:\/\/huggingface.co\/docs\/datasets\/exploring.html"],"created_at":1607790248000,"updated_at":1608278937000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi\r\ncould you tell me how I can get all unique options of a property of dataset?\r\nfor instance in case of boolq, if the user wants to know which unique labels it has, is there a way to access unique labels without getting all training data lables and then forming a set i mean? 
thanks","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1514\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1513","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1513\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1513\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1513\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1513","id":764016850,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4MjgzNDUz","number":1513,"title":"app_reviews_by_users","user":{"login":"darshan-gandhi","id":44197177,"node_id":"MDQ6VXNlcjQ0MTk3MTc3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/44197177?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/darshan-gandhi","html_url":"https:\/\/github.com\/darshan-gandhi","followers_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/followers","following_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/orgs","repos_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/repos","events_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq \r\n\r\nI have added the readme file as well, please if you could check it once \r\n\r\nThank you "],"created_at":1607790229000,"updated_at":1607978724000,"closed_at":1607978724000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1513","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1513","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1513.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1513.patch"},"body":"Software Applications User Reviews ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1513\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1512","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1512\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1512\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1512\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1512","id":764010722,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4Mjc5MzIy","number":1512,"title":"Add Hippocorpus 
Dataset","user":{"login":"manandey","id":6687858,"node_id":"MDQ6VXNlcjY2ODc4NTg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6687858?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/manandey","html_url":"https:\/\/github.com\/manandey","followers_url":"https:\/\/api.github.com\/users\/manandey\/followers","following_url":"https:\/\/api.github.com\/users\/manandey\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/manandey\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/manandey\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/manandey\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/manandey\/orgs","repos_url":"https:\/\/api.github.com\/users\/manandey\/repos","events_url":"https:\/\/api.github.com\/users\/manandey\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/manandey\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607789873000,"updated_at":1607836148000,"closed_at":1607836138000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1512","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1512","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1512.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1512.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1512\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1511","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1511\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1511\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1511\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1511","id":764006477,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4Mjc2NDM5","number":1511,"title":"poleval cyberbullying","user":{"login":"czabo","id":75574105,"node_id":"MDQ6VXNlcjc1NTc0MTA1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/75574105?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/czabo","html_url":"https:\/\/github.com\/czabo","followers_url":"https:\/\/api.github.com\/users\/czabo\/followers","following_url":"https:\/\/api.github.com\/users\/czabo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/czabo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/czabo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/czabo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/czabo\/orgs","repos_url":"https:\/\/api.github.com\/users\/czabo\/repos","events_url":"https:\/\/api.github.com\/users\/czabo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/czabo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on 
master"],"created_at":1607789624000,"updated_at":1608222059000,"closed_at":1608221998000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1511","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1511","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1511.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1511.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1511\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1510","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1510\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1510\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1510\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1510","id":763980369,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4MjU4NDg3","number":1510,"title":"Add Dataset for (qa_srl)Question-Answer Driven Semantic Role Labeling","user":{"login":"bpatidar","id":12439573,"node_id":"MDQ6VXNlcjEyNDM5NTcz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12439573?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bpatidar","html_url":"https:\/\/github.com\/bpatidar","followers_url":"https:\/\/api.github.com\/users\/bpatidar\/followers","following_url":"https:\/\/api.github.com\/users\/bpatidar\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bpatidar\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bpatidar\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bpatidar\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bpatidar\/orgs","repos_url":"https:\/\/api.github.com\/users\/bpatidar\/repos","events_url":"https:\/\/api.github.com\/users\/bpatidar\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bpatidar\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hii please follow me","merging since the CI is fixed on master"],"created_at":1607788091000,"updated_at":1608221182000,"closed_at":1608221182000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1510","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1510","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1510.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1510.patch"},"body":"- Added tags, Readme file\r\n- Added code changes","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1510\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1509","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1509\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1509\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1509\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1509","id":763964857,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4MjQ4NTgx","number":1509,"title":"Added dataset Makhzan","user":{"login":"arkhalid","id":14899066,"node_id":"MDQ6VXNlcjE0ODk5MDY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14899066?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/arkhalid","html_url":"https:\/\/github.com\/arkhalid","followers_url":"https:\/\/api.github.com\/users\/arkhalid\/followers","following_url":"https:\/\/api.github.com\/users\/arkhalid\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/arkhalid\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/arkhalid\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/arkhalid\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/arkhalid\/orgs","repos_url":"https:\/\/api.github.com\/users\/arkhalid\/repos","events_url":"https:\/\/api.github.com\/users\/arkhalid\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/arkhalid\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The only CI error comes from \r\n```\r\nFAILED tests\/test_file_utils.py::TempSeedTest::test_tensorflow\r\n```\r\n\r\nwhich is not related to your PR and is fixed on master.\r\n\r\nYou can ignore it","@lhoestq I've made the changes. Please review and merge. \r\n\r\nI have a similar PR https:\/\/github.com\/huggingface\/datasets\/pull\/1562 for another dataset. 
I'll incorporate your comment about sorting and reducing dummy dataset size there.","The CI raises an error `FAILED tests\/test_file_utils.py::TempSeedTest::test_tensorflow` but it's not related to this dataset.\r\nThis issue is fixed on master","You did all the work ;) thanks\r\n\r\nmerging since the CI is fixed on master"],"created_at":1607787247000,"updated_at":1608131092000,"closed_at":1608131092000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1509","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1509","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1509.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1509.patch"},"body":"Need help with the dummy data.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1509\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1508","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1508\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1508\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1508\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1508","id":763908724,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4MjEyODUy","number":1508,"title":"Fix namedsplit docs","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hii please follow me","Thanks @mariosasko!"],"created_at":1607784218000,"updated_at":1615429119000,"closed_at":1608037068000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1508","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1508","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1508.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1508.patch"},"body":"Fixes a broken link and `DatasetInfoMixin.split`'s docstring.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1508\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1507","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1507\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1507\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1507\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1507","id":763857872,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4MTgyMzE2","number":1507,"title":"Add SelQA Dataset","user":{"login":"Bharat123rox","id":13381361,"node_id":"MDQ6VXNlcjEzMzgxMzYx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13381361?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Bharat123rox","html_url":"https:\/\/github.com\/Bharat123rox","followers_url":"https:\/\/api.github.com\/users\/Bharat123rox\/followers","following_url":"https:\/\/api.github.com\/users\/Bharat123rox\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Bharat123rox\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Bharat123rox\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Bharat123rox\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Bharat123rox\/orgs","repos_url":"https:\/\/api.github.com\/users\/Bharat123rox\/repos","events_url":"https:\/\/api.github.com\/users\/Bharat123rox\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Bharat123rox\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hii please follow me","The CI error `FAILED tests\/test_file_utils.py::TempSeedTest::test_tensorflow` is not related with this dataset and is fixed on master. 
You can ignore it","merging since the Ci is fixed on master"],"created_at":1607781487000,"updated_at":1608137363000,"closed_at":1608137363000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1507","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1507","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1507.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1507.patch"},"body":"Add the SelQA Dataset, a new benchmark for selection-based question answering tasks\r\nRepo: https:\/\/github.com\/emorynlp\/selqa\/\r\nPaper: https:\/\/arxiv.org\/pdf\/1606.08513.pdf","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1507\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1506","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1506\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1506\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1506\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1506","id":763846074,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4MTc1ODEz","number":1506,"title":"Add nq_open question answering dataset","user":{"login":"Nilanshrajput","id":28673745,"node_id":"MDQ6VXNlcjI4NjczNzQ1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28673745?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Nilanshrajput","html_url":"https:\/\/github.com\/Nilanshrajput","followers_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/followers","following_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/orgs","repos_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/repos","events_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@SBrandeis thanks for the review, I applied your suggested changes, but CI is failing now not sure about the error.","Many thanks @Nilanshrajput !\r\nThe failing tests on CI are not related to your changes, merging master on your branch should fix them :)\r\nIf you're interested in what causes the CI to fail, checkout [this commit](https:\/\/github.com\/huggingface\/datasets\/commit\/9a0f1e20ca1e783cb14c1ab1cc2f54b0b5b201e8)","@SBrandeis done!\r\n","Hello @Nilanshrajput, your PR includes changes from other branches too now (485 files changed)\r\nWould you mind creating another branch from master with your changes and opening a new PR?","@SBrandeis sorry i messed up the git history, #1587 I opened this new pr! 
","closing in favor of #1587 "],"created_at":1607780808000,"updated_at":1608219290000,"closed_at":1608219290000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1506","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1506","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1506.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1506.patch"},"body":"Added nq_open Open-domain question answering dataset.\r\n\r\nThe NQ-Open task is currently being used to evaluate submissions to the EfficientQA competition, which is part of the NeurIPS 2020 competition track.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1506\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1505","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1505\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1505\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1505\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1505","id":763750773,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4MTEyMTk5","number":1505,"title":"add ilist dataset","user":{"login":"vasudevgupta7","id":53136577,"node_id":"MDQ6VXNlcjUzMTM2NTc3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/53136577?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vasudevgupta7","html_url":"https:\/\/github.com\/vasudevgupta7","followers_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/followers","following_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/orgs","repos_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/repos","events_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607777052000,"updated_at":1608219787000,"closed_at":1608219787000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1505","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1505","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1505.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1505.patch"},"body":"This PR will add Indo-Aryan Language Identification Shared Task Dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1505\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1504","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1504\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1504\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1504\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1504","id":763697231,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4MDczMzcw","number":1504,"title":"Add SentiWS dataset for pos-tagging and sentiment-scoring (German)","user":{"login":"harshalmittal4","id":24206326,"node_id":"MDQ6VXNlcjI0MjA2MzI2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24206326?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/harshalmittal4","html_url":"https:\/\/github.com\/harshalmittal4","followers_url":"https:\/\/api.github.com\/users\/harshalmittal4\/followers","following_url":"https:\/\/api.github.com\/users\/harshalmittal4\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/harshalmittal4\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/harshalmittal4\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/harshalmittal4\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/harshalmittal4\/orgs","repos_url":"https:\/\/api.github.com\/users\/harshalmittal4\/repos","events_url":"https:\/\/api.github.com\/users\/harshalmittal4\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/harshalmittal4\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq @yjernite, requesting you to review this for any changes needed. Thanks! 
:)","Hi @lhoestq , I have updated the PR"],"created_at":1607775473000,"updated_at":1608057158000,"closed_at":1608057158000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1504","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1504","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1504.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1504.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1504\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1503","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1503\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1503\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1503\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1503","id":763667489,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4MDUxNDM2","number":1503,"title":"Adding COVID QA dataset in Chinese and English from UC SanDiego","user":{"login":"vrindaprabhu","id":16264631,"node_id":"MDQ6VXNlcjE2MjY0NjMx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16264631?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vrindaprabhu","html_url":"https:\/\/github.com\/vrindaprabhu","followers_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/followers","following_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/orgs","repos_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/repos","events_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Changed the pre-processing based on the comments raised in [PR-1482](https:\/\/github.com\/huggingface\/datasets\/pull\/1482).The below command is passing in my local environment:\r\n\r\n`python datasets-cli test datasets\/covid_qa_ucsd\/ --save_infos --all_configs --data_dir ~\/Downloads\/Medical-Dialogue-Dataset\/CovidDailogue\/`\r\n\r\n"],"created_at":1607774568000,"updated_at":1613453358000,"closed_at":1608218966000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1503","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1503","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1503.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1503.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1503\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1502","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1502\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1502\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1502\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1502","id":763658208,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM4MDQ1OTY5","number":1502,"title":"Add Senti_Lex Dataset","user":{"login":"KMFODA","id":35491698,"node_id":"MDQ6VXNlcjM1NDkxNjk4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35491698?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/KMFODA","html_url":"https:\/\/github.com\/KMFODA","followers_url":"https:\/\/api.github.com\/users\/KMFODA\/followers","following_url":"https:\/\/api.github.com\/users\/KMFODA\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/KMFODA\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/KMFODA\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/KMFODA\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/KMFODA\/orgs","repos_url":"https:\/\/api.github.com\/users\/KMFODA\/repos","events_url":"https:\/\/api.github.com\/users\/KMFODA\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/KMFODA\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Better will be if you close this PR and make a fresh PR","Feel free to ping me if you also have questions about the dummy data","also it looks like this PR includes changes about dummy_data.zip files in the .\/datasets\/\/un_pc folder. Can you remove them ?","Thanks for all the advice @lhoestq. I've implemented the changes you kindly highlighted and have made sure the scripts pass all the test. 
I've also marked this as ready for review as I believe it's in a good place to be merged now.","Great suggestion I fixed the dummy data and the file paths."],"created_at":1607774129000,"updated_at":1609164072000,"closed_at":1609164072000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1502","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1502","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1502.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1502.patch"},"body":"TODO:\r\nFix feature format issue\r\nCreate dataset_info.json file\r\nRun pytests\r\nMake Style","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1502\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1501","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1501\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1501\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1501\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1501","id":763517647,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM3OTYzMDU5","number":1501,"title":"Adds XED dataset","user":{"login":"harshalmittal4","id":24206326,"node_id":"MDQ6VXNlcjI0MjA2MzI2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24206326?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/harshalmittal4","html_url":"https:\/\/github.com\/harshalmittal4","followers_url":"https:\/\/api.github.com\/users\/harshalmittal4\/followers","following_url":"https:\/\/api.github.com\/users\/harshalmittal4\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/harshalmittal4\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/harshalmittal4\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/harshalmittal4\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/harshalmittal4\/orgs","repos_url":"https:\/\/api.github.com\/users\/harshalmittal4\/repos","events_url":"https:\/\/api.github.com\/users\/harshalmittal4\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/harshalmittal4\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq @yjernite, requesting you to review this for any changes needed. Thanks! 
:)"],"created_at":1607766420000,"updated_at":1607980859000,"closed_at":1607980859000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1501","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1501","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1501.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1501.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1501\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1500","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1500\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1500\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1500\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1500","id":763479305,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM3OTM0OTI1","number":1500,"title":"adding polsum","user":{"login":"kldarek","id":15803781,"node_id":"MDQ6VXNlcjE1ODAzNzgx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15803781?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/kldarek","html_url":"https:\/\/github.com\/kldarek","followers_url":"https:\/\/api.github.com\/users\/kldarek\/followers","following_url":"https:\/\/api.github.com\/users\/kldarek\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/kldarek\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/kldarek\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/kldarek\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/kldarek\/orgs","repos_url":"https:\/\/api.github.com\/users\/kldarek\/repos","events_url":"https:\/\/api.github.com\/users\/kldarek\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/kldarek\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq thanks for the comments! 
Should be fixed in the latest commit, I assume the CI errors are unrelated."],"created_at":1607763929000,"updated_at":1608284623000,"closed_at":1608284623000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1500","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1500","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1500.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1500.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1500\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1499","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1499\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1499\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1499\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1499","id":763464693,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM3OTIyNjA3","number":1499,"title":"update the dataset id_newspapers_2018","user":{"login":"cahya-wirawan","id":7669893,"node_id":"MDQ6VXNlcjc2Njk4OTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7669893?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cahya-wirawan","html_url":"https:\/\/github.com\/cahya-wirawan","followers_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/followers","following_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/orgs","repos_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/repos","events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607762832000,"updated_at":1607959687000,"closed_at":1607959687000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1499","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1499","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1499.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1499.patch"},"body":"Hi, I need to update the link to the dataset. The link in the previous PR was to a small test dataset. 
Thanks","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1499\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1498","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1498\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1498\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1498\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1498","id":763303606,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM3Nzc2MjM5","number":1498,"title":"add stereoset","user":{"login":"cstorm125","id":15519308,"node_id":"MDQ6VXNlcjE1NTE5MzA4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15519308?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cstorm125","html_url":"https:\/\/github.com\/cstorm125","followers_url":"https:\/\/api.github.com\/users\/cstorm125\/followers","following_url":"https:\/\/api.github.com\/users\/cstorm125\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cstorm125\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cstorm125\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cstorm125\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cstorm125\/orgs","repos_url":"https:\/\/api.github.com\/users\/cstorm125\/repos","events_url":"https:\/\/api.github.com\/users\/cstorm125\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cstorm125\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607749477000,"updated_at":1608285833000,"closed_at":1608285833000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1498","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1498","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1498.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1498.patch"},"body":"StereoSet is a dataset that measures stereotype bias in language models. 
StereoSet consists of 17,000 sentences that measures model preferences across gender, race, religion, and profession.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1498\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1497","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1497\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1497\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1497\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1497","id":763180824,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM3NjYxNzY2","number":1497,"title":"adding fake-news-english-5","user":{"login":"MisbahKhan789","id":15351802,"node_id":"MDQ6VXNlcjE1MzUxODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15351802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/MisbahKhan789","html_url":"https:\/\/github.com\/MisbahKhan789","followers_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/followers","following_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/orgs","repos_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/repos","events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["made suggested changes and created a PR here: https:\/\/github.com\/huggingface\/datasets\/pull\/1598"],"created_at":1607739191000,"updated_at":1608235637000,"closed_at":1608235637000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1497","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1497","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1497.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1497.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1497\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1496","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1496\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1496\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1496\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1496","id":763091663,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM3NTc4NzQw","number":1496,"title":"Add Multi-Dimensional Gender Bias classification 
data","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607732257000,"updated_at":1607980495000,"closed_at":1607980495000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1496","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1496","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1496.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1496.patch"},"body":"https:\/\/parl.ai\/projects\/md_gender\/\r\n\r\nMostly has the ABOUT dimension since the others are inferred from other datasets in most cases.\r\n\r\nI tried to keep the dummy data small but one of the configs has 140 splits ( > 56KB data)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1496\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1495","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1495\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1495\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1495\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1495","id":763025562,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM3NTE2ODE4","number":1495,"title":"Opus DGT 
added","user":{"login":"rkc007","id":22396042,"node_id":"MDQ6VXNlcjIyMzk2MDQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22396042?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rkc007","html_url":"https:\/\/github.com\/rkc007","followers_url":"https:\/\/api.github.com\/users\/rkc007\/followers","following_url":"https:\/\/api.github.com\/users\/rkc007\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rkc007\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rkc007\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rkc007\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rkc007\/orgs","repos_url":"https:\/\/api.github.com\/users\/rkc007\/repos","events_url":"https:\/\/api.github.com\/users\/rkc007\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rkc007\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on master"],"created_at":1607727909000,"updated_at":1608215921000,"closed_at":1608215921000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1495","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1495","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1495.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1495.patch"},"body":"Dataset : http:\/\/opus.nlpl.eu\/DGT.php","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1495\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1494","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1494\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1494\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1494\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1494","id":762992601,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM3NDg2MzU4","number":1494,"title":"Added Opus Wikipedia","user":{"login":"rkc007","id":22396042,"node_id":"MDQ6VXNlcjIyMzk2MDQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22396042?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rkc007","html_url":"https:\/\/github.com\/rkc007","followers_url":"https:\/\/api.github.com\/users\/rkc007\/followers","following_url":"https:\/\/api.github.com\/users\/rkc007\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rkc007\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rkc007\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rkc007\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rkc007\/orgs","repos_url":"https:\/\/api.github.com\/users\/rkc007\/repos","events_url":"https:\/\/api.github.com\/users\/rkc007\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rkc007\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on 
master"],"created_at":1607725683000,"updated_at":1608215908000,"closed_at":1608215908000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1494","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1494","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1494.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1494.patch"},"body":"Dataset : http:\/\/opus.nlpl.eu\/Wikipedia.php","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1494\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1493","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1493\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1493\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1493\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1493","id":762979415,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM3NDc0MDc1","number":1493,"title":"Added RONEC dataset.","user":{"login":"iliemihai","id":2815308,"node_id":"MDQ6VXNlcjI4MTUzMDg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2815308?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/iliemihai","html_url":"https:\/\/github.com\/iliemihai","followers_url":"https:\/\/api.github.com\/users\/iliemihai\/followers","following_url":"https:\/\/api.github.com\/users\/iliemihai\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/iliemihai\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/iliemihai\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/iliemihai\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/iliemihai\/orgs","repos_url":"https:\/\/api.github.com\/users\/iliemihai\/repos","events_url":"https:\/\/api.github.com\/users\/iliemihai\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/iliemihai\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for the PR @iliemihai . \r\n\r\nFew comments - \r\n\r\nCan you run - \r\n`python datasets-cli dummy_data .\/datasets\/ronec --auto_generate` to generate dummy data.\r\n\r\nAlso, before committing files run : \r\n`make style`\r\n`flake8 datasets`\r\nthen you can add and commit files.","> Thanks for the PR @iliemihai .\r\n> \r\n> Few comments -\r\n> \r\n> Can you run -\r\n> `python datasets-cli dummy_data .\/datasets\/ronec --auto_generate` to generate dummy data.\r\n> \r\n> Also, before committing files run :\r\n> `make style`\r\n> `flake8 datasets`\r\n> then you can add and commit files.\r\n\r\nSorry, forgot to generate dummy data. 
I will do it now :D","Awesome, good job @iliemihai !\r\nI think the PR is ready to merge.\r\n@lhoestq would you mind double-checking this ?","Had to regenerate the dummy data since I just found out they were empty files"],"created_at":1607724890000,"updated_at":1608562136000,"closed_at":1608562136000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1493","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1493","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1493.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1493.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1493\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1492","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1492\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1492\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1492\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1492","id":762965239,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM3NDYxMjc3","number":1492,"title":"OPUS UBUNTU dataset","user":{"login":"rkc007","id":22396042,"node_id":"MDQ6VXNlcjIyMzk2MDQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22396042?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rkc007","html_url":"https:\/\/github.com\/rkc007","followers_url":"https:\/\/api.github.com\/users\/rkc007\/followers","following_url":"https:\/\/api.github.com\/users\/rkc007\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rkc007\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rkc007\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rkc007\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rkc007\/orgs","repos_url":"https:\/\/api.github.com\/users\/rkc007\/repos","events_url":"https:\/\/api.github.com\/users\/rkc007\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rkc007\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on master"],"created_at":1607724097000,"updated_at":1608215896000,"closed_at":1608215895000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1492","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1492","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1492.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1492.patch"},"body":"Dataset : http:\/\/opus.nlpl.eu\/Ubuntu.php","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1492\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1491","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1491\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1491\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1491\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1491","id":762920920,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM3NDIxMTc3","number":1491,"title":"added opus GNOME data","user":{"login":"rkc007","id":22396042,"node_id":"MDQ6VXNlcjIyMzk2MDQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22396042?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rkc007","html_url":"https:\/\/github.com\/rkc007","followers_url":"https:\/\/api.github.com\/users\/rkc007\/followers","following_url":"https:\/\/api.github.com\/users\/rkc007\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rkc007\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rkc007\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rkc007\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rkc007\/orgs","repos_url":"https:\/\/api.github.com\/users\/rkc007\/repos","events_url":"https:\/\/api.github.com\/users\/rkc007\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rkc007\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the Ci is fixed on master"],"created_at":1607721711000,"updated_at":1608214823000,"closed_at":1608214823000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1491","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1491","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1491.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1491.patch"},"body":"Dataset : http:\/\/opus.nlpl.eu\/GNOME.php","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1491\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1490","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1490\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1490\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1490\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1490","id":762915346,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM3NDE2MDU5","number":1490,"title":"ADD: opus_rf dataset for 
translation","user":{"login":"akshayb7","id":29649801,"node_id":"MDQ6VXNlcjI5NjQ5ODAx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29649801?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/akshayb7","html_url":"https:\/\/github.com\/akshayb7","followers_url":"https:\/\/api.github.com\/users\/akshayb7\/followers","following_url":"https:\/\/api.github.com\/users\/akshayb7\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/akshayb7\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/akshayb7\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/akshayb7\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/akshayb7\/orgs","repos_url":"https:\/\/api.github.com\/users\/akshayb7\/repos","events_url":"https:\/\/api.github.com\/users\/akshayb7\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/akshayb7\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on master"],"created_at":1607721403000,"updated_at":1607886744000,"closed_at":1607886744000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1490","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1490","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1490.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1490.patch"},"body":"Passed all local tests. Hopefully passes all Circle CI tests too. Tried to keep the commit history clean.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1490\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1489","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1489\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1489\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1489\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1489","id":762908763,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM3NDA5OTkx","number":1489,"title":"Fake news english 
4","user":{"login":"MisbahKhan789","id":15351802,"node_id":"MDQ6VXNlcjE1MzUxODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15351802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/MisbahKhan789","html_url":"https:\/\/github.com\/MisbahKhan789","followers_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/followers","following_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/orgs","repos_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/repos","events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for the PR @MisbahKhan789 !\r\n\r\nFew comments to help you along (I'm NOT a maintainer, just offering help to unblock the process) :-\r\n - Could you re-run `make style` and fix the errors related to code quality specific to your dataset in the `datasets\/fake_news_english` folder?\r\n(These seem to show errors that need manual fixes on running `flake8 datasets\/fake_news_english`)\r\n- Please run the local tests and check if they pass\r\n`RUN_SLOW=1 pytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_<your-dataset-name>`\r\n `RUN_SLOW=1 pytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_<your-dataset-name>`\r\n(If not, you may have to regenerate the dummy data and the `dataset_infos.json` files)\r\n\r\n","Hii please follow me","> Thanks for the PR @MisbahKhan789 !\r\n> \r\n> Few comments to help you along (I'm NOT a maintainer, just offering help to unblock the process) :-\r\n> \r\n> * Could you re-run `make style` and fix the errors related to code quality specific to your dataset in the `datasets\/fake_news_english` folder?\r\n> (These seem to show errors that need manual fixes on running `flake8 datasets\/fake_news_english`)\r\n> * Please run the local tests and check if they pass\r\n> `RUN_SLOW=1 pytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_<your-dataset-name>`\r\n> `RUN_SLOW=1 pytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_<your-dataset-name>`\r\n> (If not, you may have to regenerate the dummy data and the `dataset_infos.json` files)\r\n\r\nHey Bharat, thanks for the reply. 
I actually submitted a new PR with the changes that are required :)\r\n"],"created_at":1607721035000,"updated_at":1607801992000,"closed_at":1607801889000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1489","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1489","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1489.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1489.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1489\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1488","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1488\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1488\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1488\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1488","id":762860679,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM3MzY1ODUz","number":1488,"title":"Adding NELL","user":{"login":"ontocord","id":8900094,"node_id":"MDQ6VXNlcjg5MDAwOTQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8900094?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ontocord","html_url":"https:\/\/github.com\/ontocord","followers_url":"https:\/\/api.github.com\/users\/ontocord\/followers","following_url":"https:\/\/api.github.com\/users\/ontocord\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ontocord\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ontocord\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ontocord\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ontocord\/orgs","repos_url":"https:\/\/api.github.com\/users\/ontocord\/repos","events_url":"https:\/\/api.github.com\/users\/ontocord\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ontocord\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["hi @lhoestq, I wanted to push another change to this branch b\/c I found a bug in the parsing. I need to swap arg1 and arg2. I tried to git push -u origin nell but it didn't work. So I tried to do git push --force -u origin nell which seems to work, but nothing is happening to this branch. I think this is because it's closed. Do I need to open another PR?\r\n\r\nThe change should be below in _generate_examples:\r\n best_arg1 = row[9]\r\n best_arg2 = row[8]\r\n","Hi @ontocord !\r\n\r\nYup the easiest thing to do is to open a new PR with a title like \"Fix NELL dataset argument order\". 
If it's a simple fix we can look at it pretty fast :) "],"created_at":1607718325000,"updated_at":1610008627000,"closed_at":1608561900000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1488","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1488","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1488.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1488.patch"},"body":"NELL is a knowledge base and knowledge graph along with sentences used to create the KB. See http:\/\/rtw.ml.cmu.edu\/rtw\/ for more details.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1488\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1487","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1487\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1487\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1487\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1487","id":762794921,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM3MzA2MTEx","number":1487,"title":" added conv_ai_3 dataset","user":{"login":"rkc007","id":22396042,"node_id":"MDQ6VXNlcjIyMzk2MDQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22396042?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rkc007","html_url":"https:\/\/github.com\/rkc007","followers_url":"https:\/\/api.github.com\/users\/rkc007\/followers","following_url":"https:\/\/api.github.com\/users\/rkc007\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rkc007\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rkc007\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rkc007\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rkc007\/orgs","repos_url":"https:\/\/api.github.com\/users\/rkc007\/repos","events_url":"https:\/\/api.github.com\/users\/rkc007\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rkc007\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq Thank you for suggesting changes. I fixed all the changes you suggested. Can you please review it again? ","@lhoestq Thank you for reviewing and suggesting changes. I made the requested changes. Can you please review it again?","Thanks @lhoestq for reviewing it again. I made the required changes. 
Can you please have a look ?","merging since the CI is fixed on master"],"created_at":1607714786000,"updated_at":1609148320000,"closed_at":1609148319000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1487","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1487","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1487.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1487.patch"},"body":"Dataset : https:\/\/github.com\/aliannejadi\/ClariQ\/\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1487\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1486","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1486\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1486\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1486\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1486","id":762790102,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM3MzAxODY2","number":1486,"title":"hate speech 18 dataset","user":{"login":"czabo","id":75574105,"node_id":"MDQ6VXNlcjc1NTc0MTA1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/75574105?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/czabo","html_url":"https:\/\/github.com\/czabo","followers_url":"https:\/\/api.github.com\/users\/czabo\/followers","following_url":"https:\/\/api.github.com\/users\/czabo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/czabo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/czabo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/czabo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/czabo\/orgs","repos_url":"https:\/\/api.github.com\/users\/czabo\/repos","events_url":"https:\/\/api.github.com\/users\/czabo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/czabo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The error `tests\/test_file_utils.py::TempSeedTest::test_tensorflow` just appeared because of tensorflow's update.\r\nOnce it's fixed on master we'll be free to merge this one","It's fixed on master now :) \r\n\r\nmerging this once"],"created_at":1607714534000,"updated_at":1607974998000,"closed_at":1607974998000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1486","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1486","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1486.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1486.patch"},"body":"This is again a PR instead of #1339, because something went wrong there. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1486\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1485","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1485\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1485\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1485\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1485","id":762774822,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM3Mjg4MTg0","number":1485,"title":"Re-added wiki_movies dataset due to previous PR having changes from m\u2026","user":{"login":"aclifton314","id":53267795,"node_id":"MDQ6VXNlcjUzMjY3Nzk1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/53267795?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/aclifton314","html_url":"https:\/\/github.com\/aclifton314","followers_url":"https:\/\/api.github.com\/users\/aclifton314\/followers","following_url":"https:\/\/api.github.com\/users\/aclifton314\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/aclifton314\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/aclifton314\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/aclifton314\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/aclifton314\/orgs","repos_url":"https:\/\/api.github.com\/users\/aclifton314\/repos","events_url":"https:\/\/api.github.com\/users\/aclifton314\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/aclifton314\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607713668000,"updated_at":1607954902000,"closed_at":1607954902000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1485","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1485","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1485.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1485.patch"},"body":"\u2026any other unassociated files.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1485\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1484","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1484\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1484\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1484\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1484","id":762747096,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM3MjYzMDc5","number":1484,"title":"Add peer-read 
dataset","user":{"login":"vinaykudari","id":34424769,"node_id":"MDQ6VXNlcjM0NDI0NzY5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/34424769?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vinaykudari","html_url":"https:\/\/github.com\/vinaykudari","followers_url":"https:\/\/api.github.com\/users\/vinaykudari\/followers","following_url":"https:\/\/api.github.com\/users\/vinaykudari\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vinaykudari\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vinaykudari\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vinaykudari\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vinaykudari\/orgs","repos_url":"https:\/\/api.github.com\/users\/vinaykudari\/repos","events_url":"https:\/\/api.github.com\/users\/vinaykudari\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vinaykudari\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> Cool thank you !\r\n> \r\n> I left a few comments\r\n\r\nThank you @lhoestq addressed your comments. Haven't changed the code but I see that tests are failing now. Do I need to rebase or something? ","The CI error is not related to your dataset and is fixed on master.\r\nYou can ignore it"],"created_at":1607712224000,"updated_at":1608543650000,"closed_at":1608543650000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1484","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1484","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1484.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1484.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1484\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1483","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1483\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1483\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1483\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1483","id":762712337,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM3MjMxMzQ4","number":1483,"title":"Added Times of India News Headlines 
Dataset","user":{"login":"tanmoyio","id":33005287,"node_id":"MDQ6VXNlcjMzMDA1Mjg3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33005287?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tanmoyio","html_url":"https:\/\/github.com\/tanmoyio","followers_url":"https:\/\/api.github.com\/users\/tanmoyio\/followers","following_url":"https:\/\/api.github.com\/users\/tanmoyio\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tanmoyio\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tanmoyio\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tanmoyio\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tanmoyio\/orgs","repos_url":"https:\/\/api.github.com\/users\/tanmoyio\/repos","events_url":"https:\/\/api.github.com\/users\/tanmoyio\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tanmoyio\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq @abhishekkrthakur what happened here ?\r\n","@lhoestq everything alright here ?","@tanmoyio please have patience. @lhoestq has to look at 150+ PRs and it may take time. The PR looks good to me but we wait for his confirmation :) \ud83e\udd17 "],"created_at":1607710358000,"updated_at":1607969288000,"closed_at":1607969288000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1483","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1483","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1483.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1483.patch"},"body":"Dataset name: Times of India News Headlines\r\nlink: https:\/\/dataverse.harvard.edu\/dataset.xhtml?persistentId=doi:10.7910\/DVN\/DPQMQH","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1483\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1482","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1482\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1482\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1482\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1482","id":762686820,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM3MjA4NDk3","number":1482,"title":"Adding medical database chinese and 
english","user":{"login":"vrindaprabhu","id":16264631,"node_id":"MDQ6VXNlcjE2MjY0NjMx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16264631?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vrindaprabhu","html_url":"https:\/\/github.com\/vrindaprabhu","followers_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/followers","following_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/orgs","repos_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/repos","events_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Let me know it that helps !\r\nAlso feel free to ping me if you have other questions or if I can help you.","Now I am getting an Assertion Error!\r\n![image](https:\/\/user-images.githubusercontent.com\/16264631\/101943915-f5bf5600-3c11-11eb-84e5-045bbc472162.png)\r\n","All tests have passed. However, PyTest is still failing with the `AssertionError` as before. Also _datasets_info.json_ actually does not seem to provide much info. Please review and let me know what has to be improved.\r\n\r\nThanks!","[PR-1503](https:\/\/github.com\/huggingface\/datasets\/pull\/1503) on the COVID dialog dataset from the same University has similar features. I kept it separate because it is only on COVID qa. Also consists of only single files per language. Kindly let me know if it has to be added as two more configurations to this existing dataset itself. 
If it has to be added, then can I still freeze the file names like in PR-1503?","It's ok to have them separate"],"created_at":1607709039000,"updated_at":1613453316000,"closed_at":1608056633000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1482","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1482","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1482.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1482.patch"},"body":"Error in creating dummy dataset","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1482\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1481","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1481\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1481\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1481\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1481","id":762579658,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM3MTEwOTM1","number":1481,"title":"Fix ADD_NEW_DATASET to avoid rebasing once pushed","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607704069000,"updated_at":1610014220000,"closed_at":1610014220000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1481","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1481","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1481.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1481.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1481\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1480","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1480\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1480\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1480\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1480","id":762530805,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM3MDY1NDMx","number":1480,"title":"Adding the Mac-Morpho dataset","user":{"login":"jonatasgrosman","id":5097052,"node_id":"MDQ6VXNlcjUwOTcwNTI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5097052?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jonatasgrosman","html_url":"https:\/\/github.com\/jonatasgrosman","followers_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/followers","following_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/orgs","repos_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/repos","events_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607702498000,"updated_at":1608545017000,"closed_at":1608545017000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1480","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1480","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1480.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1480.patch"},"body":"Adding the Mac-Morpho dataset, a Portuguese language dataset for Part-of-speech tagging tasks","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1480\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1479","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1479\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1479\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1479\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1479","id":762320736,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM2ODc3NTEz","number":1479,"title":"Add 
narrativeQA","user":{"login":"ghomasHudson","id":13795113,"node_id":"MDQ6VXNlcjEzNzk1MTEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13795113?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ghomasHudson","html_url":"https:\/\/github.com\/ghomasHudson","followers_url":"https:\/\/api.github.com\/users\/ghomasHudson\/followers","following_url":"https:\/\/api.github.com\/users\/ghomasHudson\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ghomasHudson\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ghomasHudson\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ghomasHudson\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ghomasHudson\/orgs","repos_url":"https:\/\/api.github.com\/users\/ghomasHudson\/repos","events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq this is now only failing some random windows test (it appears to be somewhere in wnut_17)","This is a connection error, you can ignore it :) \r\nThe level of activity on the lib is quite overwhelming, it stresses a bit the CI ^^"],"created_at":1607691511000,"updated_at":1607693603000,"closed_at":1607693603000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1479","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1479","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1479.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1479.patch"},"body":"Redo of #1368 #309 #499\r\n\r\nIn redoing the dummy data a few times, I ended up adding a load of files to git. 
Hopefully this should work.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1479\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1478","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1478\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1478\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1478\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1478","id":762293076,"node_id":"MDU6SXNzdWU3NjIyOTMwNzY=","number":1478,"title":"Inconsistent argument names.","user":{"login":"Fraser-Greenlee","id":8402500,"node_id":"MDQ6VXNlcjg0MDI1MDA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8402500?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Fraser-Greenlee","html_url":"https:\/\/github.com\/Fraser-Greenlee","followers_url":"https:\/\/api.github.com\/users\/Fraser-Greenlee\/followers","following_url":"https:\/\/api.github.com\/users\/Fraser-Greenlee\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Fraser-Greenlee\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Fraser-Greenlee\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Fraser-Greenlee\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Fraser-Greenlee\/orgs","repos_url":"https:\/\/api.github.com\/users\/Fraser-Greenlee\/repos","events_url":"https:\/\/api.github.com\/users\/Fraser-Greenlee\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Fraser-Greenlee\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Also for the `Accuracy` metric the `accuracy_score` method should have its args in the opposite order so `accuracy_score(predictions, references,,,)`.","Thanks for pointing this out ! \ud83d\udd75\ud83c\udffb \r\nPredictions and references should indeed be swapped in the docstring.\r\nHowever, the call to `accuracy_score` should not be changed, it [signature](https:\/\/scikit-learn.org\/stable\/modules\/generated\/sklearn.metrics.accuracy_score.html#sklearn.metrics.accuracy_score) being:\r\n```\r\nsklearn.metrics.accuracy_score(y_true, y_pred, *, normalize=True, sample_weight=None)\r\n```\r\n\r\nFeel free to open a PR if you want to fix this :)"],"created_at":1607689178000,"updated_at":1608390219000,"closed_at":1608390219000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Just find it a wee bit odd that in the transformers library `predictions` are those made by the model:\r\nhttps:\/\/github.com\/huggingface\/transformers\/blob\/master\/src\/transformers\/trainer_utils.py#L51-L61\r\n\r\nWhile in many datasets metrics they are the ground truth labels:\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/c3f53792a744ede18d748a1133b6597fdd2d8d18\/metrics\/accuracy\/accuracy.py#L31-L40\r\n\r\nDo you think predictions & references should be swapped? 
I'd be willing to do some refactoring here if you agree.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1478\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1477","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1477\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1477\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1477\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1477","id":762288811,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM2ODQ5NzM4","number":1477,"title":"Jigsaw toxicity pred","user":{"login":"taihim","id":13764071,"node_id":"MDQ6VXNlcjEzNzY0MDcx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13764071?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/taihim","html_url":"https:\/\/github.com\/taihim","followers_url":"https:\/\/api.github.com\/users\/taihim\/followers","following_url":"https:\/\/api.github.com\/users\/taihim\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/taihim\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/taihim\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/taihim\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/taihim\/orgs","repos_url":"https:\/\/api.github.com\/users\/taihim\/repos","events_url":"https:\/\/api.github.com\/users\/taihim\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/taihim\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607688800000,"updated_at":1607951975000,"closed_at":1607951975000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1477","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1477","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1477.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1477.patch"},"body":"Managed to mess up my original pull request, opening a fresh one incorporating the changes suggested by @lhoestq.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1477\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1476","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1476\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1476\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1476\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1476","id":762256048,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM2ODIxNDI5","number":1476,"title":"Add Spanish Billion Words 
Corpus","user":{"login":"mariagrandury","id":57645283,"node_id":"MDQ6VXNlcjU3NjQ1Mjgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/57645283?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariagrandury","html_url":"https:\/\/github.com\/mariagrandury","followers_url":"https:\/\/api.github.com\/users\/mariagrandury\/followers","following_url":"https:\/\/api.github.com\/users\/mariagrandury\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariagrandury\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariagrandury\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariagrandury\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariagrandury\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariagrandury\/repos","events_url":"https:\/\/api.github.com\/users\/mariagrandury\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariagrandury\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607685898000,"updated_at":1608224648000,"closed_at":1607951671000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1476","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1476","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1476.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1476.patch"},"body":"Add an unannotated Spanish corpus of nearly 1.5 billion words, compiled from different resources from the web.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1476\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1475","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1475\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1475\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1475\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1475","id":762187000,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM2NzYxMDQz","number":1475,"title":"Fix XML iterparse in opus_dogc 
dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607681298000,"updated_at":1608204527000,"closed_at":1608204526000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1475","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1475","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1475.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1475.patch"},"body":"I forgot to add `elem.clear()` to clear the element from memory.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1475\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1474","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1474\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1474\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1474\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1474","id":762083706,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM2NjY4MjU3","number":1474,"title":"Create JSON dummy data without loading all dataset in 
memory","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607676263000,"updated_at":1608207282000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1474","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1474","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1474.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1474.patch"},"body":"See #1442.\r\n\r\nThe statement `json.load()` loads **all the file content in memory**.\r\n\r\nIn order to avoid this, file content should be parsed **iteratively**, by using the library `ijson` e.g.\r\n\r\nI have refactorized the code into a function `_create_json_dummy_data` and I have added some tests.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1474\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1473","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1473\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1473\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1473\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1473","id":762055694,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM2NjQyODI5","number":1473,"title":"add 
srwac","user":{"login":"IvanZidov","id":11391118,"node_id":"MDQ6VXNlcjExMzkxMTE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11391118?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/IvanZidov","html_url":"https:\/\/github.com\/IvanZidov","followers_url":"https:\/\/api.github.com\/users\/IvanZidov\/followers","following_url":"https:\/\/api.github.com\/users\/IvanZidov\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/IvanZidov\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/IvanZidov\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/IvanZidov\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/IvanZidov\/orgs","repos_url":"https:\/\/api.github.com\/users\/IvanZidov\/repos","events_url":"https:\/\/api.github.com\/users\/IvanZidov\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/IvanZidov\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Connection error failed. Need rerun","merging since the CI is fixed on master"],"created_at":1607674829000,"updated_at":1608205259000,"closed_at":1608205259000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1473","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1473","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1473.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1473.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1473\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1472","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1472\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1472\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1472\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1472","id":762037907,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM2NjI2NjUx","number":1472,"title":"add 
Srwac","user":{"login":"IvanZidov","id":11391118,"node_id":"MDQ6VXNlcjExMzkxMTE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11391118?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/IvanZidov","html_url":"https:\/\/github.com\/IvanZidov","followers_url":"https:\/\/api.github.com\/users\/IvanZidov\/followers","following_url":"https:\/\/api.github.com\/users\/IvanZidov\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/IvanZidov\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/IvanZidov\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/IvanZidov\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/IvanZidov\/orgs","repos_url":"https:\/\/api.github.com\/users\/IvanZidov\/repos","events_url":"https:\/\/api.github.com\/users\/IvanZidov\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/IvanZidov\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607673897000,"updated_at":1607674092000,"closed_at":1607673954000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1472","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1472","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1472.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1472.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1472\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1471","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1471\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1471\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1471\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1471","id":761842512,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM2NDUyMzcy","number":1471,"title":"Adding the HAREM dataset","user":{"login":"jonatasgrosman","id":5097052,"node_id":"MDQ6VXNlcjUwOTcwNTI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5097052?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jonatasgrosman","html_url":"https:\/\/github.com\/jonatasgrosman","followers_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/followers","following_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/orgs","repos_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/repos","events_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for the changes !\r\n\r\nSorry if I wasn't clear about the 
suggestion of adding the `raw` dataset as well.\r\nBy `raw` I meant the dataset with its original features, i.e. not tokenized to follow the conll format for NER.\r\nThe `raw` dataset has data fields `doc_text`, `doc_id` and `entities`.","Alright, @lhoestq, now I understand your suggestion, but the JSON files the script downloads aren't actually the \"raw\" HAREM dataset, the [real raw version](https:\/\/www.linguateca.pt\/primeiroHAREM\/harem\/ColeccaoDouradaHAREM.zip) of it is on XML format that needs a lot of preprocessing. Those JSON are just pre-processed versions of the dataset.\r\n\r\nI can make the config of the raw version of the pre-processed dataset, but I'll leave a comment on the dataset summary about those details.","Oh I see ! Sorry I thought there really were the raw (i.e. original\/unprocessed) files","The original xml format doesn't seem very practical to use :\/ \r\nWe can ignore that and only keep the default and selective configs then.\r\nCan you revert to the old configs ?\r\n\r\nThanks again for adding this dataset !","Hi @lhoestq, I've reverted the changes and put more details on README about the dataset."],"created_at":1607656870000,"updated_at":1608633453000,"closed_at":1608633453000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1471","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1471","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1471.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1471.patch"},"body":"Adding the HAREM dataset, a Portuguese language dataset for NER tasks","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1471\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1470","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1470\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1470\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1470\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1470","id":761791065,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM2NDA2MjQx","number":1470,"title":"Add wiki lingua dataset","user":{"login":"katnoria","id":7674948,"node_id":"MDQ6VXNlcjc2NzQ5NDg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7674948?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/katnoria","html_url":"https:\/\/github.com\/katnoria","followers_url":"https:\/\/api.github.com\/users\/katnoria\/followers","following_url":"https:\/\/api.github.com\/users\/katnoria\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/katnoria\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/katnoria\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/katnoria\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/katnoria\/orgs","repos_url":"https:\/\/api.github.com\/users\/katnoria\/repos","events_url":"https:\/\/api.github.com\/users\/katnoria\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/katnoria\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["it\u2019s failing because 
of `RemoteDatasetTest.test_load_dataset_orange_sum`\r\nwhich i think is not the dataset you are doing a PR for. Try rebasing with:\r\n```\r\ngit fetch upstream\r\ngit rebase upstream\/master\r\ngit push -u -f origin your_branch\r\n```","> it\u2019s failing because of `RemoteDatasetTest.test_load_dataset_orange_sum`\r\n> which i think is not the dataset you are doing a PR for. Try rebasing with:\r\n> \r\n> ```\r\n> git fetch upstream\r\n> git rebase upstream\/master\r\n> git push -u -f origin your_branch\r\n> ```\r\n\r\nThanks, my branch seems to be up to date. \r\n```Current branch add-wiki-lingua-dataset is up to date.```","Also where do the google drive urls come from ?","looks like this PR includes changes about many other files than the ones for wiki_lingua.\r\n\r\nCan you create another branch and another PR ?\r\n(or you can try to fix this branch with rebase and push force if you're familiar with it)","Thanks for fixing the dummy data and removing the glob call :) ","> looks like this PR includes changes about many other files than the ones for wiki_lingua.\r\n> \r\n> Can you create another branch and another PR ?\r\n> (or you can try to fix this branch with rebase and push force if you're familiar with it)\r\n\r\nEasier to create a new branch and submit, I have submitted a new PR #1582 ","Closing this one in favor of #1582 "],"created_at":1607652258000,"updated_at":1608132433000,"closed_at":1608132433000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1470","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1470","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1470.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1470.patch"},"body":"Hello @lhoestq ,\r\n\r\nI am opening a fresh pull request as advised in my original PR https:\/\/github.com\/huggingface\/datasets\/pull\/1308\r\n\r\nThanks","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1470\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1469","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1469\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1469\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1469\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1469","id":761611315,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM2MjUzMDk4","number":1469,"title":"ADD: Wino_bias 
dataset","user":{"login":"akshayb7","id":29649801,"node_id":"MDQ6VXNlcjI5NjQ5ODAx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29649801?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/akshayb7","html_url":"https:\/\/github.com\/akshayb7","followers_url":"https:\/\/api.github.com\/users\/akshayb7\/followers","following_url":"https:\/\/api.github.com\/users\/akshayb7\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/akshayb7\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/akshayb7\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/akshayb7\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/akshayb7\/orgs","repos_url":"https:\/\/api.github.com\/users\/akshayb7\/repos","events_url":"https:\/\/api.github.com\/users\/akshayb7\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/akshayb7\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on master"],"created_at":1607633985000,"updated_at":1607886837000,"closed_at":1607886837000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1469","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1469","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1469.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1469.patch"},"body":"Updated PR to counter messed up history of previous one (https:\/\/github.com\/huggingface\/datasets\/pull\/1235) due to rebase.\r\nRemoved manual downloading of dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1469\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1468","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1468\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1468\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1468\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1468","id":761607531,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM2MjQ5OTg0","number":1468,"title":"add Indonesian newspapers 
(id_newspapers_2018)","user":{"login":"cahya-wirawan","id":7669893,"node_id":"MDQ6VXNlcjc2Njk4OTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7669893?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cahya-wirawan","html_url":"https:\/\/github.com\/cahya-wirawan","followers_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/followers","following_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/orgs","repos_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/repos","events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Looks like there's a `Path` issue on windows. Could you try switching to\r\n`glob.glob(os.path.join(article_dir, \"*.json\"))`","> Looks like there's a `Path` issue on windows. Could you try switching to\r\n> `glob.glob(os.path.join(article_dir, \"*.json\"))`\r\n\r\nThanks, I replaced it with glob. Let's see if it solves the issue. Anyway, the main directory has a space, could it make the issue on windows? the test on linux don't have this problem.","It seems glob doesn't help also. Btw, one of the failing test tried to connect aws which failed:\r\n```\r\nC:\\tools\\miniconda3\\lib\\site-packages\\urllib3\\connection.py:160: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\naddress = ('s3.amazonaws.com', 443), timeout = 10, source_address = None\r\nsocket_options = [(6, 1, 1)]\r\n\r\n```\r\nWhy did it try to connect to aws? I don't use it.","It seems that the circleci make a test for whole datasets repository, that means if only one of the dataset in the official repository has a download issue, this will also affect the test of a new dataset like mine, isn't it?\r\nI changed the url to my newspaper dataset which contains only few simple json files and simple directory structure. But it still failed. And it failed not only on windows test. This is one of the error message:\r\n```\r\n-- Docs: https:\/\/docs.pytest.org\/en\/stable\/warnings.html\r\n=========================== short test summary info ============================\r\nFAILED tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_ajgt_twitter_ar\r\nFAILED tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_chr_en\r\nFAILED tests\/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_ajgt_twitter_ar\r\nFAILED tests\/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_chr_en\r\n===== 4 failed, 2667 passed, 2052 skipped, 4 warnings in 432.05s (0:07:12) =====\r\n\r\nExited with code exit status 1\r\nCircleCI received exit code 1\r\n```\r\nThe test failed on twitter dataset even my dataset has nothing to do with twitter? ","merging since the CI is fixed on master","Hi, thanks for merging the dataset. I create a new PR (#1499) since I need to update the link to the dataset. 
"],"created_at":1607633652000,"updated_at":1607763051000,"closed_at":1607706281000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1468","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1468","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1468.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1468.patch"},"body":"The dataset contains around 500K articles (136M of words) from 7 Indonesian newspapers. The size of uncompressed 500K json files (newspapers-json.tgz) is around 2.2GB.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1468\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1467","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1467\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1467\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1467\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1467","id":761557290,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM2MjA3NDcx","number":1467,"title":"adding snow_simplified_japanese_corpus","user":{"login":"forest1988","id":2755894,"node_id":"MDQ6VXNlcjI3NTU4OTQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2755894?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/forest1988","html_url":"https:\/\/github.com\/forest1988","followers_url":"https:\/\/api.github.com\/users\/forest1988\/followers","following_url":"https:\/\/api.github.com\/users\/forest1988\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/forest1988\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/forest1988\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/forest1988\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/forest1988\/orgs","repos_url":"https:\/\/api.github.com\/users\/forest1988\/repos","events_url":"https:\/\/api.github.com\/users\/forest1988\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/forest1988\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on master","Thank you for the updates and merging!"],"created_at":1607629503000,"updated_at":1608211368000,"closed_at":1608204334000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1467","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1467","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1467.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1467.patch"},"body":"Adding simplified Japanese corpus \"SNOW T15\" and \"SNOW T23\".\r\nThey contain original Japanese, simplified Japanese, and original English (the original text is gotten from en-ja translation corpus). 
Hence, it can be used not only for Japanese simplification but also for en-ja translation.\r\n\r\n- http:\/\/www.jnlp.org\/SNOW\/T15\r\n- http:\/\/www.jnlp.org\/SNOW\/T23","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1467\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1466","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1466\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1466\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1466\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1466","id":761554357,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM2MjA0OTMx","number":1466,"title":"Add Turkish News Category Dataset (270K).Updates were made for review\u2026","user":{"login":"basakbuluz","id":41359672,"node_id":"MDQ6VXNlcjQxMzU5Njcy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/41359672?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/basakbuluz","html_url":"https:\/\/github.com\/basakbuluz","followers_url":"https:\/\/api.github.com\/users\/basakbuluz\/followers","following_url":"https:\/\/api.github.com\/users\/basakbuluz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/basakbuluz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/basakbuluz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/basakbuluz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/basakbuluz\/orgs","repos_url":"https:\/\/api.github.com\/users\/basakbuluz\/repos","events_url":"https:\/\/api.github.com\/users\/basakbuluz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/basakbuluz\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@SBrandeis, What exactly is it that makes the tests fail? 
Can you help me please?","These errors\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_ajgt_twitter_ar\r\nFAILED tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_chr_en\r\nFAILED tests\/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_ajgt_twitter_ar\r\nFAILED tests\/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_chr_en\r\n```\r\nappeared on master 3 hours ago and are now fixed.\r\n(it was due to today's update of `xlrd` that broke two datasets)\r\n\r\nYou can ignore them, they're not related to your dataset","> These errors\r\n> \r\n> ```\r\n> =========================== short test summary info ============================\r\n> FAILED tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_ajgt_twitter_ar\r\n> FAILED tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_chr_en\r\n> FAILED tests\/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_ajgt_twitter_ar\r\n> FAILED tests\/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_chr_en\r\n> ```\r\n> \r\n> appeared on master 3 hours ago and are now fixed.\r\n> (it was due to today's update of `xlrd` that broke two datasets)\r\n> \r\n> You can ignore them, they're not related to your dataset\r\n\r\n**If it is not a problem caused by us, we have already completed the review notes. You can then check it out and confirm?**","merging since the CI is fixed on master"],"created_at":1607629272000,"updated_at":1607696835000,"closed_at":1607696835000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1466","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1466","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1466.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1466.patch"},"body":"This PR adds the **Turkish News Categories Dataset (270K)** dataset which is a text classification dataset by me and @yavuzKomecoglu. Turkish news dataset consisting of **273601 news in 17 categories**, compiled from printed media and news websites between 2010 and 2017 by the [Interpress](https:\/\/www.interpress.com\/) media monitoring company.\r\n\r\n**Note**: Resubmitted as a clean version of the previous Pull Request(#1419). 
@SBrandeis @lhoestq ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1466\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1465","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1465\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1465\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1465\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1465","id":761538931,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM2MTkxNjM1","number":1465,"title":"Add clean menyo20k data","user":{"login":"yvonnegitau","id":7923902,"node_id":"MDQ6VXNlcjc5MjM5MDI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7923902?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yvonnegitau","html_url":"https:\/\/github.com\/yvonnegitau","followers_url":"https:\/\/api.github.com\/users\/yvonnegitau\/followers","following_url":"https:\/\/api.github.com\/users\/yvonnegitau\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yvonnegitau\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yvonnegitau\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yvonnegitau\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yvonnegitau\/orgs","repos_url":"https:\/\/api.github.com\/users\/yvonnegitau\/repos","events_url":"https:\/\/api.github.com\/users\/yvonnegitau\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yvonnegitau\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq rerun the tests "],"created_at":1607628120000,"updated_at":1607941821000,"closed_at":1607941821000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1465","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1465","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1465.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1465.patch"},"body":"New Clean PR for menyo20k_mt","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1465\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1464","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1464\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1464\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1464\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1464","id":761533566,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM2MTg3MDA0","number":1464,"title":"Reddit 
jokes","user":{"login":"tanmoyio","id":33005287,"node_id":"MDQ6VXNlcjMzMDA1Mjg3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33005287?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tanmoyio","html_url":"https:\/\/github.com\/tanmoyio","followers_url":"https:\/\/api.github.com\/users\/tanmoyio\/followers","following_url":"https:\/\/api.github.com\/users\/tanmoyio\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tanmoyio\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tanmoyio\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tanmoyio\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tanmoyio\/orgs","repos_url":"https:\/\/api.github.com\/users\/tanmoyio\/repos","events_url":"https:\/\/api.github.com\/users\/tanmoyio\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tanmoyio\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq would you please rerun the test, ","I re-started the test.\r\n\r\n@lhoestq let's hold off on merging for now though, having a conversation on Slack about some of the offensive content in the dataset and how\/whether we want to present it."],"created_at":1607627719000,"updated_at":1607631240000,"closed_at":1607631240000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1464","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1464","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1464.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1464.patch"},"body":"196k Reddit Jokes dataset\r\nDataset link- https:\/\/raw.githubusercontent.com\/taivop\/joke-dataset\/master\/reddit_jokes.json","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1464\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1463","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1463\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1463\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1463\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1463","id":761510908,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM2MTY3NTMw","number":1463,"title":"Adding enriched_web_nlg features + handling xml 
bugs","user":{"login":"TevenLeScao","id":26709476,"node_id":"MDQ6VXNlcjI2NzA5NDc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26709476?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/TevenLeScao","html_url":"https:\/\/github.com\/TevenLeScao","followers_url":"https:\/\/api.github.com\/users\/TevenLeScao\/followers","following_url":"https:\/\/api.github.com\/users\/TevenLeScao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/TevenLeScao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/TevenLeScao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/TevenLeScao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/TevenLeScao\/orgs","repos_url":"https:\/\/api.github.com\/users\/TevenLeScao\/repos","events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607626099000,"updated_at":1608201875000,"closed_at":1608201874000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1463","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1463","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1463.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1463.patch"},"body":"This PR adds features of the enriched_web_nlg dataset that were not present yet (most notably sorted rdf triplet sets), and deals with some xml issues that led to returning no data in cases where surgery could be performed to salvage it.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1463\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1462","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1462\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1462\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1462\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1462","id":761489274,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM2MTQ4Njc1","number":1462,"title":"Added conv ai 2 
(Again)","user":{"login":"rkc007","id":22396042,"node_id":"MDQ6VXNlcjIyMzk2MDQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22396042?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rkc007","html_url":"https:\/\/github.com\/rkc007","followers_url":"https:\/\/api.github.com\/users\/rkc007\/followers","following_url":"https:\/\/api.github.com\/users\/rkc007\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rkc007\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rkc007\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rkc007\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rkc007\/orgs","repos_url":"https:\/\/api.github.com\/users\/rkc007\/repos","events_url":"https:\/\/api.github.com\/users\/rkc007\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rkc007\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Looking perfect to me, need to rerun the tests\r\n","Thanks, @tanmoyio. \r\nHow do I rerun the tests? Should I change something or push a new commit?","@rkc007 you don't need to rerun it, @lhoestq @yjernite will rerun it, as there are huge number of PRs in the queue it might take lil bit of time. ","ive just re-run the tests","Thank you @abhishekkrthakur. Can you please rerun it again? It seems something was broken in CI during the previous test.","@lhoestq Sorry for the mess. I don't know why this keeps on happening. I tried step by step process of updating the PR but seems something is wrong. This happened for 2nd time with the same PR. Apologies for that. \r\n\r\nNew PR -> https:\/\/github.com\/huggingface\/datasets\/pull\/1527\r\nAlso, I fixed everything in the new PR."],"created_at":1607624515000,"updated_at":1607818892000,"closed_at":1607818891000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1462","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1462","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1462.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1462.patch"},"body":"The original PR -> https:\/\/github.com\/huggingface\/datasets\/pull\/1383\r\n\r\nReason for creating again - \r\n\r\nThe reason I had to create the PR again was due to the master rebasing issue. After rebasing the changes, all the previous commits got added to the branch. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1462\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1461","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1461\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1461\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1461\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1461","id":761415420,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM2MDgzODY5","number":1461,"title":"Adding NewsQA dataset","user":{"login":"rsanjaykamath","id":18527321,"node_id":"MDQ6VXNlcjE4NTI3MzIx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/18527321?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rsanjaykamath","html_url":"https:\/\/github.com\/rsanjaykamath","followers_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/followers","following_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/orgs","repos_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/repos","events_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Generate the dummy dataset then regenerate the dataset_info.json file, ","> Generate the dummy dataset then regenerate the dataset_info.json file,\r\n\r\nThe pytest scripts do not accept manual directory inputs for the data provided manually. This is why the tests fail. ","don't use the --auto-generate argument and you will get a brief instructions on how to create dummy data for your dataset,\r\nalso you dont have to run the pytest for main dataset if your data is needed to be downloaded manually, just run the pytest for dummy dataset, and when you will create the json you need to provide the main data directory path by using this argument --data_dir","Thanks for your help, @tanmoyio \r\nI tried with and without --auto_generate flag. \r\nHere are the issues. 
\r\n\r\n**With --auto_generate**\r\n`python datasets-cli dummy_data datasets\/newsqa\/ --auto_generate `\r\n`Traceback (most recent call last):\r\n File \"datasets-cli\", line 36, in <module>\r\n service.run()\r\n File \"\/Users\/sanjaykamath\/Python_Projects\/HuggingFace\/datasets\/src\/datasets\/commands\/dummy_data.py\", line 321, in run\r\n keep_uncompressed=self._keep_uncompressed,\r\n File \"\/Users\/sanjaykamath\/Python_Projects\/HuggingFace\/datasets\/src\/datasets\/commands\/dummy_data.py\", line 340, in _autogenerate_dummy_data\r\n dataset_builder._split_generators(dl_manager)\r\n File \"\/Users\/sanjaykamath\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/newsqa\/7a565b204506c1fd91047290073be54d3ae05fa2b0ab17ae0bc6f709350fcbca\/newsqa.py\", line 180, in _split_generators\r\n path_to_manual_folder = os.path.abspath(os.path.expanduser(dl_manager.manual_dir))\r\n File \"\/Users\/sanjaykamath\/anaconda3\/envs\/huggingface\/lib\/python3.7\/posixpath.py\", line 235, in expanduser\r\n path = os.fspath(path)\r\nTypeError: expected str, bytes or os.PathLike object, not NoneType\r\n`\r\n\r\n\r\n**Without --auto_generate**\r\n`python datasets-cli dummy_data datasets\/newsqa\/`\r\n`Dataset with config BuilderConfig(name='combined-csv', version=1.0.0, data_dir=None, data_files=None, description='This part of the dataset covers the whole dataset in the combined format of CSV as mentioned here: https:\/\/github.com\/Maluuba\/newsqa#csv') seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has to be created with less guidance. Make sure you create the file None.\r\nTraceback (most recent call last):\r\n File \"datasets-cli\", line 36, in <module>\r\n service.run()\r\n File \"\/Users\/sanjaykamath\/Python_Projects\/HuggingFace\/datasets\/src\/datasets\/commands\/dummy_data.py\", line 326, in run\r\n dataset_builder=dataset_builder, mock_dl_manager=mock_dl_manager\r\n File \"\/Users\/sanjaykamath\/Python_Projects\/HuggingFace\/datasets\/src\/datasets\/commands\/dummy_data.py\", line 406, in _print_dummy_data_instructions\r\n for split in generator_splits:\r\nUnboundLocalError: local variable 'generator_splits' referenced before assignment\r\n`\r\n","Excellent comments. Thanks @lhoestq for your valuable comments. \r\nI've changed everything you had mentioned and the tests pass now. \r\nLet me know if something still needs to be changed. ","Thank you very much @lhoestq @tanmoyio @yjernite @thomwolf for all your support :) "],"created_at":1607619670000,"updated_at":1608229743000,"closed_at":1608229656000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1461","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1461","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1461.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1461.patch"},"body":"Since the dataset has legal restrictions to circulate the original data. It has to be manually downloaded by the user and loaded to the library. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1461\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1460","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1460\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1460\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1460\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1460","id":761349149,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM2MDI3NzYy","number":1460,"title":"add Bengali Hate Speech dataset","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq I think you might want to look at the dataset, and the first data instances mentioned in the README.md is very much offensive. Though this dataset is based on hate speech but I found the dataset heavily disturbing as Bengali is my native language.","Hi @tanmoyio indeed you're right.\r\nWe should *at least* add very explicit mentions in the dataset card that the content of this dataset contains very offensive language. We should also put it in perspective with the tasks it tries to solve, the annotation process and the limitations.\r\n\r\nWe have to make sure that nothing is unclear\/misleading nor could lead to bad usage of the dataset.\r\n\r\nWhat do you think @tanmoyio ?\r\nAlso feel free to suggest modifications in the dataset cards if you feel like some sections require corrections or more details","> Hi @tanmoyio indeed you're right.\r\n> We should _at least_ add very explicit mentions in the dataset card that the content of this dataset contains very offensive language. We should also put it in perspective with the tasks it tries to solve, the annotation process and the limitations.\r\n> \r\n> We have to make sure that nothing is unclear\/misleading nor could lead to bad usage of the dataset.\r\n> \r\n> What do you think @tanmoyio ?\r\n> Also feel free to suggest modifications in the dataset cards if you feel like some sections require corrections or more details\r\n\r\nyeah I agree with you. It would be good if \"Personal and Sensitive Information\" and \"Considerations for Using the Data\" is being explained properly in the README.md. @stevhliu ","please let me know if there is anything else you'd like to see!","This looks ok to merge for me. 
Let me know @stevhliu and @tanmoyio if you want to add something or if it looks good to you","looks good to me @lhoestq \ud83d\udc4d ","merging since the CI is fixed on master"],"created_at":1607614855000,"updated_at":1631897693000,"closed_at":1609769309000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1460","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1460","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1460.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1460.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1460\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1459","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1459\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1459\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1459\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1459","id":761258395,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1OTUxMDY2","number":1459,"title":"Add Google Conceptual Captions Dataset (manual download)","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607608233000,"updated_at":1614001666000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1459","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1459","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1459.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1459.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1459\/timeline","performed_via_github_app":null,"is_pull_request":true} 
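The NewsQA thread above revolves around a dataset whose raw files cannot be fetched automatically, so the user downloads them by hand and points the library at the folder. A minimal sketch of that loading flow, assuming the script is merged under the name `newsqa` with the `combined-csv` config that appears in the error output; the local path is a placeholder.

```python
from datasets import load_dataset

# NewsQA has to be downloaded manually for licensing reasons, so the folder
# holding the raw files is passed through `data_dir`. "/path/to/newsqa" is a
# placeholder for wherever the user actually stored the files.
newsqa = load_dataset("newsqa", "combined-csv", data_dir="/path/to/newsqa")
print(newsqa)
```

The `data_dir` argument is what the builder should later see as `dl_manager.manual_dir`, the attribute that shows up as `None` in the traceback when no directory is supplied.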
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1458","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1458\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1458\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1458\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1458","id":761235962,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1OTMyMTA1","number":1458,"title":"Add id_nergrit_corpus","user":{"login":"cahya-wirawan","id":7669893,"node_id":"MDQ6VXNlcjc2Njk4OTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7669893?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cahya-wirawan","html_url":"https:\/\/github.com\/cahya-wirawan","followers_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/followers","following_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/orgs","repos_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/repos","events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on master"],"created_at":1607606434000,"updated_at":1608201915000,"closed_at":1608201915000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1458","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1458","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1458.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1458.patch"},"body":"Nergrit Corpus is a dataset collection of Indonesian Named Entity Recognition, Statement Extraction, and Sentiment Analysis. \r\nRecently my PR for id_nergrit_ner has been accepted and merged to the main branch. 
The id_nergrit_ner has only one dataset (NER), and this new PR renamed the dataset from id_nergrit_ner to id_nergrit_corpus and added 2 other remaining datasets (Statement Extraction, and Sentiment Analysis.)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1458\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1457","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1457\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1457\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1457\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1457","id":761232610,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1OTI5Mjg1","number":1457,"title":"add hrenwac_para","user":{"login":"IvanZidov","id":11391118,"node_id":"MDQ6VXNlcjExMzkxMTE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11391118?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/IvanZidov","html_url":"https:\/\/github.com\/IvanZidov","followers_url":"https:\/\/api.github.com\/users\/IvanZidov\/followers","following_url":"https:\/\/api.github.com\/users\/IvanZidov\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/IvanZidov\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/IvanZidov\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/IvanZidov\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/IvanZidov\/orgs","repos_url":"https:\/\/api.github.com\/users\/IvanZidov\/repos","events_url":"https:\/\/api.github.com\/users\/IvanZidov\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/IvanZidov\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["duplicate"],"created_at":1607606180000,"updated_at":1607607354000,"closed_at":1607607310000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1457","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1457","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1457.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1457.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1457\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1456","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1456\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1456\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1456\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1456","id":761231296,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1OTI4MTc2","number":1456,"title":"Add CC100 
Dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607606077000,"updated_at":1607941209000,"closed_at":1607941208000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1456","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1456","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1456.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1456.patch"},"body":"Closes #773 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1456\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1455","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1455\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1455\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1455\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1455","id":761205073,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1OTA1OTQy","number":1455,"title":"Add HEAD-QA: A Healthcare Dataset for Complex 
Reasoning","user":{"login":"mariagrandury","id":57645283,"node_id":"MDQ6VXNlcjU3NjQ1Mjgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/57645283?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariagrandury","html_url":"https:\/\/github.com\/mariagrandury","followers_url":"https:\/\/api.github.com\/users\/mariagrandury\/followers","following_url":"https:\/\/api.github.com\/users\/mariagrandury\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariagrandury\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariagrandury\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariagrandury\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariagrandury\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariagrandury\/repos","events_url":"https:\/\/api.github.com\/users\/mariagrandury\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariagrandury\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thank you for your review @lhoestq, I've changed the types of `qid` and `ra` and now they are integers as `aid`.\r\n\r\nReady for another review!"],"created_at":1607603816000,"updated_at":1608224612000,"closed_at":1608224291000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1455","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1455","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1455.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1455.patch"},"body":"HEAD-QA is a multi-choice HEAlthcare Dataset, the questions come from exams to access a specialized position in the\r\nSpanish healthcare system.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1455\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1454","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1454\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1454\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1454\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1454","id":761199862,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1OTAxNjk4","number":1454,"title":"Add 
kinnews_kirnews","user":{"login":"saradhix","id":1351362,"node_id":"MDQ6VXNlcjEzNTEzNjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1351362?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/saradhix","html_url":"https:\/\/github.com\/saradhix","followers_url":"https:\/\/api.github.com\/users\/saradhix\/followers","following_url":"https:\/\/api.github.com\/users\/saradhix\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/saradhix\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/saradhix\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/saradhix\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/saradhix\/orgs","repos_url":"https:\/\/api.github.com\/users\/saradhix\/repos","events_url":"https:\/\/api.github.com\/users\/saradhix\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/saradhix\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on master"],"created_at":1607603348000,"updated_at":1608230056000,"closed_at":1608230056000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1454","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1454","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1454.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1454.patch"},"body":"Add kinnews and kirnews","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1454\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1453","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1453\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1453\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1453\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1453","id":761188657,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1ODkyNTM5","number":1453,"title":"Adding ethos dataset clean","user":{"login":"iamollas","id":22838900,"node_id":"MDQ6VXNlcjIyODM4OTAw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22838900?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/iamollas","html_url":"https:\/\/github.com\/iamollas","followers_url":"https:\/\/api.github.com\/users\/iamollas\/followers","following_url":"https:\/\/api.github.com\/users\/iamollas\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/iamollas\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/iamollas\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/iamollas\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/iamollas\/orgs","repos_url":"https:\/\/api.github.com\/users\/iamollas\/repos","events_url":"https:\/\/api.github.com\/users\/iamollas\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/iamollas\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> Thanks !\r\n\r\nThanks as well for your hard work \ud83d\ude0a!!","merging 
since the CI is fixed on master"],"created_at":1607602401000,"updated_at":1607958046000,"closed_at":1607941884000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1453","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1453","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1453.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1453.patch"},"body":"I addressed the comments on the PR1318","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1453\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1452","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1452\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1452\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1452\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1452","id":761104924,"node_id":"MDU6SXNzdWU3NjExMDQ5MjQ=","number":1452,"title":"SNLI dataset contains labels with value -1","user":{"login":"aarnetalman","id":11405654,"node_id":"MDQ6VXNlcjExNDA1NjU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11405654?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/aarnetalman","html_url":"https:\/\/github.com\/aarnetalman","followers_url":"https:\/\/api.github.com\/users\/aarnetalman\/followers","following_url":"https:\/\/api.github.com\/users\/aarnetalman\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/aarnetalman\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/aarnetalman\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/aarnetalman\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/aarnetalman\/orgs","repos_url":"https:\/\/api.github.com\/users\/aarnetalman\/repos","events_url":"https:\/\/api.github.com\/users\/aarnetalman\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/aarnetalman\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I believe the `-1` label is used for missing\/NULL data as per HuggingFace Dataset conventions. If I recall correctly SNLI has some entries with no (gold) labels in the dataset.","Ah, you're right. The dataset has some pairs with missing labels. 
Thanks for reminding me."],"created_at":1607595415000,"updated_at":1607622595000,"closed_at":1607622595000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"```\r\nimport datasets\r\nnli_data = datasets.load_dataset(\"snli\")\r\ntrain_data = nli_data['train']\r\ntrain_labels = train_data['label']\r\nlabel_set = set(train_labels)\r\nprint(label_set)\r\n```\r\n\r\n**Output:**\r\n`{0, 1, 2, -1}`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1452\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1451","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1451\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1451\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1451\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1451","id":761102770,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1ODIwOTY3","number":1451,"title":"Add European Center for Disease Control and Preventions's (ECDC) Translation Memory dataset","user":{"login":"SBrandeis","id":33657802,"node_id":"MDQ6VXNlcjMzNjU3ODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33657802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SBrandeis","html_url":"https:\/\/github.com\/SBrandeis","followers_url":"https:\/\/api.github.com\/users\/SBrandeis\/followers","following_url":"https:\/\/api.github.com\/users\/SBrandeis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SBrandeis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SBrandeis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SBrandeis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SBrandeis\/orgs","repos_url":"https:\/\/api.github.com\/users\/SBrandeis\/repos","events_url":"https:\/\/api.github.com\/users\/SBrandeis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SBrandeis\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607595260000,"updated_at":1607705409000,"closed_at":1607705409000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1451","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1451","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1451.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1451.patch"},"body":"ECDC-TM homepage: https:\/\/ec.europa.eu\/jrc\/en\/language-technologies\/ecdc-translation-memory","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1451\/timeline","performed_via_github_app":null,"is_pull_request":true} 
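The SNLI issue above establishes that `-1` is the library's convention for pairs whose gold label is missing. A short sketch, assuming one simply wants to drop those rows before training; the filtering step is an illustration rather than anything prescribed in the thread.

```python
import datasets

nli_data = datasets.load_dataset("snli")
train_data = nli_data["train"]

# -1 marks pairs with no gold label, so drop them before training.
clean_train = train_data.filter(lambda example: example["label"] != -1)

print(set(clean_train["label"]))  # expected: {0, 1, 2}
```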
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1450","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1450\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1450\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1450\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1450","id":761102429,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1ODIwNjg0","number":1450,"title":"Fix version in bible_para","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607595235000,"updated_at":1607704841000,"closed_at":1607704840000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1450","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1450","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1450.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1450.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1450\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1449","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1449\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1449\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1449\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1449","id":761083210,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1ODA0MzEy","number":1449,"title":"add W&I + LOCNESS dataset (BEA-2019 workshop shared task on GEC) 
[PROPER]","user":{"login":"aseifert","id":4944799,"node_id":"MDQ6VXNlcjQ5NDQ3OTk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4944799?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/aseifert","html_url":"https:\/\/github.com\/aseifert","followers_url":"https:\/\/api.github.com\/users\/aseifert\/followers","following_url":"https:\/\/api.github.com\/users\/aseifert\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/aseifert\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/aseifert\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/aseifert\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/aseifert\/orgs","repos_url":"https:\/\/api.github.com\/users\/aseifert\/repos","events_url":"https:\/\/api.github.com\/users\/aseifert\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/aseifert\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["linter your code with flake8 and also run the commands present in Makefile for proper formatting \r\n","merging since the CI is fixed on master"],"created_at":1607593868000,"updated_at":1607706466000,"closed_at":1607706466000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1449","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1449","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1449.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1449.patch"},"body":"- **Name:** W&I + LOCNESS dataset (from the BEA-2019 workshop shared task on GEC)\r\n- **Description:** https:\/\/www.cl.cam.ac.uk\/research\/nl\/bea2019st\/#data\r\n- **Paper:** https:\/\/www.aclweb.org\/anthology\/W19-4406\/\r\n- **Motivation:** This is a recent dataset (actually two in one) for grammatical error correction and is used for benchmarking in this field of NLP.\r\n\r\n### Checkbox\r\n\r\n- [x] Create the dataset script `\/datasets\/my_dataset\/my_dataset.py` using the template\r\n- [x] Fill the `_DESCRIPTION` and `_CITATION` variables\r\n- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`\r\n- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.\r\n- [x] Generate the metadata file `dataset_infos.json` for all configurations\r\n- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)\r\n- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs\r\n- [x] Both tests for the real data and the dummy data pass.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1449\/timeline","performed_via_github_app":null,"is_pull_request":true} 
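The checklist in the W&I + LOCNESS submission above names the pieces every dataset script implements (the info method, `_split_generators()`, `_generate_examples()`, and `BUILDER_CONFIGS`). A compressed sketch of that shape, assuming a trivial one-text-column dataset; the class name, download URL, and feature schema are placeholders.

```python
import datasets

_DESCRIPTION = "Toy description."
_CITATION = "Toy citation."


class MyDataset(datasets.GeneratorBasedBuilder):
    BUILDER_CONFIGS = [
        datasets.BuilderConfig(name="default", version=datasets.Version("1.0.0"))
    ]

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            citation=_CITATION,
            features=datasets.Features({"text": datasets.Value("string")}),
        )

    def _split_generators(self, dl_manager):
        # download_and_extract returns the local path of the fetched file
        path = dl_manager.download_and_extract("https://example.com/data.txt")
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"filepath": path}
            )
        ]

    def _generate_examples(self, filepath):
        with open(filepath, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                yield idx, {"text": line.strip()}
```

Additional splits would simply add more `SplitGenerator` entries with their own `gen_kwargs`.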
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1448","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1448\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1448\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1448\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1448","id":761080776,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1ODAyNDM3","number":1448,"title":"add thai_toxicity_tweet","user":{"login":"cstorm125","id":15519308,"node_id":"MDQ6VXNlcjE1NTE5MzA4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15519308?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cstorm125","html_url":"https:\/\/github.com\/cstorm125","followers_url":"https:\/\/api.github.com\/users\/cstorm125\/followers","following_url":"https:\/\/api.github.com\/users\/cstorm125\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cstorm125\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cstorm125\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cstorm125\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cstorm125\/orgs","repos_url":"https:\/\/api.github.com\/users\/cstorm125\/repos","events_url":"https:\/\/api.github.com\/users\/cstorm125\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cstorm125\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607593682000,"updated_at":1607703687000,"closed_at":1607703687000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1448","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1448","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1448.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1448.patch"},"body":"Thai Toxicity Tweet Corpus contains 3,300 tweets (506 tweets with texts missing) annotated by humans with guidelines including a 44-word dictionary. The author obtained 2,027 and 1,273 toxic and non-toxic tweets, respectively; these were labeled by three annotators. The result of corpus analysis indicates that tweets that include toxic words are not always toxic. Further, it is more likely that a tweet is toxic, if it contains toxic words indicating their original meaning. Moreover, disagreements in annotation are primarily because of sarcasm, unclear existing target, and word sense ambiguity.\r\n\r\nNotes from data cleaner: The data is included into [huggingface\/datasets](https:\/\/www.github.com\/huggingface\/datasets) in Dec 2020. By this time, 506 of the tweets are not available publicly anymore. 
We denote these by `TWEET_NOT_FOUND` in `tweet_text`.\r\nProcessing can be found at [this PR](https:\/\/github.com\/tmu-nlp\/ThaiToxicityTweetCorpus\/pull\/1).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1448\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1447","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1447\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1447\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1447\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1447","id":761067955,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NzkxODk1","number":1447,"title":"Update step-by-step guide for windows","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @thomwolf, for simplification purposes, I think you could remove the \"`pip install ...`\" steps from this commit, 'cause these deps (black, isort, flake8) are already installed on `pip install -e \".[dev]\"` on the [Start by preparing your environment](https:\/\/github.com\/huggingface\/datasets\/blob\/704107f924e74445f6f0fbde69a218b72178b588\/ADD_NEW_DATASET.md#start-by-preparing-your-environment)\r\n"],"created_at":1607592659000,"updated_at":1607602727000,"closed_at":1607592674000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1447","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1447","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1447.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1447.patch"},"body":"Update step-by-step guide for windows to give an alternative to `make style`.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1447\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1446","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1446\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1446\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1446\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1446","id":761060323,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1Nzg1NDk1","number":1446,"title":"Add Bing Coronavirus Query Set","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607592046000,"updated_at":1607706188000,"closed_at":1607706187000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1446","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1446","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1446.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1446.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1446\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1445","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1445\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1445\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1445\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1445","id":761057851,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NzgzMzY2","number":1445,"title":"Added dataset 
clickbait_news_bg","user":{"login":"tsvm","id":1083319,"node_id":"MDQ6VXNlcjEwODMzMTk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1083319?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tsvm","html_url":"https:\/\/github.com\/tsvm","followers_url":"https:\/\/api.github.com\/users\/tsvm\/followers","following_url":"https:\/\/api.github.com\/users\/tsvm\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tsvm\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tsvm\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tsvm\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tsvm\/orgs","repos_url":"https:\/\/api.github.com\/users\/tsvm\/repos","events_url":"https:\/\/api.github.com\/users\/tsvm\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tsvm\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Looks like this PR includes changes about many other files than the ones for clickbait_news_bg\r\n\r\nCan you create another branch and another PR please ?","I created a new branch with the dataset code and submitted a new PR for it: https:\/\/github.com\/huggingface\/datasets\/pull\/1568"],"created_at":1607591848000,"updated_at":1608018319000,"closed_at":1608018319000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1445","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1445","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1445.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1445.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1445\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1444","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1444\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1444\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1444\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1444","id":761055651,"node_id":"MDU6SXNzdWU3NjEwNTU2NTE=","number":1444,"title":"FileNotFound remotly, can't load a 
dataset","user":{"login":"sadakmed","id":18331629,"node_id":"MDQ6VXNlcjE4MzMxNjI5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/18331629?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sadakmed","html_url":"https:\/\/github.com\/sadakmed","followers_url":"https:\/\/api.github.com\/users\/sadakmed\/followers","following_url":"https:\/\/api.github.com\/users\/sadakmed\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sadakmed\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sadakmed\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sadakmed\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sadakmed\/orgs","repos_url":"https:\/\/api.github.com\/users\/sadakmed\/repos","events_url":"https:\/\/api.github.com\/users\/sadakmed\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sadakmed\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This dataset will be available in version-2 of the library. If you want to use this dataset now, install datasets from `master` branch rather.\r\n\r\nCommand to install datasets from `master` branch:\r\n`!pip install git+https:\/\/github.com\/huggingface\/datasets.git@master`","Closing this, thanks @VasudevGupta7 "],"created_at":1607591687000,"updated_at":1608054074000,"closed_at":1608054074000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"```py\r\n!pip install datasets\r\nimport datasets as ds\r\n\r\ncorpus = ds.load_dataset('large_spanish_corpus')\r\n```\r\ngives the error\r\n\r\n> FileNotFoundError: Couldn't find file locally at large_spanish_corpus\/large_spanish_corpus.py, or remotely at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.3\/datasets\/large_spanish_corpus\/large_spanish_corpus.py or https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/datasets\/datasets\/large_spanish_corpus\/large_spanish_corpus.py\r\n\r\nnot just `large_spanish_corpus`, `zest` too, but `squad` is available. 
\r\n\r\nthis was using colab and localy ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1444\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1443","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1443\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1443\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1443\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1443","id":761033061,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NzYyNTQ1","number":1443,"title":"Add OPUS Wikimedia Translations Dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607589782000,"updated_at":1608056510000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1443","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1443","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1443.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1443.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1443\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1442","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1442\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1442\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1442\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1442","id":761026069,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NzU2Nzgx","number":1442,"title":"Create XML dummy data without loading all dataset in 
memory","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607589127000,"updated_at":1608199183000,"closed_at":1608199183000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1442","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1442","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1442.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1442.patch"},"body":"While I was adding one XML dataset, I noticed that all the dataset was loaded in memory during the dummy data generation process (using nearly all my laptop RAM).\r\n\r\nLooking at the code, I have found that the origin is the use of `ET.parse()`. 
This method loads **all the file content in memory**.\r\n\r\nIn order to fix this, I have refactorized the code and use `ET.iterparse()` instead, which **parses the file content incrementally**.\r\n\r\nI have also implemented a test.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1442\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1441","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1441\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1441\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1441\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1441","id":761021823,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NzUzMjI5","number":1441,"title":"Add Igbo-English Machine Translation Dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607588734000,"updated_at":1607702093000,"closed_at":1607702092000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1441","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1441","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1441.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1441.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1441\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1440","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1440\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1440\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1440\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1440","id":760973057,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NzEyNDY1","number":1440,"title":"Adding english plaintext jokes 
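The PR body above contrasts `ET.parse()`, which materialises the whole tree in memory, with `ET.iterparse()`, which walks the file incrementally. A minimal sketch of the incremental pattern, assuming a file `dump.xml` made of repeated `record` elements; both names are hypothetical.

```python
import xml.etree.ElementTree as ET

# ET.parse("dump.xml") would build the entire tree at once; iterparse yields
# each element as its closing tag is read, so it can be handled and discarded.
count = 0
for _event, elem in ET.iterparse("dump.xml", events=("end",)):
    if elem.tag == "record":  # hypothetical element name
        count += 1
    elem.clear()  # free the element once handled to keep memory usage flat
print(count)
```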
dataset","user":{"login":"purvimisal","id":22298787,"node_id":"MDQ6VXNlcjIyMjk4Nzg3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22298787?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/purvimisal","html_url":"https:\/\/github.com\/purvimisal","followers_url":"https:\/\/api.github.com\/users\/purvimisal\/followers","following_url":"https:\/\/api.github.com\/users\/purvimisal\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/purvimisal\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/purvimisal\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/purvimisal\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/purvimisal\/orgs","repos_url":"https:\/\/api.github.com\/users\/purvimisal\/repos","events_url":"https:\/\/api.github.com\/users\/purvimisal\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/purvimisal\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @purvimisal, thanks for your contributions!\r\n\r\nThis jokes dataset has come up before, and after a conversation with the initial submitter, we decided not to add it then. Humor is important, but looking at the actual data points in this set raises several concerns :) \r\n\r\nThe main issue is the Reddit part of the dataset which has most of the examples. A cursory look at the data shows a large number of highly offensive jokes that reproduce some pretty harmful biases (the second one from the top is a Holocaust joke). \r\n\r\nThe other two sources have similar issues (especially the \"Blond Jokes\") to a slightly lesser extent.\r\n\r\nWhile such datasets can be useful in the right context, there is a real concern that people using the library might miss some of this context (however much we outline it), and end up unwittingly training models that rely on some pretty racist and sexist associations.\r\n\r\nWe would recommend skipping this dataset altogether.\r\n\r\nIf you feel really strongly about having a joke dataset, then we would ask that you:\r\n- remove the Reddit part of the dataset altogether\r\n- write an in-depth description of the social biases present in the remaining data\r\n\r\nLet us know which of the two you decide! And if you want recommendations on other datasets to add, hit us up on Slack \ud83e\udd17 ","Hi @yjernite, thanks so much. I should've totally thought about this earlier. The harmful biases make so much sense. I should've consulted before making a PR. \r\nI will be closing this one and skipping this dataset altogether. \r\nThanks again \r\n"],"created_at":1607583857000,"updated_at":1607836920000,"closed_at":1607752543000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1440","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1440","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1440.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1440.patch"},"body":"This PR adds a dataset of 200k English plaintext Jokes from three sources: Reddit, Stupidstuff, and Wocka.\r\nLink: https:\/\/github.com\/taivop\/joke-dataset \r\n\r\nThis is my second PR. 
\r\nFirst was: [#1269 ](https:\/\/github.com\/huggingface\/datasets\/pull\/1269)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1440\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1439","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1439\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1439\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1439\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1439","id":760968410,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NzA4NDU1","number":1439,"title":"Update README.md","user":{"login":"tuner007","id":46425391,"node_id":"MDQ6VXNlcjQ2NDI1Mzkx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/46425391?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tuner007","html_url":"https:\/\/github.com\/tuner007","followers_url":"https:\/\/api.github.com\/users\/tuner007\/followers","following_url":"https:\/\/api.github.com\/users\/tuner007\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tuner007\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tuner007\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tuner007\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tuner007\/orgs","repos_url":"https:\/\/api.github.com\/users\/tuner007\/repos","events_url":"https:\/\/api.github.com\/users\/tuner007\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tuner007\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607583421000,"updated_at":1607700173000,"closed_at":1607700173000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1439","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1439","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1439.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1439.patch"},"body":"1k-10k -> 1k-1M\r\n\r\n3 separate configs are available with min. 1K and max. 
211.3k examples","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1439\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1438","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1438\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1438\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1438\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1438","id":760962193,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NzAzMTEw","number":1438,"title":"A descriptive name for my changes","user":{"login":"rahul-art","id":56379013,"node_id":"MDQ6VXNlcjU2Mzc5MDEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/56379013?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rahul-art","html_url":"https:\/\/github.com\/rahul-art","followers_url":"https:\/\/api.github.com\/users\/rahul-art\/followers","following_url":"https:\/\/api.github.com\/users\/rahul-art\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rahul-art\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rahul-art\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rahul-art\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rahul-art\/orgs","repos_url":"https:\/\/api.github.com\/users\/rahul-art\/repos","events_url":"https:\/\/api.github.com\/users\/rahul-art\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rahul-art\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I have noticed that the master branch of your fork has diverged from the one of the repo. This is probably what causes the mess in the github diff \"Files changed\".\r\n\r\nI would suggest to re-fork the `datasets` repo and recreate a new branch and a new PR. 
","You're pretty close to having all things ready to merge !\r\nFeel free to ping me when you have a new PR","Closing this one in favor of #1575 "],"created_at":1607582844000,"updated_at":1608028587000,"closed_at":1608028586000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1438","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1438","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1438.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1438.patch"},"body":"hind encorp resubmited","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1438\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1437","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1437\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1437\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1437\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1437","id":760891879,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NjQwODE0","number":1437,"title":"Add Indosum dataset","user":{"login":"prasastoadi","id":11614678,"node_id":"MDQ6VXNlcjExNjE0Njc4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11614678?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/prasastoadi","html_url":"https:\/\/github.com\/prasastoadi","followers_url":"https:\/\/api.github.com\/users\/prasastoadi\/followers","following_url":"https:\/\/api.github.com\/users\/prasastoadi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/prasastoadi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/prasastoadi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/prasastoadi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/prasastoadi\/orgs","repos_url":"https:\/\/api.github.com\/users\/prasastoadi\/repos","events_url":"https:\/\/api.github.com\/users\/prasastoadi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/prasastoadi\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @prasastoadi have you had a chance to take a look at my suggestions ?\r\n\r\nFeel free to ping ;e if you have questions or when you're ready for a review"],"created_at":1607576520000,"updated_at":1608198679000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1437","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1437","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1437.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1437.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1437\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1436","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1436\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1436\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1436\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1436","id":760873132,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NjI1MDM0","number":1436,"title":"add ALT","user":{"login":"chameleonTK","id":6429850,"node_id":"MDQ6VXNlcjY0Mjk4NTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6429850?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/chameleonTK","html_url":"https:\/\/github.com\/chameleonTK","followers_url":"https:\/\/api.github.com\/users\/chameleonTK\/followers","following_url":"https:\/\/api.github.com\/users\/chameleonTK\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/chameleonTK\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/chameleonTK\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/chameleonTK\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/chameleonTK\/orgs","repos_url":"https:\/\/api.github.com\/users\/chameleonTK\/repos","events_url":"https:\/\/api.github.com\/users\/chameleonTK\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/chameleonTK\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The errors in de CI are fixed on master so it's fine"],"created_at":1607573841000,"updated_at":1607876058000,"closed_at":1607701961000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1436","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1436","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1436.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1436.patch"},"body":"ALT dataset -- https:\/\/www2.nict.go.jp\/astrec-att\/member\/mutiyama\/ALT\/","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1436\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1435","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1435\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1435\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1435\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1435","id":760867325,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NjIwODE4","number":1435,"title":"Add FreebaseQA 
dataset","user":{"login":"anaerobeth","id":3663322,"node_id":"MDQ6VXNlcjM2NjMzMjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3663322?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/anaerobeth","html_url":"https:\/\/github.com\/anaerobeth","followers_url":"https:\/\/api.github.com\/users\/anaerobeth\/followers","following_url":"https:\/\/api.github.com\/users\/anaerobeth\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/anaerobeth\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/anaerobeth\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/anaerobeth\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/anaerobeth\/orgs","repos_url":"https:\/\/api.github.com\/users\/anaerobeth\/repos","events_url":"https:\/\/api.github.com\/users\/anaerobeth\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/anaerobeth\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@yjernite @lhoestq Any suggestions on how to get the dummy data generator to recognize the columns? The structure of the json is:\r\n```\r\n{\r\n \"Dataset\": \"FreebaseQA-eval\", \r\n \"Version\": \"1.0\", \r\n \"Questions\": [\r\n {\r\n \"Question-ID\": \"FreebaseQA-eval-0\", \r\n \"RawQuestion\": \"Who is the female presenter of the Channel 4 quiz show '1001 things you should know'?\", \r\n \"ProcessedQuestion\": \"who is the female presenter of the channel 4 quiz show '1001 things you should know'\", \r\n \"Parses\": [\r\n {\r\n \"Parse-Id\": \"FreebaseQA-eval-0.P0\", \r\n \"PotentialTopicEntityMention\": \"1001 things you should know\", \r\n \"TopicEntityName\": \"1001 things you should know\", \r\n \"TopicEntityMid\": \"m.0nd3t34\", \r\n \"InferentialChain\": \"tv.tv_program.regular_personal_appearances..tv.tv_regular_personal_appearance.person\", \r\n \"Answers\": [\r\n {\r\n \"AnswersMid\": \"m.0216y_\", \r\n \"AnswersName\": [\r\n \"sandi toksvig\"\r\n ]\r\n }\r\n ]\r\n }\r\n ]\r\n }, \r\n ...\r\n ]\r\n}\r\n```\r\n\r\nThanks!","Unfortunately this json structure is not recognized by the auto-generation yet, so you'd have to create the dummy data manually. \r\nYou can get some instructions on how to do that with: `python datasets-cli dummy_data datasets\/freebase_qa`\r\nWe can definitely help you with that if there are too many files! ","@yjernite Thanks for the instructions. I manually added dummy data and created the zip file but one of the splits seem to return an empty list.\r\n\r\n```\r\ntests\/test_dataset_common.py F [100%]\r\n\r\n========================= FAILURES ==========================\r\n_ LocalDatasetTest.test_load_dataset_all_configs_freebase_qa _\r\n\r\nself = <tests.test_dataset_common.LocalDatasetTest testMethod=test_load_dataset_all_configs_freebase_qa>\r\ndataset_name = 'freebase_qa'\r\n\r\n @slow\r\n def test_load_dataset_all_configs(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True)\r\n\r\ntests\/test_dataset_common.py:237:\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\ntests\/test_dataset_common.py:198: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n```\r\n\r\nNote that the dataset has `train`, `eval`, and `dev` (no test split). 
I am not sure if I am mapping them correctly when I called the Split Generator.\r\n","The dummy json files must follow the exact same structure as the original json files.\r\n\r\nHowever it looks like the dummy json files you have in your dummy_data.zip file are not structured the same way.\r\nFor example the original json is a dict with a field \"Questions\" that is a list of items.\r\nHowever your dummy json is simply a list of items.\r\n\r\nCan you update your dummy json files to follow the same structure ?","And I'm pretty sure that this structure is supported by the dummy data auto-generation tool\r\n```\r\npython datasets-cli dummy_data .\/datasets\/freebase_qa --json_field \"Questions\"\r\n```","Hi @anaerobeth did you manage to get the dummy data right ?\r\n\r\nFeel free to ping me if you have questions or when you're ready for a review","Thanks for your help! I am able to create the dummy data with the dict structure as suggested. I'll add the tags and update this PR shortly.","Also don't forget to run `make style` to fix the code formatting check in the CI :)","Hi @anaerobeth ! Have you had a chance to consider updating the dataset script to yield one example per question ?\r\n\r\nFeel free to ping me if you have questions or if I can help :) ","Hi @lhoestq,\r\n\r\nI am willing to take this forward if you and @anaerobeth don't mind.\r\n","Hi @gchhablani thanks for proposing your help :) \r\nSure if you want to take this forward feel free to do so.\r\nAlso pinging @anaerobeth to make sure that you both don't work on the same thing at the same time","Hi ! Closing this one since the dataset was added in #1814 \r\n\r\nThanks you two @anaerobeth and @gchhablani for adding this dataset !"],"created_at":1607573007000,"updated_at":1612518450000,"closed_at":1612518450000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1435","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1435","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1435.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1435.patch"},"body":"This PR adds the FreebaseQA dataset: A Trivia-type QA Data Set over the Freebase Knowledge Graph\r\n\r\nRepo: https:\/\/github.com\/kelvin-jiang\/FreebaseQA\r\n\r\nPaper: https:\/\/www.aclweb.org\/anthology\/N19-1028.pdf\r\n\r\n\r\n## TODO: create dummy data\r\n\r\nError encountered when running `python datasets-cli dummy_data datasets\/freebase_qa --auto_generate`\r\n```\r\n f\"Couldn't parse columns {list(json_data.keys())}. \"\r\nValueError: Couldn't parse columns ['Dataset', 'Version', 'Questions']. 
Maybe specify which json field must be used to read the data with --json_field <my_field>.\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1435\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1434","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1434\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1434\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1434\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1434","id":760821474,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NTg3NjEx","number":1434,"title":"add_sofc_materials_articles","user":{"login":"ZacharySBrown","id":7950786,"node_id":"MDQ6VXNlcjc5NTA3ODY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7950786?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ZacharySBrown","html_url":"https:\/\/github.com\/ZacharySBrown","followers_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/followers","following_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/orgs","repos_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/repos","events_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hey @lhoestq , thanks for the feedback on this! I updated the `_generate_examples` with some comments on the process, and reduced the `dummy_data.zip` down quite a bit as well. \r\n\r\nFor the dummy data, I reduced the text to only three sentences, and aligned the corresponding entity\/token\/sentence annotations to that (reduced accordingly). The frames file is a strange combined format for the annotations and I found if I reduced that that would break the parser no matter what I did, so I left that as is. 
The difference between a reduced frames and non-reduced frames file in the compressed dummy data was only about ~4kb, so hopefully leaving this as is will be ok!"],"created_at":1607566502000,"updated_at":1608199194000,"closed_at":1608199194000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1434","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1434","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1434.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1434.patch"},"body":"adding [SOFC-Exp Corpus](https:\/\/arxiv.org\/abs\/2006.03039)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1434\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1433","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1433\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1433\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1433\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1433","id":760813539,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NTgxNzE3","number":1433,"title":"Adding the ASSIN 2 dataset","user":{"login":"jonatasgrosman","id":5097052,"node_id":"MDQ6VXNlcjUwOTcwNTI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5097052?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jonatasgrosman","html_url":"https:\/\/github.com\/jonatasgrosman","followers_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/followers","following_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/orgs","repos_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/repos","events_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607565422000,"updated_at":1607697176000,"closed_at":1607697176000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1433","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1433","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1433.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1433.patch"},"body":"Adding the ASSIN 2 dataset, a Portuguese language dataset for Natural Language Inference and Semantic Similarity Scoring","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1433\/timeline","performed_via_github_app":null,"is_pull_request":true} 
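[Editor's note, not part of the API records above] The FreebaseQA thread (#1435) quotes the nested JSON layout of the source files (a top-level dict whose examples live under `"Questions"`, hence the `--json_field "Questions"` flag for the dummy-data tool) and the reviewer's suggestion to yield one example per question. Below is a minimal sketch of such a generation loop under those assumptions; it only uses the field names shown in the thread, and the merged dataset (#1814) may define a richer schema.

```python
import json

def generate_examples(filepath):
    """Sketch of a per-question generator over the FreebaseQA layout quoted in #1435."""
    with open(filepath, encoding="utf-8") as f:
        data = json.load(f)
    # The examples sit under the "Questions" key of the top-level dict.
    for idx, question in enumerate(data["Questions"]):
        yield idx, {
            "question_id": question["Question-ID"],
            "raw_question": question["RawQuestion"],
            "processed_question": question["ProcessedQuestion"],
        }
```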
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1432","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1432\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1432\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1432\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1432","id":760808449,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NTc3ODk3","number":1432,"title":"Adding journalists questions dataset","user":{"login":"MaramHasanain","id":3918663,"node_id":"MDQ6VXNlcjM5MTg2NjM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3918663?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/MaramHasanain","html_url":"https:\/\/github.com\/MaramHasanain","followers_url":"https:\/\/api.github.com\/users\/MaramHasanain\/followers","following_url":"https:\/\/api.github.com\/users\/MaramHasanain\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/MaramHasanain\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/MaramHasanain\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/MaramHasanain\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/MaramHasanain\/orgs","repos_url":"https:\/\/api.github.com\/users\/MaramHasanain\/repos","events_url":"https:\/\/api.github.com\/users\/MaramHasanain\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/MaramHasanain\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq Thanks a lot for checking! I hope I addressed all your comments. ","merging since the CI is fixed on master"],"created_at":1607564687000,"updated_at":1607953865000,"closed_at":1607953864000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1432","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1432","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1432.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1432.patch"},"body":"This is my first dataset to be added to HF. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1432\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1431","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1431\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1431\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1431\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1431","id":760791019,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NTYzOTk1","number":1431,"title":"Ar cov19","user":{"login":"Fatima-Haouari","id":71061623,"node_id":"MDQ6VXNlcjcxMDYxNjIz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/71061623?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Fatima-Haouari","html_url":"https:\/\/github.com\/Fatima-Haouari","followers_url":"https:\/\/api.github.com\/users\/Fatima-Haouari\/followers","following_url":"https:\/\/api.github.com\/users\/Fatima-Haouari\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Fatima-Haouari\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Fatima-Haouari\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Fatima-Haouari\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Fatima-Haouari\/orgs","repos_url":"https:\/\/api.github.com\/users\/Fatima-Haouari\/repos","events_url":"https:\/\/api.github.com\/users\/Fatima-Haouari\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Fatima-Haouari\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on master"],"created_at":1607561974000,"updated_at":1607698883000,"closed_at":1607698883000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1431","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1431","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1431.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1431.patch"},"body":"Adding ArCOV-19 dataset. ArCOV-19 is an Arabic COVID-19 Twitter dataset that covers the period from 27th of January till 30th of April 2020. ArCOV-19 is the first publicly-available Arabic Twitter dataset covering COVID-19 pandemic that includes over 1M tweets alongside the propagation networks of the most-popular subset of them (i.e., most-retweeted and-liked). The propagation networks include both retweets and conversational threads (i.e., threads of replies). ArCOV-19 is designed to enable research under several domains including natural language processing, information retrieval, and social computing, among others. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1431\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1430","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1430\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1430\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1430\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1430","id":760779666,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NTU0Njg0","number":1430,"title":"Add 1.5 billion words Arabic corpus ","user":{"login":"zaidalyafeai","id":15667714,"node_id":"MDQ6VXNlcjE1NjY3NzE0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15667714?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/zaidalyafeai","html_url":"https:\/\/github.com\/zaidalyafeai","followers_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/followers","following_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/orgs","repos_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/repos","events_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Can't pass dummy data tests. For the instructions, it asks me to generate the following file `dummy_data\/Youm7_XML_utf_8.rar\/Youm7_utf_8.xml` which is strange, any ideas @lhoestq ?\r\n\r\ncc: I tested the data locally and it works, maybe the dummy tests doesn't support `rar` ? ","In the dummy_data.zip files you must include the rar file as if is was already extracted.\r\nIn particular here `Youm7_XML_utf_8.rar` is a directory (not an archive).","Also I'm getting `BadRarFile: Failed the read enough data: req=16384 got=51` while trying to download and extract the `Alittihad_XML_utf_8.rar` file. Do you have this issue as well ?\r\n\r\nI have rarfile 4.0","Sorry it was my mistake, I missed up the directories, it works now. Not sure why you got that error. I have the same version of `rarfile`. Between, there were some suggestions to change the dataset from `1bn_words_arabic` to `arabic_billion_words` like https:\/\/github.com\/huggingface\/datasets\/tree\/master\/datasets\/spanish_billion_words. 
\r\n","I'm ok with renaming the dataset `arabic_billion_words` if you want.\r\nNote that you will need to rename class name `ArabicBillionWords` instead of `BillionWords`\r\n(though `BillionWords` was not matching `1bn_words_arabic` anyway)\r\n\r\nYou will need to regenerate the dataset_infos.json file after this change.\r\nOR alternatively just replace all mentions of `billion_words` with `arabic_billion_words` in dataset_infos.json <- this trick should save you some time :)","Hmmm I'm still not able to run it on my side because of the rar error (I'm running macos)\r\nI just tried with rarfile 3.1 and it didn't work either.\r\nI would like to be able to run it end-to-end on my side before merging if you don't mind. Let me investigate this issue a little bit","No worries, I will investigate it as well. ","I created a minimal example in [colab ](https:\/\/colab.research.google.com\/drive\/11ijesuGbrQylANka0VdsZ5vXwIuxkheY?usp=sharing).","Nice thanks, maybe it's just an issue on my side then","Ok I managed to solve the BadRarFile issue on my side :) \r\nTo fix it I had to install the `unrar` tool for macos (though it seems it's not available with `brew install` anymore, I had to install it from elsewhere)."],"created_at":1607560338000,"updated_at":1608631439000,"closed_at":1608631439000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1430","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1430","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1430.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1430.patch"},"body":"Needs https:\/\/github.com\/huggingface\/datasets\/pull\/1429 to work. ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1430\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1429","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1429\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1429\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1429\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1429","id":760737818,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NTE5MjY5","number":1429,"title":"extract rar 
files","user":{"login":"zaidalyafeai","id":15667714,"node_id":"MDQ6VXNlcjE1NjY3NzE0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15667714?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/zaidalyafeai","html_url":"https:\/\/github.com\/zaidalyafeai","followers_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/followers","following_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/orgs","repos_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/repos","events_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607554870000,"updated_at":1608303817000,"closed_at":1608303817000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1429","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1429","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1429.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1429.patch"},"body":"Unfortunately, I didn't find any native python libraries for extracting rar files. The user has to manually install `sudo apt-get install unrar`. Discussion with @yjernite is in the slack channel. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1429\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1428","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1428\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1428\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1428\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1428","id":760736726,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NTE4MzIy","number":1428,"title":"Add twi wordsim353","user":{"login":"dadelani","id":23586676,"node_id":"MDQ6VXNlcjIzNTg2Njc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23586676?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dadelani","html_url":"https:\/\/github.com\/dadelani","followers_url":"https:\/\/api.github.com\/users\/dadelani\/followers","following_url":"https:\/\/api.github.com\/users\/dadelani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dadelani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dadelani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dadelani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dadelani\/orgs","repos_url":"https:\/\/api.github.com\/users\/dadelani\/repos","events_url":"https:\/\/api.github.com\/users\/dadelani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dadelani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607554759000,"updated_at":1607695052000,"closed_at":1607695052000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1428","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1428","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1428.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1428.patch"},"body":"Add twi WordSim 353","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1428\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1427","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1427\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1427\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1427\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1427","id":760736703,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NTE4MzAx","number":1427,"title":"Hebrew project 
BenYehuda","user":{"login":"imvladikon","id":10088963,"node_id":"MDQ6VXNlcjEwMDg4OTYz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10088963?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/imvladikon","html_url":"https:\/\/github.com\/imvladikon","followers_url":"https:\/\/api.github.com\/users\/imvladikon\/followers","following_url":"https:\/\/api.github.com\/users\/imvladikon\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/imvladikon\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/imvladikon\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/imvladikon\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/imvladikon\/orgs","repos_url":"https:\/\/api.github.com\/users\/imvladikon\/repos","events_url":"https:\/\/api.github.com\/users\/imvladikon\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/imvladikon\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on master"],"created_at":1607554757000,"updated_at":1607708363000,"closed_at":1607708363000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1427","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1427","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1427.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1427.patch"},"body":"Added Hebrew corpus from https:\/\/github.com\/projectbenyehuda\/public_domain_dump","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1427\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1426","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1426\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1426\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1426\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1426","id":760735763,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NTE3NDc4","number":1426,"title":"init commit for MultiReQA for third PR with all issues 
fixed","user":{"login":"Karthik-Bhaskar","id":13200370,"node_id":"MDQ6VXNlcjEzMjAwMzcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13200370?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar","html_url":"https:\/\/github.com\/Karthik-Bhaskar","followers_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/followers","following_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/orgs","repos_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/repos","events_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["good dataset card as well :) ","@lhoestq Thank you :) "],"created_at":1607554661000,"updated_at":1607693828000,"closed_at":1607693828000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1426","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1426","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1426.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1426.patch"},"body":"3rd PR w.r.t. PR #1349 with all the issues fixed. As #1349 had uploaded other files along with the multi_re_qa dataset","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1426\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1425","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1425\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1425\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1425\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1425","id":760733638,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NTE1NjQz","number":1425,"title":"Add german common crawl 
dataset","user":{"login":"Phil1108","id":39518904,"node_id":"MDQ6VXNlcjM5NTE4OTA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/39518904?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Phil1108","html_url":"https:\/\/github.com\/Phil1108","followers_url":"https:\/\/api.github.com\/users\/Phil1108\/followers","following_url":"https:\/\/api.github.com\/users\/Phil1108\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Phil1108\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Phil1108\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Phil1108\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Phil1108\/orgs","repos_url":"https:\/\/api.github.com\/users\/Phil1108\/repos","events_url":"https:\/\/api.github.com\/users\/Phil1108\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Phil1108\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @Phil1108 !\r\nHave you had a chance to take a look at my suggestions ?\r\nFeel free to ping me if you have questions or if you're ready for a review\r\n\r\nThanks again for adding this dataset, this one is very useful !","> \r\n> \r\n> Hi @Phil1108 !\r\n> Have you had a chance to take a look at my suggestions ?\r\n> Feel free to ping me if you have questions or if you're ready for a review\r\n> \r\n> Thanks again for adding this dataset, this one is very useful !\r\n\r\nHello @lhoestq , \r\n\r\nokay so should we go on with only the 2 parts or wait until the hosting of all 200 GB is cleared at HF? \r\nIf you want to merge it with the 2 parts only at first und upgrade later to all, sure I'll include the minor changes in the next few days and update it here","I think we can wait a bit to find a solution for the hosting and update the PR to include all the files when it's done.\r\nIn the meantime you can update the PR to take into account the suggestions regarding the dataset card, the parallel downloads and the use of eval.\r\n\r\nA safer alternative to eval is `ast.literal_eval`, it should do the job"],"created_at":1607554452000,"updated_at":1608717587000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1425","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1425","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1425.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1425.patch"},"body":"Adding a subpart of the Common Crawl which was extracted with this repo https:\/\/github.com\/facebookresearch\/cc_net and additionally filtered for duplicates ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1425\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1424","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1424\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1424\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1424\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1424","id":760724914,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NTA4MjY5","number":1424,"title":"Add yoruba wordsim353","user":{"login":"dadelani","id":23586676,"node_id":"MDQ6VXNlcjIzNTg2Njc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23586676?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dadelani","html_url":"https:\/\/github.com\/dadelani","followers_url":"https:\/\/api.github.com\/users\/dadelani\/followers","following_url":"https:\/\/api.github.com\/users\/dadelani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dadelani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dadelani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dadelani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dadelani\/orgs","repos_url":"https:\/\/api.github.com\/users\/dadelani\/repos","events_url":"https:\/\/api.github.com\/users\/dadelani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dadelani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607553462000,"updated_at":1607553585000,"closed_at":1607553585000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1424","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1424","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1424.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1424.patch"},"body":"Added WordSim-353 evaluation dataset for Yoruba","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1424\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1423","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1423\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1423\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1423\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1423","id":760712421,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NDk3OTk5","number":1423,"title":"Imppres","user":{"login":"aclifton314","id":53267795,"node_id":"MDQ6VXNlcjUzMjY3Nzk1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/53267795?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/aclifton314","html_url":"https:\/\/github.com\/aclifton314","followers_url":"https:\/\/api.github.com\/users\/aclifton314\/followers","following_url":"https:\/\/api.github.com\/users\/aclifton314\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/aclifton314\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/aclifton314\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/aclifton314\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/aclifton314\/orgs","repos_url":"https:\/\/api.github.com\/users\/aclifton314\/repos","events_url":"https:\/\/api.github.com\/users\/aclifton314\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/aclifton314\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Feel free to ping me once you're ready for another review :) ","For sure! Gonna work on this now!","I incorporated all the changes but when I go to rebase I get the following error:\r\n```python\r\naclifton@pop-os:~\/hf_datasets_sprint\/datasets$ git rebase upstream\/master\r\nerror: cannot rebase: You have unstaged changes.\r\nerror: Please commit or stash them.\r\naclifton@pop-os:~\/hf_datasets_sprint\/datasets$ git stash\r\nSaved working directory and index state WIP on imppres: 51736236 Incorporated secondary sets as configurations instead of splits.\r\naclifton@pop-os:~\/hf_datasets_sprint\/datasets$ git rebase upstream\/master\r\nCONFLICT (add\/add): Merge conflict in datasets\/wiki_movies\/wiki_movies.py\r\nAuto-merging datasets\/wiki_movies\/wiki_movies.py\r\nCONFLICT (add\/add): Merge conflict in datasets\/wiki_movies\/dataset_infos.json\r\nAuto-merging datasets\/wiki_movies\/dataset_infos.json\r\nCONFLICT (add\/add): Merge conflict in datasets\/wiki_movies\/README.md\r\nAuto-merging datasets\/wiki_movies\/README.md\r\nerror: could not apply 04d08587... Created wiki_movies dataset.\r\nResolve all conflicts manually, mark them as resolved with\r\n\"git add\/rm <conflicted_files>\", then run \"git rebase --continue\".\r\nYou can instead skip this commit: run \"git rebase --skip\".\r\nTo abort and get back to the state before \"git rebase\", run \"git rebase --abort\".\r\nCould not apply 04d08587... 
Created wiki_movies dataset.\r\naclifton@pop-os:~\/hf_datasets_sprint\/datasets$ git branch\r\n* (no branch, rebasing imppres)\r\n imppres\r\n logiqa_en\r\n master\r\n wiki_movies\r\n wiki_movies_htl\r\naclifton@pop-os:~\/hf_datasets_sprint\/datasets$ git checkout imppres \r\ndatasets\/wiki_movies\/README.md: needs merge\r\ndatasets\/wiki_movies\/dataset_infos.json: needs merge\r\ndatasets\/wiki_movies\/wiki_movies.py: needs merge\r\nerror: you need to resolve your current index first\r\naclifton@pop-os:~\/hf_datasets_sprint\/datasets$ \r\n```","I think it's because the current branch includes changes about wiki_movies.\r\n\r\nCan you create a new branch from `master` and create another PR please ?","I get this response when I try to switch to master:\r\n```\r\naclifton@pop-os:~\/hf_datasets_sprint\/datasets$ git checkout master\r\ndatasets\/wiki_movies\/README.md: needs merge\r\ndatasets\/wiki_movies\/dataset_infos.json: needs merge\r\ndatasets\/wiki_movies\/wiki_movies.py: needs merge\r\nerror: you need to resolve your current index first\r\n```","Maybe you have to remove the changes in wiki_movies before checkout to master\r\n```\r\ngit stash\r\n```\r\n\r\nshould do the job","Here is what I get:\r\n```\r\naclifton@pop-os:~\/hf_datasets_sprint\/datasets$ git stash\r\ndatasets\/wiki_movies\/README.md: needs merge\r\ndatasets\/wiki_movies\/dataset_infos.json: needs merge\r\ndatasets\/wiki_movies\/wiki_movies.py: needs merge\r\n```","Ok I see\r\nLooks like you're in a `merge` process.\r\nYou can abort it with `git reset --merge`\r\n\r\nThen `git checkout master` should work","So close! I got the new branch made and went through all the tests. When I went to push, I got the following:\r\n```\r\naclifton@pop-os:~\/hf_datasets_sprint\/datasets$ git push -u origin imppres\r\nUsername for 'https:\/\/github.com': aclifton314\r\nPassword for 'https:\/\/aclifton314@github.com': \r\nTo https:\/\/github.com\/aclifton314\/datasets\r\n ! [rejected] imppres -> imppres (non-fast-forward)\r\nerror: failed to push some refs to 'https:\/\/github.com\/aclifton314\/datasets'\r\nhint: Updates were rejected because the tip of your current branch is behind\r\nhint: its remote counterpart. Integrate the remote changes (e.g.\r\nhint: 'git pull ...') before pushing again.\r\nhint: See the 'Note about fast-forwards' in 'git push --help' for details.\r\n```","after a rebase you need to `git push --force`","Done!"],"created_at":1607552052000,"updated_at":1608229634000,"closed_at":1608229634000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1423","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1423","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1423.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1423.patch"},"body":"2nd PR ever! Hopefully I'm starting to get the hang of this. This is for the IMPPRES dataset. 
Please let me know of any corrections or changes that need to be made.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1423\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1422","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1422\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1422\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1422\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1422","id":760707113,"node_id":"MDU6SXNzdWU3NjA3MDcxMTM=","number":1422,"title":"Can't map dataset (loaded from csv)","user":{"login":"SolomidHero","id":28161779,"node_id":"MDQ6VXNlcjI4MTYxNzc5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28161779?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SolomidHero","html_url":"https:\/\/github.com\/SolomidHero","followers_url":"https:\/\/api.github.com\/users\/SolomidHero\/followers","following_url":"https:\/\/api.github.com\/users\/SolomidHero\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SolomidHero\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SolomidHero\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SolomidHero\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SolomidHero\/orgs","repos_url":"https:\/\/api.github.com\/users\/SolomidHero\/repos","events_url":"https:\/\/api.github.com\/users\/SolomidHero\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SolomidHero\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Please could you post the whole script? I can't reproduce your issue. After updating the feature names\/labels to match with the data, everything works fine for me. Try to update datasets\/transformers to the newest version.","Actually, the problem was how `tokenize` function was defined. This was completely my side mistake, so there are really no needs in this issue anymore"],"created_at":1607551542000,"updated_at":1608228820000,"closed_at":1608228820000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hello! I am trying to load single csv file with two columns: ('label': str, 'text' str), where is label is str of two possible classes.\r\n\r\nBelow steps are similar with [this notebook](https:\/\/colab.research.google.com\/drive\/1-JIJlao4dI-Ilww_NnTc0rxtp-ymgDgM?usp=sharing), where bert model and tokenizer are used to classify lmdb loaded dataset. 
Only one difference it is the dataset loaded from .csv file.\r\nHere is how I load it:\r\n\r\n```python\r\ndata_path = 'data.csv'\r\ndata = pd.read_csv(data_path)\r\n\r\n# process class name to indices\r\nclasses = ['neg', 'pos']\r\nclass_to_idx = { cl: i for i, cl in enumerate(classes) }\r\n\r\n# now data is like {'label': int, 'text' str}\r\ndata['label'] = data['label'].apply(lambda x: class_to_idx[x])\r\n\r\n# load dataset and map it with defined `tokenize` function\r\nfeatures = Features({\r\n target: ClassLabel(num_classes=2, names=['neg', 'pos'], names_file=None, id=None),\r\n feature: Value(dtype='string', id=None),\r\n})\r\ndataset = Dataset.from_pandas(data, features=features)\r\ndataset.map(tokenize, batched=True, batch_size=len(dataset))\r\n```\r\n\r\nIt ruins on the last line with following error:\r\n```\r\n---------------------------------------------------------------------------\r\nAssertionError Traceback (most recent call last)\r\n<ipython-input-112-32b6275ce418> in <module>()\r\n 9 })\r\n 10 dataset = Dataset.from_pandas(data, features=features)\r\n---> 11 dataset.map(tokenizer, batched=True, batch_size=len(dataset))\r\n\r\n2 frames\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)\r\n 1237 test_inputs = self[:2] if batched else self[0]\r\n 1238 test_indices = [0, 1] if batched else 0\r\n-> 1239 update_data = does_function_return_dict(test_inputs, test_indices)\r\n 1240 logger.info(\"Testing finished, running the mapping function on the dataset\")\r\n 1241 \r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/arrow_dataset.py in does_function_return_dict(inputs, indices)\r\n 1208 fn_args = [inputs] if input_columns is None else [inputs[col] for col in input_columns]\r\n 1209 processed_inputs = (\r\n-> 1210 function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)\r\n 1211 )\r\n 1212 does_return_dict = isinstance(processed_inputs, Mapping)\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/transformers\/tokenization_utils_base.py in __call__(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)\r\n 2281 )\r\n 2282 ), (\r\n-> 2283 \"text input must of type `str` (single example), `List[str]` (batch or single pretokenized example) \"\r\n 2284 \"or `List[List[str]]` (batch of pretokenized examples).\"\r\n 2285 )\r\n\r\nAssertionError: text input must of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples).\r\n```\r\n\r\nwhich I think is not expected. 
I also tried the same steps using `Dataset.from_csv` which resulted in the same error.\r\n\r\nFor reproducing this, I used [this dataset from kaggle](https:\/\/www.kaggle.com\/team-ai\/spam-text-message-classification)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1422\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1421","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1421\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1421\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1421\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1421","id":760706851,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NDkzMzU4","number":1421,"title":"adding fake-news-english-2","user":{"login":"MisbahKhan789","id":15351802,"node_id":"MDQ6VXNlcjE1MzUxODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15351802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/MisbahKhan789","html_url":"https:\/\/github.com\/MisbahKhan789","followers_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/followers","following_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/orgs","repos_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/repos","events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607551513000,"updated_at":1607820529000,"closed_at":1607820529000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1421","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1421","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1421.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1421.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1421\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1420","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1420\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1420\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1420\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1420","id":760700388,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NDg4MTM5","number":1420,"title":"Add dataset 
yoruba_wordsim353","user":{"login":"michael-aloys","id":1858628,"node_id":"MDQ6VXNlcjE4NTg2Mjg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1858628?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/michael-aloys","html_url":"https:\/\/github.com\/michael-aloys","followers_url":"https:\/\/api.github.com\/users\/michael-aloys\/followers","following_url":"https:\/\/api.github.com\/users\/michael-aloys\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/michael-aloys\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/michael-aloys\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/michael-aloys\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/michael-aloys\/orgs","repos_url":"https:\/\/api.github.com\/users\/michael-aloys\/repos","events_url":"https:\/\/api.github.com\/users\/michael-aloys\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/michael-aloys\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on master"],"created_at":1607550869000,"updated_at":1607693644000,"closed_at":1607693644000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1420","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1420","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1420.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1420.patch"},"body":"Contains loading script as well as dataset card including YAML tags.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1420\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1419","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1419\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1419\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1419\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1419","id":760673716,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NDY1OTA4","number":1419,"title":"Add Turkish News Category Dataset 
(270K)","user":{"login":"basakbuluz","id":41359672,"node_id":"MDQ6VXNlcjQxMzU5Njcy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/41359672?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/basakbuluz","html_url":"https:\/\/github.com\/basakbuluz","followers_url":"https:\/\/api.github.com\/users\/basakbuluz\/followers","following_url":"https:\/\/api.github.com\/users\/basakbuluz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/basakbuluz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/basakbuluz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/basakbuluz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/basakbuluz\/orgs","repos_url":"https:\/\/api.github.com\/users\/basakbuluz\/repos","events_url":"https:\/\/api.github.com\/users\/basakbuluz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/basakbuluz\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq, can you please review this PR?\r\n","@SBrandeis,\r\nSorry. All of the latest version came to my branch. You can find final version. \r\nResubmitted as a clean final version of #1466\r\nI have completed all the review comments.","Closing this as PR is now https:\/\/github.com\/huggingface\/datasets\/pull\/1466"],"created_at":1607548113000,"updated_at":1607695351000,"closed_at":1607695351000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1419","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1419","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1419.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1419.patch"},"body":"This PR adds the Turkish News Categories Dataset (270K) dataset which is a text classification dataset by me and @yavuzKomecoglu. 
Turkish news dataset consisting of **273601 news** in **17 categories**, compiled from printed media and news websites between 2010 and 2017 by the [Interpress](https:\/\/www.interpress.com\/) media monitoring company.\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1419\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1418","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1418\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1418\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1418\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1418","id":760672320,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NDY0NzQ4","number":1418,"title":"Add arabic dialects","user":{"login":"mcmillanmajora","id":26722925,"node_id":"MDQ6VXNlcjI2NzIyOTI1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26722925?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mcmillanmajora","html_url":"https:\/\/github.com\/mcmillanmajora","followers_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/followers","following_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/orgs","repos_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/repos","events_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on master"],"created_at":1607547967000,"updated_at":1608198056000,"closed_at":1608198056000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1418","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1418","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1418.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1418.patch"},"body":"Data loading script and dataset card for Dialectal Arabic Resources dataset. 
\r\nFixed git issues from PR #976","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1418\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1417","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1417\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1417\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1417\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1417","id":760660918,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NDU1NzM3","number":1417,"title":"WIP: Vinay\/add peer read dataset","user":{"login":"vinaykudari","id":34424769,"node_id":"MDQ6VXNlcjM0NDI0NzY5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/34424769?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vinaykudari","html_url":"https:\/\/github.com\/vinaykudari","followers_url":"https:\/\/api.github.com\/users\/vinaykudari\/followers","following_url":"https:\/\/api.github.com\/users\/vinaykudari\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vinaykudari\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vinaykudari\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vinaykudari\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vinaykudari\/orgs","repos_url":"https:\/\/api.github.com\/users\/vinaykudari\/repos","events_url":"https:\/\/api.github.com\/users\/vinaykudari\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vinaykudari\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607546992000,"updated_at":1607712211000,"closed_at":1607712211000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1417","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1417","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1417.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1417.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1417\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1416","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1416\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1416\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1416\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1416","id":760653971,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NDUwMTIz","number":1416,"title":"Add Shrinked Turkish NER from 
Kaggle.","user":{"login":"bhctsntrk","id":22636672,"node_id":"MDQ6VXNlcjIyNjM2Njcy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22636672?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhctsntrk","html_url":"https:\/\/github.com\/bhctsntrk","followers_url":"https:\/\/api.github.com\/users\/bhctsntrk\/followers","following_url":"https:\/\/api.github.com\/users\/bhctsntrk\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhctsntrk\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhctsntrk\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhctsntrk\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhctsntrk\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhctsntrk\/repos","events_url":"https:\/\/api.github.com\/users\/bhctsntrk\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhctsntrk\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607546315000,"updated_at":1607685811000,"closed_at":1607685811000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1416","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1416","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1416.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1416.patch"},"body":"Add Shrinked Turkish NER from [Kaggle](https:\/\/www.kaggle.com\/behcetsenturk\/shrinked-twnertc-turkish-ner-data-by-kuzgunlar).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1416\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1415","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1415\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1415\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1415\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1415","id":760642786,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NDQxMTQx","number":1415,"title":"Add Hate Speech and Offensive Language Detection 
dataset","user":{"login":"hugoabonizio","id":1206395,"node_id":"MDQ6VXNlcjEyMDYzOTU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1206395?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hugoabonizio","html_url":"https:\/\/github.com\/hugoabonizio","followers_url":"https:\/\/api.github.com\/users\/hugoabonizio\/followers","following_url":"https:\/\/api.github.com\/users\/hugoabonizio\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hugoabonizio\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hugoabonizio\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hugoabonizio\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hugoabonizio\/orgs","repos_url":"https:\/\/api.github.com\/users\/hugoabonizio\/repos","events_url":"https:\/\/api.github.com\/users\/hugoabonizio\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hugoabonizio\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq done! The failing testes don't seem to be related, it seems to be a connection issue, if I understand it correctly.","@lhoestq done!","merging since the CI is fixed on master"],"created_at":1607545332000,"updated_at":1607969204000,"closed_at":1607963131000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1415","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1415","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1415.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1415.patch"},"body":"Add [Hate Speech and Offensive Language Detection dataset](https:\/\/github.com\/t-davidson\/hate-speech-and-offensive-language) from [this paper](https:\/\/arxiv.org\/abs\/1703.04009).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1415\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1414","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1414\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1414\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1414\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1414","id":760622133,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NDIzODgy","number":1414,"title":"Adding BioCreative II Gene Mention 
corpus","user":{"login":"mahajandiwakar","id":10516432,"node_id":"MDQ6VXNlcjEwNTE2NDMy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10516432?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mahajandiwakar","html_url":"https:\/\/github.com\/mahajandiwakar","followers_url":"https:\/\/api.github.com\/users\/mahajandiwakar\/followers","following_url":"https:\/\/api.github.com\/users\/mahajandiwakar\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mahajandiwakar\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mahajandiwakar\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mahajandiwakar\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mahajandiwakar\/orgs","repos_url":"https:\/\/api.github.com\/users\/mahajandiwakar\/repos","events_url":"https:\/\/api.github.com\/users\/mahajandiwakar\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mahajandiwakar\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607543368000,"updated_at":1607685460000,"closed_at":1607685460000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1414","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1414","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1414.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1414.patch"},"body":"Adding BioCreative II Gene Mention corpus","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1414\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1413","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1413\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1413\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1413\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1413","id":760615090,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NDE4MDY2","number":1413,"title":"Add 
OffComBR","user":{"login":"hugoabonizio","id":1206395,"node_id":"MDQ6VXNlcjEyMDYzOTU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1206395?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hugoabonizio","html_url":"https:\/\/github.com\/hugoabonizio","followers_url":"https:\/\/api.github.com\/users\/hugoabonizio\/followers","following_url":"https:\/\/api.github.com\/users\/hugoabonizio\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hugoabonizio\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hugoabonizio\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hugoabonizio\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hugoabonizio\/orgs","repos_url":"https:\/\/api.github.com\/users\/hugoabonizio\/repos","events_url":"https:\/\/api.github.com\/users\/hugoabonizio\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hugoabonizio\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hello @hugoabonizio, thanks for the contribution.\r\nRegarding the fake data, you can generate it manually.\r\nRunning the `python datasets-cli dummy_data datasets\/offcombr` should give you instructions on how to manually create the dummy data.\r\nFor reference, here is a spec for `.arff` files : https:\/\/www.cs.waikato.ac.nz\/ml\/weka\/arff.html","@lhoestq again the failing tests doesn't seem to be related","merging since the CI is fixed on master"],"created_at":1607542688000,"updated_at":1607969205000,"closed_at":1607964670000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1413","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1413","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1413.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1413.patch"},"body":"Add [OffComBR](https:\/\/github.com\/rogersdepelle\/OffComBR) from [Offensive Comments in the Brazilian Web: a dataset and baseline results](https:\/\/sol.sbc.org.br\/index.php\/brasnam\/article\/view\/3260\/3222) paper.\r\n\r\nBut I'm having a hard time generating dummy data since the original dataset extion is `.arff` and the [_create_dummy_data function](https:\/\/github.com\/huggingface\/datasets\/blob\/a4aeaf911240057286a01bff1b1d75a89aedd57b\/src\/datasets\/commands\/dummy_data.py#L185) doesn't allow it.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1413\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1412","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1412\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1412\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1412\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1412","id":760607959,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NDEyMDg2","number":1412,"title":"Adding the ASSIN 
dataset","user":{"login":"jonatasgrosman","id":5097052,"node_id":"MDQ6VXNlcjUwOTcwNTI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5097052?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jonatasgrosman","html_url":"https:\/\/github.com\/jonatasgrosman","followers_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/followers","following_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/orgs","repos_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/repos","events_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607542026000,"updated_at":1607683270000,"closed_at":1607683270000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1412","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1412","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1412.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1412.patch"},"body":"Adding the ASSIN dataset, a Portuguese language dataset for Natural Language Inference and Semantic Similarity Scoring","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1412\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1411","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1411\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1411\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1411\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1411","id":760606290,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NDEwNjU3","number":1411,"title":"2 
typos","user":{"login":"dezow","id":47401160,"node_id":"MDQ6VXNlcjQ3NDAxMTYw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47401160?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dezow","html_url":"https:\/\/github.com\/dezow","followers_url":"https:\/\/api.github.com\/users\/dezow\/followers","following_url":"https:\/\/api.github.com\/users\/dezow\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dezow\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dezow\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dezow\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dezow\/orgs","repos_url":"https:\/\/api.github.com\/users\/dezow\/repos","events_url":"https:\/\/api.github.com\/users\/dezow\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dezow\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607541874000,"updated_at":1607683145000,"closed_at":1607683145000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1411","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1411","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1411.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1411.patch"},"body":"Corrected 2 typos","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1411\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1410","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1410\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1410\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1410\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1410","id":760597092,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1NDAyNjcw","number":1410,"title":"Add penn treebank dataset","user":{"login":"harshalmittal4","id":24206326,"node_id":"MDQ6VXNlcjI0MjA2MzI2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24206326?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/harshalmittal4","html_url":"https:\/\/github.com\/harshalmittal4","followers_url":"https:\/\/api.github.com\/users\/harshalmittal4\/followers","following_url":"https:\/\/api.github.com\/users\/harshalmittal4\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/harshalmittal4\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/harshalmittal4\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/harshalmittal4\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/harshalmittal4\/orgs","repos_url":"https:\/\/api.github.com\/users\/harshalmittal4\/repos","events_url":"https:\/\/api.github.com\/users\/harshalmittal4\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/harshalmittal4\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@yjernite I have updated the PR to be language modeling task specific. 
Please review!\r\n","Yes a line corresponds to a sentence in this data."],"created_at":1607541093000,"updated_at":1608111503000,"closed_at":1608111503000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1410","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1410","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1410.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1410.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1410\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1409","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1409\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1409\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1409\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1409","id":760593932,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1Mzk5OTI1","number":1409,"title":"Adding the ASSIN dataset","user":{"login":"jonatasgrosman","id":5097052,"node_id":"MDQ6VXNlcjUwOTcwNTI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5097052?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jonatasgrosman","html_url":"https:\/\/github.com\/jonatasgrosman","followers_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/followers","following_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/orgs","repos_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/repos","events_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I wrongly commited data from another branch in this PR, I'll close this a reopen another PR with the fixed branch"],"created_at":1607540820000,"updated_at":1607541492000,"closed_at":1607541352000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1409","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1409","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1409.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1409.patch"},"body":"Adding the ASSIN dataset, a Portuguese language dataset for Natural Language Inference and Semantic Similarity Scoring","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1409\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1408","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1408\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1408\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1408\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1408","id":760590589,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1Mzk3MTAw","number":1408,"title":"adding fake-news-english","user":{"login":"MisbahKhan789","id":15351802,"node_id":"MDQ6VXNlcjE1MzUxODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15351802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/MisbahKhan789","html_url":"https:\/\/github.com\/MisbahKhan789","followers_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/followers","following_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/orgs","repos_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/repos","events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/MisbahKhan789\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["also don't forget to format your code using `make style` to fix the CI"],"created_at":1607540527000,"updated_at":1607820559000,"closed_at":1607820559000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1408","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1408","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1408.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1408.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1408\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1407","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1407\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1407\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1407\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1407","id":760581756,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1Mzg5ODQx","number":1407,"title":"Add Tweet Eval 
Dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq,\r\n\r\nSeeing that it has been almost two months to this draft, I'm willing to take this forward if you and @abhishekkrthakur don't mind. :)","Hi @gchhablani !\r\nSure if @abhishekkrthakur doesn't mind\r\nThanks for your help :)","Please feel free :) ","Hi @lhoestq, @abhishekkrthakur \r\n\r\nI believe this can be closed. Merged in #1829."],"created_at":1607539737000,"updated_at":1614329644000,"closed_at":1614329644000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1407","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1407","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1407.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1407.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1407\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1406","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1406\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1406\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1406\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1406","id":760581330,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1Mzg5NDk5","number":1406,"title":"Add Portuguese Hate Speech 
dataset","user":{"login":"hugoabonizio","id":1206395,"node_id":"MDQ6VXNlcjEyMDYzOTU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1206395?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hugoabonizio","html_url":"https:\/\/github.com\/hugoabonizio","followers_url":"https:\/\/api.github.com\/users\/hugoabonizio\/followers","following_url":"https:\/\/api.github.com\/users\/hugoabonizio\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hugoabonizio\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hugoabonizio\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hugoabonizio\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hugoabonizio\/orgs","repos_url":"https:\/\/api.github.com\/users\/hugoabonizio\/repos","events_url":"https:\/\/api.github.com\/users\/hugoabonizio\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hugoabonizio\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq done! (The failing tests don't seem to be related)","merging since the CI is fixed on master"],"created_at":1607539696000,"updated_at":1607969202000,"closed_at":1607962940000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1406","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1406","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1406.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1406.patch"},"body":"Binary Portuguese Hate Speech dataset from [this paper](https:\/\/www.aclweb.org\/anthology\/W19-3510\/).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1406\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1405","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1405\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1405\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1405\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1405","id":760578035,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1Mzg2ODA1","number":1405,"title":"Adding TaPaCo Dataset with 
README.md","user":{"login":"pacman100","id":13534540,"node_id":"MDQ6VXNlcjEzNTM0NTQw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13534540?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/pacman100","html_url":"https:\/\/github.com\/pacman100","followers_url":"https:\/\/api.github.com\/users\/pacman100\/followers","following_url":"https:\/\/api.github.com\/users\/pacman100\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/pacman100\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/pacman100\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/pacman100\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/pacman100\/orgs","repos_url":"https:\/\/api.github.com\/users\/pacman100\/repos","events_url":"https:\/\/api.github.com\/users\/pacman100\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/pacman100\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["We want to keep the repo as light as possible so that it doesn't take ages to clone, that's why we ask for small dummy data files (especially when there are many of them). Let me know if you have questions or if we can help you on this","Hello @lhoestq , made the changes as you suggested and pushed, please review. By default, the dummy data was generated the way it was by the dummy data auto generate command. Thank you."],"created_at":1607539378000,"updated_at":1607886678000,"closed_at":1607886678000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1405","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1405","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1405.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1405.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1405\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1404","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1404\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1404\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1404\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1404","id":760575473,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1Mzg0NzEz","number":1404,"title":"Add Acronym Identification 
Dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["fixed @lhoestq "],"created_at":1607539134000,"updated_at":1607951521000,"closed_at":1607951520000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1404","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1404","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1404.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1404.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1404\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1403","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1403\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1403\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1403\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1403","id":760571419,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MzgxMzQ3","number":1403,"title":"Add dataset clickbait_news_bg","user":{"login":"tsvm","id":1083319,"node_id":"MDQ6VXNlcjEwODMzMTk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1083319?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tsvm","html_url":"https:\/\/github.com\/tsvm","followers_url":"https:\/\/api.github.com\/users\/tsvm\/followers","following_url":"https:\/\/api.github.com\/users\/tsvm\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tsvm\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tsvm\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tsvm\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tsvm\/orgs","repos_url":"https:\/\/api.github.com\/users\/tsvm\/repos","events_url":"https:\/\/api.github.com\/users\/tsvm\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tsvm\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Closing this pull request, will submit a new one for this 
dataset."],"created_at":1607538732000,"updated_at":1607591804000,"closed_at":1607591803000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1403","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1403","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1403.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1403.patch"},"body":"Adding a new dataset - clickbait_news_bg","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1403\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1402","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1402\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1402\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1402\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1402","id":760538325,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MzUzMzE0","number":1402,"title":"adding covid-tweets-japanese (again)","user":{"login":"forest1988","id":2755894,"node_id":"MDQ6VXNlcjI3NTU4OTQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2755894?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/forest1988","html_url":"https:\/\/github.com\/forest1988","followers_url":"https:\/\/api.github.com\/users\/forest1988\/followers","following_url":"https:\/\/api.github.com\/users\/forest1988\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/forest1988\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/forest1988\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/forest1988\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/forest1988\/orgs","repos_url":"https:\/\/api.github.com\/users\/forest1988\/repos","events_url":"https:\/\/api.github.com\/users\/forest1988\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/forest1988\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["README.md is not created yet. I'll add it soon.","Thank you for your detailed code review! 
It's so helpful.\r\nI'll reflect them to the code in 24 hours.\r\n\r\nYou may have told me in Slack (I cannot find the conversation log though I've looked through threads), but I'm sorry it seems I'm still misunderstanding how to get YAML from the tagger.\r\nI'm now asking on Slack if I am looking at the tagger the wrong way.","One more thing I'd like to ask.\r\nShould I make changes by myself, or can I use the \"Commit suggestion\" feature?\r\nI'm new to this feature and I don't know how the rules work in this repository, so I'd like to ask just in case.","Thank you very much for merging!"],"created_at":1607536006000,"updated_at":1607882054000,"closed_at":1607881656000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1402","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1402","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1402.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1402.patch"},"body":"I had mistaken use git rebase, I was so hurried to fix it. However, I didn't fully consider the use of git reset , so I unintendedly stopped PR (#1367) altogether. Sorry about that.\r\nI'll make a new PR.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1402\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1401","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1401\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1401\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1401\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1401","id":760525949,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MzQyOTY2","number":1401,"title":"Add reasoning_bg","user":{"login":"saradhix","id":1351362,"node_id":"MDQ6VXNlcjEzNTEzNjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1351362?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/saradhix","html_url":"https:\/\/github.com\/saradhix","followers_url":"https:\/\/api.github.com\/users\/saradhix\/followers","following_url":"https:\/\/api.github.com\/users\/saradhix\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/saradhix\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/saradhix\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/saradhix\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/saradhix\/orgs","repos_url":"https:\/\/api.github.com\/users\/saradhix\/repos","events_url":"https:\/\/api.github.com\/users\/saradhix\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/saradhix\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @saradhix have you had the chance to reduce the size of the dummy data ?\r\n\r\nFeel free to ping me when it's done so we can merge :) ","@lhoestq I have reduced the size of the dummy data manually and pushed the changes.","The CI errors are not related to your dataset.\r\nThey're fixed on master, you can ignore them","merging since the CI is fixed on 
master"],"created_at":1607535049000,"updated_at":1608223843000,"closed_at":1608223842000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1401","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1401","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1401.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1401.patch"},"body":"Adding reading comprehension dataset for Bulgarian language","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1401\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1400","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1400\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1400\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1400\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1400","id":760514215,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MzMzMDYz","number":1400,"title":"Add European Union Education and Culture Translation Memory (EAC-TM) dataset","user":{"login":"SBrandeis","id":33657802,"node_id":"MDQ6VXNlcjMzNjU3ODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33657802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SBrandeis","html_url":"https:\/\/github.com\/SBrandeis","followers_url":"https:\/\/api.github.com\/users\/SBrandeis\/followers","following_url":"https:\/\/api.github.com\/users\/SBrandeis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SBrandeis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SBrandeis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SBrandeis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SBrandeis\/orgs","repos_url":"https:\/\/api.github.com\/users\/SBrandeis\/repos","events_url":"https:\/\/api.github.com\/users\/SBrandeis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SBrandeis\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607534092000,"updated_at":1607951208000,"closed_at":1607951207000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1400","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1400","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1400.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1400.patch"},"body":"Adding the EAC Translation Memory dataset : https:\/\/ec.europa.eu\/jrc\/en\/language-technologies\/eac-translation-memory","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1400\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1399","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1399\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1399\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1399\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1399","id":760499576,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MzIwNzA2","number":1399,"title":"Add HoVer Dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq all comments addressed :) ","merging since the CI is fixed on master"],"created_at":1607532939000,"updated_at":1607943443000,"closed_at":1607943442000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1399","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1399","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1399.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1399.patch"},"body":"HoVer: A Dataset for Many-Hop Fact Extraction And Claim Verification\r\nhttps:\/\/arxiv.org\/abs\/2011.03088 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1399\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1398","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1398\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1398\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1398\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1398","id":760497024,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MzE4NTg5","number":1398,"title":"Add Neural Code Search 
Dataset","user":{"login":"vinaykudari","id":34424769,"node_id":"MDQ6VXNlcjM0NDI0NzY5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/34424769?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vinaykudari","html_url":"https:\/\/github.com\/vinaykudari","followers_url":"https:\/\/api.github.com\/users\/vinaykudari\/followers","following_url":"https:\/\/api.github.com\/users\/vinaykudari\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vinaykudari\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vinaykudari\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vinaykudari\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vinaykudari\/orgs","repos_url":"https:\/\/api.github.com\/users\/vinaykudari\/repos","events_url":"https:\/\/api.github.com\/users\/vinaykudari\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vinaykudari\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq Refactored into new branch, please review :) ","The `RemoteDatasetTest ` errors in the CI are fixed on master so it's fine","merging since the CI is fixed on master"],"created_at":1607532736000,"updated_at":1607536947000,"closed_at":1607536947000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1398","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1398","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1398.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1398.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1398\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1397","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1397\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1397\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1397\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1397","id":760467501,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1Mjk0MDgz","number":1397,"title":"datasets card-creator link 
added","user":{"login":"tanmoyio","id":33005287,"node_id":"MDQ6VXNlcjMzMDA1Mjg3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33005287?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tanmoyio","html_url":"https:\/\/github.com\/tanmoyio","followers_url":"https:\/\/api.github.com\/users\/tanmoyio\/followers","following_url":"https:\/\/api.github.com\/users\/tanmoyio\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tanmoyio\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tanmoyio\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tanmoyio\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tanmoyio\/orgs","repos_url":"https:\/\/api.github.com\/users\/tanmoyio\/repos","events_url":"https:\/\/api.github.com\/users\/tanmoyio\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tanmoyio\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607530518000,"updated_at":1607532468000,"closed_at":1607532468000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1397","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1397","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1397.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1397.patch"},"body":"dataset card creator link has been added \r\nlink: https:\/\/huggingface.co\/datasets\/card-creator\/","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1397\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1396","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1396\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1396\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1396\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1396","id":760455295,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MjgzOTAw","number":1396,"title":"initial commit for MultiReQA for second 
PR","user":{"login":"Karthik-Bhaskar","id":13200370,"node_id":"MDQ6VXNlcjEzMjAwMzcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13200370?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar","html_url":"https:\/\/github.com\/Karthik-Bhaskar","followers_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/followers","following_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/orgs","repos_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/repos","events_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Subsequent [PR #1426 ](https:\/\/github.com\/huggingface\/datasets\/pull\/1426) since this PR has uploaded other files along with the MultiReQA dataset.","closing this one since a new PR has been created"],"created_at":1607529635000,"updated_at":1607624412000,"closed_at":1607624411000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1396","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1396","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1396.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1396.patch"},"body":"Since last PR #1349 had some issues passing the tests. 
So, a new PR is generated.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1396\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1395","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1395\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1395\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1395\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1395","id":760448255,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1Mjc4MTQ2","number":1395,"title":"Add WikiSource Dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq fixed :) "],"created_at":1607529126000,"updated_at":1607941454000,"closed_at":1607941453000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1395","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1395","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1395.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1395.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1395\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1394","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1394\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1394\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1394\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1394","id":760436365,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MjY4MzMz","number":1394,"title":"Add OfisPublik 
Dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq fixed :) "],"created_at":1607528265000,"updated_at":1607941410000,"closed_at":1607941409000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1394","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1394","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1394.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1394.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1394\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1393","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1393\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1393\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1393\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1393","id":760436267,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MjY4MjUx","number":1393,"title":"Add script_version suggestion when dataset\/metric not 
found","user":{"login":"joeddav","id":9353833,"node_id":"MDQ6VXNlcjkzNTM4MzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9353833?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/joeddav","html_url":"https:\/\/github.com\/joeddav","followers_url":"https:\/\/api.github.com\/users\/joeddav\/followers","following_url":"https:\/\/api.github.com\/users\/joeddav\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/joeddav\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/joeddav\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/joeddav\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/joeddav\/orgs","repos_url":"https:\/\/api.github.com\/users\/joeddav\/repos","events_url":"https:\/\/api.github.com\/users\/joeddav\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/joeddav\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607528258000,"updated_at":1607624225000,"closed_at":1607624225000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1393","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1393","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1393.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1393.patch"},"body":"Adds a helpful prompt to the error message when a dataset\/metric is not found, suggesting the user might need to pass `script_version=\"master\"` if the dataset was added recently. The whole error looks like:\r\n\r\n> Couldn't find file locally at blah\/blah.py, or remotely at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1\/metrics\/blah\/blah.py or https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/datasets\/met\r\nrics\/blah\/blah.py.\r\nIf the dataset was added recently, you may need to to pass script_version=\"master\" to find the loading script on the master branch.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1393\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1392","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1392\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1392\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1392\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1392","id":760432261,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MjY0ODQ5","number":1392,"title":"Add KDE4 
Dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq fixed :) "],"created_at":1607527978000,"updated_at":1607941353000,"closed_at":1607941352000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1392","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1392","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1392.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1392.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1392\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1391","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1391\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1391\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1391\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1391","id":760432041,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MjY0NjUx","number":1391,"title":"Add MultiParaCrawl 
Dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607527966000,"updated_at":1607625585000,"closed_at":1607625584000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1391","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1391","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1391.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1391.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1391\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1390","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1390\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1390\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1390\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1390","id":760431051,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MjYzNzk1","number":1390,"title":"Add SPC 
Dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607527911000,"updated_at":1607944433000,"closed_at":1607944432000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1390","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1390","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1390.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1390.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1390\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1389","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1389\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1389\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1389\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1389","id":760402224,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MjM5OTYy","number":1389,"title":"add amazon polarity dataset","user":{"login":"hfawaz","id":29229602,"node_id":"MDQ6VXNlcjI5MjI5NjAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29229602?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hfawaz","html_url":"https:\/\/github.com\/hfawaz","followers_url":"https:\/\/api.github.com\/users\/hfawaz\/followers","following_url":"https:\/\/api.github.com\/users\/hfawaz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hfawaz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hfawaz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hfawaz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hfawaz\/orgs","repos_url":"https:\/\/api.github.com\/users\/hfawaz\/repos","events_url":"https:\/\/api.github.com\/users\/hfawaz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hfawaz\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["`amazon_polarity` is probably a subset of `amazon_us_reviews` but I am not 
entirely sure about that.\r\nI guess `amazon_polarity` will help in reproducing results of papers using this dataset since even if it is a subset from `amazon_us_reviews`, it is not trivial how to extract `amazon_polarity` from `amazon_us_reviews`, especially since `amazon_us_reviews` was released after `amazon_polarity`. ","do you know what the problem would be ? should I pull the master before ? @lhoestq ","The error just appeared on master. I will try to fix it today.\r\nYou can ignore them since it's not related to the dataset you added","merging since the CI is fixed on master","Great thanks for the help. "],"created_at":1607525901000,"updated_at":1607687139000,"closed_at":1607686861000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1389","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1389","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1389.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1389.patch"},"body":"This corresponds to the amazon (binary dataset) requested in https:\/\/github.com\/huggingface\/datasets\/issues\/353","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1389\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1388","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1388\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1388\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1388\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1388","id":760373136,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MjE1Nzk2","number":1388,"title":"hind_encorp","user":{"login":"rahul-art","id":56379013,"node_id":"MDQ6VXNlcjU2Mzc5MDEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/56379013?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rahul-art","html_url":"https:\/\/github.com\/rahul-art","followers_url":"https:\/\/api.github.com\/users\/rahul-art\/followers","following_url":"https:\/\/api.github.com\/users\/rahul-art\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rahul-art\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rahul-art\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rahul-art\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rahul-art\/orgs","repos_url":"https:\/\/api.github.com\/users\/rahul-art\/repos","events_url":"https:\/\/api.github.com\/users\/rahul-art\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rahul-art\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607523779000,"updated_at":1607525211000,"closed_at":1607525197000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1388","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1388","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1388.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1388.patch"},"body":"resubmit of hind_encorp file 
changes","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1388\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1387","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1387\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1387\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1387\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1387","id":760368355,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MjExODQ1","number":1387,"title":"Add LIAR dataset","user":{"login":"hugoabonizio","id":1206395,"node_id":"MDQ6VXNlcjEyMDYzOTU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1206395?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hugoabonizio","html_url":"https:\/\/github.com\/hugoabonizio","followers_url":"https:\/\/api.github.com\/users\/hugoabonizio\/followers","following_url":"https:\/\/api.github.com\/users\/hugoabonizio\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hugoabonizio\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hugoabonizio\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hugoabonizio\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hugoabonizio\/orgs","repos_url":"https:\/\/api.github.com\/users\/hugoabonizio\/repos","events_url":"https:\/\/api.github.com\/users\/hugoabonizio\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hugoabonizio\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq done! 
The failing testes don't seem to be related, it seems to be a connection issue, if I understand it correctly.","merging since the CI is fixed on master"],"created_at":1607523415000,"updated_at":1607969203000,"closed_at":1607963039000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1387","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1387","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1387.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1387.patch"},"body":"Add LIAR dataset from [\u201cLiar, Liar Pants on Fire\u201d: A New Benchmark Dataset for Fake News Detection](https:\/\/www.aclweb.org\/anthology\/P17-2067\/).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1387\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1386","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1386\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1386\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1386\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1386","id":760365505,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MjA5NDUx","number":1386,"title":"Add RecipeNLG Dataset (manual download)","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq yes. 
I asked the authors for direct link but unfortunately we need to fill a form (captcha)"],"created_at":1607523199000,"updated_at":1607619502000,"closed_at":1607619501000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1386","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1386","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1386.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1386.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1386\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1385","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1385\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1385\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1385\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1385","id":760351405,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MTk3Nzk5","number":1385,"title":"add best2009","user":{"login":"cstorm125","id":15519308,"node_id":"MDQ6VXNlcjE1NTE5MzA4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15519308?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cstorm125","html_url":"https:\/\/github.com\/cstorm125","followers_url":"https:\/\/api.github.com\/users\/cstorm125\/followers","following_url":"https:\/\/api.github.com\/users\/cstorm125\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cstorm125\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cstorm125\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cstorm125\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cstorm125\/orgs","repos_url":"https:\/\/api.github.com\/users\/cstorm125\/repos","events_url":"https:\/\/api.github.com\/users\/cstorm125\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cstorm125\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607522169000,"updated_at":1607943548000,"closed_at":1607943548000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1385","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1385","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1385.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1385.patch"},"body":"`best2009` is a Thai word-tokenization dataset from encyclopedia, novels, news and articles by [NECTEC](https:\/\/www.nectec.or.th\/) (148,995\/2,252 lines of train\/test). It was created for [BEST 2010: Word Tokenization Competition](https:\/\/thailang.nectec.or.th\/archive\/indexa290.html?q=node\/10). 
The test set answers are not provided publicly.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1385\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1384","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1384\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1384\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1384\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1384","id":760331767,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MTgxMjg1","number":1384,"title":"Add News Commentary Dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607520636000,"updated_at":1607619248000,"closed_at":1607619247000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1384","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1384","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1384.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1384.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1384\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1383","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1383\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1383\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1383\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1383","id":760331480,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MTgxMDQ2","number":1383,"title":"added conv ai 
2","user":{"login":"rkc007","id":22396042,"node_id":"MDQ6VXNlcjIyMzk2MDQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22396042?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rkc007","html_url":"https:\/\/github.com\/rkc007","followers_url":"https:\/\/api.github.com\/users\/rkc007\/followers","following_url":"https:\/\/api.github.com\/users\/rkc007\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rkc007\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rkc007\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rkc007\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rkc007\/orgs","repos_url":"https:\/\/api.github.com\/users\/rkc007\/repos","events_url":"https:\/\/api.github.com\/users\/rkc007\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rkc007\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq Thank you for the suggestions. I added the changes to the branch and seems after rebasing it to master, all the commits previous commits got added. Should I create a new PR or should I keep this one only ? ","closing this one in favor of #1527 "],"created_at":1607520612000,"updated_at":1607885682000,"closed_at":1607885681000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1383","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1383","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1383.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1383.patch"},"body":"Dataset : https:\/\/github.com\/DeepPavlov\/convai\/tree\/master\/2018","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1383\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1382","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1382\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1382\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1382\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1382","id":760325077,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MTc1NzMx","number":1382,"title":"adding 
UNPC","user":{"login":"patil-suraj","id":27137566,"node_id":"MDQ6VXNlcjI3MTM3NTY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27137566?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patil-suraj","html_url":"https:\/\/github.com\/patil-suraj","followers_url":"https:\/\/api.github.com\/users\/patil-suraj\/followers","following_url":"https:\/\/api.github.com\/users\/patil-suraj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patil-suraj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patil-suraj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patil-suraj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patil-suraj\/orgs","repos_url":"https:\/\/api.github.com\/users\/patil-suraj\/repos","events_url":"https:\/\/api.github.com\/users\/patil-suraj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patil-suraj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI just had a connection error"],"created_at":1607520101000,"updated_at":1607536386000,"closed_at":1607536386000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1382","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1382","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1382.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1382.patch"},"body":"Adding United Nations Parallel Corpus\r\nhttp:\/\/opus.nlpl.eu\/UNPC.php","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1382\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1381","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1381\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1381\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1381\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1381","id":760320960,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MTcyMjkw","number":1381,"title":"Add twi text c3","user":{"login":"dadelani","id":23586676,"node_id":"MDQ6VXNlcjIzNTg2Njc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23586676?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dadelani","html_url":"https:\/\/github.com\/dadelani","followers_url":"https:\/\/api.github.com\/users\/dadelani\/followers","following_url":"https:\/\/api.github.com\/users\/dadelani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dadelani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dadelani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dadelani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dadelani\/orgs","repos_url":"https:\/\/api.github.com\/users\/dadelani\/repos","events_url":"https:\/\/api.github.com\/users\/dadelani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dadelani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["looks 
like this PR includes changes about other datasets\r\n\r\nCan you only include the changes related to twi text c3 please ?","Hi @lhoestq , I have removed the unnecessary files. Can you please confirm?","You might need to either find a way to go back to the commit before it changes 389 files or create a new branch.","okay, I have created another branch, see the latest pull https:\/\/github.com\/huggingface\/datasets\/pull\/1518 @cstorm125 ","Hii please follow me","Closing this one in favor of #1518"],"created_at":1607519798000,"updated_at":1607884767000,"closed_at":1607884767000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1381","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1381","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1381.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1381.patch"},"body":"Added Twi texts for training embeddings and language models based on the paper https:\/\/www.aclweb.org\/anthology\/2020.lrec-1.335\/","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1381\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1380","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1380\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1380\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1380\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1380","id":760320494,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MTcxOTAw","number":1380,"title":"Add Tatoeba 
Dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607519764000,"updated_at":1607619268000,"closed_at":1607619267000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1380","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1380","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1380.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1380.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1380\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1379","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1379\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1379\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1379\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1379","id":760320487,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MTcxODk0","number":1379,"title":"Add yoruba text c3","user":{"login":"dadelani","id":23586676,"node_id":"MDQ6VXNlcjIzNTg2Njc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23586676?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dadelani","html_url":"https:\/\/github.com\/dadelani","followers_url":"https:\/\/api.github.com\/users\/dadelani\/followers","following_url":"https:\/\/api.github.com\/users\/dadelani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dadelani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dadelani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dadelani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dadelani\/orgs","repos_url":"https:\/\/api.github.com\/users\/dadelani\/repos","events_url":"https:\/\/api.github.com\/users\/dadelani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dadelani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["looks like this PR includes changes about other 
datasets\r\n","Thanks for the review. I'm a bit confused how to remove the files. Every time I add a new branch name using the following commands:\r\n\r\ngit fetch upstream\r\ngit rebase upstream\/master\r\ngit checkout -b a-descriptive-name-for-my-changes\r\n\r\nand push to the origin, this issue occurs","Can you try to create the branch from the master branch of your fork ?\r\n\r\nfirst update your master branch:\r\n```\r\ngit checkout master\r\ngit fetch upstream\r\ngit rebase upstream\/master\r\ngit push\r\n```\r\n\r\nthen create a new one:\r\n```\r\ngit checkout -b my-new-branch\r\n```","I think you were still having the files because you were creating the new branch from a branch in which you've committed the files, instead of creating the new branch from the master branch","Got it, will correct that. Thanks","@lhoestq , I have removed the unnecessary files. Looks like I still have one error. How do I resolve this?","> @lhoestq , I have removed the unnecessary files. Looks like I still have one error. How do I resolve this?\r\n\r\nI think it's connection error on piqa dataset. Can you try triggering the test again? I usually resolve similar issues with:\r\n```\r\ngit fetch upstream\r\ngit rebase upstream\/master\r\ngit push -u -f origin your_branch_name\r\n```","thank you @cstorm125 ","I have created another pull request for this https:\/\/github.com\/huggingface\/datasets\/pull\/1515 @cstorm125 @lhoestq ","Hii please follow me","merging since the CI is fixed on master","Great, thanks a lot"],"created_at":1607519763000,"updated_at":1607885112000,"closed_at":1607884653000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1379","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1379","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1379.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1379.patch"},"body":"Added Yoruba texts for training embeddings and language models based on the paper https:\/\/www.aclweb.org\/anthology\/2020.lrec-1.335\/","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1379\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1378","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1378\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1378\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1378\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1378","id":760313108,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MTY1OTE3","number":1378,"title":"Add FACTCK.BR 
dataset","user":{"login":"hugoabonizio","id":1206395,"node_id":"MDQ6VXNlcjEyMDYzOTU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1206395?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hugoabonizio","html_url":"https:\/\/github.com\/hugoabonizio","followers_url":"https:\/\/api.github.com\/users\/hugoabonizio\/followers","following_url":"https:\/\/api.github.com\/users\/hugoabonizio\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hugoabonizio\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hugoabonizio\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hugoabonizio\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hugoabonizio\/orgs","repos_url":"https:\/\/api.github.com\/users\/hugoabonizio\/repos","events_url":"https:\/\/api.github.com\/users\/hugoabonizio\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hugoabonizio\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq done!","merging since the CI is fixed on master"],"created_at":1607519182000,"updated_at":1608208725000,"closed_at":1608046451000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1378","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1378","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1378.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1378.patch"},"body":"This PR adds [FACTCK.BR](https:\/\/github.com\/jghm-f\/FACTCK.BR) dataset from [FACTCK.BR: a new dataset to study fake news](https:\/\/dl.acm.org\/doi\/10.1145\/3323503.3361698).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1378\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1377","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1377\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1377\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1377\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1377","id":760309435,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MTYyOTcz","number":1377,"title":"adding marathi-wiki 
dataset","user":{"login":"ekdnam","id":40426312,"node_id":"MDQ6VXNlcjQwNDI2MzEy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/40426312?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ekdnam","html_url":"https:\/\/github.com\/ekdnam","followers_url":"https:\/\/api.github.com\/users\/ekdnam\/followers","following_url":"https:\/\/api.github.com\/users\/ekdnam\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ekdnam\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ekdnam\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ekdnam\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ekdnam\/orgs","repos_url":"https:\/\/api.github.com\/users\/ekdnam\/repos","events_url":"https:\/\/api.github.com\/users\/ekdnam\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ekdnam\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Can you make it a draft PR until you've added the dataset please ? @ekdnam ","Done"],"created_at":1607518880000,"updated_at":1607690797000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1377","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1377","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1377.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1377.patch"},"body":"Adding marathi-wiki-articles dataset. ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1377\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1376","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1376\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1376\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1376\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1376","id":760309300,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MTYyODU4","number":1376,"title":"Add SETimes 
Dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on master"],"created_at":1607518868000,"updated_at":1607616717000,"closed_at":1607616716000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1376","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1376","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1376.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1376.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1376\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1375","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1375\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1375\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1375\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1375","id":760294931,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MTUwOTk2","number":1375,"title":"Add OPUS EMEA 
Dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607517584000,"updated_at":1607616669000,"closed_at":1607616668000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1375","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1375","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1375.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1375.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1375\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1374","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1374\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1374\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1374\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1374","id":760288291,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MTQ1Mzgw","number":1374,"title":"Add OPUS Tilde Model 
Dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on master"],"created_at":1607516963000,"updated_at":1607616689000,"closed_at":1607616688000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1374","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1374","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1374.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1374.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1374\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1373","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1373\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1373\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1373\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1373","id":760280869,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MTM5MTY0","number":1373,"title":"Add OPUS ECB 
Dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607516302000,"updated_at":1607613955000,"closed_at":1607613954000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1373","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1373","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1373.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1373.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1373\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1372","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1372\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1372\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1372\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1372","id":760274046,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MTMzMzQ4","number":1372,"title":"Add OPUS Books 
Dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq done"],"created_at":1607515729000,"updated_at":1607939788000,"closed_at":1607939787000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1372","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1372","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1372.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1372.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1372\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1371","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1371\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1371\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1371\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1371","id":760270116,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MTMwMTQ1","number":1371,"title":"Adding 
Scielo","user":{"login":"patil-suraj","id":27137566,"node_id":"MDQ6VXNlcjI3MTM3NTY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27137566?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patil-suraj","html_url":"https:\/\/github.com\/patil-suraj","followers_url":"https:\/\/api.github.com\/users\/patil-suraj\/followers","following_url":"https:\/\/api.github.com\/users\/patil-suraj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patil-suraj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patil-suraj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patil-suraj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patil-suraj\/orgs","repos_url":"https:\/\/api.github.com\/users\/patil-suraj\/repos","events_url":"https:\/\/api.github.com\/users\/patil-suraj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patil-suraj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607515368000,"updated_at":1607536417000,"closed_at":1607536417000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1371","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1371","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1371.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1371.patch"},"body":"Adding Scielo: Parallel corpus of full-text articles in Portuguese, English and Spanish from SciELO\r\nhttps:\/\/sites.google.com\/view\/felipe-soares\/datasets#h.p_92uSCyAjWSRB","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1371\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1370","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1370\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1370\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1370\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1370","id":760264132,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MTI1MTc3","number":1370,"title":"Add OPUS PHP 
Dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607514810000,"updated_at":1607614645000,"closed_at":1607614644000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1370","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1370","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1370.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1370.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1370\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1369","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1369\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1369\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1369\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1369","id":760227776,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MDk0NDk1","number":1369,"title":"Use passed --cache_dir for modules 
cache","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I have a question: why not using a tmp dir instead, like the DummyDataGeneratorDownloadManager does?","Hi @lhoestq, I am trying to understand better the logic...\r\n\r\nWhy do we have a `dynamic_module_path` besides the modules cache path?\r\n```python\r\nDYNAMIC_MODULES_PATH = os.path.join(HF_MODULES_CACHE, \"datasets_modules\")\r\n```\r\nMoreover, 2 subdirectories (for datasets and for metrics) were created inside it:\r\n```python\r\nDATASETS_PATH = os.path.join(DYNAMIC_MODULES_PATH, \"datasets\")\r\nMETRICS_PATH = os.path.join(DYNAMIC_MODULES_PATH, \"metrics\")\r\n```","Hi :) \r\nThe modules cache path is the path added to `sys.path`.\r\nTherefore inside we need to have a folder that is going to be a package: `datasets_modules`.\r\nThis package will contain dynamic modules, i.e. datasets and metrics modules added on-the-fly.\r\nThen we have two sub-modules `datasets_modules.datasets` and `datasets_modules.metrics`.\r\n\r\nMaybe we can make things more explicit in the code with some comments explaining the structure, and maybe better variable naming as well..\r\n\r\nAlso I wanted to say that I started to work on offline loading of modules in #1726 and actually it lead to do similar changes to what you did to control the path where modules are stored.","Hi @lhoestq, I see...\r\n\r\nIndeed I was also creating a draft for test_load, to clarify the expected behavior... ;)\r\n\r\nSo, for the command line:\r\n```sh\r\npython datasets-cli test datasets\/<my-dataset-folder> --save_infos --all_configs --cache_dir <my-cache-dir>\r\n```\r\nthe `cache_dir` argument refers to dataset cache dir. We do not have control over the modules cache dir, but we would like to have. And if I understand well, you suggest adding another argument `dynamic_module_path`. Am I right?","> So, for the command line:\r\n> \r\n> ```shell\r\n> python datasets-cli test datasets\/<my-dataset-folder> --save_infos --all_configs --cache_dir <my-cache-dir>\r\n> ```\r\n> \r\n> the `cache_dir` argument refers to dataset cache dir. We do not have control over the modules cache dir, but we would like to have. And if I understand well, you suggest adding another argument `dynamic_module_path`. 
Am I right?\r\n\r\nYes the cache_dir is used to download files and also so save the dataset arrow files.\r\nThis is indeed different from the path for dynamic modules.\r\n\r\nI suggested to have `dynamic_module_path` as a parameter but actually this is the parent directory `hf_modules_cache` that we would need (it's the one that is passed to `init_dynamic_modules ` that we need to add to `sys.path`).\r\n\r\nCurrently it's already possible to override it using the env variable `HF_MODULES_CACHE` but we can imagine having it as a parameter as well.\r\n\r\nThis way the user controls both the `cache_dir` and the `hf_modules_cache` which are the two places used by the library to read\/write stuff.\r\n\r\n","I think #1726 is going to be merged pretty soon. Maybe can work on this as soon as it's merged to avoid doing the same things twice and to avoid conflicts ?","I agree. Indeed I took some of your code in one of my last commit, to try to implement the logic you described."],"created_at":1607511599000,"updated_at":1619174047000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1369","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1369","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1369.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1369.patch"},"body":"When passed `--cache_dir` arg:\r\n```shell\r\npython datasets-cli test datasets\/<my-dataset-folder> --save_infos --all_configs --cache_dir <my-cache-dir>\r\n```\r\nit is not used for caching the modules, which are cached in the default location at `.cache\/huggingface\/modules`.\r\n\r\nWith this fix, the modules will be cached at `<my-cache-dir>\/modules`.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1369\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1368","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1368\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1368\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1368\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1368","id":760222616,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MDkwMjM0","number":1368,"title":"Re-adding narrativeqa 
dataset","user":{"login":"ghomasHudson","id":13795113,"node_id":"MDQ6VXNlcjEzNzk1MTEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13795113?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ghomasHudson","html_url":"https:\/\/github.com\/ghomasHudson","followers_url":"https:\/\/api.github.com\/users\/ghomasHudson\/followers","following_url":"https:\/\/api.github.com\/users\/ghomasHudson\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ghomasHudson\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ghomasHudson\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ghomasHudson\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ghomasHudson\/orgs","repos_url":"https:\/\/api.github.com\/users\/ghomasHudson\/repos","events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq I think I've fixed the dummy data - it finally passes! I'll add the model card now.","@lhoestq - pretty happy with it now","> Awesome thank you !\r\n> \r\n> Could you try to reduce the size of the dummy_data.zip file before we merge ? (it's 300KB right now)\r\n> \r\n> To do so feel free to take a look inside it and remove all the unnecessary files and chunks of text, to only keep a few examples. The idea is to have a zip file that is only a few KB\r\n\r\nAh, it only contains 1 example for each split. I think the problem is that I include an entire story (like in the full dataset). We can probably get away with a summarised version.","> Nice thank you, can you make it even lighter if possible ? Something round 10KB would be awesone\r\n> We try to keep the repo light so that it doesn't take ages to clone. So we have to make sure the dummy data are as small as possible for every single dataset.\r\n\r\nHave trimmed a little more out of each example now."],"created_at":1607511189000,"updated_at":1607693459000,"closed_at":1607693459000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1368","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1368","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1368.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1368.patch"},"body":"An update of #309. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1368\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1367","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1367\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1367\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1367\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1367","id":760208191,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MDc4MTAx","number":1367,"title":"adding covid-tweets-japanese","user":{"login":"forest1988","id":2755894,"node_id":"MDQ6VXNlcjI3NTU4OTQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2755894?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/forest1988","html_url":"https:\/\/github.com\/forest1988","followers_url":"https:\/\/api.github.com\/users\/forest1988\/followers","following_url":"https:\/\/api.github.com\/users\/forest1988\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/forest1988\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/forest1988\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/forest1988\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/forest1988\/orgs","repos_url":"https:\/\/api.github.com\/users\/forest1988\/repos","events_url":"https:\/\/api.github.com\/users\/forest1988\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/forest1988\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I think it's because the file you download uncompresses into a file and not a folder so `--autogenerate` couldn't create dummy data for you. See in your dummy_data.zip if there is a file there. If not, manually create your dummy data and compress them to dummy_data.zip.","@cstorm125 Thank you for the comment! 
\r\nAs you point out, it seems my code has something wrong about downloading and uncompressing the file.\r\nHowever, my manually created dummy data seems to contain a file of the required format.\r\n\r\nOn Colaboratory,\r\n`!unzip \/content\/datasets\/datasets\/covid_tweets_japanese\/dummy\/1.1.0\/dummy_data.zip`\r\nreturns:\r\n\r\n```\r\nArchive: \/content\/datasets\/datasets\/covid_tweets_japanese\/dummy\/1.1.0\/dummy_data.zip\r\n creating: content\/datasets\/datasets\/covid_tweets_japanese\/dummy\/1.1.0\/dummy_data\/\r\n extracting: content\/datasets\/datasets\/covid_tweets_japanese\/dummy\/1.1.0\/dummy_data\/data.csv.bz2 \r\n```\r\n\r\nThe original data is `data.csv.bz2`, and I had a very hard time dealing with uncompressing bzip2.\r\nI think I could handle it, but there may be problems remain."],"created_at":1607510041000,"updated_at":1607534714000,"closed_at":1607534714000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1367","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1367","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1367.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1367.patch"},"body":"Adding COVID-19 Japanese Tweets Dataset as part of the sprint.\r\n\r\nTesting with dummy data is not working (the file is said to not exist). Sorry for the incomplete PR.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1367\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1366","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1366\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1366\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1366\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1366","id":760205506,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MDc1ODU2","number":1366,"title":"Adding Hope EDI dataset","user":{"login":"jamespaultg","id":7421838,"node_id":"MDQ6VXNlcjc0MjE4Mzg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7421838?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jamespaultg","html_url":"https:\/\/github.com\/jamespaultg","followers_url":"https:\/\/api.github.com\/users\/jamespaultg\/followers","following_url":"https:\/\/api.github.com\/users\/jamespaultg\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jamespaultg\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jamespaultg\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jamespaultg\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jamespaultg\/orgs","repos_url":"https:\/\/api.github.com\/users\/jamespaultg\/repos","events_url":"https:\/\/api.github.com\/users\/jamespaultg\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jamespaultg\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq Have addressed your comments. Please review. 
Thanks."],"created_at":1607509823000,"updated_at":1607956077000,"closed_at":1607956077000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1366","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1366","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1366.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1366.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1366\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1365","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1365\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1365\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1365\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1365","id":760188457,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MDYxNTI2","number":1365,"title":"Add Mkqa dataset","user":{"login":"cceyda","id":15624271,"node_id":"MDQ6VXNlcjE1NjI0Mjcx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15624271?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cceyda","html_url":"https:\/\/github.com\/cceyda","followers_url":"https:\/\/api.github.com\/users\/cceyda\/followers","following_url":"https:\/\/api.github.com\/users\/cceyda\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cceyda\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cceyda\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cceyda\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cceyda\/orgs","repos_url":"https:\/\/api.github.com\/users\/cceyda\/repos","events_url":"https:\/\/api.github.com\/users\/cceyda\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cceyda\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["the `RemoteDatasetTest ` error pf the CI is fixed on master so it's fine","merging since the CI is fixed on master"],"created_at":1607508393000,"updated_at":1607614676000,"closed_at":1607614676000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1365","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1365","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1365.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1365.patch"},"body":"# MKQA: Multilingual Knowledge Questions & Answers Dataset\r\nAdding the [MKQA](https:\/\/github.com\/apple\/ml-mkqa) dataset as part of the sprint \ud83c\udf89\r\n\r\nThere is no official data splits so I added just a `train` split.\r\n \r\ndifferently from the original:\r\n- answer:type field is a ClassLabel (I thought it might be possible to train on this as a label for categorizing questions)\r\n- answer:entity field has a default value of empty string '' (since this key is not available for all in original)\r\n- answer:alias has default value of []\r\n\r\n- [x] All tests passed\r\n- [x] Added dummy data\r\n- [x] Added data card (as much as I 
could)\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1365\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1364","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1364\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1364\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1364\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1364","id":760164558,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MDQxNjUz","number":1364,"title":"Narrative QA (Manual Download Stories) Dataset","user":{"login":"rsanjaykamath","id":18527321,"node_id":"MDQ6VXNlcjE4NTI3MzIx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/18527321?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rsanjaykamath","html_url":"https:\/\/github.com\/rsanjaykamath","followers_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/followers","following_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/orgs","repos_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/repos","events_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Maybe we can rename it `narrativeqa_manual` to make it explicit that this one requires manual download contrary to `narrativeqa` ?\r\nIt's important to have this one as well, in case the `narrativeqa` one suffers from download issues (checksums or dead links for example).\r\n\r\nYou can also copy the dataset card from `narrativeqa` and add the dummy data as well","Thanks @lhoestq will do all this and submit a request in the coming days. \ud83d\ude0a ","Closing this as another pull request is already done. "],"created_at":1607506439000,"updated_at":1611588711000,"closed_at":1611588691000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1364","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1364","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1364.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1364.patch"},"body":"Narrative QA with manual download for stories. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1364\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1363","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1363\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1363\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1363\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1363","id":760160944,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MDM4NjM0","number":1363,"title":"Adding OPUS MultiUN","user":{"login":"patil-suraj","id":27137566,"node_id":"MDQ6VXNlcjI3MTM3NTY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27137566?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patil-suraj","html_url":"https:\/\/github.com\/patil-suraj","followers_url":"https:\/\/api.github.com\/users\/patil-suraj\/followers","following_url":"https:\/\/api.github.com\/users\/patil-suraj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patil-suraj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patil-suraj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patil-suraj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patil-suraj\/orgs","repos_url":"https:\/\/api.github.com\/users\/patil-suraj\/repos","events_url":"https:\/\/api.github.com\/users\/patil-suraj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patil-suraj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607506141000,"updated_at":1607536460000,"closed_at":1607536460000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1363","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1363","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1363.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1363.patch"},"body":"Adding UnMulti\r\nhttp:\/\/www.euromatrixplus.net\/multi-un\/","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1363\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1362","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1362\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1362\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1362\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1362","id":760138233,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM1MDIwMDAz","number":1362,"title":"adding 
opus_infopankki","user":{"login":"patil-suraj","id":27137566,"node_id":"MDQ6VXNlcjI3MTM3NTY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27137566?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patil-suraj","html_url":"https:\/\/github.com\/patil-suraj","followers_url":"https:\/\/api.github.com\/users\/patil-suraj\/followers","following_url":"https:\/\/api.github.com\/users\/patil-suraj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patil-suraj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patil-suraj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patil-suraj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patil-suraj\/orgs","repos_url":"https:\/\/api.github.com\/users\/patil-suraj\/repos","events_url":"https:\/\/api.github.com\/users\/patil-suraj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patil-suraj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks Quentin !"],"created_at":1607504230000,"updated_at":1607537780000,"closed_at":1607537628000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1362","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1362","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1362.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1362.patch"},"body":"Adding opus_infopankki\r\nhttp:\/\/opus.nlpl.eu\/infopankki-v1.php","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1362\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1361","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1361\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1361\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1361\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1361","id":760101728,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0OTg5Nzcy","number":1361,"title":"adding bprec","user":{"login":"kldarek","id":15803781,"node_id":"MDQ6VXNlcjE1ODAzNzgx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15803781?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/kldarek","html_url":"https:\/\/github.com\/kldarek","followers_url":"https:\/\/api.github.com\/users\/kldarek\/followers","following_url":"https:\/\/api.github.com\/users\/kldarek\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/kldarek\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/kldarek\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/kldarek\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/kldarek\/orgs","repos_url":"https:\/\/api.github.com\/users\/kldarek\/repos","events_url":"https:\/\/api.github.com\/users\/kldarek\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/kldarek\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq I think this is ready for review, I 
assume the errors (connection) are unrelated to the PR :) ","merging since the CI is fixed on master"],"created_at":1607500965000,"updated_at":1608138284000,"closed_at":1608138284000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1361","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1361","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1361.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1361.patch"},"body":"Brand-Product Relation Extraction Corpora in Polish","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1361\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1360","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1360\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1360\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1360\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1360","id":760088419,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0OTc4NzM0","number":1360,"title":"add wisesight1000","user":{"login":"cstorm125","id":15519308,"node_id":"MDQ6VXNlcjE1NTE5MzA4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15519308?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cstorm125","html_url":"https:\/\/github.com\/cstorm125","followers_url":"https:\/\/api.github.com\/users\/cstorm125\/followers","following_url":"https:\/\/api.github.com\/users\/cstorm125\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cstorm125\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cstorm125\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cstorm125\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cstorm125\/orgs","repos_url":"https:\/\/api.github.com\/users\/cstorm125\/repos","events_url":"https:\/\/api.github.com\/users\/cstorm125\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cstorm125\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607499690000,"updated_at":1607610521000,"closed_at":1607610521000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1360","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1360","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1360.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1360.patch"},"body":"`wisesight1000` contains Thai social media texts randomly drawn from the full `wisesight-sentiment`, tokenized by human annotators. Out of the labels `neg` (negative), `neu` (neutral), `pos` (positive), `q` (question), 250 samples each. 
Some texts are removed because they look like spam.Because these samples are representative of real world content, we believe having these annotaed samples will allow the community to robustly evaluate tokenization algorithms.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1360\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1359","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1359\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1359\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1359\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1359","id":760055969,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0OTUxMTgy","number":1359,"title":"Add JNLPBA","user":{"login":"edugp","id":17855740,"node_id":"MDQ6VXNlcjE3ODU1NzQw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17855740?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/edugp","html_url":"https:\/\/github.com\/edugp","followers_url":"https:\/\/api.github.com\/users\/edugp\/followers","following_url":"https:\/\/api.github.com\/users\/edugp\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/edugp\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/edugp\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/edugp\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/edugp\/orgs","repos_url":"https:\/\/api.github.com\/users\/edugp\/repos","events_url":"https:\/\/api.github.com\/users\/edugp\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/edugp\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607496531000,"updated_at":1607610276000,"closed_at":1607610276000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1359","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1359","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1359.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1359.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1359\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1358","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1358\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1358\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1358\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1358","id":760031131,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0OTI5ODIx","number":1358,"title":"Add spider 
dataset","user":{"login":"olinguyen","id":4341867,"node_id":"MDQ6VXNlcjQzNDE4Njc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4341867?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/olinguyen","html_url":"https:\/\/github.com\/olinguyen","followers_url":"https:\/\/api.github.com\/users\/olinguyen\/followers","following_url":"https:\/\/api.github.com\/users\/olinguyen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/olinguyen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/olinguyen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/olinguyen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/olinguyen\/orgs","repos_url":"https:\/\/api.github.com\/users\/olinguyen\/repos","events_url":"https:\/\/api.github.com\/users\/olinguyen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/olinguyen\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607493978000,"updated_at":1607613151000,"closed_at":1607613151000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1358","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1358","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1358.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1358.patch"},"body":"This PR adds the Spider dataset, a large-scale complex and cross-domain semantic parsing and text-to-SQL dataset annotated by 11 Yale students. The goal of the Spider challenge is to develop natural language interfaces to cross-domain databases.\r\n\r\nDataset website: https:\/\/yale-lily.github.io\/spider\r\nPaper link: https:\/\/www.aclweb.org\/anthology\/D18-1425\/","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1358\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1357","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1357\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1357\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1357\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1357","id":760023525,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0OTIzMzA4","number":1357,"title":"Youtube caption 
corrections","user":{"login":"2dot71mily","id":21292059,"node_id":"MDQ6VXNlcjIxMjkyMDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/21292059?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/2dot71mily","html_url":"https:\/\/github.com\/2dot71mily","followers_url":"https:\/\/api.github.com\/users\/2dot71mily\/followers","following_url":"https:\/\/api.github.com\/users\/2dot71mily\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/2dot71mily\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/2dot71mily\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/2dot71mily\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/2dot71mily\/orgs","repos_url":"https:\/\/api.github.com\/users\/2dot71mily\/repos","events_url":"https:\/\/api.github.com\/users\/2dot71mily\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/2dot71mily\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Sorry about forgetting flake8.\r\nRather than use up the circleci resources on a new push with only formatting changes, I will wait to push until the results from all tests finish and\/or any feedback comes in... probably tomorrow for me.","\r\nSo... my normal work is with mercurial and seem to have clearly forked this up using git... :(\r\n\r\nWhat I did is after calling:\r\n```\r\ngit fetch upstream\r\ngit rebase upstream\/master\r\n```\r\n\r\nI then I attempt to pull in my most recent changes UI commit changes based on @lhoestq's feedback with:\r\n```\r\ngit pull\r\n``` \r\n... which I now suspect undid the above fetch and rebase. Will look into fixing later today when I have more time. Sorry!\r\n","My dummy data seems quite large as a single row is composed of tokens\/labels for an entire youtube video, with at least one row required for each file, which in this case 1 file per 13 youtube channels.\r\n\r\nTo make it smaller I passed `--n_lines 1` to reduce about 5x.\r\n\r\nI then manually reduced size of the particularly long youtube lectures to get the size to about 30KB. However, after recompressing into a zip, and running dummy data test I got the following error:\r\n`FAILED tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_youtube_caption_corrections - OSError: Cannot find data file. `, despite file being there, which I haven't had a chance yet to debug.","I wrote a small script to generate a smaller json file for the dummy_data, with the hope that I could resolve the pytest error noted above (in case related to a manual typo I could have introduce), however the test contains to fail locally... here's to hoping it can pass on remote!","Sorry for delayed comments here. Last commit made two changes:\r\n- Increased the valency of the labels from just True\/False to more categories to describe the various types of diffs encountered. This required some rewrite of the README\r\n- Reduced the number of remote files to be downloaded from 13 to 4, by combining all 13 of the channel-specific files together, and the splitting them up in a way to meet Github file size requirements. This also reduces size of the dummy-data.","@lhoestq, thank you for the great feedback, especially given how busy you guys are now! \r\n\r\nI checked out GitHub release tags and looks cool. 
I have added the version tag to the url, instead of the commit sha as originally suggested, with the hope that it serves the same purpose of pinning the content to this url. Please let me know if I have misunderstood.\r\n\r\nIn regard to dynamically changing the number of files downloaded by first downloading a JSON listing the files, I love that idea. But I am a little confused, as I was thinking that any changes to the dataset itself would require a new PR with an updated `dataset_infos.json`, e.g. `num_examples` would increase. \r\n\r\nIf the purpose of this is not to permit dynamic (without a PR needed) growth of the number of files, but instead to provide stability to the consumers of the dataset, maybe I continued use the release tags, maintaining access to old releases could serve this purpose? I am still learning about these release tags... ","For dynamic datasets, i.e. datasets that evolve over time, we support custom configurations: they are configurations that are not part of the BUILDER_CONFIGS or in the dataset_infos.json\r\n\r\nFor example for wikipedia, you can use the latest wiki dump by specifying `date=` inside `load_dataset()`. A configuration is created on the fly for this date and is used to build the dataset using the latest data.\r\n\r\nTherefore we don't need to have PRs to update the script for each wikipedia release.\r\n\r\nOne downside though is that we don't have metadata in advance such as the size of the dataset.\r\n\r\nI think this could be a nice addition for the youtube caption dataset in the future to be have a system of releases and be able to load the version we want easily. What do you think ?","\r\n\r\n\r\n\r\n> For dynamic datasets, i.e. datasets that evolve over time, we support custom configurations: they are configurations that are not part of the BUILDER_CONFIGS or in the dataset_infos.json\r\n> \r\n \r\n> I think this could be a nice addition for the youtube caption dataset in the future to be have a system of releases and be able to load the version we want easily. What do you think ?\r\n\r\nThank you for the suggestion! This sounds great! I will take a look at the some datasets that do this, and would love to give it a try in the future, if I continue to grow the captions dataset in a meaningful way. \r\n\r\nAppreciate all the help on this. It has been a really great experience for me. :)","Excited to merge! And sorry to be such a github n00b, but from what I've quickly read, I don't 'Close pull request', but rather the next steps are action taken on your end... Please let me know if there is some action to be taken at my end first. :\/","Alright merging this one then :) "],"created_at":1607493154000,"updated_at":1608055976000,"closed_at":1608055976000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1357","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1357","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1357.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1357.patch"},"body":"This PR adds a new dataset of YouTube captions, error and corrections. 
This dataset was created in just the last week, as inspired by this sprint!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1357\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1356","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1356\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1356\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1356\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1356","id":759994457,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0ODk3OTQ1","number":1356,"title":"Add StackOverflow StackSample dataset","user":{"login":"ncoop57","id":7613470,"node_id":"MDQ6VXNlcjc2MTM0NzA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7613470?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ncoop57","html_url":"https:\/\/github.com\/ncoop57","followers_url":"https:\/\/api.github.com\/users\/ncoop57\/followers","following_url":"https:\/\/api.github.com\/users\/ncoop57\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ncoop57\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ncoop57\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ncoop57\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ncoop57\/orgs","repos_url":"https:\/\/api.github.com\/users\/ncoop57\/repos","events_url":"https:\/\/api.github.com\/users\/ncoop57\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ncoop57\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq Thanks for the review and suggestions! I've added your comments and pushed the changes. I'm having issues with the dummy data still. When I run the dummy data test\r\n\r\n```bash\r\nRUN_SLOW=1 pytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_so_stacksample\r\n```\r\nI get this error: \r\n\r\n```\r\n___________________________________________ LocalDatasetTest.test_load_dataset_all_configs_so_stacksample ____________________________________________\r\n\r\nself = <tests.test_dataset_common.LocalDatasetTest testMethod=test_load_dataset_all_configs_so_stacksample>, dataset_name = 'so_stacksample'\r\n\r\n @slow\r\n def test_load_dataset_all_configs(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True)\r\n\r\ntests\/test_dataset_common.py:237: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests\/test_dataset_common.py:198: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n\r\nFAILED tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_so_stacksample - AssertionError: False is not true\r\n```\r\n\r\nI tried formatting the data similar to other datasets, but I think I don't have my csv's in the zip folder with the proper name. 
I also ran the command that's supposed to outline the exact steps I need to perform to get them into the correct format, but I followed them and they don't seem to be working still :\/. Any help would be greatly appreciated!\r\n","Ok I found the issue with the dummy data.\r\nIt's currently failing because it's not generating a single example using the dummy csv file.\r\nThat's because there's only only line in the dummy csv file, and this line is skipped using the `next()` call used to ignore the headers of the csv.\r\n\r\nTo fix the dummy data you must add headers to the dummy csv files.","Also can you make sure that all the original CSV files have headers ? i.e. check that their first line is just the column names","> Ok I found the issue with the dummy data.\r\n> It's currently failing because it's not generating a single example using the dummy csv file.\r\n> That's because there's only only line in the dummy csv file, and this line is skipped using the `next()` call used to ignore the headers of the csv.\r\n> \r\n> To fix the dummy data you must add headers to the dummy csv files.\r\n\r\nOh man, I bamboozled myself! Thank you @lhoestq for catching that! I've updated the dummy csv's to include headers and also confirmed that they all have headers, so I am not throwing away any information with that `next()` call. When I run the test locally for the dummy data it passes, so hopefully it is good to go :D","merging since the Ci is fixed on master"],"created_at":1607489991000,"updated_at":1608562101000,"closed_at":1608562101000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1356","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1356","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1356.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1356.patch"},"body":"This PR adds the StackOverflow StackSample dataset from Kaggle: https:\/\/www.kaggle.com\/stackoverflow\/stacksample\r\n\r\nRan through all of the steps. 
However, since my dataset requires manually downloading the data, I was unable to run the pytest on the real dataset (the dummy data pytest passed).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1356\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1355","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1355\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1355\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1355\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1355","id":759994208,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0ODk3NzQw","number":1355,"title":"Addition of py_ast dataset","user":{"login":"reshinthadithyan","id":36307201,"node_id":"MDQ6VXNlcjM2MzA3MjAx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36307201?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/reshinthadithyan","html_url":"https:\/\/github.com\/reshinthadithyan","followers_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/followers","following_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/orgs","repos_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/repos","events_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607489957000,"updated_at":1607530789000,"closed_at":1607530788000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1355","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1355","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1355.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1355.patch"},"body":"@lhoestq as discussed in PR #1195 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1355\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1354","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1354\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1354\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1354\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1354","id":759987763,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0ODkyMzE2","number":1354,"title":"Add TweetQA 
dataset","user":{"login":"anaerobeth","id":3663322,"node_id":"MDQ6VXNlcjM2NjMzMjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3663322?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/anaerobeth","html_url":"https:\/\/github.com\/anaerobeth","followers_url":"https:\/\/api.github.com\/users\/anaerobeth\/followers","following_url":"https:\/\/api.github.com\/users\/anaerobeth\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/anaerobeth\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/anaerobeth\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/anaerobeth\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/anaerobeth\/orgs","repos_url":"https:\/\/api.github.com\/users\/anaerobeth\/repos","events_url":"https:\/\/api.github.com\/users\/anaerobeth\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/anaerobeth\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607489041000,"updated_at":1607613030000,"closed_at":1607613030000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1354","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1354","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1354.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1354.patch"},"body":"This PR adds the TweetQA dataset, the first dataset for QA on social media data by leveraging news media and crowdsourcing.\r\n\r\nPaper: https:\/\/arxiv.org\/abs\/1907.06292\r\nRepository: https:\/\/tweetqa.github.io\/","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1354\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1353","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1353\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1353\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1353\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1353","id":759980004,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0ODg2MDk4","number":1353,"title":"New instruction for how to generate 
dataset_infos.json","user":{"login":"ncoop57","id":7613470,"node_id":"MDQ6VXNlcjc2MTM0NzA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7613470?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ncoop57","html_url":"https:\/\/github.com\/ncoop57","followers_url":"https:\/\/api.github.com\/users\/ncoop57\/followers","following_url":"https:\/\/api.github.com\/users\/ncoop57\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ncoop57\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ncoop57\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ncoop57\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ncoop57\/orgs","repos_url":"https:\/\/api.github.com\/users\/ncoop57\/repos","events_url":"https:\/\/api.github.com\/users\/ncoop57\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ncoop57\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607487880000,"updated_at":1607607915000,"closed_at":1607607915000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1353","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1353","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1353.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1353.patch"},"body":"Add additional instructions for how to generate dataset_infos.json for manual download datasets. Information courtesy of `Taimur Ibrahim` from the slack channel","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1353\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1352","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1352\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1352\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1352\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1352","id":759978543,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0ODg0ODg4","number":1352,"title":"change url for prachathai67k to internet 
archive","user":{"login":"cstorm125","id":15519308,"node_id":"MDQ6VXNlcjE1NTE5MzA4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15519308?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cstorm125","html_url":"https:\/\/github.com\/cstorm125","followers_url":"https:\/\/api.github.com\/users\/cstorm125\/followers","following_url":"https:\/\/api.github.com\/users\/cstorm125\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cstorm125\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cstorm125\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cstorm125\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cstorm125\/orgs","repos_url":"https:\/\/api.github.com\/users\/cstorm125\/repos","events_url":"https:\/\/api.github.com\/users\/cstorm125\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cstorm125\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607487637000,"updated_at":1607607737000,"closed_at":1607607737000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1352","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1352","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1352.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1352.patch"},"body":"`prachathai67k` is currently downloaded from git-lfs of PyThaiNLP github. Since the size is quite large (~250MB), I moved the URL to archive.org in order to prevent rate limit issues.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1352\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1351","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1351\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1351\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1351\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1351","id":759902770,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0ODI0NTcw","number":1351,"title":"added 
craigslist_bargians","user":{"login":"ZacharySBrown","id":7950786,"node_id":"MDQ6VXNlcjc5NTA3ODY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7950786?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ZacharySBrown","html_url":"https:\/\/github.com\/ZacharySBrown","followers_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/followers","following_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/orgs","repos_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/repos","events_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607475751000,"updated_at":1607609674000,"closed_at":1607609674000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1351","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1351","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1351.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1351.patch"},"body":"`craigslist_bargains` data set from [here](https:\/\/worksheets.codalab.org\/worksheets\/0x453913e76b65495d8b9730d41c7e0a0c\/)\r\n\r\n(Cleaned up version of #1278)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1351\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1350","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1350\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1350\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1350\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1350","id":759879789,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0ODA1OTY3","number":1350,"title":"add LeNER-Br 
dataset","user":{"login":"jonatasgrosman","id":5097052,"node_id":"MDQ6VXNlcjUwOTcwNTI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5097052?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jonatasgrosman","html_url":"https:\/\/github.com\/jonatasgrosman","followers_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/followers","following_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/orgs","repos_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/repos","events_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jonatasgrosman\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I don't know what happened, my first commit passed on all checks, but after just a README.md update one of the scripts failed, is it normal? \ud83d\ude15 ","Looks like a flaky connection error, I've launched a re-run, it should be fine :)","The RemoteDatasetTest error in the CI is just a connection error, we can ignore it","merging since the CI is fixed on master"],"created_at":1607472398000,"updated_at":1607609493000,"closed_at":1607609493000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1350","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1350","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1350.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1350.patch"},"body":"Adding the LeNER-Br dataset, a Portuguese language dataset for named entity recognition ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1350\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1349","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1349\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1349\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1349\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1349","id":759870664,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0Nzk4NDQ3","number":1349,"title":"initial commit for MultiReQA 
","user":{"login":"Karthik-Bhaskar","id":13200370,"node_id":"MDQ6VXNlcjEzMjAwMzcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13200370?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar","html_url":"https:\/\/github.com\/Karthik-Bhaskar","followers_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/followers","following_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/orgs","repos_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/repos","events_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Karthik-Bhaskar\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["looks like this dataset includes changes about many other files than the ones for multi_re_qa\r\n\r\nCan you create another branch and another PR please ?","> looks like this dataset includes changes about many other files than the ones for multi_re_qa\r\n> \r\n> Can you create another branch and another PR please ?\r\n\r\nSure I will do that. Thank you."],"created_at":1607471074000,"updated_at":1607532397000,"closed_at":1607532397000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1349","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1349","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1349.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1349.patch"},"body":"Added MultiReQA, which is a dataset containing the sentence boundary annotation from eight publicly available QA datasets including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, BioASQ, RelationExtraction, and TextbookQA. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1349\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1348","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1348\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1348\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1348\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1348","id":759869849,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0Nzk3Nzcy","number":1348,"title":"add Yoruba NER dataset","user":{"login":"dadelani","id":23586676,"node_id":"MDQ6VXNlcjIzNTg2Njc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23586676?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dadelani","html_url":"https:\/\/github.com\/dadelani","followers_url":"https:\/\/api.github.com\/users\/dadelani\/followers","following_url":"https:\/\/api.github.com\/users\/dadelani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dadelani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dadelani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dadelani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dadelani\/orgs","repos_url":"https:\/\/api.github.com\/users\/dadelani\/repos","events_url":"https:\/\/api.github.com\/users\/dadelani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dadelani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thank you. 
Okay, other pull requests only have one dataset","The `RemoteDatasetTest` error in the CI is just a connection error, we can ignore it","merging since the CI is fixed on master","Thank you very much"],"created_at":1607470955000,"updated_at":1607610625000,"closed_at":1607609383000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1348","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1348","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1348.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1348.patch"},"body":"Added Yoruba GV dataset based on this paper","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1348\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1347","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1347\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1347\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1347\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1347","id":759845231,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0Nzc3NjQ0","number":1347,"title":"Add spanish billion words corpus","user":{"login":"mariagrandury","id":57645283,"node_id":"MDQ6VXNlcjU3NjQ1Mjgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/57645283?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariagrandury","html_url":"https:\/\/github.com\/mariagrandury","followers_url":"https:\/\/api.github.com\/users\/mariagrandury\/followers","following_url":"https:\/\/api.github.com\/users\/mariagrandury\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariagrandury\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariagrandury\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariagrandury\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariagrandury\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariagrandury\/repos","events_url":"https:\/\/api.github.com\/users\/mariagrandury\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariagrandury\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thank you for your feedback! I've reduced the dummy data size to 2KB.\r\n\r\nI had to rebase to fix `RemoteDatasetTest` fails, sorry about the 80 commits. 
\r\nI could create a new clean PR if you prefer.","I have seen that in similar cases you have suggested to other contributors to create another branch and another PR, so I will do that.","Yes thank you !","The new PR is #1476 :)"],"created_at":1607467898000,"updated_at":1607685999000,"closed_at":1607685328000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1347","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1347","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1347.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1347.patch"},"body":"Add an unannotated Spanish corpus of nearly 1.5 billion words, compiled from different resources from the web.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1347\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1346","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1346\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1346\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1346\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1346","id":759844137,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0Nzc2ODE5","number":1346,"title":"Add MultiBooked dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["There' still an issue with the dummy data, let me take a look"],"created_at":1607467776000,"updated_at":1608051729000,"closed_at":1608051729000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1346","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1346","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1346.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1346.patch"},"body":"Add dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1346\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1345","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1345\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1345\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1345\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1345","id":759835486,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NzY5NzMw","number":1345,"title":"First commit of NarrativeQA Dataset","user":{"login":"rsanjaykamath","id":18527321,"node_id":"MDQ6VXNlcjE4NTI3MzIx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/18527321?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rsanjaykamath","html_url":"https:\/\/github.com\/rsanjaykamath","followers_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/followers","following_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/orgs","repos_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/repos","events_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rsanjaykamath\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607466719000,"updated_at":1611588712000,"closed_at":1607506192000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1345","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1345","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1345.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1345.patch"},"body":"Added NarrativeQA dataset and included a manual downloading option to download scripts from the original scripts provided by the authors. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1345\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1344","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1344\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1344\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1344\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1344","id":759831925,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NzY2ODIy","number":1344,"title":"Add hausa ner corpus","user":{"login":"dadelani","id":23586676,"node_id":"MDQ6VXNlcjIzNTg2Njc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23586676?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dadelani","html_url":"https:\/\/github.com\/dadelani","followers_url":"https:\/\/api.github.com\/users\/dadelani\/followers","following_url":"https:\/\/api.github.com\/users\/dadelani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dadelani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dadelani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dadelani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dadelani\/orgs","repos_url":"https:\/\/api.github.com\/users\/dadelani\/repos","events_url":"https:\/\/api.github.com\/users\/dadelani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dadelani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607466304000,"updated_at":1607469115000,"closed_at":1607469115000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1344","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1344","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1344.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1344.patch"},"body":"Added Hausa VOA NER data ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1344\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1343","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1343\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1343\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1343\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1343","id":759809999,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NzQ4NTE4","number":1343,"title":"Add 
LiveQA","user":{"login":"j-chim","id":22435209,"node_id":"MDQ6VXNlcjIyNDM1MjA5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22435209?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/j-chim","html_url":"https:\/\/github.com\/j-chim","followers_url":"https:\/\/api.github.com\/users\/j-chim\/followers","following_url":"https:\/\/api.github.com\/users\/j-chim\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/j-chim\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/j-chim\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/j-chim\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/j-chim\/orgs","repos_url":"https:\/\/api.github.com\/users\/j-chim\/repos","events_url":"https:\/\/api.github.com\/users\/j-chim\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/j-chim\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607464356000,"updated_at":1607938828000,"closed_at":1607938828000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1343","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1343","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1343.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1343.patch"},"body":"This PR adds LiveQA, the Chinese real-time\/timeline-based QA task by [Liu et al., 2020](https:\/\/arxiv.org\/pdf\/2010.00526.pdf). ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1343\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1342","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1342\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1342\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1342\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1342","id":759794121,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NzM1MzAw","number":1342,"title":"[yaml] Fix metadata according to pre-specified 
scheme","user":{"login":"julien-c","id":326577,"node_id":"MDQ6VXNlcjMyNjU3Nw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/326577?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/julien-c","html_url":"https:\/\/github.com\/julien-c","followers_url":"https:\/\/api.github.com\/users\/julien-c\/followers","following_url":"https:\/\/api.github.com\/users\/julien-c\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/julien-c\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/julien-c\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/julien-c\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/julien-c\/orgs","repos_url":"https:\/\/api.github.com\/users\/julien-c\/repos","events_url":"https:\/\/api.github.com\/users\/julien-c\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/julien-c\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607462794000,"updated_at":1607528247000,"closed_at":1607528246000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1342","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1342","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1342.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1342.patch"},"body":"@lhoestq @yjernite ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1342\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1341","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1341\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1341\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1341\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1341","id":759784557,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NzI3MzU5","number":1341,"title":"added references to only data card creator to all 
guides","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607461871000,"updated_at":1607463372000,"closed_at":1607463371000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1341","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1341","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1341.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1341.patch"},"body":"We can now use the wonderful online form for dataset cards created by @evrardts ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1341\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1340","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1340\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1340\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1340\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1340","id":759765408,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NzExMjc5","number":1340,"title":":fist: \u00a1Viva la Independencia!","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I've added the changes \/ fixes - ready for a second pass 
:)"],"created_at":1607460223000,"updated_at":1607942161000,"closed_at":1607942161000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1340","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1340","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1340.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1340.patch"},"body":"Adds the Catalonia Independence Corpus for stance-detection of Tweets.\r\n\r\nReady for review!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1340\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1339","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1339\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1339\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1339\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1339","id":759744088,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0Njk0NDI4","number":1339,"title":"hate_speech_18 initial commit","user":{"login":"czabo","id":75574105,"node_id":"MDQ6VXNlcjc1NTc0MTA1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/75574105?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/czabo","html_url":"https:\/\/github.com\/czabo","followers_url":"https:\/\/api.github.com\/users\/czabo\/followers","following_url":"https:\/\/api.github.com\/users\/czabo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/czabo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/czabo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/czabo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/czabo\/orgs","repos_url":"https:\/\/api.github.com\/users\/czabo\/repos","events_url":"https:\/\/api.github.com\/users\/czabo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/czabo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> Nice thanks !\r\n> \r\n> Can you rename the dataset folder and the dataset script name `hate_speech18` instead of `hate_speech_18` to follow the snake case convention we're using ?\r\n> \r\n> Also it looks like the dummy_data.zip file is quite big (almost 4MB).\r\n> Can you try to reduce its size ?\r\n> \r\n> To do so feel free to take a look inside it and remove all the unnecessary files or chunks of texts. The idea is to only keep a few examples\r\n\r\nDone, thanks! 
","Re-opened in https:\/\/github.com\/huggingface\/datasets\/pull\/1486"],"created_at":1607458208000,"updated_at":1607789852000,"closed_at":1607789852000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1339","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1339","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1339.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1339.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1339\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1338","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1338\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1338\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1338\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1338","id":759725770,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0Njc5ODcz","number":1338,"title":"Add GigaFren Dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq fixed"],"created_at":1607456524000,"updated_at":1607940227000,"closed_at":1607940226000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1338","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1338","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1338.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1338.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1338\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1337","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1337\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1337\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1337\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1337","id":759710482,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NjY3NDUz","number":1337,"title":"Add spanish billion words","user":{"login":"mariagrandury","id":57645283,"node_id":"MDQ6VXNlcjU3NjQ1Mjgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/57645283?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariagrandury","html_url":"https:\/\/github.com\/mariagrandury","followers_url":"https:\/\/api.github.com\/users\/mariagrandury\/followers","following_url":"https:\/\/api.github.com\/users\/mariagrandury\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariagrandury\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariagrandury\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariagrandury\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariagrandury\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariagrandury\/repos","events_url":"https:\/\/api.github.com\/users\/mariagrandury\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariagrandury\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The tests failed because of ```RemoteDatasetTest``` so I tried ```git rebase``` and messed everything up. 
I've made a new clean PR (#1347)."],"created_at":1607455082000,"updated_at":1607468378000,"closed_at":1607462127000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1337","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1337","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1337.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1337.patch"},"body":"Add an unannotated corpus of the Spanish language of nearly 1.5 billion words, compiled from different resources from the web.\r\n\r\nThe dataset needs 10 GB (download: 1.89 GiB, generated: 8.34 GiB, post-processed: Unknown size, total: 10.22 GiB), the test using dummy data pass but my laptop isn't able to run it on the real data (I left it running for over 8 hours and it didn't finish).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1337\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1336","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1336\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1336\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1336\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1336","id":759706932,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NjY0NjIw","number":1336,"title":"Add dataset Yoruba BBC Topic Classification","user":{"login":"michael-aloys","id":1858628,"node_id":"MDQ6VXNlcjE4NTg2Mjg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1858628?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/michael-aloys","html_url":"https:\/\/github.com\/michael-aloys","followers_url":"https:\/\/api.github.com\/users\/michael-aloys\/followers","following_url":"https:\/\/api.github.com\/users\/michael-aloys\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/michael-aloys\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/michael-aloys\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/michael-aloys\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/michael-aloys\/orgs","repos_url":"https:\/\/api.github.com\/users\/michael-aloys\/repos","events_url":"https:\/\/api.github.com\/users\/michael-aloys\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/michael-aloys\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607454738000,"updated_at":1607599661000,"closed_at":1607599661000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1336","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1336","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1336.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1336.patch"},"body":"Added new dataset Yoruba BBC Topic Classification\r\n\r\nContains loading script as well as dataset card including YAML 
tags.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1336\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1335","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1335\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1335\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1335\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1335","id":759705835,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NjYzNzQ2","number":1335,"title":"Added Bianet dataset","user":{"login":"param087","id":26374564,"node_id":"MDQ6VXNlcjI2Mzc0NTY0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26374564?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/param087","html_url":"https:\/\/github.com\/param087","followers_url":"https:\/\/api.github.com\/users\/param087\/followers","following_url":"https:\/\/api.github.com\/users\/param087\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/param087\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/param087\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/param087\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/param087\/orgs","repos_url":"https:\/\/api.github.com\/users\/param087\/repos","events_url":"https:\/\/api.github.com\/users\/param087\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/param087\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the Ci is fixed on master"],"created_at":1607454632000,"updated_at":1607940056000,"closed_at":1607940056000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1335","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1335","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1335.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1335.patch"},"body":"Hi :hugs:, This is a PR for [Bianet: A parallel news corpus in Turkish, Kurdish and English; Source](http:\/\/opus.nlpl.eu\/Bianet.php) dataset","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1335\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1334","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1334\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1334\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1334\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1334","id":759699993,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NjU5MDg2","number":1334,"title":"Add QED Amara 
Dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607454073000,"updated_at":1607599045000,"closed_at":1607598957000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1334","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1334","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1334.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1334.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1334\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1333","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1333\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1333\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1333\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1333","id":759687836,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NjQ4OTI4","number":1333,"title":"Add Tanzil 
Dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607453115000,"updated_at":1607599076000,"closed_at":1607598883000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1333","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1333","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1333.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1333.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1333\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1332","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1332\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1332\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1332\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1332","id":759679135,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NjQxOTE5","number":1332,"title":"Add Open Subtitles 
Dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607452305000,"updated_at":1607599058000,"closed_at":1607598798000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1332","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1332","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1332.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1332.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1332\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1331","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1331\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1331\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1331\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1331","id":759677189,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NjQwMzc5","number":1331,"title":"First version of the new dataset 
hausa_voa_topics","user":{"login":"michael-aloys","id":1858628,"node_id":"MDQ6VXNlcjE4NTg2Mjg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1858628?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/michael-aloys","html_url":"https:\/\/github.com\/michael-aloys","followers_url":"https:\/\/api.github.com\/users\/michael-aloys\/followers","following_url":"https:\/\/api.github.com\/users\/michael-aloys\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/michael-aloys\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/michael-aloys\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/michael-aloys\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/michael-aloys\/orgs","repos_url":"https:\/\/api.github.com\/users\/michael-aloys\/repos","events_url":"https:\/\/api.github.com\/users\/michael-aloys\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/michael-aloys\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607452132000,"updated_at":1607598593000,"closed_at":1607598593000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1331","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1331","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1331.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1331.patch"},"body":"Contains loading script as well as dataset card including YAML tags.\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1331\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1330","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1330\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1330\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1330\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1330","id":759657324,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NjI0MzMx","number":1330,"title":"added un_ga dataset","user":{"login":"param087","id":26374564,"node_id":"MDQ6VXNlcjI2Mzc0NTY0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26374564?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/param087","html_url":"https:\/\/github.com\/param087","followers_url":"https:\/\/api.github.com\/users\/param087\/followers","following_url":"https:\/\/api.github.com\/users\/param087\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/param087\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/param087\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/param087\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/param087\/orgs","repos_url":"https:\/\/api.github.com\/users\/param087\/repos","events_url":"https:\/\/api.github.com\/users\/param087\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/param087\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Looks like 
this PR includes changes about many other files than the ones for un_ga\r\n\r\nCan you create another branch an another PR please ?","@lhoestq, Thank you for suggestions. I have made the changes and raised the new PR https:\/\/github.com\/huggingface\/datasets\/pull\/1569. "],"created_at":1607450318000,"updated_at":1607968354000,"closed_at":1607968354000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1330","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1330","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1330.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1330.patch"},"body":"Hi :hugs:, This is a PR for [United nations general assembly resolutions: A six-language parallel corpus](http:\/\/opus.nlpl.eu\/UN.php) dataset","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1330\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1329","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1329\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1329\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1329\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1329","id":759654174,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NjIxNzg0","number":1329,"title":"Add yoruba ner corpus","user":{"login":"dadelani","id":23586676,"node_id":"MDQ6VXNlcjIzNTg2Njc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23586676?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dadelani","html_url":"https:\/\/github.com\/dadelani","followers_url":"https:\/\/api.github.com\/users\/dadelani\/followers","following_url":"https:\/\/api.github.com\/users\/dadelani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dadelani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dadelani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dadelani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dadelani\/orgs","repos_url":"https:\/\/api.github.com\/users\/dadelani\/repos","events_url":"https:\/\/api.github.com\/users\/dadelani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dadelani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607450040000,"updated_at":1607469072000,"closed_at":1607469072000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1329","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1329","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1329.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1329.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1329\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1328","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1328\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1328\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1328\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1328","id":759634907,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NjA2MDM1","number":1328,"title":"Added the NewsPH Raw dataset and corresponding dataset card","user":{"login":"jcblaisecruz02","id":24757547,"node_id":"MDQ6VXNlcjI0NzU3NTQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24757547?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jcblaisecruz02","html_url":"https:\/\/github.com\/jcblaisecruz02","followers_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/followers","following_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/orgs","repos_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/repos","events_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607448345000,"updated_at":1607598274000,"closed_at":1607598274000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1328","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1328","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1328.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1328.patch"},"body":"This PR adds the original NewsPH dataset which is used to autogenerate the NewsPH-NLI dataset. 
Reopened a new PR as the previous one had problems.\r\n\r\nPaper: https:\/\/arxiv.org\/abs\/2010.11574\r\nRepo: https:\/\/github.com\/jcblaisecruz02\/Filipino-Text-Benchmarks","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1328\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1327","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1327\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1327\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1327\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1327","id":759629321,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NjAxNDM3","number":1327,"title":"Add msr_genomics_kbcomp dataset","user":{"login":"manandey","id":6687858,"node_id":"MDQ6VXNlcjY2ODc4NTg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6687858?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/manandey","html_url":"https:\/\/github.com\/manandey","followers_url":"https:\/\/api.github.com\/users\/manandey\/followers","following_url":"https:\/\/api.github.com\/users\/manandey\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/manandey\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/manandey\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/manandey\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/manandey\/orgs","repos_url":"https:\/\/api.github.com\/users\/manandey\/repos","events_url":"https:\/\/api.github.com\/users\/manandey\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/manandey\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607447900000,"updated_at":1607451512000,"closed_at":1607451486000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1327","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1327","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1327.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1327.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1327\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1326","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1326\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1326\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1326\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1326","id":759611784,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NTg2ODY4","number":1326,"title":"TEP: Tehran English-Persian parallel 
corpus","user":{"login":"spatil6","id":6419011,"node_id":"MDQ6VXNlcjY0MTkwMTE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6419011?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/spatil6","html_url":"https:\/\/github.com\/spatil6","followers_url":"https:\/\/api.github.com\/users\/spatil6\/followers","following_url":"https:\/\/api.github.com\/users\/spatil6\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/spatil6\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/spatil6\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/spatil6\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/spatil6\/orgs","repos_url":"https:\/\/api.github.com\/users\/spatil6\/repos","events_url":"https:\/\/api.github.com\/users\/spatil6\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/spatil6\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607446613000,"updated_at":1608389703000,"closed_at":1607599517000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1326","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1326","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1326.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1326.patch"},"body":"TEP: Tehran English-Persian parallel corpus\r\nmore info : http:\/\/opus.nlpl.eu\/TEP.php","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1326\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1325","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1325\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1325\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1325\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1325","id":759595556,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NTczNjM2","number":1325,"title":"Add humicroedit dataset","user":{"login":"saradhix","id":1351362,"node_id":"MDQ6VXNlcjEzNTEzNjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1351362?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/saradhix","html_url":"https:\/\/github.com\/saradhix","followers_url":"https:\/\/api.github.com\/users\/saradhix\/followers","following_url":"https:\/\/api.github.com\/users\/saradhix\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/saradhix\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/saradhix\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/saradhix\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/saradhix\/orgs","repos_url":"https:\/\/api.github.com\/users\/saradhix\/repos","events_url":"https:\/\/api.github.com\/users\/saradhix\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/saradhix\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Updated the commit with the generated yaml tags","merging since the CI is 
fixed on master"],"created_at":1607445346000,"updated_at":1608227949000,"closed_at":1608227949000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1325","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1325","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1325.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1325.patch"},"body":"Pull request for adding humicroedit dataset","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1325\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1324","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1324\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1324\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1324\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1324","id":759587864,"node_id":"MDU6SXNzdWU3NTk1ODc4NjQ=","number":1324,"title":"\u2753 Sharing ElasticSearch indexed dataset ","user":{"login":"pietrolesci","id":61748653,"node_id":"MDQ6VXNlcjYxNzQ4NjUz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/61748653?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/pietrolesci","html_url":"https:\/\/github.com\/pietrolesci","followers_url":"https:\/\/api.github.com\/users\/pietrolesci\/followers","following_url":"https:\/\/api.github.com\/users\/pietrolesci\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/pietrolesci\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/pietrolesci\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/pietrolesci\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/pietrolesci\/orgs","repos_url":"https:\/\/api.github.com\/users\/pietrolesci\/repos","events_url":"https:\/\/api.github.com\/users\/pietrolesci\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/pietrolesci\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hello @pietrolesci , I am not sure to understand what you are trying to do here.\r\n\r\nIf you're looking for ways to save a dataset on disk, you can you the `save_to_disk` method:\r\n```python\r\n>>> import datasets\r\n>>> loaded_dataset = datasets.load(\"dataset_name\")\r\n>>> loaded_dataset.save_to_disk(\"\/path\/on\/your\/disk\")\r\n```\r\n\r\nThe saved dataset can later be retrieved using:\r\n```python\r\n>>> loaded_dataset = datasets.Dataset.load_from_disk(\"\/path\/on\/your\/disk\")\r\n```\r\n\r\nAlso, I'd recommend posting your question directly in the issue section of the [elasticsearch repo](https:\/\/github.com\/elastic\/elasticsearch)","Hi @SBrandeis,\n\nThanks a lot for picking up my request. \n\nMaybe I can clarify my use-case with a bit of context. Say I have the IMDb dataset. I create an ES index on it. 
Now I can save and reload the dataset from disk normally. Once I reload the dataset, it is easy to retrieve the ES index on my machine. I was wondering: is there a way I can share the (now) indexed version of the IMDb dataset with my colleagues without requiring them to re-index it?\n\nThanks a lot in advance for your consideration.\n\nBest,\n\nPietro","Thanks for the clarification.\r\n\r\nI am not familiar with ElasticSearch, but if I understand well you're trying to migrate your data along with the ES index.\r\nMy advice would be to check out ES documentation, for instance, this might help you: https:\/\/www.elastic.co\/guide\/en\/cloud\/current\/ec-migrate-data.html\r\n\r\nLet me know if it helps"],"created_at":1607444758000,"updated_at":1608623456000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi there,\r\n\r\nFirst of all, thank you very much for this amazing library. Datasets have become my preferred data structure for basically everything I am currently doing.\r\n\r\n**Question:** I'm working with a dataset and I have an elasticsearch container running at localhost:9200. I added an elasticsearch index and I was wondering\r\n\r\n- how can I know where it has been saved?\r\n\r\n- how can I share the indexed dataset with others?\r\n\r\nI tried to dig into the docs, but could not find anything about that.\r\n\r\nThank you very much for your help.\r\n\r\nBest,\r\nPietro\r\n\r\nEdit: apologies for the wrong label","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1324\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1323","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1323\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1323\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1323\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1323","id":759581919,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NTYyNDQ0","number":1323,"title":"Add CC-News dataset of English language articles","user":{"login":"vblagoje","id":458335,"node_id":"MDQ6VXNlcjQ1ODMzNQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/458335?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vblagoje","html_url":"https:\/\/github.com\/vblagoje","followers_url":"https:\/\/api.github.com\/users\/vblagoje\/followers","following_url":"https:\/\/api.github.com\/users\/vblagoje\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vblagoje\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vblagoje\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vblagoje\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vblagoje\/orgs","repos_url":"https:\/\/api.github.com\/users\/vblagoje\/repos","events_url":"https:\/\/api.github.com\/users\/vblagoje\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vblagoje\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@vblagoje nice work, please add the README.md file and it would be ready","@lhoestq @tanmoyio @yjernite please have a look at the dataset card. 
Don't forget that the dataset is still hosted on my private gs bucket and should eventually be moved to the HF bucket","I will move the files soon and ping you when it's done and with the new URLs :) ","Hi !\r\n\r\nI just moved the file to a HF bucket. It's available at https:\/\/storage.googleapis.com\/huggingface-nlp\/datasets\/cc_news\/cc_news.tar.gz\r\n\r\nSorry for the delay ^^'","@lhoestq no worries, updated PR with the new URL and rebased to master\r\n"],"created_at":1607444295000,"updated_at":1612198549000,"closed_at":1612198549000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1323","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1323","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1323.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1323.patch"},"body":"Adds [CC-News](https:\/\/commoncrawl.org\/2016\/10\/news-dataset-available\/) dataset. It contains 708241 English language news articles. Although each article has a language field these tags are not reliable. I've used Spacy language detection [pipeline](https:\/\/spacy.io\/universe\/project\/spacy-langdetect) to confirm that the article language is indeed English. \r\n\r\nThe prepared dataset is temporarily hosted on my private Google Storage [bucket](https:\/\/storage.googleapis.com\/hf_datasets\/cc_news.tar.gz). We can move it to HF storage and update this PR before merging. ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1323\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1322","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1322\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1322\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1322\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1322","id":759576003,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NTU3Njg3","number":1322,"title":"add indonlu benchmark 
datasets","user":{"login":"yasirabd","id":6518504,"node_id":"MDQ6VXNlcjY1MTg1MDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6518504?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yasirabd","html_url":"https:\/\/github.com\/yasirabd","followers_url":"https:\/\/api.github.com\/users\/yasirabd\/followers","following_url":"https:\/\/api.github.com\/users\/yasirabd\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yasirabd\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yasirabd\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yasirabd\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yasirabd\/orgs","repos_url":"https:\/\/api.github.com\/users\/yasirabd\/repos","events_url":"https:\/\/api.github.com\/users\/yasirabd\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yasirabd\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607443858000,"updated_at":1607825487000,"closed_at":1607824468000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1322","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1322","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1322.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1322.patch"},"body":"The IndoNLU benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems for the Indonesian language. There are 12 datasets in IndoNLU.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1322\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1321","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1321\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1321\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1321\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1321","id":759573610,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NTU1Nzg1","number":1321,"title":"added 
dutch_social","user":{"login":"skyprince999","id":9033954,"node_id":"MDQ6VXNlcjkwMzM5NTQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9033954?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/skyprince999","html_url":"https:\/\/github.com\/skyprince999","followers_url":"https:\/\/api.github.com\/users\/skyprince999\/followers","following_url":"https:\/\/api.github.com\/users\/skyprince999\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/skyprince999\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/skyprince999\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/skyprince999\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/skyprince999\/orgs","repos_url":"https:\/\/api.github.com\/users\/skyprince999\/repos","events_url":"https:\/\/api.github.com\/users\/skyprince999\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/skyprince999\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq \r\nUpdated the `dummy_data.zip `(<10kb)I had to reduce it to just a few samples. \r\nTrain-Test-Dev (20-5-5 samples) \r\n\r\nBut the push also added changes from other PRs (probably because of a rebase!) So the files changed tab shows 466 files were changed! \r\n","Thanks ! The dummy data are all good now :) \r\n\r\nLooks like this PR includes changes to many other files than the ones for dutch_social now.\r\n\r\nCan you create another branch and another PR please ?","> \r\n> Can you create another branch and another PR please ?\r\n@lhoestq \r\n\r\nI did a rebase. Now it doesn't include the other files. Does that help? \r\n\r\n","Yes thanks !"],"created_at":1607443674000,"updated_at":1608113657000,"closed_at":1608113657000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1321","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1321","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1321.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1321.patch"},"body":"The Dutch social media tweets dataset. Which has a total of more than 210k tweets in dutch language. 
These tweets have been machine annotated with sentiment scores (`label` feature) and `industry` and `hisco_codes`\r\n\r\nIt can be used for sentiment analysis, multi-label classification and entity tagging","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1321\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1320","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1320\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1320\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1320\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1320","id":759566148,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NTUwMDM4","number":1320,"title":"Added the WikiText-TL39 dataset and corresponding card","user":{"login":"jcblaisecruz02","id":24757547,"node_id":"MDQ6VXNlcjI0NzU3NTQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24757547?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jcblaisecruz02","html_url":"https:\/\/github.com\/jcblaisecruz02","followers_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/followers","following_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/orgs","repos_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/repos","events_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607443226000,"updated_at":1607599493000,"closed_at":1607599493000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1320","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1320","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1320.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1320.patch"},"body":"This PR adds the WikiText-TL-39 Filipino Language Modeling dataset. 
Restarted a new pull request since there were problems with the earlier one.\r\n\r\nPaper: https:\/\/arxiv.org\/abs\/1907.00409\r\nRepo: https:\/\/github.com\/jcblaisecruz02\/Filipino-Text-Benchmarks","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1320\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1319","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1319\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1319\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1319\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1319","id":759565923,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NTQ5ODU5","number":1319,"title":"adding wili-2018 language identification dataset","user":{"login":"Shubhambindal2017","id":31540058,"node_id":"MDQ6VXNlcjMxNTQwMDU4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/31540058?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Shubhambindal2017","html_url":"https:\/\/github.com\/Shubhambindal2017","followers_url":"https:\/\/api.github.com\/users\/Shubhambindal2017\/followers","following_url":"https:\/\/api.github.com\/users\/Shubhambindal2017\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Shubhambindal2017\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Shubhambindal2017\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Shubhambindal2017\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Shubhambindal2017\/orgs","repos_url":"https:\/\/api.github.com\/users\/Shubhambindal2017\/repos","events_url":"https:\/\/api.github.com\/users\/Shubhambindal2017\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Shubhambindal2017\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq Not sure what happened, I just changed the py file but it is showing some TensorFlow error now.","You can ignore it.\r\nIt's caused by the Tensorflow update that happened 30min ago. 
They added breaking changes.\r\nI'm working on a fix on the master branch right now\r\n","oh okay, btw I have made the required change for reading the CSV, I think it should be fine now, please take a look at it when you have some time.","merging since the CI is fixed on master"],"created_at":1607443209000,"updated_at":1607980832000,"closed_at":1607980832000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1319","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1319","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1319.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1319.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1319\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1318","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1318\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1318\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1318\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1318","id":759565629,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NTQ5NjE3","number":1318,"title":"ethos first commit","user":{"login":"iamollas","id":22838900,"node_id":"MDQ6VXNlcjIyODM4OTAw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22838900?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/iamollas","html_url":"https:\/\/github.com\/iamollas","followers_url":"https:\/\/api.github.com\/users\/iamollas\/followers","following_url":"https:\/\/api.github.com\/users\/iamollas\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/iamollas\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/iamollas\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/iamollas\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/iamollas\/orgs","repos_url":"https:\/\/api.github.com\/users\/iamollas\/repos","events_url":"https:\/\/api.github.com\/users\/iamollas\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/iamollas\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> Nice thanks !\r\n> \r\n> I left a few comments\r\n> \r\n> Also it looks like this PR includes changes about other files than the ones for ethos\r\n> \r\n> Can you create another branch and another PR please ?\r\n\r\n@lhoestq Should I close this PR? The new one is the: #1453","You can create another PR and close this one if you don't mind","> You can create another PR and close this one if you don't mind\r\n\r\nPerfect! You should see the #1453 PR for the fixed version! 
Thanks"],"created_at":1607443187000,"updated_at":1607611557000,"closed_at":1607611557000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1318","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1318","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1318.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1318.patch"},"body":"Ethos passed all the tests except from this one: \r\nRUN_SLOW=1 pytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_<your-dataset-name>\r\n\r\nwith this error: \r\nE OSError: Cannot find data file. \r\nE Original error:\r\nE [Errno 2] No such file or directory: ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1318\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1317","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1317\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1317\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1317\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1317","id":759553495,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NTM5NTQ5","number":1317,"title":"add 10k German News Article Dataset","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["You can just create another branch from master on your fork and create another PR:\r\n\r\nfirst update your master branch\r\n```\r\ngit checkout master\r\ngit fetch upstream\r\ngit rebase upstream\/master\r\ngit push\r\n```\r\n\r\nthen create a new branch\r\n```\r\ngit checkout -b my-new-branch-name\r\n```\r\n\r\nThen you can add, commit and push the gnad10 files and open a new PR","closing in favor of #1572 
"],"created_at":1607442265000,"updated_at":1631897751000,"closed_at":1608137443000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1317","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1317","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1317.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1317.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1317\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1316","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1316\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1316\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1316\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1316","id":759549601,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NTM2Mzc1","number":1316,"title":"Allow GitHub releases as dataset source","user":{"login":"benjaminvdb","id":8875786,"node_id":"MDQ6VXNlcjg4NzU3ODY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8875786?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/benjaminvdb","html_url":"https:\/\/github.com\/benjaminvdb","followers_url":"https:\/\/api.github.com\/users\/benjaminvdb\/followers","following_url":"https:\/\/api.github.com\/users\/benjaminvdb\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/benjaminvdb\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/benjaminvdb\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/benjaminvdb\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/benjaminvdb\/orgs","repos_url":"https:\/\/api.github.com\/users\/benjaminvdb\/repos","events_url":"https:\/\/api.github.com\/users\/benjaminvdb\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/benjaminvdb\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607441975000,"updated_at":1607595120000,"closed_at":1607595120000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1316","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1316","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1316.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1316.patch"},"body":"# Summary\r\n\r\nProviding a GitHub release URL to `DownloadManager.download()` currently throws a `ConnectionError: Couldn't reach [DOWNLOAD_URL]`. 
This PR fixes this problem by adding an exception for GitHub releases in `datasets.utils.file_utils.get_from_cache()`.\r\n\r\n# Reproduce\r\n\r\n```\r\nimport datasets\r\nurl = 'http:\/\/github.com\/benjaminvdb\/DBRD\/releases\/download\/v3.0\/DBRD_v3.tgz'\r\nresult = datasets.utils.file_utils.get_from_cache(url)\r\n\r\n# Returns: ConnectionError: Couldn't reach http:\/\/github.com\/benjaminvdb\/DBRD\/releases\/download\/v3.0\/DBRD_v3.tgz\r\n```\r\n\r\n# Cause\r\n\r\nGitHub releases returns a HTTP status 403 (FOUND), indicating that the request is being redirected (to AWS S3, in this case). `get_from_cache()` checks whether the status is 200 (OK) or if it is part of two exceptions (Google Drive or Firebase), otherwise the mentioned error is thrown.\r\n\r\n# Solution\r\n\r\nJust like the exceptions for Google Drive and Firebase, add a condition for GitHub releases URLs that return the HTTP status 403. If this is the case, continue normally.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1316\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1315","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1315\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1315\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1315\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1315","id":759548706,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NTM1NjM4","number":1315,"title":"add yelp_review_full","user":{"login":"hfawaz","id":29229602,"node_id":"MDQ6VXNlcjI5MjI5NjAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29229602?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hfawaz","html_url":"https:\/\/github.com\/hfawaz","followers_url":"https:\/\/api.github.com\/users\/hfawaz\/followers","following_url":"https:\/\/api.github.com\/users\/hfawaz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hfawaz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hfawaz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hfawaz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hfawaz\/orgs","repos_url":"https:\/\/api.github.com\/users\/hfawaz\/repos","events_url":"https:\/\/api.github.com\/users\/hfawaz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hfawaz\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607441907000,"updated_at":1607529349000,"closed_at":1607529349000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1315","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1315","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1315.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1315.patch"},"body":"This corresponds to the Yelp-5 requested in https:\/\/github.com\/huggingface\/datasets\/issues\/353\r\nI included the dataset card. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1315\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1314","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1314\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1314\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1314\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1314","id":759541937,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NTMwMDE5","number":1314,"title":"Add snips built in intents 2016 12","user":{"login":"bduvenhage","id":8405335,"node_id":"MDQ6VXNlcjg0MDUzMzU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8405335?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bduvenhage","html_url":"https:\/\/github.com\/bduvenhage","followers_url":"https:\/\/api.github.com\/users\/bduvenhage\/followers","following_url":"https:\/\/api.github.com\/users\/bduvenhage\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bduvenhage\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bduvenhage\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bduvenhage\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bduvenhage\/orgs","repos_url":"https:\/\/api.github.com\/users\/bduvenhage\/repos","events_url":"https:\/\/api.github.com\/users\/bduvenhage\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bduvenhage\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["It is not clear how to automatically add the dummy data if the source data is a more complex json format. Should I manually take a fraction of the source data and include it as dummy data?\r\n","Added a fraction of the real data as dummy data.","merging since the CI is fixed on master"],"created_at":1607441419000,"updated_at":1607939947000,"closed_at":1607939947000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1314","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1314","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1314.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1314.patch"},"body":"This PR proposes to add the Snips.ai built in intents dataset. 
The first configuration added is for the intent labels only, but the dataset includes entity slots that may in future be added as alternate configurations.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1314\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1313","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1313\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1313\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1313\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1313","id":759536512,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NTI1NjE3","number":1313,"title":"Add HateSpeech Corpus for Polish","user":{"login":"kacperlukawski","id":2649301,"node_id":"MDQ6VXNlcjI2NDkzMDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2649301?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/kacperlukawski","html_url":"https:\/\/github.com\/kacperlukawski","followers_url":"https:\/\/api.github.com\/users\/kacperlukawski\/followers","following_url":"https:\/\/api.github.com\/users\/kacperlukawski\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/kacperlukawski\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/kacperlukawski\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/kacperlukawski\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/kacperlukawski\/orgs","repos_url":"https:\/\/api.github.com\/users\/kacperlukawski\/repos","events_url":"https:\/\/api.github.com\/users\/kacperlukawski\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/kacperlukawski\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq Do you think using the ClassLabel is correct if we don't know the meaning of them?","Once we find out the meanings we can still add them to the dataset card","Feel free to ping me when the PR is ready for the final review"],"created_at":1607441033000,"updated_at":1608137325000,"closed_at":1608137325000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1313","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1313","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1313.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1313.patch"},"body":"This PR adds a HateSpeech Corpus for Polish, containing offensive language examples.\r\n\r\n- **Homepage:** http:\/\/zil.ipipan.waw.pl\/HateSpeech\r\n- **Paper:** http:\/\/www.qualitativesociologyreview.org\/PL\/Volume38\/PSJ_13_2_Troszynski_Wawer.pdf","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1313\/timeline","performed_via_github_app":null,"is_pull_request":true} 
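The `ClassLabel` question raised in #1313 above (labels whose meanings are not yet documented) comes up for several of these dataset scripts. The following is a hedged sketch only, not the actual hatespeech_pl schema: the column name "rating" and the four label ids are assumptions. It shows how a loading script can still declare a `ClassLabel` using the raw ids as class names and defer their interpretation to the dataset card, as suggested in the comments.

```python
import datasets

# Sketch under assumptions (not the real hatespeech_pl script): the column
# name "rating" and the label ids "0".."3" are illustrative placeholders.
features = datasets.Features(
    {
        "text": datasets.Value("string"),
        # Raw ids are kept as class names until their meanings are documented
        # in the dataset card.
        "rating": datasets.ClassLabel(names=["0", "1", "2", "3"]),
    }
)

# A loading script would expose these features through DatasetInfo.
info = datasets.DatasetInfo(
    description="Illustrative only: HateSpeech corpus for Polish (see PR #1313).",
    features=features,
)
```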
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1312","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1312\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1312\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1312\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1312","id":759532626,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NTIyMzc1","number":1312,"title":"Jigsaw toxicity pred","user":{"login":"taihim","id":13764071,"node_id":"MDQ6VXNlcjEzNzY0MDcx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13764071?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/taihim","html_url":"https:\/\/github.com\/taihim","followers_url":"https:\/\/api.github.com\/users\/taihim\/followers","following_url":"https:\/\/api.github.com\/users\/taihim\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/taihim\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/taihim\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/taihim\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/taihim\/orgs","repos_url":"https:\/\/api.github.com\/users\/taihim\/repos","events_url":"https:\/\/api.github.com\/users\/taihim\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/taihim\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607440754000,"updated_at":1607688692000,"closed_at":1607688692000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1312","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1312","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1312.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1312.patch"},"body":"Requires manually downloading data from Kaggle.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1312\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1311","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1311\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1311\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1311\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1311","id":759514819,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NTA3NjM1","number":1311,"title":"Add OPUS Bible Corpus (102 
Languages)","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq done"],"created_at":1607439428000,"updated_at":1607527857000,"closed_at":1607527856000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1311","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1311","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1311.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1311.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1311\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1310","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1310\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1310\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1310\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1310","id":759508921,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NTAyNzE5","number":1310,"title":"Add OffensEval-TR 2020 
Dataset","user":{"login":"yavuzKomecoglu","id":5150963,"node_id":"MDQ6VXNlcjUxNTA5NjM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5150963?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yavuzKomecoglu","html_url":"https:\/\/github.com\/yavuzKomecoglu","followers_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/followers","following_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/orgs","repos_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/repos","events_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yavuzKomecoglu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq, can you please review this PR? ","> Awesome thank you !\r\n\r\nThanks for the small fixes @lhoestq ","@coltekin, we have added the data set that you created an article that says \"Turkish Attack Language Community in Social Media\", HuggingFace dataset update sprint for you. We added Sprint quickly for a short time. I hope you welcome it too. The dataset is accessible at https:\/\/huggingface.co\/datasets\/offenseval2020_tr. ","Thank you for the heads up. I am not familiar with the terminology above (no idea what a sprint is), but I am happy that you found the data useful. Please feel free to distribute\/use it as you see fit.\r\n\r\nThe OffensEval version you included in your data set has only binary labels. There is also a version [here](https:\/\/coltekin.github.io\/offensive-turkish\/troff-v1.0.tsv.gz) which also includes fine-grained labels similar to the OffensEval English data set - Just in case it would be of interest.\r\n\r\nIf you have questions about the data set, or need more information please let me know."],"created_at":1607438991000,"updated_at":1607782542000,"closed_at":1607529726000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1310","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1310","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1310.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1310.patch"},"body":"This PR adds the OffensEval-TR 2020 dataset which is a Turkish offensive language corpus by me and @basakbuluz. 
The corpus consist of randomly sampled tweets and annotated in a similar way to [OffensEval](https:\/\/sites.google.com\/site\/offensevalsharedtask\/) and [GermEval](https:\/\/projects.fzai.h-da.de\/iggsa\/).\r\n\r\n- **Homepage:** [offensive-turkish](https:\/\/coltekin.github.io\/offensive-turkish\/)\r\n- **Paper:** [A Corpus of Turkish Offensive Language on Social Media](https:\/\/coltekin.github.io\/offensive-turkish\/troff.pdf)\r\n- **Point of Contact:** [\u00c7a\u011fr\u0131 \u00c7\u00f6ltekin](ccoltekin@sfs.uni-tuebingen.de)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1310\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1309","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1309\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1309\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1309\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1309","id":759501370,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NDk2NTYx","number":1309,"title":"Add SAMSum Corpus dataset","user":{"login":"cccntu","id":31893406,"node_id":"MDQ6VXNlcjMxODkzNDA2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/31893406?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cccntu","html_url":"https:\/\/github.com\/cccntu","followers_url":"https:\/\/api.github.com\/users\/cccntu\/followers","following_url":"https:\/\/api.github.com\/users\/cccntu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cccntu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cccntu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cccntu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cccntu\/orgs","repos_url":"https:\/\/api.github.com\/users\/cccntu\/repos","events_url":"https:\/\/api.github.com\/users\/cccntu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cccntu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["also to fix the check_code_quality CI you have to remove the imports of the unused `csv` and `os`","@lhoestq Thanks for the review! I have done what you asked, README is also updated. \ud83e\udd17 \r\nThe CI fails because of the added dependency. I have never used circleCI before, so I am curious how will you solve that?","I just added `py7zr` to our test dependencies","merging since the CI is fixed on master","Thanks! 
\ud83e\udd17 "],"created_at":1607438456000,"updated_at":1607949153000,"closed_at":1607941255000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1309","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1309","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1309.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1309.patch"},"body":"Did not spent much time writing README, might update later.\r\n\r\nCopied description and some stuff from tensorflow_datasets\r\nhttps:\/\/github.com\/tensorflow\/datasets\/blob\/master\/tensorflow_datasets\/summarization\/samsum.py","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1309\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1308","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1308\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1308\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1308\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1308","id":759492953,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NDg5Nzcw","number":1308,"title":"Add Wiki Lingua Dataset","user":{"login":"katnoria","id":7674948,"node_id":"MDQ6VXNlcjc2NzQ5NDg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7674948?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/katnoria","html_url":"https:\/\/github.com\/katnoria","followers_url":"https:\/\/api.github.com\/users\/katnoria\/followers","following_url":"https:\/\/api.github.com\/users\/katnoria\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/katnoria\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/katnoria\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/katnoria\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/katnoria\/orgs","repos_url":"https:\/\/api.github.com\/users\/katnoria\/repos","events_url":"https:\/\/api.github.com\/users\/katnoria\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/katnoria\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I am done adding the dataset. Requesting to review and advise.","looks like this PR has changes about many other files than the ones for WIki Lingua \r\n\r\nCan you create another branch and another PR please ?","Any reason to have english as the default config over the other languages ?","> looks like this PR has changes about many other files than the ones for WIki Lingua\r\n> \r\n> Can you create another branch and another PR please ?\r\n\r\nOk, I will create another branch and submit a fresh PR.","> Any reason to have english as the default config over the other languages ?\r\n\r\nThe data for all other languages has a direct reference to English article. 
Thus, I kept English as default.","closing in favor of #1470 "],"created_at":1607437813000,"updated_at":1607942392000,"closed_at":1607942392000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1308","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1308","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1308.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1308.patch"},"body":"Hello,\r\n\r\nThis is my first PR. \r\n\r\nI have added Wiki Lingua Dataset along with dataset card to the best of my knowledge.\r\nThere was one hiccup though. I was unable to create dummy data because the data is in pkl format.\r\nFrom the document, I see that:\r\n```At the moment it supports data files in the following format: txt, csv, tsv, jsonl, json, xml```\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1308\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1307","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1307\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1307\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1307\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1307","id":759458835,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NDYxODc5","number":1307,"title":"adding capes","user":{"login":"patil-suraj","id":27137566,"node_id":"MDQ6VXNlcjI3MTM3NTY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27137566?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patil-suraj","html_url":"https:\/\/github.com\/patil-suraj","followers_url":"https:\/\/api.github.com\/users\/patil-suraj\/followers","following_url":"https:\/\/api.github.com\/users\/patil-suraj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patil-suraj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patil-suraj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patil-suraj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patil-suraj\/orgs","repos_url":"https:\/\/api.github.com\/users\/patil-suraj\/repos","events_url":"https:\/\/api.github.com\/users\/patil-suraj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patil-suraj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607435173000,"updated_at":1607528409000,"closed_at":1607527665000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1307","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1307","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1307.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1307.patch"},"body":"Adding Parallel corpus of theses and dissertation abstracts in Portuguese and English from 
CAPES\r\nhttps:\/\/sites.google.com\/view\/felipe-soares\/datasets#h.p_kxOR6EhHm2a6","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1307\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1306","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1306\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1306\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1306\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1306","id":759448427,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NDUzMTU1","number":1306,"title":"add W&I + LOCNESS dataset (BEA-2019 workshop shared task on GEC)","user":{"login":"aseifert","id":4944799,"node_id":"MDQ6VXNlcjQ5NDQ3OTk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4944799?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/aseifert","html_url":"https:\/\/github.com\/aseifert","followers_url":"https:\/\/api.github.com\/users\/aseifert\/followers","following_url":"https:\/\/api.github.com\/users\/aseifert\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/aseifert\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/aseifert\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/aseifert\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/aseifert\/orgs","repos_url":"https:\/\/api.github.com\/users\/aseifert\/repos","events_url":"https:\/\/api.github.com\/users\/aseifert\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/aseifert\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I created a clean PR where I also incorporated the suggested changes here: https:\/\/github.com\/huggingface\/datasets\/pull\/1449\r\n"],"created_at":1607434294000,"updated_at":1607594034000,"closed_at":1607594008000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1306","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1306","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1306.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1306.patch"},"body":"- **Name:** W&I + LOCNESS dataset (from the BEA-2019 workshop shared task on GEC)\r\n- **Description:** https:\/\/www.cl.cam.ac.uk\/research\/nl\/bea2019st\/#data\r\n- **Paper:** https:\/\/www.aclweb.org\/anthology\/W19-4406\/\r\n- **Motivation:** This is a recent dataset (actually two in one) for grammatical error correction and is used for benchmarking in this field of NLP.\r\n\r\n### Checkbox\r\n\r\n- [x] Create the dataset script `\/datasets\/my_dataset\/my_dataset.py` using the template\r\n- [x] Fill the `_DESCRIPTION` and `_CITATION` variables\r\n- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`\r\n- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.\r\n- [x] Generate the metadata file `dataset_infos.json` for all configurations\r\n- [x] Generate the dummy data `dummy_data.zip` files to 
have the dataset script tested and that they don't weigh too much (<50KB)\r\n- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs\r\n- [x] Both tests for the real data and the dummy data pass.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1306\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1305","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1305\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1305\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1305\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1305","id":759446665,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NDUxNzEx","number":1305,"title":"[README] Added Windows command to enable slow tests","user":{"login":"TevenLeScao","id":26709476,"node_id":"MDQ6VXNlcjI2NzA5NDc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26709476?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/TevenLeScao","html_url":"https:\/\/github.com\/TevenLeScao","followers_url":"https:\/\/api.github.com\/users\/TevenLeScao\/followers","following_url":"https:\/\/api.github.com\/users\/TevenLeScao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/TevenLeScao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/TevenLeScao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/TevenLeScao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/TevenLeScao\/orgs","repos_url":"https:\/\/api.github.com\/users\/TevenLeScao\/repos","events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607434144000,"updated_at":1607435793000,"closed_at":1607435792000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1305","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1305","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1305.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1305.patch"},"body":"The Windows command to run slow tests has caused issues, so this adds a functional Windows command.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1305\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1304","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1304\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1304\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1304\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1304","id":759440841,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NDQ2Nzcy","number":1304,"title":"adding 
eitb_parcc","user":{"login":"patil-suraj","id":27137566,"node_id":"MDQ6VXNlcjI3MTM3NTY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27137566?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patil-suraj","html_url":"https:\/\/github.com\/patil-suraj","followers_url":"https:\/\/api.github.com\/users\/patil-suraj\/followers","following_url":"https:\/\/api.github.com\/users\/patil-suraj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patil-suraj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patil-suraj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patil-suraj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patil-suraj\/orgs","repos_url":"https:\/\/api.github.com\/users\/patil-suraj\/repos","events_url":"https:\/\/api.github.com\/users\/patil-suraj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patil-suraj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607433654000,"updated_at":1607536974000,"closed_at":1607536923000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1304","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1304","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1304.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1304.patch"},"body":"Adding EiTB-ParCC: Parallel Corpus of Comparable News\r\nhttp:\/\/opus.nlpl.eu\/EiTB-ParCC.php","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1304\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1303","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1303\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1303\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1303\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1303","id":759440484,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NDQ2NDg0","number":1303,"title":"adding 
opus_openoffice","user":{"login":"patil-suraj","id":27137566,"node_id":"MDQ6VXNlcjI3MTM3NTY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27137566?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patil-suraj","html_url":"https:\/\/github.com\/patil-suraj","followers_url":"https:\/\/api.github.com\/users\/patil-suraj\/followers","following_url":"https:\/\/api.github.com\/users\/patil-suraj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patil-suraj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patil-suraj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patil-suraj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patil-suraj\/orgs","repos_url":"https:\/\/api.github.com\/users\/patil-suraj\/repos","events_url":"https:\/\/api.github.com\/users\/patil-suraj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patil-suraj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607433621000,"updated_at":1607593030000,"closed_at":1607593030000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1303","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1303","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1303.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1303.patch"},"body":"Adding Opus OpenOffice: http:\/\/opus.nlpl.eu\/OpenOffice.php\r\n8 languages, 28 bitexts","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1303\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1302","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1302\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1302\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1302\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1302","id":759435740,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NDQyNTA0","number":1302,"title":"Add Danish NER 
dataset","user":{"login":"ophelielacroix","id":28562991,"node_id":"MDQ6VXNlcjI4NTYyOTkx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28562991?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ophelielacroix","html_url":"https:\/\/github.com\/ophelielacroix","followers_url":"https:\/\/api.github.com\/users\/ophelielacroix\/followers","following_url":"https:\/\/api.github.com\/users\/ophelielacroix\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ophelielacroix\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ophelielacroix\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ophelielacroix\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ophelielacroix\/orgs","repos_url":"https:\/\/api.github.com\/users\/ophelielacroix\/repos","events_url":"https:\/\/api.github.com\/users\/ophelielacroix\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ophelielacroix\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607433234000,"updated_at":1607592926000,"closed_at":1607592926000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1302","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1302","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1302.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1302.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1302\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1301","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1301\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1301\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1301\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1301","id":759419945,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NDI5MjAy","number":1301,"title":"arxiv dataset added","user":{"login":"tanmoyio","id":33005287,"node_id":"MDQ6VXNlcjMzMDA1Mjg3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33005287?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tanmoyio","html_url":"https:\/\/github.com\/tanmoyio","followers_url":"https:\/\/api.github.com\/users\/tanmoyio\/followers","following_url":"https:\/\/api.github.com\/users\/tanmoyio\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tanmoyio\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tanmoyio\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tanmoyio\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tanmoyio\/orgs","repos_url":"https:\/\/api.github.com\/users\/tanmoyio\/repos","events_url":"https:\/\/api.github.com\/users\/tanmoyio\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tanmoyio\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Readme added\r\n","@lhoestq is it looking alright ? 
"],"created_at":1607431851000,"updated_at":1607537116000,"closed_at":1607537116000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1301","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1301","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1301.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1301.patch"},"body":"**adding arXiv dataset**: arXiv dataset and metadata of 1.7M+ scholarly papers across STEM\r\ndataset link: https:\/\/www.kaggle.com\/Cornell-University\/arxiv","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1301\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1300","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1300\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1300\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1300\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1300","id":759418122,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NDI3Njk1","number":1300,"title":"added dutch_social","user":{"login":"skyprince999","id":9033954,"node_id":"MDQ6VXNlcjkwMzM5NTQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9033954?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/skyprince999","html_url":"https:\/\/github.com\/skyprince999","followers_url":"https:\/\/api.github.com\/users\/skyprince999\/followers","following_url":"https:\/\/api.github.com\/users\/skyprince999\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/skyprince999\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/skyprince999\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/skyprince999\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/skyprince999\/orgs","repos_url":"https:\/\/api.github.com\/users\/skyprince999\/repos","events_url":"https:\/\/api.github.com\/users\/skyprince999\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/skyprince999\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Closing this since a new pull request has been made. "],"created_at":1607431670000,"updated_at":1607443745000,"closed_at":1607443745000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1300","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1300","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1300.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1300.patch"},"body":"WIP \r\nAs some tests did not clear! 
\ud83d\udc4e\ud83c\udffc ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1300\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1299","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1299\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1299\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1299\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1299","id":759414566,"node_id":"MDU6SXNzdWU3NTk0MTQ1NjY=","number":1299,"title":"can't load \"german_legal_entity_recognition\" dataset","user":{"login":"nataly-obr","id":59837137,"node_id":"MDQ6VXNlcjU5ODM3MTM3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59837137?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nataly-obr","html_url":"https:\/\/github.com\/nataly-obr","followers_url":"https:\/\/api.github.com\/users\/nataly-obr\/followers","following_url":"https:\/\/api.github.com\/users\/nataly-obr\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nataly-obr\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nataly-obr\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nataly-obr\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nataly-obr\/orgs","repos_url":"https:\/\/api.github.com\/users\/nataly-obr\/repos","events_url":"https:\/\/api.github.com\/users\/nataly-obr\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nataly-obr\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Please if you could tell me more about the error? \r\n\r\n1. Please check the directory you've been working on\r\n2. Check for any typos","> Please if you could tell me more about the error?\r\n> \r\n> 1. Please check the directory you've been working on\r\n> 2. Check for any typos\r\n\r\nError happens during the execution of this line:\r\ndataset = load_dataset(\"german_legal_entity_recognition\")\r\n\r\nAlso, when I try to open mentioned links via Opera I have errors \"404: Not Found\" and \"This XML file does not appear to have any style information associated with it. 
The document tree is shown below.\" respectively.","Hello @nataly-obr, the `german_legal_entity_recognition` dataset has not yet been released (it is part of the coming soon v2 release).\r\n\r\nYou can still access it now if you want, but you will need to install `datasets` via the master branch:\r\n`pip install git+https:\/\/github.com\/huggingface\/datasets.git@master`\r\n\r\nPlease let me know if it solves the issue :) "],"created_at":1607431321000,"updated_at":1608134593000,"closed_at":1608134593000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"FileNotFoundError: Couldn't find file locally at german_legal_entity_recognition\/german_legal_entity_recognition.py, or remotely at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.3\/datasets\/german_legal_entity_recognition\/german_legal_entity_recognition.py or https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/datasets\/datasets\/german_legal_entity_recognition\/german_legal_entity_recognition.py\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1299\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1298","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1298\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1298\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1298\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1298","id":759412451,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NDIyODQy","number":1298,"title":"Add OPUS Ted Talks 2013","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on 
master"],"created_at":1607431118000,"updated_at":1608137870000,"closed_at":1608137869000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1298","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1298","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1298.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1298.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1298\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1297","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1297\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1297\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1297\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1297","id":759404103,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0NDE1ODMx","number":1297,"title":"OPUS Ted Talks 2013","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607430339000,"updated_at":1607430950000,"closed_at":1607430950000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1297","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1297","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1297.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1297.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1297\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1296","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1296\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1296\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1296\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1296","id":759375292,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0MzkxNzQ1","number":1296,"title":"The Snips Built In Intents 2016 dataset.","user":{"login":"bduvenhage","id":8405335,"node_id":"MDQ6VXNlcjg0MDUzMzU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8405335?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bduvenhage","html_url":"https:\/\/github.com\/bduvenhage","followers_url":"https:\/\/api.github.com\/users\/bduvenhage\/followers","following_url":"https:\/\/api.github.com\/users\/bduvenhage\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bduvenhage\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bduvenhage\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bduvenhage\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bduvenhage\/orgs","repos_url":"https:\/\/api.github.com\/users\/bduvenhage\/repos","events_url":"https:\/\/api.github.com\/users\/bduvenhage\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bduvenhage\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["It is not clear how to automatically add the dummy data if the source data is a more complex json format. Should I manually take a fraction of the source data and include it as dummy data?","Will tag the dataset and update the dataset card."],"created_at":1607427610000,"updated_at":1607441272000,"closed_at":1607441272000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1296","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1296","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1296.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1296.patch"},"body":"This PR proposes to add the Snips.ai built in intents dataset. 
The first configuration added is for the intent labels only, but the dataset includes entity slots that may in future be added as alternate configurations.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1296\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1295","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1295\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1295\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1295\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1295","id":759375251,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0MzkxNzE1","number":1295,"title":"add hrenwac_para","user":{"login":"IvanZidov","id":11391118,"node_id":"MDQ6VXNlcjExMzkxMTE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11391118?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/IvanZidov","html_url":"https:\/\/github.com\/IvanZidov","followers_url":"https:\/\/api.github.com\/users\/IvanZidov\/followers","following_url":"https:\/\/api.github.com\/users\/IvanZidov\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/IvanZidov\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/IvanZidov\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/IvanZidov\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/IvanZidov\/orgs","repos_url":"https:\/\/api.github.com\/users\/IvanZidov\/repos","events_url":"https:\/\/api.github.com\/users\/IvanZidov\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/IvanZidov\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607427606000,"updated_at":1607708540000,"closed_at":1607708540000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1295","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1295","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1295.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1295.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1295\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1294","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1294\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1294\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1294\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1294","id":759365246,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0MzgzMjg5","number":1294,"title":"adding 
opus_euconst","user":{"login":"patil-suraj","id":27137566,"node_id":"MDQ6VXNlcjI3MTM3NTY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27137566?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patil-suraj","html_url":"https:\/\/github.com\/patil-suraj","followers_url":"https:\/\/api.github.com\/users\/patil-suraj\/followers","following_url":"https:\/\/api.github.com\/users\/patil-suraj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patil-suraj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patil-suraj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patil-suraj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patil-suraj\/orgs","repos_url":"https:\/\/api.github.com\/users\/patil-suraj\/repos","events_url":"https:\/\/api.github.com\/users\/patil-suraj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patil-suraj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607426656000,"updated_at":1607453060000,"closed_at":1607452883000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1294","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1294","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1294.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1294.patch"},"body":"Adding EUconst, a parallel corpus collected from the European Constitution.\r\n21 languages, 210 bitexts","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1294\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1293","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1293\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1293\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1293\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1293","id":759360113,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0Mzc4OTQ0","number":1293,"title":"add 
hrenwac_para","user":{"login":"ivan-zidov","id":51969305,"node_id":"MDQ6VXNlcjUxOTY5MzA1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/51969305?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ivan-zidov","html_url":"https:\/\/github.com\/ivan-zidov","followers_url":"https:\/\/api.github.com\/users\/ivan-zidov\/followers","following_url":"https:\/\/api.github.com\/users\/ivan-zidov\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ivan-zidov\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ivan-zidov\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ivan-zidov\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ivan-zidov\/orgs","repos_url":"https:\/\/api.github.com\/users\/ivan-zidov\/repos","events_url":"https:\/\/api.github.com\/users\/ivan-zidov\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ivan-zidov\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607426201000,"updated_at":1607427287000,"closed_at":1607427278000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1293","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1293","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1293.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1293.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1293\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1292","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1292\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1292\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1292\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1292","id":759354627,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0Mzc0MzQ3","number":1292,"title":"arXiv dataset 
added","user":{"login":"tanmoyio","id":33005287,"node_id":"MDQ6VXNlcjMzMDA1Mjg3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33005287?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tanmoyio","html_url":"https:\/\/github.com\/tanmoyio","followers_url":"https:\/\/api.github.com\/users\/tanmoyio\/followers","following_url":"https:\/\/api.github.com\/users\/tanmoyio\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tanmoyio\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tanmoyio\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tanmoyio\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tanmoyio\/orgs","repos_url":"https:\/\/api.github.com\/users\/tanmoyio\/repos","events_url":"https:\/\/api.github.com\/users\/tanmoyio\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tanmoyio\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607425708000,"updated_at":1607436133000,"closed_at":1607436133000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1292","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1292","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1292.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1292.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1292\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1291","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1291\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1291\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1291\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1291","id":759352810,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0MzcyNzk2","number":1291,"title":"adding pubmed_qa 
dataset","user":{"login":"tuner007","id":46425391,"node_id":"MDQ6VXNlcjQ2NDI1Mzkx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/46425391?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tuner007","html_url":"https:\/\/github.com\/tuner007","followers_url":"https:\/\/api.github.com\/users\/tuner007\/followers","following_url":"https:\/\/api.github.com\/users\/tuner007\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tuner007\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tuner007\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tuner007\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tuner007\/orgs","repos_url":"https:\/\/api.github.com\/users\/tuner007\/repos","events_url":"https:\/\/api.github.com\/users\/tuner007\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tuner007\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607425544000,"updated_at":1607504090000,"closed_at":1607504090000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1291","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1291","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1291.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1291.patch"},"body":"Pubmed QA dataset:\r\nPQA-L(abeled) 1k\r\nPQA-U(labeled) 61.2k\r\nPQA-A(rtifical labeled) 211.3k","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1291\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1290","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1290\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1290\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1290\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1290","id":759339989,"node_id":"MDU6SXNzdWU3NTkzMzk5ODk=","number":1290,"title":"imdb dataset cannot be downloaded","user":{"login":"rabeehk","id":6278280,"node_id":"MDQ6VXNlcjYyNzgyODA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6278280?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rabeehk","html_url":"https:\/\/github.com\/rabeehk","followers_url":"https:\/\/api.github.com\/users\/rabeehk\/followers","following_url":"https:\/\/api.github.com\/users\/rabeehk\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rabeehk\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rabeehk\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rabeehk\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rabeehk\/orgs","repos_url":"https:\/\/api.github.com\/users\/rabeehk\/repos","events_url":"https:\/\/api.github.com\/users\/rabeehk\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rabeehk\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @rabeehk , I am unable to reproduce your problem 
locally.\r\nCan you try emptying the cache (removing the content of `\/idiap\/temp\/rkarimi\/cache_home_1\/datasets`) and retry ?","Hi,\r\nthanks, I did remove the cache and still the same error here\r\n\r\n```\r\n>>> a = datasets.load_dataset(\"imdb\", split=\"train\")\r\ncahce dir \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\r\ncahce dir \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\r\nDownloading and preparing dataset imdb\/plain_text (download: 80.23 MiB, generated: 127.06 MiB, post-processed: Unknown size, total: 207.28 MiB) to \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\/imdb\/plain_text\/1.0.0\/90099cb476936b753383ba2ae6ab2eae419b2e87f71cd5189cb9c8e5814d12a3...\r\ncahce dir \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\r\ncahce dir \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\/downloads\r\nTraceback (most recent call last): \r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 611, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 476, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 558, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/utils\/info_utils.py\", line 73, in verify_splits\r\n raise NonMatchingSplitsSizesError(str(bad_splits))\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=4902716, num_examples=3680, dataset_name='imdb')}]\r\n```\r\n\r\ndatasets version\r\n```\r\ndatasets 1.1.2 <pip>\r\ntensorflow-datasets 4.1.0 <pip>\r\n\r\n```","resolved with moving to version 1.1.3"],"created_at":1607424456000,"updated_at":1608831489000,"closed_at":1608831489000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"hi\r\nplease find error below getting imdb train spli:\r\nthanks\r\n\r\n`\r\ndatasets.load_dataset>>> datasets.load_dataset(\"imdb\", split=\"train\")`\r\n\r\n\r\nerrors\r\n\r\n\r\n```\r\ncahce dir \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\r\ncahce dir \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\r\nDownloading and preparing dataset imdb\/plain_text (download: 80.23 MiB, generated: 127.06 MiB, post-processed: Unknown size, total: 207.28 MiB) to \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\/imdb\/plain_text\/1.0.0\/90099cb476936b753383ba2ae6ab2eae419b2e87f71cd5189cb9c8e5814d12a3...\r\ncahce dir \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\r\ncahce dir \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\/downloads\r\nTraceback (most recent call last): \r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 611, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 476, in download_and_prepare\r\n dl_manager=dl_manager, 
verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 558, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/utils\/info_utils.py\", line 73, in verify_splits\r\n raise NonMatchingSplitsSizesError(str(bad_splits))\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=7486451, num_examples=5628, dataset_name='imdb')}]\r\n\r\n\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1290\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1289","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1289\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1289\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1289\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1289","id":759333684,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0MzU2ODUw","number":1289,"title":"Jigsaw toxicity classification dataset added","user":{"login":"taihim","id":13764071,"node_id":"MDQ6VXNlcjEzNzY0MDcx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13764071?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/taihim","html_url":"https:\/\/github.com\/taihim","followers_url":"https:\/\/api.github.com\/users\/taihim\/followers","following_url":"https:\/\/api.github.com\/users\/taihim\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/taihim\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/taihim\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/taihim\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/taihim\/orgs","repos_url":"https:\/\/api.github.com\/users\/taihim\/repos","events_url":"https:\/\/api.github.com\/users\/taihim\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/taihim\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607423931000,"updated_at":1607440668000,"closed_at":1607440668000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1289","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1289","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1289.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1289.patch"},"body":"The dataset requires manually downloading data from Kaggle.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1289\/timeline","performed_via_github_app":null,"is_pull_request":true} 
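The imdb thread above (issue 1290) ends with the error being resolved by moving to `datasets` 1.1.3, after the maintainer suggested clearing the cache. A minimal sketch of how one might retry after upgrading is shown below; the cache path and the string `download_mode` value are illustrative assumptions, not something taken from the thread.

```python
# Hypothetical retry for the NonMatchingSplitsSizesError reported in issue 1290 above.
# Assumes datasets >= 1.1.3 (the version the thread says resolved the problem) and an
# example cache path; adjust the path to your own HF_DATASETS_CACHE location.
import shutil

import datasets

# Drop the stale cached copy of imdb so nothing from the failed download is reused.
shutil.rmtree("/path/to/cache_home/datasets/imdb", ignore_errors=True)

# Force a fresh download so the recorded split sizes match the expected ones again.
train = datasets.load_dataset(
    "imdb",
    split="train",
    download_mode="force_redownload",  # string form of GenerateMode.FORCE_REDOWNLOAD
)
print(len(train))  # the train split should contain 25000 examples
```

Alternatively, the `ignore_verifications=True` argument visible in the traceback skips the split-size checks entirely, though that hides the mismatch rather than fixing it.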
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1288","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1288\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1288\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1288\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1288","id":759309457,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0MzM2Mzgz","number":1288,"title":"Add CodeSearchNet corpus dataset","user":{"login":"SBrandeis","id":33657802,"node_id":"MDQ6VXNlcjMzNjU3ODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33657802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SBrandeis","html_url":"https:\/\/github.com\/SBrandeis","followers_url":"https:\/\/api.github.com\/users\/SBrandeis\/followers","following_url":"https:\/\/api.github.com\/users\/SBrandeis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SBrandeis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SBrandeis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SBrandeis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SBrandeis\/orgs","repos_url":"https:\/\/api.github.com\/users\/SBrandeis\/repos","events_url":"https:\/\/api.github.com\/users\/SBrandeis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SBrandeis\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq ready for a second review"],"created_at":1607422070000,"updated_at":1607533528000,"closed_at":1607533528000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1288","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1288","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1288.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1288.patch"},"body":"This PR adds the CodeSearchNet corpus proxy dataset for semantic code search: https:\/\/github.com\/github\/CodeSearchNet\r\nI have had a few issues, mentioned below. 
Would appreciate some help on how to solve them.\r\n\r\n## Issues generating dataset card\r\nIs there something wrong with my declaration of the dataset features ?\r\n```\r\nfeatures=datasets.Features(\r\n {\r\n \"repository_name\": datasets.Value(\"string\"),\r\n \"func_path_in_repository\": datasets.Value(\"string\"),\r\n \"func_name\": datasets.Value(\"string\"),\r\n \"whole_func_string\": datasets.Value(\"string\"),\r\n \"language\": datasets.Value(\"string\"),\r\n \"func_code_string\": datasets.Value(\"string\"),\r\n \"func_code_tokens\": datasets.Sequence(datasets.Value(\"string\")),\r\n \"func_documentation_string\": datasets.Value(\"string\"),\r\n \"func_documentation_tokens\": datasets.Sequence(datasets.Value(\"string\")),\r\n \"split_name\": datasets.Value(\"string\"),\r\n \"func_code_url\": datasets.Value(\"string\"),\r\n # TODO - add licensing info in the examples\r\n }\r\n),\r\n\r\n```\r\nWhen running the streamlite app for tagging the dataset on my machine, I get the following error :\r\n![image](https:\/\/user-images.githubusercontent.com\/33657802\/101469132-9ed12c80-3944-11eb-94ff-2d9c1d0ea080.png)\r\n\r\n\r\n## Issues with dummy data\r\nDue to the unusual structure of the data, I have been unable to generate dummy data automatically.\r\nI tried to generate it manually, but pytests fail when using the manually-generated dummy data ! Pytests work fine when using the real data.\r\n```\r\n============================================================================================== test session starts ==============================================================================================\r\nplatform linux -- Python 3.7.9, pytest-6.1.2, py-1.9.0, pluggy-0.13.1\r\nplugins: xdist-2.1.0, forked-1.3.0\r\ncollected 1 item\r\n\r\ntests\/test_dataset_common.py F [100%]\r\n\r\n=================================================================================================== FAILURES ====================================================================================================\r\n________________________________________________________________________ LocalDatasetTest.test_load_dataset_all_configs_code_search_net _________________________________________________________________________\r\nself = <tests.test_dataset_common.LocalDatasetTest testMethod=test_load_dataset_all_configs_code_search_net>, dataset_name = 'code_search_net'\r\n\r\n @slow\r\n def test_load_dataset_all_configs(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True)\r\n\r\ntests\/test_dataset_common.py:237:\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\ntests\/test_dataset_common.py:198: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n--------------------------------------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------------------------------------\r\nDownloading and preparing dataset code_search_net\/all (download: 1.00 MiB, generated: 1.00 MiB, post-processed: Unknown size, total: 2.00 MiB) to \/tmp\/tmppx78sj24\/code_search_net\/all\/1.0.0...\r\nDataset code_search_net downloaded and prepared to \/tmp\/tmppx78sj24\/code_search_net\/all\/1.0.0. 
Subsequent calls will reuse this data.\r\n--------------------------------------------------------------------------------------------- Captured stderr call ----------------------------------------------------------------------------------------------\r\n... (irrelevant info - Deprecation warnings)\r\n============================================================================================ short test summary info ============================================================================================\r\nFAILED tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_code_search_net - AssertionError: False is not true\r\n========================================================================================= 1 failed, 4 warnings in 3.00s ========================================================================================\r\n```\r\n\r\n\r\n## Note : Data structure in S3\r\nThe data is stored on S3, and organized by programming languages.\r\nIt is stored in the following repository structure:\r\n```\r\n.\r\n\u251c\u2500\u2500 <language_name> # e.g. python\r\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 final\r\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 jsonl\r\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 test\r\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 <language_name>_test_0.jsonl.gz\r\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 train\r\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u251c\u2500\u2500 <language_name>_train_0.jsonl.gz\r\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u251c\u2500\u2500 <language_name>_train_1.jsonl.gz\r\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u251c\u2500\u2500 ...\r\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 <language_name>_train_n.jsonl.gz\r\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 valid\r\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 <language_name>_valid_0.jsonl.gz\r\n\u251c\u2500\u2500 <language_name>_dedupe_definitions_v2.pkl\r\n\u2514\u2500\u2500 <language_name>_licenses.pkl\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1288\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1287","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1287\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1287\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1287\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1287","id":759300992,"node_id":"MDU6SXNzdWU3NTkzMDA5OTI=","number":1287,"title":"'iwslt2017-ro-nl', cannot be downloaded 
","user":{"login":"rabeehk","id":6278280,"node_id":"MDQ6VXNlcjYyNzgyODA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6278280?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rabeehk","html_url":"https:\/\/github.com\/rabeehk","followers_url":"https:\/\/api.github.com\/users\/rabeehk\/followers","following_url":"https:\/\/api.github.com\/users\/rabeehk\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rabeehk\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rabeehk\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rabeehk\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rabeehk\/orgs","repos_url":"https:\/\/api.github.com\/users\/rabeehk\/repos","events_url":"https:\/\/api.github.com\/users\/rabeehk\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rabeehk\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["the same issue with datasets.load_dataset(\"iwslt2017\", 'iwslt2017-en-nl', split=split), ..... ","even with setting master like the following command, still remains \r\n\r\ndatasets.load_dataset(\"iwslt2017\", 'iwslt2017-en-nl', split=\"train\", script_version=\"master\")\r\n","Looks like the data has been moved from its original location to google drive\r\n\r\nNew url: https:\/\/drive.google.com\/u\/0\/uc?id=12ycYSzLIG253AFN35Y6qoyf9wtkOjakp&export=download"],"created_at":1607421415000,"updated_at":1608056694000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI am trying \r\n`>>> datasets.load_dataset(\"iwslt2017\", 'iwslt2017-ro-nl', split=\"train\")`\r\n\r\ngetting this error thank you for your help\r\n```\r\ncahce dir \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\r\ncahce dir \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\r\nDownloading and preparing dataset iwsl_t217\/iwslt2017-ro-nl (download: 314.07 MiB, generated: 39.92 MiB, post-processed: Unknown size, total: 354.00 MiB) to \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\/iwsl_t217\/iwslt2017-ro-nl\/1.0.0\/cca6935a0851a8ceac1202a62c958738bdfa23c57a51bc52ac1c5ebd2aa172cd...\r\ncahce dir \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\r\ncahce dir \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\/downloads\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 611, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 476, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 531, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \" 
\/idiap\/home\/rkarimi\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/iwslt2017\/cca6935a0851a8ceac1202a62c958738bdfa23c57a51bc52ac1c5ebd2aa172cd\/iwslt2017.py\", line 118, in _split_generators\r\n dl_dir = dl_manager.download_and_extract(MULTI_URL)\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/utils\/download_manager.py\", line 254, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/utils\/download_manager.py\", line 179, in download\r\n num_proc=download_config.num_proc,\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/utils\/py_utils.py\", line 216, in map_nested\r\n return function(data_struct)\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 308, in cached_path\r\n use_etag=download_config.use_etag,\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 477, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https:\/\/wit3.fbk.eu\/archive\/2017-01-trnmted\/\/texts\/DeEnItNlRo\/DeEnItNlRo\/DeEnItNlRo-DeEnItNlRo.tgz\r\n\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1287\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1286","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1286\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1286\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1286\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1286","id":759291509,"node_id":"MDU6SXNzdWU3NTkyOTE1MDk=","number":1286,"title":"[libprotobuf FATAL \/sentencepiece\/src\/..\/third_party\/protobuf-lite\/google\/protobuf\/repeated_field.h:1505] CHECK failed: (index) >= (0): terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: (index) >= (0): Aborted","user":{"login":"rabeehk","id":6278280,"node_id":"MDQ6VXNlcjYyNzgyODA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6278280?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rabeehk","html_url":"https:\/\/github.com\/rabeehk","followers_url":"https:\/\/api.github.com\/users\/rabeehk\/followers","following_url":"https:\/\/api.github.com\/users\/rabeehk\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rabeehk\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rabeehk\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rabeehk\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rabeehk\/orgs","repos_url":"https:\/\/api.github.com\/users\/rabeehk\/repos","events_url":"https:\/\/api.github.com\/users\/rabeehk\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rabeehk\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I remember also 
getting the same issue for several other translation datasets, like all of the iwslt2017 group; this is blocking me and I really need to fix it, so I was wondering if you have an idea on this. @lhoestq thanks. ","maybe there is an empty line or something inside these datasets? could you tell me why this is happening? thanks ","I just checked and the wmt16 en-ro doesn't have empty lines\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nd = load_dataset(\"wmt16\", \"ro-en\", split=\"train\")\r\nlen(d) # 610320\r\nlen(d.filter(lambda x: len(x[\"translation\"][\"en\"].strip()) > 0)) # 610320\r\nlen(d.filter(lambda x: len(x[\"translation\"][\"ro\"].strip()) > 0)) # 610320\r\n# also tested for split=\"validation\" and \"test\"\r\n```\r\n\r\nCan you open an issue on the `transformers` repo? Also cc @sgugger ","Hi @lhoestq \r\nI am not really sure which part is causing this; to me it is more related to the datasets library, as this is happening for some of the datasets. Below please find the information to reproduce the bug; this is really blocking me and I appreciate your help.\r\n\r\n\r\n## Environment info\r\n- `transformers` version: 3.5.1\r\n- Platform: GPU\r\n- Python version: 3.7 \r\n- PyTorch version (GPU?): 1.0.4\r\n- Tensorflow version (GPU?): - \r\n- Using GPU in script?: - \r\n- Using distributed or parallel set-up in script?: - \r\n\r\n### Who can help\r\n tokenizers: @mfuntowicz\r\n Trainer: @sgugger\r\n TextGeneration: @TevenLeScao \r\n nlp datasets: [different repo](https:\/\/github.com\/huggingface\/nlp)\r\n rust tokenizers: [different repo](https:\/\/github.com\/huggingface\/tokenizers)\r\n examples\/seq2seq: @patil-suraj\r\n\r\n## Information\r\nHi\r\nI am testing a seq2seq model with T5 on different datasets and I always get the following bug; this is really blocking me as it fails for many datasets. Could you have a look please? 
thanks \r\n\r\n```\r\n[libprotobuf FATAL \/sentencepiece\/src\/..\/third_party\/protobuf-lite\/google\/protobuf\/repeated_field.h:1505] CHECK failed: (index) >= (0): \r\nterminate called after throwing an instance of 'google::protobuf::FatalException'\r\n what(): CHECK failed: (index) >= (0): \r\nAborted\r\n\r\n```\r\n\r\nTo reproduce the error please run on 1 GPU:\r\n```\r\ngit clone git@github.com:rabeehk\/debug-seq2seq.git\r\npython setup.py develop \r\ncd seq2seq \r\npython finetune_t5_trainer.py temp.json\r\n\r\n```\r\n\r\nFull output of the program:\r\n\r\n```\r\n(internship) rkarimi@vgnh008:\/idiap\/user\/rkarimi\/dev\/debug-seq2seq\/seq2seq$ python finetune_t5_trainer.py temp.json \r\n2020-12-12 15:38:16.234542: W tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\r\n2020-12-12 15:38:16.234598: I tensorflow\/stream_executor\/cuda\/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\n12\/12\/2020 15:38:32 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1, distributed training: False, 16-bits training: False\r\n12\/12\/2020 15:38:32 - INFO - __main__ - Training\/evaluation parameters Seq2SeqTrainingArguments(output_dir='outputs\/test', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=False, evaluate_during_training=False, evaluation_strategy=<EvaluationStrategy.NO: 'no'>, prediction_loss_only=False, per_device_train_batch_size=64, per_device_eval_batch_size=64, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=0.01, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=2, max_steps=-1, warmup_steps=500, logging_dir='runs\/Dec12_15-38-32_vgnh008', logging_first_step=True, logging_steps=200, save_steps=200, save_total_limit=1, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=200, dataloader_num_workers=0, past_index=-1, run_name='outputs\/test', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, label_smoothing=0.1, sortish_sampler=False, predict_with_generate=True, adafactor=False, encoder_layerdrop=None, decoder_layerdrop=None, dropout=None, attention_dropout=None, lr_scheduler='linear', fixed_length_emb=None, encoder_projection=None, encoder_pooling=None, projection_length=None, only_projection_bottleneck=False, concat_projection_token=False, gcs_bucket='ruse-xcloud-bucket', temperature=10, train_adapters=True, do_finetune=True, parametric_task_embedding=False, eval_output_dir='outputs\/finetune-adapter\/test-n-1-lr-1e-02-e-20')\r\nSome weights of T5ForConditionalGeneration were not initialized from the model checkpoint at t5-small and are newly initialized: ['encoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 
'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.0.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.0.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.0.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.0.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 
'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.1.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.1.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.1.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.1.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 
'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.2.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.2.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.2.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.2.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.3.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.3.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 
'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.3.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.3.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.4.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.4.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 
'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.4.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.4.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.5.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.5.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 
'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.5.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.5.layer.1.adapter_controller.post_layer_norm.bias', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.0.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.0.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.0.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.0.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 
'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.1.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.1.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.1.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.1.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 
'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.2.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.2.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.2.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.2.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 
'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.3.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.3.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.3.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.3.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 
'decoder.block.4.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.4.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.4.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.4.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.5.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.5.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 
'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.5.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.5.layer.2.adapter_controller.post_layer_norm.bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\ncahce dir \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\r\ncahce dir \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\r\n12\/12\/2020 15:38:44 - INFO - filelock - Lock 140079090376272 acquired on \/idiap\/home\/rkarimi\/.cache\/huggingface\/datasets\/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock\r\n12\/12\/2020 15:38:44 - INFO - filelock - Lock 140079090376272 released on \/idiap\/home\/rkarimi\/.cache\/huggingface\/datasets\/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock\r\nUsing custom data configuration default\r\n12\/12\/2020 15:38:44 - INFO - filelock - Lock 140082549312272 acquired on \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\n12\/12\/2020 15:38:44 - INFO - filelock - Lock 140082549312272 released on \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\n12\/12\/2020 15:38:44 - INFO - filelock - Lock 140082549365648 acquired on \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\nReusing dataset boolq (\/idiap\/temp\/rkarimi\/cache_home_1\/datasets\/boolq\/default\/0.1.0\/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534)\r\n12\/12\/2020 15:38:44 - INFO - filelock - Lock 140082549365648 released on \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\nLoading cached processed dataset at \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\/boolq\/default\/0.1.0\/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534\/cache-6810ece2a440c3be.arrow\r\ncahce dir \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\r\ncahce dir \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\r\n12\/12\/2020 15:38:45 - INFO - 
filelock - Lock 140082549560848 acquired on \/idiap\/home\/rkarimi\/.cache\/huggingface\/datasets\/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock\r\n12\/12\/2020 15:38:45 - INFO - filelock - Lock 140082549560848 released on \/idiap\/home\/rkarimi\/.cache\/huggingface\/datasets\/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock\r\nUsing custom data configuration default\r\n12\/12\/2020 15:38:45 - INFO - filelock - Lock 140082549560848 acquired on \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\n12\/12\/2020 15:38:45 - INFO - filelock - Lock 140082549560848 released on \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\n12\/12\/2020 15:38:45 - INFO - filelock - Lock 140082549365200 acquired on \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\nReusing dataset boolq (\/idiap\/temp\/rkarimi\/cache_home_1\/datasets\/boolq\/default\/0.1.0\/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534)\r\n12\/12\/2020 15:38:45 - INFO - filelock - Lock 140082549365200 released on \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\nLoading cached processed dataset at \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\/boolq\/default\/0.1.0\/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534\/cache-9a2822394a3a4e34.arrow\r\n12\/12\/2020 15:38:45 - INFO - seq2seq.metrics.metrics - selected metric <function build_compute_metrics_fn.<locals>.classification_metrics at 0x7f66b464cc20> for task boolq\r\n12\/12\/2020 15:38:45 - INFO - seq2seq.trainers.trainer - ***** Running training *****\r\n12\/12\/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Num examples = 10\r\n12\/12\/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Num Epochs = 2\r\n12\/12\/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Instantaneous batch size per device = 64\r\n12\/12\/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Total train batch size (w. 
parallel, distributed & accumulation) = 64\r\n12\/12\/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Gradient Accumulation steps = 1\r\n12\/12\/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Total optimization steps = 2\r\n{'loss': 529.79443359375, 'learning_rate': 2e-05, 'epoch': 1.0} \r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2\/2 [00:00<00:00, 2.37it\/s]12\/12\/2020 15:38:46 - INFO - seq2seq.trainers.trainer - \r\n\r\nTraining completed. Do not forget to share your model on huggingface.co\/models =)\r\n\r\n\r\n{'epoch': 2.0} \r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2\/2 [00:00<00:00, 2.43it\/s]\r\n12\/12\/2020 15:38:46 - INFO - seq2seq.trainers.trainer - Saving model checkpoint to outputs\/test\r\ncahce dir \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\r\ncahce dir \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\r\n12\/12\/2020 15:38:59 - INFO - filelock - Lock 140079084929680 acquired on \/idiap\/home\/rkarimi\/.cache\/huggingface\/datasets\/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock\r\n12\/12\/2020 15:38:59 - INFO - filelock - Lock 140079084929680 released on \/idiap\/home\/rkarimi\/.cache\/huggingface\/datasets\/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock\r\nUsing custom data configuration default\r\n12\/12\/2020 15:38:59 - INFO - filelock - Lock 140079084929360 acquired on 
\/idiap\/temp\/rkarimi\/cache_home_1\/datasets\/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\n12\/12\/2020 15:38:59 - INFO - filelock - Lock 140079084929360 released on \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\n12\/12\/2020 15:38:59 - INFO - filelock - Lock 140079085355216 acquired on \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\nReusing dataset boolq (\/idiap\/temp\/rkarimi\/cache_home_1\/datasets\/boolq\/default\/0.1.0\/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534)\r\n12\/12\/2020 15:38:59 - INFO - filelock - Lock 140079085355216 released on \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\nLoading cached processed dataset at \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\/boolq\/default\/0.1.0\/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534\/cache-164dd1d57e9fa69a.arrow\r\n12\/12\/2020 15:38:59 - INFO - seq2seq.metrics.metrics - selected metric <function build_compute_metrics_fn.<locals>.classification_metrics at 0x7f66b40c67a0> for task boolq\r\n12\/12\/2020 15:38:59 - INFO - seq2seq.trainers.trainer - ***** Running training *****\r\n12\/12\/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Num examples = 1\r\n12\/12\/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Num Epochs = 2\r\n12\/12\/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Instantaneous batch size per device = 64\r\n12\/12\/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 64\r\n12\/12\/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Gradient Accumulation steps = 1\r\n12\/12\/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Total optimization steps = 2\r\n12\/12\/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Continuing training from checkpoint, will skip to saved global_step\r\n12\/12\/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Continuing training from epoch 2\r\n12\/12\/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Continuing training from global step 2\r\n12\/12\/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Will skip the first 0 steps in the first epoch\r\n 0%| | 0\/2 [00:00<?, ?it\/s]12\/12\/2020 15:38:59 - INFO - seq2seq.trainers.trainer - \r\n\r\nTraining completed. 
Do not forget to share your model on huggingface.co\/models =)\r\n\r\n\r\n{'epoch': 2.0} \r\n 0%| | 0\/2 [00:00<?, ?it\/s]\r\n12\/12\/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Saving model checkpoint to outputs\/finetune-adapter\/test-n-1-lr-1e-02-e-20\/boolq\r\n12\/12\/2020 15:39:07 - INFO - seq2seq.utils.utils - using task specific params for boolq: {'max_length': 3}\r\n12\/12\/2020 15:39:07 - INFO - seq2seq.trainers.trainer - ***** Running Evaluation *****\r\n12\/12\/2020 15:39:07 - INFO - seq2seq.trainers.trainer - Num examples = 3269\r\n12\/12\/2020 15:39:07 - INFO - seq2seq.trainers.trainer - Batch size = 64\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 52\/52 [00:12<00:00, 4.86it\/s][libprotobuf FATAL \/sentencepiece\/src\/..\/third_party\/protobuf-lite\/google\/protobuf\/repeated_field.h:1505] CHECK failed: (index) >= (0): \r\nterminate called after throwing an instance of 'google::protobuf::FatalException'\r\n what(): CHECK failed: (index) >= (0): \r\nAborted\r\n```\r\n\r\n\r\n\r\n","solved see https:\/\/github.com\/huggingface\/transformers\/issues\/9079?_pjax=%23js-repo-pjax-container ","Hii please follow me"],"created_at":1607420655000,"updated_at":1607801782000,"closed_at":1607790156000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI am getting this error when evaluating on wmt16-ro-en using finetune_trainer.py of huggingface repo. 
thank for your help\r\n\r\n{'epoch': 20.0} \r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 20\/20 [00:16<00:00, 1.22it\/s]\r\n12\/08\/2020 10:41:19 - INFO - seq2seq.trainers.trainer - Saving model checkpoint to outputs\/experiment\/joint\/finetune\/lr-2e-5\r\n12\/08\/2020 10:41:24 - INFO - __main__ - {'wmt16-en-ro': Dataset(features: {'src_texts': Value(dtype='string', id=None), 'task': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 1998), 'qnli': Dataset(features: {'src_texts': Value(dtype='string', id=None), 'task': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 5462), 'scitail': Dataset(features: {'src_texts': Value(dtype='string', id=None), 'task': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 1303)}\r\n12\/08\/2020 10:41:24 - INFO - __main__ - *** Evaluate ***\r\n12\/08\/2020 10:41:24 - INFO - seq2seq.utils.utils - using task specific params for wmt16-en-ro: {'max_length': 300, 'num_beams': 4}\r\n12\/08\/2020 10:41:24 - INFO - seq2seq.trainers.trainer - ***** Running Evaluation *****\r\n12\/08\/2020 10:41:24 - INFO - seq2seq.trainers.trainer - Num examples = 1998\r\n12\/08\/2020 10:41:24 - INFO - seq2seq.trainers.trainer - Batch size = 64\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 32\/32 [00:37<00:00, 1.19s\/it][libprotobuf FATAL \/sentencepiece\/src\/..\/third_party\/protobuf-lite\/google\/protobuf\/repeated_field.h:1505] CHECK failed: (index) >= (0): \r\nterminate called after throwing an instance of 'google::protobuf::FatalException'\r\n what(): CHECK failed: (index) >= (0): \r\nAborted\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1286\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1285","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1285\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1285\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1285\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1285","id":759278758,"node_id":"MDU6SXNzdWU3NTkyNzg3NTg=","number":1285,"title":"boolq does not work ","user":{"login":"rabeehk","id":6278280,"node_id":"MDQ6VXNlcjYyNzgyODA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6278280?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rabeehk","html_url":"https:\/\/github.com\/rabeehk","followers_url":"https:\/\/api.github.com\/users\/rabeehk\/followers","following_url":"https:\/\/api.github.com\/users\/rabeehk\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rabeehk\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rabeehk\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rabeehk\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rabeehk\/orgs","repos_url":"https:\/\/api.github.com\/users\/rabeehk\/repos","events_url":"https:\/\/api.github.com\/users\/rabeehk\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rabeehk\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["here is the minimal code to reproduce\r\n\r\n`datasets>>> datasets.load_dataset(\"boolq\", \"train\")\r\n\r\nthe errors\r\n\r\n```\r\n`cahce dir \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\r\ncahce dir \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\r\nUsing custom data configuration train\r\nDownloading and preparing dataset boolq\/train (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\/boolq\/train\/0.1.0\/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11...\r\ncahce dir \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\r\ncahce dir \/idiap\/temp\/rkarimi\/cache_home_1\/datasets\/downloads\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 611, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 476, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 531, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \" \/idiap\/home\/rkarimi\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/boolq\/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11\/boolq.py\", line 74, in _split_generators\r\n downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy)\r\n File 
\"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/utils\/download_manager.py\", line 149, in download_custom\r\n custom_download(url, path)\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/tensorflow\/python\/lib\/io\/file_io.py\", line 516, in copy_v2\r\n compat.path_to_bytes(src), compat.path_to_bytes(dst), overwrite)\r\n\r\n\r\n\r\n```","This has been fixed by #881 \r\nthis fix will be available in the next release soon.\r\n\r\nIf you don't want to wait for the release you can actually load the latest version of boolq by specifying `script_version=\"master\"` in `load_dataset`","thank you this solved this issue, for now seems to work, thanks "],"created_at":1607419727000,"updated_at":1607420830000,"closed_at":1607420830000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI am getting this error when trying to load boolq, thanks for your help\r\n\r\nts_boolq_default_0.1.0_2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11.lock\r\nTraceback (most recent call last):\r\n File \"finetune_t5_trainer.py\", line 274, in <module>\r\n main()\r\n File \"finetune_t5_trainer.py\", line 147, in main\r\n for task in data_args.tasks]\r\n File \"finetune_t5_trainer.py\", line 147, in <listcomp>\r\n for task in data_args.tasks]\r\n File \"\/remote\/idiap.svm\/user.active\/rkarimi\/dev\/ruse\/seq2seq\/tasks\/tasks.py\", line 58, in get_dataset\r\n dataset = self.load_dataset(split=split)\r\n File \"\/remote\/idiap.svm\/user.active\/rkarimi\/dev\/ruse\/seq2seq\/tasks\/tasks.py\", line 54, in load_dataset\r\n return datasets.load_dataset(self.task.name, split=split)\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 611, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 476, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 531, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \" \/idiap\/home\/rkarimi\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/boolq\/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11\/boolq.py\", line 74, in _split_generators\r\n downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy)\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/utils\/download_manager.py\", line 149, in download_custom\r\n custom_download(url, path)\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/tensorflow\/python\/lib\/io\/file_io.py\", line 516, in copy_v2\r\n compat.path_to_bytes(src), compat.path_to_bytes(dst), overwrite)\r\ntensorflow.python.framework.errors_impl.AlreadyExistsError: file already exists\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1285\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1284","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1284\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1284\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1284\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1284","id":759269920,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0MzAzNDk0","number":1284,"title":"Update coqa dataset url","user":{"login":"ojasaar","id":73708394,"node_id":"MDQ6VXNlcjczNzA4Mzk0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/73708394?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ojasaar","html_url":"https:\/\/github.com\/ojasaar","followers_url":"https:\/\/api.github.com\/users\/ojasaar\/followers","following_url":"https:\/\/api.github.com\/users\/ojasaar\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ojasaar\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ojasaar\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ojasaar\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ojasaar\/orgs","repos_url":"https:\/\/api.github.com\/users\/ojasaar\/repos","events_url":"https:\/\/api.github.com\/users\/ojasaar\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ojasaar\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607418998000,"updated_at":1607451549000,"closed_at":1607451549000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1284","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1284","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1284.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1284.patch"},"body":"`datasets.stanford.edu` is invalid.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1284\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1283","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1283\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1283\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1283\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1283","id":759251457,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0Mjg4MDg2","number":1283,"title":"Add dutch book review 
dataset","user":{"login":"benjaminvdb","id":8875786,"node_id":"MDQ6VXNlcjg4NzU3ODY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8875786?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/benjaminvdb","html_url":"https:\/\/github.com\/benjaminvdb","followers_url":"https:\/\/api.github.com\/users\/benjaminvdb\/followers","following_url":"https:\/\/api.github.com\/users\/benjaminvdb\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/benjaminvdb\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/benjaminvdb\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/benjaminvdb\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/benjaminvdb\/orgs","repos_url":"https:\/\/api.github.com\/users\/benjaminvdb\/repos","events_url":"https:\/\/api.github.com\/users\/benjaminvdb\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/benjaminvdb\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> Really cool thanks !\r\n> \r\n> I left some (minor) comments\r\n\r\nThank you for your comments! \ud83d\udc4d I went ahead and improved the dataset card using your suggestions and some tweaks of my own. I hope you like it! \ud83d\ude04"],"created_at":1607417448000,"updated_at":1607545318000,"closed_at":1607534725000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1283","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1283","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1283.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1283.patch"},"body":"- Name: Dutch Book Review Dataset (DBRD)\r\n- Description: The DBRD (pronounced dee-bird) dataset contains over 110k book reviews along with associated binary sentiment polarity labels and is intended as a benchmark for sentiment classification in Dutch.\r\n- Paper: https:\/\/arxiv.org\/abs\/1910.00896\r\n- Data: https:\/\/github.com\/benjaminvdb\/DBRD\r\n- Motivation: A large (real-life) dataset of Dutch book reviews and sentiment polarity (positive\/negative), based on the associated rating.\r\n\r\nChecks\r\n- [x] Create the dataset script \/datasets\/dbrd\/dbrd.py using the template\r\n- [x] Fill the _DESCRIPTION and _CITATION variables\r\n- [x] Implement _info(), _split_generators() and _generate_examples()\r\n- [x] Make sure that the BUILDER_CONFIGS class attribute is filled with the different configurations of the dataset and that the BUILDER_CONFIG_CLASS is specified if there is a custom config class.\r\n- [x] Generate the metadata file dataset_infos.json for all configurations\r\n- [x] Generate the dummy data dummy_data.zip files to have the dataset script tested and that they don't weigh too much (<50KB)\r\n- [x] Add the dataset card README.md using the template : fill the tags and the various paragraphs\r\n- [x] Both tests for the real data and the dummy data pass.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1283\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1282","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1282\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1282\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1282\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1282","id":759208335,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0MjQ4NzI5","number":1282,"title":"add thaiqa_squad","user":{"login":"cstorm125","id":15519308,"node_id":"MDQ6VXNlcjE1NTE5MzA4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15519308?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cstorm125","html_url":"https:\/\/github.com\/cstorm125","followers_url":"https:\/\/api.github.com\/users\/cstorm125\/followers","following_url":"https:\/\/api.github.com\/users\/cstorm125\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cstorm125\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cstorm125\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cstorm125\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cstorm125\/orgs","repos_url":"https:\/\/api.github.com\/users\/cstorm125\/repos","events_url":"https:\/\/api.github.com\/users\/cstorm125\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cstorm125\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607415278000,"updated_at":1607452578000,"closed_at":1607452578000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1282","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1282","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1282.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1282.patch"},"body":"Example format is a little different from SQuAD since `thaiqa` always have one answer per question so I added a check to convert answers to lists if they are not already one to future-proof additional questions that might have multiple answers.\r\n\r\n`thaiqa_squad` is an open-domain, extractive question answering dataset (4,000 questions in `train` and 74 questions in `dev`) in [SQuAD](https:\/\/rajpurkar.github.io\/SQuAD-explorer\/) format, originally created by [NECTEC](https:\/\/www.nectec.or.th\/en\/) from Wikipedia articles and adapted to [SQuAD](https:\/\/rajpurkar.github.io\/SQuAD-explorer\/) format by [PyThaiNLP](https:\/\/github.com\/PyThaiNLP\/).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1282\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1281","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1281\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1281\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1281\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1281","id":759203317,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0MjQ0MTA1","number":1281,"title":"adding hybrid_qa","user":{"login":"patil-suraj","id":27137566,"node_id":"MDQ6VXNlcjI3MTM3NTY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27137566?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patil-suraj","html_url":"https:\/\/github.com\/patil-suraj","followers_url":"https:\/\/api.github.com\/users\/patil-suraj\/followers","following_url":"https:\/\/api.github.com\/users\/patil-suraj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patil-suraj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patil-suraj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patil-suraj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patil-suraj\/orgs","repos_url":"https:\/\/api.github.com\/users\/patil-suraj\/repos","events_url":"https:\/\/api.github.com\/users\/patil-suraj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patil-suraj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607415019000,"updated_at":1607450968000,"closed_at":1607450820000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1281","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1281","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1281.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1281.patch"},"body":"Adding HybridQA: A Dataset of Multi-Hop Question Answering over Tabular and Textual Data\r\nhttps:\/\/github.com\/wenhuchen\/HybridQA","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1281\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1280","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1280\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1280\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1280\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1280","id":759151028,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0MTk2MDc0","number":1280,"title":"disaster response messages 
dataset","user":{"login":"darshan-gandhi","id":44197177,"node_id":"MDQ6VXNlcjQ0MTk3MTc3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/44197177?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/darshan-gandhi","html_url":"https:\/\/github.com\/darshan-gandhi","followers_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/followers","following_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/orgs","repos_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/repos","events_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/darshan-gandhi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I have added the Readme.md as well, the PR is ready for review. \r\n\r\nThank you ","Hi @lhoestq I have updated the code and files. Please if you could check once.\r\n\r\nThank you"],"created_at":1607412436000,"updated_at":1607530917000,"closed_at":1607530917000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1280","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1280","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1280.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1280.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1280\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1279","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1279\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1279\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1279\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1279","id":759108726,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0MTU4OTY5","number":1279,"title":"added 
para_pat","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Updated with Translation feature type. Working on dataset tags and README","merging since the CI is fixed on master"],"created_at":1607408927000,"updated_at":1607953277000,"closed_at":1607953277000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1279","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1279","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1279.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1279.patch"},"body":"Dataset link : https:\/\/figshare.com\/articles\/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts\/12627632\r\nWorking on README.md currently","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1279\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1278","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1278\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1278\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1278\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1278","id":758988465,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0MDYwNDY5","number":1278,"title":"Craigslist 
bargains","user":{"login":"ZacharySBrown","id":7950786,"node_id":"MDQ6VXNlcjc5NTA3ODY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7950786?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ZacharySBrown","html_url":"https:\/\/github.com\/ZacharySBrown","followers_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/followers","following_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/orgs","repos_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/repos","events_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ZacharySBrown\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Seeing this in the CircleCI builds, this is what I was originally getting before I started messing around with the download URLS to try to fix this:\r\n\r\n`FileNotFoundError: [Errno 2] No such file or directory: '\/tmp\/tmpwvji917g\/extracted\/d6185140afb24ad8fee67392100a478269cba286b0d88915a137fdf88872de14\/dummy_data\/train__VARIABLE_MISUSE__SStuB.txt-00001-of-00300'`\r\n\r\nCould this be because of the files in my `dummy_data.zip`? I had to manually create it, and it looked like the test was looking for the following files, so I created the `.zip` with this structure:\r\n\r\n```\r\nArchive: dummy_data.zip\r\n creating: dummy_data\/\r\n inflating: dummy_data\/blobtest \r\n inflating: dummy_data\/parsed.jsontrain \r\n inflating: dummy_data\/parsed.jsonvalidation \r\n```","Going to close this out and link to a new (cleaner) PR"],"created_at":1607391955000,"updated_at":1607474775000,"closed_at":1607474775000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1278","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1278","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1278.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1278.patch"},"body":"`craigslist_bargains` dataset from [here](https:\/\/worksheets.codalab.org\/worksheets\/0x453913e76b65495d8b9730d41c7e0a0c\/)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1278\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1276","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1276\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1276\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1276\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1276","id":758965936,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0MDQyODYy","number":1276,"title":"add One Million Posts 
Corpus","user":{"login":"aseifert","id":4944799,"node_id":"MDQ6VXNlcjQ5NDQ3OTk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4944799?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/aseifert","html_url":"https:\/\/github.com\/aseifert","followers_url":"https:\/\/api.github.com\/users\/aseifert\/followers","following_url":"https:\/\/api.github.com\/users\/aseifert\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/aseifert\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/aseifert\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/aseifert\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/aseifert\/orgs","repos_url":"https:\/\/api.github.com\/users\/aseifert\/repos","events_url":"https:\/\/api.github.com\/users\/aseifert\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/aseifert\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on master"],"created_at":1607388608000,"updated_at":1607711298000,"closed_at":1607711298000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1276","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1276","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1276.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1276.patch"},"body":"- **Name:** One Million Posts Corpus\r\n- **Description:** The \u201cOne Million Posts\u201d corpus is an annotated data set consisting of user comments posted to an Austrian newspaper website (in German language).\r\n- **Paper:** https:\/\/dl.acm.org\/doi\/10.1145\/3077136.3080711\r\n- **Data:** https:\/\/github.com\/OFAI\/million-post-corpus\r\n- **Motivation:** Big German (real-life) dataset containing different annotations around forum moderation with expert annotations.\r\n\r\n### Checkbox\r\n\r\n- [X] Create the dataset script `\/datasets\/my_dataset\/my_dataset.py` using the template\r\n- [X] Fill the `_DESCRIPTION` and `_CITATION` variables\r\n- [X] Implement `_infos()`, `_split_generators()` and `_generate_examples()`\r\n- [X] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.\r\n- [X] Generate the metadata file `dataset_infos.json` for all configurations\r\n- [X] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)\r\n- [X] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs\r\n- [X] Both tests for the real data and the dummy data pass.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1276\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1275","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1275\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1275\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1275\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1275","id":758958066,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0MDM2NjIw","number":1275,"title":"Yoruba GV NER added","user":{"login":"dadelani","id":23586676,"node_id":"MDQ6VXNlcjIzNTg2Njc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23586676?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dadelani","html_url":"https:\/\/github.com\/dadelani","followers_url":"https:\/\/api.github.com\/users\/dadelani\/followers","following_url":"https:\/\/api.github.com\/users\/dadelani\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dadelani\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dadelani\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dadelani\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dadelani\/orgs","repos_url":"https:\/\/api.github.com\/users\/dadelani\/repos","events_url":"https:\/\/api.github.com\/users\/dadelani\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dadelani\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thank you. Okay, I will add the dataset card."],"created_at":1607387498000,"updated_at":1607469928000,"closed_at":1607469928000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1275","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1275","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1275.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1275.patch"},"body":"I just added Yoruba GV NER dataset from this paper https:\/\/www.aclweb.org\/anthology\/2020.lrec-1.335\/","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1275\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1274","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1274\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1274\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1274\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1274","id":758943174,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0MDI0MTQx","number":1274,"title":"oclar-dataset","user":{"login":"alaameloh","id":26907161,"node_id":"MDQ6VXNlcjI2OTA3MTYx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26907161?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/alaameloh","html_url":"https:\/\/github.com\/alaameloh","followers_url":"https:\/\/api.github.com\/users\/alaameloh\/followers","following_url":"https:\/\/api.github.com\/users\/alaameloh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/alaameloh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/alaameloh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/alaameloh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/alaameloh\/orgs","repos_url":"https:\/\/api.github.com\/users\/alaameloh\/repos","events_url":"https:\/\/api.github.com\/users\/alaameloh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/alaameloh\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on master"],"created_at":1607385405000,"updated_at":1607528168000,"closed_at":1607528168000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1274","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1274","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1274.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1274.patch"},"body":"Opinion Corpus for Lebanese Arabic Reviews (OCLAR) corpus is utilizable for Arabic sentiment classification on reviews, including hotels, restaurants, shops, and others. 
: [homepage](http:\/\/archive.ics.uci.edu\/ml\/datasets\/Opinion+Corpus+for+Lebanese+Arabic+Reviews+%28OCLAR%29#)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1274\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1273","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1273\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1273\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1273\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1273","id":758935768,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0MDE4MjQ2","number":1273,"title":"Created wiki_movies dataset.","user":{"login":"aclifton314","id":53267795,"node_id":"MDQ6VXNlcjUzMjY3Nzk1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/53267795?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/aclifton314","html_url":"https:\/\/github.com\/aclifton314","followers_url":"https:\/\/api.github.com\/users\/aclifton314\/followers","following_url":"https:\/\/api.github.com\/users\/aclifton314\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/aclifton314\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/aclifton314\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/aclifton314\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/aclifton314\/orgs","repos_url":"https:\/\/api.github.com\/users\/aclifton314\/repos","events_url":"https:\/\/api.github.com\/users\/aclifton314\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/aclifton314\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["looks like your PR includes changes about many other files than the ones for wiki_movies\r\n\r\nCan you create another branch and another PR please ?","I'm happy to. What's the best way to do that (sorry, I'm new to PRs etc.)?","Sure !\r\n\r\nFirst please save your new dataset files somewhere.\r\nThen you can do in this order:\r\n```\r\ngit checkout master\r\ngit fetch upstream\r\ngit rebase upstream\/master\r\ngit push\r\ngit checkout -b my-new-branch-name\r\n```\r\nThis will create a new branch from the updated master branch.\r\nThen you can re-add your files and commit + push them\r\n\r\nOnce it's done you should be able to create a new PR using your new branch :) ","Done!","closing in favor of #1485 "],"created_at":1607384334000,"updated_at":1607954209000,"closed_at":1607954209000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1273","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1273","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1273.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1273.patch"},"body":"First PR (ever). 
Hopefully this movies dataset is useful to others!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1273\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1272","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1272\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1272\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1272\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1272","id":758924960,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0MDA5MTk0","number":1272,"title":"Psc","user":{"login":"abecadel","id":1654113,"node_id":"MDQ6VXNlcjE2NTQxMTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1654113?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abecadel","html_url":"https:\/\/github.com\/abecadel","followers_url":"https:\/\/api.github.com\/users\/abecadel\/followers","following_url":"https:\/\/api.github.com\/users\/abecadel\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abecadel\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abecadel\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abecadel\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abecadel\/orgs","repos_url":"https:\/\/api.github.com\/users\/abecadel\/repos","events_url":"https:\/\/api.github.com\/users\/abecadel\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abecadel\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607383176000,"updated_at":1607384885000,"closed_at":1607384868000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1272","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1272","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1272.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1272.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1272\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1271","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1271\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1271\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1271\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1271","id":758924203,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0MDA4NTg4","number":1271,"title":"SMS Spam 
Dataset","user":{"login":"czabo","id":75574105,"node_id":"MDQ6VXNlcjc1NTc0MTA1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/75574105?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/czabo","html_url":"https:\/\/github.com\/czabo","followers_url":"https:\/\/api.github.com\/users\/czabo\/followers","following_url":"https:\/\/api.github.com\/users\/czabo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/czabo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/czabo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/czabo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/czabo\/orgs","repos_url":"https:\/\/api.github.com\/users\/czabo\/repos","events_url":"https:\/\/api.github.com\/users\/czabo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/czabo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607383086000,"updated_at":1607449339000,"closed_at":1607449339000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1271","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1271","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1271.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1271.patch"},"body":"Hi :) I added this [SMS Spam Dataset](http:\/\/archive.ics.uci.edu\/ml\/datasets\/SMS+Spam+Collection)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1271\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1270","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1270\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1270\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1270\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1270","id":758917216,"node_id":"MDExOlB1bGxSZXF1ZXN0NTM0MDAyODIz","number":1270,"title":"add DFKI SmartData 
Corpus","user":{"login":"aseifert","id":4944799,"node_id":"MDQ6VXNlcjQ5NDQ3OTk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4944799?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/aseifert","html_url":"https:\/\/github.com\/aseifert","followers_url":"https:\/\/api.github.com\/users\/aseifert\/followers","following_url":"https:\/\/api.github.com\/users\/aseifert\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/aseifert\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/aseifert\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/aseifert\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/aseifert\/orgs","repos_url":"https:\/\/api.github.com\/users\/aseifert\/repos","events_url":"https:\/\/api.github.com\/users\/aseifert\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/aseifert\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607382228000,"updated_at":1607449283000,"closed_at":1607449283000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1270","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1270","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1270.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1270.patch"},"body":"- **Name:** DFKI SmartData Corpus\r\n- **Description:** DFKI SmartData Corpus is a dataset of 2598 German-language documents which has been annotated with fine-grained geo-entities, such as streets, stops and routes, as well as standard named entity types.\r\n- **Paper:** https:\/\/www.dfki.de\/fileadmin\/user_upload\/import\/9427_lrec_smartdata_corpus.pdf\r\n- **Data:** https:\/\/github.com\/DFKI-NLP\/smartdata-corpus\r\n- **Motivation:** Contains fine-grained NER labels for German.\r\n\r\n### Checkbox\r\n\r\n- [X] Create the dataset script `\/datasets\/my_dataset\/my_dataset.py` using the template\r\n- [X] Fill the `_DESCRIPTION` and `_CITATION` variables\r\n- [X] Implement `_infos()`, `_split_generators()` and `_generate_examples()`\r\n- [X] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.\r\n- [X] Generate the metadata file `dataset_infos.json` for all configurations\r\n- [X] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)\r\n- [X] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs\r\n- [X] Both tests for the real data and the dummy data pass.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1270\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1269","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1269\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1269\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1269\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1269","id":758886174,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzOTc3MTE2","number":1269,"title":"Adding OneStopEnglish corpus dataset","user":{"login":"purvimisal","id":22298787,"node_id":"MDQ6VXNlcjIyMjk4Nzg3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22298787?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/purvimisal","html_url":"https:\/\/github.com\/purvimisal","followers_url":"https:\/\/api.github.com\/users\/purvimisal\/followers","following_url":"https:\/\/api.github.com\/users\/purvimisal\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/purvimisal\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/purvimisal\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/purvimisal\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/purvimisal\/orgs","repos_url":"https:\/\/api.github.com\/users\/purvimisal\/repos","events_url":"https:\/\/api.github.com\/users\/purvimisal\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/purvimisal\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq, thanks for the review.\r\nI have made all the changes, PTAL! :) "],"created_at":1607378711000,"updated_at":1607539418000,"closed_at":1607528033000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1269","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1269","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1269.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1269.patch"},"body":"This PR adds OneStopEnglish Corpus containing texts classified into reading levels (elementary, intermediate, advance) for automatic readability assessment and text simplification. 
\r\n\r\nLink to the paper: https:\/\/www.aclweb.org\/anthology\/W18-0535.pdf ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1269\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1268","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1268\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1268\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1268\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1268","id":758871252,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzOTY0OTQ4","number":1268,"title":"new pr for Turkish NER","user":{"login":"merveenoyan","id":53175384,"node_id":"MDQ6VXNlcjUzMTc1Mzg0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/53175384?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/merveenoyan","html_url":"https:\/\/github.com\/merveenoyan","followers_url":"https:\/\/api.github.com\/users\/merveenoyan\/followers","following_url":"https:\/\/api.github.com\/users\/merveenoyan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/merveenoyan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/merveenoyan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/merveenoyan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/merveenoyan\/orgs","repos_url":"https:\/\/api.github.com\/users\/merveenoyan\/repos","events_url":"https:\/\/api.github.com\/users\/merveenoyan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/merveenoyan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Can you run `make style` to fix the code format ?\r\n\r\nAlso it looks like the file `file_downloaded\/TWNERTC_TC_Coarse Grained NER_DomainIndependent_NoiseReduction.zip\/TWNERTC_TC_Coarse Grained NER_DomainIndependent_NoiseReduction.DUMP` is missing inside the dummy_data.zip\r\n\r\n\r\n(note that `TWNERTC_TC_Coarse Grained NER_DomainIndependent_NoiseReduction.zip` is a directory name, not an actual zip file)","Hi Quentin, thank you for your patience with me. I've fixed the preprocessing pipeline, got this very weird error that Yacine told me to push. 
I've pushed it and after I'll find out that it will work, I will have my final pr on styling.","looks like you removed the dataset script file in your latest commit, is it expected ?"],"created_at":1607377226000,"updated_at":1607521505000,"closed_at":1607521505000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1268","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1268","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1268.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1268.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1268\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1267","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1267\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1267\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1267\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1267","id":758826568,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzOTMwNzU2","number":1267,"title":"Has part","user":{"login":"jeromeku","id":2455711,"node_id":"MDQ6VXNlcjI0NTU3MTE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2455711?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jeromeku","html_url":"https:\/\/github.com\/jeromeku","followers_url":"https:\/\/api.github.com\/users\/jeromeku\/followers","following_url":"https:\/\/api.github.com\/users\/jeromeku\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jeromeku\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jeromeku\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jeromeku\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jeromeku\/orgs","repos_url":"https:\/\/api.github.com\/users\/jeromeku\/repos","events_url":"https:\/\/api.github.com\/users\/jeromeku\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jeromeku\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on master"],"created_at":1607373123000,"updated_at":1607711142000,"closed_at":1607711142000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1267","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1267","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1267.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1267.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1267\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1266","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1266\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1266\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1266\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1266","id":758704178,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzODMyNTQ1","number":1266,"title":"removing unzipped hansards dummy data","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607362276000,"updated_at":1607362349000,"closed_at":1607362349000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1266","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1266","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1266.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1266.patch"},"body":"which were added by mistake","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1266\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1265","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1265\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1265\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1265\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1265","id":758687223,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzODE4NjY0","number":1265,"title":"Add CovidQA 
dataset","user":{"login":"olinguyen","id":4341867,"node_id":"MDQ6VXNlcjQzNDE4Njc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4341867?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/olinguyen","html_url":"https:\/\/github.com\/olinguyen","followers_url":"https:\/\/api.github.com\/users\/olinguyen\/followers","following_url":"https:\/\/api.github.com\/users\/olinguyen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/olinguyen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/olinguyen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/olinguyen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/olinguyen\/orgs","repos_url":"https:\/\/api.github.com\/users\/olinguyen\/repos","events_url":"https:\/\/api.github.com\/users\/olinguyen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/olinguyen\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["It seems to share the same name as this dataset: https:\/\/openreview.net\/forum?id=JENSKEEzsoU","> It seems to share the same name as this dataset: https:\/\/openreview.net\/forum?id=JENSKEEzsoU\r\n\r\nyou're right it can be confusing. I'll add the organization\/research group for clarity: `covid_qa_castorini`. I added the dataset you shared as `covid_qa_deepset` in another PR (#1182) ","Thanks for avoiding the name collision !"],"created_at":1607360811000,"updated_at":1607446946000,"closed_at":1607446946000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1265","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1265","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1265.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1265.patch"},"body":"This PR adds CovidQA, a question answering dataset specifically designed for COVID-19, built by hand from knowledge gathered from Kaggle\u2019s COVID-19 Open Research Dataset Challenge.\r\n\r\nLink to the paper: https:\/\/arxiv.org\/pdf\/2004.11339.pdf\r\nLink to the homepage: https:\/\/covidqa.ai","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1265\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1264","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1264\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1264\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1264\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1264","id":758686474,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzODE4MDM2","number":1264,"title":"enriched webnlg dataset 
rebase","user":{"login":"TevenLeScao","id":26709476,"node_id":"MDQ6VXNlcjI2NzA5NDc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26709476?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/TevenLeScao","html_url":"https:\/\/github.com\/TevenLeScao","followers_url":"https:\/\/api.github.com\/users\/TevenLeScao\/followers","following_url":"https:\/\/api.github.com\/users\/TevenLeScao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/TevenLeScao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/TevenLeScao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/TevenLeScao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/TevenLeScao\/orgs","repos_url":"https:\/\/api.github.com\/users\/TevenLeScao\/repos","events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I've removed the `en` within `de` and reciprocally; but I don't think I will be able to thin it more than this. (Edit: ignore the close, I missclicked !)"],"created_at":1607360745000,"updated_at":1607533229000,"closed_at":1607533227000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1264","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1264","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1264.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1264.patch"},"body":"Rebase of #1206 !","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1264\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1263","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1263\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1263\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1263\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1263","id":758663787,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzNzk5NzU5","number":1263,"title":"Added kannada news headlines classification dataset. 
","user":{"login":"vrindaprabhu","id":16264631,"node_id":"MDQ6VXNlcjE2MjY0NjMx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16264631?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vrindaprabhu","html_url":"https:\/\/github.com\/vrindaprabhu","followers_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/followers","following_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/orgs","repos_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/repos","events_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vrindaprabhu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! Let me know if any more comments! Will fix it! :-)"],"created_at":1607358937000,"updated_at":1607610655000,"closed_at":1607536891000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1263","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1263","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1263.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1263.patch"},"body":"Manual Download of a kaggle dataset. Mostly followed process as ms_terms.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1263\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1262","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1262\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1262\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1262\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1262","id":758637124,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzNzc3OTcy","number":1262,"title":"Adding msr_genomics_kbcomp 
dataset","user":{"login":"manandey","id":6687858,"node_id":"MDQ6VXNlcjY2ODc4NTg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6687858?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/manandey","html_url":"https:\/\/github.com\/manandey","followers_url":"https:\/\/api.github.com\/users\/manandey\/followers","following_url":"https:\/\/api.github.com\/users\/manandey\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/manandey\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/manandey\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/manandey\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/manandey\/orgs","repos_url":"https:\/\/api.github.com\/users\/manandey\/repos","events_url":"https:\/\/api.github.com\/users\/manandey\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/manandey\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607356890000,"updated_at":1607450935000,"closed_at":1607450927000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1262","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1262","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1262.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1262.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1262\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1261","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1261\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1261\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1261\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1261","id":758626112,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzNzY4OTgy","number":1261,"title":"Add Google Sentence Compression 
dataset","user":{"login":"mattbui","id":46804938,"node_id":"MDQ6VXNlcjQ2ODA0OTM4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/46804938?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mattbui","html_url":"https:\/\/github.com\/mattbui","followers_url":"https:\/\/api.github.com\/users\/mattbui\/followers","following_url":"https:\/\/api.github.com\/users\/mattbui\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mattbui\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mattbui\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mattbui\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mattbui\/orgs","repos_url":"https:\/\/api.github.com\/users\/mattbui\/repos","events_url":"https:\/\/api.github.com\/users\/mattbui\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mattbui\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607356063000,"updated_at":1607446919000,"closed_at":1607446919000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1261","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1261","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1261.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1261.patch"},"body":"For more information: https:\/\/www.aclweb.org\/anthology\/D13-1155.pdf","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1261\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1260","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1260\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1260\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1260\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1260","id":758601828,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzNzQ4ODM3","number":1260,"title":"Added NewsPH Raw Dataset","user":{"login":"jcblaisecruz02","id":24757547,"node_id":"MDQ6VXNlcjI0NzU3NTQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24757547?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jcblaisecruz02","html_url":"https:\/\/github.com\/jcblaisecruz02","followers_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/followers","following_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/orgs","repos_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/repos","events_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["looks like this PR 
has changes to many files other than the ones for `NewsPH`\r\n\r\nCan you create another branch and another PR please ?"],"created_at":1607354273000,"updated_at":1607444835000,"closed_at":1607444835000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1260","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1260","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1260.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1260.patch"},"body":"Added the raw version of the NewsPH dataset, which was used to automatically generate the NewsPH-NLI corpus. Dataset of news articles in Filipino from mainstream Philippine news sites on the internet. Can be used as a language modeling dataset or to reproduce the NewsPH-NLI dataset.\r\n\r\nPaper: https:\/\/arxiv.org\/abs\/2010.11574\r\nRepo: https:\/\/github.com\/jcblaisecruz02\/Filipino-Text-Benchmarks","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1260\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1259","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1259\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1259\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1259\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1259","id":758565320,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzNzE4NjMz","number":1259,"title":"Add KorQPair dataset","user":{"login":"jaketae","id":25360440,"node_id":"MDQ6VXNlcjI1MzYwNDQw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25360440?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jaketae","html_url":"https:\/\/github.com\/jaketae","followers_url":"https:\/\/api.github.com\/users\/jaketae\/followers","following_url":"https:\/\/api.github.com\/users\/jaketae\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jaketae\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jaketae\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jaketae\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jaketae\/orgs","repos_url":"https:\/\/api.github.com\/users\/jaketae\/repos","events_url":"https:\/\/api.github.com\/users\/jaketae\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jaketae\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["dummy data is missing","Hey @cceyda, thanks for pointing that out. I thought I'd added it, but seems like that wasn't the case. 
Just pushed a new commit with the dummy data."],"created_at":1607351637000,"updated_at":1607440301000,"closed_at":1607440301000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1259","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1259","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1259.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1259.patch"},"body":"This PR adds a [Korean paired question dataset](https:\/\/github.com\/songys\/Question_pair) containing labels indicating whether two questions in a given pair are semantically identical. This dataset was used to evaluate the performance of [KoGPT2](https:\/\/github.com\/SKT-AI\/KoGPT2#subtask-evaluations) on a phrase detection downstream task. ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1259\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1258","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1258\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1258\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1258\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1258","id":758557169,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzNzExOTQz","number":1258,"title":"arXiv dataset added","user":{"login":"tanmoyio","id":33005287,"node_id":"MDQ6VXNlcjMzMDA1Mjg3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33005287?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tanmoyio","html_url":"https:\/\/github.com\/tanmoyio","followers_url":"https:\/\/api.github.com\/users\/tanmoyio\/followers","following_url":"https:\/\/api.github.com\/users\/tanmoyio\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tanmoyio\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tanmoyio\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tanmoyio\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tanmoyio\/orgs","repos_url":"https:\/\/api.github.com\/users\/tanmoyio\/repos","events_url":"https:\/\/api.github.com\/users\/tanmoyio\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tanmoyio\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Need help"],"created_at":1607351013000,"updated_at":1607436435000,"closed_at":1607436435000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1258","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1258","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1258.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1258.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1258\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1257","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1257\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1257\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1257\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1257","id":758550490,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzNzA2NDQy","number":1257,"title":"Add Swahili news classification dataset","user":{"login":"yvonnegitau","id":7923902,"node_id":"MDQ6VXNlcjc5MjM5MDI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7923902?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yvonnegitau","html_url":"https:\/\/github.com\/yvonnegitau","followers_url":"https:\/\/api.github.com\/users\/yvonnegitau\/followers","following_url":"https:\/\/api.github.com\/users\/yvonnegitau\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yvonnegitau\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yvonnegitau\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yvonnegitau\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yvonnegitau\/orgs","repos_url":"https:\/\/api.github.com\/users\/yvonnegitau\/repos","events_url":"https:\/\/api.github.com\/users\/yvonnegitau\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yvonnegitau\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607350513000,"updated_at":1607438659000,"closed_at":1607438659000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1257","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1257","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1257.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1257.patch"},"body":"Add Swahili news classification dataset","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1257\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1256","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1256\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1256\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1256\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1256","id":758531980,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzNjkwMTQ2","number":1256,"title":"adding LiMiT 
dataset","user":{"login":"patil-suraj","id":27137566,"node_id":"MDQ6VXNlcjI3MTM3NTY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27137566?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patil-suraj","html_url":"https:\/\/github.com\/patil-suraj","followers_url":"https:\/\/api.github.com\/users\/patil-suraj\/followers","following_url":"https:\/\/api.github.com\/users\/patil-suraj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patil-suraj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patil-suraj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patil-suraj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patil-suraj\/orgs","repos_url":"https:\/\/api.github.com\/users\/patil-suraj\/repos","events_url":"https:\/\/api.github.com\/users\/patil-suraj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patil-suraj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607349641000,"updated_at":1607439508000,"closed_at":1607438571000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1256","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1256","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1256.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1256.patch"},"body":"Adding LiMiT: The Literal Motion in Text Dataset\r\nhttps:\/\/github.com\/ilmgut\/limit_dataset","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1256\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1255","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1255\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1255\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1255\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1255","id":758530243,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzNjg4Njg2","number":1255,"title":"[doc] nlp\/viewer 
\u27a1\ufe0fdatasets\/viewer","user":{"login":"julien-c","id":326577,"node_id":"MDQ6VXNlcjMyNjU3Nw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/326577?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/julien-c","html_url":"https:\/\/github.com\/julien-c","followers_url":"https:\/\/api.github.com\/users\/julien-c\/followers","following_url":"https:\/\/api.github.com\/users\/julien-c\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/julien-c\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/julien-c\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/julien-c\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/julien-c\/orgs","repos_url":"https:\/\/api.github.com\/users\/julien-c\/repos","events_url":"https:\/\/api.github.com\/users\/julien-c\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/julien-c\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607349521000,"updated_at":1607447874000,"closed_at":1607447873000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1255","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1255","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1255.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1255.patch"},"body":"cc @srush","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1255\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1254","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1254\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1254\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1254\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1254","id":758518774,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzNjc5MTYy","number":1254,"title":"Added WikiText-TL-39","user":{"login":"jcblaisecruz02","id":24757547,"node_id":"MDQ6VXNlcjI0NzU3NTQ3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24757547?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jcblaisecruz02","html_url":"https:\/\/github.com\/jcblaisecruz02","followers_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/followers","following_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/orgs","repos_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/repos","events_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jcblaisecruz02\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["looks like this PR also includes changes about another 
dataset `covid_qa_deepset`\r\n\r\nCould you create another branch and another PR that only includes the changes for the wikitext-tl-39 dataset ?"],"created_at":1607348628000,"updated_at":1607443258000,"closed_at":1607443258000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1254","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1254","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1254.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1254.patch"},"body":"This PR adds the WikiText-TL-39 Filipino Language Modeling dataset.\r\n\r\nPaper: https:\/\/arxiv.org\/abs\/1907.00409\r\nRepo: https:\/\/github.com\/jcblaisecruz02\/Filipino-Text-Benchmarks","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1254\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1253","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1253\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1253\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1253\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1253","id":758517391,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzNjc4MDE1","number":1253,"title":"add thainer","user":{"login":"cstorm125","id":15519308,"node_id":"MDQ6VXNlcjE1NTE5MzA4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15519308?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cstorm125","html_url":"https:\/\/github.com\/cstorm125","followers_url":"https:\/\/api.github.com\/users\/cstorm125\/followers","following_url":"https:\/\/api.github.com\/users\/cstorm125\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cstorm125\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cstorm125\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cstorm125\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cstorm125\/orgs","repos_url":"https:\/\/api.github.com\/users\/cstorm125\/repos","events_url":"https:\/\/api.github.com\/users\/cstorm125\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cstorm125\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607348514000,"updated_at":1607438689000,"closed_at":1607438689000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1253","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1253","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1253.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1253.patch"},"body":"ThaiNER (v1.3) is a 6,456-sentence named entity recognition dataset created from expanding the 2,258-sentence\r\n[unnamed dataset](http:\/\/pioneer.chula.ac.th\/~awirote\/Data-Nutcha.zip) by\r\n[Tirasaroj and Aroonmanakun (2012)](http:\/\/pioneer.chula.ac.th\/~awirote\/publications\/).\r\nIt is used to train NER taggers in [PyThaiNLP](https:\/\/github.com\/PyThaiNLP\/pythainlp).\r\nThe NER tags are annotated by [Tirasaroj and Aroonmanakun 
(2012)]((http:\/\/pioneer.chula.ac.th\/~awirote\/publications\/))\r\nfor 2,258 sentences and the rest by [@wannaphong](https:\/\/github.com\/wannaphong\/).\r\nThe POS tags are done by [PyThaiNLP](https:\/\/github.com\/PyThaiNLP\/pythainlp)'s `perceptron` engine trained on `orchid_ud`.\r\n[@wannaphong](https:\/\/github.com\/wannaphong\/) is now the only maintainer of this dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1253\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1252","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1252\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1252\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1252\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1252","id":758511388,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzNjczMDcx","number":1252,"title":"Add Naver sentiment movie corpus","user":{"login":"jaketae","id":25360440,"node_id":"MDQ6VXNlcjI1MzYwNDQw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25360440?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jaketae","html_url":"https:\/\/github.com\/jaketae","followers_url":"https:\/\/api.github.com\/users\/jaketae\/followers","following_url":"https:\/\/api.github.com\/users\/jaketae\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jaketae\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jaketae\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jaketae\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jaketae\/orgs","repos_url":"https:\/\/api.github.com\/users\/jaketae\/repos","events_url":"https:\/\/api.github.com\/users\/jaketae\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jaketae\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607348025000,"updated_at":1607437953000,"closed_at":1607437297000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1252","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1252","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1252.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1252.patch"},"body":"Supersedes #1168 \r\n\r\n> This PR adds the [Naver sentiment movie corpus](https:\/\/github.com\/e9t\/nsmc), a dataset containing Korean movie reviews from Naver, the most commonly used search engine in Korea. This dataset is often used to benchmark models on Korean NLP tasks, as seen in [this paper](https:\/\/www.aclweb.org\/anthology\/2020.lrec-1.199.pdf). 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1252\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1251","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1251\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1251\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1251\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1251","id":758503689,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzNjY2NTg2","number":1251,"title":"Add Wiki Atomic Edits Dataset (43M edits)","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq fixed :)"],"created_at":1607347388000,"updated_at":1607940301000,"closed_at":1607940300000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1251","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1251","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1251.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1251.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1251\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1250","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1250\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1250\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1250\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1250","id":758491704,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzNjU2NTI4","number":1250,"title":"added Nergrit 
dataset","user":{"login":"cahya-wirawan","id":7669893,"node_id":"MDQ6VXNlcjc2Njk4OTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7669893?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cahya-wirawan","html_url":"https:\/\/github.com\/cahya-wirawan","followers_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/followers","following_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/orgs","repos_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/repos","events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cahya-wirawan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607346372000,"updated_at":1607438009000,"closed_at":1607438009000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1250","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1250","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1250.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1250.patch"},"body":"Nergrit Corpus is a dataset collection for Indonesian Named Entity Recognition, Statement Extraction, and Sentiment Analysis. This PR is only for the Named Entity Recognition.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1250\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1249","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1249\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1249\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1249\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1249","id":758472863,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzNjQwNjA1","number":1249,"title":"Add doc2dial 
dataset","user":{"login":"KMFODA","id":35491698,"node_id":"MDQ6VXNlcjM1NDkxNjk4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35491698?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/KMFODA","html_url":"https:\/\/github.com\/KMFODA","followers_url":"https:\/\/api.github.com\/users\/KMFODA\/followers","following_url":"https:\/\/api.github.com\/users\/KMFODA\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/KMFODA\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/KMFODA\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/KMFODA\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/KMFODA\/orgs","repos_url":"https:\/\/api.github.com\/users\/KMFODA\/repos","events_url":"https:\/\/api.github.com\/users\/KMFODA\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/KMFODA\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["It not always practical to use nested `Sequence`. If you have troubles with sequence you can use lists instead. \r\n\r\nFor example\r\n```python\r\n\r\nfeatures=datasets.Features(\r\n {\r\n \"dial_id\": datasets.Value(\"string\"),\r\n \"doc_id\": datasets.Value(\"string\"),\r\n \"domain\": datasets.Value(\"string\"),\r\n \"turns\": [\r\n {\r\n \"turn_id\": datasets.Value(\"int32\"),\r\n \"role\": datasets.Value(\"string\"),\r\n \"da\": datasets.Value(\"string\"),\r\n \"reference\": [\r\n {\r\n \"keys\" : datasets.Value(\"string\"),\r\n \"values\": datasets.Value(\"string\"), \r\n }\r\n\r\n ],\r\n \"utterance\": datasets.Value(\"string\"),\r\n }\r\n ],\r\n }\r\n),\r\n```\r\n\r\nthis way `turns` will be a list of dict, and the \"reference\" key of `turns` will be a list of dict as well","No problem thanks for all your help getting this to the final stages! 
Added .gitignore, removed .lock and applied the changes you asked for."],"created_at":1607344749000,"updated_at":1607962634000,"closed_at":1607962634000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1249","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1249","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1249.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1249.patch"},"body":"### Doc2dial: A Goal-Oriented Document-Grounded Dialogue Dataset v0.9\r\n\r\nOnce complete this will add the [Doc2dial](https:\/\/doc2dial.github.io\/data.html) dataset from the generic data sets list.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1249\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1248","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1248\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1248\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1248\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1248","id":758454438,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzNjI0ODY5","number":1248,"title":"Update step-by-step guide about the dataset cards","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607343132000,"updated_at":1607347164000,"closed_at":1607347163000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1248","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1248","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1248.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1248.patch"},"body":"Small update in the step-by-step guide about the dataset cards to indicate it can be created and completing while exploring the dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1248\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1247","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1247\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1247\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1247\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1247","id":758431640,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzNjA1NzE2","number":1247,"title":"Adding indonlu dataset","user":{"login":"yasirabd","id":6518504,"node_id":"MDQ6VXNlcjY1MTg1MDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6518504?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yasirabd","html_url":"https:\/\/github.com\/yasirabd","followers_url":"https:\/\/api.github.com\/users\/yasirabd\/followers","following_url":"https:\/\/api.github.com\/users\/yasirabd\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yasirabd\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yasirabd\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yasirabd\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yasirabd\/orgs","repos_url":"https:\/\/api.github.com\/users\/yasirabd\/repos","events_url":"https:\/\/api.github.com\/users\/yasirabd\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yasirabd\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["looks like this PR includes changes about many files other than the ones for IndoNLU\r\nCould you create another branch and another PR please ?","> looks like this PR includes changes about many files other than the ones for IndoNLU\r\n> Could you create another branch and another PR please ?\r\n\r\nOkay I'll make it"],"created_at":1607341125000,"updated_at":1607436710000,"closed_at":1607436710000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1247","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1247","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1247.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1247.patch"},"body":"IndoNLU benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems for Bahasa Indonesia. 
It contains 12 datasets.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1247\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1246","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1246\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1246\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1246\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1246","id":758418652,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzNTk0NjIz","number":1246,"title":"arXiv dataset added","user":{"login":"tanmoyio","id":33005287,"node_id":"MDQ6VXNlcjMzMDA1Mjg3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33005287?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tanmoyio","html_url":"https:\/\/github.com\/tanmoyio","followers_url":"https:\/\/api.github.com\/users\/tanmoyio\/followers","following_url":"https:\/\/api.github.com\/users\/tanmoyio\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tanmoyio\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tanmoyio\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tanmoyio\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tanmoyio\/orgs","repos_url":"https:\/\/api.github.com\/users\/tanmoyio\/repos","events_url":"https:\/\/api.github.com\/users\/tanmoyio\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tanmoyio\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607340023000,"updated_at":1607350978000,"closed_at":1607350978000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1246","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1246","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1246.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1246.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1246\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1245","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1245\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1245\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1245\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1245","id":758411233,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzNTg4NDUw","number":1245,"title":"Add Google Turkish Treebank 
Dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607339357000,"updated_at":1608136224000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1245","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1245","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1245.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1245.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1245\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1244","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1244\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1244\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1244\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1244","id":758384417,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzNTY1ODMz","number":1244,"title":"arxiv dataset 
added","user":{"login":"tanmoyio","id":33005287,"node_id":"MDQ6VXNlcjMzMDA1Mjg3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33005287?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tanmoyio","html_url":"https:\/\/github.com\/tanmoyio","followers_url":"https:\/\/api.github.com\/users\/tanmoyio\/followers","following_url":"https:\/\/api.github.com\/users\/tanmoyio\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tanmoyio\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tanmoyio\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tanmoyio\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tanmoyio\/orgs","repos_url":"https:\/\/api.github.com\/users\/tanmoyio\/repos","events_url":"https:\/\/api.github.com\/users\/tanmoyio\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tanmoyio\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607337174000,"updated_at":1607339063000,"closed_at":1607339063000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1244","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1244","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1244.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1244.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1244\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1243","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1243\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1243\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1243\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1243","id":758378904,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzNTYxNDAx","number":1243,"title":"Add Google Noun Verb 
Dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607336765000,"updated_at":1608641236000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1243","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1243","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1243.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1243.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1243\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1242","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1242\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1242\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1242\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1242","id":758370579,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzNTU0MzAx","number":1242,"title":"adding bprec","user":{"login":"kldarek","id":15803781,"node_id":"MDQ6VXNlcjE1ODAzNzgx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15803781?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/kldarek","html_url":"https:\/\/github.com\/kldarek","followers_url":"https:\/\/api.github.com\/users\/kldarek\/followers","following_url":"https:\/\/api.github.com\/users\/kldarek\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/kldarek\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/kldarek\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/kldarek\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/kldarek\/orgs","repos_url":"https:\/\/api.github.com\/users\/kldarek\/repos","events_url":"https:\/\/api.github.com\/users\/kldarek\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/kldarek\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["looks like this PR includes changes to many files other than the ones related to bprec\r\nCan 
you create another branch and another PR please ?","> looks like this PR includes changes to many files other than the ones related to bprec\r\n> Can you create another branch and another PR please ?\r\n\r\nYes, I realized I messed this one up, learning my way :) I'll close this one and open another hopefully clean PR :) Thanks!"],"created_at":1607336149000,"updated_at":1607438029000,"closed_at":1607438028000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1242","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1242","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1242.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1242.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1242\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1241","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1241\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1241\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1241\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1241","id":758360643,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzNTQ1OTQ0","number":1241,"title":"Opus elhuyar dataset for MT task having languages pair in Spanish to Basque","user":{"login":"spatil6","id":6419011,"node_id":"MDQ6VXNlcjY0MTkwMTE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6419011?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/spatil6","html_url":"https:\/\/github.com\/spatil6","followers_url":"https:\/\/api.github.com\/users\/spatil6\/followers","following_url":"https:\/\/api.github.com\/users\/spatil6\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/spatil6\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/spatil6\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/spatil6\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/spatil6\/orgs","repos_url":"https:\/\/api.github.com\/users\/spatil6\/repos","events_url":"https:\/\/api.github.com\/users\/spatil6\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/spatil6\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607335414000,"updated_at":1608389712000,"closed_at":1607526768000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1241","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1241","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1241.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1241.patch"},"body":"Opus elhuyar dataset for MT task having languages pair in Spanish to Basque\r\nMore info : http:\/\/opus.nlpl.eu\/Elhuyar.php","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1241\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1240","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1240\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1240\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1240\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1240","id":758355523,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzNTQxNjk5","number":1240,"title":"Multi Domain Sentiment Analysis Dataset (MDSA)","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["can you also run `make style` to format the code ?","I'll come back to this one in sometime :) @lhoestq ","Also if you would use `xml.etree.ElementTree` to parse the XML it would be awesome, because right now you're using an external dependency `xmltodict `","> Also if you would use xml.etree.ElementTree to parse the XML it would be awesome, because right now you're using an external dependency xmltodict\r\n\r\nIts pseudo xml so elementtree fails. xmltodict seems to be working quite good for this. 
do we have examples of pseudo xml datasets?","for the other pseudo xml the text is parsed manually","Can you add `xmltodict` to the test dependencies in setup.py please to fix the CI please ?","Also can you add the dataset card with the tags and run `make style` ?","Hi :) have you had a chance to fix the test dependency and apply `make style` ?\r\n\r\nFeel fee to ping me when it's ready for a review"],"created_at":1607335035000,"updated_at":1608135983000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1240","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1240","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1240.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1240.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1240\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1239","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1239\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1239\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1239\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1239","id":758339593,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzNTI4NTU5","number":1239,"title":"add yelp_review_full dataset","user":{"login":"hfawaz","id":29229602,"node_id":"MDQ6VXNlcjI5MjI5NjAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29229602?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hfawaz","html_url":"https:\/\/github.com\/hfawaz","followers_url":"https:\/\/api.github.com\/users\/hfawaz\/followers","following_url":"https:\/\/api.github.com\/users\/hfawaz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hfawaz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hfawaz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hfawaz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hfawaz\/orgs","repos_url":"https:\/\/api.github.com\/users\/hfawaz\/repos","events_url":"https:\/\/api.github.com\/users\/hfawaz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hfawaz\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Moved to https:\/\/github.com\/huggingface\/datasets\/pull\/1315"],"created_at":1607333736000,"updated_at":1607442204000,"closed_at":1607439650000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1239","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1239","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1239.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1239.patch"},"body":"This corresponds to the Yelp-5 requested in https:\/\/github.com\/huggingface\/datasets\/issues\/353 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1239\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1238","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1238\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1238\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1238\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1238","id":758321688,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzNTEzODUw","number":1238,"title":"adding poem_sentiment","user":{"login":"patil-suraj","id":27137566,"node_id":"MDQ6VXNlcjI3MTM3NTY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27137566?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patil-suraj","html_url":"https:\/\/github.com\/patil-suraj","followers_url":"https:\/\/api.github.com\/users\/patil-suraj\/followers","following_url":"https:\/\/api.github.com\/users\/patil-suraj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patil-suraj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patil-suraj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patil-suraj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patil-suraj\/orgs","repos_url":"https:\/\/api.github.com\/users\/patil-suraj\/repos","events_url":"https:\/\/api.github.com\/users\/patil-suraj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patil-suraj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607332312000,"updated_at":1607531770000,"closed_at":1607529765000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1238","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1238","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1238.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1238.patch"},"body":"Adding poem_sentiment dataset.\r\nhttps:\/\/github.com\/google-research-datasets\/poem-sentiment","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1238\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1237","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1237\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1237\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1237\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1237","id":758318353,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzNTExMDky","number":1237,"title":"Add AmbigQA 
dataset","user":{"login":"cceyda","id":15624271,"node_id":"MDQ6VXNlcjE1NjI0Mjcx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15624271?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cceyda","html_url":"https:\/\/github.com\/cceyda","followers_url":"https:\/\/api.github.com\/users\/cceyda\/followers","following_url":"https:\/\/api.github.com\/users\/cceyda\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cceyda\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cceyda\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cceyda\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cceyda\/orgs","repos_url":"https:\/\/api.github.com\/users\/cceyda\/repos","events_url":"https:\/\/api.github.com\/users\/cceyda\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cceyda\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607332039000,"updated_at":1607434732000,"closed_at":1607434732000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1237","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1237","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1237.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1237.patch"},"body":"# AmbigQA: Answering Ambiguous Open-domain Questions Dataset\r\nAdding the [AmbigQA](https:\/\/nlp.cs.washington.edu\/ambigqa\/) dataset as part of the sprint \ud83c\udf89 (from Open dataset list for Dataset sprint)\r\n\r\nAdded both the light and full versions (as seen on the dataset homepage)\r\nThe json format changes based on the value of one 'type' field, so I set the unavailable field to an empty list. 
This is explained in the README -> Data Fields\r\n\r\n```py\r\ntrain_light_dataset = load_dataset('.\/datasets\/ambig_qa',\"light\",split=\"train\")\r\nval_light_dataset = load_dataset('.\/datasets\/ambig_qa',\"light\",split=\"validation\")\r\ntrain_full_dataset = load_dataset('.\/datasets\/ambig_qa',\"full\",split=\"train\")\r\nval_full_dataset = load_dataset('.\/datasets\/ambig_qa',\"full\",split=\"validation\")\r\n\r\n\r\nfor example in train_light_dataset:\r\n for i,t in enumerate(example['annotations']['type']):\r\n if t =='singleAnswer':\r\n # use the example['annotations']['answer'][i]\r\n # example['annotations']['qaPairs'][i] - > is []\r\n print(example['annotations']['answer'][i])\r\n else:\r\n # use the example['annotations']['qaPairs'][i]\r\n # example['annotations']['answer'][i] - > is []\r\n print(example['annotations']['qaPairs'][i])\r\n\r\n```\r\n\r\n- [x] All tests passed\r\n- [x] Added dummy data\r\n- [x] Added data card (as much as I could)\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1237\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1236","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1236\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1236\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1236\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1236","id":758263012,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzNDYzOTg2","number":1236,"title":"Opus finlex dataset of language pair Finnish and Swedish","user":{"login":"spatil6","id":6419011,"node_id":"MDQ6VXNlcjY0MTkwMTE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6419011?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/spatil6","html_url":"https:\/\/github.com\/spatil6","followers_url":"https:\/\/api.github.com\/users\/spatil6\/followers","following_url":"https:\/\/api.github.com\/users\/spatil6\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/spatil6\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/spatil6\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/spatil6\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/spatil6\/orgs","repos_url":"https:\/\/api.github.com\/users\/spatil6\/repos","events_url":"https:\/\/api.github.com\/users\/spatil6\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/spatil6\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607327637000,"updated_at":1607434233000,"closed_at":1607434233000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1236","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1236","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1236.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1236.patch"},"body":"Added Opus_finlex dataset of language pair Finnish and Swedish\r\nMore info : 
http:\/\/opus.nlpl.eu\/Finlex.php","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1236\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1235","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1235\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1235\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1235\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1235","id":758234511,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzNDM5NDk4","number":1235,"title":"Wino bias","user":{"login":"akshayb7","id":29649801,"node_id":"MDQ6VXNlcjI5NjQ5ODAx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29649801?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/akshayb7","html_url":"https:\/\/github.com\/akshayb7","followers_url":"https:\/\/api.github.com\/users\/akshayb7\/followers","following_url":"https:\/\/api.github.com\/users\/akshayb7\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/akshayb7\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/akshayb7\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/akshayb7\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/akshayb7\/orgs","repos_url":"https:\/\/api.github.com\/users\/akshayb7\/repos","events_url":"https:\/\/api.github.com\/users\/akshayb7\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/akshayb7\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Closing this PR because of messed up history and opening another one after discussion with Quentin Lhoest.\r\n"],"created_at":1607325162000,"updated_at":1607633292000,"closed_at":1607633281000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1235","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1235","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1235.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1235.patch"},"body":"The PR will fail circleCi tests because of the requirement of manual loading of data. Fresh PR because of messed up history of the previous one. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1235\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1234","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1234\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1234\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1234\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1234","id":758229304,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzNDM0ODkz","number":1234,"title":"Added ade_corpus_v2, with 3 configs for relation extraction and classification task","user":{"login":"Nilanshrajput","id":28673745,"node_id":"MDQ6VXNlcjI4NjczNzQ1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28673745?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Nilanshrajput","html_url":"https:\/\/github.com\/Nilanshrajput","followers_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/followers","following_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/orgs","repos_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/repos","events_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq I have added the tags they are in separate files for 3 different configs","@lhoestq thanks for the review I added your suggested changes.","merging since the CI is fixed on master"],"created_at":1607324714000,"updated_at":1607968154000,"closed_at":1607968154000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1234","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1234","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1234.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1234.patch"},"body":"Adverse Drug Reaction Data: ADE-Corpus-V2 dataset added configs for different tasks with given data","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1234\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1233","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1233\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1233\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1233\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1233","id":758188699,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMzk5NTY3","number":1233,"title":"Add Curiosity Dialogs 
Dataset","user":{"login":"vineeths96","id":50873201,"node_id":"MDQ6VXNlcjUwODczMjAx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/50873201?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vineeths96","html_url":"https:\/\/github.com\/vineeths96","followers_url":"https:\/\/api.github.com\/users\/vineeths96\/followers","following_url":"https:\/\/api.github.com\/users\/vineeths96\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vineeths96\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vineeths96\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vineeths96\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vineeths96\/orgs","repos_url":"https:\/\/api.github.com\/users\/vineeths96\/repos","events_url":"https:\/\/api.github.com\/users\/vineeths96\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vineeths96\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq I tried manually creating the dummy files. But unfortunately it was raising an error during testing the dummy data (regarding JSON parsing).\r\n\r\nThe JSONs are pretty big so I cannot actually open it without crashing the text editor.\r\n\r\n Do you have any suggestions?","@lhoestq I have made all the changes you mentioned."],"created_at":1607320860000,"updated_at":1608471249000,"closed_at":1607525429000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1233","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1233","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1233.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1233.patch"},"body":"Add Facebook [Curiosity Dialogs](https:\/\/github.com\/facebookresearch\/curiosity) Dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1233\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1232","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1232\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1232\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1232\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1232","id":758180669,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMzkyNTc0","number":1232,"title":"Add Grail QA 
dataset","user":{"login":"mattbui","id":46804938,"node_id":"MDQ6VXNlcjQ2ODA0OTM4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/46804938?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mattbui","html_url":"https:\/\/github.com\/mattbui","followers_url":"https:\/\/api.github.com\/users\/mattbui\/followers","following_url":"https:\/\/api.github.com\/users\/mattbui\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mattbui\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mattbui\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mattbui\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mattbui\/orgs","repos_url":"https:\/\/api.github.com\/users\/mattbui\/repos","events_url":"https:\/\/api.github.com\/users\/mattbui\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mattbui\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607320005000,"updated_at":1607432599000,"closed_at":1607432599000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1232","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1232","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1232.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1232.patch"},"body":"For more information: https:\/\/dki-lab.github.io\/GrailQA\/","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1232\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1231","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1231\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1231\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1231\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1231","id":758121398,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMzQzMzAz","number":1231,"title":"Add Urdu Sentiment Corpus 
(USC)","user":{"login":"chaitnayabasava","id":44389205,"node_id":"MDQ6VXNlcjQ0Mzg5MjA1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/44389205?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/chaitnayabasava","html_url":"https:\/\/github.com\/chaitnayabasava","followers_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/followers","following_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/orgs","repos_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/repos","events_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607311520000,"updated_at":1607364316000,"closed_at":1607359403000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1231","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1231","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1231.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1231.patch"},"body":"@lhoestq opened a clean PR containing only relevant files.\r\n\r\nold PR #1140","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1231\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1230","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1230\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1230\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1230\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1230","id":758119342,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMzQxNTg0","number":1230,"title":"Add Urdu fake news 
dataset","user":{"login":"chaitnayabasava","id":44389205,"node_id":"MDQ6VXNlcjQ0Mzg5MjA1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/44389205?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/chaitnayabasava","html_url":"https:\/\/github.com\/chaitnayabasava","followers_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/followers","following_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/orgs","repos_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/repos","events_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on master"],"created_at":1607311190000,"updated_at":1607364295000,"closed_at":1607360274000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1230","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1230","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1230.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1230.patch"},"body":"@lhoestq opened a clean PR containing only relevant files.\r\n\r\nold PR #1125 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1230\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1229","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1229\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1229\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1229\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1229","id":758100707,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMzI2OTgw","number":1229,"title":"Muchocine - Spanish movie reviews 
dataset","user":{"login":"mapmeld","id":643918,"node_id":"MDQ6VXNlcjY0MzkxOA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/643918?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mapmeld","html_url":"https:\/\/github.com\/mapmeld","followers_url":"https:\/\/api.github.com\/users\/mapmeld\/followers","following_url":"https:\/\/api.github.com\/users\/mapmeld\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mapmeld\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mapmeld\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mapmeld\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mapmeld\/orgs","repos_url":"https:\/\/api.github.com\/users\/mapmeld\/repos","events_url":"https:\/\/api.github.com\/users\/mapmeld\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mapmeld\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @mapmeld !\r\nhave you had a chance to take a look at my suggestions ?\r\n\r\nFeel free to ping me if you have questions or when you're ready for a review","@lhoestq unfortunately I don't have any more information about where the dataset comes from","It's fine, you can just add the sections titles back and leave the content with `[More Information Needed]`\r\n\r\n","added missing sections, updated the Python code \u2705 "],"created_at":1607307809000,"updated_at":1608545349000,"closed_at":1608545349000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1229","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1229","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1229.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1229.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1229\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1228","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1228\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1228\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1228\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1228","id":758049068,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMjg1ODI2","number":1228,"title":"add opus_100 
dataset","user":{"login":"vasudevgupta7","id":53136577,"node_id":"MDQ6VXNlcjUzMTM2NTc3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/53136577?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vasudevgupta7","html_url":"https:\/\/github.com\/vasudevgupta7","followers_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/followers","following_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/orgs","repos_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/repos","events_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["done."],"created_at":1607296644000,"updated_at":1607525640000,"closed_at":1607525640000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1228","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1228","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1228.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1228.patch"},"body":"This PR will add [opus100 dataset](http:\/\/opus.nlpl.eu\/opus-100.php).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1228\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1227","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1227\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1227\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1227\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1227","id":758049060,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMjg1ODIx","number":1227,"title":"readme: remove link to Google's responsible AI 
practices","user":{"login":"stefan-it","id":20651387,"node_id":"MDQ6VXNlcjIwNjUxMzg3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20651387?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stefan-it","html_url":"https:\/\/github.com\/stefan-it","followers_url":"https:\/\/api.github.com\/users\/stefan-it\/followers","following_url":"https:\/\/api.github.com\/users\/stefan-it\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stefan-it\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stefan-it\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stefan-it\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stefan-it\/orgs","repos_url":"https:\/\/api.github.com\/users\/stefan-it\/repos","events_url":"https:\/\/api.github.com\/users\/stefan-it\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stefan-it\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607296642000,"updated_at":1607330119000,"closed_at":1607296841000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1227","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1227","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1227.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1227.patch"},"body":"...maybe we'll find a company that reallly stands behind responsible AI practices ;)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1227\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1226","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1226\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1226\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1226\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1226","id":758036979,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMjc2OTU3","number":1226,"title":"Add menyo_20k_mt dataset","user":{"login":"yvonnegitau","id":7923902,"node_id":"MDQ6VXNlcjc5MjM5MDI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7923902?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yvonnegitau","html_url":"https:\/\/github.com\/yvonnegitau","followers_url":"https:\/\/api.github.com\/users\/yvonnegitau\/followers","following_url":"https:\/\/api.github.com\/users\/yvonnegitau\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yvonnegitau\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yvonnegitau\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yvonnegitau\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yvonnegitau\/orgs","repos_url":"https:\/\/api.github.com\/users\/yvonnegitau\/repos","events_url":"https:\/\/api.github.com\/users\/yvonnegitau\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yvonnegitau\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["looks like your 
PR includes changes about many other files than the ones for menyo 20k mt\r\nCan you create another branch and another PR please ?","Yes, I will"],"created_at":1607292975000,"updated_at":1607628134000,"closed_at":1607628134000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1226","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1226","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1226.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1226.patch"},"body":"Add menyo_20k_mt dataset","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1226\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1225","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1225\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1225\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1225\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1225","id":758035501,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMjc1ODcx","number":1225,"title":"Add Winobias dataset","user":{"login":"akshayb7","id":29649801,"node_id":"MDQ6VXNlcjI5NjQ5ODAx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29649801?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/akshayb7","html_url":"https:\/\/github.com\/akshayb7","followers_url":"https:\/\/api.github.com\/users\/akshayb7\/followers","following_url":"https:\/\/api.github.com\/users\/akshayb7\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/akshayb7\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/akshayb7\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/akshayb7\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/akshayb7\/orgs","repos_url":"https:\/\/api.github.com\/users\/akshayb7\/repos","events_url":"https:\/\/api.github.com\/users\/akshayb7\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/akshayb7\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Will make another pull request with cleaner history"],"created_at":1607292500000,"updated_at":1607323559000,"closed_at":1607323250000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1225","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1225","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1225.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1225.patch"},"body":"Pardon me for different commits with same message. 
There were conflicts after I rebased master while simultaneously pushing my changes to local repo, hence the duplicate entries.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1225\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1224","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1224\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1224\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1224\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1224","id":758022998,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMjY2Njg1","number":1224,"title":"adding conceptnet5","user":{"login":"ontocord","id":8900094,"node_id":"MDQ6VXNlcjg5MDAwOTQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8900094?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ontocord","html_url":"https:\/\/github.com\/ontocord","followers_url":"https:\/\/api.github.com\/users\/ontocord\/followers","following_url":"https:\/\/api.github.com\/users\/ontocord\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ontocord\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ontocord\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ontocord\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ontocord\/orgs","repos_url":"https:\/\/api.github.com\/users\/ontocord\/repos","events_url":"https:\/\/api.github.com\/users\/ontocord\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ontocord\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thank you. I'll make those changes. but I'm having problems trying to push my changes to my fork\r\n","Hi, I've removed the TODO, and added a README.md. How do I push these changes?\r\n","Also, what docstring are you recommending?\r\n","> Hi, I've removed the TODO, and added a README.md. How do I push these changes?\r\n\r\nyou can just commit and push your changes to the same branch as your first commit.","@ghomasHudson I've tried it but still getting code quality error. I've removed all blank lines, etc. required by flake8. Don't know what else to do","> @ghomasHudson I've tried it but still getting code quality error. I've removed all blank lines, etc. required by flake8. Don't know what else to do\r\n\r\nDid you run `make style` before committing? When I run it, it fixes some things (e.g. Splitting line 96 which is currently too long).","I think @yjernite is looking into this. I did \"make style\" but nothing happens","looks like your PR includes changes about many other files than the ones related to conceptnet5\r\n\r\ncould you create another branch and another PR please ?","@lhoestq I'm not sure what I did wrong. What did I push that wasn't conceptnet5? How do I see this?\r\n\r\n did this\r\n\r\nmake style\r\nflake8 datasets\r\ngit add datasets\/<your_dataset_name>\r\ngit commit\r\ngit fetch upstream\r\ngit rebase upstream\/master\r\ngit pull\r\ngit push -u origin conceptnet5","Thanks for rebasing and force push :) ","Yeah! 
Thank you @lhoestq, @ghomasHudson and @yjernite !"],"created_at":1607288813000,"updated_at":1607531896000,"closed_at":1607524637000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1224","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1224","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1224.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1224.patch"},"body":"Adding the conceptnet5 and omcs txt files used to create the conceptnet5 dataset. Conceptnet5 is a common sense dataset. More info can be found here: https:\/\/github.com\/commonsense\/conceptnet5\/wiki","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1224\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1223","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1223\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1223\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1223\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1223","id":758022208,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMjY2MDc4","number":1223,"title":"\ud83c\uddf8\ud83c\uddea Added Swedish Reviews dataset for sentiment classification in Sw\u2026","user":{"login":"timpal0l","id":6556710,"node_id":"MDQ6VXNlcjY1NTY3MTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6556710?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/timpal0l","html_url":"https:\/\/github.com\/timpal0l","followers_url":"https:\/\/api.github.com\/users\/timpal0l\/followers","following_url":"https:\/\/api.github.com\/users\/timpal0l\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/timpal0l\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/timpal0l\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/timpal0l\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/timpal0l\/orgs","repos_url":"https:\/\/api.github.com\/users\/timpal0l\/repos","events_url":"https:\/\/api.github.com\/users\/timpal0l\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/timpal0l\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607288574000,"updated_at":1607424896000,"closed_at":1607424896000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1223","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1223","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1223.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1223.patch"},"body":"perhaps: @lhoestq \ud83e\udd17 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1223\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1222","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1222\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1222\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1222\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1222","id":758018953,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMjYzODIx","number":1222,"title":"Add numeric fused head dataset","user":{"login":"ghomasHudson","id":13795113,"node_id":"MDQ6VXNlcjEzNzk1MTEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13795113?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ghomasHudson","html_url":"https:\/\/github.com\/ghomasHudson","followers_url":"https:\/\/api.github.com\/users\/ghomasHudson\/followers","following_url":"https:\/\/api.github.com\/users\/ghomasHudson\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ghomasHudson\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ghomasHudson\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ghomasHudson\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ghomasHudson\/orgs","repos_url":"https:\/\/api.github.com\/users\/ghomasHudson\/repos","events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> Thanks for adding this @ghomasHudson!\r\n> I added some comments for some of the fields.\r\n> \r\n> Also, I'm not sure about this since I haven't used the library yet, but maybe it's worth adding the identification and resolution as two separate datasets?\r\n\r\nThanks for replying @yanaiela - I hope this will make your dataset more accessible to a wider audience - I've added the changes to the model card you suggested.\r\n\r\nIn terms of the identification and resolution tasks, I've currently added them as separate `splits` in huggingface\/datasets so you can load identification like this:\r\n\r\n```\r\nimport datasets\r\ndataset = datasets.load_dataset(\"numeric_fused_head\", \"identification\")\r\nprint(dataset[\"train\"][0])\r\n>> {\"tokens\": [\"The\", \"quick\", \"brown\", \"fox\",....], \"start_index\": 11, \"end_index\": 12, \"label\": 0}\r\n```\r\nAnd resolution like this:\r\n\r\n```\r\nimport datasets\r\ndataset = datasets.load_dataset(\"numeric_fused_head\", \"resolution\")\r\nprint(dataset[\"train\"][0])\r\n>> {\"tokens\": [\"The\", \"quick\", \"brown\", \"fox\",....], \"head\": [\"AGE\"], \"anchors_indices\": [12], ...}\r\n```","I hope so too, thanks!\r\n\r\nRe the splits, that makes sense to me."],"created_at":1607287613000,"updated_at":1607426276000,"closed_at":1607426275000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1222","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1222","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1222.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1222.patch"},"body":"Adding the [NFH: Numeric Fused Head](https:\/\/nlp.biu.ac.il\/~lazary\/fh\/) dataset.\r\n\r\nEverything 
looks sensible and I've included both the identification and resolution tasks. I haven't personally used this dataset in my research so am unable to specify what the default configuration \/ supervised keys should be.\r\n\r\nI've filled out the basic info on the model card to the best of my knowledge but it's a little tricky to understand exactly what the fields represent.\r\n\r\nDataset author: @yanaiela ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1222\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1221","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1221\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1221\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1221\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1221","id":758016032,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMjYxNjkw","number":1221,"title":"Add HKCanCor","user":{"login":"j-chim","id":22435209,"node_id":"MDQ6VXNlcjIyNDM1MjA5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22435209?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/j-chim","html_url":"https:\/\/github.com\/j-chim","followers_url":"https:\/\/api.github.com\/users\/j-chim\/followers","following_url":"https:\/\/api.github.com\/users\/j-chim\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/j-chim\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/j-chim\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/j-chim\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/j-chim\/orgs","repos_url":"https:\/\/api.github.com\/users\/j-chim\/repos","events_url":"https:\/\/api.github.com\/users\/j-chim\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/j-chim\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607286727000,"updated_at":1607531658000,"closed_at":1607531658000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1221","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1221","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1221.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1221.patch"},"body":"This PR adds the [Hong Kong Cantonese Corpus](http:\/\/compling.hss.ntu.edu.sg\/hkcancor\/), by [Luke and Wong 2015](http:\/\/compling.hss.ntu.edu.sg\/hkcancor\/data\/LukeWong_Hong-Kong-Cantonese-Corpus.pdf). \r\n\r\nThe dummy data included here was manually created, as the original dataset uses a xml-like format (see a copy hosted [here](https:\/\/github.com\/fcbond\/hkcancor\/blob\/master\/sample\/d1_v.txt) for example) that requires a few processing steps. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1221\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1220","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1220\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1220\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1220\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1220","id":758015894,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMjYxNTgw","number":1220,"title":"add Korean HateSpeech dataset","user":{"login":"stevhliu","id":59462357,"node_id":"MDQ6VXNlcjU5NDYyMzU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59462357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stevhliu","html_url":"https:\/\/github.com\/stevhliu","followers_url":"https:\/\/api.github.com\/users\/stevhliu\/followers","following_url":"https:\/\/api.github.com\/users\/stevhliu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stevhliu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stevhliu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stevhliu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stevhliu\/orgs","repos_url":"https:\/\/api.github.com\/users\/stevhliu\/repos","events_url":"https:\/\/api.github.com\/users\/stevhliu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stevhliu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["It looks like you forgot to `make style` (I forget it a lot too \ud83e\udd26 )\r\n+ add dummy data","hi @cceyda \ud83d\udc4b, thanks for the hint! it looks like i've run into some other errors though in `_split_generators` or `_generate_examples`. do you have any idea of what's wrong here? 
\ud83d\ude05","I get the same errors on another pr too, so it probably has something to do with circleci, waiting on help.","the `RemoteDatasetTest ` error on the CI is fixed on master so it's fine","merging since the CI is fixed on master"],"created_at":1607286689000,"updated_at":1607440869000,"closed_at":1607425542000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1220","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1220","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1220.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1220.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1220\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1219","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1219\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1219\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1219\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1219","id":758013368,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMjU5NzMw","number":1219,"title":"Add Korean NER dataset","user":{"login":"jaketae","id":25360440,"node_id":"MDQ6VXNlcjI1MzYwNDQw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25360440?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jaketae","html_url":"https:\/\/github.com\/jaketae","followers_url":"https:\/\/api.github.com\/users\/jaketae\/followers","following_url":"https:\/\/api.github.com\/users\/jaketae\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jaketae\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jaketae\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jaketae\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jaketae\/orgs","repos_url":"https:\/\/api.github.com\/users\/jaketae\/repos","events_url":"https:\/\/api.github.com\/users\/jaketae\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jaketae\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607285946000,"updated_at":1607423133000,"closed_at":1607423133000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1219","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1219","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1219.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1219.patch"},"body":"Supersedes #1177 \r\n\r\n> This PR adds the [Korean named entity recognition dataset](https:\/\/github.com\/kmounlp\/NER). 
This dataset has been used in many downstream tasks, such as training [KoBERT](https:\/\/github.com\/SKTBrain\/KoBERT) for NER, as seen in this [KoBERT-CRF implementation](https:\/\/github.com\/eagle705\/pytorch-bert-crf-ner).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1219\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1218","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1218\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1218\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1218\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1218","id":758009113,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMjU2NzIz","number":1218,"title":"Add WMT20 MLQE 3 shared tasks","user":{"login":"VictorSanh","id":16107619,"node_id":"MDQ6VXNlcjE2MTA3NjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16107619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/VictorSanh","html_url":"https:\/\/github.com\/VictorSanh","followers_url":"https:\/\/api.github.com\/users\/VictorSanh\/followers","following_url":"https:\/\/api.github.com\/users\/VictorSanh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/VictorSanh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/VictorSanh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/VictorSanh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/VictorSanh\/orgs","repos_url":"https:\/\/api.github.com\/users\/VictorSanh\/repos","events_url":"https:\/\/api.github.com\/users\/VictorSanh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/VictorSanh\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for the comments Quentin!\r\nI integrated them","It should be ok now!\r\nSorry I wasn't attentive enough.\r\n(tests are currently failing, I understand it's from other datasets)","merging since the CI is fixed on master"],"created_at":1607284752000,"updated_at":1608046050000,"closed_at":1608046049000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1218","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1218","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1218.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1218.patch"},"body":"3 tasks for the WMT 20 MLQE shared tasks -> 3 different datasets\r\n\r\n(I re-created #1137 because it was too messy).\r\n\r\nNote that in L199 `task3.py`, I used `logging.warning` to print some missing data in the train set.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1218\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1217","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1217\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1217\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1217\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1217","id":758008321,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMjU2MjU4","number":1217,"title":"adding DataCommons fact checking","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607284572000,"updated_at":1608135768000,"closed_at":1608135768000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1217","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1217","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1217.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1217.patch"},"body":"Adding the data from: https:\/\/datacommons.org\/factcheck\/\r\n\r\nHad to cheat a bit with the dummy data as the test doesn't recognize `.txt.gz`: had to rename uncompressed files with the `.gz` extension manually without actually compressing","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1217\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1216","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1216\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1216\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1216\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1216","id":758005982,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMjU0ODE2","number":1216,"title":"Add 
limit","user":{"login":"j-chim","id":22435209,"node_id":"MDQ6VXNlcjIyNDM1MjA5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22435209?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/j-chim","html_url":"https:\/\/github.com\/j-chim","followers_url":"https:\/\/api.github.com\/users\/j-chim\/followers","following_url":"https:\/\/api.github.com\/users\/j-chim\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/j-chim\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/j-chim\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/j-chim\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/j-chim\/orgs","repos_url":"https:\/\/api.github.com\/users\/j-chim\/repos","events_url":"https:\/\/api.github.com\/users\/j-chim\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/j-chim\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["My bad, didn't see this on the open dataset list. Closing this since it overlaps with PR#1256"],"created_at":1607283978000,"updated_at":1607413931000,"closed_at":1607413931000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1216","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1216","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1216.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1216.patch"},"body":"This PR adds [LiMiT](https:\/\/github.com\/ilmgut\/limit_dataset), a dataset for literal motion classification\/extraction by [Manotas et al., 2020](https:\/\/www.aclweb.org\/anthology\/2020.findings-emnlp.88.pdf).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1216\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1215","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1215\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1215\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1215\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1215","id":758002885,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMjUyNjUx","number":1215,"title":"Add irc 
disentanglement","user":{"login":"dhruvjoshi1998","id":32560035,"node_id":"MDQ6VXNlcjMyNTYwMDM1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32560035?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dhruvjoshi1998","html_url":"https:\/\/github.com\/dhruvjoshi1998","followers_url":"https:\/\/api.github.com\/users\/dhruvjoshi1998\/followers","following_url":"https:\/\/api.github.com\/users\/dhruvjoshi1998\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dhruvjoshi1998\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dhruvjoshi1998\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dhruvjoshi1998\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dhruvjoshi1998\/orgs","repos_url":"https:\/\/api.github.com\/users\/dhruvjoshi1998\/repos","events_url":"https:\/\/api.github.com\/users\/dhruvjoshi1998\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dhruvjoshi1998\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["looks like this PR includes changes about many files other than the ones for irc_disentanglement\r\n\r\nCould you please create a new branch and create another PR please ?","closing in favor of #1586 "],"created_at":1607283046000,"updated_at":1608135505000,"closed_at":1608135505000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1215","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1215","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1215.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1215.patch"},"body":"added files for irc disentanglement dataset\r\nwas unable to test dummy data as a result of vpn\/proxy issues","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1215\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1214","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1214\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1214\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1214\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1214","id":758002786,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMjUyNTgx","number":1214,"title":"adding medical-questions-pairs 
dataset","user":{"login":"tuner007","id":46425391,"node_id":"MDQ6VXNlcjQ2NDI1Mzkx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/46425391?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tuner007","html_url":"https:\/\/github.com\/tuner007","followers_url":"https:\/\/api.github.com\/users\/tuner007\/followers","following_url":"https:\/\/api.github.com\/users\/tuner007\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tuner007\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tuner007\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tuner007\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tuner007\/orgs","repos_url":"https:\/\/api.github.com\/users\/tuner007\/repos","events_url":"https:\/\/api.github.com\/users\/tuner007\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tuner007\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607283012000,"updated_at":1607524973000,"closed_at":1607524973000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1214","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1214","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1214.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1214.patch"},"body":"This dataset consists of 3048 similar and dissimilar medical question pairs hand-generated and labeled by Curai's doctors.\r\nDataset : https:\/\/github.com\/curai\/medical-question-pair-dataset\r\nPaper : https:\/\/drive.google.com\/file\/d\/1CHPGBXkvZuZc8hpr46HeHU6U6jnVze-s\/view","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1214\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1213","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1213\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1213\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1213\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1213","id":757983884,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMjM4NzEz","number":1213,"title":"add 
taskmaster3","user":{"login":"patil-suraj","id":27137566,"node_id":"MDQ6VXNlcjI3MTM3NTY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27137566?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patil-suraj","html_url":"https:\/\/github.com\/patil-suraj","followers_url":"https:\/\/api.github.com\/users\/patil-suraj\/followers","following_url":"https:\/\/api.github.com\/users\/patil-suraj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patil-suraj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patil-suraj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patil-suraj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patil-suraj\/orgs","repos_url":"https:\/\/api.github.com\/users\/patil-suraj\/repos","events_url":"https:\/\/api.github.com\/users\/patil-suraj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patil-suraj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["(you were unlucky, my rule of thumb for reducing the dummy data is to check whether they're above 50KB and you're at 52KB ^^')","> (you were unlucky, my rule of thumb for reducing the dummy data is to check whether they're above 50KB and you're at 52KB ^^')\r\n\r\nOops :(\r\n\r\nThanks for the suggestion, will reduce the size"],"created_at":1607277363000,"updated_at":1607511910000,"closed_at":1607511629000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1213","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1213","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1213.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1213.patch"},"body":"Adding Taskmaster-3 dataset\r\nhttps:\/\/github.com\/google-research-datasets\/Taskmaster\/tree\/master\/TM-3-2020.\r\n\r\nThe dataset structure almost same as original dataset with these two changes\r\n\r\n1. In original dataset, each `apis` has a `args` filed which is a `dict` with variable keys, which represent the name and value of the args. Here converted that to a `list` of `dict` with keys `arg_name` and `arg_value`. For ex.\r\n\r\n```python\r\nargs = {\"name.movie\": \"Mulan\", \"name.theater\": \": \"Mountain AMC 16\"}\r\n```\r\nbecomes \r\n```python\r\n[\r\n {\r\n \"arg_name\": \"name.movie\",\r\n \"arg_value\": \"Mulan\"\r\n },\r\n {\r\n \"arg_name\": \"name.theater\",\r\n \"arg_value\": \"Mountain AMC 16\"\r\n }\r\n]\r\n```\r\n\r\n2. Each `apis` has a `response` which is also a `dict` with variable keys representing response name\/type and it's value. 
As above converted it to `list` of `dict` with keys `response_name` and `response_value`.\r\n\r\n\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1213\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1212","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1212\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1212\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1212\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1212","id":757978795,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMjM1MTky","number":1212,"title":"Add Sanskrit Classic texts in datasets","user":{"login":"parmarsuraj99","id":9317265,"node_id":"MDQ6VXNlcjkzMTcyNjU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9317265?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/parmarsuraj99","html_url":"https:\/\/github.com\/parmarsuraj99","followers_url":"https:\/\/api.github.com\/users\/parmarsuraj99\/followers","following_url":"https:\/\/api.github.com\/users\/parmarsuraj99\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/parmarsuraj99\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/parmarsuraj99\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/parmarsuraj99\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/parmarsuraj99\/orgs","repos_url":"https:\/\/api.github.com\/users\/parmarsuraj99\/repos","events_url":"https:\/\/api.github.com\/users\/parmarsuraj99\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/parmarsuraj99\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on master"],"created_at":1607275891000,"updated_at":1607367848000,"closed_at":1607367848000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1212","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1212","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1212.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1212.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1212\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1211","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1211\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1211\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1211\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1211","id":757973719,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMjMxNDY3","number":1211,"title":"Add large spanish 
corpus","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607274410000,"updated_at":1607520996000,"closed_at":1607520996000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1211","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1211","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1211.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1211.patch"},"body":"Adds a collection of Spanish corpora that can be useful for pretraining language models. \r\n\r\nFollowing a nice suggestion from @yjernite we provide the user with three main ways to preprocess \/ load either \r\n\r\n* the whole corpus (17GB!)\r\n* one specific sub-corpus\r\n* the whole corpus, but return a single split. 
this is useful if you want to cache the whole preprocessing step once and interact with individual sub-corpora\r\n\r\nSee the dataset card for more details.\r\n\r\nReady for review!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1211\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1210","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1210\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1210\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1210\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1210","id":757966959,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMjI2NDQ2","number":1210,"title":"Add XSUM Hallucination Annotations Dataset","user":{"login":"vineeths96","id":50873201,"node_id":"MDQ6VXNlcjUwODczMjAx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/50873201?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vineeths96","html_url":"https:\/\/github.com\/vineeths96","followers_url":"https:\/\/api.github.com\/users\/vineeths96\/followers","following_url":"https:\/\/api.github.com\/users\/vineeths96\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vineeths96\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vineeths96\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vineeths96\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vineeths96\/orgs","repos_url":"https:\/\/api.github.com\/users\/vineeths96\/repos","events_url":"https:\/\/api.github.com\/users\/vineeths96\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vineeths96\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq All necessary modifications have been done."],"created_at":1607272819000,"updated_at":1608471296000,"closed_at":1608137831000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1210","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1210","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1210.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1210.patch"},"body":"Adding Google [XSum Hallucination Annotations](https:\/\/github.com\/google-research-datasets\/xsum_hallucination_annotations) dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1210\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1209","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1209\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1209\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1209\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1209","id":757965934,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMjI1NzMw","number":1209,"title":"[AfriBooms] Dataset exists 
already","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["It's so cool seeing all these datasets fly by and see how they are still of interest. I did my internship at the research group of Liesbeth Augustinus et al. They're a very kind group of people!","merging since the CI is fixed on master"],"created_at":1607272513000,"updated_at":1607359944000,"closed_at":1607359943000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1209","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1209","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1209.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1209.patch"},"body":"When trying to add \"AfriBooms\": https:\/\/docs.google.com\/spreadsheets\/d\/12ShVow0M6RavnzbBEabm5j5dv12zBaf0y-niwEPPlo4\/edit#gid=1386399609 I noticed that the dataset exists already as a config of Universal Dependencies (universal_dependencies.py). 
I checked and the data exactly matches so that the new data link does not give any new data.\r\n\r\nThis PR improves the config's description a bit by linking to the paper.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1209\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1208","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1208\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1208\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1208\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1208","id":757961368,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMjIyMzQ4","number":1208,"title":"Add HKCanCor","user":{"login":"j-chim","id":22435209,"node_id":"MDQ6VXNlcjIyNDM1MjA5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22435209?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/j-chim","html_url":"https:\/\/github.com\/j-chim","followers_url":"https:\/\/api.github.com\/users\/j-chim\/followers","following_url":"https:\/\/api.github.com\/users\/j-chim\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/j-chim\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/j-chim\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/j-chim\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/j-chim\/orgs","repos_url":"https:\/\/api.github.com\/users\/j-chim\/repos","events_url":"https:\/\/api.github.com\/users\/j-chim\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/j-chim\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607271283000,"updated_at":1607286197000,"closed_at":1607286114000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1208","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1208","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1208.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1208.patch"},"body":"(Apologies, didn't manage the branches properly and the PR got too messy. 
Going to open a new PR with everything in order)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1208\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1207","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1207\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1207\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1207\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1207","id":757953830,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMjE3MDA4","number":1207,"title":"Add msr_genomics_kbcomp Dataset","user":{"login":"manandey","id":6687858,"node_id":"MDQ6VXNlcjY2ODc4NTg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6687858?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/manandey","html_url":"https:\/\/github.com\/manandey","followers_url":"https:\/\/api.github.com\/users\/manandey\/followers","following_url":"https:\/\/api.github.com\/users\/manandey\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/manandey\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/manandey\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/manandey\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/manandey\/orgs","repos_url":"https:\/\/api.github.com\/users\/manandey\/repos","events_url":"https:\/\/api.github.com\/users\/manandey\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/manandey\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607269205000,"updated_at":1607356517000,"closed_at":1607356511000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1207","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1207","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1207.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1207.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1207\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1206","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1206\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1206\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1206\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1206","id":757952992,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMjE2NDYw","number":1206,"title":"Adding Enriched WebNLG 
dataset","user":{"login":"TevenLeScao","id":26709476,"node_id":"MDQ6VXNlcjI2NzA5NDc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26709476?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/TevenLeScao","html_url":"https:\/\/github.com\/TevenLeScao","followers_url":"https:\/\/api.github.com\/users\/TevenLeScao\/followers","following_url":"https:\/\/api.github.com\/users\/TevenLeScao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/TevenLeScao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/TevenLeScao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/TevenLeScao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/TevenLeScao\/orgs","repos_url":"https:\/\/api.github.com\/users\/TevenLeScao\/repos","events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Nice :) \r\n\r\ncould you add the tags and also remove all the dummy data files that are not zipped ? The diff currently shows 800 files changes xD","Aaaaand it's rebase time - the new one is at #1264 !","closing this one since a new PR was created"],"created_at":1607268980000,"updated_at":1607506832000,"closed_at":1607506832000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1206","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1206","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1206.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1206.patch"},"body":"This pull requests adds the `en` and `de` versions of the [Enriched WebNLG](https:\/\/github.com\/ThiagoCF05\/webnlg) dataset","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1206\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1205","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1205\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1205\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1205\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1205","id":757942403,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMjA4NDI1","number":1205,"title":"add lst20 with manual 
download","user":{"login":"cstorm125","id":15519308,"node_id":"MDQ6VXNlcjE1NTE5MzA4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15519308?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cstorm125","html_url":"https:\/\/github.com\/cstorm125","followers_url":"https:\/\/api.github.com\/users\/cstorm125\/followers","following_url":"https:\/\/api.github.com\/users\/cstorm125\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cstorm125\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cstorm125\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cstorm125\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cstorm125\/orgs","repos_url":"https:\/\/api.github.com\/users\/cstorm125\/repos","events_url":"https:\/\/api.github.com\/users\/cstorm125\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cstorm125\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The pytest suite doesn't allow manual downloads so we just make sure that the `datasets-cli test` command to run without errors instead","@lhoestq Changes made. Thank you for the review. I've made some same mistakes for https:\/\/github.com\/huggingface\/datasets\/pull\/1253 too. Will fix them before review."],"created_at":1607266150000,"updated_at":1607531590000,"closed_at":1607531590000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1205","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1205","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1205.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1205.patch"},"body":"passed on local:\r\n```\r\nRUN_SLOW=1 pytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_lst20\r\n```\r\nNot sure how to test:\r\n```\r\nRUN_SLOW=1 pytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_lst20\r\n```\r\n\r\n```\r\nLST20 Corpus is a dataset for Thai language processing developed by National Electronics and Computer Technology Center (NECTEC), Thailand.\r\nIt offers five layers of linguistic annotation: word boundaries, POS tagging, named entities, clause boundaries, and sentence boundaries.\r\nAt a large scale, it consists of 3,164,002 words, 288,020 named entities, 248,181 clauses, and 74,180 sentences, while it is annotated with\r\n16 distinct POS tags. All 3,745 documents are also annotated with one of 15 news genres. 
Regarding its sheer size, this dataset is\r\nconsidered large enough for developing joint neural models for NLP.\r\nManually download at https:\/\/aiforthai.in.th\/corpus.php\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1205\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1204","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1204\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1204\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1204\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1204","id":757939475,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMjA2MzE3","number":1204,"title":"adding meta_woz dataset","user":{"login":"pacman100","id":13534540,"node_id":"MDQ6VXNlcjEzNTM0NTQw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13534540?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/pacman100","html_url":"https:\/\/github.com\/pacman100","followers_url":"https:\/\/api.github.com\/users\/pacman100\/followers","following_url":"https:\/\/api.github.com\/users\/pacman100\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/pacman100\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/pacman100\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/pacman100\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/pacman100\/orgs","repos_url":"https:\/\/api.github.com\/users\/pacman100\/repos","events_url":"https:\/\/api.github.com\/users\/pacman100\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/pacman100\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607265253000,"updated_at":1608131125000,"closed_at":1608131124000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1204","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1204","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1204.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1204.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1204\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1203","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1203\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1203\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1203\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1203","id":757935170,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMjAzMTc0","number":1203,"title":"Add Neural Code Search 
Dataset","user":{"login":"vinaykudari","id":34424769,"node_id":"MDQ6VXNlcjM0NDI0NzY5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/34424769?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vinaykudari","html_url":"https:\/\/github.com\/vinaykudari","followers_url":"https:\/\/api.github.com\/users\/vinaykudari\/followers","following_url":"https:\/\/api.github.com\/users\/vinaykudari\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vinaykudari\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vinaykudari\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vinaykudari\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vinaykudari\/orgs","repos_url":"https:\/\/api.github.com\/users\/vinaykudari\/repos","events_url":"https:\/\/api.github.com\/users\/vinaykudari\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vinaykudari\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> Really good thanks !\r\n> \r\n> I left a few comments\r\n\r\nThanks, resolved them :) ","looks like this PR includes changes about many other files than the ones for Code Search\r\n\r\ncan you create another branch and another PR please ?","> looks like this PR includes changes about many other files than the ones for Code Search\r\n> \r\n> can you create another branch and another PR please ?\r\n\r\nOkay sure"],"created_at":1607263959000,"updated_at":1607532015000,"closed_at":1607532015000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1203","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1203","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1203.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1203.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1203\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1202","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1202\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1202\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1202\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1202","id":757934408,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMjAyNjE0","number":1202,"title":"Medical question 
pairs","user":{"login":"tuner007","id":46425391,"node_id":"MDQ6VXNlcjQ2NDI1Mzkx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/46425391?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tuner007","html_url":"https:\/\/github.com\/tuner007","followers_url":"https:\/\/api.github.com\/users\/tuner007\/followers","following_url":"https:\/\/api.github.com\/users\/tuner007\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tuner007\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tuner007\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tuner007\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tuner007\/orgs","repos_url":"https:\/\/api.github.com\/users\/tuner007\/repos","events_url":"https:\/\/api.github.com\/users\/tuner007\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tuner007\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607263747000,"updated_at":1607276488000,"closed_at":1607276488000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1202","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1202","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1202.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1202.patch"},"body":"This dataset consists of 3048 similar and dissimilar medical question pairs hand-generated and labeled by Curai's doctors.\r\nDataset : https:\/\/github.com\/curai\/medical-question-pair-dataset\r\nPaper : https:\/\/drive.google.com\/file\/d\/1CHPGBXkvZuZc8hpr46HeHU6U6jnVze-s\/view\r\n**No splits added**","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1202\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1201","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1201\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1201\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1201\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1201","id":757927941,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMTk3OTI2","number":1201,"title":"adding 
medical-questions-pairs","user":{"login":"tuner007","id":46425391,"node_id":"MDQ6VXNlcjQ2NDI1Mzkx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/46425391?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tuner007","html_url":"https:\/\/github.com\/tuner007","followers_url":"https:\/\/api.github.com\/users\/tuner007\/followers","following_url":"https:\/\/api.github.com\/users\/tuner007\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tuner007\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tuner007\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tuner007\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tuner007\/orgs","repos_url":"https:\/\/api.github.com\/users\/tuner007\/repos","events_url":"https:\/\/api.github.com\/users\/tuner007\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tuner007\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607261812000,"updated_at":1607261984000,"closed_at":1607261972000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1201","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1201","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1201.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1201.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1201\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1200","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1200\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1200\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1200\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1200","id":757926823,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMTk3MDk0","number":1200,"title":"Update 
ADD_NEW_DATASET.md","user":{"login":"BramVanroy","id":2779410,"node_id":"MDQ6VXNlcjI3Nzk0MTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2779410?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/BramVanroy","html_url":"https:\/\/github.com\/BramVanroy","followers_url":"https:\/\/api.github.com\/users\/BramVanroy\/followers","following_url":"https:\/\/api.github.com\/users\/BramVanroy\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/BramVanroy\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/BramVanroy\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/BramVanroy\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/BramVanroy\/orgs","repos_url":"https:\/\/api.github.com\/users\/BramVanroy\/repos","events_url":"https:\/\/api.github.com\/users\/BramVanroy\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/BramVanroy\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607261492000,"updated_at":1607329959000,"closed_at":1607329959000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1200","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1200","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1200.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1200.patch"},"body":"Windows needs special treatment again: unfortunately adding `torch` to the requirements does not work well (crashing the installation). Users should first install torch manually and then continue with the other commands.\r\n\r\nThis issue arises all the time when adding torch as a dependency, but because so many novice users seem to participate in adding datasets, it may be useful to add an explicit note for Windows users to ensure that they do not run into issues.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1200\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1199","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1199\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1199\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1199\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1199","id":757909237,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMTg0Nzk3","number":1199,"title":"Turkish NER dataset, script works fine, couldn't generate dummy 
data","user":{"login":"merveenoyan","id":53175384,"node_id":"MDQ6VXNlcjUzMTc1Mzg0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/53175384?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/merveenoyan","html_url":"https:\/\/github.com\/merveenoyan","followers_url":"https:\/\/api.github.com\/users\/merveenoyan\/followers","following_url":"https:\/\/api.github.com\/users\/merveenoyan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/merveenoyan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/merveenoyan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/merveenoyan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/merveenoyan\/orgs","repos_url":"https:\/\/api.github.com\/users\/merveenoyan\/repos","events_url":"https:\/\/api.github.com\/users\/merveenoyan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/merveenoyan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["the .DUMP file looks like a txt with one example per line so adding `--match_text_files *.DUMP --n_lines 50` to the dummy generation command might work .","We can close this PR since a new PR was open at #1268 "],"created_at":1607256003000,"updated_at":1608135204000,"closed_at":1608135204000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1199","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1199","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1199.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1199.patch"},"body":"I've written the script (Turkish_NER.py) that includes dataset. The dataset is a zip inside another zip, and it's extracted as .DUMP file. However, after preprocessing I only get .arrow file. After I ran the script with no error messages, I get .arrow file of dataset, LICENSE and dataset_info.json. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1199\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1198","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1198\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1198\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1198\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1198","id":757903453,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMTgwNjAz","number":1198,"title":"Add ALT","user":{"login":"chameleonTK","id":6429850,"node_id":"MDQ6VXNlcjY0Mjk4NTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6429850?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/chameleonTK","html_url":"https:\/\/github.com\/chameleonTK","followers_url":"https:\/\/api.github.com\/users\/chameleonTK\/followers","following_url":"https:\/\/api.github.com\/users\/chameleonTK\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/chameleonTK\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/chameleonTK\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/chameleonTK\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/chameleonTK\/orgs","repos_url":"https:\/\/api.github.com\/users\/chameleonTK\/repos","events_url":"https:\/\/api.github.com\/users\/chameleonTK\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/chameleonTK\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["the `RemoteDatasetTest ` erros in the CI are fixed on master so it's fine","used `Translation ` feature type and fixed few typos as you suggested.","Sorry, I made a mistake. please see new PR here. 
https:\/\/github.com\/huggingface\/datasets\/pull\/1436"],"created_at":1607253930000,"updated_at":1607573892000,"closed_at":1607573892000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1198","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1198","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1198.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1198.patch"},"body":"ALT dataset -- https:\/\/www2.nict.go.jp\/astrec-att\/member\/mutiyama\/ALT\/","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1198\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1197","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1197\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1197\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1197\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1197","id":757900160,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMTc4MTIz","number":1197,"title":"add taskmaster-2","user":{"login":"patil-suraj","id":27137566,"node_id":"MDQ6VXNlcjI3MTM3NTY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27137566?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patil-suraj","html_url":"https:\/\/github.com\/patil-suraj","followers_url":"https:\/\/api.github.com\/users\/patil-suraj\/followers","following_url":"https:\/\/api.github.com\/users\/patil-suraj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patil-suraj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patil-suraj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patil-suraj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patil-suraj\/orgs","repos_url":"https:\/\/api.github.com\/users\/patil-suraj\/repos","events_url":"https:\/\/api.github.com\/users\/patil-suraj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patil-suraj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607252718000,"updated_at":1607354563000,"closed_at":1607354563000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1197","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1197","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1197.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1197.patch"},"body":"Adding taskmaster-2 dataset.\r\nhttps:\/\/github.com\/google-research-datasets\/Taskmaster\/tree\/master\/TM-2-2020","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1197\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1196","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1196\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1196\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1196\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1196","id":757894920,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMTc0NjU2","number":1196,"title":"Add IWSLT'15 English-Vietnamese machine translation Data","user":{"login":"Nilanshrajput","id":28673745,"node_id":"MDQ6VXNlcjI4NjczNzQ1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28673745?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Nilanshrajput","html_url":"https:\/\/github.com\/Nilanshrajput","followers_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/followers","following_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/orgs","repos_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/repos","events_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Nilanshrajput\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks ! 
feel free to ping me once you've added the tags in the dataset card :) ","merging since the CI is fixed on master"],"created_at":1607250991000,"updated_at":1607711211000,"closed_at":1607711211000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1196","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1196","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1196.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1196.patch"},"body":"Preprocessed Dataset from IWSLT'15 English-Vietnamese machine translation: English-Vietnamese.\r\n\r\nfrom https:\/\/nlp.stanford.edu\/projects\/nmt\/data\/iwslt15.en-vi\/","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1196\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1195","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1195\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1195\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1195\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1195","id":757889045,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMTcwMjY2","number":1195,"title":"addition of py_ast","user":{"login":"reshinthadithyan","id":36307201,"node_id":"MDQ6VXNlcjM2MzA3MjAx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36307201?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/reshinthadithyan","html_url":"https:\/\/github.com\/reshinthadithyan","followers_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/followers","following_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/orgs","repos_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/repos","events_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @reshinthadithyan !\r\n\r\nAs mentioned on the Slack, it would be better in this case to parse the file lines into the following feature structure:\r\n```python\r\n\"ast\": datasets.Sequence(\r\n {\r\n \"type\": datasets.Value(\"string\"),\r\n \"value\": datasets.Value(\"string\"),\r\n \"children\": datasets.Sequence(datasets.Value(\"int32\")),\r\n },\r\n)\r\n```\r\n\r\nHere are a few more things to fix before we can move forward:\r\n- the class name needs to be the CamelCase equivalent of the script name, so here it will have to be `PyAst`\r\n- the `README.md` needs to have the tags at the top\r\n- The homepage\/info list at the top should be in the same format as the template (added a suggestion)\r\n- You should add the dataset tags and field description to the README as described here: 
https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card\r\n\r\nGood luck, let us know if you need any help!","Hello @yjernite, changes have been made as we talked. Hope this would suffice. Thanks. Feel free to point out any room to improvement.","Good progress! Here's what still needs to be done:\r\n- first, you need to rebase to master for the tests to pass :)\r\n- the information in your `Data Fields` paragraph should go into `Data Instances`. Data fields should describe the fields one by one, as in e.g. https:\/\/github.com\/huggingface\/datasets\/tree\/master\/datasets\/eli5#data-fields\r\n- you still need to add the YAML tags obtained with the tagging app\r\n\r\nShould be good to go after that!","Hello @yjernite, changes as talked are being done.","Looks like this PR includes changes about many other files than the ones for py_ast\r\n\r\nCould you create another branch and another PR please ?"],"created_at":1607248852000,"updated_at":1607408364000,"closed_at":1607408364000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1195","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1195","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1195.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1195.patch"},"body":"The dataset consists of parsed Parsed ASTs that were used to train and evaluate the DeepSyn tool. \r\nThe Python programs are collected from GitHub repositories\r\nby removing duplicate files, removing project forks (copy of another existing repository)\r\n,keeping only programs that parse and have at most 30'000 nodes in the AST and \r\nwe aim to remove obfuscated files","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1195\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1194","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1194\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1194\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1194\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1194","id":757880647,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMTY0MDcz","number":1194,"title":"Add 
msr_text_compression","user":{"login":"jeromeku","id":2455711,"node_id":"MDQ6VXNlcjI0NTU3MTE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2455711?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jeromeku","html_url":"https:\/\/github.com\/jeromeku","followers_url":"https:\/\/api.github.com\/users\/jeromeku\/followers","following_url":"https:\/\/api.github.com\/users\/jeromeku\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jeromeku\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jeromeku\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jeromeku\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jeromeku\/orgs","repos_url":"https:\/\/api.github.com\/users\/jeromeku\/repos","events_url":"https:\/\/api.github.com\/users\/jeromeku\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jeromeku\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["the `RemoteDatasetTest ` error in the CI is fixed on master so it's fine"],"created_at":1607245571000,"updated_at":1607511225000,"closed_at":1607511225000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1194","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1194","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1194.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1194.patch"},"body":"Add [MSR Abstractive Text Compression Dataset](https:\/\/msropendata.com\/datasets\/f8ce2ec9-7fbd-48f7-a8bb-2d2279373563)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1194\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1193","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1193\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1193\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1193\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1193","id":757840830,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMTM1NDAy","number":1193,"title":"add 
taskmaster-1","user":{"login":"patil-suraj","id":27137566,"node_id":"MDQ6VXNlcjI3MTM3NTY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27137566?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patil-suraj","html_url":"https:\/\/github.com\/patil-suraj","followers_url":"https:\/\/api.github.com\/users\/patil-suraj\/followers","following_url":"https:\/\/api.github.com\/users\/patil-suraj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patil-suraj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patil-suraj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patil-suraj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patil-suraj\/orgs","repos_url":"https:\/\/api.github.com\/users\/patil-suraj\/repos","events_url":"https:\/\/api.github.com\/users\/patil-suraj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patil-suraj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607227797000,"updated_at":1607354604000,"closed_at":1607353719000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1193","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1193","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1193.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1193.patch"},"body":"Adding Taskmaster-1 dataset\r\nhttps:\/\/github.com\/google-research-datasets\/Taskmaster\/tree\/master\/TM-1-2019","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1193\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1192","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1192\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1192\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1192\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1192","id":757839671,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMTM0NjI3","number":1192,"title":"Add NewsPH_NLI 
dataset","user":{"login":"anaerobeth","id":3663322,"node_id":"MDQ6VXNlcjM2NjMzMjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3663322?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/anaerobeth","html_url":"https:\/\/github.com\/anaerobeth","followers_url":"https:\/\/api.github.com\/users\/anaerobeth\/followers","following_url":"https:\/\/api.github.com\/users\/anaerobeth\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/anaerobeth\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/anaerobeth\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/anaerobeth\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/anaerobeth\/orgs","repos_url":"https:\/\/api.github.com\/users\/anaerobeth\/repos","events_url":"https:\/\/api.github.com\/users\/anaerobeth\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/anaerobeth\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607227231000,"updated_at":1607355583000,"closed_at":1607355583000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1192","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1192","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1192.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1192.patch"},"body":"This PR adds the NewsPH-NLI Dataset, the first benchmark dataset for sentence entailment in the low-resource Filipino language. Constructed through exploting the structure of news articles. Contains 600,000 premise-hypothesis pairs, in 70-15-15 split for training, validation, and testing.\r\n\r\nLink to the paper: https:\/\/arxiv.org\/pdf\/2010.11574.pdf\r\n\r\nLink to the dataset\/repo: https:\/\/github.com\/jcblaisecruz02\/Filipino-Text-Benchmarks","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1192\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1191","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1191\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1191\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1191\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1191","id":757836654,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMTMyNTg1","number":1191,"title":"Added Translator Human Parity Data For a Chinese-English news 
transla\u2026","user":{"login":"leoxzhao","id":7915719,"node_id":"MDQ6VXNlcjc5MTU3MTk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7915719?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/leoxzhao","html_url":"https:\/\/github.com\/leoxzhao","followers_url":"https:\/\/api.github.com\/users\/leoxzhao\/followers","following_url":"https:\/\/api.github.com\/users\/leoxzhao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/leoxzhao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/leoxzhao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/leoxzhao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/leoxzhao\/orgs","repos_url":"https:\/\/api.github.com\/users\/leoxzhao\/repos","events_url":"https:\/\/api.github.com\/users\/leoxzhao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/leoxzhao\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Can you run `make style` to format the code and fix the CI please ?","> Can you run `make style` to format the code and fix the CI please ?\r\n\r\nI ran `make style` before this PR and just a few minutes ago. No changes to the code. Not sure why the CI is failing.","Also, I attempted to see if I can get the source Chinese sentences from `wmt17` dataset. But this call `data = load_dataset('wmt17', \"zh-en\")` failed with this error: `FileNotFoundError: Couldn't find file at https:\/\/storage.googleapis.com\/tfdataset-data\/downloadataset\/uncorpus\/UNv1.0.en-zh.tar.gz`. I think it should be possible and fairly straightforward to get the pairing source sentences from it. I just can not test it right now.","The `RemoteDatasetTest ` errors in the CI are fixed on master so it's fine","merging since the CI is fixed on master"],"created_at":1607225653000,"updated_at":1607520165000,"closed_at":1607520165000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1191","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1191","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1191.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1191.patch"},"body":"\u2026tion system from Open dataset list for Dataset sprint, Microsoft Datasets tab.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1191\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1190","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1190\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1190\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1190\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1190","id":757833698,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMTMwNTM0","number":1190,"title":"Add Fake News Detection in Filipino 
dataset","user":{"login":"anaerobeth","id":3663322,"node_id":"MDQ6VXNlcjM2NjMzMjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3663322?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/anaerobeth","html_url":"https:\/\/github.com\/anaerobeth","followers_url":"https:\/\/api.github.com\/users\/anaerobeth\/followers","following_url":"https:\/\/api.github.com\/users\/anaerobeth\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/anaerobeth\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/anaerobeth\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/anaerobeth\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/anaerobeth\/orgs","repos_url":"https:\/\/api.github.com\/users\/anaerobeth\/repos","events_url":"https:\/\/api.github.com\/users\/anaerobeth\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/anaerobeth\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi! I'm the author of this paper (surprised to see our datasets have been added already).\r\n\r\nThat paper link only leads to the conference index, here's a link to the actual paper: https:\/\/www.aclweb.org\/anthology\/2020.lrec-1.316\/\r\n\r\nWould it be fine if I also edited your gsheet entry to reflect this change?","Hi Jan, please go ahead and update. I see you are also in the sprint slack channel. Let me know if what else needs updating. Thanks.\r\n"],"created_at":1607224335000,"updated_at":1607355567000,"closed_at":1607355567000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1190","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1190","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1190.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1190.patch"},"body":"This PR adds the Fake News Filipino Dataset, a low-resource fake news detection corpora in Filipino. 
Contains 3,206 expertly-labeled news samples, half of which are real and half of which are fake.\r\n\r\nLink to the paper: http:\/\/www.lrec-conf.org\/proceedings\/lrec2020\/index.html\r\n\r\nLink to the dataset\/repo: https:\/\/github.com\/jcblaisecruz02\/Tagalog-fake-news","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1190\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1189","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1189\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1189\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1189\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1189","id":757831035,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMTI4NjY1","number":1189,"title":"Add Dengue dataset in Filipino","user":{"login":"anaerobeth","id":3663322,"node_id":"MDQ6VXNlcjM2NjMzMjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3663322?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/anaerobeth","html_url":"https:\/\/github.com\/anaerobeth","followers_url":"https:\/\/api.github.com\/users\/anaerobeth\/followers","following_url":"https:\/\/api.github.com\/users\/anaerobeth\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/anaerobeth\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/anaerobeth\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/anaerobeth\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/anaerobeth\/orgs","repos_url":"https:\/\/api.github.com\/users\/anaerobeth\/repos","events_url":"https:\/\/api.github.com\/users\/anaerobeth\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/anaerobeth\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607223047000,"updated_at":1607355538000,"closed_at":1607355538000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1189","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1189","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1189.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1189.patch"},"body":"This PR adds the Dengue Dataset, a benchmark dataset for low-resource multiclass classification, with 4,015 training, 500 testing, and 500 validation examples, each labeled as part of five classes. Each sample can be a part of multiple classes. 
Collected as tweets.\r\n\r\nLink to the paper: https:\/\/ieeexplore.ieee.org\/document\/8459963\r\n\r\nLink to the dataset\/repo: https:\/\/github.com\/jcblaisecruz02\/Filipino-Text-Benchmarks\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1189\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1188","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1188\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1188\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1188\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1188","id":757827407,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMTI2MTcw","number":1188,"title":"adding hind_encorp dataset","user":{"login":"rahul-art","id":56379013,"node_id":"MDQ6VXNlcjU2Mzc5MDEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/56379013?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rahul-art","html_url":"https:\/\/github.com\/rahul-art","followers_url":"https:\/\/api.github.com\/users\/rahul-art\/followers","following_url":"https:\/\/api.github.com\/users\/rahul-art\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rahul-art\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rahul-art\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rahul-art\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rahul-art\/orgs","repos_url":"https:\/\/api.github.com\/users\/rahul-art\/repos","events_url":"https:\/\/api.github.com\/users\/rahul-art\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rahul-art\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["help needed in dummy data","extension of the file is .plaintext so dummy data generation is failing\r\n","you can add the `--match_text_file \"*.plaintext\"` flag when generating the dummy data\r\n\r\nalso it looks like the PR is empty, is this expected ?","yes it is expected because I made all my changes in PR #1186 then I again run code and open PR #1188 to see if this time test passes or not only so there is no code change from #1186 to #1188 \r\ni tried --match_text_file \"*.plaintext\" this time it is also not generating dummy data don't know why","well this PR includes no code change at all, can you make sure you added your changes in this one ?","feel free to ping me when you have added the files so I can take a look and help you with the dummy data","how to do that i dont know did i have to open new PR\r\n"," actually all my changes are visible in #1186 but don't know how to show same changes here","these are a the which i did in #1186 and same in #1188 
\r\n![1](https:\/\/user-images.githubusercontent.com\/56379013\/101646577-b4864500-3a5d-11eb-8a5a-91b1b441040a.png)\r\n![2](https:\/\/user-images.githubusercontent.com\/56379013\/101646965-32e2e700-3a5e-11eb-94d9-276e602c6ded.png)\r\n![4](https:\/\/user-images.githubusercontent.com\/56379013\/101646989-38d8c800-3a5e-11eb-92bb-d9c4cb2c3595.png)\r\n![5](https:\/\/user-images.githubusercontent.com\/56379013\/101647017-41c99980-3a5e-11eb-87cf-5268e79df19d.png)\r\n![6](https:\/\/user-images.githubusercontent.com\/56379013\/101647038-48581100-3a5e-11eb-8d05-f67834fcaa7b.png)\r\n\r\n![8](https:\/\/user-images.githubusercontent.com\/56379013\/101647080-55750000-3a5e-11eb-8455-8936a35b35c2.png)\r\n![9](https:\/\/user-images.githubusercontent.com\/56379013\/101647084-55750000-3a5e-11eb-988e-ae87f0b252a0.png)\r\n![10](https:\/\/user-images.githubusercontent.com\/56379013\/101647182-6f164780-3a5e-11eb-8af3-f0b0186483c9.png)\r\n![11](https:\/\/user-images.githubusercontent.com\/56379013\/101647230-7c333680-3a5e-11eb-9aeb-2b4ce65965e0.png)\r\n![13](https:\/\/user-images.githubusercontent.com\/56379013\/101647257-848b7180-3a5e-11eb-871c-2fd77b047320.png)\r\n![14](https:\/\/user-images.githubusercontent.com\/56379013\/101647268-89502580-3a5e-11eb-9e2a-b9f7ff1fc95e.png)\r\nthese same codes are in both #1186 and #1188 so because it is already present from PR #1186 because of that it is showing zeor code change in #1188 because it is already present from #1186 how i can show or highlight those changes\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n","well for me https:\/\/github.com\/huggingface\/datasets\/pull\/1188\/files is blank","This PR tries to merge the master branch of you fork into this repo, however I can't find changes with your files inside your master branch.\r\n\r\nMaybe you can fork again the repo and try to create another PR ?","@lhoestq i opened a new pr #1438 but this time it fails many circl ci tests","Closing this one since a new PR was created"],"created_at":1607221125000,"updated_at":1607708441000,"closed_at":1607708441000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1188","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1188","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1188.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1188.patch"},"body":"adding Hindi_Encorp05 dataset","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1188\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1187","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1187\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1187\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1187\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1187","id":757826707,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMTI1NjU3","number":1187,"title":"Added AQUA-RAT (Algebra Question Answering with Rationales) 
Dataset","user":{"login":"arkhalid","id":14899066,"node_id":"MDQ6VXNlcjE0ODk5MDY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14899066?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/arkhalid","html_url":"https:\/\/github.com\/arkhalid","followers_url":"https:\/\/api.github.com\/users\/arkhalid\/followers","following_url":"https:\/\/api.github.com\/users\/arkhalid\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/arkhalid\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/arkhalid\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/arkhalid\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/arkhalid\/orgs","repos_url":"https:\/\/api.github.com\/users\/arkhalid\/repos","events_url":"https:\/\/api.github.com\/users\/arkhalid\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/arkhalid\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on master"],"created_at":1607220772000,"updated_at":1607355432000,"closed_at":1607355432000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1187","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1187","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1187.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1187.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1187\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1186","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1186\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1186\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1186\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1186","id":757826660,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMTI1NjE4","number":1186,"title":"all test passed ","user":{"login":"rahul-art","id":56379013,"node_id":"MDQ6VXNlcjU2Mzc5MDEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/56379013?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rahul-art","html_url":"https:\/\/github.com\/rahul-art","followers_url":"https:\/\/api.github.com\/users\/rahul-art\/followers","following_url":"https:\/\/api.github.com\/users\/rahul-art\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rahul-art\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rahul-art\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rahul-art\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rahul-art\/orgs","repos_url":"https:\/\/api.github.com\/users\/rahul-art\/repos","events_url":"https:\/\/api.github.com\/users\/rahul-art\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rahul-art\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["looks like this PR includes changes to 5000 files\r\ncould you create a new branch and a new PR 
?"],"created_at":1607220752000,"updated_at":1607353615000,"closed_at":1607353615000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1186","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1186","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1186.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1186.patch"},"body":"need help creating dummy data","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1186\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1185","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1185\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1185\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1185\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1185","id":757825413,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMTI0NzE1","number":1185,"title":"Add Hate Speech Dataset in Filipino","user":{"login":"anaerobeth","id":3663322,"node_id":"MDQ6VXNlcjM2NjMzMjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3663322?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/anaerobeth","html_url":"https:\/\/github.com\/anaerobeth","followers_url":"https:\/\/api.github.com\/users\/anaerobeth\/followers","following_url":"https:\/\/api.github.com\/users\/anaerobeth\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/anaerobeth\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/anaerobeth\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/anaerobeth\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/anaerobeth\/orgs","repos_url":"https:\/\/api.github.com\/users\/anaerobeth\/repos","events_url":"https:\/\/api.github.com\/users\/anaerobeth\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/anaerobeth\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607220116000,"updated_at":1607355333000,"closed_at":1607355333000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1185","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1185","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1185.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1185.patch"},"body":"This PR adds the Hate Speech Dataset, a text classification dataset in Filipino, consisting 10k tweets (training set) that are labeled as hate speech or non-hate speech. Released with 4,232 validation and 4,232 testing samples. 
Collected during the 2016 Philippine Presidential Elections.\r\n\r\nLink to the paper: https:\/\/pcj.csp.org.ph\/index.php\/pcj\/issue\/download\/29\/PCJ%20V14%20N1%20pp1-14%202019\r\n\r\nLink to the dataset\/repo: https:\/\/github.com\/jcblaisecruz02\/Filipino-Text-Benchmarks","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1185\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1184","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1184\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1184\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1184\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1184","id":757807583,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMTExNjk4","number":1184,"title":"Add Adversarial SQuAD dataset","user":{"login":"cceyda","id":15624271,"node_id":"MDQ6VXNlcjE1NjI0Mjcx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15624271?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cceyda","html_url":"https:\/\/github.com\/cceyda","followers_url":"https:\/\/api.github.com\/users\/cceyda\/followers","following_url":"https:\/\/api.github.com\/users\/cceyda\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cceyda\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cceyda\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cceyda\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cceyda\/orgs","repos_url":"https:\/\/api.github.com\/users\/cceyda\/repos","events_url":"https:\/\/api.github.com\/users\/cceyda\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cceyda\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["the CI error was just a connection error due to all the activity on the repo this week ^^'\r\nI re-ran it so it should be good now","I hadn't realized the problem with the dummies since it had passed without errors.\r\nSuggestion: maybe we can show the user a warning based on the generated dummy size.","Thanks for changing to configs ! Looks all good now :) \r\n\r\nBefore we merge, can you re-lighten the dummy data please if you don't mind ? The idea is to have them weigh only a few KB (currently it's 50KB each). Feel free to remove any unnecessary files or chunk of text","(also you can ignore the `RemoteDatasetTest ` CI errors, they're fixed on master )","merging since the CI is fixed on master"],"created_at":1607212317000,"updated_at":1608135178000,"closed_at":1608135178000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1184","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1184","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1184.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1184.patch"},"body":"# Adversarial SQuAD\r\n\r\nAdding the Adversarial [SQuAD](https:\/\/github.com\/robinjia\/adversarial-squad) dataset as part of the sprint \ud83c\udf89 \r\nThis dataset adds adversarial sentences to a subset of the SQuAD dataset's dev examples. 
How to get the original squad example id is explained in readme->Data Instances. The whole data is intended for use in evaluation. (Which could of course be also used for training if one wants). So there is no classical train\/val\/test split, but a split based on the number of adversaries added.\r\n\r\nThere are 2 splits of this dataset:\r\n\r\n- AddSent: Has up to five candidate adversarial sentences that don't answer the question, but have a lot of words in common with the question. This adversary is does not query the model in any way.\r\n- AddOneSent: Similar to AddSent, but just one candidate sentences was picked at random. This adversary is does not query the model in any way.\r\n\r\n(The AddAny and AddCommon datasets mentioned in the paper are dynamically generated based on model's output distribution thus are not included here)\r\n\r\nThe failing test look like some unrelated timeout thing, will probably clear if rerun.\r\n- [x] All tests passed\r\n- [x] Added dummy data\r\n- [x] Added data card (as much as I could)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1184\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1183","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1183\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1183\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1183\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1183","id":757806570,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMTEwOTY4","number":1183,"title":"add mkb dataset","user":{"login":"vasudevgupta7","id":53136577,"node_id":"MDQ6VXNlcjUzMTM2NTc3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/53136577?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vasudevgupta7","html_url":"https:\/\/github.com\/vasudevgupta7","followers_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/followers","following_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/orgs","repos_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/repos","events_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Could you update the languages tags before we merge @VasudevGupta7 ?","done.","thanks !"],"created_at":1607211873000,"updated_at":1607506730000,"closed_at":1607506730000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1183","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1183","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1183.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1183.patch"},"body":"This PR will add Mann Ki Baat dataset (parallel data for 
Indian languages).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1183\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1182","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1182\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1182\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1182\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1182","id":757804877,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMTA5Nzgx","number":1182,"title":"ADD COVID-QA dataset","user":{"login":"olinguyen","id":4341867,"node_id":"MDQ6VXNlcjQzNDE4Njc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4341867?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/olinguyen","html_url":"https:\/\/github.com\/olinguyen","followers_url":"https:\/\/api.github.com\/users\/olinguyen\/followers","following_url":"https:\/\/api.github.com\/users\/olinguyen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/olinguyen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/olinguyen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/olinguyen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/olinguyen\/orgs","repos_url":"https:\/\/api.github.com\/users\/olinguyen\/repos","events_url":"https:\/\/api.github.com\/users\/olinguyen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/olinguyen\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on master","Wow, thanks for including this dataset from my side as well!"],"created_at":1607211116000,"updated_at":1609161794000,"closed_at":1607351007000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1182","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1182","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1182.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1182.patch"},"body":"This PR adds the COVID-QA dataset, a question answering dataset consisting of 2,019 question\/answer pairs annotated by volunteer biomedical experts on scientific articles related to COVID-19\r\n\r\nLink to the paper: https:\/\/openreview.net\/forum?id=JENSKEEzsoU\r\nLink to the dataset\/repo: https:\/\/github.com\/deepset-ai\/COVID-QA","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1182\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1181","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1181\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1181\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1181\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1181","id":757791992,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMTAwNjYz","number":1181,"title":"added emotions detection in 
arabic dataset","user":{"login":"abdulelahsm","id":28743265,"node_id":"MDQ6VXNlcjI4NzQzMjY1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28743265?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abdulelahsm","html_url":"https:\/\/github.com\/abdulelahsm","followers_url":"https:\/\/api.github.com\/users\/abdulelahsm\/followers","following_url":"https:\/\/api.github.com\/users\/abdulelahsm\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abdulelahsm\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abdulelahsm\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abdulelahsm\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abdulelahsm\/orgs","repos_url":"https:\/\/api.github.com\/users\/abdulelahsm\/repos","events_url":"https:\/\/api.github.com\/users\/abdulelahsm\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abdulelahsm\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @abdulelahsm did you manage to fix your issue ?\r\nFeel free to ping me if you have questions or if you're ready for a review","@lhoestq fixed it! ready to merge. I hope haha","merging since the CI is fixed on master"],"created_at":1607206126000,"updated_at":1608544431000,"closed_at":1608544431000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1181","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1181","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1181.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1181.patch"},"body":"Dataset for Emotions detection in Arabic text\r\n\r\nmore info: https:\/\/github.com\/AmrMehasseb\/Emotional-Tone","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1181\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1180","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1180\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1180\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1180\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1180","id":757784612,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMDk1MzI2","number":1180,"title":"Add KorQuAD v2 
Dataset","user":{"login":"cceyda","id":15624271,"node_id":"MDQ6VXNlcjE1NjI0Mjcx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15624271?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cceyda","html_url":"https:\/\/github.com\/cceyda","followers_url":"https:\/\/api.github.com\/users\/cceyda\/followers","following_url":"https:\/\/api.github.com\/users\/cceyda\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cceyda\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cceyda\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cceyda\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cceyda\/orgs","repos_url":"https:\/\/api.github.com\/users\/cceyda\/repos","events_url":"https:\/\/api.github.com\/users\/cceyda\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cceyda\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["looks like this PR also includes the changes for the V1\r\nCould you only include the files of the V2 ?","hmm I have made the dummy data lighter retested on local and it passed not sure why it fails here?","merging since the CI is fixed on master"],"created_at":1607204014000,"updated_at":1608135030000,"closed_at":1608135030000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1180","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1180","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1180.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1180.patch"},"body":"# The Korean Question Answering Dataset v2\r\nAdding the [KorQuAD](https:\/\/korquad.github.io\/) v2 dataset as part of the sprint \ud83c\udf89 \r\nThis dataset is very similar to SQuAD and is an extension of [squad_kor_v1](https:\/\/github.com\/huggingface\/datasets\/pull\/1178) which is why I added it as `squad_kor_v2`. \r\n\r\n- Crowd generated questions and answer (1-answer per question) for Wikipedia articles. Differently from V1 it includes the html structure and markup, which makes it a different enough dataset. 
(doesn't share ids between v1 and v2 either)\r\n\r\n- [x] All tests passed\r\n- [x] Added dummy data\r\n- [x] Added data card (as much as I could)\r\n\r\nEdit: \ud83e\udd26 looks like squad_kor_v1 commit sneaked in here too","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1180\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1179","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1179\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1179\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1179\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1179","id":757784074,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMDk0OTYz","number":1179,"title":"Small update to the doc: add flatten_indices in doc","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607203810000,"updated_at":1607348577000,"closed_at":1607348576000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1179","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1179","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1179.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1179.patch"},"body":"Small update to the doc: add flatten_indices in doc","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1179\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1178","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1178\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1178\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1178\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1178","id":757783435,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMDk0NTIx","number":1178,"title":"Add KorQuAD v1 
Dataset","user":{"login":"cceyda","id":15624271,"node_id":"MDQ6VXNlcjE1NjI0Mjcx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15624271?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cceyda","html_url":"https:\/\/github.com\/cceyda","followers_url":"https:\/\/api.github.com\/users\/cceyda\/followers","following_url":"https:\/\/api.github.com\/users\/cceyda\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cceyda\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cceyda\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cceyda\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cceyda\/orgs","repos_url":"https:\/\/api.github.com\/users\/cceyda\/repos","events_url":"https:\/\/api.github.com\/users\/cceyda\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cceyda\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607203546000,"updated_at":1607348497000,"closed_at":1607348497000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1178","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1178","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1178.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1178.patch"},"body":"# The Korean Question Answering Dataset\r\nAdding the [KorQuAD](https:\/\/korquad.github.io\/KorQuad%201.0\/) v1 dataset as part of the sprint \ud83c\udf89 \r\nThis dataset is very similar to SQuAD which is why I added it as `squad_kor_v1`. There is also a v2 which I added [here](https:\/\/github.com\/huggingface\/datasets\/pull\/1180).\r\n\r\n- Crowd generated questions and answer (1-answer per question) for Wikipedia articles.\r\n\r\n- [x] All tests passed\r\n- [x] Added dummy data\r\n- [x] Added data card (as much as I could)\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1178\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1177","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1177\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1177\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1177\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1177","id":757778684,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMDkxMTQ3","number":1177,"title":"Add Korean NER 
dataset","user":{"login":"jaketae","id":25360440,"node_id":"MDQ6VXNlcjI1MzYwNDQw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25360440?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jaketae","html_url":"https:\/\/github.com\/jaketae","followers_url":"https:\/\/api.github.com\/users\/jaketae\/followers","following_url":"https:\/\/api.github.com\/users\/jaketae\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jaketae\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jaketae\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jaketae\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jaketae\/orgs","repos_url":"https:\/\/api.github.com\/users\/jaketae\/repos","events_url":"https:\/\/api.github.com\/users\/jaketae\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jaketae\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Closed via #1219 "],"created_at":1607201760000,"updated_at":1607285988000,"closed_at":1607285988000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1177","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1177","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1177.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1177.patch"},"body":"This PR adds the [Korean named entity recognition dataset](https:\/\/github.com\/kmounlp\/NER). This dataset has been used in many downstream tasks, such as training [KoBERT](https:\/\/github.com\/SKTBrain\/KoBERT) for NER, as seen in this [KoBERT-CRF implementation](https:\/\/github.com\/eagle705\/pytorch-bert-crf-ner).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1177\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1176","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1176\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1176\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1176\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1176","id":757778365,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMDkwOTMx","number":1176,"title":"Add OpenPI 
Dataset","user":{"login":"Bharat123rox","id":13381361,"node_id":"MDQ6VXNlcjEzMzgxMzYx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13381361?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Bharat123rox","html_url":"https:\/\/github.com\/Bharat123rox","followers_url":"https:\/\/api.github.com\/users\/Bharat123rox\/followers","following_url":"https:\/\/api.github.com\/users\/Bharat123rox\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Bharat123rox\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Bharat123rox\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Bharat123rox\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Bharat123rox\/orgs","repos_url":"https:\/\/api.github.com\/users\/Bharat123rox\/repos","events_url":"https:\/\/api.github.com\/users\/Bharat123rox\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Bharat123rox\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @Bharat123rox ! It looks like some of the dummy data is broken or missing. Did you auto-generate it? Does the local test pass for you?","@yjernite requesting you to have a look as to why the tests are failing only on Windows, there seems to be a backslash error somewhere, could it be the result of `os.path.join` and what should be the fix for this?","This is the `black` output locally:\r\n```\r\n(datasets_env) datasets (openpi) > black --check --line-length 119 --target-version py36 datasets\/openpi\/\r\nAll done! \u2728 \ud83c\udf70 \u2728\r\n1 file would be left unchanged.\r\n```","Can you check your version of black (should be `20.8b1`) and run `make style again`? (And don't forget to rebase before pushing ;) )\r\n\r\nThe other test was a time-out error so should be good on the next commit","Thanks @yjernite the CI tests finally passed!!","Hi @Bharat123rox did you manage to join the different config into one using the IDs ?\r\n\r\nFeel free to ping me when you're ready for the next review :) ","> Hi @Bharat123rox did you manage to join the different config into one using the IDs ?\n> \n> Feel free to ping me when you're ready for the next review :) \n\nNot yet @lhoestq still working on this! Meanwhile please review #1507 where I added the SelQA dataset :)","Ok ! Let me review SelQA then :) \r\nThanks for your help !","Apologies for the very late response. Here is the openpi dataset file with a single file per partition after merging `id_answers, answers.jsonl, question.jsonl , question_metadata.jsonl`\r\n\r\nhttps:\/\/github.com\/allenai\/openpi-dataset\/blob\/main\/data\/gold-v1.1\/dev.jsonl","Nice thank you @nikett !","Hi @Bharat123rox , when you get a chance, please feel free to use the dataset from the repo ( [Link](https:\/\/github.com\/allenai\/openpi-dataset\/blob\/main\/data\/gold-v1.1\/dev.jsonl) ) . Please let me know if any file is missing! 
Thank you "],"created_at":1607201646000,"updated_at":1630919990000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1176","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1176","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1176.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1176.patch"},"body":"Add the OpenPI Dataset by AI2 (AllenAI)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1176\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1175","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1175\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1175\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1175\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1175","id":757770077,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMDg0OTYy","number":1175,"title":"added ReDial dataset","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on master"],"created_at":1607198658000,"updated_at":1607347303000,"closed_at":1607347303000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1175","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1175","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1175.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1175.patch"},"body":"Updating README\r\nDataset link: https:\/\/redialdata.github.io\/website\/datasheet","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1175\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1174","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1174\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1174\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1174\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1174","id":757768474,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMDgzODUz","number":1174,"title":"Add Universal Morphologies","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Sorry for the delay, changed the default language to \"ady\" (first alphabetical) and only downloading the relevant files for each config (dataset_infos is till 918KB though)","Thanks for merging it ! 
Looks all good\r\n\r\nLooks like I didn't reply to your last message, sorry about that.\r\nFeel free to ping me when this happens :) "],"created_at":1607198083000,"updated_at":1611679816000,"closed_at":1611679308000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1174","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1174","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1174.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1174.patch"},"body":"Adding unimorph universal morphology annotations for 110 languages, pfew!!!\r\n\r\none lemma per row with all possible forms and annotations\r\n\r\nhttps:\/\/unimorph.github.io\/","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1174\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1173","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1173\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1173\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1173\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1173","id":757761967,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMDc5MTk0","number":1173,"title":"add wikipedia biography dataset","user":{"login":"alejandrocros","id":39712560,"node_id":"MDQ6VXNlcjM5NzEyNTYw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/39712560?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/alejandrocros","html_url":"https:\/\/github.com\/alejandrocros","followers_url":"https:\/\/api.github.com\/users\/alejandrocros\/followers","following_url":"https:\/\/api.github.com\/users\/alejandrocros\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/alejandrocros\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/alejandrocros\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/alejandrocros\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/alejandrocros\/orgs","repos_url":"https:\/\/api.github.com\/users\/alejandrocros\/repos","events_url":"https:\/\/api.github.com\/users\/alejandrocros\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/alejandrocros\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Does anyone know why am I getting this \"Some checks were not successful\" message? 
For the _code_quality_ one, I have successfully run the flake8 command.","Ok, I need to update the README.md, but don't know if that will fix the errors","Hi @ACR0S , thanks for adding the dataset!\r\n\r\nIt looks like `black` is throwing the code quality error: you need to run `make style` with the latest version of `black` (`black --version` should return 20.8b1)\r\n\r\nWe also added a requirement to specify encodings when using the python `open` function (line 163 in the current version of your script)\r\n\r\nFinally, you will need to add the tags and field descriptions to the README as described here https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card\r\n\r\nLet us know if you have any further questions!","Also, please leave the full template of the readme with the `[More Information Needed]` paragraphs: you don't have to fill them out now but it will make it easier for us to go back to later :) ","Thank you for your help, @yjernite! I have updated everything (finally run the _make style_, added the tags, the ecoding to the _open_ function and put back the empty fields in the README). Hope it works now! :)","LGTM!","merging since the CI is fixed on master"],"created_at":1607195690000,"updated_at":1607339594000,"closed_at":1607339594000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1173","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1173","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1173.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1173.patch"},"body":"My first PR containing the Wikipedia biographies dataset. I have followed all the steps in the [guide](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md). 
It passes all the tests.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1173\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1172","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1172\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1172\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1172\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1172","id":757758532,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMDc2NzY3","number":1172,"title":"Add proto_qa dataset","user":{"login":"bpatidar","id":12439573,"node_id":"MDQ6VXNlcjEyNDM5NTcz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12439573?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bpatidar","html_url":"https:\/\/github.com\/bpatidar","followers_url":"https:\/\/api.github.com\/users\/bpatidar\/followers","following_url":"https:\/\/api.github.com\/users\/bpatidar\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bpatidar\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bpatidar\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bpatidar\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bpatidar\/orgs","repos_url":"https:\/\/api.github.com\/users\/bpatidar\/repos","events_url":"https:\/\/api.github.com\/users\/bpatidar\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bpatidar\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on master"],"created_at":1607194504000,"updated_at":1607339544000,"closed_at":1607339544000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1172","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1172","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1172.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1172.patch"},"body":"Added dataset tags as required.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1172\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1171","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1171\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1171\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1171\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1171","id":757757000,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMDc1NzE3","number":1171,"title":"Add imdb Urdu Reviews 
dataset.","user":{"login":"chaitnayabasava","id":44389205,"node_id":"MDQ6VXNlcjQ0Mzg5MjA1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/44389205?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/chaitnayabasava","html_url":"https:\/\/github.com\/chaitnayabasava","followers_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/followers","following_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/orgs","repos_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/repos","events_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on master"],"created_at":1607193965000,"updated_at":1607339477000,"closed_at":1607339477000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1171","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1171","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1171.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1171.patch"},"body":"Added the imdb Urdu reviews dataset. More info about the dataset over <a href=\"https:\/\/github.com\/mirfan899\/Urdu\">here<\/a>.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1171\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1170","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1170\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1170\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1170\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1170","id":757754378,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMDczOTU0","number":1170,"title":"Fix path handling for 
Windows","user":{"login":"edugp","id":17855740,"node_id":"MDQ6VXNlcjE3ODU1NzQw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17855740?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/edugp","html_url":"https:\/\/github.com\/edugp","followers_url":"https:\/\/api.github.com\/users\/edugp\/followers","following_url":"https:\/\/api.github.com\/users\/edugp\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/edugp\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/edugp\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/edugp\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/edugp\/orgs","repos_url":"https:\/\/api.github.com\/users\/edugp\/repos","events_url":"https:\/\/api.github.com\/users\/edugp\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/edugp\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq here's the fix!"],"created_at":1607193114000,"updated_at":1607338043000,"closed_at":1607338043000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1170","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1170","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1170.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1170.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1170\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1169","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1169\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1169\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1169\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1169","id":757747997,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMDY5MzAx","number":1169,"title":"Add Opus fiskmo dataset for Finnish and Swedish for MT task","user":{"login":"spatil6","id":6419011,"node_id":"MDQ6VXNlcjY0MTkwMTE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6419011?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/spatil6","html_url":"https:\/\/github.com\/spatil6","followers_url":"https:\/\/api.github.com\/users\/spatil6\/followers","following_url":"https:\/\/api.github.com\/users\/spatil6\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/spatil6\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/spatil6\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/spatil6\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/spatil6\/orgs","repos_url":"https:\/\/api.github.com\/users\/spatil6\/repos","events_url":"https:\/\/api.github.com\/users\/spatil6\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/spatil6\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on 
master"],"created_at":1607191015000,"updated_at":1607339051000,"closed_at":1607339051000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1169","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1169","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1169.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1169.patch"},"body":"Adding fiskmo, a massive parallel corpus for Finnish and Swedish.\r\nfor more info : http:\/\/opus.nlpl.eu\/fiskmo.php","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1169\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1168","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1168\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1168\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1168\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1168","id":757740780,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMDYzNjgy","number":1168,"title":"Add Naver sentiment movie corpus","user":{"login":"jaketae","id":25360440,"node_id":"MDQ6VXNlcjI1MzYwNDQw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25360440?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jaketae","html_url":"https:\/\/github.com\/jaketae","followers_url":"https:\/\/api.github.com\/users\/jaketae\/followers","following_url":"https:\/\/api.github.com\/users\/jaketae\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jaketae\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jaketae\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jaketae\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jaketae\/orgs","repos_url":"https:\/\/api.github.com\/users\/jaketae\/repos","events_url":"https:\/\/api.github.com\/users\/jaketae\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jaketae\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Closed via #1252 "],"created_at":1607189123000,"updated_at":1607348049000,"closed_at":1607348049000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1168","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1168","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1168.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1168.patch"},"body":"This PR adds the [Naver sentiment movie corpus](https:\/\/github.com\/e9t\/nsmc), a dataset containing Korean movie reviews from Naver, the most commonly used search engine in Korea. This dataset is often used to benchmark models on Korean NLP tasks, as seen in [this paper](https:\/\/www.aclweb.org\/anthology\/2020.lrec-1.199.pdf). 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1168\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1167","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1167\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1167\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1167\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1167","id":757722921,"node_id":"MDU6SXNzdWU3NTc3MjI5MjE=","number":1167,"title":"\u2753 On-the-fly tokenization with datasets, tokenizers, and torch Datasets and Dataloaders","user":{"login":"pietrolesci","id":61748653,"node_id":"MDQ6VXNlcjYxNzQ4NjUz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/61748653?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/pietrolesci","html_url":"https:\/\/github.com\/pietrolesci","followers_url":"https:\/\/api.github.com\/users\/pietrolesci\/followers","following_url":"https:\/\/api.github.com\/users\/pietrolesci\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/pietrolesci\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/pietrolesci\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/pietrolesci\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/pietrolesci\/orgs","repos_url":"https:\/\/api.github.com\/users\/pietrolesci\/repos","events_url":"https:\/\/api.github.com\/users\/pietrolesci\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/pietrolesci\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892912,"node_id":"MDU6TGFiZWwxOTM1ODkyOTEy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/question","name":"question","color":"d876e3","default":true,"description":"Further information is requested"},{"id":2067400324,"node_id":"MDU6TGFiZWwyMDY3NDAwMzI0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/generic%20discussion","name":"generic discussion","color":"c5def5","default":false,"description":"Generic discussion on the library"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["We're working on adding on-the-fly transforms in datasets.\r\nCurrently the only on-the-fly functions that can be applied are in `set_format` in which we transform the data in either numpy\/torch\/tf tensors or pandas.\r\nFor example\r\n```python\r\ndataset.set_format(\"torch\")\r\n```\r\napplies `torch.Tensor` to the dataset entries on-the-fly.\r\n\r\nWe plan to extend this to user-defined formatting transforms.\r\nFor example\r\n```python\r\ndataset.set_format(transform=tokenize)\r\n```\r\n\r\nWhat do you think ?"],"created_at":1607187776000,"updated_at":1610534888000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi there,\r\n\r\nI have a question regarding \"on-the-fly\" tokenization. This question was elicited by reading the \"How to train a new language model from scratch using Transformers and Tokenizers\" [here](https:\/\/huggingface.co\/blog\/how-to-train). Towards the end there is this sentence: \"If your dataset is very large, you can opt to load and tokenize examples on the fly, rather than as a preprocessing step\". 
I've tried coming up with a solution that would combine both `datasets` and `tokenizers`, but did not manage to find a good pattern.\r\n\r\nI guess the solution would entail wrapping a dataset into a Pytorch dataset.\r\n\r\nAs a concrete example from the [docs](https:\/\/huggingface.co\/transformers\/custom_datasets.html)\r\n\r\n```python\r\nimport torch\r\n\r\nclass SquadDataset(torch.utils.data.Dataset):\r\n def __init__(self, encodings):\r\n # instead of doing this beforehand, I'd like to do tokenization on the fly\r\n self.encodings = encodings \r\n\r\n def __getitem__(self, idx):\r\n return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}\r\n\r\n def __len__(self):\r\n return len(self.encodings.input_ids)\r\n\r\ntrain_dataset = SquadDataset(train_encodings)\r\n```\r\n\r\nHow would one implement this with \"on-the-fly\" tokenization exploiting the vectorized capabilities of tokenizers?\r\n\r\n\r\n----\r\n\r\nEdit: I have come up with this solution. It does what I want, but I feel it's not very elegant\r\n\r\n```python\r\nclass CustomPytorchDataset(Dataset):\r\n def __init__(self):\r\n self.dataset = some_hf_dataset(...)\r\n self.tokenizer = BertTokenizerFast.from_pretrained(\"bert-base-uncased\")\r\n\r\n def __getitem__(self, batch_idx):\r\n instance = self.dataset[text_col][batch_idx]\r\n tokenized_text = self.tokenizer(instance, truncation=True, padding=True)\r\n return tokenized_text\r\n\r\n def __len__(self):\r\n return len(self.dataset)\r\n\r\n @staticmethod\r\n def collate_fn(batch):\r\n # batch is a list, however it will always contain 1 item because we should not use the\r\n # batch_size argument as batch_size is controlled by the sampler\r\n return {k: torch.tensor(v) for k, v in batch[0].items()}\r\n\r\ntorch_ds = CustomPytorchDataset()\r\n\r\n# NOTE: batch_sampler returns list of integers and since here we have SequentialSampler\r\n# it returns: [1, 2, 3], [4, 5, 6], etc. 
- check calling `list(batch_sampler)`\r\nbatch_sampler = BatchSampler(SequentialSampler(torch_ds), batch_size=3, drop_last=True)\r\n\r\n# NOTE: no `batch_size` as now the it is controlled by the sampler!\r\ndl = DataLoader(dataset=torch_ds, sampler=batch_sampler, collate_fn=torch_ds.collate_fn)\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1167\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1166","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1166\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1166\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1166\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1166","id":757721208,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMDQ1NDUy","number":1166,"title":"Opus montenegrinsubs","user":{"login":"spatil6","id":6419011,"node_id":"MDQ6VXNlcjY0MTkwMTE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6419011?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/spatil6","html_url":"https:\/\/github.com\/spatil6","followers_url":"https:\/\/api.github.com\/users\/spatil6\/followers","following_url":"https:\/\/api.github.com\/users\/spatil6\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/spatil6\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/spatil6\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/spatil6\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/spatil6\/orgs","repos_url":"https:\/\/api.github.com\/users\/spatil6\/repos","events_url":"https:\/\/api.github.com\/users\/spatil6\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/spatil6\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on master"],"created_at":1607187644000,"updated_at":1607338969000,"closed_at":1607338969000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1166","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1166","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1166.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1166.patch"},"body":"Opus montenegrinsubs - language pair en-me\r\nmore info : http:\/\/opus.nlpl.eu\/MontenegrinSubs.php","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1166\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1165","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1165\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1165\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1165\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1165","id":757720226,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMDQ0NzEy","number":1165,"title":"Add ar rest 
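Issue 1167 above asks how to tokenize on the fly while keeping the fast tokenizer's batched encoding: the maintainer reply points to `set_format` and a planned `transform=` argument, and the asker's workaround wraps the dataset in a PyTorch `Dataset` driven by a `BatchSampler`. One common alternative pattern, not the library's official API, is to do the batched tokenization inside the `DataLoader`'s `collate_fn`. The sketch below uses small in-memory placeholder examples instead of a real Hugging Face dataset, so the column names and texts are assumptions.

```python
import torch
from torch.utils.data import DataLoader
from transformers import BertTokenizerFast

# Placeholder examples standing in for a dataset; "text"/"label" are assumed column names.
texts = ["a first example sentence", "a second, slightly longer example sentence"] * 8
labels = [0, 1] * 8
examples = [{"text": t, "label": l} for t, l in zip(texts, labels)]

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

def collate_fn(batch):
    # Tokenize one batch at a time, so the fast tokenizer encodes a whole list
    # of strings in one call instead of example by example.
    batch_texts = [example["text"] for example in batch]
    encodings = tokenizer(batch_texts, padding=True, truncation=True, return_tensors="pt")
    encodings["labels"] = torch.tensor([example["label"] for example in batch])
    return encodings

loader = DataLoader(examples, batch_size=4, collate_fn=collate_fn)

for batch in loader:
    print(batch["input_ids"].shape)  # (4, longest sequence in this batch)
    break
```

Because padding happens per batch inside `collate_fn`, each batch is only padded to its own longest sequence, which is usually the point of deferring tokenization.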
reviews","user":{"login":"abdulelahsm","id":28743265,"node_id":"MDQ6VXNlcjI4NzQzMjY1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28743265?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abdulelahsm","html_url":"https:\/\/github.com\/abdulelahsm","followers_url":"https:\/\/api.github.com\/users\/abdulelahsm\/followers","following_url":"https:\/\/api.github.com\/users\/abdulelahsm\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abdulelahsm\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abdulelahsm\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abdulelahsm\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abdulelahsm\/orgs","repos_url":"https:\/\/api.github.com\/users\/abdulelahsm\/repos","events_url":"https:\/\/api.github.com\/users\/abdulelahsm\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abdulelahsm\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Copy-pasted from the Slack discussion:\r\nthe annotation and language creators should be found , not unknown\r\nthe example should go under the \"Data Instances\" paragraph, not \"Data fields\"\r\ncan you remove the abstract from the citation and add it to the dataset description? More people will see that","@yjernite done! thanks for the feedback","@lhoestq not sure why it's failing tests now, I only changed cosmetics","You can ignores these errors\r\n```\r\n\r\n=========================== short test summary info ===========================\r\nFAILED tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_ajgt_twitter_ar\r\nFAILED tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_chr_en\r\nFAILED tests\/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_ajgt_twitter_ar\r\nFAILED tests\/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_chr_en\r\nFAILED tests\/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_great_code\r\n```\r\n\r\nthey're fixed on master","Feel free to ping me for the final review once you managed to change to ClassLabel :) ","Hey @lhoestq I was able to fix it !! I think the same errors appeared on circleCI and now it's hopefully ready to be merged?","@lhoestq done! 
thanks for your review ","merging since the CI is fixed on master"],"created_at":1607187402000,"updated_at":1608570383000,"closed_at":1608570383000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1165","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1165","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1165.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1165.patch"},"body":"added restaurants reviews in Arabic for sentiment analysis tasks","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1165\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1164","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1164\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1164\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1164\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1164","id":757716575,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMDQyMjA1","number":1164,"title":"Add DaNe dataset","user":{"login":"ophelielacroix","id":28562991,"node_id":"MDQ6VXNlcjI4NTYyOTkx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28562991?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ophelielacroix","html_url":"https:\/\/github.com\/ophelielacroix","followers_url":"https:\/\/api.github.com\/users\/ophelielacroix\/followers","following_url":"https:\/\/api.github.com\/users\/ophelielacroix\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ophelielacroix\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ophelielacroix\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ophelielacroix\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ophelielacroix\/orgs","repos_url":"https:\/\/api.github.com\/users\/ophelielacroix\/repos","events_url":"https:\/\/api.github.com\/users\/ophelielacroix\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ophelielacroix\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks, this looks great!\r\n\r\nFor the code quality test, it looks like `flake8` is throwing the error, so you can tun `flake8 datasets` locally and fix the errors it points out until it passes"],"created_at":1607186210000,"updated_at":1607431818000,"closed_at":1607431795000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1164","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1164","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1164.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1164.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1164\/timeline","performed_via_github_app":null,"is_pull_request":true} 
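The review of the Arabic restaurant-reviews PR above asks the contributor to switch the label field to `ClassLabel`. As a rough illustration of that change (the feature names and label values here are assumptions, not the actual `ar_res_reviews` schema), a dataset script's features could be declared like this with the `datasets` library:

```python
import datasets

# Minimal sketch of the ClassLabel change discussed above; "text", "label" and the
# "negative"/"positive" names are placeholders, not the real dataset schema.
features = datasets.Features(
    {
        "text": datasets.Value("string"),
        # ClassLabel stores labels as integer ids while keeping the mapping to
        # human-readable names available via int2str / str2int.
        "label": datasets.ClassLabel(names=["negative", "positive"]),
    }
)

# Converting a raw string label to its integer id, as a generator in a dataset
# script might do before yielding an example:
label_id = features["label"].str2int("positive")
print(label_id)  # 1
```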
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1163","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1163\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1163\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1163\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1163","id":757711340,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMDM4Mzc3","number":1163,"title":"Added memat : Xhosa-English parallel corpora","user":{"login":"spatil6","id":6419011,"node_id":"MDQ6VXNlcjY0MTkwMTE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6419011?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/spatil6","html_url":"https:\/\/github.com\/spatil6","followers_url":"https:\/\/api.github.com\/users\/spatil6\/followers","following_url":"https:\/\/api.github.com\/users\/spatil6\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/spatil6\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/spatil6\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/spatil6\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/spatil6\/orgs","repos_url":"https:\/\/api.github.com\/users\/spatil6\/repos","events_url":"https:\/\/api.github.com\/users\/spatil6\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/spatil6\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The `RemoteDatasetTest` CI fail is fixed on master so it's fine","merging since the CI is fixed on master"],"created_at":1607184530000,"updated_at":1607337624000,"closed_at":1607337624000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1163","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1163","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1163.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1163.patch"},"body":"Added memat : Xhosa-English parallel corpora\r\nfor more info : http:\/\/opus.nlpl.eu\/memat.php","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1163\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1162","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1162\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1162\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1162\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1162","id":757707085,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMDM1MzEw","number":1162,"title":"Add Mocha 
dataset","user":{"login":"mattbui","id":46804938,"node_id":"MDQ6VXNlcjQ2ODA0OTM4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/46804938?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mattbui","html_url":"https:\/\/github.com\/mattbui","followers_url":"https:\/\/api.github.com\/users\/mattbui\/followers","following_url":"https:\/\/api.github.com\/users\/mattbui\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mattbui\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mattbui\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mattbui\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mattbui\/orgs","repos_url":"https:\/\/api.github.com\/users\/mattbui\/repos","events_url":"https:\/\/api.github.com\/users\/mattbui\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mattbui\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607183114000,"updated_at":1607335779000,"closed_at":1607335779000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1162","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1162","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1162.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1162.patch"},"body":"More information: https:\/\/allennlp.org\/mocha","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1162\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1161","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1161\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1161\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1161\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1161","id":757705286,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMDM0MDM3","number":1161,"title":"Linguisticprobing","user":{"login":"sileod","id":9168444,"node_id":"MDQ6VXNlcjkxNjg0NDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9168444?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sileod","html_url":"https:\/\/github.com\/sileod","followers_url":"https:\/\/api.github.com\/users\/sileod\/followers","following_url":"https:\/\/api.github.com\/users\/sileod\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sileod\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sileod\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sileod\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sileod\/orgs","repos_url":"https:\/\/api.github.com\/users\/sileod\/repos","events_url":"https:\/\/api.github.com\/users\/sileod\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sileod\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607182518000,"updated_at":1607531906000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"
url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1161","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1161","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1161.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1161.patch"},"body":"Adding Linguistic probing datasets from\r\nWhat you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties\r\n https:\/\/www.aclweb.org\/anthology\/P18-1198\/","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1161\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1160","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1160\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1160\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1160\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1160","id":757677188,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMDE0Nzcw","number":1160,"title":"adding TabFact dataset","user":{"login":"patil-suraj","id":27137566,"node_id":"MDQ6VXNlcjI3MTM3NTY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27137566?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patil-suraj","html_url":"https:\/\/github.com\/patil-suraj","followers_url":"https:\/\/api.github.com\/users\/patil-suraj\/followers","following_url":"https:\/\/api.github.com\/users\/patil-suraj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patil-suraj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patil-suraj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patil-suraj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patil-suraj\/orgs","repos_url":"https:\/\/api.github.com\/users\/patil-suraj\/repos","events_url":"https:\/\/api.github.com\/users\/patil-suraj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patil-suraj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["FYI you guys are on GitHub's homepage \ud83d\ude0d\r\n\r\n<img width=\"1589\" alt=\"Screenshot 2020-12-09 at 12 34 28\" src=\"https:\/\/user-images.githubusercontent.com\/326577\/101624883-a0ecc700-39e8-11eb-8a97-11af0d036536.png\">\r\n","Yeayy \ud83d\ude0d \ud83d\udd25"],"created_at":1607173552000,"updated_at":1607514099000,"closed_at":1607505161000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1160","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1160","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1160.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1160.patch"},"body":"Adding TabFact: A Large-scale Dataset for Table-based Fact Verification.\r\n\r\nhttps:\/\/github.com\/wenhuchen\/Table-Fact-Checking\r\n\r\n- The tables are stored as individual csv files, so need to download 16,573 \ud83e\udd2f csv files. 
As a result the `datasets_infos.json` file is huge (6.62 MB).\r\n- Original dataset has nested structure where, where table is one example and each table has multiple statements,\r\nflattening the structure here so that each statement is one example.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1160\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1159","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1159\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1159\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1159\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1159","id":757661128,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMDAyNzYx","number":1159,"title":"Add Roman Urdu dataset","user":{"login":"jaketae","id":25360440,"node_id":"MDQ6VXNlcjI1MzYwNDQw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25360440?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jaketae","html_url":"https:\/\/github.com\/jaketae","followers_url":"https:\/\/api.github.com\/users\/jaketae\/followers","following_url":"https:\/\/api.github.com\/users\/jaketae\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jaketae\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jaketae\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jaketae\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jaketae\/orgs","repos_url":"https:\/\/api.github.com\/users\/jaketae\/repos","events_url":"https:\/\/api.github.com\/users\/jaketae\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jaketae\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607168203000,"updated_at":1607348481000,"closed_at":1607335143000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1159","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1159","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1159.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1159.patch"},"body":"This PR adds the [Roman Urdu dataset](https:\/\/archive.ics.uci.edu\/ml\/datasets\/Roman+Urdu+Data+Set#). 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1159\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1158","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1158\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1158\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1158\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1158","id":757658926,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMDAxMjM0","number":1158,"title":"Add BBC Hindi NLI Dataset ","user":{"login":"avinsit123","id":33565881,"node_id":"MDQ6VXNlcjMzNTY1ODgx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33565881?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/avinsit123","html_url":"https:\/\/github.com\/avinsit123","followers_url":"https:\/\/api.github.com\/users\/avinsit123\/followers","following_url":"https:\/\/api.github.com\/users\/avinsit123\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/avinsit123\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/avinsit123\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/avinsit123\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/avinsit123\/orgs","repos_url":"https:\/\/api.github.com\/users\/avinsit123\/repos","events_url":"https:\/\/api.github.com\/users\/avinsit123\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/avinsit123\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @avinsit123 !\r\nDid you manage to rename the dataset and apply the suggestion I mentioned for the data fields ?\r\nFeel free to ping me when you're ready for a review :) ","Hi @avinsit123 ! Have you had a chance to take a look at my suggestions ?\r\nLet me know if you have questions or if I can help","@lhoestq sorry I completely forgot about this pr. I will complete it ASAP.","@lhoestq I have fixed the code to resolve all your comments. Pls do check. I also don't seem to know why the CI tests are failing as I ran all the tests in CONTRIBUTING.md on my local pc and they passed.","@lhoestq thanks for ur patient review :) . I also wish to add similar 3 more NLI hindi datasets. 
Hope to do within this week.","@lhoestq would this be merged to master?","Yes of course ;)\r\nmerging now !"],"created_at":1607167534000,"updated_at":1612518511000,"closed_at":1612518511000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1158","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1158","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1158.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1158.patch"},"body":"# Dataset Card for BBC Hindi NLI Dataset\r\n\r\n## Table of Contents\r\n- [Dataset Description](#dataset-description)\r\n - [Dataset Summary](#dataset-summary)\r\n - [Supported Tasks](#supported-tasks-and-leaderboards)\r\n - [Languages](#languages)\r\n- [Dataset Structure](#dataset-structure)\r\n - [Data Instances](#data-instances)\r\n - [Data Fields](#data-fields)\r\n - [Data Splits](#data-splits)\r\n- [Dataset Creation](#dataset-creation)\r\n - [Curation Rationale](#curation-rationale)\r\n - [Source Data](#source-data)\r\n - [Annotations](#annotations)\r\n - [Personal and Sensitive Information](#personal-and-sensitive-information)\r\n- [Considerations for Using the Data](#considerations-for-using-the-data)\r\n - [Social Impact of Dataset](#social-impact-of-dataset)\r\n - [Discussion of Biases](#discussion-of-biases)\r\n - [Other Known Limitations](#other-known-limitations)\r\n- [Additional Information](#additional-information)\r\n - [Dataset Curators](#dataset-curators)\r\n - [Licensing Information](#licensing-information)\r\n - [Citation Information](#citation-information)\r\n\r\n## Dataset Description\r\n\r\n- HomePage : https:\/\/github.com\/midas-research\/hindi-nli-data\r\n- Paper : \"https:\/\/www.aclweb.org\/anthology\/2020.aacl-main.71\"\r\n- Point of Contact : https:\/\/github.com\/midas-research\/hindi-nli-data\r\n\r\n### Dataset Summary\r\n\r\n- Dataset for Natural Language Inference in Hindi Language. BBC Hindi Dataset consists of textual-entailment pairs.\r\n- Each row of the Datasets if made up of 4 columns - Premise, Hypothesis, Label and Topic.\r\n- Context and Hypothesis is written in Hindi while Entailment_Label is in English.\r\n- Entailment_label is of 2 types - entailed and not-entailed.\r\n- Dataset can be used to train models for Natural Language Inference tasks in Hindi Language.\r\n[More Information Needed]\r\n\r\n### Supported Tasks and Leaderboards\r\n\r\n- Natural Language Inference for Hindi\r\n\r\n### Languages\r\n\r\nDataset is in Hindi\r\n\r\n## Dataset Structure\r\n\r\n- Data is structured in TSV format. \r\n- Train and Test files are in seperate files\r\n\r\n\r\n### Dataset Instances\r\n\r\nAn example of 'train' looks as follows.\r\n\r\n```\r\n{'hypothesis': '\u092f\u0939 \u0916\u092c\u0930 \u0915\u0940 \u0938\u0942\u091a\u0928\u093e \u0939\u0948|', 'label': 'entailed', 'premise': '\u0917\u094b\u092a\u0928\u0940\u092f\u0924\u093e \u0915\u0940 \u0928\u0940\u0924\u093f', 'topic': '1'}\r\n\r\n```\r\n### Data Fields\r\n\r\n- Each row contatins 4 columns - Premise, Hypothesis, Label and Topic.\r\n\r\n### Data Splits\r\n\r\n- Train : 15553\r\n- Valid : 2581\r\n- Test : 2593\r\n\r\n## Dataset Creation\r\n\r\n- We employ a recasting technique from Poliak et al. 
(2018a,b) to convert publicly available BBC Hindi news text classification datasets in Hindi and pose them as TE problems\r\n- In this recasting process, we build template hypotheses for each class in the label taxonomy\r\n- Then, we pair the original annotated sentence with each of the template hypotheses to create TE samples.\r\n- For more information on the recasting process, refer to paper \"https:\/\/www.aclweb.org\/anthology\/2020.aacl-main.71\"\r\n\r\n### Source Data\r\n\r\nSource Dataset for the recasting process is the BBC Hindi Headlines Dataset(https:\/\/github.com\/NirantK\/hindi2vec\/releases\/tag\/bbc-hindi-v0.1)\r\n\r\n#### Initial Data Collection and Normalization\r\n\r\n- BBC Hindi News Classification Dataset contains 4, 335 Hindi news headlines tagged across 14 categories: India, Pakistan,news, International, entertainment, sport, science, China, learning english, social, southasia, business, institutional, multimedia\r\n- We processed this dataset to combine two sets of relevant but low prevalence classes.\r\n- Namely, we merged the samples from Pakistan, China, international, and southasia as one class called international.\r\n- Likewise, we also merged samples from news, business, social, learning english, and institutional as news.\r\n- Lastly, we also removed the class multimedia because there were very few samples.\r\n\r\n#### Who are the source language producers?\r\n\r\nPls refer to this paper: \"https:\/\/www.aclweb.org\/anthology\/2020.aacl-main.71\"\r\n\r\n### Annotations\r\n\r\n#### Annotation process\r\n\r\nAnnotation process has been described in Dataset Creation Section.\r\n\r\n#### Who are the annotators?\r\n\r\nAnnotation is done automatically.\r\n\r\n### Personal and Sensitive Information\r\n\r\nNo Personal and Sensitive Information is mentioned in the Datasets.\r\n\r\n## Considerations for Using the Data\r\n\r\nPls refer to this paper: https:\/\/www.aclweb.org\/anthology\/2020.aacl-main.71\r\n\r\n### Discussion of Biases\r\n\r\nPls refer to this paper: https:\/\/www.aclweb.org\/anthology\/2020.aacl-main.71\r\n\r\n### Other Known Limitations\r\n\r\nNo other known limitations\r\n\r\n## Additional Information\r\n\r\nPls refer to this link: https:\/\/github.com\/midas-research\/hindi-nli-data\r\n\r\n### Dataset Curators\r\n\r\nIt is written in the repo : https:\/\/github.com\/avinsit123\/hindi-nli-data that \r\n- This corpus can be used freely for research purposes.\r\n- The paper listed below provide details of the creation and use of the corpus. If you use the corpus, then please cite the paper.\r\n- If interested in commercial use of the corpus, send email to midas@iiitd.ac.in.\r\n- If you use the corpus in a product or application, then please credit the authors and Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi appropriately. Also, if you send us an email, we will be thrilled to know about how you have used the corpus.\r\n- Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India disclaims any responsibility for the use of the corpus and does not provide technical support. 
However, the contact listed above will be happy to respond to queries and clarifications.\r\n- Rather than redistributing the corpus, please direct interested parties to this page\r\n- Please feel free to send us an email:\r\n - with feedback regarding the corpus.\r\n - with information on how you have used the corpus.\r\n - if interested in having us analyze your data for natural language inference.\r\n - if interested in a collaborative research project.\r\n\r\n\r\n### Licensing Information\r\n\r\nCopyright (C) 2019 Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi (MIDAS, IIIT-Delhi).\r\nPls contact authors for any information on the dataset.\r\n\r\n### Citation Information\r\n\r\n```\r\n @inproceedings{uppal-etal-2020-two,\r\n title = \"Two-Step Classification using Recasted Data for Low Resource Settings\",\r\n author = \"Uppal, Shagun and\r\n Gupta, Vivek and\r\n Swaminathan, Avinash and\r\n Zhang, Haimin and\r\n Mahata, Debanjan and\r\n Gosangi, Rakesh and\r\n Shah, Rajiv Ratn and\r\n Stent, Amanda\",\r\n booktitle = \"Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing\",\r\n month = dec,\r\n year = \"2020\",\r\n address = \"Suzhou, China\",\r\n publisher = \"Association for Computational Linguistics\",\r\n url = \"https:\/\/www.aclweb.org\/anthology\/2020.aacl-main.71\",\r\n pages = \"706--719\",\r\n abstract = \"An NLP model{'}s ability to reason should be independent of language. Previous works utilize Natural Language Inference (NLI) to understand the reasoning ability of models, mostly focusing on high resource languages like English. To address scarcity of data in low-resource languages such as Hindi, we use data recasting to create NLI datasets for four existing text classification datasets. Through experiments, we show that our recasted dataset is devoid of statistical irregularities and spurious patterns. We further study the consistency in predictions of the textual entailment models and propose a consistency regulariser to remove pairwise-inconsistencies in predictions. We propose a novel two-step classification method which uses textual-entailment predictions for classification task. We further improve the performance by using a joint-objective for classification and textual entailment. 
We therefore highlight the benefits of data recasting and improvements on classification performance using our approach with supporting experimental results.\",\r\n}\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1158\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1157","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1157\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1157\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1157\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1157","id":757657888,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMzMDAwNDQy","number":1157,"title":"Add dataset XhosaNavy English -Xhosa","user":{"login":"spatil6","id":6419011,"node_id":"MDQ6VXNlcjY0MTkwMTE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6419011?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/spatil6","html_url":"https:\/\/github.com\/spatil6","followers_url":"https:\/\/api.github.com\/users\/spatil6\/followers","following_url":"https:\/\/api.github.com\/users\/spatil6\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/spatil6\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/spatil6\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/spatil6\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/spatil6\/orgs","repos_url":"https:\/\/api.github.com\/users\/spatil6\/repos","events_url":"https:\/\/api.github.com\/users\/spatil6\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/spatil6\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607167194000,"updated_at":1607332293000,"closed_at":1607332293000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1157","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1157","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1157.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1157.patch"},"body":"Add dataset XhosaNavy English -Xhosa\r\nMore info : http:\/\/opus.nlpl.eu\/XhosaNavy.php","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1157\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1156","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1156\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1156\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1156\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1156","id":757656094,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyOTk5MTQ1","number":1156,"title":"add telugu-news 
corpus","user":{"login":"oostopitre","id":3135345,"node_id":"MDQ6VXNlcjMxMzUzNDU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3135345?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/oostopitre","html_url":"https:\/\/github.com\/oostopitre","followers_url":"https:\/\/api.github.com\/users\/oostopitre\/followers","following_url":"https:\/\/api.github.com\/users\/oostopitre\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/oostopitre\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/oostopitre\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/oostopitre\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/oostopitre\/orgs","repos_url":"https:\/\/api.github.com\/users\/oostopitre\/repos","events_url":"https:\/\/api.github.com\/users\/oostopitre\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/oostopitre\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607166476000,"updated_at":1607332128000,"closed_at":1607332128000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1156","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1156","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1156.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1156.patch"},"body":"Adding Telugu News Corpus to datasets.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1156\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1155","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1155\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1155\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1155\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1155","id":757652517,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyOTk2NjQ2","number":1155,"title":"Add BSD ","user":{"login":"j-chim","id":22435209,"node_id":"MDQ6VXNlcjIyNDM1MjA5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22435209?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/j-chim","html_url":"https:\/\/github.com\/j-chim","followers_url":"https:\/\/api.github.com\/users\/j-chim\/followers","following_url":"https:\/\/api.github.com\/users\/j-chim\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/j-chim\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/j-chim\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/j-chim\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/j-chim\/orgs","repos_url":"https:\/\/api.github.com\/users\/j-chim\/repos","events_url":"https:\/\/api.github.com\/users\/j-chim\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/j-chim\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Glad to have more Japanese data! 
Couple of comments:\r\n- the abbreviation might confuse some people as there is also an OPUS BSD corpus, would you mind renaming it as `bsd_ja_en`?\r\n- `flake8` is throwing some errors, you can run it locally (`flake8 datasets`) and fix what it tells you until it's happy :)\r\n- We're not using `os.path.join` for URLs as it's unstable across systems (introduces backslashes on Windows). Can you write the URLs explicitly instead?\r\n\r\nThanks!","Fantastic, looks great!","> Fantastic, looks great!\r\n\r\nThanks for your help @yjernite, really appreciate it!","The RemoteDatasetTest is fixed on master so it's fine","merging since the CI is fixed on master"],"created_at":1607165028000,"updated_at":1607333266000,"closed_at":1607333266000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1155","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1155","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1155.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1155.patch"},"body":"This PR adds BSD, the Japanese-English business dialogue corpus by \r\n[Rikters et al., 2020](https:\/\/www.aclweb.org\/anthology\/D19-5204.pdf). ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1155\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1154","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1154\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1154\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1154\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1154","id":757651669,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyOTk2MDQ3","number":1154,"title":"Opus 
sardware","user":{"login":"spatil6","id":6419011,"node_id":"MDQ6VXNlcjY0MTkwMTE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6419011?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/spatil6","html_url":"https:\/\/github.com\/spatil6","followers_url":"https:\/\/api.github.com\/users\/spatil6\/followers","following_url":"https:\/\/api.github.com\/users\/spatil6\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/spatil6\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/spatil6\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/spatil6\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/spatil6\/orgs","repos_url":"https:\/\/api.github.com\/users\/spatil6\/repos","events_url":"https:\/\/api.github.com\/users\/spatil6\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/spatil6\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607164682000,"updated_at":1607187945000,"closed_at":1607187945000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1154","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1154","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1154.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1154.patch"},"body":"Added Opus sardware dataset for machine translation English to Sardinian.\r\nfor more info : http:\/\/opus.nlpl.eu\/sardware.php","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1154\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1153","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1153\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1153\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1153\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1153","id":757643302,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyOTkwMTk4","number":1153,"title":"Adding dataset for proto_qa in huggingface datasets 
library","user":{"login":"bpatidar","id":12439573,"node_id":"MDQ6VXNlcjEyNDM5NTcz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12439573?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bpatidar","html_url":"https:\/\/github.com\/bpatidar","followers_url":"https:\/\/api.github.com\/users\/bpatidar\/followers","following_url":"https:\/\/api.github.com\/users\/bpatidar\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bpatidar\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bpatidar\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bpatidar\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bpatidar\/orgs","repos_url":"https:\/\/api.github.com\/users\/bpatidar\/repos","events_url":"https:\/\/api.github.com\/users\/bpatidar\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bpatidar\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607161408000,"updated_at":1607194390000,"closed_at":1607194390000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1153","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1153","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1153.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1153.patch"},"body":"Added dataset for ProtoQA: A Question Answering Dataset for Prototypical Common-Sense Reasoning\r\nFollowed all steps for adding a new dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1153\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1152","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1152\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1152\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1152\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1152","id":757640506,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyOTg4MjMw","number":1152,"title":"hindi discourse analysis dataset 
commit","user":{"login":"duttahritwik","id":31453142,"node_id":"MDQ6VXNlcjMxNDUzMTQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/31453142?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/duttahritwik","html_url":"https:\/\/github.com\/duttahritwik","followers_url":"https:\/\/api.github.com\/users\/duttahritwik\/followers","following_url":"https:\/\/api.github.com\/users\/duttahritwik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/duttahritwik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/duttahritwik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/duttahritwik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/duttahritwik\/orgs","repos_url":"https:\/\/api.github.com\/users\/duttahritwik\/repos","events_url":"https:\/\/api.github.com\/users\/duttahritwik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/duttahritwik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["That's a great dataset to have! We need a couple more things to be good to go:\r\n- you should `make style` and `flake8 datasets` before pushing to make the code quality check happy :) \r\n- the dataset will need some dummy data which you should be able to auto-generate and test locally: https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md#automatically-add-code-metadata\r\n- there's some good information in your current README, but we need the format to follow the template [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/templates\/README.md) and to have YAML tags at the top, as described in the guide: https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card\r\n\r\nLEt us know if you need any help!","Hi @yjernite \r\nI was successfully able to generate the dataset_info.json file using the command \r\npython datasets-cli test datasets\/<your-dataset-folder> --save_infos --all_configs\r\n\r\nBut unfortunately, could not generate the dummy data\r\n\r\nWhile running the command \r\npython datasets-cli dummy_data datasets\/<your-dataset-folder> --auto_generate\r\nI got an error as \r\n\r\nValueError: Couldn't parse columns ['0', '1', '2', '3', '4', ......, '9982']. Maybe specify which json field must be used to read the data with --json_field <my_field>.\r\n\r\nThe thing is the dataset I am trying to upload is of the format \r\n{\r\n '0': {'Story_no': 15, 'Sentence': ' \u0917\u093e\u0901\u0920 \u0938\u0947 \u0938\u093e\u0922\u093c\u0947 \u0924\u0940\u0928 \u0930\u0941\u092a\u092f\u0947 \u0932\u0917 \u0917\u092f\u0947, \u091c\u094b \u0905\u092c \u092a\u0947\u091f \u092e\u0947\u0902 \u091c\u093e\u0915\u0930 \u0916\u0928\u0915\u0924\u0947 \u092d\u0940 \u0928\u0939\u0940\u0902! \u091c\u094b \u0924\u0947\u0930\u0940 \u0915\u0930\u0928\u0940 \u092e\u093e\u0932\u093f\u0915! \u201d \u201c\u0907\u0938\u092e\u0947\u0902 \u092e\u093e\u0932\u093f\u0915 \u0915\u0940 \u0915\u094d\u092f\u093e \u0915\u0930\u0928\u0940 \u0939\u0948? 
\u201d', 'Discourse Mode': 'Dialogue'},\r\n '1': {'Story_no': story_no, 'Sentence': sentence, 'Discourse Mode': discourse_mode},\r\n .......,\r\n '9982': {'Story_no': story_no, 'Sentence': sentence, 'Discourse Mode': discourse_mode}\r\n}\r\n\r\nCan you please suggest any errors I am making in the _generate_examples method?\r\n\r\nThanks!","The dummy data generator doesn't support this kind of json format yet.\r\nCan you create the dummy data manually please ? You can get the instructions by running the \r\n```\r\ndatasets-cli dummy_data .\/datasets\/dataset_name\r\n```\r\ncommand.","Hi, I created the dummy data manually but the tests are still failing it seems.\r\nCan you suggest the format of JSON which is supported by dummy data generator?\r\nI will have to modify my _generate_examples method accordingly.\r\nPlease advice on the same.\r\nThanks much.\r\n","Can you run `make style` to format the code for the CI please ?\r\n\r\nAlso about the dummy data, here is how to generate them:\r\n\r\nWe need a dummy_data.zip file in .\/datasets\/hindiDiscourse\/dummy\/1.0.0 (or replace hindiDiscourse by hindi_discourse since we have to rename the folder anyway)\r\nTo create the zip file, first go in this directory and create a folder named dummy_data.\r\nThen inside the dummy_data folder create a file `discourse_dataset.json` and fill it with something like 5 examples.\r\nFinally zip the dummy_data folder to end up with the dummy_data.zip file\r\n\r\nOnce it's done you can check if the dummy data test passes with \r\n```\r\nRUN_SLOW=1 pytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_hindi_discourse\r\n```\r\n\r\nIf it passes you can then remove the dummy_data folder to keep only the dummy_data.zip file","Hi @duttahritwik did you manage to make the dummy data ?\r\nFeel free to ping me if you have questions or if we can help","The error `tests\/test_file_utils.py::TempSeedTest::test_tensorflow` just appeared because of tensorflow's update.\r\nOnce it's fixed on master we'll be free to merge this one","Ci is green on master :) ","merging since the CI is fixed on master"],"created_at":1607160241000,"updated_at":1607975088000,"closed_at":1607975088000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1152","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1152","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1152.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1152.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1152\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1151","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1151\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1151\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1151\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1151","id":757517092,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyODc5ODk4","number":1151,"title":"adding psc 
dataset","user":{"login":"abecadel","id":1654113,"node_id":"MDQ6VXNlcjE2NTQxMTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1654113?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abecadel","html_url":"https:\/\/github.com\/abecadel","followers_url":"https:\/\/api.github.com\/users\/abecadel\/followers","following_url":"https:\/\/api.github.com\/users\/abecadel\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abecadel\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abecadel\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abecadel\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abecadel\/orgs","repos_url":"https:\/\/api.github.com\/users\/abecadel\/repos","events_url":"https:\/\/api.github.com\/users\/abecadel\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abecadel\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607136001000,"updated_at":1607513921000,"closed_at":1607513921000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1151","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1151","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1151.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1151.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1151\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1150","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1150\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1150\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1150\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1150","id":757512441,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyODc2MzEz","number":1150,"title":"adding dyk 
dataset","user":{"login":"abecadel","id":1654113,"node_id":"MDQ6VXNlcjE2NTQxMTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1654113?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abecadel","html_url":"https:\/\/github.com\/abecadel","followers_url":"https:\/\/api.github.com\/users\/abecadel\/followers","following_url":"https:\/\/api.github.com\/users\/abecadel\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abecadel\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abecadel\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abecadel\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abecadel\/orgs","repos_url":"https:\/\/api.github.com\/users\/abecadel\/repos","events_url":"https:\/\/api.github.com\/users\/abecadel\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abecadel\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607134302000,"updated_at":1607187139000,"closed_at":1607187139000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1150","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1150","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1150.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1150.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1150\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1149","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1149\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1149\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1149\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1149","id":757504068,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyODY5ODUz","number":1149,"title":"Fix typo in the comment in _info 
function","user":{"login":"vinaykudari","id":34424769,"node_id":"MDQ6VXNlcjM0NDI0NzY5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/34424769?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vinaykudari","html_url":"https:\/\/github.com\/vinaykudari","followers_url":"https:\/\/api.github.com\/users\/vinaykudari\/followers","following_url":"https:\/\/api.github.com\/users\/vinaykudari\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vinaykudari\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vinaykudari\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vinaykudari\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vinaykudari\/orgs","repos_url":"https:\/\/api.github.com\/users\/vinaykudari\/repos","events_url":"https:\/\/api.github.com\/users\/vinaykudari\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vinaykudari\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607131580000,"updated_at":1607185166000,"closed_at":1607185166000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1149","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1149","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1149.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1149.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1149\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1148","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1148\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1148\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1148\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1148","id":757503918,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyODY5NzM0","number":1148,"title":"adding polemo2 
dataset","user":{"login":"abecadel","id":1654113,"node_id":"MDQ6VXNlcjE2NTQxMTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1654113?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abecadel","html_url":"https:\/\/github.com\/abecadel","followers_url":"https:\/\/api.github.com\/users\/abecadel\/followers","following_url":"https:\/\/api.github.com\/users\/abecadel\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abecadel\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abecadel\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abecadel\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abecadel\/orgs","repos_url":"https:\/\/api.github.com\/users\/abecadel\/repos","events_url":"https:\/\/api.github.com\/users\/abecadel\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abecadel\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607131529000,"updated_at":1607187099000,"closed_at":1607187099000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1148","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1148","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1148.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1148.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1148\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1147","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1147\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1147\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1147\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1147","id":757502199,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyODY4MzU2","number":1147,"title":"Vinay\/add\/telugu 
books","user":{"login":"vinaykudari","id":34424769,"node_id":"MDQ6VXNlcjM0NDI0NzY5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/34424769?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vinaykudari","html_url":"https:\/\/github.com\/vinaykudari","followers_url":"https:\/\/api.github.com\/users\/vinaykudari\/followers","following_url":"https:\/\/api.github.com\/users\/vinaykudari\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vinaykudari\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vinaykudari\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vinaykudari\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vinaykudari\/orgs","repos_url":"https:\/\/api.github.com\/users\/vinaykudari\/repos","events_url":"https:\/\/api.github.com\/users\/vinaykudari\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vinaykudari\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607131022000,"updated_at":1607186164000,"closed_at":1607186164000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1147","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1147","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1147.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1147.patch"},"body":"Real data tests are failing as this dataset needs to be manually downloaded","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1147\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1146","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1146\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1146\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1146\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1146","id":757498565,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyODY1NTAy","number":1146,"title":"Add 
LINNAEUS","user":{"login":"edugp","id":17855740,"node_id":"MDQ6VXNlcjE3ODU1NzQw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17855740?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/edugp","html_url":"https:\/\/github.com\/edugp","followers_url":"https:\/\/api.github.com\/users\/edugp\/followers","following_url":"https:\/\/api.github.com\/users\/edugp\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/edugp\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/edugp\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/edugp\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/edugp\/orgs","repos_url":"https:\/\/api.github.com\/users\/edugp\/repos","events_url":"https:\/\/api.github.com\/users\/edugp\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/edugp\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607130069000,"updated_at":1607186153000,"closed_at":1607186153000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1146","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1146","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1146.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1146.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1146\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1145","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1145\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1145\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1145\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1145","id":757477349,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyODQ4MTQx","number":1145,"title":"Add Species-800","user":{"login":"edugp","id":17855740,"node_id":"MDQ6VXNlcjE3ODU1NzQw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17855740?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/edugp","html_url":"https:\/\/github.com\/edugp","followers_url":"https:\/\/api.github.com\/users\/edugp\/followers","following_url":"https:\/\/api.github.com\/users\/edugp\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/edugp\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/edugp\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/edugp\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/edugp\/orgs","repos_url":"https:\/\/api.github.com\/users\/edugp\/repos","events_url":"https:\/\/api.github.com\/users\/edugp\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/edugp\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["thanks @lhoestq ! I probably need to do the same change in the `SplitGenerator`s (lines 107, 110 and 113). I'll open a new PR for that","Yes indeed ! 
Good catch \ud83d\udc4d \r\nFeel free to open a PR and ping me"],"created_at":1607125491000,"updated_at":1607191730000,"closed_at":1607186101000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1145","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1145","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1145.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1145.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1145\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1144","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1144\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1144\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1144\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1144","id":757452831,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyODI3OTI4","number":1144,"title":"Add JFLEG","user":{"login":"j-chim","id":22435209,"node_id":"MDQ6VXNlcjIyNDM1MjA5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22435209?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/j-chim","html_url":"https:\/\/github.com\/j-chim","followers_url":"https:\/\/api.github.com\/users\/j-chim\/followers","following_url":"https:\/\/api.github.com\/users\/j-chim\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/j-chim\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/j-chim\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/j-chim\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/j-chim\/orgs","repos_url":"https:\/\/api.github.com\/users\/j-chim\/repos","events_url":"https:\/\/api.github.com\/users\/j-chim\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/j-chim\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @j-chim ! You're right it does feel redundant: your option works better, but I'd even suggest having the references in a Sequence feature, which you can declare as:\r\n```\r\n\t features=datasets.Features(\r\n {\r\n \"sentence\": datasets.Value(\"string\"),\r\n \"corrections\": datasets.Sequence(datasets.Value(\"string\")),\r\n }\r\n ),\r\n```\r\n\r\nTo create the dummy data, you just need to tell the generator which files it should use, which you can do with:\r\n`python datasets-cli dummy_data datasets\/<your-dataset-folder> --auto_generate --match_text_files \"train*,dev*,test*\"`\r\n","Many thanks for this @yjernite! 
I've incorporated your feedback and sorted out the dummy data."],"created_at":1607121398000,"updated_at":1607278564000,"closed_at":1607278564000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1144","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1144","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1144.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1144.patch"},"body":"This PR adds [JFLEG ](https:\/\/www.aclweb.org\/anthology\/E17-2037\/), an English grammatical error correction benchmark. \r\n\r\nThe tests were successful on real data, although it would be great if I can get some guidance on the **dummy data**. Basically, **for each source sentence there are 4 possible gold standard target sentences**. The original dataset comprise files in a flat structure, labelled by split then by source\/target (e.g., dev.src, dev.ref0, ..., dev.ref3). Not sure what is the best way of adding this.\r\n\r\nI imagine I can treat each distinct source-target pair as its own split? But having so many copies of the source sentence feels redundant, and it would make it less convenient to end-users who might want to access multiple gold standard targets simultaneously. ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1144\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1143","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1143\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1143\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1143\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1143","id":757448920,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyODI0NzMx","number":1143,"title":"Add the Winograd Schema 
Challenge","user":{"login":"joeddav","id":9353833,"node_id":"MDQ6VXNlcjkzNTM4MzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9353833?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/joeddav","html_url":"https:\/\/github.com\/joeddav","followers_url":"https:\/\/api.github.com\/users\/joeddav\/followers","following_url":"https:\/\/api.github.com\/users\/joeddav\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/joeddav\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/joeddav\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/joeddav\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/joeddav\/orgs","repos_url":"https:\/\/api.github.com\/users\/joeddav\/repos","events_url":"https:\/\/api.github.com\/users\/joeddav\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/joeddav\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607120819000,"updated_at":1607526691000,"closed_at":1607506354000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1143","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1143","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1143.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1143.patch"},"body":"Adds the Winograd Schema Challenge, including configs for the more canonical wsc273 as well as wsc285 with 12 new examples.\r\n\r\n- https:\/\/cs.nyu.edu\/faculty\/davise\/papers\/WinogradSchemas\/WS.html\r\n\r\nThe data format was a bit of a nightmare but I think I got it to a workable format.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1143\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1142","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1142\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1142\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1142\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1142","id":757413920,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNzk1MjY0","number":1142,"title":"Fix 
PerSenT","user":{"login":"jeromeku","id":2455711,"node_id":"MDQ6VXNlcjI0NTU3MTE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2455711?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jeromeku","html_url":"https:\/\/github.com\/jeromeku","followers_url":"https:\/\/api.github.com\/users\/jeromeku\/followers","following_url":"https:\/\/api.github.com\/users\/jeromeku\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jeromeku\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jeromeku\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jeromeku\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jeromeku\/orgs","repos_url":"https:\/\/api.github.com\/users\/jeromeku\/repos","events_url":"https:\/\/api.github.com\/users\/jeromeku\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jeromeku\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607116862000,"updated_at":1607953174000,"closed_at":1607953174000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1142","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1142","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1142.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1142.patch"},"body":"New PR for dataset PerSenT","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1142\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1141","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1141\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1141\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1141\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1141","id":757411057,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNzkyNzU3","number":1141,"title":"Add GitHub version of ETH Py150 Corpus","user":{"login":"Bharat123rox","id":13381361,"node_id":"MDQ6VXNlcjEzMzgxMzYx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13381361?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Bharat123rox","html_url":"https:\/\/github.com\/Bharat123rox","followers_url":"https:\/\/api.github.com\/users\/Bharat123rox\/followers","following_url":"https:\/\/api.github.com\/users\/Bharat123rox\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Bharat123rox\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Bharat123rox\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Bharat123rox\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Bharat123rox\/orgs","repos_url":"https:\/\/api.github.com\/users\/Bharat123rox\/repos","events_url":"https:\/\/api.github.com\/users\/Bharat123rox\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Bharat123rox\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The `RemoteDatasetTest` is fixed on master so it's 
fine","thanks for rebasing :)\r\n\r\nCI is green now, merging"],"created_at":1607116568000,"updated_at":1607538764000,"closed_at":1607335224000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1141","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1141","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1141.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1141.patch"},"body":"Add the redistributable version of **ETH Py150 Corpus**","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1141\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1140","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1140\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1140\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1140\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1140","id":757399142,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNzgyODc0","number":1140,"title":"Add Urdu Sentiment Corpus (USC). ","user":{"login":"chaitnayabasava","id":44389205,"node_id":"MDQ6VXNlcjQ0Mzg5MjA1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/44389205?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/chaitnayabasava","html_url":"https:\/\/github.com\/chaitnayabasava","followers_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/followers","following_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/orgs","repos_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/repos","events_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq have made the suggested changes in the README file.","@lhoestq Created a new PR #1231 with only the relevant files.\r\nclosing this one :)"],"created_at":1607115327000,"updated_at":1607311643000,"closed_at":1607311643000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1140","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1140","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1140.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1140.patch"},"body":"Added Urdu Sentiment Corpus. More details about the dataset over <a href=\"https:\/\/github.com\/MuhammadYaseenKhan\/Urdu-Sentiment-Corpus\">here<\/a>. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1140\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1139","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1139\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1139\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1139\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1139","id":757393158,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNzc3OTg2","number":1139,"title":"Add ReFreSD dataset","user":{"login":"mpariente","id":18496796,"node_id":"MDQ6VXNlcjE4NDk2Nzk2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/18496796?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mpariente","html_url":"https:\/\/github.com\/mpariente","followers_url":"https:\/\/api.github.com\/users\/mpariente\/followers","following_url":"https:\/\/api.github.com\/users\/mpariente\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mpariente\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mpariente\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mpariente\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mpariente\/orgs","repos_url":"https:\/\/api.github.com\/users\/mpariente\/repos","events_url":"https:\/\/api.github.com\/users\/mpariente\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mpariente\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Cool dataset! Replying in-line:\r\n\r\n> This PR adds the **ReFreSD dataset**.\r\n> The original data is hosted [on this github repo](https:\/\/github.com\/Elbria\/xling-SemDiv) and we use the `REFreSD_rationale` to expose all the data.\r\n> \r\n> Need feedback on:\r\n> \r\n> * I couldn't generate the dummy data. The file we download is a tsv file, but without extension, I suppose this is the problem. I'm sure there is a simple trick to make this work.\r\n\r\nyou can use `--match_text_files` in the dummy data generation:\r\n`python datasets-cli dummy_data datasets\/refresd --auto_generate --match_text_files \"REFreSD_rationale\"`\r\n\r\n> * The feature names.\r\n> \r\n> * I don't know if it's better to stick to the classic `sentence1`, `sentence2` or to `sentence_en`, `sentence_fr` to be more explicit.\r\n\r\nIt would actually be even better to use the `Translation` feature here to replace best:\r\n`\"sentence_pair\": datasets.Translation(languages=['en', 'fr']),`\r\n\r\nThen during `_generate_examples` this filed should look like\"\r\n`{\"sentence_pair\": {\"fr\": french, \"en\": english}}`\r\n\r\n> * There is a binary label (called `label`, no problem here), and a 3-class label called `#3_labels` in the original tsv. I changed it to `all_labels` but I'm sure there is better.\r\nLooks good!\r\n\r\n> * The rationales are lists of integers, extracted as a string at first. I wonder what's the best way to treat them, any idea? 
Also, I couldn't manage to make a `Sequence` of `int8` but I'm sure I've missed something simple.\r\n\r\nHaving the feature declared as `\"rationale_en\": datasets.Sequence(datasets.Value(\"int32\"))` should work\r\n\r\n> \r\n> Thanks in advance\r\n\r\nHope that helps you out! Don't forget to `make style`, rebase from master, and run all the tests before pushing again! You will also need to add a `README.md` as described in the guide:\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card","Thanks a lot for the answer, that does help a lot !\r\nI opened a PR for a License in the original repo so I was waiting for that for the model card. If there is no news on Monday, I'll add it without License. ","Looks good! It looks like it might need a rebase to pass the tests. Once you do that, should be good to go!"],"created_at":1607114711000,"updated_at":1608134478000,"closed_at":1608134478000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1139","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1139","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1139.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1139.patch"},"body":"This PR adds the **ReFreSD dataset**. \r\nThe original data is hosted [on this github repo](https:\/\/github.com\/Elbria\/xling-SemDiv) and we use the `REFreSD_rationale` to expose all the data. \r\n\r\n\r\nNeed feedback on:\r\n- I couldn't generate the dummy data. The file we download is a tsv file, but without extension, I suppose this is the problem. I'm sure there is a simple trick to make this work. \r\n- The feature names. \r\n - I don't know if it's better to stick to the classic `sentence1`, `sentence2` or to `sentence_en`, `sentence_fr` to be more explicit. \r\n - There is a binary label (called `label`, no problem here), and a 3-class label called `#3_labels` in the original tsv. I changed it to `all_labels` but I'm sure there is better. \r\n- The rationales are lists of integers, extracted as a string at first. I wonder what's the best way to treat them, any idea? Also, I couldn't manage to make a `Sequence` of `int8` but I'm sure I've missed something simple. 
\r\n\r\nThanks in advance ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1139\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1138","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1138\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1138\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1138\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1138","id":757378406,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNzY1NTI2","number":1138,"title":"updated after the class name update","user":{"login":"timpal0l","id":6556710,"node_id":"MDQ6VXNlcjY1NTY3MTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6556710?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/timpal0l","html_url":"https:\/\/github.com\/timpal0l","followers_url":"https:\/\/api.github.com\/users\/timpal0l\/followers","following_url":"https:\/\/api.github.com\/users\/timpal0l\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/timpal0l\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/timpal0l\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/timpal0l\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/timpal0l\/orgs","repos_url":"https:\/\/api.github.com\/users\/timpal0l\/repos","events_url":"https:\/\/api.github.com\/users\/timpal0l\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/timpal0l\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607113183000,"updated_at":1607183012000,"closed_at":1607183012000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1138","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1138","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1138.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1138.patch"},"body":"@lhoestq <--- ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1138\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1137","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1137\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1137\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1137\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1137","id":757358145,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNzQ4NDAx","number":1137,"title":"add wmt mlqe 2020 shared 
task","user":{"login":"VictorSanh","id":16107619,"node_id":"MDQ6VXNlcjE2MTA3NjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16107619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/VictorSanh","html_url":"https:\/\/github.com\/VictorSanh","followers_url":"https:\/\/api.github.com\/users\/VictorSanh\/followers","following_url":"https:\/\/api.github.com\/users\/VictorSanh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/VictorSanh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/VictorSanh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/VictorSanh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/VictorSanh\/orgs","repos_url":"https:\/\/api.github.com\/users\/VictorSanh\/repos","events_url":"https:\/\/api.github.com\/users\/VictorSanh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/VictorSanh\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["re-created in #1218 because this was too messy"],"created_at":1607111134000,"updated_at":1607284784000,"closed_at":1607284426000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1137","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1137","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1137.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1137.patch"},"body":"First commit for Shared task 1 (wmt_mlqw_task1) of WMT20 MLQE (quality estimation of machine translation)\r\nNote that I copied the tags in the README for only one (of the 7 configurations): `en-de`.\r\nThere is one configuration for each pair of languages.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1137\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1136","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1136\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1136\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1136\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1136","id":757341607,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNzM0MzQ4","number":1136,"title":"minor change in description in paws-x.py and updated 
dataset_infos","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607109469000,"updated_at":1607277777000,"closed_at":1607277777000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1136","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1136","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1136.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1136.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1136\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1135","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1135\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1135\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1135\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1135","id":757325741,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNzIxMDIz","number":1135,"title":"added 
paws","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607107958000,"updated_at":1607534233000,"closed_at":1607534233000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1135","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1135","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1135.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1135.patch"},"body":"Updating README and tags for dataset card in a while","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1135\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1134","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1134\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1134\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1134\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1134","id":757317651,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNzE0MjQ2","number":1134,"title":"adding xquad-r 
dataset","user":{"login":"manandey","id":6687858,"node_id":"MDQ6VXNlcjY2ODc4NTg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6687858?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/manandey","html_url":"https:\/\/github.com\/manandey","followers_url":"https:\/\/api.github.com\/users\/manandey\/followers","following_url":"https:\/\/api.github.com\/users\/manandey\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/manandey\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/manandey\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/manandey\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/manandey\/orgs","repos_url":"https:\/\/api.github.com\/users\/manandey\/repos","events_url":"https:\/\/api.github.com\/users\/manandey\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/manandey\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607107153000,"updated_at":1607187047000,"closed_at":1607187047000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1134","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1134","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1134.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1134.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1134\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1133","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1133\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1133\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1133\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1133","id":757307660,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNzA1ODQ4","number":1133,"title":"Adding XQUAD-R 
Dataset","user":{"login":"manandey","id":6687858,"node_id":"MDQ6VXNlcjY2ODc4NTg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6687858?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/manandey","html_url":"https:\/\/github.com\/manandey","followers_url":"https:\/\/api.github.com\/users\/manandey\/followers","following_url":"https:\/\/api.github.com\/users\/manandey\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/manandey\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/manandey\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/manandey\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/manandey\/orgs","repos_url":"https:\/\/api.github.com\/users\/manandey\/repos","events_url":"https:\/\/api.github.com\/users\/manandey\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/manandey\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607106149000,"updated_at":1607106534000,"closed_at":1607106529000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1133","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1133","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1133.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1133.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1133\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1132","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1132\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1132\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1132\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1132","id":757301368,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNzAwNTY5","number":1132,"title":"Add Urdu Sentiment Corpus 
(USC).","user":{"login":"chaitnayabasava","id":44389205,"node_id":"MDQ6VXNlcjQ0Mzg5MjA1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/44389205?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/chaitnayabasava","html_url":"https:\/\/github.com\/chaitnayabasava","followers_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/followers","following_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/orgs","repos_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/repos","events_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607105544000,"updated_at":1607115168000,"closed_at":1607115168000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1132","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1132","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1132.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1132.patch"},"body":"Added Urdu Sentiment Corpus. More details about the dataset over <a href=\"https:\/\/github.com\/MuhammadYaseenKhan\/Urdu-Sentiment-Corpus\">here<\/a>.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1132\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1131","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1131\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1131\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1131\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1131","id":757278341,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNjgxMTI0","number":1131,"title":"Adding XQUAD-R 
Dataset","user":{"login":"manandey","id":6687858,"node_id":"MDQ6VXNlcjY2ODc4NTg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6687858?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/manandey","html_url":"https:\/\/github.com\/manandey","followers_url":"https:\/\/api.github.com\/users\/manandey\/followers","following_url":"https:\/\/api.github.com\/users\/manandey\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/manandey\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/manandey\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/manandey\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/manandey\/orgs","repos_url":"https:\/\/api.github.com\/users\/manandey\/repos","events_url":"https:\/\/api.github.com\/users\/manandey\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/manandey\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607103343000,"updated_at":1607106442000,"closed_at":1607106442000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1131","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1131","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1131.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1131.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1131\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1130","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1130\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1130\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1130\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1130","id":757265075,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNjY5ODY0","number":1130,"title":"adding discovery","user":{"login":"sileod","id":9168444,"node_id":"MDQ6VXNlcjkxNjg0NDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9168444?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sileod","html_url":"https:\/\/github.com\/sileod","followers_url":"https:\/\/api.github.com\/users\/sileod\/followers","following_url":"https:\/\/api.github.com\/users\/sileod\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sileod\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sileod\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sileod\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sileod\/orgs","repos_url":"https:\/\/api.github.com\/users\/sileod\/repos","events_url":"https:\/\/api.github.com\/users\/sileod\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sileod\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on 
master"],"created_at":1607102214000,"updated_at":1607950994000,"closed_at":1607950994000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1130","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1130","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1130.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1130.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1130\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1129","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1129\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1129\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1129\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1129","id":757255492,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNjYxNzM2","number":1129,"title":"Adding initial version of cord-19 dataset","user":{"login":"ggdupont","id":5583410,"node_id":"MDQ6VXNlcjU1ODM0MTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5583410?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ggdupont","html_url":"https:\/\/github.com\/ggdupont","followers_url":"https:\/\/api.github.com\/users\/ggdupont\/followers","following_url":"https:\/\/api.github.com\/users\/ggdupont\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ggdupont\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ggdupont\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ggdupont\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ggdupont\/orgs","repos_url":"https:\/\/api.github.com\/users\/ggdupont\/repos","events_url":"https:\/\/api.github.com\/users\/ggdupont\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ggdupont\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @ggdupont !\r\nHave you had a chance to take a look at my suggestions ?\r\nFeel free to ping me if you have questions or when you're ready for a review","> Hi @ggdupont !\r\n> Have you had a chance to take a look at my suggestions ?\r\n> Feel free to ping me if you have questions or when you're ready for a review\r\n\r\nYes I did, just busy period (and no time on weekend right now ;-) )","With some delay, reduced the dummy data and had t rebase","Thanks !\r\n\r\nIt looks like the rebase messed up the github diff for this PR (2.000+ files changed)\r\nCould you create another branch and another PR please ?","Cleaned PR: https:\/\/github.com\/huggingface\/datasets\/pull\/1850"],"created_at":1607101397000,"updated_at":1612866155000,"closed_at":1612865886000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1129","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1129","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1129.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1129.patch"},"body":"Initial version only reading the metadata in CSV.\r\n\r\n### 
Checklist:\r\n- [x] Create the dataset script \/datasets\/my_dataset\/my_dataset.py using the template\r\n- [x] Fill the _DESCRIPTION and _CITATION variables\r\n- [x] Implement _infos(), _split_generators() and _generate_examples()\r\n- [x] Make sure that the BUILDER_CONFIGS class attribute is filled with the different configurations of the dataset and that the BUILDER_CONFIG_CLASS is specified if there is a custom config class.\r\n- [x] Generate the metadata file dataset_infos.json for all configurations\r\n- [x] Generate the dummy data dummy_data.zip files to have the dataset script tested and that they don't weigh too much (<50KB)\r\n- [x] Add the dataset card README.md using the template and at least fill the tags\r\n- [x] Both tests for the real data and the dummy data pass.\r\n\r\n### TODO:\r\n- [x] add more metadata\r\n- [x] add full text\r\n- [x] add pre-computed document embedding","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1129\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1128","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1128\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1128\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1128\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1128","id":757245404,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNjUzMzgy","number":1128,"title":"Add xquad-r dataset","user":{"login":"manandey","id":6687858,"node_id":"MDQ6VXNlcjY2ODc4NTg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6687858?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/manandey","html_url":"https:\/\/github.com\/manandey","followers_url":"https:\/\/api.github.com\/users\/manandey\/followers","following_url":"https:\/\/api.github.com\/users\/manandey\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/manandey\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/manandey\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/manandey\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/manandey\/orgs","repos_url":"https:\/\/api.github.com\/users\/manandey\/repos","events_url":"https:\/\/api.github.com\/users\/manandey\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/manandey\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607100533000,"updated_at":1607105670000,"closed_at":1607105666000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1128","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1128","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1128.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1128.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1128\/timeline","performed_via_github_app":null,"is_pull_request":true} 
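The checklist in the cord-19 pull request above maps onto the standard layout of a dataset script. A rough, hypothetical skeleton (not the cord-19 script itself) is sketched below; the class, config, URL and feature names are placeholders, and the `_infos()` item in the checklist corresponds to the builder's `_info()` method.

```python
# datasets/my_dataset/my_dataset.py -- hypothetical skeleton, for illustration only.
import csv

import datasets

_DESCRIPTION = "Short description of the dataset."
_CITATION = "BibTeX citation for the dataset."
_URL = "https://example.com/metadata.csv"  # placeholder download location


class MyDatasetConfig(datasets.BuilderConfig):
    """Custom configuration class, referenced by BUILDER_CONFIG_CLASS below."""


class MyDataset(datasets.GeneratorBasedBuilder):
    BUILDER_CONFIG_CLASS = MyDatasetConfig
    BUILDER_CONFIGS = [
        MyDatasetConfig(name="metadata", version=datasets.Version("1.0.0")),
    ]

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            citation=_CITATION,
            features=datasets.Features(
                {
                    "title": datasets.Value("string"),
                    "abstract": datasets.Value("string"),
                }
            ),
        )

    def _split_generators(self, dl_manager):
        # Download the (placeholder) metadata file and pass its local path
        # to _generate_examples through gen_kwargs.
        path = dl_manager.download_and_extract(_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"filepath": path},
            )
        ]

    def _generate_examples(self, filepath):
        # Yield (key, example) pairs matching the features declared in _info().
        with open(filepath, encoding="utf-8") as f:
            for idx, row in enumerate(csv.DictReader(f)):
                yield idx, {"title": row["title"], "abstract": row["abstract"]}
```

The remaining checklist items are generated around such a script, e.g. the dataset_infos.json metadata file with `datasets-cli test ./datasets/my_dataset --save_infos --all_configs` (the command quoted later in these records for the mrqa dataset), plus the dummy data archive and the README.md dataset card.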
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1127","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1127\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1127\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1127\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1127","id":757229684,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNjQwMjMx","number":1127,"title":"Add wikiqaar dataset","user":{"login":"zaidalyafeai","id":15667714,"node_id":"MDQ6VXNlcjE1NjY3NzE0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15667714?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/zaidalyafeai","html_url":"https:\/\/github.com\/zaidalyafeai","followers_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/followers","following_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/orgs","repos_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/repos","events_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607099178000,"updated_at":1607359181000,"closed_at":1607359181000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1127","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1127","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1127.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1127.patch"},"body":"Arabic Wiki Question Answering Corpus.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1127\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1126","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1126\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1126\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1126\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1126","id":757197735,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNjEzNzcw","number":1126,"title":"Adding babi 
dataset","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This is ok now @lhoestq\r\n\r\nI've included the tweak to `dummy_data` to only use the data transmitted to `_generate_examples` by default (it only do that if it can find at least one path to an existing file in the `gen_kwargs` and this can be unactivated with a flag).\r\n\r\nShould I extract it in another PR or is it ok like this?","Nice !\r\nCould you add the dummy data generation trick in another PR ?\r\nI think we can also extend it to make it work not only with data files paths but also with data directories (sometimes it's one of the parent directory that is passed to gen_kwargs, not the actual path to the file).\r\nThis will help a lot to make the dummy data lighter !","This PR can be closed due to #2053 @lhoestq\r\n\r\n"],"created_at":1607096554000,"updated_at":1617097444000,"closed_at":1617097444000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1126","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1126","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1126.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1126.patch"},"body":"Adding the English version of bAbI.\r\n\r\nSamples are taken from ParlAI for consistency with the main users at the moment.\r\n\r\nSupersede #945 (problem with the rebase) and adresses the issues mentioned in the review (dummy data are smaller now and code comments are fixed).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1126\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1125","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1125\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1125\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1125\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1125","id":757194531,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNjExMDU5","number":1125,"title":"Add Urdu fake news 
dataset.","user":{"login":"chaitnayabasava","id":44389205,"node_id":"MDQ6VXNlcjQ0Mzg5MjA1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/44389205?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/chaitnayabasava","html_url":"https:\/\/github.com\/chaitnayabasava","followers_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/followers","following_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/orgs","repos_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/repos","events_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq looks like a lot of files were updated... shall I create a new PR?","Hi @chaitnayabasava ! you can try rebasing and see if that fixes the number of files changed, otherwise please do open a new PR with only the relevant files and close this one :) ","Created a new PR #1230.\r\nclosing this one :)"],"created_at":1607096297000,"updated_at":1607311265000,"closed_at":1607311265000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1125","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1125","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1125.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1125.patch"},"body":"Added Urdu fake news dataset. 
More information about the dataset can be found <a href=\"https:\/\/github.com\/MaazAmjad\/Datasets-for-Urdu-news\">here<\/a>.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1125\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1124","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1124\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1124\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1124\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1124","id":757186983,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNjA0NzY3","number":1124,"title":"Add Xitsonga Ner","user":{"login":"yvonnegitau","id":7923902,"node_id":"MDQ6VXNlcjc5MjM5MDI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7923902?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yvonnegitau","html_url":"https:\/\/github.com\/yvonnegitau","followers_url":"https:\/\/api.github.com\/users\/yvonnegitau\/followers","following_url":"https:\/\/api.github.com\/users\/yvonnegitau\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yvonnegitau\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yvonnegitau\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yvonnegitau\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yvonnegitau\/orgs","repos_url":"https:\/\/api.github.com\/users\/yvonnegitau\/repos","events_url":"https:\/\/api.github.com\/users\/yvonnegitau\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yvonnegitau\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["looks like this PR includes changes about many files other than the ones related to xitsonga NER\r\n\r\ncould you create another branch and another PR please ?"],"created_at":1607095664000,"updated_at":1607279495000,"closed_at":1607279495000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1124","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1124","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1124.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1124.patch"},"body":"Clean Xitsonga Ner PR","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1124\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1123","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1123\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1123\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1123\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1123","id":757181014,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNTk5ODQ3","number":1123,"title":"adding cdt 
dataset","user":{"login":"abecadel","id":1654113,"node_id":"MDQ6VXNlcjE2NTQxMTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1654113?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abecadel","html_url":"https:\/\/github.com\/abecadel","followers_url":"https:\/\/api.github.com\/users\/abecadel\/followers","following_url":"https:\/\/api.github.com\/users\/abecadel\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abecadel\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abecadel\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abecadel\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abecadel\/orgs","repos_url":"https:\/\/api.github.com\/users\/abecadel\/repos","events_url":"https:\/\/api.github.com\/users\/abecadel\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abecadel\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["the `ms_terms` formatting CI fails is fixed on master","merging since the CI is fixed on master"],"created_at":1607095176000,"updated_at":1607101556000,"closed_at":1607101556000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1123","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1123","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1123.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1123.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1123\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1122","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1122\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1122\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1122\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1122","id":757176172,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNTk1ODE5","number":1122,"title":"Add Urdu fake 
news.","user":{"login":"chaitnayabasava","id":44389205,"node_id":"MDQ6VXNlcjQ0Mzg5MjA1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/44389205?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/chaitnayabasava","html_url":"https:\/\/github.com\/chaitnayabasava","followers_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/followers","following_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/orgs","repos_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/repos","events_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607094790000,"updated_at":1607095207000,"closed_at":1607095207000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1122","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1122","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1122.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1122.patch"},"body":"Added Urdu fake news dataset. More information about the dataset can be found <a href=\"https:\/\/github.com\/MaazAmjad\/Datasets-for-Urdu-news\">here<\/a>.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1122\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1121","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1121\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1121\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1121\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1121","id":757169944,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNTkwNjY2","number":1121,"title":"adding cdt 
dataset","user":{"login":"abecadel","id":1654113,"node_id":"MDQ6VXNlcjE2NTQxMTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1654113?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abecadel","html_url":"https:\/\/github.com\/abecadel","followers_url":"https:\/\/api.github.com\/users\/abecadel\/followers","following_url":"https:\/\/api.github.com\/users\/abecadel\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abecadel\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abecadel\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abecadel\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abecadel\/orgs","repos_url":"https:\/\/api.github.com\/users\/abecadel\/repos","events_url":"https:\/\/api.github.com\/users\/abecadel\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abecadel\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607094273000,"updated_at":1607095009000,"closed_at":1607095009000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1121","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1121","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1121.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1121.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1121\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1120","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1120\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1120\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1120\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1120","id":757166342,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNTg3Njk1","number":1120,"title":"Add conda environment 
activation","user":{"login":"parmarsuraj99","id":9317265,"node_id":"MDQ6VXNlcjkzMTcyNjU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9317265?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/parmarsuraj99","html_url":"https:\/\/github.com\/parmarsuraj99","followers_url":"https:\/\/api.github.com\/users\/parmarsuraj99\/followers","following_url":"https:\/\/api.github.com\/users\/parmarsuraj99\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/parmarsuraj99\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/parmarsuraj99\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/parmarsuraj99\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/parmarsuraj99\/orgs","repos_url":"https:\/\/api.github.com\/users\/parmarsuraj99\/repos","events_url":"https:\/\/api.github.com\/users\/parmarsuraj99\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/parmarsuraj99\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607093983000,"updated_at":1607106888000,"closed_at":1607100057000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1120","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1120","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1120.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1120.patch"},"body":"Added activation of Conda environment before installing.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1120\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1119","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1119\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1119\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1119\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1119","id":757156781,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNTc5ODA5","number":1119,"title":"Add Google Great Code 
Dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607093188000,"updated_at":1607275994000,"closed_at":1607275993000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1119","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1119","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1119.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1119.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1119\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1118","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1118\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1118\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1118\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1118","id":757142350,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNTY3ODMw","number":1118,"title":"Add Tashkeela dataset","user":{"login":"zaidalyafeai","id":15667714,"node_id":"MDQ6VXNlcjE1NjY3NzE0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15667714?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/zaidalyafeai","html_url":"https:\/\/github.com\/zaidalyafeai","followers_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/followers","following_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/orgs","repos_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/repos","events_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Sorry @lhoestq 
for the trouble, sometime I forget to change the names :\/","> Sorry @lhoestq for the trouble, sometime I forget to change the names :\/\r\n\r\nhaha it's ok ;)"],"created_at":1607091978000,"updated_at":1607096821000,"closed_at":1607096811000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1118","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1118","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1118.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1118.patch"},"body":"Arabic Vocalized Words Dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1118\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1117","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1117\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1117\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1117\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1117","id":757133789,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNTYwNzM4","number":1117,"title":"Fix incorrect MRQA train+SQuAD URL","user":{"login":"jimmycode","id":6259768,"node_id":"MDQ6VXNlcjYyNTk3Njg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6259768?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jimmycode","html_url":"https:\/\/github.com\/jimmycode","followers_url":"https:\/\/api.github.com\/users\/jimmycode\/followers","following_url":"https:\/\/api.github.com\/users\/jimmycode\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jimmycode\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jimmycode\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jimmycode\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jimmycode\/orgs","repos_url":"https:\/\/api.github.com\/users\/jimmycode\/repos","events_url":"https:\/\/api.github.com\/users\/jimmycode\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jimmycode\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks ! could you regenerate the dataset_infos.json file ?\r\n\r\n```\r\ndatasets-cli test .\/datasets\/mrqa --save_infos --all_configs --ignore_verifications\r\n```\r\n\r\nalso cc @VictorSanh ","Oooops, good catch @jimmycode ","> Thanks ! 
could you regenerate the dataset_infos.json file ?\r\n> \r\n> ```\r\n> datasets-cli test .\/datasets\/mrqa --save_infos --all_configs --ignore_verifications\r\n> ```\r\n> \r\n> also cc @VictorSanh\r\n\r\nUpdated the `dataset_infos.json` file."],"created_at":1607091266000,"updated_at":1607274851000,"closed_at":1607274850000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1117","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1117","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1117.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1117.patch"},"body":"Fix issue #1115 \r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1117\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1116","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1116\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1116\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1116\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1116","id":757133502,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNTYwNDk4","number":1116,"title":"add dbpedia_14 dataset","user":{"login":"hfawaz","id":29229602,"node_id":"MDQ6VXNlcjI5MjI5NjAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29229602?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hfawaz","html_url":"https:\/\/github.com\/hfawaz","followers_url":"https:\/\/api.github.com\/users\/hfawaz\/followers","following_url":"https:\/\/api.github.com\/users\/hfawaz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hfawaz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hfawaz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hfawaz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hfawaz\/orgs","repos_url":"https:\/\/api.github.com\/users\/hfawaz\/repos","events_url":"https:\/\/api.github.com\/users\/hfawaz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hfawaz\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for the review. \r\nCheers!","Hi @hfawaz, this week we are doing the \ud83e\udd17 `datasets` sprint (see some details [here](https:\/\/discuss.huggingface.co\/t\/open-to-the-community-one-week-team-effort-to-reach-v2-0-of-hf-datasets-library\/2176)).\r\n\r\nNothing more to do on your side but it means that if you register on the thread I linked above, you can have some goodies for the present dataset that you have already added (and a special goodie if you want to spend more time and add 2 other datasets as well).\r\n\r\nIf you want to join, just tell me (or post on the thread on the HuggingFace forum: https:\/\/discuss.huggingface.co\/t\/open-to-the-community-one-week-team-effort-to-reach-v2-0-of-hf-datasets-library\/2176)","Hello @thomwolf \r\nThanks for the feedback and for this invitation, indeed I would be glad to join you guys (you can add me). \r\nI will see if I have the time to implement a couple of datasets. \r\nCheers! 
","@hfawaz invited you to the slack with your uha email.\r\n\r\nCheck your spam folder if you can't find the invitation :)","Oh thanks, but can you invite me on my gmail: hassanismailfawaz@gmail.com \r\nUHA is my old organization, I haven't had the time to update my online profiles yet.\r\nThank you "],"created_at":1607091239000,"updated_at":1607335614000,"closed_at":1607182583000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1116","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1116","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1116.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1116.patch"},"body":"This dataset corresponds to the DBpedia dataset requested in https:\/\/github.com\/huggingface\/datasets\/issues\/353.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1116\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1115","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1115\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1115\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1115\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1115","id":757127527,"node_id":"MDU6SXNzdWU3NTcxMjc1Mjc=","number":1115,"title":"Incorrect URL for MRQA SQuAD train subset","user":{"login":"jimmycode","id":6259768,"node_id":"MDQ6VXNlcjYyNTk3Njg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6259768?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jimmycode","html_url":"https:\/\/github.com\/jimmycode","followers_url":"https:\/\/api.github.com\/users\/jimmycode\/followers","following_url":"https:\/\/api.github.com\/users\/jimmycode\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jimmycode\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jimmycode\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jimmycode\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jimmycode\/orgs","repos_url":"https:\/\/api.github.com\/users\/jimmycode\/repos","events_url":"https:\/\/api.github.com\/users\/jimmycode\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jimmycode\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["good catch !"],"created_at":1607090724000,"updated_at":1607274862000,"closed_at":1607274862000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"https:\/\/github.com\/huggingface\/datasets\/blob\/4ef4c8f8b7a60e35c6fa21115fca9faae91c9f74\/datasets\/mrqa\/mrqa.py#L53\r\n\r\nThe URL for `train+SQuAD` subset of MRQA points to the dev set instead of train set. 
It should be `https:\/\/s3.us-east-2.amazonaws.com\/mrqa\/release\/v2\/train\/SQuAD.jsonl.gz`.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1115\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1114","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1114\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1114\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1114\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1114","id":757123638,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNTUyMjE1","number":1114,"title":"Add sesotho ner corpus","user":{"login":"yvonnegitau","id":7923902,"node_id":"MDQ6VXNlcjc5MjM5MDI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7923902?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yvonnegitau","html_url":"https:\/\/github.com\/yvonnegitau","followers_url":"https:\/\/api.github.com\/users\/yvonnegitau\/followers","following_url":"https:\/\/api.github.com\/users\/yvonnegitau\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yvonnegitau\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yvonnegitau\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yvonnegitau\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yvonnegitau\/orgs","repos_url":"https:\/\/api.github.com\/users\/yvonnegitau\/repos","events_url":"https:\/\/api.github.com\/users\/yvonnegitau\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yvonnegitau\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607090381000,"updated_at":1607094127000,"closed_at":1607094127000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1114","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1114","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1114.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1114.patch"},"body":"Clean Sesotho PR","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1114\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1113","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1113\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1113\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1113\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1113","id":757115557,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNTQ1Mzg2","number":1113,"title":"add 
qed","user":{"login":"patil-suraj","id":27137566,"node_id":"MDQ6VXNlcjI3MTM3NTY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27137566?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patil-suraj","html_url":"https:\/\/github.com\/patil-suraj","followers_url":"https:\/\/api.github.com\/users\/patil-suraj\/followers","following_url":"https:\/\/api.github.com\/users\/patil-suraj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patil-suraj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patil-suraj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patil-suraj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patil-suraj\/orgs","repos_url":"https:\/\/api.github.com\/users\/patil-suraj\/repos","events_url":"https:\/\/api.github.com\/users\/patil-suraj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patil-suraj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607089677000,"updated_at":1607183181000,"closed_at":1607182917000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1113","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1113","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1113.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1113.patch"},"body":"adding QED: Dataset for Explanations in Question Answering\r\nhttps:\/\/github.com\/google-research-datasets\/QED\r\nhttps:\/\/arxiv.org\/abs\/2009.06354","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1113\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1112","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1112\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1112\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1112\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1112","id":757108151,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNTM5MjE2","number":1112,"title":"Initial version of cord-19 dataset from AllenAI with only the 
abstract","user":{"login":"ggdupont","id":5583410,"node_id":"MDQ6VXNlcjU1ODM0MTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5583410?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ggdupont","html_url":"https:\/\/github.com\/ggdupont","followers_url":"https:\/\/api.github.com\/users\/ggdupont\/followers","following_url":"https:\/\/api.github.com\/users\/ggdupont\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ggdupont\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ggdupont\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ggdupont\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ggdupont\/orgs","repos_url":"https:\/\/api.github.com\/users\/ggdupont\/repos","events_url":"https:\/\/api.github.com\/users\/ggdupont\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ggdupont\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["too ugly, I'll make a clean one"],"created_at":1607088999000,"updated_at":1607098600000,"closed_at":1607098584000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1112","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1112","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1112.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1112.patch"},"body":"Initial version only reading the metadata in CSV.\r\n\r\n### Checklist:\r\n- [x] Create the dataset script \/datasets\/my_dataset\/my_dataset.py using the template\r\n- [x] Fill the _DESCRIPTION and _CITATION variables\r\n- [x] Implement _infos(), _split_generators() and _generate_examples()\r\n- [x] Make sure that the BUILDER_CONFIGS class attribute is filled with the different configurations of the dataset and that the BUILDER_CONFIG_CLASS is specified if there is a custom config class.\r\n- [x] Generate the metadata file dataset_infos.json for all configurations\r\n- [x] Generate the dummy data dummy_data.zip files to have the dataset script tested and that they don't weigh too much (<50KB)\r\n- [x] Add the dataset card README.md using the template and at least fill the tags\r\n- [ ] Both tests for the real data and the dummy data pass.\r\n\r\n### TODO:\r\n- [ ] add more metadata\r\n- [ ] add full text\r\n- [ ] add pre-computed document embedding","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1112\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1111","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1111\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1111\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1111\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1111","id":757083266,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNTE4NDY1","number":1111,"title":"Add Siswati Ner 
corpus","user":{"login":"yvonnegitau","id":7923902,"node_id":"MDQ6VXNlcjc5MjM5MDI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7923902?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yvonnegitau","html_url":"https:\/\/github.com\/yvonnegitau","followers_url":"https:\/\/api.github.com\/users\/yvonnegitau\/followers","following_url":"https:\/\/api.github.com\/users\/yvonnegitau\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yvonnegitau\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yvonnegitau\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yvonnegitau\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yvonnegitau\/orgs","repos_url":"https:\/\/api.github.com\/users\/yvonnegitau\/repos","events_url":"https:\/\/api.github.com\/users\/yvonnegitau\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yvonnegitau\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607086651000,"updated_at":1607092981000,"closed_at":1607092980000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1111","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1111","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1111.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1111.patch"},"body":"Clean Siswati PR","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1111\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1110","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1110\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1110\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1110\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1110","id":757082677,"node_id":"MDU6SXNzdWU3NTcwODI2Nzc=","number":1110,"title":"Using a feature named \"_type\" fails with certain operations","user":{"login":"dcfidalgo","id":15979778,"node_id":"MDQ6VXNlcjE1OTc5Nzc4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15979778?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dcfidalgo","html_url":"https:\/\/github.com\/dcfidalgo","followers_url":"https:\/\/api.github.com\/users\/dcfidalgo\/followers","following_url":"https:\/\/api.github.com\/users\/dcfidalgo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dcfidalgo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dcfidalgo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dcfidalgo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dcfidalgo\/orgs","repos_url":"https:\/\/api.github.com\/users\/dcfidalgo\/repos","events_url":"https:\/\/api.github.com\/users\/dcfidalgo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dcfidalgo\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting !\r\n\r\nIndeed this is a keyword in 
the library that is used to encode\/decode features to a python dictionary that we can save\/load to json.\r\nWe can probably change `_type` to something that is less likely to collide with user feature names.\r\nIn this case we would want something backward compatible though.\r\n\r\nFeel free to try a fix and open a PR, and to ping me if I can help :) "],"created_at":1607086593000,"updated_at":1610535208000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"A column named `_type` leads to a `TypeError: unhashable type: 'dict'` for certain operations:\r\n```python\r\nfrom datasets import Dataset, concatenate_datasets\r\n\r\nds = Dataset.from_dict({\"_type\": [\"whatever\"]}).map()\r\nconcatenate_datasets([ds])\r\n# or simply\r\nDataset(ds._data)\r\n```\r\nContext: We are using datasets to persist data coming from elasticsearch to feed to our pipeline, and elasticsearch has a `_type` field, hence the strange name of the column.\r\n\r\nNot sure if you wish to support this specific column name, but if you do i would be happy to try a fix and provide a PR. I already had a look into it and i think the culprit is the `datasets.features.generate_from_dict` function. It uses the hard coded `_type` string to figure out if it reached the end of the nested feature object from a serialized dict.\r\n\r\nBest wishes and keep up the awesome work!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1110\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1109","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1109\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1109\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1109\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1109","id":757055702,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNDk1MDk2","number":1109,"title":"add 
woz_dialogue","user":{"login":"patil-suraj","id":27137566,"node_id":"MDQ6VXNlcjI3MTM3NTY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27137566?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patil-suraj","html_url":"https:\/\/github.com\/patil-suraj","followers_url":"https:\/\/api.github.com\/users\/patil-suraj\/followers","following_url":"https:\/\/api.github.com\/users\/patil-suraj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patil-suraj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patil-suraj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patil-suraj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patil-suraj\/orgs","repos_url":"https:\/\/api.github.com\/users\/patil-suraj\/repos","events_url":"https:\/\/api.github.com\/users\/patil-suraj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patil-suraj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607083987000,"updated_at":1607182883000,"closed_at":1607182818000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1109","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1109","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1109.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1109.patch"},"body":"Adding Wizard-of-Oz task oriented dialogue dataset \r\nhttps:\/\/github.com\/nmrksic\/neural-belief-tracker\/tree\/master\/data\/woz\r\nhttps:\/\/arxiv.org\/abs\/1604.04562","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1109\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1108","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1108\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1108\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1108\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1108","id":757054732,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNDk0MjY4","number":1108,"title":"Add Sepedi NER 
corpus","user":{"login":"yvonnegitau","id":7923902,"node_id":"MDQ6VXNlcjc5MjM5MDI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7923902?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yvonnegitau","html_url":"https:\/\/github.com\/yvonnegitau","followers_url":"https:\/\/api.github.com\/users\/yvonnegitau\/followers","following_url":"https:\/\/api.github.com\/users\/yvonnegitau\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yvonnegitau\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yvonnegitau\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yvonnegitau\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yvonnegitau\/orgs","repos_url":"https:\/\/api.github.com\/users\/yvonnegitau\/repos","events_url":"https:\/\/api.github.com\/users\/yvonnegitau\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yvonnegitau\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607083884000,"updated_at":1607092740000,"closed_at":1607092740000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1108","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1108","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1108.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1108.patch"},"body":"Finally a clean PR for Sepedi","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1108\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1107","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1107\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1107\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1107\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1107","id":757031179,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNDc0MzMy","number":1107,"title":"Add arsentd_lev dataset","user":{"login":"moussaKam","id":28675016,"node_id":"MDQ6VXNlcjI4Njc1MDE2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28675016?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/moussaKam","html_url":"https:\/\/github.com\/moussaKam","followers_url":"https:\/\/api.github.com\/users\/moussaKam\/followers","following_url":"https:\/\/api.github.com\/users\/moussaKam\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/moussaKam\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/moussaKam\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/moussaKam\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/moussaKam\/orgs","repos_url":"https:\/\/api.github.com\/users\/moussaKam\/repos","events_url":"https:\/\/api.github.com\/users\/moussaKam\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/moussaKam\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["thanks ! 
can you also regenerate the dataset_infos.json file please ?"],"created_at":1607081464000,"updated_at":1607182689000,"closed_at":1607182689000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1107","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1107","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1107.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1107.patch"},"body":"Add The Arabic Sentiment Twitter Dataset for Levantine dialect (ArSenTD-LEV)\r\n\r\nPaper: [ArSentD-LEV: A Multi-Topic Corpus for Target-based Sentiment Analysis in Arabic Levantine Tweets](https:\/\/arxiv.org\/abs\/1906.01830)\r\nHomepage: http:\/\/oma-project.com\/","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1107\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1106","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1106\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1106\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1106\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1106","id":757027158,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNDcwOTM3","number":1106,"title":"Add Urdu fake news","user":{"login":"chaitnayabasava","id":44389205,"node_id":"MDQ6VXNlcjQ0Mzg5MjA1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/44389205?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/chaitnayabasava","html_url":"https:\/\/github.com\/chaitnayabasava","followers_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/followers","following_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/orgs","repos_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/repos","events_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607081054000,"updated_at":1607091672000,"closed_at":1607091672000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1106","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1106","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1106.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1106.patch"},"body":"Added Urdu fake news dataset. 
More information about the dataset can be found <a href=\"https:\/\/github.com\/MaazAmjad\/Datasets-for-Urdu-news\">here<\/a>.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1106\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1105","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1105\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1105\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1105\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1105","id":757024162,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNDY4NDIw","number":1105,"title":"add xquad_r dataset","user":{"login":"manandey","id":6687858,"node_id":"MDQ6VXNlcjY2ODc4NTg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6687858?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/manandey","html_url":"https:\/\/github.com\/manandey","followers_url":"https:\/\/api.github.com\/users\/manandey\/followers","following_url":"https:\/\/api.github.com\/users\/manandey\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/manandey\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/manandey\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/manandey\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/manandey\/orgs","repos_url":"https:\/\/api.github.com\/users\/manandey\/repos","events_url":"https:\/\/api.github.com\/users\/manandey\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/manandey\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["looks like this PR includes changes in many files than the ones for xquad_r, could you create a new branch and a new PR ?","Sure, I will close this then.\r\n"],"created_at":1607080775000,"updated_at":1607099820000,"closed_at":1607099820000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1105","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1105","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1105.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1105.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1105\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1104","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1104\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1104\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1104\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1104","id":757020934,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNDY1NzA4","number":1104,"title":"add 
TLC","user":{"login":"chameleonTK","id":6429850,"node_id":"MDQ6VXNlcjY0Mjk4NTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6429850?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/chameleonTK","html_url":"https:\/\/github.com\/chameleonTK","followers_url":"https:\/\/api.github.com\/users\/chameleonTK\/followers","following_url":"https:\/\/api.github.com\/users\/chameleonTK\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/chameleonTK\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/chameleonTK\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/chameleonTK\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/chameleonTK\/orgs","repos_url":"https:\/\/api.github.com\/users\/chameleonTK\/repos","events_url":"https:\/\/api.github.com\/users\/chameleonTK\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/chameleonTK\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607080498000,"updated_at":1607092163000,"closed_at":1607092163000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1104","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1104","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1104.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1104.patch"},"body":"Added TLC dataset","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1104\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1103","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1103\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1103\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1103\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1103","id":757016820,"node_id":"MDU6SXNzdWU3NTcwMTY4MjA=","number":1103,"title":"Add support to download kaggle 
datasets","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hey, I think this is great idea. Any plan to integrate kaggle private datasets loading to `datasets`?"],"created_at":1607080117000,"updated_at":1626889093000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"We can use API key","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1103\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1102","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1102\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1102\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1102\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1102","id":757016515,"node_id":"MDU6SXNzdWU3NTcwMTY1MTU=","number":1102,"title":"Add retries to download 
manager","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":{"login":"SBrandeis","id":33657802,"node_id":"MDQ6VXNlcjMzNjU3ODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33657802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SBrandeis","html_url":"https:\/\/github.com\/SBrandeis","followers_url":"https:\/\/api.github.com\/users\/SBrandeis\/followers","following_url":"https:\/\/api.github.com\/users\/SBrandeis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SBrandeis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SBrandeis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SBrandeis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SBrandeis\/orgs","repos_url":"https:\/\/api.github.com\/users\/SBrandeis\/repos","events_url":"https:\/\/api.github.com\/users\/SBrandeis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SBrandeis\/received_events","type":"User","site_admin":false},"assignees":[{"login":"SBrandeis","id":33657802,"node_id":"MDQ6VXNlcjMzNjU3ODAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33657802?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SBrandeis","html_url":"https:\/\/github.com\/SBrandeis","followers_url":"https:\/\/api.github.com\/users\/SBrandeis\/followers","following_url":"https:\/\/api.github.com\/users\/SBrandeis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SBrandeis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SBrandeis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SBrandeis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SBrandeis\/orgs","repos_url":"https:\/\/api.github.com\/users\/SBrandeis\/repos","events_url":"https:\/\/api.github.com\/users\/SBrandeis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SBrandeis\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1607080091000,"updated_at":1608651246000,"closed_at":1608651246000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"","timeline_url":"ht
tps:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1102\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1101","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1101\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1101\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1101\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1101","id":757009226,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNDU2MDM4","number":1101,"title":"Add Wikicorpus dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq done! 
;)"],"created_at":1607079446000,"updated_at":1607537590000,"closed_at":1607537589000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1101","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1101","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1101.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1101.patch"},"body":"Add dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1101\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1100","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1100\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1100\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1100\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1100","id":756998433,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNDQ2ODc1","number":1100,"title":"Urdu fake news","user":{"login":"chaitnayabasava","id":44389205,"node_id":"MDQ6VXNlcjQ0Mzg5MjA1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/44389205?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/chaitnayabasava","html_url":"https:\/\/github.com\/chaitnayabasava","followers_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/followers","following_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/orgs","repos_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/repos","events_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607078480000,"updated_at":1607080740000,"closed_at":1607080740000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1100","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1100","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1100.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1100.patch"},"body":"Added Bend the Truth urdu fake news dataset. 
More information <a href=\"https:\/\/github.com\/MaazAmjad\/Datasets-for-Urdu-news\">here<\/a>.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1100\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1099","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1099\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1099\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1099\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1099","id":756993540,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNDQyODEw","number":1099,"title":"Add tamilmixsentiment data","user":{"login":"jamespaultg","id":7421838,"node_id":"MDQ6VXNlcjc0MjE4Mzg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7421838?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jamespaultg","html_url":"https:\/\/github.com\/jamespaultg","followers_url":"https:\/\/api.github.com\/users\/jamespaultg\/followers","following_url":"https:\/\/api.github.com\/users\/jamespaultg\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jamespaultg\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jamespaultg\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jamespaultg\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jamespaultg\/orgs","repos_url":"https:\/\/api.github.com\/users\/jamespaultg\/repos","events_url":"https:\/\/api.github.com\/users\/jamespaultg\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jamespaultg\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607078047000,"updated_at":1607236342000,"closed_at":1607186913000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1099","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1099","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1099.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1099.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1099\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1098","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1098\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1098\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1098\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1098","id":756975414,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNDI3OTE5","number":1098,"title":"Add ToTTo 
Dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607076445000,"updated_at":1607089100000,"closed_at":1607089099000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1098","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1098","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1098.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1098.patch"},"body":"Adds a brand new table to text dataset: https:\/\/github.com\/google-research-datasets\/ToTTo","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1098\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1097","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1097\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1097\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1097\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1097","id":756955729,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNDExNzQ4","number":1097,"title":"Add MSRA NER 
labels","user":{"login":"JetRunner","id":22514219,"node_id":"MDQ6VXNlcjIyNTE0MjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22514219?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JetRunner","html_url":"https:\/\/github.com\/JetRunner","followers_url":"https:\/\/api.github.com\/users\/JetRunner\/followers","following_url":"https:\/\/api.github.com\/users\/JetRunner\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JetRunner\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JetRunner\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JetRunner\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JetRunner\/orgs","repos_url":"https:\/\/api.github.com\/users\/JetRunner\/repos","events_url":"https:\/\/api.github.com\/users\/JetRunner\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JetRunner\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607074696000,"updated_at":1607088719000,"closed_at":1607088718000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1097","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1097","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1097.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1097.patch"},"body":"Fixes #940 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1097\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1096","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1096\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1096\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1096\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1096","id":756952461,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyNDA5MDIx","number":1096,"title":"FIX matinf link in 
ADD_NEW_DATASET.md","user":{"login":"moussaKam","id":28675016,"node_id":"MDQ6VXNlcjI4Njc1MDE2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28675016?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/moussaKam","html_url":"https:\/\/github.com\/moussaKam","followers_url":"https:\/\/api.github.com\/users\/moussaKam\/followers","following_url":"https:\/\/api.github.com\/users\/moussaKam\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/moussaKam\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/moussaKam\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/moussaKam\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/moussaKam\/orgs","repos_url":"https:\/\/api.github.com\/users\/moussaKam\/repos","events_url":"https:\/\/api.github.com\/users\/moussaKam\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/moussaKam\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607074405000,"updated_at":1607091935000,"closed_at":1607091935000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1096","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1096","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1096.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1096.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1096\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1095","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1095\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1095\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1095\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1095","id":756934964,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyMzk0Nzgy","number":1095,"title":"Add TupleInf Open IE Dataset","user":{"login":"mattbui","id":46804938,"node_id":"MDQ6VXNlcjQ2ODA0OTM4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/46804938?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mattbui","html_url":"https:\/\/github.com\/mattbui","followers_url":"https:\/\/api.github.com\/users\/mattbui\/followers","following_url":"https:\/\/api.github.com\/users\/mattbui\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mattbui\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mattbui\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mattbui\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mattbui\/orgs","repos_url":"https:\/\/api.github.com\/users\/mattbui\/repos","events_url":"https:\/\/api.github.com\/users\/mattbui\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mattbui\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Errors are in the CI are not related to this PR (RemoteDatasetError)\r\nthe CI is fixed on master so it's fine ","@lhoestq Added the 
dataset card. Please let me know if more information needs to be added."],"created_at":1607072887000,"updated_at":1607096454000,"closed_at":1607096454000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1095","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1095","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1095.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1095.patch"},"body":"For more information: https:\/\/allenai.org\/data\/tuple-ie","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1095\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1094","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1094\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1094\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1094\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1094","id":756927060,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyMzg5MDQ4","number":1094,"title":"add urdu fake news dataset","user":{"login":"chaitnayabasava","id":44389205,"node_id":"MDQ6VXNlcjQ0Mzg5MjA1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/44389205?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/chaitnayabasava","html_url":"https:\/\/github.com\/chaitnayabasava","followers_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/followers","following_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/orgs","repos_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/repos","events_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/chaitnayabasava\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607072258000,"updated_at":1607073656000,"closed_at":1607073656000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1094","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1094","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1094.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1094.patch"},"body":"Added Urdu fake news dataset. 
The dataset can be found <a href=\"https:\/\/github.com\/MaazAmjad\/Datasets-for-Urdu-news\">here<\/a>.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1094\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1093","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1093\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1093\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1093\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1093","id":756916565,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyMzgxNjkw","number":1093,"title":"Add NCBI Disease Corpus dataset","user":{"login":"edugp","id":17855740,"node_id":"MDQ6VXNlcjE3ODU1NzQw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17855740?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/edugp","html_url":"https:\/\/github.com\/edugp","followers_url":"https:\/\/api.github.com\/users\/edugp\/followers","following_url":"https:\/\/api.github.com\/users\/edugp\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/edugp\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/edugp\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/edugp\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/edugp\/orgs","repos_url":"https:\/\/api.github.com\/users\/edugp\/repos","events_url":"https:\/\/api.github.com\/users\/edugp\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/edugp\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607071352000,"updated_at":1607080512000,"closed_at":1607080512000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1093","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1093","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1093.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1093.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1093\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1092","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1092\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1092\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1092\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1092","id":756913134,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyMzc5MDY0","number":1092,"title":"Add Coached Conversation Preference 
Dataset","user":{"login":"vineeths96","id":50873201,"node_id":"MDQ6VXNlcjUwODczMjAx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/50873201?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vineeths96","html_url":"https:\/\/github.com\/vineeths96","followers_url":"https:\/\/api.github.com\/users\/vineeths96\/followers","following_url":"https:\/\/api.github.com\/users\/vineeths96\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vineeths96\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vineeths96\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vineeths96\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vineeths96\/orgs","repos_url":"https:\/\/api.github.com\/users\/vineeths96\/repos","events_url":"https:\/\/api.github.com\/users\/vineeths96\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vineeths96\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607071009000,"updated_at":1608471240000,"closed_at":1607089790000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1092","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1092","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1092.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1092.patch"},"body":"Adding [Coached Conversation Preference Dataset](https:\/\/research.google\/tools\/datasets\/coached-conversational-preference-elicitation\/)\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1092\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1091","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1091\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1091\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1091\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1091","id":756841254,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyMzE5MDk5","number":1091,"title":"Add Google wellformed query 
dataset","user":{"login":"vasudevgupta7","id":53136577,"node_id":"MDQ6VXNlcjUzMTM2NTc3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/53136577?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vasudevgupta7","html_url":"https:\/\/github.com\/vasudevgupta7","followers_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/followers","following_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/orgs","repos_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/repos","events_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["hope this works.."],"created_at":1607063154000,"updated_at":1607276583000,"closed_at":1607276582000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1091","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1091","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1091.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1091.patch"},"body":"This pull request will add Google wellformed_query dataset. Link of dataset is https:\/\/github.com\/google-research-datasets\/query-wellformedness","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1091\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1090","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1090\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1090\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1090\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1090","id":756825941,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyMzA1OTk1","number":1090,"title":"add 
thaisum","user":{"login":"cstorm125","id":15519308,"node_id":"MDQ6VXNlcjE1NTE5MzA4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15519308?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cstorm125","html_url":"https:\/\/github.com\/cstorm125","followers_url":"https:\/\/api.github.com\/users\/cstorm125\/followers","following_url":"https:\/\/api.github.com\/users\/cstorm125\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cstorm125\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cstorm125\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cstorm125\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cstorm125\/orgs","repos_url":"https:\/\/api.github.com\/users\/cstorm125\/repos","events_url":"https:\/\/api.github.com\/users\/cstorm125\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cstorm125\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607061288000,"updated_at":1607080566000,"closed_at":1607080566000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1090","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1090","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1090.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1090.patch"},"body":"ThaiSum, a large-scale corpus for Thai text summarization obtained from several online news websites namely Thairath, ThaiPBS, Prachathai, and The Standard. This dataset consists of over 350,000 article and summary pairs written by journalists. 
We evaluate the performance of various existing summarization models on ThaiSum dataset and analyse the characteristic of the dataset to present its difficulties.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1090\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1089","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1089\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1089\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1089\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1089","id":756823690,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyMzA0MDM2","number":1089,"title":"add sharc_modified","user":{"login":"patil-suraj","id":27137566,"node_id":"MDQ6VXNlcjI3MTM3NTY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27137566?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patil-suraj","html_url":"https:\/\/github.com\/patil-suraj","followers_url":"https:\/\/api.github.com\/users\/patil-suraj\/followers","following_url":"https:\/\/api.github.com\/users\/patil-suraj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patil-suraj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patil-suraj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patil-suraj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patil-suraj\/orgs","repos_url":"https:\/\/api.github.com\/users\/patil-suraj\/repos","events_url":"https:\/\/api.github.com\/users\/patil-suraj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patil-suraj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607060989000,"updated_at":1607078490000,"closed_at":1607077904000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1089","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1089","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1089.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1089.patch"},"body":"Adding modified ShARC dataset https:\/\/github.com\/nikhilweee\/neural-conv-qa","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1089\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1088","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1088\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1088\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1088\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1088","id":756822017,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyMzAyNjIz","number":1088,"title":"add xquad_r 
dataset","user":{"login":"manandey","id":6687858,"node_id":"MDQ6VXNlcjY2ODc4NTg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6687858?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/manandey","html_url":"https:\/\/github.com\/manandey","followers_url":"https:\/\/api.github.com\/users\/manandey\/followers","following_url":"https:\/\/api.github.com\/users\/manandey\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/manandey\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/manandey\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/manandey\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/manandey\/orgs","repos_url":"https:\/\/api.github.com\/users\/manandey\/repos","events_url":"https:\/\/api.github.com\/users\/manandey\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/manandey\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607060755000,"updated_at":1607079493000,"closed_at":1607078821000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1088","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1088","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1088.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1088.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1088\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1087","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1087\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1087\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1087\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1087","id":756794430,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyMjc5NDI3","number":1087,"title":"Add Big Patent dataset","user":{"login":"mattbui","id":46804938,"node_id":"MDQ6VXNlcjQ2ODA0OTM4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/46804938?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mattbui","html_url":"https:\/\/github.com\/mattbui","followers_url":"https:\/\/api.github.com\/users\/mattbui\/followers","following_url":"https:\/\/api.github.com\/users\/mattbui\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mattbui\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mattbui\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mattbui\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mattbui\/orgs","repos_url":"https:\/\/api.github.com\/users\/mattbui\/repos","events_url":"https:\/\/api.github.com\/users\/mattbui\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mattbui\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq reduced the dummy data size to around 19MB in total and added the dataset card.","@lhoestq so I ended up removing all the nested JSON objects in the gz 
datafile and keep only one object with minimal content: `{\"publication_number\": \"US-8230922-B2\", \"abstract\": \"dummy abstract\", \"application_number\": \"US-201113163519-A\", \"description\": \"dummy description\"}`. \r\n\r\nThey're reduced to 35KB in total (2.5KB per domain and 17.5KB for all domains), hopefully, they're small enough."],"created_at":1607056650000,"updated_at":1607275260000,"closed_at":1607275259000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1087","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1087","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1087.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1087.patch"},"body":"* More info on the dataset: https:\/\/evasharma.github.io\/bigpatent\/\r\n* There's another raw version of the dataset available from tfds. However, they're quite large so I don't have the resources to fully test all the configs for that version yet. We'll try to add it later.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1087\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1086","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1086\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1086\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1086\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1086","id":756720643,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyMjIzNDEy","number":1086,"title":"adding cdt dataset","user":{"login":"abecadel","id":1654113,"node_id":"MDQ6VXNlcjE2NTQxMTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1654113?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abecadel","html_url":"https:\/\/github.com\/abecadel","followers_url":"https:\/\/api.github.com\/users\/abecadel\/followers","following_url":"https:\/\/api.github.com\/users\/abecadel\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abecadel\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abecadel\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abecadel\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abecadel\/orgs","repos_url":"https:\/\/api.github.com\/users\/abecadel\/repos","events_url":"https:\/\/api.github.com\/users\/abecadel\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abecadel\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> Thanks for adding this one !\r\n> \r\n> I left a few comments\r\n> \r\n> after the change you'll need to regenerate the dataset_infos.json file as well\r\n\r\ndataset_infos.json regenerated","looks like this PR includes changes to many files other that the ones for CDT\r\ncould you create another branch and another PR please 
?"],"created_at":1607045291000,"updated_at":1607094242000,"closed_at":1607094242000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1086","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1086","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1086.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1086.patch"},"body":"- **Name:** *Cyberbullying Detection Task*\r\n- **Description:** *The Cyberbullying Detection task was part of 2019 edition of PolEval competition. The goal is to predict if a given Twitter message contains a cyberbullying (harmful) content.*\r\n- **Data:** *https:\/\/github.com\/ptaszynski\/cyberbullying-Polish*\r\n- **Motivation:** *The KLEJ benchmark (Kompleksowa Lista Ewaluacji J\u0119zykowych) is a set of nine evaluation tasks for the Polish language understanding.*","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1086\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1085","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1085\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1085\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1085\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1085","id":756704563,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyMjExNTA4","number":1085,"title":"add mutual friends conversational dataset","user":{"login":"VictorSanh","id":16107619,"node_id":"MDQ6VXNlcjE2MTA3NjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16107619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/VictorSanh","html_url":"https:\/\/github.com\/VictorSanh","followers_url":"https:\/\/api.github.com\/users\/VictorSanh\/followers","following_url":"https:\/\/api.github.com\/users\/VictorSanh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/VictorSanh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/VictorSanh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/VictorSanh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/VictorSanh\/orgs","repos_url":"https:\/\/api.github.com\/users\/VictorSanh\/repos","events_url":"https:\/\/api.github.com\/users\/VictorSanh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/VictorSanh\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Ready for review"],"created_at":1607042901000,"updated_at":1608134311000,"closed_at":1608134310000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1085","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1085","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1085.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1085.patch"},"body":"Mutual friends dataset\r\nWIP\r\n\r\nTODO:\r\n- scenario_kbs (bug with pyarrow conversion)\r\n- download from codalab checksums 
bug","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1085\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1084","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1084\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1084\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1084\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1084","id":756688727,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyMTk4MTM3","number":1084,"title":"adding cdsc dataset","user":{"login":"abecadel","id":1654113,"node_id":"MDQ6VXNlcjE2NTQxMTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1654113?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abecadel","html_url":"https:\/\/github.com\/abecadel","followers_url":"https:\/\/api.github.com\/users\/abecadel\/followers","following_url":"https:\/\/api.github.com\/users\/abecadel\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abecadel\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abecadel\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abecadel\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abecadel\/orgs","repos_url":"https:\/\/api.github.com\/users\/abecadel\/repos","events_url":"https:\/\/api.github.com\/users\/abecadel\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abecadel\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607040605000,"updated_at":1607078486000,"closed_at":1607078486000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1084","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1084","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1084.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1084.patch"},"body":"- **Name**: *cdsc (domains: cdsc-e & cdsc-r)*\r\n- **Description**: *Polish CDSCorpus consists of 10K Polish sentence pairs which are human-annotated for semantic relatedness and entailment. The dataset may be used for the evaluation of compositional distributional semantics models of Polish. The dataset was presented at ACL 2017. 
Please refer to Wr\u00f3blewska and Krasnowska-Kiera\u015b (2017) for a detailed description of the resource.*\r\n- **Data**: *http:\/\/2019.poleval.pl\/index.php\/tasks\/*\r\n- **Motivation**: *The KLEJ benchmark (Kompleksowa Lista Ewaluacji J\u0119zykowych) is a set of nine evaluation tasks for the Polish language understanding.*","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1084\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1083","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1083\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1083\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1083\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1083","id":756687101,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyMTk2Nzc0","number":1083,"title":"Add the multilingual Exams dataset","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Will slim down the dummy files in the morning"],"created_at":1607040364000,"updated_at":1607101920000,"closed_at":1607101920000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1083","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1083","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1083.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1083.patch"},"body":"https:\/\/github.com\/mhardalov\/exams-qa\r\n\r\n`multilingual` configs have all languages mixed together\r\n\r\n`crosslingual` mixes the languages for test but separates them for train and dev, so I've made one config per language for train\/dev data and one config with the joint test set","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1083\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1082","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1082\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1082\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1082\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1082","id":756676218,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyMTg3ODg3","number":1082,"title":"Myanmar news dataset","user":{"login":"mapmeld","id":643918,"node_id":"MDQ6VXNlcjY0MzkxOA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/643918?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mapmeld","html_url":"https:\/\/github.com\/mapmeld","followers_url":"https:\/\/api.github.com\/users\/mapmeld\/followers","following_url":"https:\/\/api.github.com\/users\/mapmeld\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mapmeld\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mapmeld\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mapmeld\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mapmeld\/orgs","repos_url":"https:\/\/api.github.com\/users\/mapmeld\/repos","events_url":"https:\/\/api.github.com\/users\/mapmeld\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mapmeld\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on master"],"created_at":1607038740000,"updated_at":1607076818000,"closed_at":1607076818000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1082","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1082","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1082.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1082.patch"},"body":"Add news topic classification dataset in Myanmar \/ Burmese languagess\r\n\r\nThis data was collected in 2017 by Aye Hninn Khine , and published on GitHub with a GPL license\r\nhttps:\/\/github.com\/ayehninnkhine\/MyanmarNewsClassificationSystem\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1082\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1081","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1081\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1081\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1081\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1081","id":756672527,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyMTg0ODc4","number":1081,"title":"Add Knowledge-Enhanced Language Model Pre-training 
(KELM)","user":{"login":"joeddav","id":9353833,"node_id":"MDQ6VXNlcjkzNTM4MzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9353833?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/joeddav","html_url":"https:\/\/github.com\/joeddav","followers_url":"https:\/\/api.github.com\/users\/joeddav\/followers","following_url":"https:\/\/api.github.com\/users\/joeddav\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/joeddav\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/joeddav\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/joeddav\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/joeddav\/orgs","repos_url":"https:\/\/api.github.com\/users\/joeddav\/repos","events_url":"https:\/\/api.github.com\/users\/joeddav\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/joeddav\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607038209000,"updated_at":1607099788000,"closed_at":1607099788000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1081","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1081","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1081.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1081.patch"},"body":"Adds the KELM dataset.\r\n\r\n- Webpage\/repo: https:\/\/github.com\/google-research-datasets\/KELM-corpus\r\n- Paper: https:\/\/arxiv.org\/pdf\/2010.12688.pdf","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1081\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1080","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1080\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1080\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1080\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1080","id":756663464,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyMTc3NDg5","number":1080,"title":"Add WikiANN NER dataset","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Dataset card added, so ready for 
review!"],"created_at":1607036964000,"updated_at":1607275135000,"closed_at":1607275135000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1080","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1080","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1080.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1080.patch"},"body":"This PR adds the full set of 176 languages from the balanced train\/dev\/test splits of WikiANN \/ PAN-X from: https:\/\/github.com\/afshinrahimi\/mmner\r\n\r\nUntil now, only 40 of these languages were available in `datasets` as part of the XTREME benchmark\r\n\r\nCourtesy of the dataset author, we can now download this dataset from a Dropbox URL without needing a manual download anymore \ud83e\udd73, so at some point it would be worth updating the PAN-X subset of XTREME as well \ud83d\ude04 \r\n\r\nLink to gist with some snippets for producing dummy data: https:\/\/gist.github.com\/lewtun\/5b93294ab6dbcf59d1493dbe2cfd6bb9\r\n\r\nP.S. @yjernite I think I was confused about needing to generate a set of YAML tags per config, so ended up just adding a single one in the README.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1080\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1079","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1079\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1079\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1079\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1079","id":756652427,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyMTY4Nzky","number":1079,"title":"nkjp-ner","user":{"login":"abecadel","id":1654113,"node_id":"MDQ6VXNlcjE2NTQxMTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1654113?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abecadel","html_url":"https:\/\/github.com\/abecadel","followers_url":"https:\/\/api.github.com\/users\/abecadel\/followers","following_url":"https:\/\/api.github.com\/users\/abecadel\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abecadel\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abecadel\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abecadel\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abecadel\/orgs","repos_url":"https:\/\/api.github.com\/users\/abecadel\/repos","events_url":"https:\/\/api.github.com\/users\/abecadel\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abecadel\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607035646000,"updated_at":1607074926000,"closed_at":1607074926000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1079","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1079","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1079.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1079.patch"},"body":"- 
**Name:** *nkjp-ner*\r\n- **Description:** *The NKJP-NER is based on a human-annotated part of NKJP. We extracted sentences with named entities of exactly one type. The task is to predict the type of the named entity.*\r\n- **Data:** *https:\/\/klejbenchmark.com\/tasks\/*\r\n- **Motivation:** *The KLEJ benchmark (Kompleksowa Lista Ewaluacji J\u0119zykowych) is a set of nine evaluation tasks for the Polish language understanding.*\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1079\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1078","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1078\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1078\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1078\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1078","id":756633215,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyMTUyMzgx","number":1078,"title":"add AJGT dataset","user":{"login":"zaidalyafeai","id":15667714,"node_id":"MDQ6VXNlcjE1NjY3NzE0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15667714?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/zaidalyafeai","html_url":"https:\/\/github.com\/zaidalyafeai","followers_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/followers","following_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/orgs","repos_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/repos","events_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607033791000,"updated_at":1607075715000,"closed_at":1607075715000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1078","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1078","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1078.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1078.patch"},"body":"Arabic Jordanian General Tweets.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1078\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1077","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1077\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1077\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1077\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1077","id":756617964,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyMTM5ODMx","number":1077,"title":"Added glucose dataset","user":{"login":"TevenLeScao","id":26709476,"node_id":"MDQ6VXNlcjI2NzA5NDc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26709476?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/TevenLeScao","html_url":"https:\/\/github.com\/TevenLeScao","followers_url":"https:\/\/api.github.com\/users\/TevenLeScao\/followers","following_url":"https:\/\/api.github.com\/users\/TevenLeScao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/TevenLeScao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/TevenLeScao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/TevenLeScao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/TevenLeScao\/orgs","repos_url":"https:\/\/api.github.com\/users\/TevenLeScao\/repos","events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607032141000,"updated_at":1607075753000,"closed_at":1607075752000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1077","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1077","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1077.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1077.patch"},"body":"This PR adds the [Glucose](https:\/\/github.com\/ElementalCognition\/glucose) dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1077\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1076","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1076\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1076\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1076\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1076","id":756584328,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyMTExNDU5","number":1076,"title":"quac quac \/ coin 
coin","user":{"login":"VictorSanh","id":16107619,"node_id":"MDQ6VXNlcjE2MTA3NjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16107619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/VictorSanh","html_url":"https:\/\/github.com\/VictorSanh","followers_url":"https:\/\/api.github.com\/users\/VictorSanh\/followers","following_url":"https:\/\/api.github.com\/users\/VictorSanh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/VictorSanh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/VictorSanh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/VictorSanh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/VictorSanh\/orgs","repos_url":"https:\/\/api.github.com\/users\/VictorSanh\/repos","events_url":"https:\/\/api.github.com\/users\/VictorSanh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/VictorSanh\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["pan"],"created_at":1607028929000,"updated_at":1607099799000,"closed_at":1607073320000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1076","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1076","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1076.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1076.patch"},"body":"Add QUAC (Question Answering in Context)\r\nI linearized most of the dictionnaries to lists.\r\nReferenced to the authors' datasheet for the dataset card.\r\n\ud83e\udd86\ud83e\udd86\ud83e\udd86\r\nCoin coin","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1076\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1075","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1075\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1075\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1075\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1075","id":756501235,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyMDM4ODg1","number":1075,"title":"adding cleaned verion of E2E 
NLG","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607023267000,"updated_at":1607024636000,"closed_at":1607024636000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1075","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1075","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1075.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1075.patch"},"body":"Found at: https:\/\/github.com\/tuetschek\/e2e-cleaning","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1075\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1074","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1074\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1074\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1074\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1074","id":756483172,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyMDIyNTIy","number":1074,"title":"Swedish MT 
STS-B","user":{"login":"timpal0l","id":6556710,"node_id":"MDQ6VXNlcjY1NTY3MTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6556710?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/timpal0l","html_url":"https:\/\/github.com\/timpal0l","followers_url":"https:\/\/api.github.com\/users\/timpal0l\/followers","following_url":"https:\/\/api.github.com\/users\/timpal0l\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/timpal0l\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/timpal0l\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/timpal0l\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/timpal0l\/orgs","repos_url":"https:\/\/api.github.com\/users\/timpal0l\/repos","events_url":"https:\/\/api.github.com\/users\/timpal0l\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/timpal0l\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607022385000,"updated_at":1607113347000,"closed_at":1607028268000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1074","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1074","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1074.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1074.patch"},"body":"Added a Swedish machine translated version of the well known STS-B Corpus","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1074\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1073","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1073\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1073\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1073\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1073","id":756468034,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMyMDA4NjIw","number":1073,"title":"Add DialogRE 
dataset","user":{"login":"vineeths96","id":50873201,"node_id":"MDQ6VXNlcjUwODczMjAx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/50873201?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vineeths96","html_url":"https:\/\/github.com\/vineeths96","followers_url":"https:\/\/api.github.com\/users\/vineeths96\/followers","following_url":"https:\/\/api.github.com\/users\/vineeths96\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vineeths96\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vineeths96\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vineeths96\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vineeths96\/orgs","repos_url":"https:\/\/api.github.com\/users\/vineeths96\/repos","events_url":"https:\/\/api.github.com\/users\/vineeths96\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vineeths96\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607021800000,"updated_at":1608471288000,"closed_at":1607089311000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1073","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1073","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1073.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1073.patch"},"body":"Adding the [DialogRE](https:\/\/github.com\/nlpdata\/dialogre) dataset Version 2.\r\n\r\n- All tests passed successfully.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1073\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1072","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1072\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1072\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1072\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1072","id":756454511,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxOTk2Njky","number":1072,"title":"actually uses the previously declared VERSION on the configs in the 
template","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607021067000,"updated_at":1607024146000,"closed_at":1607024146000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1072","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1072","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1072.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1072.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1072\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1071","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1071\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1071\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1071\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1071","id":756447296,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxOTkwNzY1","number":1071,"title":"add xlrd to test package 
requirements","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607020367000,"updated_at":1607021236000,"closed_at":1607021236000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1071","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1071","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1071.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1071.patch"},"body":"Adds `xlrd` package to the test requirements to handle scripts that use `pandas` to load excel files","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1071\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1070","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1070\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1070\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1070\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1070","id":756442481,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxOTg2Nzcz","number":1070,"title":"add conv_ai","user":{"login":"patil-suraj","id":27137566,"node_id":"MDQ6VXNlcjI3MTM3NTY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27137566?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patil-suraj","html_url":"https:\/\/github.com\/patil-suraj","followers_url":"https:\/\/api.github.com\/users\/patil-suraj\/followers","following_url":"https:\/\/api.github.com\/users\/patil-suraj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patil-suraj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patil-suraj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patil-suraj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patil-suraj\/orgs","repos_url":"https:\/\/api.github.com\/users\/patil-suraj\/repos","events_url":"https:\/\/api.github.com\/users\/patil-suraj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patil-suraj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This one will make 
@thomwolf reminisce ;)","Merging."],"created_at":1607019920000,"updated_at":1607068715000,"closed_at":1607064274000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1070","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1070","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1070.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1070.patch"},"body":"Adding ConvAI dataset https:\/\/github.com\/DeepPavlov\/convai\/tree\/master\/2017","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1070\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1069","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1069\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1069\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1069\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1069","id":756425737,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxOTcyNjYz","number":1069,"title":"Test","user":{"login":"manandey","id":6687858,"node_id":"MDQ6VXNlcjY2ODc4NTg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6687858?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/manandey","html_url":"https:\/\/github.com\/manandey","followers_url":"https:\/\/api.github.com\/users\/manandey\/followers","following_url":"https:\/\/api.github.com\/users\/manandey\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/manandey\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/manandey\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/manandey\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/manandey\/orgs","repos_url":"https:\/\/api.github.com\/users\/manandey\/repos","events_url":"https:\/\/api.github.com\/users\/manandey\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/manandey\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607018505000,"updated_at":1607055858000,"closed_at":1607055851000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1069","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1069","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1069.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1069.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1069\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1068","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1068\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1068\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1068\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1068","id":756417337,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxOTY1MDk0","number":1068,"title":"Add Pubmed (citation + abstract) dataset (2020).","user":{"login":"Narsil","id":204321,"node_id":"MDQ6VXNlcjIwNDMyMQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/204321?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Narsil","html_url":"https:\/\/github.com\/Narsil","followers_url":"https:\/\/api.github.com\/users\/Narsil\/followers","following_url":"https:\/\/api.github.com\/users\/Narsil\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Narsil\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Narsil\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Narsil\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Narsil\/orgs","repos_url":"https:\/\/api.github.com\/users\/Narsil\/repos","events_url":"https:\/\/api.github.com\/users\/Narsil\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Narsil\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["LGTM! ftp addition looks fine but maybe have a look @thomwolf ?","It's not finished yet, I need to run the tests on the full dataset (it was running this weekend, there is an error somewhere deep)\r\n","@yjernite Ready for review !\r\n@thomwolf \r\n\r\nSo I tried to follow closely the original format that means I still had to drop information (namely tags on elements are simply discarded for now but they don't seem to carry critical information).\r\nSome elements are also discarded they tend to not come up often:\r\n - The most notable is Author affiliation, which seems to be all over the place in terms of what it look meaning it's hard to actually get a consistent format.\r\n - Journal is the same, all the elements in there can be wildly different so I decided to drop it for now instead of trying to figure out a way to have a common presentation. (the DOI and medline ID are present so it can be recovered).\r\n\r\nI think this PR could go as it but we probably should add a way to get easier information to use with a config.\r\nFor instance `{\"title\": \"string\", \"abstract\": \"string\", \"authors\": List[str], \"substances\": List[str]}` maybe ? 
(substances for instance is a tricky one, some substances only have an international identifier, some have simply a common name, some both)\r\n\r\nIt's relatively easy to do I think it's mostly discarding other fields and renaming some deep structure into a flat one.","Look ok to me but @lhoestq is the master on the Download Manager side"],"created_at":1607018050000,"updated_at":1608717127000,"closed_at":1608717127000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1068","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1068","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1068.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1068.patch"},"body":null,"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1068\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1067","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1067\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1067\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1067\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1067","id":756414212,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxOTYyNDYx","number":1067,"title":"add xquad-r dataset","user":{"login":"manandey","id":6687858,"node_id":"MDQ6VXNlcjY2ODc4NTg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6687858?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/manandey","html_url":"https:\/\/github.com\/manandey","followers_url":"https:\/\/api.github.com\/users\/manandey\/followers","following_url":"https:\/\/api.github.com\/users\/manandey\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/manandey\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/manandey\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/manandey\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/manandey\/orgs","repos_url":"https:\/\/api.github.com\/users\/manandey\/repos","events_url":"https:\/\/api.github.com\/users\/manandey\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/manandey\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607017801000,"updated_at":1607018001000,"closed_at":1607017995000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1067","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1067","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1067.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1067.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1067\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1066","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1066\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1066\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1066\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1066","id":756391957,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxOTQ0MDc0","number":1066,"title":"Add ChrEn","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I just saw your PR actually ^^","> I just saw your PR actually ^^\r\n\r\nSomehow that still doesn't work, lmk if you have any ideas.","Did you rebase from master ?"],"created_at":1607015868000,"updated_at":1607032179000,"closed_at":1607032179000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1066","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1066","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1066.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1066.patch"},"body":"Adding the Cherokee English machine translation dataset of https:\/\/github.com\/ZhangShiyue\/ChrEn","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1066\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1065","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1065\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1065\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1065\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1065","id":756383414,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxOTM2OTQ3","number":1065,"title":"add xquad-r 
dataset","user":{"login":"manandey","id":6687858,"node_id":"MDQ6VXNlcjY2ODc4NTg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6687858?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/manandey","html_url":"https:\/\/github.com\/manandey","followers_url":"https:\/\/api.github.com\/users\/manandey\/followers","following_url":"https:\/\/api.github.com\/users\/manandey\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/manandey\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/manandey\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/manandey\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/manandey\/orgs","repos_url":"https:\/\/api.github.com\/users\/manandey\/repos","events_url":"https:\/\/api.github.com\/users\/manandey\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/manandey\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607015183000,"updated_at":1607017341000,"closed_at":1607017323000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1065","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1065","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1065.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1065.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1065\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1064","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1064\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1064\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1064\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1064","id":756382186,"node_id":"MDU6SXNzdWU3NTYzODIxODY=","number":1064,"title":"Not support links with 302 redirect ","user":{"login":"chameleonTK","id":6429850,"node_id":"MDQ6VXNlcjY0Mjk4NTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6429850?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/chameleonTK","html_url":"https:\/\/github.com\/chameleonTK","followers_url":"https:\/\/api.github.com\/users\/chameleonTK\/followers","following_url":"https:\/\/api.github.com\/users\/chameleonTK\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/chameleonTK\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/chameleonTK\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/chameleonTK\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/chameleonTK\/orgs","repos_url":"https:\/\/api.github.com\/users\/chameleonTK\/repos","events_url":"https:\/\/api.github.com\/users\/chameleonTK\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/chameleonTK\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"},{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi !\r\nThis kind of links is now supported by the library since #1316","> Hi !\r\n> This kind of links is now supported by the library since #1316\r\n\r\nI updated links in TLC datasets to be the github links in this pull request \r\n https:\/\/github.com\/huggingface\/datasets\/pull\/1737\r\n\r\nEverything works now. Thank you."],"created_at":1607015083000,"updated_at":1610592685000,"closed_at":1610592685000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I have an issue adding this download link https:\/\/github.com\/jitkapat\/thailitcorpus\/releases\/download\/v.2.0\/tlc_v.2.0.tar.gz\r\n\r\nit might be because it is not a direct link (it returns 302 and redirects to aws that returns 403 for head requests). \r\n\r\n```\r\nr.head(\"https:\/\/github.com\/jitkapat\/thailitcorpus\/releases\/download\/v.2.0\/tlc_v.2.0.tar.gz\", allow_redirects=True) \r\n# <Response [403]>\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1064\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1063","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1063\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1063\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1063\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1063","id":756376374,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxOTMxMTMz","number":1063,"title":"Add the Ud treebank","user":{"login":"jplu","id":959590,"node_id":"MDQ6VXNlcjk1OTU5MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/959590?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jplu","html_url":"https:\/\/github.com\/jplu","followers_url":"https:\/\/api.github.com\/users\/jplu\/followers","following_url":"https:\/\/api.github.com\/users\/jplu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jplu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jplu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jplu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jplu\/orgs","repos_url":"https:\/\/api.github.com\/users\/jplu\/repos","events_url":"https:\/\/api.github.com\/users\/jplu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jplu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on 
master"],"created_at":1607014601000,"updated_at":1607098314000,"closed_at":1607097106000,"author_association":"COLLABORATOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1063","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1063","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1063.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1063.patch"},"body":"This PR adds the 183 datasets in 104 languages of the UD Treebank.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1063\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1062","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1062\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1062\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1062\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1062","id":756373187,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxOTI4NDY5","number":1062,"title":"Add KorNLU dataset","user":{"login":"sumanthd17","id":28291870,"node_id":"MDQ6VXNlcjI4MjkxODcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28291870?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sumanthd17","html_url":"https:\/\/github.com\/sumanthd17","followers_url":"https:\/\/api.github.com\/users\/sumanthd17\/followers","following_url":"https:\/\/api.github.com\/users\/sumanthd17\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sumanthd17\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sumanthd17\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sumanthd17\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sumanthd17\/orgs","repos_url":"https:\/\/api.github.com\/users\/sumanthd17\/repos","events_url":"https:\/\/api.github.com\/users\/sumanthd17\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sumanthd17\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Nice thank you !\r\nCan you regenerate the dataset_infos.json file ? Since we changed the features we must update it\r\n\r\nThen I think we'll be good to merge :)"],"created_at":1607014359000,"updated_at":1607079919000,"closed_at":1607079919000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1062","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1062","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1062.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1062.patch"},"body":"Added Korean NLU datasets. The link to the dataset can be found [here](https:\/\/github.com\/kakaobrain\/KorNLUDatasets) and the paper can be found [here](https:\/\/arxiv.org\/abs\/2004.03289)\r\n\r\n**Note**: The MNLI tsv file is broken, so this code currently excludes the file. 
Please suggest other alternative if any @lhoestq \r\n\r\n- [x] Followed the instructions in CONTRIBUTING.md\r\n- [x] Ran the tests successfully\r\n- [x] Created the dummy data","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1062\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1061","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1061\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1061\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1061\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1061","id":756362661,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxOTE5ODA0","number":1061,"title":"add labr dataset","user":{"login":"zaidalyafeai","id":15667714,"node_id":"MDQ6VXNlcjE1NjY3NzE0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15667714?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/zaidalyafeai","html_url":"https:\/\/github.com\/zaidalyafeai","followers_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/followers","following_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/orgs","repos_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/repos","events_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607013537000,"updated_at":1607019944000,"closed_at":1607019944000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1061","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1061","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1061.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1061.patch"},"body":"Arabic Book Reviews dataset. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1061\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1060","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1060\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1060\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1060\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1060","id":756349001,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxOTA4MTgx","number":1060,"title":"Fix squad V2 metric script","user":{"login":"sgugger","id":35901082,"node_id":"MDQ6VXNlcjM1OTAxMDgy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35901082?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sgugger","html_url":"https:\/\/github.com\/sgugger","followers_url":"https:\/\/api.github.com\/users\/sgugger\/followers","following_url":"https:\/\/api.github.com\/users\/sgugger\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sgugger\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sgugger\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sgugger\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sgugger\/orgs","repos_url":"https:\/\/api.github.com\/users\/sgugger\/repos","events_url":"https:\/\/api.github.com\/users\/sgugger\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sgugger\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The script with changes is used and tested in [#8924](https:\/\/github.com\/huggingface\/transformers\/pull\/8924). It gives the same results as the old `evaluate_squad` function when used on the same predictions.","merging since the CI is fixed on master"],"created_at":1607012612000,"updated_at":1608649340000,"closed_at":1608649339000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1060","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1060","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1060.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1060.patch"},"body":"The current squad v2 metric doesn't work with the squad (v1 or v2) datasets. 
The script is copied from `squad_evaluate` in transformers that requires the labels (with multiple answers) to be like this:\r\n```\r\nreferences = [{'id': 'a', 'answers': [\r\n {'text': 'Denver Broncos', 'answer_start': 177},\r\n {'text': 'Denver Broncos', 'answer_start': 177}\r\n]}]\r\n```\r\nwhile the dataset had references like this:\r\n```\r\nreferences = [{'id': 'a', 'answers': \r\n {'text': ['Denver Broncos' 'Denver Broncos'], 'answer_start': [177, 177]}\r\n}]\r\n```\r\n\r\nUsing one or the other format fails with the current squad v2 metric:\r\n```\r\nfrom datasets import load_metric\r\nmetric = load_metric(\"squad_v2\")\r\npredictions = [{'id': 'a', 'prediction_text': 'Denver Broncos', 'no_answer_probability': 0.0}]\r\nreferences = [{'id': 'a', 'answers': [\r\n {'text': 'Denver Broncos', 'answer_start': 177},\r\n {'text': 'Denver Broncos', 'answer_start': 177}\r\n]}]\r\nmetric.compute(predictions=predictions, references=references)\r\n```\r\nfails as well as\r\n```\r\nfrom datasets import load_metric\r\nmetric = load_metric(\"squad_v2\")\r\npredictions = [{'id': 'a', 'prediction_text': 'Denver Broncos', 'no_answer_probability': 0.0}]\r\nreferences = [{'id': 'a', 'answers': \r\n {'text': ['Denver Broncos' 'Denver Broncos'], 'answer_start': [177, 177]}\r\n}]\r\nmetric.compute(predictions=predictions, references=references)\r\n```\r\n\r\nThis is because arrow reformats the references behind the scene.\r\n\r\nWith this PR (tested locally), both the snippets up there work and return proper results.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1060\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1059","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1059\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1059\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1059\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1059","id":756348623,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxOTA3ODYy","number":1059,"title":"Add TLC","user":{"login":"chameleonTK","id":6429850,"node_id":"MDQ6VXNlcjY0Mjk4NTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6429850?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/chameleonTK","html_url":"https:\/\/github.com\/chameleonTK","followers_url":"https:\/\/api.github.com\/users\/chameleonTK\/followers","following_url":"https:\/\/api.github.com\/users\/chameleonTK\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/chameleonTK\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/chameleonTK\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/chameleonTK\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/chameleonTK\/orgs","repos_url":"https:\/\/api.github.com\/users\/chameleonTK\/repos","events_url":"https:\/\/api.github.com\/users\/chameleonTK\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/chameleonTK\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I have reduced the size of the dummy file and added README sections as you suggested. ","I have a little issue to run the test. 
It seems there is no failed case in my machine. ","Thanks !\r\nIt looks like the PR includes changes to many other files than the ones of `tlc`, can you create another branch and another PR ?"],"created_at":1607012586000,"updated_at":1607080533000,"closed_at":1607080533000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1059","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1059","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1059.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1059.patch"},"body":"Added TLC dataset","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1059\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1058","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1058\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1058\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1058\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1058","id":756332704,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxODk0Mjc0","number":1058,"title":"added paws-x dataset","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607011561000,"updated_at":1607089565000,"closed_at":1607089565000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1058","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1058","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1058.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1058.patch"},"body":"Added paws-x dataset. 
Updating README and tags in the dataset card in a while","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1058\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1057","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1057\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1057\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1057\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1057","id":756331419,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxODkzMjE4","number":1057,"title":"Adding TamilMixSentiment","user":{"login":"jamespaultg","id":7421838,"node_id":"MDQ6VXNlcjc0MjE4Mzg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7421838?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jamespaultg","html_url":"https:\/\/github.com\/jamespaultg","followers_url":"https:\/\/api.github.com\/users\/jamespaultg\/followers","following_url":"https:\/\/api.github.com\/users\/jamespaultg\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jamespaultg\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jamespaultg\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jamespaultg\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jamespaultg\/orgs","repos_url":"https:\/\/api.github.com\/users\/jamespaultg\/repos","events_url":"https:\/\/api.github.com\/users\/jamespaultg\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jamespaultg\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["looks like this pr incldues changes about many other files than the ones for tamilMixSentiment, could you create another branch and another PR ?"],"created_at":1607011465000,"updated_at":1607076574000,"closed_at":1607076552000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1057","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1057","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1057.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1057.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1057\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1056","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1056\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1056\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1056\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1056","id":756309828,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxODc1MjA2","number":1056,"title":"Add 
deal_or_no_dialog","user":{"login":"moussaKam","id":28675016,"node_id":"MDQ6VXNlcjI4Njc1MDE2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28675016?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/moussaKam","html_url":"https:\/\/github.com\/moussaKam","followers_url":"https:\/\/api.github.com\/users\/moussaKam\/followers","following_url":"https:\/\/api.github.com\/users\/moussaKam\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/moussaKam\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/moussaKam\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/moussaKam\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/moussaKam\/orgs","repos_url":"https:\/\/api.github.com\/users\/moussaKam\/repos","events_url":"https:\/\/api.github.com\/users\/moussaKam\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/moussaKam\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607009887000,"updated_at":1607019225000,"closed_at":1607019225000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1056","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1056","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1056.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1056.patch"},"body":"Add deal_or_no_dialog Dataset\r\n\r\ngithub: https:\/\/github.com\/facebookresearch\/end-to-end-negotiator\r\nPaper: [Deal or No Deal? End-to-End Learning for Negotiation Dialogues](https:\/\/arxiv.org\/abs\/1706.05125)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1056\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1055","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1055\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1055\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1055\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1055","id":756298372,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxODY1NjM4","number":1055,"title":"Add 
hebrew-sentiment","user":{"login":"elronbandel","id":23455264,"node_id":"MDQ6VXNlcjIzNDU1MjY0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23455264?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/elronbandel","html_url":"https:\/\/github.com\/elronbandel","followers_url":"https:\/\/api.github.com\/users\/elronbandel\/followers","following_url":"https:\/\/api.github.com\/users\/elronbandel\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/elronbandel\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/elronbandel\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/elronbandel\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/elronbandel\/orgs","repos_url":"https:\/\/api.github.com\/users\/elronbandel\/repos","events_url":"https:\/\/api.github.com\/users\/elronbandel\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/elronbandel\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@elronbandel it looks like something went wrong with the renaming, as the old files are still in the PR. Can you `git rm datasets\/hebrew-sentiment` ?","merging since the CI is fixed on master"],"created_at":1607009071000,"updated_at":1607081056000,"closed_at":1607081056000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1055","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1055","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1055.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1055.patch"},"body":"hebrew-sentiment dataset is ready! 
(including tests, tags etc)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1055\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1054","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1054\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1054\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1054\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1054","id":756265688,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxODM3NzQ0","number":1054,"title":"Add dataset - SemEval 2014 - Task 1","user":{"login":"ashmeet13","id":24266995,"node_id":"MDQ6VXNlcjI0MjY2OTk1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24266995?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ashmeet13","html_url":"https:\/\/github.com\/ashmeet13","followers_url":"https:\/\/api.github.com\/users\/ashmeet13\/followers","following_url":"https:\/\/api.github.com\/users\/ashmeet13\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ashmeet13\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ashmeet13\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ashmeet13\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ashmeet13\/orgs","repos_url":"https:\/\/api.github.com\/users\/ashmeet13\/repos","events_url":"https:\/\/api.github.com\/users\/ashmeet13\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ashmeet13\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Added the dataset card.\r\nRequesting another review."],"created_at":1607007179000,"updated_at":1607043164000,"closed_at":1607043164000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1054","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1054","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1054.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1054.patch"},"body":"Adding the dataset of SemEval 2014 Task 1\r\n\r\nFound the dataset under the shared Google Sheet > Recurring Task Datasets\r\nTask Homepage - https:\/\/alt.qcri.org\/semeval2014\/task1\r\n\r\nThank you!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1054\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1053","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1053\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1053\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1053\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1053","id":756176061,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxNzYyNzg4","number":1053,"title":"Fix dataset URL and file names, and add column name in \"Social Bias Frames\" 
dataset","user":{"login":"otakumesi","id":14996977,"node_id":"MDQ6VXNlcjE0OTk2OTc3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14996977?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/otakumesi","html_url":"https:\/\/github.com\/otakumesi","followers_url":"https:\/\/api.github.com\/users\/otakumesi\/followers","following_url":"https:\/\/api.github.com\/users\/otakumesi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/otakumesi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/otakumesi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/otakumesi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/otakumesi\/orgs","repos_url":"https:\/\/api.github.com\/users\/otakumesi\/repos","events_url":"https:\/\/api.github.com\/users\/otakumesi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/otakumesi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks a lot, looks good!"],"created_at":1607000585000,"updated_at":1607002946000,"closed_at":1607002946000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1053","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1053","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1053.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1053.patch"},"body":"# Why I did\r\nWhen I use \"social_bias_frames\" datasets in this library, I got 404 Errors.\r\nSo, I fixed this error and another some problems that I faced to use the dataset.\r\n\r\n# What I did\r\n* Modify this dataset URL\r\n* Modify this dataset file names\r\n* Add a \"dataSource\" column\r\n\r\nThank you!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1053\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1052","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1052\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1052\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1052\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1052","id":756171798,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxNzU5MjA0","number":1052,"title":"add sharc 
dataset","user":{"login":"patil-suraj","id":27137566,"node_id":"MDQ6VXNlcjI3MTM3NTY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27137566?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patil-suraj","html_url":"https:\/\/github.com\/patil-suraj","followers_url":"https:\/\/api.github.com\/users\/patil-suraj\/followers","following_url":"https:\/\/api.github.com\/users\/patil-suraj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patil-suraj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patil-suraj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patil-suraj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patil-suraj\/orgs","repos_url":"https:\/\/api.github.com\/users\/patil-suraj\/repos","events_url":"https:\/\/api.github.com\/users\/patil-suraj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patil-suraj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1607000243000,"updated_at":1607013861000,"closed_at":1607004594000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1052","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1052","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1052.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1052.patch"},"body":"This PR adds the ShARC dataset.\r\n\r\nMore info:\r\nhttps:\/\/sharc-data.github.io\/index.html","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1052\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1051","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1051\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1051\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1051\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1051","id":756169049,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxNzU2OTQy","number":1051,"title":"Add Facebook 
SimpleQuestionV2","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I think @thomwolf may also be working on this one as part of the Babi benchmark in #945 "],"created_at":1607000000000,"updated_at":1607016719000,"closed_at":1607016718000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1051","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1051","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1051.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1051.patch"},"body":"Add simple questions v2: https:\/\/research.fb.com\/downloads\/babi\/","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1051\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1050","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1050\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1050\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1050\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1050","id":756166728,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxNzU1MDQ3","number":1050,"title":"Add 
GoEmotions","user":{"login":"joeddav","id":9353833,"node_id":"MDQ6VXNlcjkzNTM4MzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9353833?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/joeddav","html_url":"https:\/\/github.com\/joeddav","followers_url":"https:\/\/api.github.com\/users\/joeddav\/followers","following_url":"https:\/\/api.github.com\/users\/joeddav\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/joeddav\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/joeddav\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/joeddav\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/joeddav\/orgs","repos_url":"https:\/\/api.github.com\/users\/joeddav\/repos","events_url":"https:\/\/api.github.com\/users\/joeddav\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/joeddav\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Whoops, didn't mean for that to be merged yet (my bad). I'm reaching out to the authors since we'd like their feedback on the best way to have the `author` field anonymized or removed. Will send a patch once they get back to me."],"created_at":1606999793000,"updated_at":1607017065000,"closed_at":1607016608000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1050","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1050","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1050.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1050.patch"},"body":"Adds the GoEmotions dataset, a nice emotion classification dataset with 27 (multi-)label annotations on reddit comments. 
Includes both a large raw version and a narrowed version with predefined train\/test\/val splits, which I've included as separate configs with the latter as a default.\r\n\r\n- Webpage\/repo: https:\/\/github.com\/google-research\/google-research\/tree\/master\/goemotions\r\n- Paper: https:\/\/arxiv.org\/abs\/2005.00547","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1050\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1049","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1049\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1049\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1049\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1049","id":756157602,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxNzQ3NDY0","number":1049,"title":"Add siswati ner corpus","user":{"login":"yvonnegitau","id":7923902,"node_id":"MDQ6VXNlcjc5MjM5MDI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7923902?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yvonnegitau","html_url":"https:\/\/github.com\/yvonnegitau","followers_url":"https:\/\/api.github.com\/users\/yvonnegitau\/followers","following_url":"https:\/\/api.github.com\/users\/yvonnegitau\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yvonnegitau\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yvonnegitau\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yvonnegitau\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yvonnegitau\/orgs","repos_url":"https:\/\/api.github.com\/users\/yvonnegitau\/repos","events_url":"https:\/\/api.github.com\/users\/yvonnegitau\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yvonnegitau\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606998960000,"updated_at":1607016422000,"closed_at":1607016415000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1049","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1049","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1049.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1049.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1049\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1048","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1048\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1048\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1048\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1048","id":756133072,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxNzI3MDA0","number":1048,"title":"Adding NCHLT 
dataset","user":{"login":"Narsil","id":204321,"node_id":"MDQ6VXNlcjIwNDMyMQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/204321?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Narsil","html_url":"https:\/\/github.com\/Narsil","followers_url":"https:\/\/api.github.com\/users\/Narsil\/followers","following_url":"https:\/\/api.github.com\/users\/Narsil\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Narsil\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Narsil\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Narsil\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Narsil\/orgs","repos_url":"https:\/\/api.github.com\/users\/Narsil\/repos","events_url":"https:\/\/api.github.com\/users\/Narsil\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Narsil\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on master"],"created_at":1606996765000,"updated_at":1607088597000,"closed_at":1607088597000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1048","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1048","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1048.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1048.patch"},"body":"https:\/\/repo.sadilar.org\/handle\/20.500.12185\/7\/discover?filtertype_0=database&filtertype_1=title&filter_relational_operator_1=contains&filter_relational_operator_0=equals&filter_1=&filter_0=Monolingual+Text+Corpora%3A+Annotated&filtertype=project&filter_relational_operator=equals&filter=NCHLT+Text+II","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1048\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1047","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1047\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1047\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1047\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1047","id":756127490,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxNzIyMjk4","number":1047,"title":"Add 
KorNLU","user":{"login":"sumanthd17","id":28291870,"node_id":"MDQ6VXNlcjI4MjkxODcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28291870?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sumanthd17","html_url":"https:\/\/github.com\/sumanthd17","followers_url":"https:\/\/api.github.com\/users\/sumanthd17\/followers","following_url":"https:\/\/api.github.com\/users\/sumanthd17\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sumanthd17\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sumanthd17\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sumanthd17\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sumanthd17\/orgs","repos_url":"https:\/\/api.github.com\/users\/sumanthd17\/repos","events_url":"https:\/\/api.github.com\/users\/sumanthd17\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sumanthd17\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["the CI error about `social_bias_frames` is fixed on master so it's fine","created new [PR](https:\/\/github.com\/huggingface\/datasets\/pull\/1062)","looks like this PR includes many changes to other files that the ones related to KorNLU\r\nCould you create another branch and another PR please ?","Wow crazy timing","hahahaha"],"created_at":1606996254000,"updated_at":1607015827000,"closed_at":1607015769000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1047","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1047","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1047.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1047.patch"},"body":"Added Korean NLU datasets. The link to the dataset can be found [here](https:\/\/github.com\/kakaobrain\/KorNLUDatasets) and the paper can be found [here](https:\/\/arxiv.org\/abs\/2004.03289)\r\n\r\n**Note**: The MNLI tsv file is broken, so this code currently excludes the file. 
Please suggest other alternative if any @lhoestq \r\n\r\n- [x] Followed the instructions in CONTRIBUTING.md\r\n- [x] Ran the tests successfully\r\n- [x] Created the dummy data","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1047\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1046","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1046\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1046\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1046\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1046","id":756122709,"node_id":"MDU6SXNzdWU3NTYxMjI3MDk=","number":1046,"title":"Dataset.map() turns tensors into lists?","user":{"login":"tombosc","id":5270804,"node_id":"MDQ6VXNlcjUyNzA4MDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5270804?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tombosc","html_url":"https:\/\/github.com\/tombosc","followers_url":"https:\/\/api.github.com\/users\/tombosc\/followers","following_url":"https:\/\/api.github.com\/users\/tombosc\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tombosc\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tombosc\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tombosc\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tombosc\/orgs","repos_url":"https:\/\/api.github.com\/users\/tombosc\/repos","events_url":"https:\/\/api.github.com\/users\/tombosc\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tombosc\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["A solution is to have the tokenizer return a list instead of a tensor, and then use `dataset_tok.set_format(type = 'torch')` to convert that list into a tensor. Still not sure if bug.","It is expected behavior, you should set the format to `\"torch\"` as you mentioned to get pytorch tensors back.\r\nBy default datasets returns pure python objects."],"created_at":1606995826000,"updated_at":1608731472000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I apply `Dataset.map()` to a function that returns a dict of torch tensors (like a tokenizer from the repo transformers). 
However, in the mapped dataset, these tensors have turned to lists!\r\n\r\n```import datasets\r\nimport torch \r\nfrom datasets import load_dataset \r\nprint(\"version datasets\", datasets.__version__)\r\n\r\ndataset = load_dataset(\"snli\", split='train[0:50]') \r\n\r\ndef tokenizer_fn(example):\r\n # actually uses a tokenizer which does something like:\r\n return {'input_ids': torch.tensor([[0, 1, 2]])}\r\n\r\nprint(\"First item in dataset:\\n\", dataset[0])\r\ntokenized = tokenizer_fn(dataset[0])\r\nprint(\"Tokenized hyp:\\n\", tokenized)\r\ndataset_tok = dataset.map(tokenizer_fn, batched=False,\r\n remove_columns=['label', 'premise', 'hypothesis'])\r\nprint(\"Tokenized using map:\\n\", dataset_tok[0])\r\nprint(type(tokenized['input_ids']), type(dataset_tok[0]['input_ids']))\r\ndataset_tok = dataset.map(tokenizer_fn, batched=False,\r\n remove_columns=['label', 'premise', 'hypothesis'])\r\nprint(\"Tokenized using map:\\n\", dataset_tok[0])\r\nprint(type(tokenized['input_ids']), type(dataset_tok[0]['input_ids']))\r\n```\r\n\r\nThe output is:\r\n\r\n```\r\nversion datasets 1.1.3\r\nReusing dataset snli (\/home\/tom\/.cache\/huggingface\/datasets\/snli\/plain_text\/1.0.0\/bb1102591c6230bd78813e229d5dd4c7fbf4fc478cec28f298761eb69e5b537c)\r\nFirst item in dataset:\r\n {'premise': 'A person on a horse jumps over a broken down airplane.', 'hypothesis': 'A person is training his horse for a competition.', 'label': 1}\r\nTokenized hyp:\r\n {'input_ids': tensor([[0, 1, 2]])}\r\nLoading cached processed dataset at \/home\/tom\/.cache\/huggingface\/datasets\/snli\/plain_text\/1.0.0\/bb1102591c6230bd78813e229d5dd4c7fbf4fc478cec28f298761eb69e5b537c\/cache-fe38f449fe9ac46f.arrow\r\nTokenized using map:\r\n {'input_ids': [[0, 1, 2]]}\r\n<class 'torch.Tensor'> <class 'list'>\r\n```\r\n\r\nOr am I doing something wrong?\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1046\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1045","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1045\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1045\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1045\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1045","id":756120760,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxNzE2NzIy","number":1045,"title":"Add xitsonga ner 
corpus","user":{"login":"yvonnegitau","id":7923902,"node_id":"MDQ6VXNlcjc5MjM5MDI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7923902?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yvonnegitau","html_url":"https:\/\/github.com\/yvonnegitau","followers_url":"https:\/\/api.github.com\/users\/yvonnegitau\/followers","following_url":"https:\/\/api.github.com\/users\/yvonnegitau\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yvonnegitau\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yvonnegitau\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yvonnegitau\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yvonnegitau\/orgs","repos_url":"https:\/\/api.github.com\/users\/yvonnegitau\/repos","events_url":"https:\/\/api.github.com\/users\/yvonnegitau\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yvonnegitau\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Look like this PR includes changes to many other files than the ones related to xitsonga NER.\r\nCould you create another branch and another PR please ?"],"created_at":1606995648000,"updated_at":1607016003000,"closed_at":1607015972000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1045","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1045","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1045.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1045.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1045\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1044","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1044\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1044\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1044\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1044","id":756111647,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxNzA5MTg0","number":1044,"title":"Add AMTTL Chinese Word Segmentation 
Dataset","user":{"login":"JetRunner","id":22514219,"node_id":"MDQ6VXNlcjIyNTE0MjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22514219?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JetRunner","html_url":"https:\/\/github.com\/JetRunner","followers_url":"https:\/\/api.github.com\/users\/JetRunner\/followers","following_url":"https:\/\/api.github.com\/users\/JetRunner\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JetRunner\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JetRunner\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JetRunner\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JetRunner\/orgs","repos_url":"https:\/\/api.github.com\/users\/JetRunner\/repos","events_url":"https:\/\/api.github.com\/users\/JetRunner\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JetRunner\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606994872000,"updated_at":1607015594000,"closed_at":1607015593000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1044","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1044","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1044.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1044.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1044\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1043","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1043\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1043\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1043\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1043","id":756100717,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxNzAwMDQ1","number":1043,"title":"Add TSAC: Tunisian Sentiment Analysis 
Corpus","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606993955000,"updated_at":1607002505000,"closed_at":1607002344000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1043","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1043","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1043.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1043.patch"},"body":"github: https:\/\/github.com\/fbougares\/TSAC\r\n\r\npaper: https:\/\/www.aclweb.org\/anthology\/W17-1307\/","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1043\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1042","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1042\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1042\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1042\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1042","id":756097583,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxNjk3NDU4","number":1042,"title":"Add Big Patent 
dataset","user":{"login":"mattbui","id":46804938,"node_id":"MDQ6VXNlcjQ2ODA0OTM4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/46804938?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mattbui","html_url":"https:\/\/github.com\/mattbui","followers_url":"https:\/\/api.github.com\/users\/mattbui\/followers","following_url":"https:\/\/api.github.com\/users\/mattbui\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mattbui\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mattbui\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mattbui\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mattbui\/orgs","repos_url":"https:\/\/api.github.com\/users\/mattbui\/repos","events_url":"https:\/\/api.github.com\/users\/mattbui\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mattbui\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Looks like this PR include changes about many other files than the ones related to big patent.\r\nCould you create another branch and another PR ?","@lhoestq Just created a new PR here: https:\/\/github.com\/huggingface\/datasets\/pull\/1087"],"created_at":1606993679000,"updated_at":1607056706000,"closed_at":1607056706000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1042","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1042","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1042.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1042.patch"},"body":"- More info on the dataset: https:\/\/evasharma.github.io\/bigpatent\/\r\n- There's another raw version of the dataset available from tfds. However, they're quite large so I don't have the resources to fully test all the configs for that version yet. We'll try to add it later.\r\n- ~Currently, there are no dummy data for this dataset yet as I'm facing some problems with generating them. 
I'm trying to add them later.~","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1042\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1041","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1041\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1041\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1041\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1041","id":756055102,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxNjYyMDI0","number":1041,"title":"Add SuperGLUE metric","user":{"login":"calpt","id":36051308,"node_id":"MDQ6VXNlcjM2MDUxMzA4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36051308?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/calpt","html_url":"https:\/\/github.com\/calpt","followers_url":"https:\/\/api.github.com\/users\/calpt\/followers","following_url":"https:\/\/api.github.com\/users\/calpt\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/calpt\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/calpt\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/calpt\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/calpt\/orgs","repos_url":"https:\/\/api.github.com\/users\/calpt\/repos","events_url":"https:\/\/api.github.com\/users\/calpt\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/calpt\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606990294000,"updated_at":1614106979000,"closed_at":1614103332000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1041","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1041","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1041.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1041.patch"},"body":"Adds a new metric for the SuperGLUE benchmark (similar to the GLUE benchmark metric).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1041\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1040","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1040\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1040\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1040\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1040","id":756050387,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxNjU4MTU3","number":1040,"title":"Add UN Universal Declaration of Human Rights 
(UDHR)","user":{"login":"joeddav","id":9353833,"node_id":"MDQ6VXNlcjkzNTM4MzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9353833?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/joeddav","html_url":"https:\/\/github.com\/joeddav","followers_url":"https:\/\/api.github.com\/users\/joeddav\/followers","following_url":"https:\/\/api.github.com\/users\/joeddav\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/joeddav\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/joeddav\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/joeddav\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/joeddav\/orgs","repos_url":"https:\/\/api.github.com\/users\/joeddav\/repos","events_url":"https:\/\/api.github.com\/users\/joeddav\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/joeddav\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606989898000,"updated_at":1607023215000,"closed_at":1607023211000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1040","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1040","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1040.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1040.patch"},"body":"Universal declaration of human rights with translations in 464 languages and dialects.\r\n\r\n- UN page: https:\/\/www.ohchr.org\/EN\/UDHR\/Pages\/UDHRIndex.aspx\r\n- Raw data source: https:\/\/unicode.org\/udhr\/index.html\r\n\r\nEach instance of the dataset corresponds to one translation of the document. Since there's only one instance per language (and because there are 500 languages so the dummy data would be messy), I opted to just include them all under the same single config. 
I wasn't able to find any kind of license so I just copied the copyright notice.\r\n\r\nI was pretty careful careful generating the language tags so they _should_ all be correct & consistent BCP-47 codes per the docs.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1040\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1039","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1039\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1039\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1039\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1039","id":756000478,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxNjE3MDI2","number":1039,"title":"Update ADD NEW DATASET","user":{"login":"jplu","id":959590,"node_id":"MDQ6VXNlcjk1OTU5MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/959590?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jplu","html_url":"https:\/\/github.com\/jplu","followers_url":"https:\/\/api.github.com\/users\/jplu\/followers","following_url":"https:\/\/api.github.com\/users\/jplu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jplu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jplu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jplu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jplu\/orgs","repos_url":"https:\/\/api.github.com\/users\/jplu\/repos","events_url":"https:\/\/api.github.com\/users\/jplu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jplu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606985912000,"updated_at":1606987108000,"closed_at":1606987090000,"author_association":"COLLABORATOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1039","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1039","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1039.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1039.patch"},"body":"This PR adds a couple of detail on cloning\/rebasing the repo.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1039\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1038","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1038\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1038\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1038\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1038","id":755987997,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxNjA2Njgw","number":1038,"title":"add 
med_hop","user":{"login":"patil-suraj","id":27137566,"node_id":"MDQ6VXNlcjI3MTM3NTY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27137566?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patil-suraj","html_url":"https:\/\/github.com\/patil-suraj","followers_url":"https:\/\/api.github.com\/users\/patil-suraj\/followers","following_url":"https:\/\/api.github.com\/users\/patil-suraj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patil-suraj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patil-suraj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patil-suraj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patil-suraj\/orgs","repos_url":"https:\/\/api.github.com\/users\/patil-suraj\/repos","events_url":"https:\/\/api.github.com\/users\/patil-suraj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patil-suraj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606984827000,"updated_at":1607014393000,"closed_at":1607014343000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1038","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1038","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1038.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1038.patch"},"body":"This PR adds the MedHop dataset from the QAngaroo multi hop reading comprehension datasets\r\n\r\nMore info:\r\nhttp:\/\/qangaroo.cs.ucl.ac.uk\/index.html","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1038\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1037","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1037\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1037\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1037\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1037","id":755975586,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxNTk2NDkx","number":1037,"title":"Fix docs indentation 
issues","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["is this an issue ?","Yes @lhoestq, look at the docs site. For example, in https:\/\/huggingface.co\/docs\/datasets\/add_dataset.html, look at the indentation in the code block under the sentence:\r\n> Here are the features of the SQuAD dataset for instance, which is taken from the squad dataset loading script:"],"created_at":1606983694000,"updated_at":1608652875000,"closed_at":1608652875000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1037","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1037","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1037.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1037.patch"},"body":"Replace tabs with spaces.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1037\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1036","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1036\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1036\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1036\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1036","id":755953294,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxNTc4MjQ4","number":1036,"title":"Add 
PerSenT","user":{"login":"jeromeku","id":2455711,"node_id":"MDQ6VXNlcjI0NTU3MTE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2455711?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jeromeku","html_url":"https:\/\/github.com\/jeromeku","followers_url":"https:\/\/api.github.com\/users\/jeromeku\/followers","following_url":"https:\/\/api.github.com\/users\/jeromeku\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jeromeku\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jeromeku\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jeromeku\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jeromeku\/orgs","repos_url":"https:\/\/api.github.com\/users\/jeromeku\/repos","events_url":"https:\/\/api.github.com\/users\/jeromeku\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jeromeku\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["looks like this PR contains changes in many other files than the ones for PerSenT\r\ncan you create another branch and another PR ?","closing since #1142 was merged"],"created_at":1606981438000,"updated_at":1607953243000,"closed_at":1607953243000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1036","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1036","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1036.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1036.patch"},"body":"Added [Person's SentimenT](https:\/\/stonybrooknlp.github.io\/PerSenT\/) dataset. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1036\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1035","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1035\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1035\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1035\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1035","id":755947097,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxNTczMTc3","number":1035,"title":"add wiki_hop","user":{"login":"patil-suraj","id":27137566,"node_id":"MDQ6VXNlcjI3MTM3NTY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27137566?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patil-suraj","html_url":"https:\/\/github.com\/patil-suraj","followers_url":"https:\/\/api.github.com\/users\/patil-suraj\/followers","following_url":"https:\/\/api.github.com\/users\/patil-suraj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patil-suraj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patil-suraj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patil-suraj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patil-suraj\/orgs","repos_url":"https:\/\/api.github.com\/users\/patil-suraj\/repos","events_url":"https:\/\/api.github.com\/users\/patil-suraj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patil-suraj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Also the dummy data files are quite big (500KB)\r\nIf you could reduce that that would be nice (just look at the files inside and remove unecessary chunks of texts)\r\nin general dummy data are just a few KB and we suggest to not get higher than 50KB\r\n\r\nHaving light dummy data makes the repo faster to clone"],"created_at":1606980746000,"updated_at":1607013820000,"closed_at":1607013672000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1035","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1035","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1035.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1035.patch"},"body":"This PR adds the WikiHop dataset from the QAngaroo multi hop reading comprehension datasets\r\n\r\nMore info:\r\nhttp:\/\/qangaroo.cs.ucl.ac.uk\/index.html\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1035\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1034","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1034\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1034\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1034\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1034","id":755936327,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxNTY0MjA0","number":1034,"title":"add scb_mt_enth_2020","user":{"login":"cstorm125","id":15519308,"node_id":"MDQ6VXNlcjE1NTE5MzA4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15519308?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cstorm125","html_url":"https:\/\/github.com\/cstorm125","followers_url":"https:\/\/api.github.com\/users\/cstorm125\/followers","following_url":"https:\/\/api.github.com\/users\/cstorm125\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cstorm125\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cstorm125\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cstorm125\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cstorm125\/orgs","repos_url":"https:\/\/api.github.com\/users\/cstorm125\/repos","events_url":"https:\/\/api.github.com\/users\/cstorm125\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cstorm125\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606979629000,"updated_at":1607014643000,"closed_at":1607014643000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1034","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1034","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1034.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1034.patch"},"body":"## scb-mt-en-th-2020: A Large English-Thai Parallel Corpus\r\n\r\nThe primary objective of our work is to build a large-scale English-Thai dataset for machine translation.\r\nWe construct an English-Thai machine translation dataset with over 1 million segment pairs, curated from various sources,\r\nnamely news, Wikipedia articles, SMS messages, task-based dialogs, web-crawled data and government documents.\r\nMethodology for gathering data, building parallel texts and removing noisy sentence pairs are presented in a reproducible manner.\r\nWe train machine translation models based on this dataset. 
Our models' performance are comparable to that of\r\nGoogle Translation API (as of May 2020) for Thai-English and outperform Google when the Open Parallel Corpus (OPUS) is\r\nincluded in the training data for both Thai-English and English-Thai translation.\r\nThe dataset, pre-trained models, and source code to reproduce our work are available for public use.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1034\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1033","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1033\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1033\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1033\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1033","id":755921927,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxNTUxNzYw","number":1033,"title":"Add support for \".txm\" format","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Neat! Looks like you need a rebase and then should be good to go :) ","Done, @yjernite, @lhoestq.","If you agree, we could merge this.","Hi ! 
yes sure :) can you just merge master into this branch before we merge ?","Done @lhoestq "],"created_at":1606978328000,"updated_at":1613936831000,"closed_at":1613936831000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1033","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1033","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1033.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1033.patch"},"body":"In dummy data generation, add support for XML-like \".txm\" file format.\r\n\r\nAlso support filenames with additional compression extension: \".txm.gz\".","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1033\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1032","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1032\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1032\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1032\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1032","id":755858785,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxNDk2MTU2","number":1032,"title":"IIT B English to Hindi machine translation dataset","user":{"login":"spatil6","id":6419011,"node_id":"MDQ6VXNlcjY0MTkwMTE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6419011?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/spatil6","html_url":"https:\/\/github.com\/spatil6","followers_url":"https:\/\/api.github.com\/users\/spatil6\/followers","following_url":"https:\/\/api.github.com\/users\/spatil6\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/spatil6\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/spatil6\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/spatil6\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/spatil6\/orgs","repos_url":"https:\/\/api.github.com\/users\/spatil6\/repos","events_url":"https:\/\/api.github.com\/users\/spatil6\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/spatil6\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Please note that this dataset is actually behind a form that one needs to fill. However, the link is direct. I'm not sure what should the approach be in this case.","also pinging @thomwolf \r\nThe dataset webpage returns a form when trying to download the dataset (form here : http:\/\/www.cfilt.iitb.ac.in\/iitb_parallel\/dataset.html).\r\nHowever the url we get with the form can be used for the dataset script.\r\nShould we ask the authors or use the urls this way ?","> also pinging @thomwolf\r\n> The dataset webpage returns a form when trying to download the dataset (form here : http:\/\/www.cfilt.iitb.ac.in\/iitb_parallel\/dataset.html).\r\n> However the url we get with the form can be used for the dataset script.\r\n> Should we ask the authors or use the urls this way ?\r\n\r\nI had discussion on this with @thomwolf . 
We have already sent email to author of this dataset.","Hi @spatil6 !\r\nAny news from the authors ?","IIT B folks will add this dataset to repo."],"created_at":1606972725000,"updated_at":1610268291000,"closed_at":1610268255000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1032","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1032","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1032.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1032.patch"},"body":"Adding IIT Bombay English-Hindi Corpus dataset\r\nmore info : http:\/\/www.cfilt.iitb.ac.in\/iitb_parallel\/","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1032\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1031","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1031\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1031\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1031\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1031","id":755844004,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxNDgyMzEy","number":1031,"title":"add crows_pairs","user":{"login":"patil-suraj","id":27137566,"node_id":"MDQ6VXNlcjI3MTM3NTY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27137566?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patil-suraj","html_url":"https:\/\/github.com\/patil-suraj","followers_url":"https:\/\/api.github.com\/users\/patil-suraj\/followers","following_url":"https:\/\/api.github.com\/users\/patil-suraj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patil-suraj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patil-suraj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patil-suraj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patil-suraj\/orgs","repos_url":"https:\/\/api.github.com\/users\/patil-suraj\/repos","events_url":"https:\/\/api.github.com\/users\/patil-suraj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patil-suraj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["looks good now :) wdyt @yjernite ?","Looks good to merge for me, can edit the dataset card later if required. 
Merging"],"created_at":1606971911000,"updated_at":1607020192000,"closed_at":1607020179000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1031","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1031","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1031.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1031.patch"},"body":"This PR adds CrowS-Pairs datasets.\r\n\r\nMore info:\r\nhttps:\/\/github.com\/nyu-mll\/crows-pairs\/\r\nhttps:\/\/arxiv.org\/pdf\/2010.00133.pdf","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1031\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1030","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1030\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1030\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1030\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1030","id":755777438,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxNDI0MDM3","number":1030,"title":"allegro_reviews dataset ","user":{"login":"abecadel","id":1654113,"node_id":"MDQ6VXNlcjE2NTQxMTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1654113?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abecadel","html_url":"https:\/\/github.com\/abecadel","followers_url":"https:\/\/api.github.com\/users\/abecadel\/followers","following_url":"https:\/\/api.github.com\/users\/abecadel\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abecadel\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abecadel\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abecadel\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abecadel\/orgs","repos_url":"https:\/\/api.github.com\/users\/abecadel\/repos","events_url":"https:\/\/api.github.com\/users\/abecadel\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abecadel\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606965099000,"updated_at":1607079389000,"closed_at":1607013287000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1030","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1030","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1030.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1030.patch"},"body":"- **Name:** *allegro_reviews*\r\n- **Description:** *Allegro Reviews is a sentiment analysis dataset, consisting of 11,588 product reviews written in Polish and extracted from Allegro.pl - a popular e-commerce marketplace. 
Each review contains at least 50 words and has a rating on a scale from one (negative review) to five (positive review).*\r\n- **Data:** *https:\/\/github.com\/allegro\/klejbenchmark-allegroreviews*\r\n- **Motivation:** *The KLEJ benchmark (Kompleksowa Lista Ewaluacji J\u0119zykowych) is a set of nine evaluation tasks for the Polish language understanding.*","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1030\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1029","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1029\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1029\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1029\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1029","id":755767616,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxNDE2NzE4","number":1029,"title":"Add PEC","user":{"login":"zhongpeixiang","id":11826803,"node_id":"MDQ6VXNlcjExODI2ODAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11826803?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/zhongpeixiang","html_url":"https:\/\/github.com\/zhongpeixiang","followers_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/followers","following_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/orgs","repos_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/repos","events_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I'm a bit frustrated now to get this right.","Hey @zhongpeixiang!\r\nReally nice addition here!\r\n\r\nDid you officially joined the sprint by posting [on the forum thread](https:\/\/discuss.huggingface.co\/t\/open-to-the-community-one-week-team-effort-to-reach-v2-0-of-hf-datasets-library\/2176) and joining our slack?\r\nI can't seem to find you there! Should I add you directly with your gmail address?","> Hey @zhongpeixiang!\r\n> Really nice addition here!\r\n> \r\n> Did you officially joined the sprint by posting [on the forum thread](https:\/\/discuss.huggingface.co\/t\/open-to-the-community-one-week-team-effort-to-reach-v2-0-of-hf-datasets-library\/2176) and joining our slack?\r\n> I can't seem to find you there! Should I add you directly with your gmail address?\r\n\r\nThank you for the invitation. This initiative is awesome. Sadly I\u2019m occupied by my thesis writing this month. 
Good luck \ud83e\udd17","As you want @zhongpeixiang (I was maybe not clear but that just mean that by posting on the forum thread that you participated in the current event you will get a special gift (a tee-shirt) for the contribution that you have already done here :-) Nothing more to do)","> As you want @zhongpeixiang (I was maybe not clear but that just mean that by posting on the forum thread that you participated in the current event you will get a special gift (a tee-shirt) for the contribution that you have already done here :-) Nothing more to do)\r\n\r\nOh, I misunderstood the post. I'm glad to join."],"created_at":1606963568000,"updated_at":1607079499000,"closed_at":1607012106000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1029","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1029","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1029.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1029.patch"},"body":"A persona-based empathetic conversation dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1029\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1028","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1028\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1028\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1028\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1028","id":755712854,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxMzc0MTYw","number":1028,"title":"Add ASSET dataset for text simplification evaluation","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Nice, thanks @yjernite !!"],"created_at":1606955309000,"updated_at":1608199386000,"closed_at":1607013277000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1028","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1028","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1028.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1028.patch"},"body":"Adding the ASSET dataset from 
https:\/\/github.com\/facebookresearch\/asset\r\n\r\nOne config for the simplification data, one for the human ratings of quality.\r\n\r\nThe README.md borrows from that written by @juand-r","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1028\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1027","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1027\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1027\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1027\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1027","id":755695420,"node_id":"MDU6SXNzdWU3NTU2OTU0MjA=","number":1027,"title":"Hi","user":{"login":"suemori87","id":75398394,"node_id":"MDQ6VXNlcjc1Mzk4Mzk0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/75398394?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/suemori87","html_url":"https:\/\/github.com\/suemori87","followers_url":"https:\/\/api.github.com\/users\/suemori87\/followers","following_url":"https:\/\/api.github.com\/users\/suemori87\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/suemori87\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/suemori87\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/suemori87\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/suemori87\/orgs","repos_url":"https:\/\/api.github.com\/users\/suemori87\/repos","events_url":"https:\/\/api.github.com\/users\/suemori87\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/suemori87\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606952834000,"updated_at":1607013761000,"closed_at":1607013761000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\n- **Name:** *name of the dataset*\n- **Description:** *short description of the dataset (or link to social media or blog post)*\n- **Paper:** *link to the dataset paper if available*\n- **Data:** *link to the Github repository or current dataset location*\n- **Motivation:** *what are some good reasons to have this dataset*\n\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1027\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1026","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1026\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1026\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1026\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1026","id":755689195,"node_id":"MDU6SXNzdWU3NTU2ODkxOTU=","number":1026,"title":"L\u00edo 
o","user":{"login":"Isaias0","id":73465581,"node_id":"MDQ6VXNlcjczNDY1NTgx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/73465581?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Isaias0","html_url":"https:\/\/github.com\/Isaias0","followers_url":"https:\/\/api.github.com\/users\/Isaias0\/followers","following_url":"https:\/\/api.github.com\/users\/Isaias0\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Isaias0\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Isaias0\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Isaias0\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Isaias0\/orgs","repos_url":"https:\/\/api.github.com\/users\/Isaias0\/repos","events_url":"https:\/\/api.github.com\/users\/Isaias0\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Isaias0\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606951945000,"updated_at":1607013767000,"closed_at":1607013767000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"````l`````````\n\n```\nO\n```\n`````\n\u00d1o\n```\n````\n\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1026\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1025","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1025\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1025\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1025\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1025","id":755673371,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxMzQxNjE5","number":1025,"title":"Add Sesotho Ner","user":{"login":"yvonnegitau","id":7923902,"node_id":"MDQ6VXNlcjc5MjM5MDI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7923902?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yvonnegitau","html_url":"https:\/\/github.com\/yvonnegitau","followers_url":"https:\/\/api.github.com\/users\/yvonnegitau\/followers","following_url":"https:\/\/api.github.com\/users\/yvonnegitau\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yvonnegitau\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yvonnegitau\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yvonnegitau\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yvonnegitau\/orgs","repos_url":"https:\/\/api.github.com\/users\/yvonnegitau\/repos","events_url":"https:\/\/api.github.com\/users\/yvonnegitau\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yvonnegitau\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["looks like this PR include changes to other files (sepedi)\r\ncould you try to only include the files related to the addition of sesotho ner ?","I think i need to clean up my local repo. 
I am committing everything a fresh after sepedi","Feel free to ping me when yuo have a clean PR and it's ready to review :)","closing in favor of #1114 "],"created_at":1606950015000,"updated_at":1608136023000,"closed_at":1608136022000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1025","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1025","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1025.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1025.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1025\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1024","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1024\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1024\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1024\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1024","id":755664113,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxMzMzOTc5","number":1024,"title":"Add ZEST: ZEroShot learning from Task descriptions","user":{"login":"joeddav","id":9353833,"node_id":"MDQ6VXNlcjkzNTM4MzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9353833?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/joeddav","html_url":"https:\/\/github.com\/joeddav","followers_url":"https:\/\/api.github.com\/users\/joeddav\/followers","following_url":"https:\/\/api.github.com\/users\/joeddav\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/joeddav\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/joeddav\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/joeddav\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/joeddav\/orgs","repos_url":"https:\/\/api.github.com\/users\/joeddav\/repos","events_url":"https:\/\/api.github.com\/users\/joeddav\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/joeddav\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Looks good to me, we can ping the authors for more info later. And yes apply `other-task` labels liberally, we can sort them out later :) \r\n\r\nLooks ready to merge when you're ready @joeddav "],"created_at":1606948880000,"updated_at":1607023260000,"closed_at":1607011755000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1024","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1024","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1024.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1024.patch"},"body":"Adds the ZEST dataset on zero-shot learning from task descriptions from AI2.\r\n\r\n- Webpage: https:\/\/allenai.org\/data\/zest\r\n- Paper: https:\/\/arxiv.org\/abs\/2011.08115\r\n\r\nThe nature of this dataset made the supported task tags tricky if you wouldn't mind giving any feedback @yjernite. 
Also let me know if you think we should have a `other-task-generalization` or something like that...","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1024\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1023","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1023\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1023\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1023\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1023","id":755655752,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxMzI3MTMy","number":1023,"title":"Add Schema Guided Dialogue dataset","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606947961000,"updated_at":1606958281000,"closed_at":1606958281000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1023","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1023","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1023.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1023.patch"},"body":"This PR adds the Schema Guided Dialogue dataset created for the DSTC8 challenge\r\n- https:\/\/github.com\/google-research-datasets\/dstc8-schema-guided-dialogue\r\n\r\nA bit simpler than MultiWOZ, the only tricky thing is the sequence of dictionaries that had to be linearized. 
There is a config for the data proper, and a config for the schemas.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1023\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1022","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1022\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1022\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1022\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1022","id":755651377,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxMzIzNTkw","number":1022,"title":"add MRQA","user":{"login":"VictorSanh","id":16107619,"node_id":"MDQ6VXNlcjE2MTA3NjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16107619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/VictorSanh","html_url":"https:\/\/github.com\/VictorSanh","followers_url":"https:\/\/api.github.com\/users\/VictorSanh\/followers","following_url":"https:\/\/api.github.com\/users\/VictorSanh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/VictorSanh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/VictorSanh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/VictorSanh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/VictorSanh\/orgs","repos_url":"https:\/\/api.github.com\/users\/VictorSanh\/repos","events_url":"https:\/\/api.github.com\/users\/VictorSanh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/VictorSanh\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["THanks!\r\nDone!"],"created_at":1606947476000,"updated_at":1607042066000,"closed_at":1607042065000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1022","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1022","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1022.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1022.patch"},"body":"MRQA (shared task 2019)\r\nout of distribution generalization\r\nFramed as extractive question answering\r\nDataset is the concatenation (of subsets) of existing QA datasets processed to match the SQuAD format","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1022\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1021","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1021\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1021\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1021\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1021","id":755644559,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxMzE4MTQw","number":1021,"title":"Add Gutenberg time references 
dataset","user":{"login":"TevenLeScao","id":26709476,"node_id":"MDQ6VXNlcjI2NzA5NDc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26709476?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/TevenLeScao","html_url":"https:\/\/github.com\/TevenLeScao","followers_url":"https:\/\/api.github.com\/users\/TevenLeScao\/followers","following_url":"https:\/\/api.github.com\/users\/TevenLeScao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/TevenLeScao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/TevenLeScao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/TevenLeScao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/TevenLeScao\/orgs","repos_url":"https:\/\/api.github.com\/users\/TevenLeScao\/repos","events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Description: \"A clean data resource containing all explicit time references in a dataset of 52,183 novels whose full text is available via Project Gutenberg and the Hathi Trust Digital Library 2.\" > This is just the Gutenberg part.\r\n\r\nAlso, the paragraph at the top of the file would make a good Dataset Summary in the README :) "],"created_at":1606946726000,"updated_at":1606991619000,"closed_at":1606991618000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1021","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1021","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1021.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1021.patch"},"body":"This PR adds the gutenberg_time dataset: https:\/\/arxiv.org\/abs\/2011.04124","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1021\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1020","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1020\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1020\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1020\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1020","id":755601450,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxMjgyODQy","number":1020,"title":"Add Setswana 
NER","user":{"login":"yvonnegitau","id":7923902,"node_id":"MDQ6VXNlcjc5MjM5MDI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7923902?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yvonnegitau","html_url":"https:\/\/github.com\/yvonnegitau","followers_url":"https:\/\/api.github.com\/users\/yvonnegitau\/followers","following_url":"https:\/\/api.github.com\/users\/yvonnegitau\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yvonnegitau\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yvonnegitau\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yvonnegitau\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yvonnegitau\/orgs","repos_url":"https:\/\/api.github.com\/users\/yvonnegitau\/repos","events_url":"https:\/\/api.github.com\/users\/yvonnegitau\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yvonnegitau\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606942327000,"updated_at":1607007374000,"closed_at":1607007374000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1020","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1020","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1020.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1020.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1020\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1019","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1019\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1019\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1019\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1019","id":755582090,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxMjY2NzAz","number":1019,"title":"Add caWaC 
dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606940335000,"updated_at":1607006829000,"closed_at":1607006829000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1019","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1019","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1019.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1019.patch"},"body":"Add dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1019\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1018","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1018\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1018\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1018\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1018","id":755570882,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxMjU3NTU2","number":1018,"title":"Add Sepedi NER","user":{"login":"yvonnegitau","id":7923902,"node_id":"MDQ6VXNlcjc5MjM5MDI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7923902?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yvonnegitau","html_url":"https:\/\/github.com\/yvonnegitau","followers_url":"https:\/\/api.github.com\/users\/yvonnegitau\/followers","following_url":"https:\/\/api.github.com\/users\/yvonnegitau\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yvonnegitau\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yvonnegitau\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yvonnegitau\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yvonnegitau\/orgs","repos_url":"https:\/\/api.github.com\/users\/yvonnegitau\/repos","events_url":"https:\/\/api.github.com\/users\/yvonnegitau\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yvonnegitau\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Sorry for this. 
I deleted sepedi_ner_corpus as per your earlier advise. Let me check. "],"created_at":1606939265000,"updated_at":1607032023000,"closed_at":1607031998000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1018","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1018","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1018.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1018.patch"},"body":"This is a new branch created for this dataset","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1018\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1017","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1017\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1017\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1017\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1017","id":755558175,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxMjQ3MDE2","number":1017,"title":"Specify file encoding","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks!"],"created_at":1606938045000,"updated_at":1606956265000,"closed_at":1606956265000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1017","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1017","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1017.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1017.patch"},"body":"If not specified, Python uses system default, which for Windows is not \"utf-8\".","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1017\/timeline","performed_via_github_app":null,"is_pull_request":true} 
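The "Specify file encoding" change above (issue 1017) comes down to passing an explicit `encoding=` argument wherever the loading scripts open text files, instead of relying on the platform default (often cp1252 on Windows rather than UTF-8). Below is a minimal sketch of that idea, not the exact diff of the PR; the file name is hypothetical and used only for illustration:

```python
# Minimal sketch of the "specify file encoding" idea: always pass an explicit
# encoding so the same loading code behaves identically on Linux and Windows.
from pathlib import Path

path = Path("example.txt")                       # hypothetical file, for illustration only
path.write_text("héllo wörld\n", encoding="utf-8")

with open(path, encoding="utf-8") as f:          # explicit, platform-independent
    print(f.read())
```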
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1016","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1016\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1016\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1016\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1016","id":755521862,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxMjE3MjM3","number":1016,"title":"Add CLINC150 dataset","user":{"login":"sumanthd17","id":28291870,"node_id":"MDQ6VXNlcjI4MjkxODcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28291870?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sumanthd17","html_url":"https:\/\/github.com\/sumanthd17","followers_url":"https:\/\/api.github.com\/users\/sumanthd17\/followers","following_url":"https:\/\/api.github.com\/users\/sumanthd17\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sumanthd17\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sumanthd17\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sumanthd17\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sumanthd17\/orgs","repos_url":"https:\/\/api.github.com\/users\/sumanthd17\/repos","events_url":"https:\/\/api.github.com\/users\/sumanthd17\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sumanthd17\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606934670000,"updated_at":1606991524000,"closed_at":1606991524000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1016","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1016","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1016.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1016.patch"},"body":"Added CLINC150 Dataset. 
The link to the dataset can be found [here](https:\/\/github.com\/clinc\/oos-eval) and the paper can be found [here](https:\/\/www.aclweb.org\/anthology\/D19-1131.pdf)\r\n\r\n- [x] Followed the instructions in CONTRIBUTING.md\r\n- [x] Ran the tests successfully\r\n- [x] Created the dummy data","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1016\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1015","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1015\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1015\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1015\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1015","id":755508841,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxMjA2MTgy","number":1015,"title":"add hard dataset","user":{"login":"zaidalyafeai","id":15667714,"node_id":"MDQ6VXNlcjE1NjY3NzE0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15667714?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/zaidalyafeai","html_url":"https:\/\/github.com\/zaidalyafeai","followers_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/followers","following_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/orgs","repos_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/repos","events_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks @sumanthd17 that fixed it. 
"],"created_at":1606933656000,"updated_at":1607007834000,"closed_at":1607007834000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1015","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1015","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1015.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1015.patch"},"body":"Hotel Reviews in Arabic language.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1015\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1014","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1014\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1014\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1014\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1014","id":755505851,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxMjAzNzAz","number":1014,"title":"Add SciTLDR Dataset (Take 2)","user":{"login":"Bharat123rox","id":13381361,"node_id":"MDQ6VXNlcjEzMzgxMzYx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13381361?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Bharat123rox","html_url":"https:\/\/github.com\/Bharat123rox","followers_url":"https:\/\/api.github.com\/users\/Bharat123rox\/followers","following_url":"https:\/\/api.github.com\/users\/Bharat123rox\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Bharat123rox\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Bharat123rox\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Bharat123rox\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Bharat123rox\/orgs","repos_url":"https:\/\/api.github.com\/users\/Bharat123rox\/repos","events_url":"https:\/\/api.github.com\/users\/Bharat123rox\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Bharat123rox\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq please review this PR when you get free","If the CI fails just because of `RemoteDatasetTest` errors it's fine, they're fixed on master","> If the CI fails just because of `RemoteDatasetTest` errors it's fine, they're fixed on master\r\n\r\nThe same 3 tests are failing again :(\r\n```\r\nFAILED tests\/test_dataset_common.py::RemoteDatasetTest::test_builder_class_norwegian_ner\r\nFAILED tests\/test_dataset_common.py::RemoteDatasetTest::test_builder_configs_norwegian_ner\r\nFAILED tests\/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_norwegian_ner\r\n```","One trick if you want to add more datasets to avoid these errors : you can just rebase the master branch of your fork from the master branch of the repo. Then each time you make a new branch from master on your fork, it will include the fix for these errors","> One trick if you want to add more datasets to avoid these errors : you can just rebase the master branch of your fork from the master branch of the repo. 
Then each time you make a new branch from master on your fork, it will include the fix for these errors\r\n\r\nYes, I almost always do that, but somehow seems even this branch got old \ud83d\ude13 \r\nI also do the following if I directly create a new branch locally: `git checkout -b <branchname> upstream\/master` so it stays up-to date irrespective of my fork, still don't know how this crept in again","Merging this one since the CI is fixed on master"],"created_at":1606933370000,"updated_at":1606935310000,"closed_at":1606934278000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1014","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1014","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1014.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1014.patch"},"body":"Adds the SciTLDR Dataset by AI2\r\nAdded the `README.md` card with tags to the best of my knowledge\r\n\r\nMulti-target summaries or TLDRs of Scientific Documents\r\n\r\nContinued from #986 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1014\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1013","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1013\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1013\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1013\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1013","id":755493075,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxMTkzMTcy","number":1013,"title":"Adding CS restaurants dataset","user":{"login":"TevenLeScao","id":26709476,"node_id":"MDQ6VXNlcjI2NzA5NDc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26709476?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/TevenLeScao","html_url":"https:\/\/github.com\/TevenLeScao","followers_url":"https:\/\/api.github.com\/users\/TevenLeScao\/followers","following_url":"https:\/\/api.github.com\/users\/TevenLeScao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/TevenLeScao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/TevenLeScao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/TevenLeScao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/TevenLeScao\/orgs","repos_url":"https:\/\/api.github.com\/users\/TevenLeScao\/repos","events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606932150000,"updated_at":1606933520000,"closed_at":1606933519000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1013","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1013","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1013.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1013.patch"},"body":"This PR adds the CS restaurants dataset; this is a re-opening of a previous PR with a chaotic 
commit history.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1013\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1012","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1012\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1012\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1012\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1012","id":755485658,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxMTg3MTI2","number":1012,"title":"Adding Evidence Inference Data:","user":{"login":"Narsil","id":204321,"node_id":"MDQ6VXNlcjIwNDMyMQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/204321?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Narsil","html_url":"https:\/\/github.com\/Narsil","followers_url":"https:\/\/api.github.com\/users\/Narsil\/followers","following_url":"https:\/\/api.github.com\/users\/Narsil\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Narsil\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Narsil\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Narsil\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Narsil\/orgs","repos_url":"https:\/\/api.github.com\/users\/Narsil\/repos","events_url":"https:\/\/api.github.com\/users\/Narsil\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Narsil\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606931495000,"updated_at":1607007886000,"closed_at":1607007886000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1012","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1012","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1012.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1012.patch"},"body":"http:\/\/evidence-inference.ebm-nlp.com\/download\/\nhttps:\/\/arxiv.org\/pdf\/2005.04177.pdf","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1012\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1011","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1011\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1011\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1011\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1011","id":755463726,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxMTY5MjA3","number":1011,"title":"Add Bilingual Corpus of Arabic-English Parallel 
Tweets","user":{"login":"sumanthd17","id":28291870,"node_id":"MDQ6VXNlcjI4MjkxODcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28291870?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sumanthd17","html_url":"https:\/\/github.com\/sumanthd17","followers_url":"https:\/\/api.github.com\/users\/sumanthd17\/followers","following_url":"https:\/\/api.github.com\/users\/sumanthd17\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sumanthd17\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sumanthd17\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sumanthd17\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sumanthd17\/orgs","repos_url":"https:\/\/api.github.com\/users\/sumanthd17\/repos","events_url":"https:\/\/api.github.com\/users\/sumanthd17\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sumanthd17\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["IMO, the problem with this dataset is that it is not really a text\/nlp dataset. These are just collections of tweet ids. So, ultimately, one needs to crawl twitter to get the actual text.","That's true.\r\n\r\n","at least it's clear in the description that one needs to collect the tweets : \r\n```\r\nThis resource is a result of a generic method for collecting parallel tweets.\r\n```","Looks like this is failing for other datasets. Should I rebase it and push again?\r\nAlso rebasing and pushing is reflecting changes in many other files (ultimately forcing me to open a new branch and a new PR) any way to avoid this?","No let me merge this one directly, it's fine","merging since the CI is fixed on master"],"created_at":1606929602000,"updated_at":1607093110000,"closed_at":1607093073000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1011","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1011","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1011.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1011.patch"},"body":"Added Bilingual Corpus of Arabic-English Parallel Tweets. 
The link to the dataset can be found [here](https:\/\/alt.qcri.org\/wp-content\/uploads\/2020\/08\/Bilingual-Corpus-of-Arabic-English-Parallel-Tweets.zip) and the paper can be found [here](https:\/\/www.aclweb.org\/anthology\/2020.bucc-1.3.pdf)\r\n\r\n- [x] Followed the instructions in CONTRIBUTING.md\r\n- [x] Ran the tests successfully\r\n- [x] Created the dummy data","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1011\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1010","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1010\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1010\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1010\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1010","id":755432143,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxMTQzMzAx","number":1010,"title":"Add NoReC: Norwegian Review Corpus","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606927109000,"updated_at":1613659649000,"closed_at":1613659648000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1010","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1010","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1010.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1010.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1010\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1009","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1009\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1009\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1009\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1009","id":755384433,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxMTA0NDc5","number":1009,"title":"Adding C3 dataset: the 
first free-form multiple-Choice Chinese machine reading Comprehension dataset.","user":{"login":"Narsil","id":204321,"node_id":"MDQ6VXNlcjIwNDMyMQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/204321?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Narsil","html_url":"https:\/\/github.com\/Narsil","followers_url":"https:\/\/api.github.com\/users\/Narsil\/followers","following_url":"https:\/\/api.github.com\/users\/Narsil\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Narsil\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Narsil\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Narsil\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Narsil\/orgs","repos_url":"https:\/\/api.github.com\/users\/Narsil\/repos","events_url":"https:\/\/api.github.com\/users\/Narsil\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Narsil\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606923636000,"updated_at":1607001390000,"closed_at":1607001389000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1009","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1009","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1009.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1009.patch"},"body":"https:\/\/github.com\/nlpdata\/c3\nhttps:\/\/arxiv.org\/abs\/1904.09679","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1009\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1008","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1008\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1008\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1008\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1008","id":755372798,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxMDk1ODQy","number":1008,"title":"Adding C3 dataset: the first free-form multiple-Choice Chinese machine reading Comprehension dataset. 
https:\/\/github.com\/nlpdata\/c3 https:\/\/arxiv.org\/abs\/1904.09679","user":{"login":"Narsil","id":204321,"node_id":"MDQ6VXNlcjIwNDMyMQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/204321?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Narsil","html_url":"https:\/\/github.com\/Narsil","followers_url":"https:\/\/api.github.com\/users\/Narsil\/followers","following_url":"https:\/\/api.github.com\/users\/Narsil\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Narsil\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Narsil\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Narsil\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Narsil\/orgs","repos_url":"https:\/\/api.github.com\/users\/Narsil\/repos","events_url":"https:\/\/api.github.com\/users\/Narsil\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Narsil\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Dupe of #1009 "],"created_at":1606922885000,"updated_at":1606923655000,"closed_at":1606923655000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1008","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1008","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1008.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1008.patch"},"body":null,"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1008\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1007","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1007\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1007\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1007\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1007","id":755364078,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxMDg4NTk5","number":1007,"title":"Include license file in source 
distribution","user":{"login":"synapticarbors","id":589279,"node_id":"MDQ6VXNlcjU4OTI3OQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/589279?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/synapticarbors","html_url":"https:\/\/github.com\/synapticarbors","followers_url":"https:\/\/api.github.com\/users\/synapticarbors\/followers","following_url":"https:\/\/api.github.com\/users\/synapticarbors\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/synapticarbors\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/synapticarbors\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/synapticarbors\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/synapticarbors\/orgs","repos_url":"https:\/\/api.github.com\/users\/synapticarbors\/repos","events_url":"https:\/\/api.github.com\/users\/synapticarbors\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/synapticarbors\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606922263000,"updated_at":1606931885000,"closed_at":1606931885000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1007","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1007","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1007.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1007.patch"},"body":"It would be helpful to include the license file in the source distribution.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1007\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1006","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1006\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1006\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1006\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1006","id":755362766,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxMDg3NTIy","number":1006,"title":"add 
yahoo_answers_topics","user":{"login":"patil-suraj","id":27137566,"node_id":"MDQ6VXNlcjI3MTM3NTY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27137566?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patil-suraj","html_url":"https:\/\/github.com\/patil-suraj","followers_url":"https:\/\/api.github.com\/users\/patil-suraj\/followers","following_url":"https:\/\/api.github.com\/users\/patil-suraj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patil-suraj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patil-suraj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patil-suraj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patil-suraj\/orgs","repos_url":"https:\/\/api.github.com\/users\/patil-suraj\/repos","events_url":"https:\/\/api.github.com\/users\/patil-suraj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patil-suraj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["feel free to merge\/ping me to merge if there're no more changes to do"],"created_at":1606922173000,"updated_at":1607013878000,"closed_at":1606932092000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1006","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1006","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1006.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1006.patch"},"body":"This PR adds yahoo answers topic classification dataset.\r\n\r\nMore info:\r\nhttps:\/\/github.com\/LC-John\/Yahoo-Answers-Topic-Classification-Dataset\r\n\r\ncc @joeddav, @yjernite ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1006\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1005","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1005\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1005\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1005\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1005","id":755337255,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxMDY3Mjc5","number":1005,"title":"Adding Autshumato South african 
langages:","user":{"login":"Narsil","id":204321,"node_id":"MDQ6VXNlcjIwNDMyMQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/204321?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Narsil","html_url":"https:\/\/github.com\/Narsil","followers_url":"https:\/\/api.github.com\/users\/Narsil\/followers","following_url":"https:\/\/api.github.com\/users\/Narsil\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Narsil\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Narsil\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Narsil\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Narsil\/orgs","repos_url":"https:\/\/api.github.com\/users\/Narsil\/repos","events_url":"https:\/\/api.github.com\/users\/Narsil\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Narsil\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606920453000,"updated_at":1607001210000,"closed_at":1607001210000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1005","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1005","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1005.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1005.patch"},"body":"https:\/\/repo.sadilar.org\/handle\/20.500.12185\/7\/discover?filtertype=database&filter_relational_operator=equals&filter=Multilingual+Text+Corpora%3A+Aligned","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1005\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1004","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1004\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1004\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1004\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/1004","id":755325368,"node_id":"MDU6SXNzdWU3NTUzMjUzNjg=","number":1004,"title":"how large datasets are handled under the hood 
","user":{"login":"rabeehkarimimahabadi","id":73364383,"node_id":"MDQ6VXNlcjczMzY0Mzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/73364383?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi","html_url":"https:\/\/github.com\/rabeehkarimimahabadi","followers_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/followers","following_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/orgs","repos_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/repos","events_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This library uses Apache Arrow under the hood to store datasets on disk.\r\nThe advantage of Apache Arrow is that it allows to memory map the dataset. This allows to load datasets bigger than memory and with almost no RAM usage. It also offers excellent I\/O speed.\r\n\r\nFor example when you access one element or one batch\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nsquad = load_dataset(\"squad\", split=\"train\")\r\nfirst_element = squad[0]\r\none_batch = squad[:8]\r\n```\r\n\r\nthen only this element\/batch is loaded in memory, while the rest of the dataset is memory mapped.","How can we change how much data is loaded to memory with Arrow? I think that I am having some performance issue with it. When Arrow loads the data from disk it does it in multiprocess? It's almost twice slower training with arrow than in memory.\r\n\r\nEDIT:\r\nMy fault! I had not seen the `dataloader_num_workers` in `TrainingArguments` ! Now I can parallelize and go fast! Sorry, and thanks.","> How can we change how much data is loaded to memory with Arrow? I think that I am having some performance issue with it. When Arrow loads the data from disk it does it in multiprocess? It's almost twice slower training with arrow than in memory.\r\n\r\nLoading arrow data from disk is done with memory-mapping. This allows to load huge datasets without filling your RAM.\r\nMemory mapping is almost instantaneous and is done within one process.\r\n\r\nThen, the speed of querying examples from the dataset is I\/O bounded depending on your disk. If it's an SSD then fetching examples from the dataset will be very fast.\r\nBut since the I\/O speed of an SSD is lower than the one of RAM it's expected to be slower to fetch data from disk than from memory.\r\nStill, if you load the dataset in different processes then it can be faster but there will still be the I\/O bottleneck of the disk.\r\n\r\n> EDIT:\r\n> My fault! I had not seen the `dataloader_num_workers` in `TrainingArguments` ! Now I can parallelize and go fast! 
Sorry, and thanks.\r\n\r\nOk let me know if that helps !\r\n"],"created_at":1606919560000,"updated_at":1612175031000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI want to use multiple large datasets with a mapping style dataloader, where they cannot fit into memory, could you tell me how you handled the datasets under the hood? is this you bring all in memory in case of mapping style ones? or is this some sharding under the hood and you bring in memory when necessary, thanks ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1004\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1003","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1003\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1003\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1003\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1003","id":755310318,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxMDQ1NDcy","number":1003,"title":"Add multi_x_science_sum","user":{"login":"moussaKam","id":28675016,"node_id":"MDQ6VXNlcjI4Njc1MDE2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28675016?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/moussaKam","html_url":"https:\/\/github.com\/moussaKam","followers_url":"https:\/\/api.github.com\/users\/moussaKam\/followers","following_url":"https:\/\/api.github.com\/users\/moussaKam\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/moussaKam\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/moussaKam\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/moussaKam\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/moussaKam\/orgs","repos_url":"https:\/\/api.github.com\/users\/moussaKam\/repos","events_url":"https:\/\/api.github.com\/users\/moussaKam\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/moussaKam\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606918441000,"updated_at":1606930745000,"closed_at":1606930745000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1003","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1003","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1003.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1003.patch"},"body":"Add Multi-XScience Dataset. 
\r\n\r\ngithub repo: https:\/\/github.com\/yaolu\/Multi-XScience\r\npaper: [Multi-XScience: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles](https:\/\/arxiv.org\/abs\/2010.14235)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1003\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1002","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1002\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1002\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1002\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1002","id":755309758,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxMDQ1MDIx","number":1002,"title":"Adding Medal: MeDAL: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining","user":{"login":"Narsil","id":204321,"node_id":"MDQ6VXNlcjIwNDMyMQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/204321?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Narsil","html_url":"https:\/\/github.com\/Narsil","followers_url":"https:\/\/api.github.com\/users\/Narsil\/followers","following_url":"https:\/\/api.github.com\/users\/Narsil\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Narsil\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Narsil\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Narsil\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Narsil\/orgs","repos_url":"https:\/\/api.github.com\/users\/Narsil\/repos","events_url":"https:\/\/api.github.com\/users\/Narsil\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Narsil\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Could you fix the dummy data before we merge ?\r\nLooks like the dummy `train.csv` is missing","Thanks @Narsil @lhoestq for adding MeDAL :)"],"created_at":1606918397000,"updated_at":1607360283000,"closed_at":1607001273000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1002","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1002","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1002.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1002.patch"},"body":null,"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1002\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1001","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1001\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1001\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1001\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1001","id":755309071,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxMDQ0NDQ0","number":1001,"title":"Adding Medal: MeDAL: Medical Abbreviation Disambiguation Dataset for 
Natural Language Understanding Pretraining","user":{"login":"Narsil","id":204321,"node_id":"MDQ6VXNlcjIwNDMyMQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/204321?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Narsil","html_url":"https:\/\/github.com\/Narsil","followers_url":"https:\/\/api.github.com\/users\/Narsil\/followers","following_url":"https:\/\/api.github.com\/users\/Narsil\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Narsil\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Narsil\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Narsil\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Narsil\/orgs","repos_url":"https:\/\/api.github.com\/users\/Narsil\/repos","events_url":"https:\/\/api.github.com\/users\/Narsil\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Narsil\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Dupe"],"created_at":1606918350000,"updated_at":1606918392000,"closed_at":1606918392000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1001","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1001","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1001.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1001.patch"},"body":null,"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1001\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1000","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1000\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1000\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1000\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1000","id":755292066,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMxMDMxMTE1","number":1000,"title":"UM005: Urdu <> English Translation 
Dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606917095000,"updated_at":1607096070000,"closed_at":1607096069000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1000","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1000","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1000.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1000.patch"},"body":"Adds Urdu-English dataset for machine translation: http:\/\/ufal.ms.mff.cuni.cz\/umc\/005-en-ur\/","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1000\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/999","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/999\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/999\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/999\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/999","id":755246786,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwOTk1MTY3","number":999,"title":"add 
generated_reviews_enth","user":{"login":"cstorm125","id":15519308,"node_id":"MDQ6VXNlcjE1NTE5MzA4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15519308?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cstorm125","html_url":"https:\/\/github.com\/cstorm125","followers_url":"https:\/\/api.github.com\/users\/cstorm125\/followers","following_url":"https:\/\/api.github.com\/users\/cstorm125\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cstorm125\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cstorm125\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cstorm125\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cstorm125\/orgs","repos_url":"https:\/\/api.github.com\/users\/cstorm125\/repos","events_url":"https:\/\/api.github.com\/users\/cstorm125\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cstorm125\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606913443000,"updated_at":1606994248000,"closed_at":1606994248000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/999","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/999","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/999.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/999.patch"},"body":"`generated_reviews_enth` is created as part of [scb-mt-en-th-2020](https:\/\/arxiv.org\/pdf\/2007.03541.pdf) for machine translation task. This dataset (referred to as `generated_reviews_yn` in [scb-mt-en-th-2020](https:\/\/arxiv.org\/pdf\/2007.03541.pdf)) are English product reviews generated by [CTRL](https:\/\/arxiv.org\/abs\/1909.05858), translated by Google Translate API and annotated as accepted or rejected (`correct`) based on fluency and adequacy of the translation by human annotators. 
This allows it to be used for English-to-Thai translation quality esitmation (binary label), machine translation, and sentiment analysis.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/999\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/998","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/998\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/998\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/998\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/998","id":755235356,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwOTg2MTQ3","number":998,"title":"adding yahoo_answers_qa","user":{"login":"patil-suraj","id":27137566,"node_id":"MDQ6VXNlcjI3MTM3NTY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27137566?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patil-suraj","html_url":"https:\/\/github.com\/patil-suraj","followers_url":"https:\/\/api.github.com\/users\/patil-suraj\/followers","following_url":"https:\/\/api.github.com\/users\/patil-suraj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patil-suraj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patil-suraj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patil-suraj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patil-suraj\/orgs","repos_url":"https:\/\/api.github.com\/users\/patil-suraj\/repos","events_url":"https:\/\/api.github.com\/users\/patil-suraj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patil-suraj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606912434000,"updated_at":1606916740000,"closed_at":1606915566000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/998","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/998","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/998.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/998.patch"},"body":"Adding Yahoo Answers QA dataset.\r\n\r\nMore info:\r\nhttps:\/\/ciir.cs.umass.edu\/downloads\/nfL6\/","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/998\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/997","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/997\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/997\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/997\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/997","id":755185517,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwOTQ2MTIy","number":997,"title":"Microsoft 
CodeXGlue","user":{"login":"madlag","id":272253,"node_id":"MDQ6VXNlcjI3MjI1Mw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/272253?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/madlag","html_url":"https:\/\/github.com\/madlag","followers_url":"https:\/\/api.github.com\/users\/madlag\/followers","following_url":"https:\/\/api.github.com\/users\/madlag\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/madlag\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/madlag\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/madlag\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/madlag\/orgs","repos_url":"https:\/\/api.github.com\/users\/madlag\/repos","events_url":"https:\/\/api.github.com\/users\/madlag\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/madlag\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["#978 is working on adding code refinement\r\n\r\nmaybe we should keep the CodeXGlue benchmark (as glue) and don't merge the code_refinement dataset proposed in #978 ?\r\n\r\ncc @reshinthadithyan","Hi @madlag and @lhoestq , I am extremely interested in getting this dataset into HF's library as I research in this area a lot. I see that it hasn't been updated in a while, but it is very close to being finished. If no one is currently working on this, I'd be happy to do any final touches that might be needed to get this merged.","Hi @ncoop57 ! Thanks for your interest and sorry for the inactivity on this PR.\r\nSure feel free to create another PR to continue this one ! This one was really close to being merged so I think it won't require that much changes. 
In addition to my previous comments, there should also be a \"Contributions\" subsection (see the template of the README [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/templates\/README.md))","Superseded by https:\/\/github.com\/huggingface\/datasets\/pull\/2357 ."],"created_at":1606908078000,"updated_at":1623159745000,"closed_at":1623159744000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/997","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/997","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/997.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/997.patch"},"body":"Datasets from https:\/\/github.com\/microsoft\/CodeXGLUE\r\n\r\nThis contains 13 datasets:\r\n\r\ncode_x_glue_cc_clone_detection_big_clone_bench\r\ncode_x_glue_cc_clone_detection_poj_104\r\ncode_x_glue_cc_cloze_testing_all\r\ncode_x_glue_cc_cloze_testing_maxmin\r\ncode_x_glue_cc_code_completion_line\r\ncode_x_glue_cc_code_completion_token\r\ncode_x_glue_cc_code_refinement\r\ncode_x_glue_cc_code_to_code_trans\r\ncode_x_glue_cc_defect_detection\r\ncode_x_glue_ct_code_to_text\r\ncode_x_glue_tc_nl_code_search_adv\r\ncode_x_glue_tc_text_to_code\r\ncode_x_glue_tt_text_to_text\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/997\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/996","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/996\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/996\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/996\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/996","id":755176084,"node_id":"MDU6SXNzdWU3NTUxNzYwODQ=","number":996,"title":"NotADirectoryError while loading the CNN\/Dailymail dataset","user":{"login":"arc-bu","id":75367920,"node_id":"MDQ6VXNlcjc1MzY3OTIw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/75367920?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/arc-bu","html_url":"https:\/\/github.com\/arc-bu","followers_url":"https:\/\/api.github.com\/users\/arc-bu\/followers","following_url":"https:\/\/api.github.com\/users\/arc-bu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/arc-bu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/arc-bu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/arc-bu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/arc-bu\/orgs","repos_url":"https:\/\/api.github.com\/users\/arc-bu\/repos","events_url":"https:\/\/api.github.com\/users\/arc-bu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/arc-bu\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Looks like the google drive download failed.\r\nI'm getting a `Google Drive - Quota exceeded` error while looking at the downloaded file.\r\n\r\nWe should consider finding a better host than google drive for this dataset imo\r\nrelated : #873 #864 ","It is working now, thank you. \r\n\r\nShould I leave this issue open to address the Quota-exceeded error?","Yes please. 
It's been happening several times, we definitely need to address it","Any updates on this one? I'm facing a similar issue trying to add CelebA.","I've looked into it and couldn't find a solution. This looks like a Google Drive limitation..\r\nPlease try to use other hosts when possible","The original links are google drive links. Would it be feasible for HF to maintain their own servers for this? Also, I think the same issue must also exist with TFDS.","It's possible to host data on our side but we should ask the authors. TFDS has the same issue and doesn't have a solution either afaik.\r\nOtherwise you can use the google drive link, but it it's not that convenient because of this quota issue.","Okay. I imagine asking every author who shares their dataset on Google Drive will also be cumbersome.","I am getting this error as well. Is there a fix?","Not as long as the data is stored on GG drive unfortunately.\r\nMaybe we can ask if there's a mirror ?\r\n\r\nHi @JafferWilson is there a download link to get cnn dailymail from another host than GG drive ?\r\n\r\nTo give you some context, this library provides tools to download and process datasets. For CNN DailyMail the data are downloaded from the link you provide on your github repository. Unfortunately because of GG drive quotas, many users are not able to load this dataset."],"created_at":1606907276000,"updated_at":1617806971000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"\r\nDownloading and preparing dataset cnn_dailymail\/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to \/root\/.cache\/huggingface\/datasets\/cnn_dailymail\/3.0.0\/3.0.0\/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...\r\n\r\n---------------------------------------------------------------------------\r\n\r\nNotADirectoryError Traceback (most recent call last)\r\n\r\n<ipython-input-9-cd4bf8bea840> in <module>()\r\n 22 \r\n 23 \r\n---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train')\r\n 25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation')\r\n 26 test = load_dataset('cnn_dailymail', '3.0.0', split='test')\r\n\r\n5 frames\r\n\r\n\/root\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/cnn_dailymail\/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602\/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)\r\n 132 else:\r\n 133 logging.fatal(\"Unsupported publisher: %s\", publisher)\r\n--> 134 files = sorted(os.listdir(top_dir))\r\n 135 \r\n 136 ret_files = []\r\n\r\nNotADirectoryError: [Errno 20] Not a directory: '\/root\/.cache\/huggingface\/datasets\/downloads\/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b\/cnn\/stories'","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/996\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/995","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/995\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/995\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/995\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/995","id":755175199,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwOTM3NjI3","number":995,"title":"added dataset 
circa","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Blocked @k125-ak ;) Bye-bye"],"created_at":1606907199000,"updated_at":1607079496000,"closed_at":1606988377000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/995","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/995","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/995.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/995.patch"},"body":"Dataset Circa added. Only README.md and dataset card left","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/995\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/994","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/994\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/994\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/994\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/994","id":755146834,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwOTE1MDc2","number":994,"title":"Add Sepedi ner 
corpus","user":{"login":"yvonnegitau","id":7923902,"node_id":"MDQ6VXNlcjc5MjM5MDI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7923902?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yvonnegitau","html_url":"https:\/\/github.com\/yvonnegitau","followers_url":"https:\/\/api.github.com\/users\/yvonnegitau\/followers","following_url":"https:\/\/api.github.com\/users\/yvonnegitau\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yvonnegitau\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yvonnegitau\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yvonnegitau\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yvonnegitau\/orgs","repos_url":"https:\/\/api.github.com\/users\/yvonnegitau\/repos","events_url":"https:\/\/api.github.com\/users\/yvonnegitau\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yvonnegitau\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Looks like the PR includes commits about many other files.\r\nCould you create a clean branch from master, and create another PR ?","Sorry, will do that. "],"created_at":1606905007000,"updated_at":1606990754000,"closed_at":1606933208000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/994","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/994","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/994.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/994.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/994\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/993","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/993\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/993\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/993\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/993","id":755135768,"node_id":"MDU6SXNzdWU3NTUxMzU3Njg=","number":993,"title":"Problem downloading 
amazon_reviews_multi","user":{"login":"hfawaz","id":29229602,"node_id":"MDQ6VXNlcjI5MjI5NjAy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29229602?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hfawaz","html_url":"https:\/\/github.com\/hfawaz","followers_url":"https:\/\/api.github.com\/users\/hfawaz\/followers","following_url":"https:\/\/api.github.com\/users\/hfawaz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hfawaz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hfawaz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hfawaz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hfawaz\/orgs","repos_url":"https:\/\/api.github.com\/users\/hfawaz\/repos","events_url":"https:\/\/api.github.com\/users\/hfawaz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hfawaz\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @hfawaz ! This is working fine for me. Is it a repeated occurence? Have you tried from the latest verion?","Hi, it seems a connection problem. \r\nNow it says: \r\n`ConnectionError: Couldn't reach https:\/\/amazon-reviews-ml.s3-us-west-2.amazonaws.com\/json\/train\/dataset_ja_train.json`"],"created_at":1606904157000,"updated_at":1607074693000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Thanks for adding the dataset. \r\nAfter trying to load the dataset, I am getting the following error: \r\n`ConnectionError: Couldn't reach https:\/\/amazon-reviews-ml.s3-us-west-2.amazonaws.com\/json\/train\/dataset_fr_train.json\r\n`\r\nI used the following code to load the dataset: \r\n`load_dataset(\r\n dataset_name,\r\n \"all_languages\",\r\n cache_dir=\".data\"\r\n )`\r\n\r\nI am using version 1.1.3 of `datasets`\r\n\r\nNote that I can perform a successfull `wget https:\/\/amazon-reviews-ml.s3-us-west-2.amazonaws.com\/json\/train\/dataset_fr_train.json`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/993\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/992","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/992\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/992\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/992\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/992","id":755124963,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwODk3Njkx","number":992,"title":"Add CAIL 2018 
dataset","user":{"login":"JetRunner","id":22514219,"node_id":"MDQ6VXNlcjIyNTE0MjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22514219?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JetRunner","html_url":"https:\/\/github.com\/JetRunner","followers_url":"https:\/\/api.github.com\/users\/JetRunner\/followers","following_url":"https:\/\/api.github.com\/users\/JetRunner\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JetRunner\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JetRunner\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JetRunner\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JetRunner\/orgs","repos_url":"https:\/\/api.github.com\/users\/JetRunner\/repos","events_url":"https:\/\/api.github.com\/users\/JetRunner\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JetRunner\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606903300000,"updated_at":1606927742000,"closed_at":1606927741000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/992","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/992","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/992.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/992.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/992\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/991","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/991\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/991\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/991\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/991","id":755117902,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwODkyMDk0","number":991,"title":"Adding farsi_news dataset 
(https:\/\/github.com\/sci2lab\/Farsi-datasets)","user":{"login":"Narsil","id":204321,"node_id":"MDQ6VXNlcjIwNDMyMQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/204321?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Narsil","html_url":"https:\/\/github.com\/Narsil","followers_url":"https:\/\/api.github.com\/users\/Narsil\/followers","following_url":"https:\/\/api.github.com\/users\/Narsil\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Narsil\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Narsil\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Narsil\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Narsil\/orgs","repos_url":"https:\/\/api.github.com\/users\/Narsil\/repos","events_url":"https:\/\/api.github.com\/users\/Narsil\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Narsil\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606902739000,"updated_at":1606993286000,"closed_at":1606993286000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/991","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/991","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/991.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/991.patch"},"body":null,"timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/991\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/990","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/990\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/990\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/990\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/990","id":755097798,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwODc1NDYx","number":990,"title":"Add E2E 
NLG","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606901112000,"updated_at":1607000885000,"closed_at":1607000884000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/990","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/990","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/990.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/990.patch"},"body":"Adding the E2E NLG dataset.\r\n\r\nMore info here : http:\/\/www.macs.hw.ac.uk\/InteractionLab\/E2E\/\r\n\r\n### Checkbox\r\n\r\n- [x] Create the dataset script `\/datasets\/my_dataset\/my_dataset.py` using the template\r\n- [x] Fill the `_DESCRIPTION` and `_CITATION` variables\r\n- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`\r\n- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.\r\n- [x] Generate the metadata file `dataset_infos.json` for all configurations\r\n- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)\r\n- [x] Add the dataset card `README.md` using the template and at least fill the tags \r\n- [x] Both tests for the real data and the dummy data pass.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/990\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/989","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/989\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/989\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/989\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/989","id":755079394,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwODYwNDMw","number":989,"title":"Fix SV -> 
NO","user":{"login":"jplu","id":959590,"node_id":"MDQ6VXNlcjk1OTU5MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/959590?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jplu","html_url":"https:\/\/github.com\/jplu","followers_url":"https:\/\/api.github.com\/users\/jplu\/followers","following_url":"https:\/\/api.github.com\/users\/jplu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jplu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jplu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jplu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jplu\/orgs","repos_url":"https:\/\/api.github.com\/users\/jplu\/repos","events_url":"https:\/\/api.github.com\/users\/jplu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jplu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606899599000,"updated_at":1606900701000,"closed_at":1606900694000,"author_association":"COLLABORATOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/989","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/989","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/989.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/989.patch"},"body":"This PR fixes the small typo as seen in #956 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/989\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/988","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/988\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/988\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/988\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/988","id":755069159,"node_id":"MDU6SXNzdWU3NTUwNjkxNTk=","number":988,"title":"making sure datasets are not loaded in memory and distributed training of them","user":{"login":"rabeehk","id":6278280,"node_id":"MDQ6VXNlcjYyNzgyODA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6278280?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rabeehk","html_url":"https:\/\/github.com\/rabeehk","followers_url":"https:\/\/api.github.com\/users\/rabeehk\/followers","following_url":"https:\/\/api.github.com\/users\/rabeehk\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rabeehk\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rabeehk\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rabeehk\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rabeehk\/orgs","repos_url":"https:\/\/api.github.com\/users\/rabeehk\/repos","events_url":"https:\/\/api.github.com\/users\/rabeehk\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rabeehk\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["my implementation of sharding per TPU core: 
https:\/\/github.com\/google-research\/ruse\/blob\/d4dd58a2d8efe0ffb1a9e9e77e3228d6824d3c3c\/seq2seq\/trainers\/t5_trainer.py#L316 \r\nmy implementation of dataloader for this case https:\/\/github.com\/google-research\/ruse\/blob\/d4dd58a2d8efe0ffb1a9e9e77e3228d6824d3c3c\/seq2seq\/tasks\/tasks.py#L496 "],"created_at":1606898715000,"updated_at":1606899034000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI am dealing with large-scale datasets which I need to train distributedly, I used the shard function to divide the dataset across the cores, without any sampler, this does not work for distributed training and does not become any faster than 1 TPU core. 1) how I can make sure data is not loaded in memory 2) in case of distributed training with iterative datasets which measures needs to be taken? Is this all sharding the data only. I was wondering if there can be possibility for me to discuss this with someone with distributed training with iterative datasets using dataset library. thanks ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/988\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/987","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/987\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/987\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/987\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/987","id":755059469,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwODQ0MTQ4","number":987,"title":"Add OPUS DOGC dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["merging since the CI is fixed on 
master"],"created_at":1606897832000,"updated_at":1607088461000,"closed_at":1607088461000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/987","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/987","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/987.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/987.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/987\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/986","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/986\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/986\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/986\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/986","id":755047470,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwODM0MzYx","number":986,"title":"Add SciTLDR Dataset","user":{"login":"Bharat123rox","id":13381361,"node_id":"MDQ6VXNlcjEzMzgxMzYx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13381361?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Bharat123rox","html_url":"https:\/\/github.com\/Bharat123rox","followers_url":"https:\/\/api.github.com\/users\/Bharat123rox\/followers","following_url":"https:\/\/api.github.com\/users\/Bharat123rox\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Bharat123rox\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Bharat123rox\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Bharat123rox\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Bharat123rox\/orgs","repos_url":"https:\/\/api.github.com\/users\/Bharat123rox\/repos","events_url":"https:\/\/api.github.com\/users\/Bharat123rox\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Bharat123rox\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["CI failures seem to be unrelated (related to `norwegian_ner`)\r\n```\r\nFAILED tests\/test_dataset_common.py::RemoteDatasetTest::test_builder_class_norwegian_ner\r\nFAILED tests\/test_dataset_common.py::RemoteDatasetTest::test_builder_configs_norwegian_ner\r\nFAILED tests\/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_norwegian_ner\r\n```","you can just rebase from master to fix the CI :) ","can you just rebase from master before we merge ?","Sorry, the rebase from master went horribly wrong, I guess I'll just open another PR\r\n\r\nClosing this one due to a mistake in rebasing :(","Continued in #1014 "],"created_at":1606896676000,"updated_at":1606934242000,"closed_at":1606932179000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/986","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/986","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/986.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/986.patch"},"body":"Adds the SciTLDR Dataset by AI2\r\nAdded README card with tags to the best of my knowledge\r\n\r\nMulti-target summaries 
or TLDRs of Scientific Documents","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/986\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/985","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/985\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/985\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/985\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/985","id":755020564,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwODEyNTM1","number":985,"title":"Add GAP dataset","user":{"login":"VictorSanh","id":16107619,"node_id":"MDQ6VXNlcjE2MTA3NjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16107619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/VictorSanh","html_url":"https:\/\/github.com\/VictorSanh","followers_url":"https:\/\/api.github.com\/users\/VictorSanh\/followers","following_url":"https:\/\/api.github.com\/users\/VictorSanh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/VictorSanh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/VictorSanh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/VictorSanh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/VictorSanh\/orgs","repos_url":"https:\/\/api.github.com\/users\/VictorSanh\/repos","events_url":"https:\/\/api.github.com\/users\/VictorSanh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/VictorSanh\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This dataset already exists apparently, sorry :\/ \r\nsee\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/gap\/gap.py\r\n\r\nFeel free to re-use the dataset card you did for `\/datasets\/gap`\r\n","oh heck, my bad \ud83e\udd26\u200d\u2642\ufe0f sorry"],"created_at":1606893911000,"updated_at":1606925792000,"closed_at":1606925792000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/985","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/985","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/985.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/985.patch"},"body":"GAP dataset\r\nGender bias coreference resolution","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/985\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/984","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/984\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/984\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/984\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/984","id":755009916,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwODAzNzgw","number":984,"title":"committing Whoa 
file","user":{"login":"StulosDunamos","id":75356780,"node_id":"MDQ6VXNlcjc1MzU2Nzgw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/75356780?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/StulosDunamos","html_url":"https:\/\/github.com\/StulosDunamos","followers_url":"https:\/\/api.github.com\/users\/StulosDunamos\/followers","following_url":"https:\/\/api.github.com\/users\/StulosDunamos\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/StulosDunamos\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/StulosDunamos\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/StulosDunamos\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/StulosDunamos\/orgs","repos_url":"https:\/\/api.github.com\/users\/StulosDunamos\/repos","events_url":"https:\/\/api.github.com\/users\/StulosDunamos\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/StulosDunamos\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["can't find the Whoa file since there' nothing left","The classic `rm -rf` command - nice one"],"created_at":1606892866000,"updated_at":1606925729000,"closed_at":1606923658000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/984","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/984","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/984.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/984.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/984\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/983","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/983\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/983\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/983\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/983","id":754966620,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwNzY4MTMw","number":983,"title":"add mc 
taco","user":{"login":"VictorSanh","id":16107619,"node_id":"MDQ6VXNlcjE2MTA3NjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16107619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/VictorSanh","html_url":"https:\/\/github.com\/VictorSanh","followers_url":"https:\/\/api.github.com\/users\/VictorSanh\/followers","following_url":"https:\/\/api.github.com\/users\/VictorSanh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/VictorSanh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/VictorSanh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/VictorSanh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/VictorSanh\/orgs","repos_url":"https:\/\/api.github.com\/users\/VictorSanh\/repos","events_url":"https:\/\/api.github.com\/users\/VictorSanh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/VictorSanh\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606888495000,"updated_at":1606923467000,"closed_at":1606923466000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/983","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/983","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/983.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/983.patch"},"body":"MC-TACO\r\nTemporal commonsense knowledge","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/983\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/982","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/982\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/982\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/982\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/982","id":754946337,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwNzUxMzYx","number":982,"title":"add prachathai67k 
take2","user":{"login":"cstorm125","id":15519308,"node_id":"MDQ6VXNlcjE1NTE5MzA4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15519308?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cstorm125","html_url":"https:\/\/github.com\/cstorm125","followers_url":"https:\/\/api.github.com\/users\/cstorm125\/followers","following_url":"https:\/\/api.github.com\/users\/cstorm125\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cstorm125\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cstorm125\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cstorm125\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cstorm125\/orgs","repos_url":"https:\/\/api.github.com\/users\/cstorm125\/repos","events_url":"https:\/\/api.github.com\/users\/cstorm125\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cstorm125\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606885921000,"updated_at":1606904291000,"closed_at":1606904291000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/982","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/982","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/982.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/982.patch"},"body":"I decided it will be faster to create a new pull request instead of fixing the rebase issues.\r\ncontinuing from https:\/\/github.com\/huggingface\/datasets\/pull\/954\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/982\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/981","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/981\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/981\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/981\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/981","id":754937612,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwNzQ0MTYx","number":981,"title":"add wisesight_sentiment 
take2","user":{"login":"cstorm125","id":15519308,"node_id":"MDQ6VXNlcjE1NTE5MzA4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15519308?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cstorm125","html_url":"https:\/\/github.com\/cstorm125","followers_url":"https:\/\/api.github.com\/users\/cstorm125\/followers","following_url":"https:\/\/api.github.com\/users\/cstorm125\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cstorm125\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cstorm125\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cstorm125\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cstorm125\/orgs","repos_url":"https:\/\/api.github.com\/users\/cstorm125\/repos","events_url":"https:\/\/api.github.com\/users\/cstorm125\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cstorm125\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606884659000,"updated_at":1606905433000,"closed_at":1606905433000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/981","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/981","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/981.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/981.patch"},"body":"Take 2 since last time the rebase issues were taking me too much time to fix as opposed to just open a new one.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/981\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/980","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/980\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/980\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/980\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/980","id":754899301,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwNzEzNjY3","number":980,"title":"Wongnai - Thai reviews dataset","user":{"login":"mapmeld","id":643918,"node_id":"MDQ6VXNlcjY0MzkxOA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/643918?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mapmeld","html_url":"https:\/\/github.com\/mapmeld","followers_url":"https:\/\/api.github.com\/users\/mapmeld\/followers","following_url":"https:\/\/api.github.com\/users\/mapmeld\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mapmeld\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mapmeld\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mapmeld\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mapmeld\/orgs","repos_url":"https:\/\/api.github.com\/users\/mapmeld\/repos","events_url":"https:\/\/api.github.com\/users\/mapmeld\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mapmeld\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thank you for contributing a Thai dataset, 
@mapmeld ! I'm super hyped. \r\nOne comment I may add is that wongnai-corpus has two datasets: review classification (this) and word tokenization (https:\/\/github.com\/wongnai\/wongnai-corpus\/blob\/master\/search\/labeled_queries_by_judges.txt).\r\nWould it be possible for you to rename this one something along the line of `wongnai-reviews` so that when\/if we include the word tokenization dataset, we will know which is which.\r\n\r\nThis helps solve my check_code_quality issue.\r\n```\r\nmake style\r\nblack --line-length 119 --target-version py36 datasets\/wongnai\r\nflake8 datasets\/wongnai\r\nisort datasets\/wongnai\/wongnai.py\r\n```","@cstorm125 thanks! following your suggestions on formatting and on naming the dataset\r\n\r\nI am writing a blog post about Thai NLP and transformers (example: mBERT does 1-2 character tokens instead of doing word segmentation), started adding this dataset to use as an example, and then saw you were adding other datasets. Great work! And if you know any Thai BERT models beyond https:\/\/github.com\/ThAIKeras\/bert we should maybe talk over email!"],"created_at":1606879208000,"updated_at":1606923281000,"closed_at":1606923005000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/980","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/980","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/980.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/980.patch"},"body":"40,000 reviews, previously released on GitHub ( https:\/\/github.com\/wongnai\/wongnai-corpus ) with an LGPL license, and on a closed Kaggle competition ( https:\/\/www.kaggle.com\/c\/wongnai-challenge-review-rating-prediction\/ )","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/980\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/979","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/979\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/979\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/979\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/979","id":754893337,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwNzA4OTA5","number":979,"title":"[WIP] Add multi 
woz","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606878342000,"updated_at":1606925236000,"closed_at":1606925236000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/979","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/979","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/979.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/979.patch"},"body":"This PR adds version 2.2 of the Multi-domain Wizard of OZ dataset: https:\/\/github.com\/budzianowski\/multiwoz\/tree\/master\/data\/MultiWOZ_2.2\r\n\r\nIt was a pretty big chunk of work to figure out the structure, so I stil have tol add the description to the README.md\r\n\r\nOn the plus side the structure is broadly similar to that of the Google Schema Guided dialogue [dataset](https:\/\/github.com\/google-research-datasets\/dstc8-schema-guided-dialogue), so will take care of that one next.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/979\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/978","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/978\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/978\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/978\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/978","id":754854478,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwNjc4NTUy","number":978,"title":"Add code 
refinement","user":{"login":"reshinthadithyan","id":36307201,"node_id":"MDQ6VXNlcjM2MzA3MjAx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36307201?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/reshinthadithyan","html_url":"https:\/\/github.com\/reshinthadithyan","followers_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/followers","following_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/orgs","repos_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/repos","events_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Also cc @madlag since I recall you wanted to work on CodeXGlue as well ?","Yes, sorry I did not see earlier your message. I added 34 on the 35 datasets in CodeXGlue, tomorrow I will wrap it up, and so I will remove my version for code_refinement. Maybe we can just have a renaming after the merge, to have a consistent naming with all the other codexglue datasets ? What do you think @reshinthadithyan ?","> Yes, sorry I did not see earlier your message. I added 34 on the 35 datasets in CodeXGlue, tomorrow I will wrap it up, and so I will remove my version for code_refinement. Maybe we can just have a renaming after the merge, to have a consistent naming with all the other codexglue datasets ? What do you think @reshinthadithyan ?\r\n\r\nHello @madlag, I think you can retain that in your script. Let's stick onto the same file like how Glue is maintained.","Hi @reshinthadithyan ! Are you still working on this version of the dataset or are we going with @madlag 's only ?","> Hi @reshinthadithyan ! Are you still working on this version of the dataset or are we going with @madlag 's only ?\r\n\r\nHello, yes. We are going with Madlag's"],"created_at":1606872598000,"updated_at":1607305978000,"closed_at":1607305978000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/978","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/978","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/978.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/978.patch"},"body":"### OVERVIEW\r\nMillions of open-source projects with numerous bug fixes\r\nare available in code repositories. 
This proliferation\r\nof software development histories can be leveraged to\r\nlearn how to fix common programming bugs\r\nCode refinement aims to automatically fix bugs in the code,\r\nwhich can contribute to reducing the cost of bug-fixes for developers.\r\nGiven a piece of Java code with bugs,\r\nthe task is to remove the bugs to output the refined code.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/978\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/977","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/977\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/977\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/977\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/977","id":754839594,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwNjY2ODg3","number":977,"title":"Add ROPES dataset","user":{"login":"VictorSanh","id":16107619,"node_id":"MDQ6VXNlcjE2MTA3NjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16107619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/VictorSanh","html_url":"https:\/\/github.com\/VictorSanh","followers_url":"https:\/\/api.github.com\/users\/VictorSanh\/followers","following_url":"https:\/\/api.github.com\/users\/VictorSanh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/VictorSanh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/VictorSanh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/VictorSanh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/VictorSanh\/orgs","repos_url":"https:\/\/api.github.com\/users\/VictorSanh\/repos","events_url":"https:\/\/api.github.com\/users\/VictorSanh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/VictorSanh\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606870330000,"updated_at":1606906716000,"closed_at":1606906715000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/977","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/977","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/977.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/977.patch"},"body":"ROPES dataset \r\nReasoning over paragraph effects in situations - testing a system's ability to apply knowledge from a passage of text to a new situation. 
The task is framed into a reading comprehension task following squad-style extractive qa.\r\n\r\nOne thing to note: labels of the test set are hidden (leaderboard submission) so I encoded that as an empty list (ropes.py:L125)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/977\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/976","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/976\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/976\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/976\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/976","id":754826146,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwNjU1NzM5","number":976,"title":"Arabic pos dialect","user":{"login":"mcmillanmajora","id":26722925,"node_id":"MDQ6VXNlcjI2NzIyOTI1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26722925?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mcmillanmajora","html_url":"https:\/\/github.com\/mcmillanmajora","followers_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/followers","following_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/orgs","repos_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/repos","events_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["looks like this PR includes changes about many other files than the oens for Araboc POS Dialect\r\n\r\nCan you create a another branch and another PR please ?","Sorry! I'm not sure how I managed to do that. I'll make a new branch."],"created_at":1606868473000,"updated_at":1607535032000,"closed_at":1607535032000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/976","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/976","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/976.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/976.patch"},"body":"A README.md and loading script for the Arabic POS Dialect dataset. The README is missing the sections on personal information, biases, and limitations, as it would probably be better for those to be filled by someone who can read the contents of the dataset and is familiar with Arabic NLP. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/976\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/975","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/975\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/975\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/975\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/975","id":754823701,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwNjUzNjg4","number":975,"title":"add MeTooMA dataset","user":{"login":"akash418","id":23264033,"node_id":"MDQ6VXNlcjIzMjY0MDMz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23264033?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/akash418","html_url":"https:\/\/github.com\/akash418","followers_url":"https:\/\/api.github.com\/users\/akash418\/followers","following_url":"https:\/\/api.github.com\/users\/akash418\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/akash418\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/akash418\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/akash418\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/akash418\/orgs","repos_url":"https:\/\/api.github.com\/users\/akash418\/repos","events_url":"https:\/\/api.github.com\/users\/akash418\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/akash418\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606868155000,"updated_at":1606906736000,"closed_at":1606906735000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/975","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/975","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/975.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/975.patch"},"body":"This PR adds the #MeToo MA dataset. It presents multi-label data points for tweets mined in the backdrop of the #MeToo movement. The dataset includes data points in the form of Tweet ids and appropriate labels. 
Please refer to the accompanying paper for detailed information regarding annotation, collection, and guidelines.\r\n\r\nPaper: https:\/\/ojs.aaai.org\/index.php\/ICWSM\/article\/view\/7292\r\nDataset Link: https:\/\/dataverse.harvard.edu\/dataset.xhtml?persistentId=doi:10.7910\/DVN\/JN4EYU\r\n\r\n\r\n---\r\nannotations_creators:\r\n- expert-generated\r\nlanguage_creators:\r\n- found\r\nlanguages:\r\n- en\r\nmultilinguality:\r\n- monolingual\r\nsize_categories:\r\n- 1K<n<10K\r\nsource_datasets:\r\n- original\r\ntask_categories:\r\n- text-classification\r\n- text-retrieval\r\ntask_ids:\r\n- multi-class-classification\r\n- multi-label-classification\r\n---\r\n\r\n# Dataset Card for #MeTooMA dataset\r\n\r\n## Table of Contents\r\n- [Dataset Description](#dataset-description)\r\n - [Dataset Summary](#dataset-summary)\r\n - [Supported Tasks](#supported-tasks-and-leaderboards)\r\n - [Languages](#languages)\r\n- [Dataset Structure](#dataset-structure)\r\n - [Data Instances](#data-instances)\r\n - [Data Fields](#data-instances)\r\n - [Data Splits](#data-instances)\r\n- [Dataset Creation](#dataset-creation)\r\n - [Curation Rationale](#curation-rationale)\r\n - [Source Data](#source-data)\r\n - [Annotations](#annotations)\r\n - [Personal and Sensitive Information](#personal-and-sensitive-information)\r\n- [Considerations for Using the Data](#considerations-for-using-the-data)\r\n - [Social Impact of Dataset](#social-impact-of-dataset)\r\n - [Discussion of Biases](#discussion-of-biases)\r\n - [Other Known Limitations](#other-known-limitations)\r\n- [Additional Information](#additional-information)\r\n - [Dataset Curators](#dataset-curators)\r\n - [Licensing Information](#licensing-information)\r\n - [Citation Information](#citation-information)\r\n\r\n## Dataset Description\r\n\r\n- **Homepage:** https:\/\/dataverse.harvard.edu\/dataset.xhtml?persistentId=doi:10.7910\/DVN\/JN4EYU\r\n- **Paper:** https:\/\/ojs.aaai.org\/\/index.php\/ICWSM\/article\/view\/7292\r\n- **Point of Contact:** https:\/\/github.com\/midas-research\/MeTooMA\r\n\r\n\r\n### Dataset Summary\r\n\r\n- The dataset consists of tweets belonging to #MeToo movement on Twitter, labeled into different categories.\r\n- This dataset includes more data points and has more labels than any of the previous datasets that contain social media\r\nposts about sexual abuse disclosures. 
Please refer to the Related Datasets of the publication for detailed information about this.\r\n- Due to Twitter's development policies, the authors provide only the tweet IDs and corresponding labels,\r\nother data can be fetched via Twitter API.\r\n- The data has been labeled by experts, with the majority taken into the account for deciding the final label.\r\n- The authors provide these labels for each of the tweets.\r\n - Relevance\r\n - Directed Hate\r\n - Generalized Hate\r\n - Sarcasm\r\n - Allegation\r\n - Justification\r\n - Refutation\r\n - Support\r\n - Oppose\r\n- The definitions for each task\/label are in the main publication.\r\n- Please refer to the accompanying paper https:\/\/aaai.org\/ojs\/index.php\/ICWSM\/article\/view\/7292 for statistical analysis on the textual data\r\nextracted from this dataset.\r\n- The language of all the tweets in this dataset is English\r\n- Time period: October 2018 - December 2018\r\n- Suggested Use Cases of this dataset:\r\n - Evaluating usage of linguistic acts such as hate-speech and sarcasm in the context of public sexual abuse disclosures.\r\n - Extracting actionable insights and virtual dynamics of gender roles in sexual abuse revelations.\r\n - Identifying how influential people were portrayed on the public platform in the\r\n events of mass social movements.\r\n - Polarization analysis based on graph simulations of social nodes of users involved\r\n in the #MeToo movement.\r\n\r\n\r\n### Supported Tasks and Leaderboards\r\n\r\nMulti-Label and Multi-Class Classification\r\n\r\n### Languages\r\n\r\nEnglish\r\n\r\n## Dataset Structure\r\n- The dataset is structured into CSV format with TweetID and accompanying labels.\r\n- Train and Test sets are split into respective files.\r\n\r\n### Data Instances\r\n\r\nTweet ID and the appropriate labels\r\n\r\n### Data Fields\r\n\r\nTweet ID and appropriate labels (binary label applicable for a data point) and multiple labels for each Tweet ID\r\n\r\n### Data Splits\r\n\r\n- Train: 7979\r\n- Test: 1996\r\n\r\n## Dataset Creation\r\n\r\n### Curation Rationale\r\n\r\n- Twitter was the major source of all the public disclosures of sexual abuse incidents during the #MeToo movement.\r\n- People expressed their opinions over issues that were previously missing from the social media space.\r\n- This provides an option to study the linguistic behaviors of social media users in an informal setting,\r\ntherefore the authors decide to curate this annotated dataset.\r\n- The authors expect this dataset would be of great interest and use to both computational and socio-linguists.\r\n- For computational linguists, it provides an opportunity to model three new complex dialogue acts (allegation, refutation, and justification) and also to study how these acts interact with some of the other linguistic components like stance, hate, and sarcasm. 
For socio-linguists, it provides an opportunity to explore how a movement manifests in social media.\r\n\r\n\r\n### Source Data\r\n- Source of all the data points in this dataset is a Twitter social media platform.\r\n\r\n#### Initial Data Collection and Normalization\r\n\r\n- All the tweets are mined from Twitter with initial search parameters identified using keywords from the #MeToo movement.\r\n- Redundant keywords were removed based on manual inspection.\r\n- Public streaming APIs of Twitter was used for querying with the selected keywords.\r\n- Based on text de-duplication and cosine similarity score, the set of tweets were pruned.\r\n- Non-English tweets were removed.\r\n- The final set was labeled by experts with the majority label taken into the account for deciding the final label.\r\n- Please refer to this paper for detailed information: https:\/\/ojs.aaai.org\/\/index.php\/ICWSM\/article\/view\/7292\r\n\r\n#### Who are the source language producers?\r\n\r\nPlease refer to this paper for detailed information: https:\/\/ojs.aaai.org\/\/index.php\/ICWSM\/article\/view\/7292\r\n\r\n### Annotations\r\n\r\n#### Annotation process\r\n\r\n- The authors chose against crowdsourcing for labeling this dataset due to its highly sensitive nature.\r\n- The annotators are domain experts having degrees in advanced clinical psychology and gender studies.\r\n- They were provided a guidelines document with instructions about each task and its definitions, labels, and examples.\r\n- They studied the document, worked on a few examples to get used to this annotation task.\r\n- They also provided feedback for improving the class definitions.\r\n- The annotation process is not mutually exclusive, implying that the presence of one label does not mean the\r\nabsence of the other one.\r\n\r\n\r\n#### Who are the annotators?\r\n\r\n- The annotators are domain experts having a degree in clinical psychology and gender studies.\r\n- Please refer to the accompanying paper for a detailed annotation process.\r\n\r\n### Personal and Sensitive Information\r\n\r\n- Considering Twitter's policy for distribution of data, only Tweet ID and applicable labels are shared for public use.\r\n- It is highly encouraged to use this dataset for scientific purposes only.\r\n- This dataset collection completely follows the Twitter mandated guidelines for distribution and usage.\r\n\r\n## Considerations for Using the Data\r\n\r\n### Social Impact of Dataset\r\n\r\n- The authors of this dataset do not intend to conduct a population-centric analysis of the #MeToo movement on Twitter.\r\n- The authors acknowledge that findings from this dataset cannot be used as-is for any direct social intervention, these\r\nshould be used to assist already existing human intervention tools and therapies.\r\n- Enough care has been taken to ensure that this work comes off as trying to target a specific person for their\r\nthe personal stance of issues pertaining to the #MeToo movement.\r\n- The authors of this work do not aim to vilify anyone accused in the #MeToo movement in any manner.\r\n- Please refer to the ethics and discussion section of the mentioned publication for appropriate sharing of this dataset\r\nand the social impact of this work.\r\n\r\n\r\n### Discussion of Biases\r\n\r\n- The #MeToo movement acted as a catalyst for implementing social policy changes to benefit the members of\r\nthe community affected by sexual abuse.\r\n- Any work undertaken on this dataset should aim to minimize the bias against minority groups which\r\nmight 
amplify in cases of a sudden outburst of public reactions over sensitive social media discussions.\r\n\r\n### Other Known Limitations\r\n\r\n- Considering privacy concerns, social media practitioners should be aware of making automated interventions\r\nto aid the victims of sexual abuse as some people might not prefer to disclose their notions.\r\n- Concerned social media users might also repeal their social information if they found out that their\r\ninformation is being used for computational purposes, hence it is important to seek subtle individual consent\r\nbefore trying to profile authors involved in online discussions to uphold personal privacy.\r\n\r\n## Additional Information\r\n\r\nPlease refer to this link: https:\/\/dataverse.harvard.edu\/dataset.xhtml?persistentId=doi:10.7910\/DVN\/JN4EYU\r\n\r\n### Dataset Curators\r\n\r\n- If you use the corpus in a product or application, then please credit the authors\r\nand [Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi]\r\n(http:\/\/midas.iiitd.edu.in) appropriately.\r\nAlso, if you send us an email, we will be thrilled to know about how you have used the corpus.\r\n- If interested in the commercial use of the corpus, send an email to midas@iiitd.ac.in.\r\n- Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India\r\ndisclaims any responsibility for the use of the corpus and does not provide technical support.\r\nHowever, the contact listed above will be happy to respond to queries and clarifications\r\n- Please feel free to send us an email:\r\n - with feedback regarding the corpus.\r\n - with information on how you have used the corpus.\r\n - if interested in having us analyze your social media data.\r\n - if interested in a collaborative research project.\r\n\r\n### Licensing Information\r\n\r\n[More Information Needed]\r\n\r\n### Citation Information\r\n\r\nPlease cite the following publication if you make use of the dataset: https:\/\/ojs.aaai.org\/index.php\/ICWSM\/article\/view\/7292\r\n\r\n```\r\n\r\n@article{Gautam_Mathur_Gosangi_Mahata_Sawhney_Shah_2020, title={#MeTooMA: Multi-Aspect Annotations of Tweets Related to the MeToo Movement}, volume={14}, url={https:\/\/aaai.org\/ojs\/index.php\/ICWSM\/article\/view\/7292}, abstractNote={<p>In this paper, we present a dataset containing 9,973 tweets related to the MeToo movement that were manually annotated for five different linguistic aspects: relevance, stance, hate speech, sarcasm, and dialogue acts. We present a detailed account of the data collection and annotation processes. The annotations have a very high inter-annotator agreement (0.79 to 0.93 k-alpha) due to the domain expertise of the annotators and clear annotation instructions. We analyze the data in terms of geographical distribution, label correlations, and keywords. Lastly, we present some potential use cases of this dataset. 
We expect this dataset would be of great interest to psycholinguists, socio-linguists, and computational linguists to study the discursive space of digitally mobilized social movements on sensitive issues like sexual harassment.<\/p&gt;}, number={1}, journal={Proceedings of the International AAAI Conference on Web and Social Media}, author={Gautam, Akash and Mathur, Puneet and Gosangi, Rakesh and Mahata, Debanjan and Sawhney, Ramit and Shah, Rajiv Ratn}, year={2020}, month={May}, pages={209-216} }\r\n\r\n```\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/975\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/974","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/974\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/974\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/974\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/974","id":754811185,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwNjQzNzQ3","number":974,"title":"Add MeTooMA Dataset","user":{"login":"akash418","id":23264033,"node_id":"MDQ6VXNlcjIzMjY0MDMz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23264033?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/akash418","html_url":"https:\/\/github.com\/akash418","followers_url":"https:\/\/api.github.com\/users\/akash418\/followers","following_url":"https:\/\/api.github.com\/users\/akash418\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/akash418\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/akash418\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/akash418\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/akash418\/orgs","repos_url":"https:\/\/api.github.com\/users\/akash418\/repos","events_url":"https:\/\/api.github.com\/users\/akash418\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/akash418\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606866241000,"updated_at":1606867078000,"closed_at":1606867078000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/974","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/974","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/974.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/974.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/974\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/973","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/973\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/973\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/973\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/973","id":754807963,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwNjQxMTky","number":973,"title":"Adding The Microsoft Terminology Collection dataset.","user":{"login":"leoxzhao","id":7915719,"node_id":"MDQ6VXNlcjc5MTU3MTk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7915719?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/leoxzhao","html_url":"https:\/\/github.com\/leoxzhao","followers_url":"https:\/\/api.github.com\/users\/leoxzhao\/followers","following_url":"https:\/\/api.github.com\/users\/leoxzhao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/leoxzhao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/leoxzhao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/leoxzhao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/leoxzhao\/orgs","repos_url":"https:\/\/api.github.com\/users\/leoxzhao\/repos","events_url":"https:\/\/api.github.com\/users\/leoxzhao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/leoxzhao\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I have to manually copy a dataset_infos.json file from other dataset and modify it since the `datasets-cli` isn't able to handle manually downloaded datasets yet (as far as I know).","you can generate the dataset_infos.json file even for dataset with manual data\r\nTo do so just specify `--data_dir <path\/to\/the\/folder\/containing\/the\/manual\/data>`","Also, dummy_data seems having difficulty to handle manually downloaded datasets. `python datasets-cli dummy_data datasets\/ms_terms --data_dir ...` reported `error: unrecognized arguments: --data_dir` error. Without `--data_dir`, it reported this error:\r\n```\r\nDataset ms_terms with config BuilderConfig(name='ms_terms-full', version=1.0.0, data_dir=None, data_files=None, description='...\\n') seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has to be created with less guidance. Make sure you create the file None.\r\nTraceback (most recent call last):\r\n File \"datasets-cli\", line 36, in <module>\r\n service.run()\r\n File \"\/Users\/lzhao\/Downloads\/huggingface\/datasets\/src\/datasets\/commands\/dummy_data.py\", line 326, in run\r\n dataset_builder=dataset_builder, mock_dl_manager=mock_dl_manager\r\n File \"\/Users\/lzhao\/Downloads\/huggingface\/datasets\/src\/datasets\/commands\/dummy_data.py\", line 406, in _print_dummy_data_instructions\r\n for split in generator_splits:\r\nUnboundLocalError: local variable 'generator_splits' referenced before assignment\r\n```","Oh yes `--data_dir` seems to only be supported for the `datasets_cli test` command. 
Sorry about that.\r\n\r\nCan you try to build the dummy_data.zip file manually ?\r\n\r\nIt has to be inside `.\/datasets\/ms_terms\/dummy\/ms_terms-full\/1.0.0`.\r\nInside this folder, please create a folder `dummy_data` that contains a dummy file `MicrosoftTermCollection.tbx` (with just a few examples in it). Then you can zip the `dummy_data` folder to `dummy_data.zip`\r\n\r\nThen you can check if it worked using the command\r\n```\r\npytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_ms_terms\r\n```\r\n\r\nFeel free to use some debugging print statements in your script if it doesn't work first try to see what `dl_manager.manual_dir` ends up being and also `path_to_manual_file`.\r\n\r\nFeel free to ping me if you have other questions","`pytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_ms_terms` gave `1 passed, 4 warnings in 8.13s`. Existing datasets, like `wikihow`, and `newsroom`, also report 4 warnings. So, I guess that is not related to this dataset.","Could you run `make style` before we merge @leoxzhao ?","the other errors are fixed on master so it's fine","> Could you run `make style` before we merge @leoxzhao ?\r\n\r\nSure thing. Done. Thanks Quentin. I have other datasets in mind. All of which requires manual download. This process is very helpful","Thank you :) "],"created_at":1606865783000,"updated_at":1607095544000,"closed_at":1607094766000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/973","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/973","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/973.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/973.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/973\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/972","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/972\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/972\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/972\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/972","id":754787314,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwNjI0NTI3","number":972,"title":"Add Children's Book Test (CBT) 
dataset","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq,\r\n\r\nI guess this PR can be closed since we merged #2044?\r\n\r\nI have used the same link for the homepage, as it is where the dataset is provided, hope that is okay?","Closing in favor of #2044, thanks again :)\r\n\r\n> I have used the same link for the homepage, as it is where the dataset is provided, hope that is okay?\r\n\r\nYea it's ok actually, at that time I thought there was another homepage for this dataset"],"created_at":1606863206000,"updated_at":1616153403000,"closed_at":1616153403000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/972","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/972","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/972.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/972.patch"},"body":"Add the Children's Book Test (CBT) from Facebook (Hill et al. 
2016).\r\n\r\nSentence completion given a few sentences as context from a children's book.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/972\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/971","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/971\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/971\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/971\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/971","id":754784041,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwNjIxOTQz","number":971,"title":"add piqa","user":{"login":"VictorSanh","id":16107619,"node_id":"MDQ6VXNlcjE2MTA3NjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16107619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/VictorSanh","html_url":"https:\/\/github.com\/VictorSanh","followers_url":"https:\/\/api.github.com\/users\/VictorSanh\/followers","following_url":"https:\/\/api.github.com\/users\/VictorSanh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/VictorSanh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/VictorSanh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/VictorSanh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/VictorSanh\/orgs","repos_url":"https:\/\/api.github.com\/users\/VictorSanh\/repos","events_url":"https:\/\/api.github.com\/users\/VictorSanh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/VictorSanh\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606862824000,"updated_at":1606903082000,"closed_at":1606903081000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/971","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/971","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/971.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/971.patch"},"body":"Physical Interaction: Question Answering (commonsense)\r\nhttps:\/\/yonatanbisk.com\/piqa\/","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/971\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/970","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/970\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/970\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/970\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/970","id":754697489,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwNTUxNTkz","number":970,"title":"Add 
SWAG","user":{"login":"VictorSanh","id":16107619,"node_id":"MDQ6VXNlcjE2MTA3NjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16107619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/VictorSanh","html_url":"https:\/\/github.com\/VictorSanh","followers_url":"https:\/\/api.github.com\/users\/VictorSanh\/followers","following_url":"https:\/\/api.github.com\/users\/VictorSanh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/VictorSanh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/VictorSanh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/VictorSanh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/VictorSanh\/orgs","repos_url":"https:\/\/api.github.com\/users\/VictorSanh\/repos","events_url":"https:\/\/api.github.com\/users\/VictorSanh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/VictorSanh\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606854065000,"updated_at":1606902916000,"closed_at":1606902915000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/970","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/970","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/970.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/970.patch"},"body":"Commonsense NLI -> https:\/\/rowanzellers.com\/swag\/","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/970\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/969","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/969\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/969\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/969\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/969","id":754681940,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwNTM4ODQz","number":969,"title":"Add wiki auto 
dataset","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606852691000,"updated_at":1606925954000,"closed_at":1606925954000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/969","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/969","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/969.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/969.patch"},"body":"This PR adds the WikiAuto sentence simplification dataset\r\n\r\nhttps:\/\/github.com\/chaojiang06\/wiki-auto\r\n\r\nThis is also a prospective GEM task, hence the README.md","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/969\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/968","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/968\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/968\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/968\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/968","id":754659015,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwNTIwMjEz","number":968,"title":"ADD Afrikaans 
NER","user":{"login":"yvonnegitau","id":7923902,"node_id":"MDQ6VXNlcjc5MjM5MDI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7923902?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yvonnegitau","html_url":"https:\/\/github.com\/yvonnegitau","followers_url":"https:\/\/api.github.com\/users\/yvonnegitau\/followers","following_url":"https:\/\/api.github.com\/users\/yvonnegitau\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yvonnegitau\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yvonnegitau\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yvonnegitau\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yvonnegitau\/orgs","repos_url":"https:\/\/api.github.com\/users\/yvonnegitau\/repos","events_url":"https:\/\/api.github.com\/users\/yvonnegitau\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yvonnegitau\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["One trick if you want to add other datasets: consider running these commands each time you want to add a new dataset\r\n```\r\ngit checkout master\r\ngit fetch upstream\r\ngit rebase upstream\/master\r\ngit checkout -b add-<my_dataset_name>\r\n```"],"created_at":1606850583000,"updated_at":1606902088000,"closed_at":1606902088000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/968","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/968","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/968.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/968.patch"},"body":"Afrikaans NER corpus","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/968\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/967","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/967\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/967\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/967\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/967","id":754578988,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwNDU0OTI3","number":967,"title":"Add CS Restaurants 
dataset","user":{"login":"TevenLeScao","id":26709476,"node_id":"MDQ6VXNlcjI2NzA5NDc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26709476?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/TevenLeScao","html_url":"https:\/\/github.com\/TevenLeScao","followers_url":"https:\/\/api.github.com\/users\/TevenLeScao\/followers","following_url":"https:\/\/api.github.com\/users\/TevenLeScao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/TevenLeScao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/TevenLeScao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/TevenLeScao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/TevenLeScao\/orgs","repos_url":"https:\/\/api.github.com\/users\/TevenLeScao\/repos","events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Oh yeah, for some reason I thought you had to do it after the merge, I'll get on it","Weird, now the CI seems to fail because of other datasets (XGLUE, Norwegian_NER)","Yea you just need to rebase from master","Re-opening a PR without the messed-up rebase"],"created_at":1606843057000,"updated_at":1606931864000,"closed_at":1606931845000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/967","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/967","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/967.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/967.patch"},"body":"This PR adds the Czech restaurants dataset for Czech NLG.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/967\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/966","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/966\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/966\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/966\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/966","id":754558686,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwNDM4NDE4","number":966,"title":"Add CLINC150 
Dataset","user":{"login":"sumanthd17","id":28291870,"node_id":"MDQ6VXNlcjI4MjkxODcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28291870?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sumanthd17","html_url":"https:\/\/github.com\/sumanthd17","followers_url":"https:\/\/api.github.com\/users\/sumanthd17\/followers","following_url":"https:\/\/api.github.com\/users\/sumanthd17\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sumanthd17\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sumanthd17\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sumanthd17\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sumanthd17\/orgs","repos_url":"https:\/\/api.github.com\/users\/sumanthd17\/repos","events_url":"https:\/\/api.github.com\/users\/sumanthd17\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sumanthd17\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Looks like your PR now shows changes in many other files than the ones for CLINC150.\r\nFeel free to create another branch and another PR","created new [PR](https:\/\/github.com\/huggingface\/datasets\/pull\/1016)\r\n\r\nclosing this!"],"created_at":1606841413000,"updated_at":1606934743000,"closed_at":1606934730000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/966","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/966","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/966.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/966.patch"},"body":"Added CLINC150 Dataset. 
The link to the dataset can be found [here](https:\/\/github.com\/clinc\/oos-eval) and the paper can be found [here](https:\/\/www.aclweb.org\/anthology\/D19-1131.pdf)\r\n\r\n- [x] Followed the instructions in CONTRIBUTING.md\r\n- [x] Ran the tests successfully\r\n- [x] Created the dummy data","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/966\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/965","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/965\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/965\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/965\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/965","id":754553169,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwNDMzODQ2","number":965,"title":"Add CLINC150 Dataset","user":{"login":"sumanthd17","id":28291870,"node_id":"MDQ6VXNlcjI4MjkxODcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28291870?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sumanthd17","html_url":"https:\/\/github.com\/sumanthd17","followers_url":"https:\/\/api.github.com\/users\/sumanthd17\/followers","following_url":"https:\/\/api.github.com\/users\/sumanthd17\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sumanthd17\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sumanthd17\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sumanthd17\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sumanthd17\/orgs","repos_url":"https:\/\/api.github.com\/users\/sumanthd17\/repos","events_url":"https:\/\/api.github.com\/users\/sumanthd17\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sumanthd17\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606840980000,"updated_at":1606841476000,"closed_at":1606841355000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/965","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/965","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/965.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/965.patch"},"body":"Added CLINC150 Dataset. 
The link to the dataset can be found [here](https:\/\/github.com\/clinc\/oos-eval) and the paper can be found [here](https:\/\/www.aclweb.org\/anthology\/D19-1131.pdf)\r\n\r\n- [x] Followed the instructions in CONTRIBUTING.md\r\n- [x] Ran the tests successfully\r\n- [x] Created the dummy data","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/965\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/964","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/964\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/964\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/964\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/964","id":754474660,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwMzY4OTAy","number":964,"title":"Adding the WebNLG dataset","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This is task is part of the GEM suite so will actually need a more complete dataset card. 
I'm taking a break for now though and will get back to it before merging :) "],"created_at":1606835123000,"updated_at":1606930445000,"closed_at":1606930445000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/964","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/964","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/964.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/964.patch"},"body":"This PR adds data from the WebNLG challenge, with one configuration per release and challenge iteration.\r\n\r\nMore information can be found [here](https:\/\/webnlg-challenge.loria.fr\/)\r\n\r\nUnfortunately, the data itself comes from a pretty large number of small XML files, so the dummy data ends up being quite large (8.4 MB even keeping only one example per file).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/964\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/963","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/963\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/963\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/963\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/963","id":754451234,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwMzQ5NjQ4","number":963,"title":"add CODAH dataset","user":{"login":"patil-suraj","id":27137566,"node_id":"MDQ6VXNlcjI3MTM3NTY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27137566?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patil-suraj","html_url":"https:\/\/github.com\/patil-suraj","followers_url":"https:\/\/api.github.com\/users\/patil-suraj\/followers","following_url":"https:\/\/api.github.com\/users\/patil-suraj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patil-suraj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patil-suraj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patil-suraj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patil-suraj\/orgs","repos_url":"https:\/\/api.github.com\/users\/patil-suraj\/repos","events_url":"https:\/\/api.github.com\/users\/patil-suraj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patil-suraj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606833425000,"updated_at":1606916758000,"closed_at":1606915285000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/963","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/963","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/963.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/963.patch"},"body":"Adding CODAH dataset.\r\n\r\nMore info:\r\nhttps:\/\/github.com\/Websail-NU\/CODAH","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/963\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/962","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/962\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/962\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/962\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/962","id":754441428,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwMzQxMDA2","number":962,"title":"Add Danish Political Comments Dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606832912000,"updated_at":1606991515000,"closed_at":1606991514000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/962","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/962","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/962.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/962.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/962\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/961","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/961\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/961\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/961\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/961","id":754434398,"node_id":"MDU6SXNzdWU3NTQ0MzQzOTg=","number":961,"title":"sample multiple datasets 
","user":{"login":"rabeehk","id":6278280,"node_id":"MDQ6VXNlcjYyNzgyODA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6278280?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rabeehk","html_url":"https:\/\/github.com\/rabeehk","followers_url":"https:\/\/api.github.com\/users\/rabeehk\/followers","following_url":"https:\/\/api.github.com\/users\/rabeehk\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rabeehk\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rabeehk\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rabeehk\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rabeehk\/orgs","repos_url":"https:\/\/api.github.com\/users\/rabeehk\/repos","events_url":"https:\/\/api.github.com\/users\/rabeehk\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rabeehk\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["here I share my dataloader currently for multiple tasks: https:\/\/gist.github.com\/rabeehkarimimahabadi\/39f9444a4fb6f53dcc4fca5d73bf8195 \r\n\r\nI need to train my model distributedly with this dataloader, \"MultiTasksataloader\", currently this does not work in distributed fasion,\r\nto save on memory I tried to use iterative datasets, could you have a look in this dataloader and tell me if this is indeed the case? not sure how to make datasets being iterative to not load them in memory, then I remove the sampler for dataloader, and shard the data per core, could you tell me please how I should implement this case in datasets library? and how do you find my implementation in terms of correctness? thanks \r\n"],"created_at":1606832402000,"updated_at":1606872764000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI am dealing with multiple datasets, I need to have a dataloader over them with a condition that in each batch data samples are coming from one of the datasets. My main question is: \r\n- I need to have a way to sample the datasets first with some weights, lets say 2x dataset1 1x dataset2, could you point me how I can do it\r\n\r\nsub-questions:\r\n- I want to concat sampled datasets and define one dataloader on it, then I need a way to make sure batches come from 1 dataset in each iteration, could you assist me how I can do?\r\n- I use iterative-type of datasets, but I need a method of shuffling still since it brings accuracy performance issues if not doing it, thanks for the help. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/961\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/960","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/960\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/960\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/960\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/960","id":754422710,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwMzI1MzUx","number":960,"title":"Add code to automate parts of the dataset card","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606831491000,"updated_at":1619423761000,"closed_at":1619423761000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/960","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/960","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/960.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/960.patch"},"body":"Most parts of the \"Dataset Structure\" section can be generated automatically. 
This PR adds some code to do so.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/960\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/959","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/959\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/959\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/959\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/959","id":754418610,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwMzIxOTM1","number":959,"title":"Add Tunizi Dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606831179000,"updated_at":1607005301000,"closed_at":1607005300000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/959","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/959","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/959.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/959.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/959\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/958","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/958\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/958\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/958\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/958","id":754404095,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwMzA5ODkz","number":958,"title":"dataset(ncslgr): add initial loading 
script","user":{"login":"AmitMY","id":5757359,"node_id":"MDQ6VXNlcjU3NTczNTk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5757359?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/AmitMY","html_url":"https:\/\/github.com\/AmitMY","followers_url":"https:\/\/api.github.com\/users\/AmitMY\/followers","following_url":"https:\/\/api.github.com\/users\/AmitMY\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/AmitMY\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/AmitMY\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/AmitMY\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/AmitMY\/orgs","repos_url":"https:\/\/api.github.com\/users\/AmitMY\/repos","events_url":"https:\/\/api.github.com\/users\/AmitMY\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/AmitMY\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq I added the README files, and now the tests fail... (check commit history, only changed MD file)\r\nThe tests seem a bit unstable","the `RemoteDatasetTest ` errors in the CI are fixed on master so it's fine","merging since the CI is fixed on master"],"created_at":1606830077000,"updated_at":1607358939000,"closed_at":1607358939000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/958","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/958","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/958.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/958.patch"},"body":"clean #789","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/958\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/957","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/957\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/957\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/957\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/957","id":754380073,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwMjg5OTk4","number":957,"title":"Isixhosa ner 
corpus","user":{"login":"yvonnegitau","id":7923902,"node_id":"MDQ6VXNlcjc5MjM5MDI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7923902?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yvonnegitau","html_url":"https:\/\/github.com\/yvonnegitau","followers_url":"https:\/\/api.github.com\/users\/yvonnegitau\/followers","following_url":"https:\/\/api.github.com\/users\/yvonnegitau\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yvonnegitau\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yvonnegitau\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yvonnegitau\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yvonnegitau\/orgs","repos_url":"https:\/\/api.github.com\/users\/yvonnegitau\/repos","events_url":"https:\/\/api.github.com\/users\/yvonnegitau\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yvonnegitau\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606828116000,"updated_at":1606846498000,"closed_at":1606846498000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/957","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/957","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/957.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/957.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/957\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/956","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/956\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/956\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/956\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/956","id":754368378,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwMjgwMzU1","number":956,"title":"Add Norwegian NER","user":{"login":"jplu","id":959590,"node_id":"MDQ6VXNlcjk1OTU5MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/959590?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jplu","html_url":"https:\/\/github.com\/jplu","followers_url":"https:\/\/api.github.com\/users\/jplu\/followers","following_url":"https:\/\/api.github.com\/users\/jplu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jplu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jplu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jplu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jplu\/orgs","repos_url":"https:\/\/api.github.com\/users\/jplu\/repos","events_url":"https:\/\/api.github.com\/users\/jplu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jplu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Merging this one, good job and thank you @jplu :) 
"],"created_at":1606827062000,"updated_at":1606899191000,"closed_at":1606846161000,"author_association":"COLLABORATOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/956","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/956","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/956.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/956.patch"},"body":"This PR adds the [Norwegian NER](https:\/\/github.com\/ljos\/navnkjenner) dataset.\r\n\r\nI have added the `conllu` package as a test dependency. This is required to properly parse the `.conllu` files.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/956\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/955","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/955\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/955\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/955\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/955","id":754367291,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwMjc5NDQw","number":955,"title":"Added PragmEval benchmark","user":{"login":"sileod","id":9168444,"node_id":"MDQ6VXNlcjkxNjg0NDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9168444?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sileod","html_url":"https:\/\/github.com\/sileod","followers_url":"https:\/\/api.github.com\/users\/sileod\/followers","following_url":"https:\/\/api.github.com\/users\/sileod\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sileod\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sileod\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sileod\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sileod\/orgs","repos_url":"https:\/\/api.github.com\/users\/sileod\/repos","events_url":"https:\/\/api.github.com\/users\/sileod\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sileod\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> Really cool ! Thanks for adding this one :)\r\n> Good job at adding all those citations for each task\r\n> \r\n> Looks like the dummy data test doesn't pass. 
Maybe some files are missing in the dummy_data.zip files ?\r\n> The error reports `pragmeval\/verifiability\/train.tsv` to be missing\r\n> \r\n> Also could you add the tags part of the dataset card (the rest is optional) ?\r\n> See more info here : https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card\r\n\r\nIn the prior commits I generated dataset_infos and the dummy files myself\r\nNow they are generated with the cli, and the tests now seem to be passing better\r\nI will look into the tag\r\n","Looks like you did a good job with dummy data in the first place !\r\nThe downside of automatically generated dummy data is that the files are heavier (here 40KB per file).\r\nIf you could replace the generated dummy files with the one you created yourself it would be awesome, since the one you did yourself are way lighter (around 1KB per file). Using small files make `git clone` run faster so we encourage to use small dummy_data files.","could you rebase from master ? it should fix the CI","> could you rebase from master ? it should fix the CI\r\n\r\nI think it is due to the file structure of the dummy data that causes test failure. The automatically generated dummy data pass the tests","Indeed the error reports that `pragmeval\/verifiability\/train.tsv` is missing for the verifiability dummy_data.zip file.\r\nTo fix that you should add the missing data files in each dummy_data.zip file.\r\nTo test that your dummy data work you can run\r\n```\r\nRUN_SLOW=1 pytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_\r\n```\r\nif some file is missing it should tell you which one","Also it looks like you haven't rebased from master yet, even though you did a `rebase` commit. \r\n\r\nrebasing should fix the other CI fails","It's ok if we have `RemoteDatasetTest ` errors, they're fixed on master","merging since the CI is fixed on master","Hey @sileod! Super nice to see you participating ;)\r\n\r\nDid you officially joined the sprint by posting on [the forum thread](https:\/\/discuss.huggingface.co\/t\/open-to-the-community-one-week-team-effort-to-reach-v2-0-of-hf-datasets-library\/2176) and joining our slack?\r\n\r\nI can't seem to find you there! 
Should I add you directly with your gmail address?","Hi @sileod \ud83d\udc4b "],"created_at":1606826955000,"updated_at":1607078612000,"closed_at":1606988207000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/955","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/955","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/955.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/955.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/955\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/954","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/954\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/954\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/954\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/954","id":754362012,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwMjc1MDY4","number":954,"title":"add prachathai67k","user":{"login":"cstorm125","id":15519308,"node_id":"MDQ6VXNlcjE1NTE5MzA4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15519308?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cstorm125","html_url":"https:\/\/github.com\/cstorm125","followers_url":"https:\/\/api.github.com\/users\/cstorm125\/followers","following_url":"https:\/\/api.github.com\/users\/cstorm125\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cstorm125\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cstorm125\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cstorm125\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cstorm125\/orgs","repos_url":"https:\/\/api.github.com\/users\/cstorm125\/repos","events_url":"https:\/\/api.github.com\/users\/cstorm125\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cstorm125\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Test failing for same issues as https:\/\/github.com\/huggingface\/datasets\/pull\/939\r\nPlease advise.\r\n\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests\/test_dataset_common.py::RemoteDatasetTest::test_builder_class_flue\r\nFAILED tests\/test_dataset_common.py::RemoteDatasetTest::test_builder_class_norwegian_ner\r\nFAILED tests\/test_dataset_common.py::RemoteDatasetTest::test_builder_configs_flue\r\nFAILED tests\/test_dataset_common.py::RemoteDatasetTest::test_builder_configs_norwegian_ner\r\nFAILED tests\/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_flue\r\nFAILED tests\/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_norwegian_ner\r\nFAILED tests\/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_xglue\r\n===== 7 failed, 1309 passed, 932 skipped, 11 warnings in 166.71s (0:02:46) =====\r\n```","Closing and opening a new pull request to solve rebase issues","To be continued on 
https:\/\/github.com\/huggingface\/datasets\/pull\/982"],"created_at":1606826455000,"updated_at":1606885931000,"closed_at":1606884232000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/954","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/954","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/954.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/954.patch"},"body":"`prachathai-67k`: News Article Corpus and Multi-label Text Classificdation from Prachathai.com\r\nThe prachathai-67k dataset was scraped from the news site Prachathai.\r\nWe filtered out those articles with less than 500 characters of body text, mostly images and cartoons.\r\nIt contains 67,889 articles wtih 12 curated tags from August 24, 2004 to November 15, 2018.\r\nThe dataset was originally scraped by @lukkiddd and cleaned by @cstorm125.\r\nYou can also see preliminary exploration at https:\/\/github.com\/PyThaiNLP\/prachathai-67k\/blob\/master\/exploration.ipynb","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/954\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/953","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/953\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/953\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/953\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/953","id":754359942,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwMjczMzg5","number":953,"title":"added health_fact dataset ","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lhoestq,\r\nInitially I tried int(-1) only in place of nan labels and missing values but I kept on getting this error ```pyarrow.lib.ArrowTypeError: Expected bytes, got a 'int' object``` maybe because I'm sending int values (-1) to objects which are string 
type"],"created_at":1606826264000,"updated_at":1606864293000,"closed_at":1606864293000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/953","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/953","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/953.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/953.patch"},"body":"Added dataset Explainable Fact-Checking for Public Health Claims (dataset_id: health_fact)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/953\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/952","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/952\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/952\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/952\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/952","id":754357270,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwMjcxMTQz","number":952,"title":"Add orange sum","user":{"login":"moussaKam","id":28675016,"node_id":"MDQ6VXNlcjI4Njc1MDE2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28675016?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/moussaKam","html_url":"https:\/\/github.com\/moussaKam","followers_url":"https:\/\/api.github.com\/users\/moussaKam\/followers","following_url":"https:\/\/api.github.com\/users\/moussaKam\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/moussaKam\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/moussaKam\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/moussaKam\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/moussaKam\/orgs","repos_url":"https:\/\/api.github.com\/users\/moussaKam\/repos","events_url":"https:\/\/api.github.com\/users\/moussaKam\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/moussaKam\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606826014000,"updated_at":1606837440000,"closed_at":1606837440000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/952","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/952","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/952.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/952.patch"},"body":"Add OrangeSum a french abstractive summarization dataset. 
\r\n\r\nPaper: [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https:\/\/arxiv.org\/abs\/2010.12321)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/952\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/951","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/951\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/951\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/951\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/951","id":754349979,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwMjY1MTY0","number":951,"title":"Prachathai67k","user":{"login":"cstorm125","id":15519308,"node_id":"MDQ6VXNlcjE1NTE5MzA4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15519308?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cstorm125","html_url":"https:\/\/github.com\/cstorm125","followers_url":"https:\/\/api.github.com\/users\/cstorm125\/followers","following_url":"https:\/\/api.github.com\/users\/cstorm125\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cstorm125\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cstorm125\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cstorm125\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cstorm125\/orgs","repos_url":"https:\/\/api.github.com\/users\/cstorm125\/repos","events_url":"https:\/\/api.github.com\/users\/cstorm125\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cstorm125\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Wrongly branching from existing branch of wisesight_sentiment. Closing and opening another one specifically for prachathai67k"],"created_at":1606825312000,"updated_at":1606825793000,"closed_at":1606825706000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/951","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/951","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/951.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/951.patch"},"body":"Add `prachathai-67k`: News Article Corpus and Multi-label Text Classificdation from Prachathai.com\r\n\r\nThe `prachathai-67k` dataset was scraped from the news site [Prachathai](prachathai.com). We filtered out those articles with less than 500 characters of body text, mostly images and cartoons. It contains 67,889 articles wtih 12 curated tags from August 24, 2004 to November 15, 2018. The dataset was originally scraped by [@lukkiddd](https:\/\/github.com\/lukkiddd) and cleaned by [@cstorm125](https:\/\/github.com\/cstorm125). Download the dataset [here](https:\/\/www.dropbox.com\/s\/fsxepdka4l2pr45\/prachathai-67k.zip?dl=1). You can also see preliminary exploration in [exploration.ipynb](https:\/\/github.com\/PyThaiNLP\/prachathai-67k\/blob\/master\/exploration.ipynb).\r\n\r\nThis dataset is a part of [pyThaiNLP](https:\/\/github.com\/PyThaiNLP\/) Thai text [classification-benchmarks](https:\/\/github.com\/PyThaiNLP\/classification-benchmarks). 
For the benchmark, we selected the following tags with substantial volume that resemble **classifying types of articles**:\r\n\r\n* `\u0e01\u0e32\u0e23\u0e40\u0e21\u0e37\u0e2d\u0e07` - politics\r\n* `\u0e2a\u0e34\u0e17\u0e18\u0e34\u0e21\u0e19\u0e38\u0e29\u0e22\u0e0a\u0e19` - human_rights\r\n* `\u0e04\u0e38\u0e13\u0e20\u0e32\u0e1e\u0e0a\u0e35\u0e27\u0e34\u0e15` - quality_of_life\r\n* `\u0e15\u0e48\u0e32\u0e07\u0e1b\u0e23\u0e30\u0e40\u0e17\u0e28` - international\r\n* `\u0e2a\u0e31\u0e07\u0e04\u0e21` - social\r\n* `\u0e2a\u0e34\u0e48\u0e07\u0e41\u0e27\u0e14\u0e25\u0e49\u0e2d\u0e21` - environment\r\n* `\u0e40\u0e28\u0e23\u0e29\u0e10\u0e01\u0e34\u0e08` - economics\r\n* `\u0e27\u0e31\u0e12\u0e19\u0e18\u0e23\u0e23\u0e21` - culture\r\n* `\u0e41\u0e23\u0e07\u0e07\u0e32\u0e19` - labor\r\n* `\u0e04\u0e27\u0e32\u0e21\u0e21\u0e31\u0e48\u0e19\u0e04\u0e07` - national_security\r\n* `\u0e44\u0e2d\u0e0b\u0e35\u0e17\u0e35` - ict\r\n* `\u0e01\u0e32\u0e23\u0e28\u0e36\u0e01\u0e29\u0e32` - education","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/951\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/950","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/950\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/950\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/950\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/950","id":754318686,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwMjM4OTQx","number":950,"title":"Support .xz file format","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606822488000,"updated_at":1606829958000,"closed_at":1606829958000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/950","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/950","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/950.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/950.patch"},"body":"Add support to extract\/uncompress files in .xz 
format.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/950\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/949","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/949\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/949\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/949\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/949","id":754317777,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwMjM4MTky","number":949,"title":"Add GermaNER Dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq added. 
"],"created_at":1606822411000,"updated_at":1607004401000,"closed_at":1607004400000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/949","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/949","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/949.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/949.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/949\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/948","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/948\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/948\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/948\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/948","id":754306260,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwMjI4NjQz","number":948,"title":"docs(ADD_NEW_DATASET): correct indentation for script","user":{"login":"AmitMY","id":5757359,"node_id":"MDQ6VXNlcjU3NTczNTk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5757359?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/AmitMY","html_url":"https:\/\/github.com\/AmitMY","followers_url":"https:\/\/api.github.com\/users\/AmitMY\/followers","following_url":"https:\/\/api.github.com\/users\/AmitMY\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/AmitMY\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/AmitMY\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/AmitMY\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/AmitMY\/orgs","repos_url":"https:\/\/api.github.com\/users\/AmitMY\/repos","events_url":"https:\/\/api.github.com\/users\/AmitMY\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/AmitMY\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606821458000,"updated_at":1606821918000,"closed_at":1606821918000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/948","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/948","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/948.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/948.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/948\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/947","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/947\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/947\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/947\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/947","id":754286658,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwMjEyMjc3","number":947,"title":"Add europeana 
newspapers","user":{"login":"jplu","id":959590,"node_id":"MDQ6VXNlcjk1OTU5MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/959590?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jplu","html_url":"https:\/\/github.com\/jplu","followers_url":"https:\/\/api.github.com\/users\/jplu\/followers","following_url":"https:\/\/api.github.com\/users\/jplu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jplu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jplu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jplu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jplu\/orgs","repos_url":"https:\/\/api.github.com\/users\/jplu\/repos","events_url":"https:\/\/api.github.com\/users\/jplu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jplu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606819938000,"updated_at":1606902155000,"closed_at":1606902129000,"author_association":"COLLABORATOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/947","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/947","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/947.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/947.patch"},"body":"This PR adds the [Europeana newspapers](https:\/\/github.com\/EuropeanaNewspapers\/ner-corpora) dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/947\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/946","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/946\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/946\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/946\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/946","id":754278632,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwMjA1Nzgw","number":946,"title":"add PEC dataset","user":{"login":"zhongpeixiang","id":11826803,"node_id":"MDQ6VXNlcjExODI2ODAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11826803?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/zhongpeixiang","html_url":"https:\/\/github.com\/zhongpeixiang","followers_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/followers","following_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/orgs","repos_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/repos","events_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The checks failed again even if I didn't make any 
changes.","you just need to rebase from master to fix the CI :)","Sorry for the mess, I'm confused by the rebase and thus created a new branch."],"created_at":1606819301000,"updated_at":1606963634000,"closed_at":1606963634000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/946","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/946","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/946.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/946.patch"},"body":"A persona-based empathetic conversation dataset published at EMNLP 2020.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/946\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/945","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/945\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/945\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/945\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/945","id":754273920,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwMjAyMDM1","number":945,"title":"Adding Babi dataset - English version","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Replaced by #1126"],"created_at":1606818936000,"updated_at":1607096585000,"closed_at":1607096574000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/945","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/945","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/945.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/945.patch"},"body":"Adding the English version of bAbI.\r\n\r\nSamples are taken from ParlAI for consistency with the main users at the moment.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/945\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/944","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/944\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/944\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/944\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/944","id":754228947,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwMTY0NTU5","number":944,"title":"Add German Legal Entity Recognition Dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["thanks ! 
merging this one"],"created_at":1606815502000,"updated_at":1607000816000,"closed_at":1607000815000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/944","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/944","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/944.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/944.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/944\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/943","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/943\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/943\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/943\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/943","id":754192491,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwMTM2ODM3","number":943,"title":"The FLUE Benchmark","user":{"login":"jplu","id":959590,"node_id":"MDQ6VXNlcjk1OTU5MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/959590?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jplu","html_url":"https:\/\/github.com\/jplu","followers_url":"https:\/\/api.github.com\/users\/jplu\/followers","following_url":"https:\/\/api.github.com\/users\/jplu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jplu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jplu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jplu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jplu\/orgs","repos_url":"https:\/\/api.github.com\/users\/jplu\/repos","events_url":"https:\/\/api.github.com\/users\/jplu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jplu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606813250000,"updated_at":1606836278000,"closed_at":1606836270000,"author_association":"COLLABORATOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/943","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/943","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/943.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/943.patch"},"body":"This PR adds the [FLUE](https:\/\/github.com\/getalp\/Flaubert\/tree\/master\/flue) benchmark which is a set of different datasets to evaluate models for French content.\r\n\r\nTwo datasets are missing, the French Treebank that we can use only for research purpose and we are not allowed to distribute, and the Word Sense disambiguation for Nouns that will be added later.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/943\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/942","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/942\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/942\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/942\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/942","id":754162318,"node_id":"MDU6SXNzdWU3NTQxNjIzMTg=","number":942,"title":"D","user":{"login":"CryptoMiKKi","id":74238514,"node_id":"MDQ6VXNlcjc0MjM4NTE0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/74238514?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/CryptoMiKKi","html_url":"https:\/\/github.com\/CryptoMiKKi","followers_url":"https:\/\/api.github.com\/users\/CryptoMiKKi\/followers","following_url":"https:\/\/api.github.com\/users\/CryptoMiKKi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/CryptoMiKKi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/CryptoMiKKi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/CryptoMiKKi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/CryptoMiKKi\/orgs","repos_url":"https:\/\/api.github.com\/users\/CryptoMiKKi\/repos","events_url":"https:\/\/api.github.com\/users\/CryptoMiKKi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/CryptoMiKKi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606810630000,"updated_at":1607013773000,"closed_at":1607013773000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\n- **Name:** *name of the dataset*\n- **Description:** *short description of the dataset (or link to social media or blog post)*\n- **Paper:** *link to the dataset paper if available*\n- **Data:** *link to the Github repository or current dataset location*\n- **Motivation:** *what are some good reasons to have this dataset*\n\nInstructions to add a new dataset can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/942\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/941","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/941\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/941\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/941\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/941","id":754141321,"node_id":"MDExOlB1bGxSZXF1ZXN0NTMwMDk0MTI2","number":941,"title":"Add People's Daily NER 
dataset","user":{"login":"JetRunner","id":22514219,"node_id":"MDQ6VXNlcjIyNTE0MjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22514219?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JetRunner","html_url":"https:\/\/github.com\/JetRunner","followers_url":"https:\/\/api.github.com\/users\/JetRunner\/followers","following_url":"https:\/\/api.github.com\/users\/JetRunner\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JetRunner\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JetRunner\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JetRunner\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JetRunner\/orgs","repos_url":"https:\/\/api.github.com\/users\/JetRunner\/repos","events_url":"https:\/\/api.github.com\/users\/JetRunner\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JetRunner\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> LGTM thanks :)\n> \n> \n> \n> Before we merge, could you add a dataset card ? see here for more info: https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card\n> \n> \n> \n> Note that only the tags at the top of the dataset card are mandatory, if you feel like it's going to take too much time writing the rest to fill it all you can just skip the paragraphs\n\nNope. I don't think there is a citation. Also, can I do the dataset card later (maybe in bulk)?","We're doing one PR = one dataset to keep track of things. Feel free to add the tags later in this PR if you want to.\r\nAlso only the tags are required now, because we don't want people spending too much time on the cards","added @lhoestq ","Merging since the CI is fixed on master"],"created_at":1606808933000,"updated_at":1606934563000,"closed_at":1606934561000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/941","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/941","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/941.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/941.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/941\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/940","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/940\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/940\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/940\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/940","id":754010753,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI5OTc3OTQ2","number":940,"title":"Add MSRA NER 
dataset","user":{"login":"JetRunner","id":22514219,"node_id":"MDQ6VXNlcjIyNTE0MjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22514219?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JetRunner","html_url":"https:\/\/github.com\/JetRunner","followers_url":"https:\/\/api.github.com\/users\/JetRunner\/followers","following_url":"https:\/\/api.github.com\/users\/JetRunner\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JetRunner\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JetRunner\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JetRunner\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JetRunner\/orgs","repos_url":"https:\/\/api.github.com\/users\/JetRunner\/repos","events_url":"https:\/\/api.github.com\/users\/JetRunner\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JetRunner\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["LGTM, don't forget the tags ;)"],"created_at":1606798931000,"updated_at":1607074180000,"closed_at":1606807553000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/940","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/940","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/940.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/940.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/940\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/939","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/939\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/939\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/939\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/939","id":753965405,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI5OTQwOTYz","number":939,"title":"add wisesight_sentiment","user":{"login":"cstorm125","id":15519308,"node_id":"MDQ6VXNlcjE1NTE5MzA4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15519308?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cstorm125","html_url":"https:\/\/github.com\/cstorm125","followers_url":"https:\/\/api.github.com\/users\/cstorm125\/followers","following_url":"https:\/\/api.github.com\/users\/cstorm125\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cstorm125\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cstorm125\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cstorm125\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cstorm125\/orgs","repos_url":"https:\/\/api.github.com\/users\/cstorm125\/repos","events_url":"https:\/\/api.github.com\/users\/cstorm125\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cstorm125\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq Thanks, Quentin. Removed the .ipynb_checkpoints and edited the README.md. 
The tests are failing because of other dataets. I'm figuring out why since the commits only have changes on `wisesight_sentiment`\r\n\r\n```\r\nFAILED tests\/test_dataset_common.py::RemoteDatasetTest::test_builder_class_flue\r\nFAILED tests\/test_dataset_common.py::RemoteDatasetTest::test_builder_class_norwegian_ner\r\nFAILED tests\/test_dataset_common.py::RemoteDatasetTest::test_builder_configs_flue\r\nFAILED tests\/test_dataset_common.py::RemoteDatasetTest::test_builder_configs_norwegian_ner\r\nFAILED tests\/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_flue\r\nFAILED tests\/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_norwegian_ner\r\nFAILED tests\/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_xglue\r\n```","@cstorm125 I really like the dataset and dataset card but there seems to have been a rebase issue at some point since it's now changing 140 files :D \r\n\r\nCould you rebase from master?","I think it might be faster to close and reopen.","To be continued on: https:\/\/github.com\/huggingface\/datasets\/pull\/981"],"created_at":1606791999000,"updated_at":1606884758000,"closed_at":1606883751000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/939","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/939","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/939.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/939.patch"},"body":"Add `wisesight_sentiment` Social media messages in Thai language with sentiment label (positive, neutral, negative, question)\r\n\r\nModel Card:\r\n---\r\nYAML tags:\r\nannotations_creators:\r\n- expert-generated\r\nlanguage_creators:\r\n- found\r\nlanguages:\r\n- th\r\nlicenses:\r\n- cc0-1.0\r\nmultilinguality:\r\n- monolingual\r\nsize_categories:\r\n- 10K<n<100K\r\nsource_datasets:\r\n- original\r\ntask_categories:\r\n- text-classification\r\ntask_ids:\r\n- sentiment-classification\r\n---\r\n\r\n# Dataset Card for wisesight_sentiment\r\n\r\n## Table of Contents\r\n- [Dataset Description](#dataset-description)\r\n - [Dataset Summary](#dataset-summary)\r\n - [Supported Tasks](#supported-tasks-and-leaderboards)\r\n - [Languages](#languages)\r\n- [Dataset Structure](#dataset-structure)\r\n - [Data Instances](#data-instances)\r\n - [Data Fields](#data-instances)\r\n - [Data Splits](#data-instances)\r\n- [Dataset Creation](#dataset-creation)\r\n - [Curation Rationale](#curation-rationale)\r\n - [Source Data](#source-data)\r\n - [Annotations](#annotations)\r\n - [Personal and Sensitive Information](#personal-and-sensitive-information)\r\n- [Considerations for Using the Data](#considerations-for-using-the-data)\r\n - [Social Impact of Dataset](#social-impact-of-dataset)\r\n - [Discussion of Biases](#discussion-of-biases)\r\n - [Other Known Limitations](#other-known-limitations)\r\n- [Additional Information](#additional-information)\r\n - [Dataset Curators](#dataset-curators)\r\n - [Licensing Information](#licensing-information)\r\n - [Citation Information](#citation-information)\r\n\r\n## Dataset Description\r\n\r\n- **Homepage:** https:\/\/github.com\/PyThaiNLP\/wisesight-sentiment\r\n- **Repository:** https:\/\/github.com\/PyThaiNLP\/wisesight-sentiment\r\n- **Paper:**\r\n- **Leaderboard:** https:\/\/www.kaggle.com\/c\/wisesight-sentiment\/\r\n- **Point of Contact:** https:\/\/github.com\/PyThaiNLP\/\r\n\r\n### Dataset Summary\r\n\r\nWisesight Sentiment Corpus: Social media messages 
in Thai language with sentiment label (positive, neutral, negative, question)\r\n- Released to public domain under Creative Commons Zero v1.0 Universal license.\r\n- Labels: {\"pos\": 0, \"neu\": 1, \"neg\": 2, \"q\": 3}\r\n- Size: 26,737 messages\r\n- Language: Central Thai\r\n- Style: Informal and conversational. With some news headlines and advertisement.\r\n- Time period: Around 2016 to early 2019. With small amount from other period.\r\n- Domains: Mixed. Majority are consumer products and services (restaurants, cosmetics, drinks, car, hotels), with some current affairs.\r\n- Privacy:\r\n - Only messages that made available to the public on the internet (websites, blogs, social network sites).\r\n - For Facebook, this means the public comments (everyone can see) that made on a public page.\r\n - Private\/protected messages and messages in groups, chat, and inbox are not included.\r\n- Alternations and modifications:\r\n - Keep in mind that this corpus does not statistically represent anything in the language register.\r\n - Large amount of messages are not in their original form. Personal data are removed or masked.\r\n - Duplicated, leading, and trailing whitespaces are removed. Other punctuations, symbols, and emojis are kept intact.\r\n (Mis)spellings are kept intact.\r\n - Messages longer than 2,000 characters are removed.\r\n - Long non-Thai messages are removed. Duplicated message (exact match) are removed.\r\n- More characteristics of the data can be explore [this notebook](https:\/\/github.com\/PyThaiNLP\/wisesight-sentiment\/blob\/master\/exploration.ipynb)\r\n\r\n### Supported Tasks and Leaderboards\r\n\r\nSentiment analysis \/ [Kaggle Leaderboard](https:\/\/www.kaggle.com\/c\/wisesight-sentiment\/)\r\n\r\n### Languages\r\n\r\nThai\r\n\r\n## Dataset Structure\r\n\r\n### Data Instances\r\n\r\n```\r\n{'category': 'pos', 'texts': '\u0e19\u0e48\u0e32\u0e2a\u0e19\u0e19\u0e19'}\r\n{'category': 'neu', 'texts': '\u0e04\u0e23\u0e31\u0e1a #phithanbkk'}\r\n{'category': 'neg', 'texts': '\u0e0b\u0e37\u0e49\u0e2d\u0e41\u0e15\u0e48\u0e1c\u0e49\u0e32\u0e2d\u0e19\u0e32\u0e21\u0e31\u0e22\u0e41\u0e1a\u0e1a\u0e40\u0e22\u0e47\u0e19\u0e21\u0e32\u0e04\u0e48\u0e30 \u0e41\u0e1a\u0e1a\u0e27\u0e48\u0e32\u0e2d\u0e35\u0e2b\u0e48\u0e32\u0e01\u0e39\u0e19\u0e2d\u0e19\u0e44\u0e21\u0e48\u0e44\u0e14\u0e49'}\r\n{'category': 'q', 'texts': '\u0e21\u0e35\u0e41\u0e2d\u0e25\u0e01\u0e2d\u0e2e\u0e2d\u0e25\u0e21\u0e31\u0e49\u0e22\u0e04\u0e30'}\r\n```\r\n\r\n### Data Fields\r\n\r\n- `texts`: texts \r\n- `category`: sentiment of texts ranging from `pos` (positive; 0), `neu` (neutral; 1), `neg` (negative; 2) and `q` (question; 3)\r\n\r\n### Data Splits\r\n\r\n| | train | valid | test |\r\n|-----------|-------|-------|-------|\r\n| # samples | 21628 | 2404 | 2671 |\r\n| # neu | 11795 | 1291 | 1453 |\r\n| # neg | 5491 | 637 | 683 |\r\n| # pos | 3866 | 434 | 478 |\r\n| # q | 476 | 42 | 57 |\r\n| avg words | 27.21 | 27.18 | 27.12 |\r\n| avg chars | 89.82 | 89.50 | 90.36 |\r\n\r\n## Dataset Creation\r\n\r\n### Curation Rationale\r\n\r\nOriginally, the dataset was conceived for the [In-class Kaggle Competition](https:\/\/www.kaggle.com\/c\/wisesight-sentiment\/) at Chulalongkorn university by [Ekapol Chuangsuwanich](https:\/\/www.cp.eng.chula.ac.th\/en\/about\/faculty\/ekapolc\/) (Faculty of Engineering, Chulalongkorn University). It has since become one of the benchmarks for sentiment analysis in Thai.\r\n\r\n### Source Data\r\n\r\n#### Initial Data Collection and Normalization\r\n\r\n- Style: Informal and conversational. 
With some news headlines and advertisement.\r\n- Time period: Around 2016 to early 2019. With small amount from other period.\r\n- Domains: Mixed. Majority are consumer products and services (restaurants, cosmetics, drinks, car, hotels), with some current affairs.\r\n- Privacy:\r\n - Only messages that made available to the public on the internet (websites, blogs, social network sites).\r\n - For Facebook, this means the public comments (everyone can see) that made on a public page.\r\n - Private\/protected messages and messages in groups, chat, and inbox are not included.\r\n - Usernames and non-public figure names are removed\r\n - Phone numbers are masked (e.g. 088-888-8888, 09-9999-9999, 0-2222-2222)\r\n - If you see any personal data still remain in the set, please tell us - so we can remove them.\r\n- Alternations and modifications:\r\n - Keep in mind that this corpus does not statistically represent anything in the language register.\r\n - Large amount of messages are not in their original form. Personal data are removed or masked.\r\n - Duplicated, leading, and trailing whitespaces are removed. Other punctuations, symbols, and emojis are kept intact.\r\n - (Mis)spellings are kept intact.\r\n - Messages longer than 2,000 characters are removed.\r\n - Long non-Thai messages are removed. Duplicated message (exact match) are removed.\r\n\r\n\r\n#### Who are the source language producers?\r\n\r\nSocial media users in Thailand\r\n\r\n### Annotations\r\n\r\n#### Annotation process\r\n\r\n- Sentiment values are assigned by human annotators.\r\n- A human annotator put his\/her best effort to assign just one label, out of four, to a message.\r\n- Agreement, enjoyment, and satisfaction are positive. Disagreement, sadness, and disappointment are negative.\r\n- Showing interest in a topic or in a product is counted as positive. In this sense, a question about a particular product could has a positive sentiment value, if it shows the interest in the product.\r\n- Saying that other product or service is better is counted as negative.\r\n- General information or news title tend to be counted as neutral.\r\n\r\n#### Who are the annotators?\r\n\r\nOutsourced annotators hired by [Wisesight (Thailand) Co., Ltd.](https:\/\/github.com\/wisesight\/)\r\n\r\n### Personal and Sensitive Information\r\n\r\n- We trying to exclude any known personally identifiable information from this data set.\r\n- Usernames and non-public figure names are removed\r\n- Phone numbers are masked (e.g. 088-888-8888, 09-9999-9999, 0-2222-2222)\r\n- If you see any personal data still remain in the set, please tell us - so we can remove them.\r\n\r\n## Considerations for Using the Data\r\n\r\n### Social Impact of Dataset\r\n\r\n- `wisesight_sentiment` is the first and one of the few open datasets for sentiment analysis of social media data in Thai\r\n- There are risks of personal information that escape the anonymization process\r\n\r\n### Discussion of Biases\r\n\r\n- A message can be ambiguous. When possible, the judgement will be based solely on the text itself.\r\n - In some situation, like when the context is missing, the annotator may have to rely on his\/her own world knowledge and just guess.\r\n - In some cases, the human annotator may have an access to the message's context, like an image. 
These additional information are not included as part of this corpus.\r\n\r\n### Other Known Limitations\r\n\r\n- The labels are imbalanced; over half of the texts are `neu` (neutral) whereas there are very few `q` (question).\r\n- Misspellings in social media texts make word tokenization process for Thai difficult, thus impacting the model performance\r\n\r\n## Additional Information\r\n\r\n### Dataset Curators\r\n\r\nThanks [PyThaiNLP](https:\/\/github.com\/PyThaiNLP\/pythainlp) community, [Kitsuchart Pasupa](http:\/\/www.it.kmitl.ac.th\/~kitsuchart\/) (Faculty of Information Technology, King Mongkut's Institute of Technology Ladkrabang), and [Ekapol Chuangsuwanich](https:\/\/www.cp.eng.chula.ac.th\/en\/about\/faculty\/ekapolc\/) (Faculty of Engineering, Chulalongkorn University) for advice. The original Kaggle competition, using the first version of this corpus, can be found at https:\/\/www.kaggle.com\/c\/wisesight-sentiment\/ \r\n\r\n### Licensing Information\r\n\r\n- If applicable, copyright of each message content belongs to the original poster.\r\n- **Annotation data (labels) are released to public domain.**\r\n- [Wisesight (Thailand) Co., Ltd.](https:\/\/github.com\/wisesight\/) helps facilitate the annotation, but does not necessarily agree upon the labels made by the human annotators. This annotation is for research purpose and does not reflect the professional work that Wisesight has been done for its customers.\r\n- The human annotator does not necessarily agree or disagree with the message. Likewise, the label he\/she made to the message does not necessarily reflect his\/her personal view towards the message.\r\n\r\n### Citation Information\r\n\r\nPlease cite the following if you make use of the dataset:\r\n\r\nArthit Suriyawongkul, Ekapol Chuangsuwanich, Pattarawat Chormai, and Charin Polpanumas. 2019. 
**PyThaiNLP\/wisesight-sentiment: First release.** September.\r\n\r\nBibTeX:\r\n```\r\n@software{bact_2019_3457447,\r\n author = {Suriyawongkul, Arthit and\r\n Chuangsuwanich, Ekapol and\r\n Chormai, Pattarawat and\r\n Polpanumas, Charin},\r\n title = {PyThaiNLP\/wisesight-sentiment: First release},\r\n month = sep,\r\n year = 2019,\r\n publisher = {Zenodo},\r\n version = {v1.0},\r\n doi = {10.5281\/zenodo.3457447},\r\n url = {https:\/\/doi.org\/10.5281\/zenodo.3457447}\r\n}\r\n```\r\n\r\n ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/939\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/938","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/938\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/938\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/938\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/938","id":753940979,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI5OTIxNzU5","number":938,"title":"V-1.0.0 of isizulu_ner_corpus","user":{"login":"yvonnegitau","id":7923902,"node_id":"MDQ6VXNlcjc5MjM5MDI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7923902?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yvonnegitau","html_url":"https:\/\/github.com\/yvonnegitau","followers_url":"https:\/\/api.github.com\/users\/yvonnegitau\/followers","following_url":"https:\/\/api.github.com\/users\/yvonnegitau\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yvonnegitau\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yvonnegitau\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yvonnegitau\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yvonnegitau\/orgs","repos_url":"https:\/\/api.github.com\/users\/yvonnegitau\/repos","events_url":"https:\/\/api.github.com\/users\/yvonnegitau\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yvonnegitau\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["closing since it's been added in #957 "],"created_at":1606788272000,"updated_at":1606865676000,"closed_at":1606865676000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/938","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/938","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/938.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/938.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/938\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/937","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/937\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/937\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/937\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/937","id":753921078,"node_id":"MDU6SXNzdWU3NTM5MjEwNzg=","number":937,"title":"Local machine\/cluster Beam Datasets example\/tutorial","user":{"login":"shangw-nvidia","id":66387198,"node_id":"MDQ6VXNlcjY2Mzg3MTk4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/66387198?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/shangw-nvidia","html_url":"https:\/\/github.com\/shangw-nvidia","followers_url":"https:\/\/api.github.com\/users\/shangw-nvidia\/followers","following_url":"https:\/\/api.github.com\/users\/shangw-nvidia\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/shangw-nvidia\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/shangw-nvidia\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/shangw-nvidia\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/shangw-nvidia\/orgs","repos_url":"https:\/\/api.github.com\/users\/shangw-nvidia\/repos","events_url":"https:\/\/api.github.com\/users\/shangw-nvidia\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/shangw-nvidia\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I tried to make it run once on the SparkRunner but it seems that this runner has some issues when it is run locally.\r\nFrom my experience the DirectRunner is fine though, even if it's clearly not memory efficient.\r\n\r\nIt would be awesome though to make it work locally on a SparkRunner !\r\nDid you manage to make your processing work ?"],"created_at":1606785103000,"updated_at":1608731696000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\n\r\nI'm wondering if https:\/\/huggingface.co\/docs\/datasets\/beam_dataset.html has an non-GCP or non-Dataflow version example\/tutorial? 
I tried to migrate it to run on DirectRunner and SparkRunner, however, there were way too many runtime errors that I had to fix during the process, and even so I wasn't able to get either runner correctly producing the desired output.\r\n\r\nThanks!\r\nShang","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/937\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/936","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/936\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/936\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/936\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/936","id":753915603,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI5OTAxODMw","number":936,"title":"Added HANS parses and categories","user":{"login":"TevenLeScao","id":26709476,"node_id":"MDQ6VXNlcjI2NzA5NDc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26709476?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/TevenLeScao","html_url":"https:\/\/github.com\/TevenLeScao","followers_url":"https:\/\/api.github.com\/users\/TevenLeScao\/followers","following_url":"https:\/\/api.github.com\/users\/TevenLeScao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/TevenLeScao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/TevenLeScao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/TevenLeScao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/TevenLeScao\/orgs","repos_url":"https:\/\/api.github.com\/users\/TevenLeScao\/repos","events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606784296000,"updated_at":1606828781000,"closed_at":1606828780000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/936","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/936","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/936.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/936.patch"},"body":"This pull request adds HANS missing information: the sentence parses, as well as the heuristic category.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/936\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/935","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/935\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/935\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/935\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/935","id":753863055,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI5ODU5MjM4","number":935,"title":"add PIB 
dataset","user":{"login":"vasudevgupta7","id":53136577,"node_id":"MDQ6VXNlcjUzMTM2NTc3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/53136577?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vasudevgupta7","html_url":"https:\/\/github.com\/vasudevgupta7","followers_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/followers","following_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/orgs","repos_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/repos","events_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vasudevgupta7\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi, \r\n\r\nI am unable to get success in these tests. Can someone help me by pointing out possible errors?\r\n\r\nThanks","Hi ! you can read the tests by logging in to circleci.\r\n\r\nAnyway for information here are the errors : \r\n```\r\ndatasets\/pib\/pib.py:19:1: F401 'csv' imported but unused\r\ndatasets\/pib\/pib.py:20:1: F401 'json' imported but unused\r\ndatasets\/pib\/pib.py:36:84: W291 trailing whitespace\r\n```\r\nand \r\n```\r\nFAILED tests\/test_file_encoding.py::TestFileEncoding::test_no_encoding_on_file_open\r\n```\r\n\r\nTo fix the `test_no_encoding_on_file_open` you just have to specify an encoding while opening a text file. For example `encoding=\"utf-8\"`\r\n","All suggested changes are done.","Nice ! 
can you re-generate the dataset_infos.json file to take into account the feature type change ?\r\n```\r\ndatasets-cli test .\/datasets\/pib --save_infos --all_configs --ignore_verifications\r\n```\r\nAnd also format your code ?\r\n```\r\nmake style\r\n```"],"created_at":1606776943000,"updated_at":1606864631000,"closed_at":1606864631000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/935","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/935","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/935.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/935.patch"},"body":"This pull request will add PIB dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/935\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/934","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/934\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/934\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/934\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/934","id":753860095,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI5ODU2ODY4","number":934,"title":"small updates to the \"add new dataset\" guide","user":{"login":"VictorSanh","id":16107619,"node_id":"MDQ6VXNlcjE2MTA3NjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16107619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/VictorSanh","html_url":"https:\/\/github.com\/VictorSanh","followers_url":"https:\/\/api.github.com\/users\/VictorSanh\/followers","following_url":"https:\/\/api.github.com\/users\/VictorSanh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/VictorSanh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/VictorSanh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/VictorSanh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/VictorSanh\/orgs","repos_url":"https:\/\/api.github.com\/users\/VictorSanh\/repos","events_url":"https:\/\/api.github.com\/users\/VictorSanh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/VictorSanh\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["cc @yjernite @lhoestq @thomwolf "],"created_at":1606776550000,"updated_at":1606798582000,"closed_at":1606778040000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/934","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/934","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/934.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/934.patch"},"body":"small updates (corrections\/typos) to the \"add new dataset\" guide","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/934\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/933","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/933\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/933\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/933\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/933","id":753854272,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI5ODUyMTI1","number":933,"title":"Add NumerSense","user":{"login":"joeddav","id":9353833,"node_id":"MDQ6VXNlcjkzNTM4MzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9353833?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/joeddav","html_url":"https:\/\/github.com\/joeddav","followers_url":"https:\/\/api.github.com\/users\/joeddav\/followers","following_url":"https:\/\/api.github.com\/users\/joeddav\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/joeddav\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/joeddav\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/joeddav\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/joeddav\/orgs","repos_url":"https:\/\/api.github.com\/users\/joeddav\/repos","events_url":"https:\/\/api.github.com\/users\/joeddav\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/joeddav\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606775793000,"updated_at":1606854350000,"closed_at":1606852316000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/933","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/933","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/933.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/933.patch"},"body":"Adds the NumerSense dataset\r\n- Webpage\/leaderboard: https:\/\/inklab.usc.edu\/NumerSense\/\r\n- Paper: https:\/\/arxiv.org\/abs\/2005.00683\r\n- Description: NumerSense is a new numerical commonsense reasoning probing task, with a diagnostic dataset consisting of 3,145 masked-word-prediction probes. 
Basically, it's a benchmark to see whether your MLM can figure out the right number in a fill-in-the-blank task based on commonsense knowledge (a bird has **two** legs)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/933\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/932","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/932\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/932\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/932\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/932","id":753840300,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI5ODQwNjQ3","number":932,"title":"adding metooma dataset","user":{"login":"akash418","id":23264033,"node_id":"MDQ6VXNlcjIzMjY0MDMz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23264033?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/akash418","html_url":"https:\/\/github.com\/akash418","followers_url":"https:\/\/api.github.com\/users\/akash418\/followers","following_url":"https:\/\/api.github.com\/users\/akash418\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/akash418\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/akash418\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/akash418\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/akash418\/orgs","repos_url":"https:\/\/api.github.com\/users\/akash418\/repos","events_url":"https:\/\/api.github.com\/users\/akash418\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/akash418\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This PR adds the #MeToo MA dataset. It presents multi-label data points for tweets mined in the backdrop of the #MeToo movement. The dataset includes data points in the form of Tweet ids and appropriate labels. Please refer to the accompanying paper for detailed information regarding annotation, collection, and guidelines. 
\r\n\r\nPaper: https:\/\/ojs.aaai.org\/index.php\/ICWSM\/article\/view\/7292\r\nDataset Link: https:\/\/dataverse.harvard.edu\/dataset.xhtml?persistentId=doi:10.7910\/DVN\/JN4EYU\r\n\r\nYAML tags:\r\nannotations_creators:\r\n- expert-generated\r\nlanguage_creators:\r\n- found\r\nlanguages:\r\n- en\r\nmultilinguality:\r\n- monolingual\r\nsize_categories:\r\n- 1K<n<10K\r\nsource_datasets:\r\n- original\r\ntask_categories:\r\n- text-classification\r\n- text-retrieval\r\ntask_ids:\r\n- multi-class-classification\r\n- multi-label-classification\r\n\r\n# Dataset Card for #MeTooMA dataset\r\n\r\n## Table of Contents\r\n- [Dataset Description](#dataset-description)\r\n - [Dataset Summary](#dataset-summary)\r\n - [Supported Tasks](#supported-tasks-and-leaderboards)\r\n - [Languages](#languages)\r\n- [Dataset Structure](#dataset-structure)\r\n - [Data Instances](#data-instances)\r\n - [Data Fields](#data-fields)\r\n - [Data Splits](#data-splits)\r\n- [Dataset Creation](#dataset-creation)\r\n - [Curation Rationale](#curation-rationale)\r\n - [Source Data](#source-data)\r\n - [Annotations](#annotations)\r\n - [Personal and Sensitive Information](#personal-and-sensitive-information)\r\n- [Considerations for Using the Data](#considerations-for-using-the-data)\r\n - [Social Impact of Dataset](#social-impact-of-dataset)\r\n - [Discussion of Biases](#discussion-of-biases)\r\n - [Other Known Limitations](#other-known-limitations)\r\n- [Additional Information](#additional-information)\r\n - [Dataset Curators](#dataset-curators)\r\n - [Licensing Information](#licensing-information)\r\n - [Citation Information](#citation-information)\r\n\r\n## Dataset Description\r\n\r\n- **Homepage:** https:\/\/dataverse.harvard.edu\/dataset.xhtml?persistentId=doi:10.7910\/DVN\/JN4EYU\r\n- **Paper:** https:\/\/ojs.aaai.org\/\/index.php\/ICWSM\/article\/view\/7292\r\n- **Point of Contact:** https:\/\/github.com\/midas-research\/MeTooMA\r\n\r\n\r\n### Dataset Summary\r\n\r\n- The dataset consists of tweets belonging to the #MeToo movement on Twitter, labelled into different categories.\r\n- This dataset includes more data points and has more labels than any of the previous datasets that contain social media\r\nposts about sexual abuse disclosures. 
Please refer to the Related Datasets section of the publication for detailed information about this.\r\n- Due to Twitter's development policies, the authors provide only the tweet IDs and corresponding labels;\r\nother data can be fetched via the Twitter API.\r\n- The data has been labelled by experts, with the majority label taken into account for deciding the final label.\r\n- The authors provide these labels for each of the tweets.\r\n - Relevance\r\n - Directed Hate\r\n - Generalized Hate\r\n - Sarcasm\r\n - Allegation\r\n - Justification\r\n - Refutation\r\n - Support\r\n - Oppose\r\n- The definitions for each task\/label are in the main publication.\r\n- Please refer to the accompanying paper https:\/\/aaai.org\/ojs\/index.php\/ICWSM\/article\/view\/7292 for statistical analysis on the textual data\r\nextracted from this dataset.\r\n- The language of all the tweets in this dataset is English.\r\n- Time period: October 2018 - December 2018\r\n- Suggested Use Cases of this dataset:\r\n - Evaluating usage of linguistic acts such as hate speech and sarcasm in the context of public sexual abuse disclosures.\r\n - Extracting actionable insights and virtual dynamics of gender roles in sexual abuse revelations.\r\n - Identifying how influential people were portrayed on public platforms in the\r\n events of mass social movements.\r\n - Polarization analysis based on graph simulations of social nodes of users involved\r\n in the #MeToo movement.\r\n\r\n\r\n### Supported Tasks and Leaderboards\r\n\r\nMulti-Label and Multi-Class Classification\r\n\r\n### Languages\r\n\r\nEnglish\r\n\r\n## Dataset Structure\r\n- The dataset is structured into CSV format with Tweet ID and accompanying labels.\r\n- Train and Test sets are split into respective files.\r\n\r\n### Data Instances\r\n\r\nTweet ID and the appropriate labels\r\n\r\n### Data Fields\r\n\r\nTweet ID and appropriate labels (binary label applicable for a data point) and multiple labels for each Tweet ID\r\n\r\n### Data Splits\r\n\r\n- Train: 7979\r\n- Test: 1996\r\n\r\n## Dataset Creation\r\n\r\n### Curation Rationale\r\n\r\n- Twitter was the major source of all the public disclosures of sexual abuse incidents during the #MeToo movement.\r\n- People expressed their opinions over issues which were previously missing from the social media space.\r\n- This provides an option to study the linguistic behaviours of social media users in an informal setting,\r\ntherefore the authors decided to curate this annotated dataset.\r\n- The authors expect this dataset would be of great interest and use to both computational and socio-linguists.\r\n- For computational linguists, it provides an opportunity to model three new complex dialogue acts (allegation, refutation, and justification) and also to study how these acts interact with some of the other linguistic components like stance, hate, and sarcasm. 
For socio-linguists, it provides an opportunity to explore how a movement manifests in social media.\r\n\r\n\r\n### Source Data\r\n- The source of all the data points in this dataset is Twitter.\r\n\r\n#### Initial Data Collection and Normalization\r\n\r\n- All the tweets are mined from Twitter with initial search parameters identified using keywords from the #MeToo movement.\r\n- Redundant keywords were removed based on manual inspection.\r\n- Public streaming APIs of Twitter were used for querying with the selected keywords.\r\n- Based on text de-duplication and cosine similarity score, the set of tweets was pruned.\r\n- Non-English tweets were removed.\r\n- The final set was labelled by experts with the majority label taken into account for deciding the final label.\r\n- Please refer to this paper for detailed information: https:\/\/ojs.aaai.org\/\/index.php\/ICWSM\/article\/view\/7292\r\n\r\n#### Who are the source language producers?\r\n\r\nPlease refer to this paper for detailed information: https:\/\/ojs.aaai.org\/\/index.php\/ICWSM\/article\/view\/7292\r\n\r\n### Annotations\r\n\r\n#### Annotation process\r\n\r\n- The authors chose against crowdsourcing for labeling this dataset due to its highly sensitive nature.\r\n- The annotators are domain experts having degrees in advanced clinical psychology and gender studies.\r\n- They were provided a guidelines document with instructions about each task and its definitions, labels and examples.\r\n- They studied the document, worked through a few examples to get used to this annotation task.\r\n- They also provided feedback for improving the class definitions.\r\n- The annotation process is not mutually exclusive, implying that the presence of one label does not mean the\r\nabsence of the other one.\r\n\r\n\r\n#### Who are the annotators?\r\n\r\n- The annotators are domain experts having a degree in clinical psychology and gender studies.\r\n- Please refer to the accompanying paper for a detailed annotation process.\r\n\r\n### Personal and Sensitive Information\r\n\r\n- Considering Twitter's policy for distribution of data, only Tweet ID and applicable labels are shared for public use.\r\n- It is highly encouraged to use this dataset for scientific purposes only.\r\n- This dataset collection completely follows the Twitter-mandated guidelines for distribution and usage.\r\n\r\n## Considerations for Using the Data\r\n\r\n### Social Impact of Dataset\r\n\r\n- The authors of this dataset do not intend to conduct a population-centric analysis of the #MeToo movement on Twitter.\r\n- The authors acknowledge that findings from this dataset cannot be used as-is for any direct social intervention; these\r\nshould be used to assist already existing human intervention tools and therapies.\r\n- Enough care has been taken to ensure that this work does not come off as trying to target a specific person for their\r\npersonal stance on issues pertaining to the #MeToo movement.\r\n- The authors of this work do not aim to vilify anyone accused in the #MeToo movement in any manner.\r\n- Please refer to the ethics and discussion section of the mentioned publication for appropriate sharing of this dataset\r\nand social impact of this work.\r\n\r\n\r\n### Discussion of Biases\r\n\r\n- The #MeToo movement acted as a catalyst for implementing social policy changes to benefit the members of the\r\ncommunity affected by sexual abuse.\r\n- Any work undertaken on this dataset should aim to minimize the bias against minority groups which\r\nmight be amplified in cases of sudden outbursts of 
public reactions over sensitive social media discussions.\r\n\r\n### Other Known Limitations\r\n\r\n- Considering privacy concerns, social media practitioners should be cautious about making automated interventions\r\nto aid the victims of sexual abuse, as some people might prefer not to disclose their views.\r\n- Concerned social media users might also retract their social information if they find out that their\r\ninformation is being used for computational purposes; hence it is important to seek individual consent\r\nbefore trying to profile authors involved in online discussions to uphold personal privacy.\r\n\r\n## Additional Information\r\n\r\nPlease refer to this link: https:\/\/dataverse.harvard.edu\/dataset.xhtml?persistentId=doi:10.7910\/DVN\/JN4EYU\r\n\r\n### Dataset Curators\r\n\r\n- If you use the corpus in a product or application, then please credit the authors\r\nand [Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi](http:\/\/midas.iiitd.edu.in) appropriately.\r\nAlso, if you send us an email, we will be thrilled to know about how you have used the corpus.\r\n- If interested in commercial use of the corpus, send email to midas@iiitd.ac.in.\r\n- Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India\r\ndisclaims any responsibility for the use of the corpus and does not provide technical support.\r\nHowever, the contact listed above will be happy to respond to queries and clarifications.\r\n- Please feel free to send us an email:\r\n - with feedback regarding the corpus.\r\n - with information on how you have used the corpus.\r\n - if interested in having us analyze your social media data.\r\n - if interested in a collaborative research project.\r\n\r\n### Licensing Information\r\n\r\n[More Information Needed]\r\n\r\n### Citation Information\r\n\r\nPlease cite the following publication if you make use of the dataset: https:\/\/ojs.aaai.org\/index.php\/ICWSM\/article\/view\/7292\r\n\r\n```\r\n\r\n@article{Gautam_Mathur_Gosangi_Mahata_Sawhney_Shah_2020, title={#MeTooMA: Multi-Aspect Annotations of Tweets Related to the MeToo Movement}, volume={14}, url={https:\/\/aaai.org\/ojs\/index.php\/ICWSM\/article\/view\/7292}, abstractNote={<p>In this paper, we present a dataset containing 9,973 tweets related to the MeToo movement that were manually annotated for five different linguistic aspects: relevance, stance, hate speech, sarcasm, and dialogue acts. We present a detailed account of the data collection and annotation processes. The annotations have a very high inter-annotator agreement (0.79 to 0.93 k-alpha) due to the domain expertise of the annotators and clear annotation instructions. We analyze the data in terms of geographical distribution, label correlations, and keywords. Lastly, we present some potential use cases of this dataset. We expect this dataset would be of great interest to psycholinguists, socio-linguists, and computational linguists to study the discursive space of digitally mobilized social movements on sensitive issues like sexual harassment.<\/p>}, number={1}, journal={Proceedings of the International AAAI Conference on Web and Social Media}, author={Gautam, Akash and Mathur, Puneet and Gosangi, Rakesh and Mahata, Debanjan and Sawhney, Ramit and Shah, Rajiv Ratn}, year={2020}, month={May}, pages={209-216} }\r\n\r\n```\r\n\r\n\r\n\r\n","Hi @lhoestq, I have resolved all the comments you have raised. Can you review the PR again? 
However, I do need assistance on how to remove other files that came along in my PR. Should I manually delete unwanted files from the PR raised?","I am closing this PR, @lhoestq please review this PR instead https:\/\/github.com\/huggingface\/datasets\/pull\/975 where I have removed the unwanted files of other datasets and addressed each of your points. "],"created_at":1606774189000,"updated_at":1606869474000,"closed_at":1606869474000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/932","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/932","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/932.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/932.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/932\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/931","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/931\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/931\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/931\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/931","id":753818193,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI5ODIzMDYz","number":931,"title":"[WIP] complex_webqa - Error zipfile.BadZipFile: Bad CRC-32","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606771821000,"updated_at":1606771821000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/931","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/931","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/931.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/931.patch"},"body":"Have a string `zipfile.BadZipFile: Bad CRC-32 for file 'web_snippets_train.json'` error when downloading the largest file from dropbox: `https:\/\/www.dropbox.com\/sh\/7pkwkrfnwqhsnpo\/AABVENv_Q9rFtnM61liyzO0La\/web_snippets_train.json.zip?dl=1`\r\n\r\nDidn't managed to see how to solve that.\r\n\r\nPutting aside for 
now.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/931\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/930","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/930\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/930\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/930\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/930","id":753801204,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI5ODA5MzM1","number":930,"title":"Lambada","user":{"login":"VictorSanh","id":16107619,"node_id":"MDQ6VXNlcjE2MTA3NjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16107619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/VictorSanh","html_url":"https:\/\/github.com\/VictorSanh","followers_url":"https:\/\/api.github.com\/users\/VictorSanh\/followers","following_url":"https:\/\/api.github.com\/users\/VictorSanh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/VictorSanh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/VictorSanh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/VictorSanh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/VictorSanh\/orgs","repos_url":"https:\/\/api.github.com\/users\/VictorSanh\/repos","events_url":"https:\/\/api.github.com\/users\/VictorSanh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/VictorSanh\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606770153000,"updated_at":1606783032000,"closed_at":1606783031000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/930","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/930","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/930.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/930.patch"},"body":"Added LAMBADA dataset.\r\n\r\nA couple of points of attention (mostly because I am not sure)\r\n- The training data are compressed in a .tar file inside the main tar.gz file. 
I had to manually un-tar the training file to access the examples.\r\n- The dev and test splits don't have the `category` field so I put `None` by default.\r\n\r\nHappy to make changes if it doesn't respect the guidelines!\r\n\r\nVictor","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/930\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/929","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/929\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/929\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/929\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/929","id":753737794,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI5NzU4NTU3","number":929,"title":"Add weibo NER dataset","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606764167000,"updated_at":1607002615000,"closed_at":1607002614000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/929","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/929","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/929.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/929.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/929\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/928","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/928\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/928\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/928\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/928","id":753722324,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI5NzQ1OTIx","number":928,"title":"Add the Multilingual Amazon Reviews 
Corpus","user":{"login":"joeddav","id":9353833,"node_id":"MDQ6VXNlcjkzNTM4MzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9353833?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/joeddav","html_url":"https:\/\/github.com\/joeddav","followers_url":"https:\/\/api.github.com\/users\/joeddav\/followers","following_url":"https:\/\/api.github.com\/users\/joeddav\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/joeddav\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/joeddav\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/joeddav\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/joeddav\/orgs","repos_url":"https:\/\/api.github.com\/users\/joeddav\/repos","events_url":"https:\/\/api.github.com\/users\/joeddav\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/joeddav\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606762686000,"updated_at":1606838670000,"closed_at":1606838667000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/928","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/928","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/928.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/928.patch"},"body":"- **Name:** Multilingual Amazon Reviews Corpus* (`amazon_reviews_multi`)\r\n- **Description:** A collection of Amazon reviews in English, Japanese, German, French, Spanish and Chinese.\r\n- **Paper:** https:\/\/arxiv.org\/abs\/2010.02573\r\n\r\n### Checkbox\r\n\r\n- [x] Create the dataset script `\/datasets\/my_dataset\/my_dataset.py` using the template\r\n- [x] Fill the `_DESCRIPTION` and `_CITATION` variables\r\n- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`\r\n- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.\r\n- [x] Generate the metadata file `dataset_infos.json` for all configurations\r\n- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)\r\n- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs\r\n- [x] Both tests for the real data and the dummy data pass.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/928\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/927","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/927\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/927\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/927\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/927","id":753679020,"node_id":"MDU6SXNzdWU3NTM2NzkwMjA=","number":927,"title":"Hello","user":{"login":"k125-ak","id":75259546,"node_id":"MDQ6VXNlcjc1MjU5NTQ2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/75259546?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/k125-ak","html_url":"https:\/\/github.com\/k125-ak","followers_url":"https:\/\/api.github.com\/users\/k125-ak\/followers","following_url":"https:\/\/api.github.com\/users\/k125-ak\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/k125-ak\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/k125-ak\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/k125-ak\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/k125-ak\/orgs","repos_url":"https:\/\/api.github.com\/users\/k125-ak\/repos","events_url":"https:\/\/api.github.com\/users\/k125-ak\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/k125-ak\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606758605000,"updated_at":1606758630000,"closed_at":1606758630000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/927\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/926","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/926\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/926\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/926\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/926","id":753676069,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI5NzA4MTcy","number":926,"title":"add 
inquisitive","user":{"login":"patil-suraj","id":27137566,"node_id":"MDQ6VXNlcjI3MTM3NTY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27137566?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patil-suraj","html_url":"https:\/\/github.com\/patil-suraj","followers_url":"https:\/\/api.github.com\/users\/patil-suraj\/followers","following_url":"https:\/\/api.github.com\/users\/patil-suraj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patil-suraj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patil-suraj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patil-suraj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patil-suraj\/orgs","repos_url":"https:\/\/api.github.com\/users\/patil-suraj\/repos","events_url":"https:\/\/api.github.com\/users\/patil-suraj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patil-suraj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["`dummy_data` right now contains all article files, keeping only the required articles for dummy data fails the dummy data test.\r\nAny idea ?","> `dummy_data` right now contains all article files, keeping only the required articles for dummy data fails the dummy data test.\r\n> Any idea ?\r\n\r\nWe should definitely find a way to make it work with only a few articles.\r\n\r\nIf it doesn't work right now for dummy data, I guess it's because it tries to load every single article file ?\r\n\r\nIf so, then maybe you can use `os.listdir` method to first check all the data files available in the path where the `articles.tgz` file is extracted. Then you can simply iter through the data files and depending on their ID, include them in the train or test set. With this method you should be able to have only a few articles files per split in the dummy data. Does that make sense ?","fixed! 
so the issue was, `articles_ids` were prepared based on the number of files in articles dir, so for dummy data questions it was not able to load some articles due to incorrect ids and the test was failing"],"created_at":1606758322000,"updated_at":1606916722000,"closed_at":1606916413000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/926","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/926","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/926.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/926.patch"},"body":"Adding inquisitive qg dataset\r\n\r\nMore info: https:\/\/github.com\/wjko2\/INQUISITIVE","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/926\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/925","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/925\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/925\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/925\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/925","id":753672661,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI5NzA1MzM4","number":925,"title":"Add Turku NLP Corpus for Finnish NER","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> Did you generate the dummy data with the cli or manually ?\r\n\r\nIt was generated by the cli. 
Do you want me to make it smaller keep it like this?\r\n\r\n"],"created_at":1606758019000,"updated_at":1607004431000,"closed_at":1607004430000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/925","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/925","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/925.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/925.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/925\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/924","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/924\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/924\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/924\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/924","id":753631951,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI5NjcyMzgw","number":924,"title":"Add DART","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["LGTM!"],"created_at":1606754557000,"updated_at":1606878822000,"closed_at":1606878821000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/924","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/924","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/924.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/924.patch"},"body":"- **Name:** *DART*\r\n- **Description:** *DART is a large dataset for open-domain structured data record to text generation.*\r\n- **Paper:** *https:\/\/arxiv.org\/abs\/2007.02871*\r\n- **Data:** *https:\/\/github.com\/Yale-LILY\/dart#leaderboard*\r\n\r\n### Checkbox\r\n\r\n- [x] Create the dataset script `\/datasets\/my_dataset\/my_dataset.py` using the template\r\n- [x] Fill the `_DESCRIPTION` and `_CITATION` variables\r\n- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`\r\n- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.\r\n- [x] Generate the metadata file 
`dataset_infos.json` for all configurations\r\n- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)\r\n- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs\r\n- [x] Both tests for the real data and the dummy data pass.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/924\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/923","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/923\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/923\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/923\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/923","id":753569220,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI5NjIyMDQx","number":923,"title":"Add CC-100 dataset","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892913,"node_id":"MDU6TGFiZWwxOTM1ODkyOTEz","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/wontfix","name":"wontfix","color":"ffffff","default":true,"description":"This will not be worked on"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hello @lhoestq, I would like just to ask you if it is OK that I include this feature 9f32ba1 in this PR or you would prefer to have it in a separate one.\r\n\r\nI was wondering whether include also a test, but I did not find any test for the other file formats...","Hi ! Sure that would be valuable to support .xz files. Feel free to open a separate PR for this.\r\nAnd feel free to create the first test case for extracting compressed files if you have some inspiration (maybe create test_file_utils.py ?). We can still spend more time on tests next week when the sprint is over though so don't spend too much time on it.","@lhoestq, DONE! ;) See PR #950.","Thanks for adding support for `.xz` files :)\r\n\r\nFeel free to rebase from master to include it in your PR","@lhoestq DONE; I have merged instead, to avoid changing the history of my public PR ;)","Hi @lhoestq, I would need that you generate the dataset_infos.json and the dummy data for this dataset with a bigger computer. 
Sorry, but my laptop did not succeed...","Thanks for your work @albertvillanova \r\nWe'll definitely look into it after this sprint :)","Looks like #1456 added CC100 already.\r\nThe difference with your approach is that this implementation uses the `BuilderConfig` parameters to allow the creation of custom configs for all the languages, without having to specify them in the `BUILDER_CONFIGS` class attribute.\r\nFor example even if the dataset doesn't have a config for english already, you can still load the english CC100 with\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"cc100\", lang=\"en\")\r\n```","@lhoestq, oops!! I remember having assigned this dataset to me in the Google sheet, besides having mentioned the corresponding issue in the Pull Request... Nevermind! :)","Yes indeed I can see that...\r\nSorry for noticing that only now \r\n\r\nThe code of the other PR ended up being pretty close to yours though\r\nIf you want to add more details to the cc100 dataset card or in the script feel to do so, any addition is welcome"],"created_at":1606749802000,"updated_at":1618925657000,"closed_at":1618925657000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/923","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/923","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/923.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/923.patch"},"body":"Add CC-100.\r\n\r\nClose #773 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/923\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/922","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/922\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/922\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/922\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/922","id":753559130,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI5NjEzOTA4","number":922,"title":"Add XOR QA Dataset","user":{"login":"sumanthd17","id":28291870,"node_id":"MDQ6VXNlcjI4MjkxODcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28291870?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sumanthd17","html_url":"https:\/\/github.com\/sumanthd17","followers_url":"https:\/\/api.github.com\/users\/sumanthd17\/followers","following_url":"https:\/\/api.github.com\/users\/sumanthd17\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sumanthd17\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sumanthd17\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sumanthd17\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sumanthd17\/orgs","repos_url":"https:\/\/api.github.com\/users\/sumanthd17\/repos","events_url":"https:\/\/api.github.com\/users\/sumanthd17\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sumanthd17\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @sumanthd17 \r\n\r\nLooks like a good start! 
You will also need to add a Dataset card, following the instructions given [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md#manually-tag-the-dataset-and-write-the-dataset-card)","I followed the instructions mentioned there but my dataset isn't showing up in the dropdown list. Am I missing something here? @yjernite ","> I followed the instructions mentioned there but my dataset isn't showing up in the dropdown list. Am I missing something here? @yjernite\r\n\r\nThe best way is to run the tagging app locally and provide it the location to the `dataset_infos.json` after you've run the CLI:\r\nhttps:\/\/github.com\/huggingface\/datasets-tagging\r\n","This is a really good data card!!\r\n\r\nSmall changes to make it even better:\r\n- Tags: the dataset has both \"original\" data and data that is \"extended\" from a source dataset: TydiQA - you should choose both options in the tagging apps\r\n- The language and annotation creator tags are off: the language here is the questions: I understand it's a mix of crowd-sourced and expert-generated? Is there any machine translation involved? The annotations are the span selections: is that crowd-sourced?\r\n- Personal and sensitive information: there should be a statement there, even if only to say that none could be found or that it only mentions public figures"],"created_at":1606749054000,"updated_at":1606878741000,"closed_at":1606878741000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/922","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/922","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/922.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/922.patch"},"body":"Added XOR Question Answering Dataset. 
The link to the dataset can be found [here](https:\/\/nlp.cs.washington.edu\/xorqa\/)\r\n\r\n- [x] Followed the instructions in CONTRIBUTING.md\r\n- [x] Ran the tests successfully\r\n- [x] Created the dummy data","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/922\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/920","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/920\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/920\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/920\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/920","id":753445747,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI5NTIzMTgz","number":920,"title":"add dream dataset","user":{"login":"patil-suraj","id":27137566,"node_id":"MDQ6VXNlcjI3MTM3NTY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27137566?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patil-suraj","html_url":"https:\/\/github.com\/patil-suraj","followers_url":"https:\/\/api.github.com\/users\/patil-suraj\/followers","following_url":"https:\/\/api.github.com\/users\/patil-suraj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patil-suraj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patil-suraj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patil-suraj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patil-suraj\/orgs","repos_url":"https:\/\/api.github.com\/users\/patil-suraj\/repos","events_url":"https:\/\/api.github.com\/users\/patil-suraj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patil-suraj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> Awesome good job !\r\n> \r\n> Could you also add a dataset card using the template guide here : https:\/\/github.com\/huggingface\/datasets\/blob\/master\/templates\/README_guide.md\r\n> If you can't fill some fields then just leave `[N\/A]`\r\n\r\nQuick amendment: `[N\/A]` is for fields that are not relevant: if you can't find the information just leave `[More Information Needed]`","@lhoestq since datset cards are optional for this sprint I'll add those later. Good for merge.","Indeed we only require the tags to be added now (the yaml part at the top of the dataset card).\r\nCould you add them please ?\r\nYou can find more infos here : https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card","@lhoestq added tags, I'll fill rest of the info after current sprint :)","The tests are failing tests for other datasets, not this one.","@lhoestq could you tell me why these tests are failing, they don't seem related to this PR. 
"],"created_at":1606740014000,"updated_at":1607013912000,"closed_at":1606923552000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/920","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/920","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/920.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/920.patch"},"body":"Adding Dream: a Dataset and for Dialogue-Based Reading Comprehension\r\n\r\nMore details:\r\nhttps:\/\/dataset.org\/dream\/\r\nhttps:\/\/github.com\/nlpdata\/dream","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/920\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/919","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/919\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/919\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/919\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/919","id":753434472,"node_id":"MDU6SXNzdWU3NTM0MzQ0NzI=","number":919,"title":"wrong length with datasets ","user":{"login":"rabeehk","id":6278280,"node_id":"MDQ6VXNlcjYyNzgyODA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6278280?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rabeehk","html_url":"https:\/\/github.com\/rabeehk","followers_url":"https:\/\/api.github.com\/users\/rabeehk\/followers","following_url":"https:\/\/api.github.com\/users\/rabeehk\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rabeehk\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rabeehk\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rabeehk\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rabeehk\/orgs","repos_url":"https:\/\/api.github.com\/users\/rabeehk\/repos","events_url":"https:\/\/api.github.com\/users\/rabeehk\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rabeehk\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Also, I cannot first convert it to torch format, since huggingface seq2seq_trainer codes process the datasets afterwards during datacollector function to make it optimize for TPUs. ","sorry I misunderstood length of dataset with dataloader, closed. thanks "],"created_at":1606739019000,"updated_at":1606739847000,"closed_at":1606739846000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI have a MRPC dataset which I convert it to seq2seq format, then this is of this format:\r\n\r\n`Dataset(features: {'src_texts': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 10)\r\n`\r\n\r\nI feed it to a dataloader:\r\n```\r\ndataloader = DataLoader(\r\n train_dataset,\r\n batch_size=self.args.train_batch_size,\r\n sampler=train_sampler,\r\n collate_fn=self.data_collator,\r\n drop_last=self.args.dataloader_drop_last,\r\n num_workers=self.args.dataloader_num_workers,\r\n )\r\n```\r\n\r\nnow if I type len(dataloader) this is 1, which is wrong, and this needs to be 10. could you assist me please? 
thanks \r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/919\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/918","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/918\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/918\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/918\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/918","id":753397440,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI5NDgzOTk4","number":918,"title":"Add conll2002","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606735775000,"updated_at":1606761270000,"closed_at":1606761269000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/918","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/918","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/918.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/918.patch"},"body":"Adding the Conll2002 dataset for NER.\r\n\r\nMore info here : https:\/\/www.clips.uantwerpen.be\/conll2002\/ner\/\r\n\r\n### Checkbox\r\n\r\n- [x] Create the dataset script `\/datasets\/my_dataset\/my_dataset.py` using the template\r\n- [x] Fill the `_DESCRIPTION` and `_CITATION` variables\r\n- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`\r\n- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.\r\n- [x] Generate the metadata file `dataset_infos.json` for all configurations\r\n- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)\r\n- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs\r\n- [x] Both tests for the real data and the dummy data pass.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/918\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/917","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/917\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/917\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/917\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/917","id":753391591,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI5NDc5MTIy","number":917,"title":"Addition of Concode Dataset ","user":{"login":"reshinthadithyan","id":36307201,"node_id":"MDQ6VXNlcjM2MzA3MjAx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36307201?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/reshinthadithyan","html_url":"https:\/\/github.com\/reshinthadithyan","followers_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/followers","following_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/orgs","repos_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/repos","events_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Testing command doesn't work\r\n###trace\r\n-- Docs: https:\/\/docs.pytest.org\/en\/stable\/warnings.html\r\n========================================================= short test summary info ========================================================== \r\nERROR tests\/test_dataset_common.py - absl.testing.parameterized.NoTestsError: parameterized test decorators did not generate any tests. Ma...\r\n====================================================== 2 warnings, 1 error in 54.23s ======================================================= \r\nERROR: not found: G:\\Work Related\\hf\\datasets\\tests\\test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_concode\r\n(no name 'G:\\\\Work Related\\\\hf\\\\datasets\\\\tests\\\\test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_concode' in any of [<Module test_dataset_common.py>])\r\n","Hello @lhoestq Test checks are passing in my local, but the commit fails in ci. Any idea onto why? \r\n#### Dummy Dataset Test \r\n====================================================== 1 passed, 6 warnings in 7.14s ======================================================= \r\n#### Real Dataset Test \r\n====================================================== 1 passed, 6 warnings in 25.54s ====================================================== ","Hello @lhoestq, Have a look, I've changed the file according to the reviews. Thanks!","@reshinthadithyan that's a great start! You will also need to add a Dataset card, following the instructions given [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md#manually-tag-the-dataset-and-write-the-dataset-card)","> @reshinthadithyan that's a great start! 
You will also need to add a Dataset card, following the instructions given [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md#manually-tag-the-dataset-and-write-the-dataset-card)\r\n\r\nHello @yjernite I'm facing issues in using the datasets-tagger Refer #1 in datasets-tagger. Thanks","> > @reshinthadithyan that's a great start! You will also need to add a Dataset card, following the instructions given [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md#manually-tag-the-dataset-and-write-the-dataset-card)\r\n> \r\n> Hello @yjernite I'm facing issues in using the datasets-tagger Refer #1 in datasets-tagger. Thanks\r\n\r\nHi @reshinthadithyan ! Did you try with the latest version of the tagger? What issues are you facing?\r\n\r\nWe're also relaxed the dataset requirement for now, you'll only add to add the tags :) ","Could you work on another branch when adding different datasets ?\r\nThe idea is to have one PR per dataset","Thanks ! The github diff looks all clean now :) \r\nTo fix the CI you just need to rebase from master\r\n\r\nDon't forget to add the tags of the dataset card. It's the yaml part at the top of the dataset card\r\nMore infor here : https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card\r\n\r\nThe issue you had with the tagger should be fixed now by https:\/\/github.com\/huggingface\/datasets-tagging\/pull\/5\r\n"],"created_at":1606735259000,"updated_at":1609210536000,"closed_at":1609210536000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/917","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/917","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/917.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/917.patch"},"body":"##Overview\r\nConcode Dataset contains pairs of Nl Queries and the corresponding Code.(Contextual Code Generation)\r\n\r\nReference Links\r\nPaper Link = https:\/\/arxiv.org\/pdf\/1904.09086.pdf\r\nGithub Link =https:\/\/github.com\/microsoft\/CodeXGLUE\/tree\/main\/Text-Code\/text-to-code","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/917\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/916","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/916\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/916\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/916\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/916","id":753376643,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI5NDY3MTkx","number":916,"title":"Add Swedish NER 
Corpus","user":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Yes the use of configs is optional","@abhishekkrthakur we want to keep track of the information that is and isn't in the dataset cards so we're asking everyone to use the full template :) If there is some information in there that you really can't find or don't feel qualified to add, you can just leave the `[More Information Needed]` text"],"created_at":1606733991000,"updated_at":1606878650000,"closed_at":1606878649000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/916","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/916","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/916.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/916.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/916\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/915","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/915\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/915\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/915\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/915","id":753118481,"node_id":"MDU6SXNzdWU3NTMxMTg0ODE=","number":915,"title":"Shall we change the hashing to encoding to reduce potential replicated cache 
files?","user":{"login":"zhuzilin","id":10428324,"node_id":"MDQ6VXNlcjEwNDI4MzI0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10428324?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/zhuzilin","html_url":"https:\/\/github.com\/zhuzilin","followers_url":"https:\/\/api.github.com\/users\/zhuzilin\/followers","following_url":"https:\/\/api.github.com\/users\/zhuzilin\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/zhuzilin\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/zhuzilin\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/zhuzilin\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/zhuzilin\/orgs","repos_url":"https:\/\/api.github.com\/users\/zhuzilin\/repos","events_url":"https:\/\/api.github.com\/users\/zhuzilin\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/zhuzilin\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"},{"id":2067400324,"node_id":"MDU6TGFiZWwyMDY3NDAwMzI0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/generic%20discussion","name":"generic discussion","color":"c5def5","default":false,"description":"Generic discussion on the library"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This is an interesting idea !\r\nDo you have ideas about how to approach the decoding and the normalization ?","@lhoestq\r\nI think we first need to save the transformation chain to a list in `self._fingerprint`. Then we can\r\n- decode all the current saved datasets to see if there is already one that is equivalent to the transformation we need now.\r\n- or, calculate all the possible hash value of the current chain for comparison so that we could continue to use hashing.\r\nIf we find one, we can adjust the list in `self._fingerprint` to it.\r\n\r\nAs for the transformation reordering rules, we can just start with some manual rules, like two sort on the same column should merge to one, filter and select can change orders.\r\n\r\nAnd for encoding and decoding, we can just manually specify `sort` is 0, `shuffling` is 2 and create a base-n number or use some general algorithm like `base64.urlsafe_b64encode`.\r\n\r\nBecause we are not doing lazy evaluation now, we may not be able to normalize the transformation to its minimal form. If we want to support that, we can provde a `Sequential` api and let user input a list or transformation, so that user would not use the intermediate datasets. This would look like tf.data.Dataset."],"created_at":1606708246000,"updated_at":1608786709000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi there. For now, we are using `xxhash` to hash the transformations to fingerprint and we will save a copy of the processed dataset to disk if there is a new hash value. However, there are some transformations that are idempotent or commutative to each other. I think that encoding the transformation chain as the fingerprint may help in those cases, for example, use `base64.urlsafe_b64encode`. In this way, before we want to save a new copy, we can decode the transformation chain and normalize it to prevent omit potential reuse. 
As the main targets of this project are the really large datasets that cannot be loaded entirely in memory, I believe it would save a lot of time if we can avoid some write.\r\n\r\nIf you have interest in this, I'd love to help :).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/915\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/914","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/914\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/914\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/914\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/914","id":752956106,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI5MTM2Njk3","number":914,"title":"Add list_github_datasets api for retrieving dataset name list in github repo","user":{"login":"zhuzilin","id":10428324,"node_id":"MDQ6VXNlcjEwNDI4MzI0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10428324?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/zhuzilin","html_url":"https:\/\/github.com\/zhuzilin","followers_url":"https:\/\/api.github.com\/users\/zhuzilin\/followers","following_url":"https:\/\/api.github.com\/users\/zhuzilin\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/zhuzilin\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/zhuzilin\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/zhuzilin\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/zhuzilin\/orgs","repos_url":"https:\/\/api.github.com\/users\/zhuzilin\/repos","events_url":"https:\/\/api.github.com\/users\/zhuzilin\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/zhuzilin\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["We can look into removing some of the attributes from `GET \/api\/datasets` to make it smaller\/faster, what do you think @lhoestq?","> We can look into removing some of the attributes from `GET \/api\/datasets` to make it smaller\/faster, what do you think @lhoestq?\r\n\r\nyes at least remove all the `dummy_data.zip`","`GET \/api\/datasets` should now be much faster. @zhuzilin can you check if `list_datasets` is now faster for you?","> `GET \/api\/datasets` should now be much faster. @zhuzilin can you check if `list_datasets` is now faster for you?\r\n\r\nYes, much faster! Thank you!"],"created_at":1606668135000,"updated_at":1606893676000,"closed_at":1606893676000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/914","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/914","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/914.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/914.patch"},"body":"Thank you for your great effort on unifying data processing for NLP!\r\n\r\nThis pr is trying to add a new api `list_github_datasets` in the `inspect` module. The reason for it is that the current `list_datasets` api need to access https:\/\/huggingface.co\/api\/datasets to get a large json. However, this connection can be really slow... 
(I was visiting from China) and from my own experience, most of the time `requests.get` failed to download the whole json after a long wait and will trigger fault in `r.json()`.\r\nI also noticed that the current implementation will first try to download from github, which makes me be able to smoothly run `load_dataset('squad')` in the example.\r\nTherefore, I think it would be better if we can have an api to get the list of datasets that are available on github, and it will also improve newcomers' experience (it is a little frustrating if one cannot successfully run the first function in the README example.) before we have faster source for huggingface.co.\r\n\r\nAs for the implementation, I've added a `dataset_infos.json` file under the `datasets` folder, and it has the following structure:\r\n```json\r\n {\r\n \"id\": \"aeslc\",\r\n \"folder\": \"datasets\/aeslc\",\r\n \"dataset_infos\": \"datasets\/aeslc\/dataset_infos.json\"\r\n },\r\n ...\r\n {\r\n \"id\": \"json\",\r\n \"folder\": \"datasets\/json\"\r\n },\r\n ...\r\n```\r\nThe script I used to get this file is:\r\n```python\r\nimport json\r\nimport os\r\n\r\nDATASETS_BASE_DIR = \"\/root\/datasets\" \r\nDATASET_INFOS_JSON = \"dataset_infos.json\"\r\n\r\ndatasets = []\r\nfor item in os.listdir(os.path.join(DATASETS_BASE_DIR, \"datasets\")):\r\n if os.path.isdir(os.path.join(DATASETS_BASE_DIR, \"datasets\", item)):\r\n datasets.append(item)\r\n\r\ndatasets.sort()\r\n\r\ntotal_ds_info = []\r\nfor ds in datasets:\r\n ds_dir = os.path.join(\"datasets\", ds)\r\n ds_info_dir = os.path.join(ds_dir, DATASET_INFOS_JSON)\r\n if os.path.isfile(os.path.join(DATASETS_BASE_DIR, ds_info_dir)):\r\n total_ds_info.append({\"id\": ds,\r\n \"folder\": ds_dir,\r\n \"dataset_infos\": ds_info_dir})\r\n else:\r\n total_ds_info.append({\"id\": ds,\r\n \"folder\": ds_dir})\r\n\r\nwith open(DATASET_INFOS_JSON, \"w\") as f:\r\n json.dump(total_ds_info, f)\r\n```\r\nThe new `dataset_infos.json` was saved as a formated json so that it will be easy to add new dataset.\r\n\r\nWhen calling `list_github_datasets`, the user will get the list of dataset names in this github repo and if `with_details` is set to be `True`, they can get the url of specific dataset info.\r\n\r\nThank you for your time on reviewing this pr :).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/914\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/913","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/913\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/913\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/913\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/913","id":752892020,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI5MDkyOTc3","number":913,"title":"My new dataset 
PEC","user":{"login":"zhongpeixiang","id":11826803,"node_id":"MDQ6VXNlcjExODI2ODAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11826803?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/zhongpeixiang","html_url":"https:\/\/github.com\/zhongpeixiang","followers_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/followers","following_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/orgs","repos_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/repos","events_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/zhongpeixiang\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["How to resolve these failed checks?","Thanks for adding this one :) \r\n\r\nTo fix the check_code_quality, please run `make style` with the latest version of black, isort, flake8\r\nTo fix the test_no_encoding_on_file_open, make sure to specify the encoding each time you call `open()` on a text file.\r\nFor example : `encoding=\"utf-8\"`\r\nTo fix the test_load_dataset_pec , you must add the dummy_data.zip file. It is used to test the dataset script and make sure it runs fine. To add it, please refer to the steps in https:\/\/github.com\/huggingface\/datasets\/blob\/master\/CONTRIBUTING.md#how-to-add-a-dataset\r\n\r\n","Could you also add a dataset card ? you can find a template here : https:\/\/github.com\/huggingface\/datasets\/blob\/master\/templates\/README.md\r\n\r\nThat would be awesome","> Thanks for adding this one :)\r\n> \r\n> To fix the check_code_quality, please run `make style` with the latest version of black, isort, flake8\r\n> To fix the test_no_encoding_on_file_open, make sure to specify the encoding each time you call `open()` on a text file.\r\n> For example : `encoding=\"utf-8\"`\r\n> To fix the test_load_dataset_pec , you must add the dummy_data.zip file. It is used to test the dataset script and make sure it runs fine. To add it, please refer to the steps in https:\/\/github.com\/huggingface\/datasets\/blob\/master\/CONTRIBUTING.md#how-to-add-a-dataset\r\n\r\nThank you for the detailed suggestion.\r\n\r\nI have added dummy_data but it still failed the DistributedDatasetTest check. My dataset has a central file (containing a python dict) that needs to be accessed by each data example. Is it because the central file cannot be distributed (which would lead to a partial dictionary)?\r\n\r\nSpecifically, the central file contains a dictionary of speakers with their attributes. Each data example is also associated with a speaker. As of now, I keep the central file and data files separately. If I remove the central file by appending the speaker attributes to each data example, then there would be lots of redundancy because there are lots of duplicate speakers in the data files.","The `DistributedDatasetTest` fail and the changes of this PR are not related, there was just a bug in the CI. You can ignore it","> Really cool thanks !\r\n> \r\n> Could you make the dummy files smaller ? 
For example by reducing the size of persona.txt ?\r\n> I also left a comment about the files concatenation. It would be cool to replace that with simple iterations through the different files.\r\n> \r\n> Then once this is done, you can add a dataset card using the template guide here : https:\/\/github.com\/huggingface\/datasets\/blob\/master\/templates\/README_guide.md\r\n> If some fields can't be filled, just leave `[N\/A]`\r\n\r\nSmall change: if you don't have the information for a field, please leave `[More Information Needed]` rather than `[N\/A]`\r\n\r\nThe full information can be found [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md#manually-tag-the-dataset-and-write-the-dataset-card)"],"created_at":1606648237000,"updated_at":1606819313000,"closed_at":1606819313000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/913","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/913","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/913.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/913.patch"},"body":"A new dataset PEC published in EMNLP 2020.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/913\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/911","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/911\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/911\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/911\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/911","id":752806215,"node_id":"MDU6SXNzdWU3NTI4MDYyMTU=","number":911,"title":"datasets module not found","user":{"login":"sbassam","id":15836274,"node_id":"MDQ6VXNlcjE1ODM2Mjc0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15836274?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sbassam","html_url":"https:\/\/github.com\/sbassam","followers_url":"https:\/\/api.github.com\/users\/sbassam\/followers","following_url":"https:\/\/api.github.com\/users\/sbassam\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sbassam\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sbassam\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sbassam\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sbassam\/orgs","repos_url":"https:\/\/api.github.com\/users\/sbassam\/repos","events_url":"https:\/\/api.github.com\/users\/sbassam\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sbassam\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["nvm, I'd made an assumption that the library gets installed with transformers. 
"],"created_at":1606613055000,"updated_at":1606660389000,"closed_at":1606660389000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Currently, running `from datasets import load_dataset` will throw a `ModuleNotFoundError: No module named 'datasets'` error.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/911\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/910","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/910\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/910\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/910\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/910","id":752772723,"node_id":"MDU6SXNzdWU3NTI3NzI3MjM=","number":910,"title":"Grindr meeting app web.Grindr","user":{"login":"jackin34","id":75184749,"node_id":"MDQ6VXNlcjc1MTg0NzQ5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/75184749?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jackin34","html_url":"https:\/\/github.com\/jackin34","followers_url":"https:\/\/api.github.com\/users\/jackin34\/followers","following_url":"https:\/\/api.github.com\/users\/jackin34\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jackin34\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jackin34\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jackin34\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jackin34\/orgs","repos_url":"https:\/\/api.github.com\/users\/jackin34\/repos","events_url":"https:\/\/api.github.com\/users\/jackin34\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jackin34\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606599383000,"updated_at":1606644711000,"closed_at":1606644711000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\n- **Name:** *name of the dataset*\n- **Description:** *short description of the dataset (or link to social media or blog post)*\n- **Paper:** *link to the dataset paper if available*\n- **Data:** *link to the Github repository or current dataset location*\n- **Motivation:** *what are some good reasons to have this dataset*\n\nInstructions to add a new dataset can be found [here](https:\/\/huggingface.co\/docs\/datasets\/share_dataset.html).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/910\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/909","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/909\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/909\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/909\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/909","id":752508299,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI4ODE1NDYz","number":909,"title":"Add FiNER 
dataset","user":{"login":"stefan-it","id":20651387,"node_id":"MDQ6VXNlcjIwNjUxMzg3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20651387?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stefan-it","html_url":"https:\/\/github.com\/stefan-it","followers_url":"https:\/\/api.github.com\/users\/stefan-it\/followers","following_url":"https:\/\/api.github.com\/users\/stefan-it\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stefan-it\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stefan-it\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stefan-it\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stefan-it\/orgs","repos_url":"https:\/\/api.github.com\/users\/stefan-it\/repos","events_url":"https:\/\/api.github.com\/users\/stefan-it\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stefan-it\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> That's really cool thank you !\r\n> \r\n> Could you also add a dataset card ?\r\n> You can find a template here : https:\/\/github.com\/huggingface\/datasets\/blob\/master\/templates\/README.md\r\n\r\nThe full information for adding a dataset card can be found here :) \r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md#manually-tag-the-dataset-and-write-the-dataset-card\r\n","Thanks your suggestions! I've fixed them, and currently working on the dataset card!","@yjernite and @lhoestq I will add the dataset card a bit later in a separate PR if that's ok for you!","Yes I want to re-emphasize if it was not clear that dataset cards are optional for the sprint. \r\n\r\nOnly the tags are required for merging a datasets.\r\n\r\nPlease try to enforce this rule as well @lhoestq and @yjernite ","Yes @stefan-it if you could just add the tags (the yaml part at the top of the dataset card) that'd be perfect :) ","Oh, sorry, will add them now!\r\n","Initial README file is now added :) ","the `RemoteDatasetTest ` errors in the CI are fixed on master so it's fine","merging since the CI is fixed on master"],"created_at":1606521260000,"updated_at":1607360183000,"closed_at":1607360183000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/909","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/909","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/909.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/909.patch"},"body":"Hi,\r\n\r\nthis PR adds \"A Finnish News Corpus for Named Entity Recognition\" as new `finer` dataset.\r\n\r\nThe dataset is described in [this paper](https:\/\/arxiv.org\/abs\/1908.04212). The data is publicly available in [this GitHub](https:\/\/github.com\/mpsilfve\/finer-data).\r\n\r\nNotice: they provide two testsets. 
The additional test dataset taken from Wikipedia is named as \"test_wikipedia\" split.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/909\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/908","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/908\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/908\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/908\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/908","id":752428652,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI4NzUzMjcz","number":908,"title":"Add dependency on black for tests","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Sorry, I have just seen that it was already in `QUALITY_REQUIRE`.\r\n\r\nFor some reason it did not get installed on my virtual environment..."],"created_at":1606504368000,"updated_at":1606513613000,"closed_at":1606513612000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/908","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/908","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/908.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/908.patch"},"body":"Add package 'black' as an installation requirement for tests.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/908\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/907","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/907\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/907\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/907\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/907","id":752422351,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI4NzQ4ODMx","number":907,"title":"Remove os.path.join from all 
URLs","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606503330000,"updated_at":1606690100000,"closed_at":1606690099000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/907","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/907","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/907.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/907.patch"},"body":"Remove `os.path.join` from all URLs in dataset scripts.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/907\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/906","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/906\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/906\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/906\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/906","id":752403395,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI4NzM0MDY0","number":906,"title":"Fix url with backslash in windows for blimp and 
pg19","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606499951000,"updated_at":1606501196000,"closed_at":1606501196000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/906","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/906","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/906.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/906.patch"},"body":"Following #903 I also fixed blimp and pg19 which were using the `os.path.join` to create urls\r\n\r\ncc @albertvillanova","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/906\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/905","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/905\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/905\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/905\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/905","id":752395456,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI4NzI3OTEy","number":905,"title":"Disallow backslash in urls","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Looks like the test doesn't detect all the problems fixed by #907 , 
I'll fix that","Ok found why it doesn't detect the problems fixed by #907 . That's because for all those datasets the urls are actually fine (no backslash) on windows, even if it uses `os.path.join`.\r\n\r\nThis is because of the behavior of `os.path.join` on windows when the first path ends with a slash : \r\n\r\n```python\r\nimport os\r\nos.path.join(\"https:\/\/test.com\/foo\", \"bar.txt\")\r\n# 'https:\/\/test.com\/foo\\\\bar.txt'\r\nos.path.join(\"https:\/\/test.com\/foo\/\", \"bar.txt\")\r\n# 'https:\/\/test.com\/foo\/bar.txt'\r\n```\r\n\r\nHowever even though the urls are correct, this is definitely bad practice and we should never use `os.path.join` for urls"],"created_at":1606498708000,"updated_at":1606690117000,"closed_at":1606690116000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/905","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/905","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/905.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/905.patch"},"body":"Following #903 @albertvillanova noticed that there are sometimes bad usage of `os.path.join` in datasets scripts to create URLS. However this should be avoided since it doesn't work on windows.\r\n\r\nI'm suggesting a test to make sure we that all the urls don't have backslashes in them in the datasets scripts.\r\nThe tests works by adding a callback feature to the MockDownloadManager used to test the dataset scripts. In a download callback I just make sure that the url is valid.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/905\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/904","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/904\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/904\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/904\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/904","id":752372743,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI4NzA5NTUx","number":904,"title":"Very detailed step-by-step on how to add a dataset","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Awesome! 
Thanks @lhoestq "],"created_at":1606495521000,"updated_at":1606730187000,"closed_at":1606730186000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/904","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/904","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/904.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/904.patch"},"body":"Add very detailed step-by-step instructions to add a new dataset to the library.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/904\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/903","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/903\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/903\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/903\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/903","id":752360614,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI4Njk5NDQ3","number":903,"title":"Fix URL with backslash in Windows","user":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq I was indeed working on that... to make another commit on this feature branch...","But as you prefer... nevermind! :)","Ah what do you have in mind for the tests ? I was thinking of adding a check in the MockDownloadManager used for tests based on dummy data. I'm creating a PR right now, I'd be happy to have your opinion","Indeed I was thinking of something similar: monckeypatching the HTTP request...","Therefore, if you agree, I am removing all the rest of `os.path.join`, both from the code and the docs...","If you spot other `os.path.join` for urls in dataset scripts or metrics scripts feel free to fix them.\r\nIn the library itself (\/src\/datasets) it should be fine since there are tests and a windows CI, but if you have doubts of some usage of `os.path.join` somewhere, let me know.","Alright create the test in #905 .\r\nThe windows CI is failing for all the datasets that have bad usage of `os.path.join` for urls.\r\nThere are of course the ones you fixed in this PR (thanks again !) 
but I found others as well such as pg19 and blimp.\r\nYou can check the full list by looking at the CI failures of the commit 1ce3354","I am merging this one as well as #906 that should fix all of the datasets.\r\nThen I'll rebase #905 which adds the test that checks for bad urls and make sure it' all green now"],"created_at":1606494384000,"updated_at":1606500286000,"closed_at":1606500286000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/903","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/903","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/903.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/903.patch"},"body":"In Windows, `os.path.join` generates URLs containing backslashes, when the first \"path\" does not end with a slash. \r\n\r\nIn general, `os.path.join` should be avoided to generate URLs.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/903\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/902","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/902\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/902\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/902\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/902","id":752345739,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI4Njg3NTYw","number":902,"title":"Follow cache_dir parameter to gcs downloader","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606492926000,"updated_at":1606690134000,"closed_at":1606690133000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/902","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/902","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/902.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/902.patch"},"body":"As noticed in #900 the cache_dir parameter was not followed to the downloader in the case of an already processed dataset hosted on our google storage (one of them is natural questions).\r\n\r\nFix #900 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/902\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/901","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/901\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/901\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/901\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/901","id":752233851,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI4NTk3NDU5","number":901,"title":"Addition of Nl2Bash Dataset","user":{"login":"reshinthadithyan","id":36307201,"node_id":"MDQ6VXNlcjM2MzA3MjAx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36307201?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/reshinthadithyan","html_url":"https:\/\/github.com\/reshinthadithyan","followers_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/followers","following_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/orgs","repos_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/repos","events_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/reshinthadithyan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hello, thanks. I had a talk with the dataset authors, found out that the data now is obsolete and they'll get a stable version soon. So temporality closing the PR.\r\n Although I have a question, What should _id_ be in the return statement? Should that be something like a start index (or) the type of split will do? Thanks. ","@reshinthadithyan we should hold off on this for a couple of weeks till NeurIPS concludes. The [NLC2CMD](http:\/\/nlc2cmd.us-east.mybluemix.net\/) data will be out then; which includes a cleaner version of this NL2Bash data. The older data is sort of obsolete now. ","Ah nvm you already commented \ud83d\ude06 "],"created_at":1606481635000,"updated_at":1606673365000,"closed_at":1606673331000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/901","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/901","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/901.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/901.patch"},"body":"## Overview\r\nThe NL2Bash data contains over 10,000 instances of linux shell commands and their corresponding natural language descriptions provided by experts, from the Tellina system. The dataset features 100+ commonly used shell utilities.\r\n## Footnotes\r\nThe following dataset marks the first ML on source code related Dataset in datasets module. 
It'll be really useful as a lot of the research direction involves Transformer Based Model.\r\nThanks.\r\n### Reference Links\r\n\r\n> Paper Link = https:\/\/arxiv.org\/pdf\/1802.08979.pdf \r\n> Github Link = https:\/\/github.com\/TellinaTool\/nl2bash\r\n\r\n\r\n\r\n\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/901\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/900","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/900\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/900\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/900\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/900","id":752214066,"node_id":"MDU6SXNzdWU3NTIyMTQwNjY=","number":900,"title":"datasets.load_dataset() custom chaching directory bug","user":{"login":"SapirWeissbuch","id":44585792,"node_id":"MDQ6VXNlcjQ0NTg1Nzky","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/44585792?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SapirWeissbuch","html_url":"https:\/\/github.com\/SapirWeissbuch","followers_url":"https:\/\/api.github.com\/users\/SapirWeissbuch\/followers","following_url":"https:\/\/api.github.com\/users\/SapirWeissbuch\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SapirWeissbuch\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SapirWeissbuch\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SapirWeissbuch\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SapirWeissbuch\/orgs","repos_url":"https:\/\/api.github.com\/users\/SapirWeissbuch\/repos","events_url":"https:\/\/api.github.com\/users\/SapirWeissbuch\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SapirWeissbuch\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https
:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting ! I'm looking into it."],"created_at":1606479533000,"updated_at":1606690133000,"closed_at":1606690133000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hello,\r\nI'm having issue with loading a dataset with a custom `cache_dir`. Despite specifying the output dir, it is still downloaded to \r\n `~\/.cache`.\r\n\r\n## Environment info\r\n- `datasets` version: 1.1.3\r\n- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1\r\n- Python version: 3.7.3\r\n\r\n## The code I'm running:\r\n```python\r\nimport datasets\r\nfrom pathlib import Path\r\n\r\nvalidation_dataset = datasets.load_dataset(\"natural_questions\", split=\"validation[:5%]\", cache_dir=Path(\".\/data\")) \r\n```\r\n\r\n## The output:\r\n\r\n* The dataset is downloaded to my home directory's `.cache` \r\n* A new empty directory named \"`natural_questions` is created in the specified directory `.data`\r\n* `tree data` in the shell outputs:\r\n```\r\ndata\r\n\u2514\u2500\u2500 natural_questions\r\n \u2514\u2500\u2500 default\r\n \u2514\u2500\u2500 0.0.2\r\n3 directories, 0 files\r\n```\r\n\r\nThe output:\r\n```\r\nDownloading: 8.61kB [00:00, 5.11MB\/s] \r\nDownloading: 13.6kB [00:00, 7.89MB\/s] \r\nUsing custom data configuration default \r\nDownloading and preparing dataset natural_questions\/default (download: 41.97 GiB, generated: 92.95 GiB, post-processed: Unknown size, total: 134.92 GiB) to .\/data\/natural_questions\/default\/0.0.2\/867dbbaf9137c1b8\r\n3ecb19f5eb80559e1002ea26e702c6b919cfa81a17a8c531... 
\r\nDownloading: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 13.6k\/13.6k [00:00<00:00, 1.51MB\/s] \r\nDownloading: 7%|\u2588\u2588\u2588\u258e | 6.70G\/97.4G [03:46<1:37:05, 15.6MB\/s]\r\n```\r\n\r\n## Expected behaviour:\r\nThe dataset \"Natural Questions\" should be downloaded to the directory \".\/data\"\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/900\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/899","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/899\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/899\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/899\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/899","id":752191227,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI4NTYzNzYz","number":899,"title":"Allow arrow based builder in auto dummy data generation","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606477178000,"updated_at":1606483809000,"closed_at":1606483808000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/899","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/899","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/899.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/899.patch"},"body":"Following #898 I added support for arrow based builder for the auto dummy data generator","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/899\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/898","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/898\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/898\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/898\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/898","id":752148284,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI4NTI4MDY1","number":898,"title":"Adding SQA dataset","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This dataset seems to have around 1000 configs. Therefore when creating the dummy data we end up with hundreds of MB of dummy data which we don't want to add in the repo.\r\nLet's make this PR on hold for now and find a solution after the sprint of next week","Closing in favor of #1566 "],"created_at":1606472958000,"updated_at":1608036880000,"closed_at":1608036859000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/898","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/898","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/898.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/898.patch"},"body":"As discussed in #880 \r\n\r\nSeems like automatic dummy-data generation doesn't work if the builder is a `ArrowBasedBuilder`, do you think you could take a look @lhoestq ?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/898\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/897","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/897\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/897\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/897\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/897","id":752100256,"node_id":"MDU6SXNzdWU3NTIxMDAyNTY=","number":897,"title":"Dataset viewer 
issues","user":{"login":"BramVanroy","id":2779410,"node_id":"MDQ6VXNlcjI3Nzk0MTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2779410?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/BramVanroy","html_url":"https:\/\/github.com\/BramVanroy","followers_url":"https:\/\/api.github.com\/users\/BramVanroy\/followers","following_url":"https:\/\/api.github.com\/users\/BramVanroy\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/BramVanroy\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/BramVanroy\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/BramVanroy\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/BramVanroy\/orgs","repos_url":"https:\/\/api.github.com\/users\/BramVanroy\/repos","events_url":"https:\/\/api.github.com\/users\/BramVanroy\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/BramVanroy\/received_events","type":"User","site_admin":false},"labels":[{"id":2107841032,"node_id":"MDU6TGFiZWwyMTA3ODQxMDMy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/nlp-viewer","name":"nlp-viewer","color":"94203D","default":false,"description":""}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting !\r\ncc @srush for the empty feature list issue and the encoding issue\r\ncc @julien-c maybe we can update the url and just have a redirection from the old url to the new one ?","Ok, I redirected on our side to a new url. \u26a0\ufe0f @srush: if you update the Streamlit config too to `\/datasets\/viewer`, let me know because I'll need to change our nginx config at the same time","9","\u200f\u2800\u200f\u200f\u200f\u2800\u200f\u200f\u200f\u2800 \u200f\u2800 ","\u200f\u2800\u200f\u200f\u200f\u2800\u200f\u200f\u200f\u2800 \u200f\u2800 "],"created_at":1606468474000,"updated_at":1606797915000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I was looking through the dataset viewer and I like it a lot. Version numbers, citation information, everything's there! I've spotted a few issues\/bugs though:\r\n\r\n- the URL is still under `nlp`, perhaps an alias for `datasets` can be made\r\n- when I remove a **feature** (and the feature list is empty), I get an error. 
This is probably expected, but perhaps a better error message can be shown to the user\r\n\r\n```bash\r\nIndexError: list index out of range\r\nTraceback:\r\nFile \"\/home\/sasha\/streamlit\/lib\/streamlit\/ScriptRunner.py\", line 322, in _run_script\r\n exec(code, module.__dict__)\r\nFile \"\/home\/sasha\/nlp-viewer\/run.py\", line 316, in <module>\r\n st.table(style)\r\nFile \"\/home\/sasha\/streamlit\/lib\/streamlit\/DeltaGenerator.py\", line 122, in wrapped_method\r\n return dg._enqueue_new_element_delta(marshall_element, delta_type, last_index)\r\nFile \"\/home\/sasha\/streamlit\/lib\/streamlit\/DeltaGenerator.py\", line 367, in _enqueue_new_element_delta\r\n rv = marshall_element(msg.delta.new_element)\r\nFile \"\/home\/sasha\/streamlit\/lib\/streamlit\/DeltaGenerator.py\", line 120, in marshall_element\r\n return method(dg, element, *args, **kwargs)\r\nFile \"\/home\/sasha\/streamlit\/lib\/streamlit\/DeltaGenerator.py\", line 2944, in table\r\n data_frame_proto.marshall_data_frame(data, element.table)\r\nFile \"\/home\/sasha\/streamlit\/lib\/streamlit\/elements\/data_frame_proto.py\", line 54, in marshall_data_frame\r\n _marshall_styles(proto_df.style, df, styler)\r\nFile \"\/home\/sasha\/streamlit\/lib\/streamlit\/elements\/data_frame_proto.py\", line 73, in _marshall_styles\r\n translated_style = styler._translate()\r\nFile \"\/home\/sasha\/.local\/share\/virtualenvs\/lib-ogGKnCK_\/lib\/python3.7\/site-packages\/pandas\/io\/formats\/style.py\", line 351, in _translate\r\n * (len(clabels[0]) - len(hidden_columns))\r\n```\r\n\r\n- there seems to be **an encoding issue** in the default view, the dataset examples are shown as raw monospace text, without a decent encoding. That makes it hard to read for languages that use a lot of special characters. Take for instance the [cs-en WMT19 set](https:\/\/huggingface.co\/nlp\/viewer\/?dataset=wmt19&config=cs-en). 
This problem goes away when you enable \"List view\", because then some syntax highlighter is used, and the special characters are coded correctly.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/897\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/896","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/896\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/896\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/896\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/896","id":751834265,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI4MjcyMjc0","number":896,"title":"Add template and documentation for dataset card","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606426225000,"updated_at":1606525815000,"closed_at":1606525815000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/896","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/896","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/896.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/896.patch"},"body":"This PR adds a template for dataset cards, as well as a guide to filling out the template and a completed example for the ELI5 dataset, building on the work of @mcmillanmajora \r\n\r\nNew pull requests adding datasets should now have a README.md file which serves both to hold the tags we will have to index the datasets and as a data statement.\r\n\r\nThe template is designed to be pretty extensive. 
The idea is that the person who uploads the dataset should put in all the basic information (at least the Dataset Description section) and whatever else they feel comfortable adding and leave the `[More Information Needed]` annotation everywhere else as a placeholder.\r\n\r\nWe will then work with @mcmillanmajora to involve the data authors more directly in filling out the remaining information.\r\n\r\nDirect links to:\r\n- [Documentation](https:\/\/github.com\/yjernite\/datasets\/blob\/add_dataset_card_doc\/templates\/README_guide.md)\r\n- [Empty template](https:\/\/github.com\/yjernite\/datasets\/blob\/add_dataset_card_doc\/templates\/README.md)\r\n- [ELI5 example](https:\/\/github.com\/yjernite\/datasets\/blob\/add_dataset_card_doc\/datasets\/eli5\/README.md)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/896\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/895","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/895\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/895\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/895\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/895","id":751782295,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI4MjMyMjU3","number":895,"title":"Better messages regarding split naming","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606416946000,"updated_at":1606483860000,"closed_at":1606483859000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/895","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/895","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/895.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/895.patch"},"body":"I made explicit the error message when a bad split name is used.\r\n\r\nAlso I wanted to allow the `-` symbol for split names but actually this symbol is used to name the arrow files `{dataset_name}-{dataset_split}.arrow` so we should probably keep it this way, i.e. not allowing the `-` symbol in split names. 
Moreover in the future we might want to use `{dataset_name}-{dataset_split}-{shard_id}_of_{n_shards}.arrow` and reuse the `-` symbol.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/895\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/894","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/894\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/894\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/894\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/894","id":751734905,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI4MTkzNzQy","number":894,"title":"Allow several tags sets","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Closing since we don't need to update the tags of those three datasets (for each one of them there is only one tag set)"],"created_at":1606410253000,"updated_at":1620239057000,"closed_at":1606508149000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/894","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/894","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/894.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/894.patch"},"body":"Hi !\r\n\r\nCurrently we have three dataset cards : snli, cnn_dailymail and allocine.\r\nFor each one of those datasets a set of tag is defined. The set of tags contains fields like `multilinguality`, `task_ids`, `licenses` etc.\r\n\r\nFor certain datasets like `glue` for example, there exist several configurations: `sst2`, `mnli` etc. Therefore we should define one set of tags per configuration. 
However the current format used for tags only supports one set of tags per dataset.\r\n\r\nIn this PR I propose a simple change in the yaml format used for tags to allow for several sets of tags.\r\n\r\nLet me know what you think, especially @julien-c let me know if it's good for you since it's going to be parsed by moon-landing","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/894\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/893","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/893\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/893\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/893\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/893","id":751703696,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI4MTY4NDgx","number":893,"title":"add metrec: arabic poetry dataset","user":{"login":"zaidalyafeai","id":15667714,"node_id":"MDQ6VXNlcjE1NjY3NzE0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15667714?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/zaidalyafeai","html_url":"https:\/\/github.com\/zaidalyafeai","followers_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/followers","following_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/orgs","repos_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/repos","events_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/zaidalyafeai\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq removed prints and added the dataset card. ","@lhoestq, I want to add other datasets as well. I am not sure if it is possible to do so with the same branch. ","Hi @zaidalyafeai, really excited to get more Arabic coverage in the lib, thanks for your contribution!\r\n\r\nCouple of last comments:\r\n- this PR seems to modify some files that are unrelated to your dataset. Could you rebase from master? It should take care of that.\r\n- The dataset card is a good start! Can you describe the task in a few words and add more information in the Data Structure part, including listing and describing the fields? Also, if you don't know how to fill out a paragraph, or if you have some information but think more would be beneficial, please leave `[More Information Needed]` instead of `[N\/A]`","> Hi @zaidalyafeai, really excited to get more Arabic coverage in the lib, thanks for your contribution!\r\n> \r\n> Couple of last comments:\r\n> \r\n> * this PR seems to modify some files that are unrelated to your dataset. Could you rebase from master? It should take care of that.\r\n> * The dataset card is a good start! Can you describe the task in a few words and add more information in the Data Structure part, including listing and describing the fields? 
Also, if you don't know how to fill out a paragraph, or if you have some information but think more would be beneficial, please leave `[More Information Needed]` instead of `[N\/A]`\r\n\r\nI have no idea how some other files changed. I tried to rebase and push but this created some errors. I had to run the command \r\n`git push -u --force origin add-metrec-dataset` which might cause some problems. ","Feel free to create another branch\/another PR without all the other changes","@yjernite can you explain which other files are changed because of the PR ? https:\/\/github.com\/huggingface\/datasets\/pull\/893\/files only shows files related to the dataset. ","Right ! github is nice with us today :)","Looks like this one is ready to merge, thanks @zaidalyafeai !","@lhoestq thanks for the merge. I am not a GitHub geek. I already have another dataset to add. I'm not sure how to add another given my forked repo. Do I follow the same steps with a different checkout name ?","If you've followed the instructions in here : https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md#start-by-preparing-your-environment\r\n\r\n(especially point 2. and the command `git remote add upstream ....`)\r\n\r\nThen you can try\r\n```\r\ngit checkout master\r\ngit fetch upstream\r\ngit rebase upstream\/master\r\ngit checkout -b add-<my-new-dataset-name>\r\n```"],"created_at":1606407016000,"updated_at":1606839895000,"closed_at":1606835707000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/893","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/893","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/893.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/893.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/893\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/892","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/892\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/892\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/892\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/892","id":751658262,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI4MTMxNTE1","number":892,"title":"Add a few datasets of reference in the 
documentation","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Looks good to me. Do we also support TSV in this helper (explain if it should be text or CSV) and in the dummy-data creator?","snli is basically based on tsv files (but named as .txt) and it is in the list of datasets of reference.\r\nThe dummy data creator supports tsv","merging this one.\r\nIf you think of other datasets of reference to add we can still add them later"],"created_at":1606402959000,"updated_at":1606500525000,"closed_at":1606500524000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/892","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/892","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/892.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/892.patch"},"body":"I started making a small list of various datasets of reference in the documentation.\r\nSince many datasets share a lot in common I think it's good to have a list of datasets scripts to get some inspiration from.\r\n\r\nLet me know what you think, and if you have ideas of other datasets that we may add to this list, please let me know.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/892\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/891","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/891\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/891\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/891\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/891","id":751576869,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI4MDY1MTQ3","number":891,"title":"gitignore 
.python-version","user":{"login":"patil-suraj","id":27137566,"node_id":"MDQ6VXNlcjI3MTM3NTY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27137566?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patil-suraj","html_url":"https:\/\/github.com\/patil-suraj","followers_url":"https:\/\/api.github.com\/users\/patil-suraj\/followers","following_url":"https:\/\/api.github.com\/users\/patil-suraj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patil-suraj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patil-suraj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patil-suraj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patil-suraj\/orgs","repos_url":"https:\/\/api.github.com\/users\/patil-suraj\/repos","events_url":"https:\/\/api.github.com\/users\/patil-suraj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patil-suraj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606395958000,"updated_at":1606397307000,"closed_at":1606397306000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/891","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/891","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/891.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/891.patch"},"body":"ignore `.python-version` added by `pyenv`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/891\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/890","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/890\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/890\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/890\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/890","id":751534050,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI4MDI5NjA3","number":890,"title":"Add LER","user":{"login":"JoelNiklaus","id":3775944,"node_id":"MDQ6VXNlcjM3NzU5NDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3775944?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JoelNiklaus","html_url":"https:\/\/github.com\/JoelNiklaus","followers_url":"https:\/\/api.github.com\/users\/JoelNiklaus\/followers","following_url":"https:\/\/api.github.com\/users\/JoelNiklaus\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JoelNiklaus\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JoelNiklaus\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JoelNiklaus\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JoelNiklaus\/orgs","repos_url":"https:\/\/api.github.com\/users\/JoelNiklaus\/repos","events_url":"https:\/\/api.github.com\/users\/JoelNiklaus\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JoelNiklaus\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for the comments. 
I addressed them and pushed again.\r\nWhen I run \"make quality\" I get the following error but I don't know how to resolve it or what the problem ist respectively:\r\nwould reformat \/Users\/joelniklaus\/NextCloud\/PhDJoelNiklaus\/Code\/datasets\/datasets\/ler\/ler.py\r\nOh no! \ud83d\udca5 \ud83d\udc94 \ud83d\udca5\r\n1 file would be reformatted, 257 files would be left unchanged.\r\nmake: *** [quality] Error 1\r\n","Awesome thanks :)\r\nTo automatically format the python files you can run `make style`","I did that now. But still getting the following error:\r\nblack --check --line-length 119 --target-version py36 tests src benchmarks datasets metrics\r\nAll done! \u2728 \ud83c\udf70 \u2728\r\n258 files would be left unchanged.\r\nisort --check-only tests src benchmarks datasets metrics\r\nflake8 tests src benchmarks datasets metrics\r\ndatasets\/ler\/ler.py:46:96: W291 trailing whitespace\r\ndatasets\/ler\/ler.py:47:68: W291 trailing whitespace\r\ndatasets\/ler\/ler.py:48:102: W291 trailing whitespace\r\ndatasets\/ler\/ler.py:49:112: W291 trailing whitespace\r\ndatasets\/ler\/ler.py:50:92: W291 trailing whitespace\r\ndatasets\/ler\/ler.py:51:116: W291 trailing whitespace\r\ndatasets\/ler\/ler.py:52:84: W291 trailing whitespace\r\nmake: *** [quality] Error 1\r\n\r\nHowever: When I look at the file I don't see any trailing whitespace","maybe a bug with flake8 ? could you try to update it ? which version do you have ?","This is my flake8 version: 3.7.9 (mccabe: 0.6.1, pycodestyle: 2.5.0, pyflakes: 2.1.1) CPython 3.8.5 on Darwin\r\n","Now I updated to: 3.8.4 (mccabe: 0.6.1, pycodestyle: 2.6.0, pyflakes: 2.2.0) CPython 3.8.5 on Darwin\r\n\r\nAnd now I even get additional errors:\r\nblack --check --line-length 119 --target-version py36 tests src benchmarks datasets metrics\r\nAll done! \u2728 \ud83c\udf70 \u2728\r\n258 files would be left unchanged.\r\nisort --check-only tests src benchmarks datasets metrics\r\nflake8 tests src benchmarks datasets metrics\r\ndatasets\/polyglot_ner\/polyglot_ner.py:123:64: F541 f-string is missing placeholders\r\ndatasets\/ler\/ler.py:46:96: W291 trailing whitespace\r\ndatasets\/ler\/ler.py:47:68: W291 trailing whitespace\r\ndatasets\/ler\/ler.py:48:102: W291 trailing whitespace\r\ndatasets\/ler\/ler.py:49:112: W291 trailing whitespace\r\ndatasets\/ler\/ler.py:50:92: W291 trailing whitespace\r\ndatasets\/ler\/ler.py:51:116: W291 trailing whitespace\r\ndatasets\/ler\/ler.py:52:84: W291 trailing whitespace\r\ndatasets\/math_dataset\/math_dataset.py:233:25: E741 ambiguous variable name 'l'\r\nmetrics\/coval\/coval.py:236:31: F541 f-string is missing placeholders\r\nmake: *** [quality] Error 1\r\n\r\nI do this on macOS Catalina 10.15.7 in case this matters","Code quality test now passes, thanks :) \r\n\r\nTo fix the other tests failing I think you can just rebase from master.\r\nAlso make sure that the dummy data test passes with\r\n```python\r\nRUN_SLOW=1 pytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_ler\r\n```","I will close this PR because abishek did the same better (https:\/\/github.com\/huggingface\/datasets\/pull\/944)","Sorry you had to close your PR ! It looks like this week's sprint doesn't always make it easy to see what's being added\/what's already added. \r\nThank you for contributing to the library. 
You did a great job on adding LER so feel free to add other ones that you would like to see in the library, it will be a pleasure to review"],"created_at":1606391903000,"updated_at":1606829615000,"closed_at":1606829176000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/890","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/890","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/890.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/890.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/890\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/889","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/889\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/889\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/889\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/889","id":751115691,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI3NjkwODE2","number":889,"title":"Optional per-dataset default config name","user":{"login":"joeddav","id":9353833,"node_id":"MDQ6VXNlcjkzNTM4MzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9353833?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/joeddav","html_url":"https:\/\/github.com\/joeddav","followers_url":"https:\/\/api.github.com\/users\/joeddav\/followers","following_url":"https:\/\/api.github.com\/users\/joeddav\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/joeddav\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/joeddav\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/joeddav\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/joeddav\/orgs","repos_url":"https:\/\/api.github.com\/users\/joeddav\/repos","events_url":"https:\/\/api.github.com\/users\/joeddav\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/joeddav\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I like the idea ! And the approach is right imo\r\n\r\nNote that by changing this we will have to add a way for users to get the config lists of a dataset. In the current user workflow, the user could see the list of the config when the missing config error is raised but now it won't be the case because of the default config.","Maybe let's add a test in the test_builder.py test script ?","@lhoestq Okay great, I added a test as well as two new inspect functions: `get_dataset_config_names` and `get_dataset_infos` (the latter is something I've been wanting anyway). As a quick hack, you can also just pass a random config name (e.g. an empty string) to `load_dataset` to get the config names in the error msg as before. Also added a couple paragraphs to the adding new datasets doc.\r\n\r\nI'll send a separate PR incorporating this in existing datasets so we can get this merged before our sprint on Monday.\r\n\r\nAny ideas on the failing tests? I'm having trouble making sense of it. 
**Edit**: nvm, it was master."],"created_at":1606338150000,"updated_at":1606757253000,"closed_at":1606757247000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/889","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/889","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/889.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/889.patch"},"body":"This PR adds a `DEFAULT_CONFIG_NAME` class attribute to `DatasetBuilder`. This allows a dataset to have a specified default config name when a dataset has more than one config but the user does not specify it. For example, after defining `DEFAULT_CONFIG_NAME = \"combined\"` in PolyglotNER, a user can now do the following:\r\n\r\n```python\r\nds = load_dataset(\"polyglot_ner\")\r\n```\r\nwhich is equivalent to,\r\n```python\r\nds = load_dataset(\"polyglot_ner\", \"combined\")\r\n```\r\nIn effect (for this particular dataset configuration), this means that if the user doesn't specify a language, they are given the combined dataset including all languages.\r\n\r\nSince it doesn't always make sense to have a default config, this feature is opt-in. If `DEFAULT_CONFIG_NAME` is not defined and a user does not pass a config for a dataset with multiple configs available, a ValueError is raised like usual.\r\n\r\nLet me know what you think about this approach @lhoestq @thomwolf and I'll add some documentation and define a default for some of our existing datasets.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/889\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/888","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/888\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/888\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/888\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/888","id":750944422,"node_id":"MDU6SXNzdWU3NTA5NDQ0MjI=","number":888,"title":"Nested lists are zipped unexpectedly","user":{"login":"AmitMY","id":5757359,"node_id":"MDQ6VXNlcjU3NTczNTk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5757359?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/AmitMY","html_url":"https:\/\/github.com\/AmitMY","followers_url":"https:\/\/api.github.com\/users\/AmitMY\/followers","following_url":"https:\/\/api.github.com\/users\/AmitMY\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/AmitMY\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/AmitMY\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/AmitMY\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/AmitMY\/orgs","repos_url":"https:\/\/api.github.com\/users\/AmitMY\/repos","events_url":"https:\/\/api.github.com\/users\/AmitMY\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/AmitMY\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Yes following the Tensorflow Datasets convention, objects with type `Sequence of a Dict` are actually stored as a `dictionary of lists`.\r\nSee the 
[documentation](https:\/\/huggingface.co\/docs\/datasets\/features.html?highlight=features) for more details","Thanks.\r\nThis is a bit (very) confusing, but I guess if its intended, I'll just work with it as if its how my data was originally structured :) \r\n"],"created_at":1606320466000,"updated_at":1606325439000,"closed_at":1606325439000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I might misunderstand something, but I expect that if I define:\r\n```python\r\n\"top\": datasets.features.Sequence({\r\n \"middle\": datasets.features.Sequence({\r\n \"bottom\": datasets.Value(\"int32\")\r\n })\r\n})\r\n```\r\n\r\nAnd I then create an example:\r\n```python\r\nyield 1, {\r\n \"top\": [{\r\n \"middle\": [\r\n {\"bottom\": 1},\r\n {\"bottom\": 2}\r\n ]\r\n }]\r\n}\r\n```\r\n\r\nI then load my dataset:\r\n```python\r\ntrain = load_dataset(\"my dataset\")[\"train\"]\r\n```\r\n\r\nand expect to be able to access `data[0][\"top\"][0][\"middle\"][0]`.\r\n\r\nThat is not the case. Here is `data[0]` as JSON:\r\n\r\n```json\r\n{\"top\": {\"middle\": [{\"bottom\": [1, 2]}]}}\r\n```\r\n\r\nClearly different than the thing I inputted.\r\n```json\r\n{\"top\": [{\"middle\": [{\"bottom\": 1},{\"bottom\": 2}]}]}\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/888\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/887","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/887\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/887\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/887\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/887","id":750868831,"node_id":"MDU6SXNzdWU3NTA4Njg4MzE=","number":887,"title":"pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>","user":{"login":"AmitMY","id":5757359,"node_id":"MDQ6VXNlcjU3NTczNTk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5757359?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/AmitMY","html_url":"https:\/\/github.com\/AmitMY","followers_url":"https:\/\/api.github.com\/users\/AmitMY\/followers","following_url":"https:\/\/api.github.com\/users\/AmitMY\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/AmitMY\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/AmitMY\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/AmitMY\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/AmitMY\/orgs","repos_url":"https:\/\/api.github.com\/users\/AmitMY\/repos","events_url":"https:\/\/api.github.com\/users\/AmitMY\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/AmitMY\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Yes right now `ArrayXD` can only be used as a column feature type, not a subtype.\r\nWith the current Arrow limitations I don't think we'll be able to make it work as a 
subtype, however it should be possible to allow dimensions of dynamic sizes (`Array3D(shape=(None, 137, 2), dtype=\"float32\")` for example since the [underlying arrow type](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/features.py#L236) allows dynamic sizes.\r\n\r\nFor now I'd suggest the use of nested `Sequence` types. Once we have the dynamic sizes you can update the dataset.\r\nWhat do you think ?","> Yes right now ArrayXD can only be used as a column feature type, not a subtype. \r\n\r\nMeaning it can't be nested under `Sequence`?\r\nIf so, for now I'll just make it a python list and make it with the nested `Sequence` type you suggested.","Yea unfortunately..\r\nThat's a current limitation with Arrow ExtensionTypes that can't be used in the default Arrow Array objects.\r\nWe already have an ExtensionArray that allows us to use them as column types but not for subtypes.\r\nMaybe we can extend it, I haven't experimented with that yet","Cool\r\nSo please consider this issue as a feature request for:\r\n```\r\nArray3D(shape=(None, 137, 2), dtype=\"float32\")\r\n```\r\n\r\nits a way to represent videos, poses, and other cool sequences","@lhoestq well, so sequence of sequences doesn't work either...\r\n\r\n```\r\npyarrow.lib.ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648\r\n```\r\n\r\n\r\n","Working with Arrow can be quite fun sometimes.\r\nYou can fix this issue by trying to reduce the writer batch size (same trick than the one used to reduce the RAM usage in https:\/\/github.com\/huggingface\/datasets\/issues\/741).\r\n\r\nLet me know if it works.\r\nI haven't investigated yet on https:\/\/github.com\/huggingface\/datasets\/issues\/741 since I was preparing this week's sprint to add datasets but this is in my priority list for early next week.","The batch size fix doesn't work... not for #741 and not for this dataset I'm trying (DGS corpus)\r\nLoading the DGS corpus takes 400GB of RAM, which is fine with me as my machine is large enough\r\n","Sorry it doesn't work. Will let you know once I fixed it","Hi @lhoestq , any update on dynamic sized arrays?\r\n(`Array3D(shape=(None, 137, 2), dtype=\"float32\")`)","Not yet, I've been pretty busy with the dataset sprint lately but this is something that's been asked several times already. So I'll definitely work on this as soon as I'm done with the sprint and with the RAM issue you reported.","Hi @lhoestq,\r\nAny chance you have some updates on the supporting `ArrayXD` as a subtype or support of dynamic sized arrays?\r\n\r\ne.g.:\r\n`datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype=\"float32\"))`\r\n`Array3D(shape=(None, 137, 2), dtype=\"float32\")`","Hi ! We haven't worked in this lately and it's not in our very short-term roadmap since it requires a bit a work to make it work with arrow. Though this will definitely be added at one point.","@lhoestq, thanks for the update.\r\n\r\nI actually tried to modify some piece of code to make it work. Can you please tell if I missing anything here?\r\nI think that for vast majority of cases it's enough to make first dimension of the array dynamic i.e. `shape=(None, 100, 100)`. 
For that, it's enough to modify class [ArrayExtensionArray](https:\/\/github.com\/huggingface\/datasets\/blob\/9ca24250ea44e7611c4dabd01ecf9415a7f0be6c\/src\/datasets\/features.py#L397) to output list of arrays of different sizes instead of list of arrays of same sizes (current version)\r\nBelow are my modifications of this class.\r\n\r\n```\r\nclass ArrayExtensionArray(pa.ExtensionArray):\r\n def __array__(self):\r\n zero_copy_only = _is_zero_copy_only(self.storage.type)\r\n return self.to_numpy(zero_copy_only=zero_copy_only)\r\n\r\n def __getitem__(self, i):\r\n return self.storage[i]\r\n\r\n def to_numpy(self, zero_copy_only=True):\r\n storage: pa.ListArray = self.storage\r\n size = 1\r\n for i in range(self.type.ndims):\r\n size *= self.type.shape[i]\r\n storage = storage.flatten()\r\n numpy_arr = storage.to_numpy(zero_copy_only=zero_copy_only)\r\n numpy_arr = numpy_arr.reshape(len(self), *self.type.shape)\r\n return numpy_arr\r\n\r\n def to_list_of_numpy(self, zero_copy_only=True):\r\n storage: pa.ListArray = self.storage\r\n shape = self.type.shape\r\n arrays = []\r\n for dim in range(1, self.type.ndims):\r\n assert shape[dim] is not None, f\"Support only dynamic size on first dimension. Got: {shape}\"\r\n\r\n first_dim_offsets = np.array([off.as_py() for off in storage.offsets])\r\n for i in range(len(storage)):\r\n storage_el = storage[i:i+1]\r\n first_dim = first_dim_offsets[i+1] - first_dim_offsets[i]\r\n # flatten storage\r\n for dim in range(self.type.ndims):\r\n storage_el = storage_el.flatten()\r\n\r\n numpy_arr = storage_el.to_numpy(zero_copy_only=zero_copy_only)\r\n arrays.append(numpy_arr.reshape(first_dim, *shape[1:]))\r\n\r\n return arrays\r\n\r\n def to_pylist(self):\r\n zero_copy_only = _is_zero_copy_only(self.storage.type)\r\n if self.type.shape[0] is None:\r\n return self.to_list_of_numpy(zero_copy_only=zero_copy_only)\r\n else:\r\n return self.to_numpy(zero_copy_only=zero_copy_only).tolist()\r\n```\r\n\r\nI ran few tests and it works as expected. 
Let me know what you think.","Thanks for diving into this !\r\n\r\nIndeed focusing on making the first dimensions dynamic make total sense (and users could still re-order their dimensions to match this constraint).\r\nYour code looks great :) I think it can even be extended to support several dynamic dimensions if we want to.\r\n\r\nFeel free to open a PR to include these changes, then we can update our test suite to make sure it works in all use cases.\r\nIn particular I think we might need a few tweaks to allow it to be converted to pandas (though I haven't tested yet):\r\n\r\n```python\r\nfrom datasets import Dataset, Features, Array3D\r\n\r\n# this works\r\nmatrix = [[1, 0], [0, 1]]\r\nfeatures = Features({\"a\": Array3D(dtype=\"int32\", shape=(1, 2, 2))})\r\nd = Dataset.from_dict({\"a\": [[matrix], [matrix]]})\r\nprint(d.to_pandas())\r\n\r\n# this should work as well\r\nmatrix = [[1, 0], [0, 1]]\r\nfeatures = Features({\"a\": Array3D(dtype=\"int32\", shape=(None, 2, 2))})\r\nd = Dataset.from_dict({\"a\": [[matrix], [matrix] * 2]})\r\nprint(d.to_pandas())\r\n```\r\n\r\nI'll be happy to help you on this :)"],"created_at":1606314741000,"updated_at":1631207020000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) \r\n\r\n```python\r\n def _info(self):\r\n return datasets.DatasetInfo(\r\n description=_DESCRIPTION,\r\n # This defines the different columns of the dataset and their types\r\n features=datasets.Features(\r\n {\r\n \"pose\": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype=\"float32\"))\r\n }\r\n ),\r\n homepage=_HOMEPAGE,\r\n citation=_CITATION,\r\n )\r\n def _generate_examples(self):\r\n \"\"\" Yields examples. 
\"\"\"\r\n\r\n yield 1, {\r\n \"pose\": [np.zeros(shape=(137, 2), dtype=np.float32)]\r\n }\r\n```\r\n\r\nBut this doesn't work -\r\n> pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/887\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/886","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/886\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/886\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/886\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/886","id":750829314,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI3NDU1MDU5","number":886,"title":"Fix wikipedia custom config","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I think this issue is still not resolve yet. Please check my comment in the following issue, thanks.\r\n[#577](https:\/\/github.com\/huggingface\/datasets\/issues\/577#issuecomment-868122769)"],"created_at":1606311852000,"updated_at":1624598656000,"closed_at":1606318933000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/886","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/886","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/886.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/886.patch"},"body":"It should be possible to use the wikipedia dataset with any `language` and `date`.\r\nHowever it was not working as noticed in #784 . 
Indeed the custom wikipedia configurations were not enabled for some reason.\r\n\r\nI fixed that and was able to run \r\n```python\r\nfrom datasets import load_dataset\r\nload_dataset(\".\/datasets\/wikipedia\", language=\"zh\", date=\"20201120\", beam_runner='DirectRunner')\r\n```\r\n\r\ncc @stvhuang @SamuelCahyawijaya\r\n\r\nFix #784","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/886\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/885","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/885\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/885\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/885\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/885","id":750789052,"node_id":"MDU6SXNzdWU3NTA3ODkwNTI=","number":885,"title":"Very slow cold-start","user":{"login":"AmitMY","id":5757359,"node_id":"MDQ6VXNlcjU3NTczNTk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5757359?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/AmitMY","html_url":"https:\/\/github.com\/AmitMY","followers_url":"https:\/\/api.github.com\/users\/AmitMY\/followers","following_url":"https:\/\/api.github.com\/users\/AmitMY\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/AmitMY\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/AmitMY\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/AmitMY\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/AmitMY\/orgs","repos_url":"https:\/\/api.github.com\/users\/AmitMY\/repos","events_url":"https:\/\/api.github.com\/users\/AmitMY\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/AmitMY\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Good point!","Yes indeed. 
We can probably improve that by using lazy imports","#1690 added fast start-up of the library "],"created_at":1606308478000,"updated_at":1610537485000,"closed_at":1610537485000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\nI expect when importing `datasets` that nothing major happens in the background, and so the import should be insignificant.\r\nWhen I load a metric, or a dataset, its fine that it takes time.\r\n\r\nThe following ranges from 3 to 9 seconds:\r\n```\r\npython -m timeit -n 1 -r 1 'from datasets import load_dataset'\r\n```\r\n\r\nedit:\r\nsorry for the mis-tag, not sure how I added it.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/885\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/884","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/884\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/884\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/884\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/884","id":749862034,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI2NjA5MDc1","number":884,"title":"Auto generate dummy data","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I took your comments into account.\r\nAlso now after compressing the dummy_data.zip file it runs a dummy data test (=make sure each split has at least 1 example using the dummy data)","I just tested the tool with some datasets and found out that it's not working for datasets that download files using `download_and_extract(file_url)` (where file_url is a `str`). That's because in that case the dummy_data.zip is not a folder but a single zipped file.\r\n\r\nI think we have to fix that or we can have unexpected behavior when a scripts calls `download_and_extract(file_url)` several times, since it would always point to the same dummy data file.\r\n\r\nSo I decided to change that to have a folder containing the dummy files instead but it breaks around 90 tests so I need to update 90 dummy data files to follow this scheme. I'll probably fix them tomorrow morning.\r\n\r\nWhat do you guys think ? 
Also cc @patrickvonplaten to make sure I understand things correctly","Ok I changed to use the dummy_data.zip content to be a folder even for single url calls to `dl_manager.download_and_extract`. Therefore the automatic dummy data generation tool works for most datasets now.\r\n\r\nTo avoid having to change all the old dummy_data.zip files I added backward compatiblity. \r\n\r\nThe only test failing is `tests\/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_xcopa`\r\nIt is expected to fail since I had modify its dummy data structure that was wrong. It was causing issue with backward compatibility. It will be fixed as soon as this PR is merged"],"created_at":1606235494000,"updated_at":1606400327000,"closed_at":1606400326000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/884","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/884","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/884.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/884.patch"},"body":"When adding a new dataset to the library, dummy data creation can take some time.\r\nTo make things easier I added a command line tool that automatically generates dummy data when possible.\r\n\r\nThe tool only supports certain data files types: txt, csv, tsv, jsonl, json and xml.\r\n\r\nHere are some examples:\r\n```\r\npython datasets-cli dummy_data .\/datasets\/snli --auto_generate\r\npython datasets-cli dummy_data .\/datasets\/squad --auto_generate --json_field data\r\npython datasets-cli dummy_data .\/datasets\/iwslt2017 --auto_generate --xml_tag seg --match_text_files \"train*\" --n_lines 15\r\n# --xml_tag seg => each sample corresponds to a \"seg\" tag in the xml tree\r\n# --match_text_files \"train*\" => also match text files that don't have a proper text file extension (no suffix like \".txt\" for example)\r\n# --n_lines 15 => some text files have headers so we have to use at least 15 lines\r\n```\r\n\r\nand here is the command usage:\r\n\r\n```\r\nusage: datasets-cli <command> [<args>] dummy_data [-h] [--auto_generate]\r\n [--n_lines N_LINES]\r\n [--json_field JSON_FIELD]\r\n [--xml_tag XML_TAG]\r\n [--match_text_files MATCH_TEXT_FILES]\r\n [--keep_uncompressed]\r\n [--cache_dir CACHE_DIR]\r\n path_to_dataset\r\n\r\npositional arguments:\r\npath_to_dataset Path to the dataset (example: .\/datasets\/squad)\r\n\r\noptional arguments:\r\n-h, --help show this help message and exit\r\n--auto_generate Try to automatically generate dummy data\r\n--n_lines N_LINES Number of lines or samples to keep when auto-\r\n generating dummy data\r\n--json_field JSON_FIELD\r\n Optional, json field to read the data from when auto-\r\n generating dummy data. In the json data files, this\r\n field must point to a list of samples as json objects\r\n (ex: the 'data' field for squad-like files)\r\n--xml_tag XML_TAG Optional, xml tag name of the samples inside the xml\r\n files when auto-generating dummy data.\r\n--match_text_files MATCH_TEXT_FILES\r\n Optional, a comma separated list of file patterns that\r\n looks for line-by-line text files other than *.txt or\r\n *.csv. Example: --match_text_files *.label\r\n--keep_uncompressed Don't compress the dummy data folders when auto-\r\n generating dummy data. 
Useful for debugging for to do\r\n manual adjustements before compressing.\r\n--cache_dir CACHE_DIR\r\n Cache directory to download and cache files when auto-\r\n generating dummy data\r\n```\r\n\r\nThe command generates all the necessary `dummy_data.zip` files (one per config).\r\n\r\nHow it works:\r\n- it runs the split_generators() method of the dataset script to download the original data files\r\n- when downloading it records a mapping between the downloaded files and the corresponding expected dummy data files paths\r\n- then for each data file it creates the dummy data file keeping only the first samples (the strategy depends on the type of file)\r\n- finally it compresses the dummy data folders into dummy_zip files ready for dataset tests\r\n\r\nLet me know if that makes sense or if you have ideas to improve this tool !\r\n\r\nI also added a unit test.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/884\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/883","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/883\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/883\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/883\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/883","id":749750801,"node_id":"MDU6SXNzdWU3NDk3NTA4MDE=","number":883,"title":"Downloading\/caching only a part of a datasets' dataset.","user":{"login":"SapirWeissbuch","id":44585792,"node_id":"MDQ6VXNlcjQ0NTg1Nzky","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/44585792?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SapirWeissbuch","html_url":"https:\/\/github.com\/SapirWeissbuch","followers_url":"https:\/\/api.github.com\/users\/SapirWeissbuch\/followers","following_url":"https:\/\/api.github.com\/users\/SapirWeissbuch\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SapirWeissbuch\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SapirWeissbuch\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SapirWeissbuch\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SapirWeissbuch\/orgs","repos_url":"https:\/\/api.github.com\/users\/SapirWeissbuch\/repos","events_url":"https:\/\/api.github.com\/users\/SapirWeissbuch\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SapirWeissbuch\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"},{"id":1935892912,"node_id":"MDU6TGFiZWwxOTM1ODkyOTEy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/question","name":"question","color":"d876e3","default":true,"description":"Further information is requested"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Not at the moment but we could likely support this feature.","?","I think it would be a very helpful feature, because sometimes one only wants to evaluate models on the dev set, and the whole training data may be many times bigger.\r\nThis makes the task impossible with 
limited memory resources."],"created_at":1606227918000,"updated_at":1606485115000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\nI want to use the validation data *only* (of natural question).\r\nI don't want to have the whole dataset cached in my machine, just the dev set.\r\nIs this possible? I can't find a way to do it in the docs.\r\n\r\nThank you,\r\nSapir","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/883\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/882","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/882\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/882\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/882\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/882","id":749662188,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI2NDQyMjA2","number":882,"title":"Update README.md","user":{"login":"vaibhavad","id":32997732,"node_id":"MDQ6VXNlcjMyOTk3NzMy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32997732?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vaibhavad","html_url":"https:\/\/github.com\/vaibhavad","followers_url":"https:\/\/api.github.com\/users\/vaibhavad\/followers","following_url":"https:\/\/api.github.com\/users\/vaibhavad\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vaibhavad\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vaibhavad\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vaibhavad\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vaibhavad\/orgs","repos_url":"https:\/\/api.github.com\/users\/vaibhavad\/repos","events_url":"https:\/\/api.github.com\/users\/vaibhavad\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vaibhavad\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606220632000,"updated_at":1611916867000,"closed_at":1611916867000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/882","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/882","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/882.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/882.patch"},"body":"\"no label\" is \"-\" in the original dataset but \"-1\" in Huggingface distribution.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/882\/timeline","performed_via_github_app":null,"is_pull_request":true} 
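Editor's note on the partial-download request in issue #883 above: a minimal sketch of what the `split` argument already allows at the time of that discussion. The dataset name `squad` is purely illustrative (not from the issue), and, as the maintainers note in the thread, selecting a split this way does not avoid downloading and caching the full dataset.

```python
from datasets import load_dataset

# Load only the validation split of a dataset (dataset name is illustrative).
# Per the discussion above, `split` only selects which split is returned;
# the whole dataset is still downloaded and cached first.
validation = load_dataset("squad", split="validation")
print(len(validation))
```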
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/881","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/881\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/881\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/881\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/881","id":749548107,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI2MzQ5MDM2","number":881,"title":"Use GCP download url instead of tensorflow custom download for boolq","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1606211231000,"updated_at":1606212754000,"closed_at":1606212753000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/881","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/881","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/881.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/881.patch"},"body":"BoolQ is a dataset that used tf.io.gfile.copy to download the file from a GCP bucket.\r\nIt prevented the dataset to be downloaded twice because of a FileAlreadyExistsError.\r\nEven though the error could be fixed by providing `overwrite=True` to the tf.io.gfile.copy call, I changed the script to use GCP download urls and use regular downloads instead and remove the tensorflow dependency.\r\n\r\nFix #875 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/881\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/880","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/880\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/880\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/880\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/880","id":748949606,"node_id":"MDU6SXNzdWU3NDg5NDk2MDY=","number":880,"title":"Add 
SQA","user":{"login":"NielsRogge","id":48327001,"node_id":"MDQ6VXNlcjQ4MzI3MDAx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/48327001?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/NielsRogge","html_url":"https:\/\/github.com\/NielsRogge","followers_url":"https:\/\/api.github.com\/users\/NielsRogge\/followers","following_url":"https:\/\/api.github.com\/users\/NielsRogge\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/NielsRogge\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/NielsRogge\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/NielsRogge\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/NielsRogge\/orgs","repos_url":"https:\/\/api.github.com\/users\/NielsRogge\/repos","events_url":"https:\/\/api.github.com\/users\/NielsRogge\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/NielsRogge\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I\u2019ll take this one to test the workflow for the sprint next week cc @yjernite @lhoestq ","@thomwolf here's a slightly adapted version of the code from the [official Tapas repository](https:\/\/github.com\/google-research\/tapas\/blob\/master\/tapas\/utils\/interaction_utils.py) that is used to turn the `answer_coordinates` and `answer_texts` columns into true Python lists of tuples\/strings:\r\n\r\n```\r\nimport pandas as pd\r\nimport ast\r\n\r\ndata = pd.read_csv(\"\/content\/sqa_data\/random-split-1-dev.tsv\", sep='\\t')\r\n\r\ndef _parse_answer_coordinates(answer_coordinate_str):\r\n \"\"\"Parses the answer_coordinates of a question.\r\n Args:\r\n answer_coordinate_str: A string representation of a Python list of tuple\r\n strings.\r\n For example: \"['(1, 4)','(1, 3)', ...]\"\r\n \"\"\"\r\n\r\n try:\r\n answer_coordinates = []\r\n # make a list of strings\r\n coords = ast.literal_eval(answer_coordinate_str)\r\n # parse each string as a tuple\r\n for row_index, column_index in sorted(\r\n ast.literal_eval(coord) for coord in coords):\r\n answer_coordinates.append((row_index, column_index))\r\n except SyntaxError:\r\n raise ValueError('Unable to evaluate %s' % answer_coordinate_str)\r\n \r\n return answer_coordinates\r\n\r\n\r\ndef _parse_answer_text(answer_text):\r\n \"\"\"Populates the answer_texts field of `answer` by parsing `answer_text`.\r\n Args:\r\n answer_text: A string representation of a Python list of strings.\r\n For example: \"[u'test', u'hello', ...]\"\r\n \"\"\"\r\n try:\r\n answer = []\r\n for value in ast.literal_eval(answer_text):\r\n answer.append(value)\r\n except SyntaxError:\r\n raise ValueError('Unable to evaluate %s' % answer_text)\r\n\r\n return answer\r\n\r\ndata['answer_coordinates'] = data['answer_coordinates'].apply(lambda coords_str: _parse_answer_coordinates(coords_str))\r\ndata['answer_text'] = data['answer_text'].apply(lambda txt: _parse_answer_text(txt))\r\n```\r\n\r\nHere I'm using Pandas to read in one of the TSV files (the dev set). 
\r\n\r\n","Closing since SQA was added in #1566 "],"created_at":1606149115000,"updated_at":1608731904000,"closed_at":1608731903000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** SQA (Sequential Question Answering) by Microsoft. \r\n- **Description:** The SQA dataset was created to explore the task of answering sequences of inter-related questions on HTML tables. It has 6,066 sequences with 17,553 questions in total.\r\n- **Paper:** https:\/\/www.microsoft.com\/en-us\/research\/publication\/search-based-neural-structured-learning-sequential-question-answering\/\r\n- **Data:** https:\/\/www.microsoft.com\/en-us\/download\/details.aspx?id=54253\r\n- **Motivation:** currently, the [Tapas](https:\/\/ai.googleblog.com\/2020\/04\/using-neural-networks-to-find-answers.html) algorithm by Google AI is being added to the Transformers library (see https:\/\/github.com\/huggingface\/transformers\/pull\/8113). It would be great to use that model in combination with this dataset, on which it achieves SOTA results (average question accuracy of 0.71).\r\n\r\nNote 1: this dataset actually consists of 2 types of files: \r\n1) TSV files, containing the questions, answer coordinates and answer texts (for training, dev and test)\r\n2) a folder of csv files, which contain the actual tabular data\r\n\r\nNote 2: if you download the dataset straight from the download link above, then you will see that the `answer_coordinates` and `answer_text` columns are string lists of string tuples and strings respectively, which is not ideal. It would be better to make them true Python lists of tuples and strings respectively (using `ast.literal_eval`), before uploading them to the HuggingFace hub.\r\n\r\nAdding this would be great! Then we could possibly also add [WTQ (WikiTable Questions)](https:\/\/github.com\/ppasupat\/WikiTableQuestions) and [TabFact (Tabular Fact Checking)](https:\/\/github.com\/wenhuchen\/Table-Fact-Checking) on which TAPAS also achieves state-of-the-art results. 
Note that the TAPAS algorithm requires these datasets to first be converted into the SQA format.\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/huggingface.co\/docs\/datasets\/share_dataset.html).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/880\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/879","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/879\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/879\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/879\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/879","id":748848847,"node_id":"MDU6SXNzdWU3NDg4NDg4NDc=","number":879,"title":"boolq does not load ","user":{"login":"rabeehk","id":6278280,"node_id":"MDQ6VXNlcjYyNzgyODA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6278280?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rabeehk","html_url":"https:\/\/github.com\/rabeehk","followers_url":"https:\/\/api.github.com\/users\/rabeehk\/followers","following_url":"https:\/\/api.github.com\/users\/rabeehk\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rabeehk\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rabeehk\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rabeehk\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rabeehk\/orgs","repos_url":"https:\/\/api.github.com\/users\/rabeehk\/repos","events_url":"https:\/\/api.github.com\/users\/rabeehk\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rabeehk\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! It runs on my side without issues. I tried\r\n```python\r\nfrom datasets import load_dataset\r\nload_dataset(\"boolq\")\r\n```\r\n\r\nWhat version of datasets and tensorflow are your runnning ?\r\nAlso if you manage to get a minimal reproducible script (on google colab for example) that would be useful.","hey\ni do the exact same commands. for me it fails i guess might be issues with\ncaching maybe?\nthanks\nbest\nrabeeh\n\nOn Tue, Nov 24, 2020, 10:24 AM Quentin Lhoest <notifications@github.com>\nwrote:\n\n> Hi ! It runs on my side without issues. 
I tried\n>\n> from datasets import load_datasetload_dataset(\"boolq\")\n>\n> What version of datasets and tensorflow are your runnning ?\n> Also if you manage to get a minimal reproducible script (on google colab\n> for example) that would be useful.\n>\n> \u2014\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https:\/\/github.com\/huggingface\/datasets\/issues\/879#issuecomment-732769114>,\n> or unsubscribe\n> <https:\/\/github.com\/notifications\/unsubscribe-auth\/ABP4ZCGGDR2FUMRKZTIY5CTSRN3VXANCNFSM4T7R3U6A>\n> .\n>\n","Could you check if it works on the master branch ?\r\nYou can use `load_dataset(\"boolq\", script_version=\"master\")` to do so.\r\nWe did some changes recently in boolq to remove the TF dependency and we changed the way the data files are downloaded in https:\/\/github.com\/huggingface\/datasets\/pull\/881"],"created_at":1606141708000,"updated_at":1606485071000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI am getting these errors trying to load boolq thanks \r\n\r\nTraceback (most recent call last):\r\n File \"test.py\", line 5, in <module>\r\n data = AutoTask().get(\"boolq\").get_dataset(\"train\", n_obs=10)\r\n File \"\/remote\/idiap.svm\/user.active\/rkarimi\/dev\/internship\/seq2seq\/tasks\/tasks.py\", line 42, in get_dataset\r\n dataset = self.load_dataset(split=split)\r\n File \"\/remote\/idiap.svm\/user.active\/rkarimi\/dev\/internship\/seq2seq\/tasks\/tasks.py\", line 38, in load_dataset\r\n return datasets.load_dataset(self.task.name, split=split)\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 611, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 476, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 531, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \" \/idiap\/home\/rkarimi\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/boolq\/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11\/boolq.py\", line 74, in _split_generators\r\n downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy)\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/utils\/download_manager.py\", line 150, in download_custom\r\n get_from_cache(url, cache_dir=cache_dir, local_files_only=True, use_etag=False)\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 472, in get_from_cache\r\n f\"Cannot find the requested files in the cached path at {cache_path} and outgoing traffic has been\"\r\nFileNotFoundError: Cannot find the requested files in the cached path at \/idiap\/home\/rkarimi\/.cache\/huggingface\/datasets\/eaee069e38f6ceaa84de02ad088c34e63ec97671f2cd1910ddb16b10dc60808c and outgoing traffic has been disabled. 
To enable file online look-ups, set 'local_files_only' to False.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/879\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/878","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/878\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/878\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/878\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/878","id":748621981,"node_id":"MDU6SXNzdWU3NDg2MjE5ODE=","number":878,"title":"Loading Data From S3 Path in Sagemaker","user":{"login":"mahesh1amour","id":42795522,"node_id":"MDQ6VXNlcjQyNzk1NTIy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42795522?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mahesh1amour","html_url":"https:\/\/github.com\/mahesh1amour","followers_url":"https:\/\/api.github.com\/users\/mahesh1amour\/followers","following_url":"https:\/\/api.github.com\/users\/mahesh1amour\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mahesh1amour\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mahesh1amour\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mahesh1amour\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mahesh1amour\/orgs","repos_url":"https:\/\/api.github.com\/users\/mahesh1amour\/repos","events_url":"https:\/\/api.github.com\/users\/mahesh1amour\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mahesh1amour\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"},{"id":1935892912,"node_id":"MDU6TGFiZWwxOTM1ODkyOTEy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/question","name":"question","color":"d876e3","default":true,"description":"Further information is requested"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This would be a neat feature","> neat feature\r\n\r\nI dint get these clearly, can you please elaborate like how to work on these ","It could maybe work almost out of the box just by using `cached_path` in the text\/csv\/json scripts, no?","Thanks thomwolf and julien-c\r\n\r\nI'm still confusion on what you guys said, \r\n\r\nI have solved the problem as follows:\r\n\r\n1. read the csv file using pandas from s3 \r\n2. Convert to dictionary key as column name and values as list column data\r\n3. convert it to Dataset using \r\n`from datasets import Dataset`\r\n`train_dataset = Dataset.from_dict(train_dict)`","We were brainstorming around your use-case.\r\n\r\nLet's keep the issue open for now, I think this is an interesting question to think about.","> We were brainstorming around your use-case.\r\n> \r\n> Let's keep the issue open for now, I think this is an interesting question to think about.\r\n\r\nSure thomwolf, Thanks for your concern ","I agree it would be cool to have that feature. 
Also that's good to know that pandas supports this.\r\nFor the moment I'd suggest to first download the files locally as thom suggested and then load the dataset by providing paths to the local files","Don't get\n","Any updates on this issue?\r\nI face a similar issue. I have many parquet files in S3 and I would like to train on them. \r\nTo be honest I even face issues with only getting the last layer embedding out of them.","Hi dorlavie, \r\nYou can find one solution that i have mentioned above, that can help you. \r\nAnd there is one more solution also which is downloading files locally\r\n","> Hi dorlavie,\r\n> You can find one solution that i have mentioned above, that can help you.\r\n> And there is one more solution also which is downloading files locally\r\n\r\nmahesh1amour, thanks for the fast reply\r\n\r\nUnfortunately, in my case I can not read with pandas. The dataset is too big (50GB). \r\nIn addition, due to security concerns I am not allowed to save the data locally","@dorlavie could use `boto3` to download the data to your local machine and then load it with `dataset`\r\n\r\nboto3 example [documentation](https:\/\/boto3.amazonaws.com\/v1\/documentation\/api\/latest\/guide\/s3-example-download-file.html)\r\n```python\r\nimport boto3\r\n\r\ns3 = boto3.client('s3')\r\ns3.download_file('BUCKET_NAME', 'OBJECT_NAME', 'FILE_NAME')\r\n```\r\n\r\ndatasets example [documentation](https:\/\/huggingface.co\/docs\/datasets\/loading_datasets.html)\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', data_files=['my_file_1.csv', 'my_file_2.csv', 'my_file_3.csv'])\r\n```\r\n","Thanks @philschmid for the suggestion.\r\nAs I mentioned in the previous comment, due to security issues I can not save the data locally.\r\nI need to read it from S3 and process it directly.\r\n\r\nI guess that many other people try to train \/ fit those models on huge datasets (e.g entire Wiki), what is the best practice in those cases?","If I understand correctly you're not allowed to write data on disk that you downloaded from S3 for example ?\r\nOr is it the use of the `boto3` library that is not allowed in your case ?","@lhoestq yes you are correct.\r\nI am not allowed to save the \"raw text\" locally - The \"raw text\" must be saved only on S3.\r\nI am allowed to save the output of any model locally. \r\nIt doesn't matter how I do it boto3\/pandas\/pyarrow, it is forbidden","@dorlavie are you using sagemaker for training too? Then you could use S3 URI, for example `s3:\/\/my-bucket\/my-training-data` and pass it within the `.fit()` function when you start the sagemaker training job. Sagemaker would then download the data from s3 into the training runtime and you could load it from disk\r\n\r\n**sagemaker start training job**\r\n```python\r\npytorch_estimator.fit({'train':'s3:\/\/my-bucket\/my-training-data','eval':'s3:\/\/my-bucket\/my-evaluation-data'})\r\n```\r\n\r\n**in the train.py script**\r\n```python\r\nfrom datasets import load_from_disk\r\n\r\ntrain_dataset = load_from_disk(os.environ['SM_CHANNEL_TRAIN'])\r\n```\r\n\r\nI have created an example of how to use transformers and datasets with sagemaker. \r\nhttps:\/\/github.com\/philschmid\/huggingface-sagemaker-example\/tree\/main\/03_huggingface_sagemaker_trainer_with_data_from_s3\r\n\r\nThe example contains a jupyter notebook `sagemaker-example.ipynb` and an `src\/` folder. The sagemaker-example is a jupyter notebook that is used to create the training job on AWS Sagemaker. 
The `src\/` folder contains the `train.py`, our training script, and `requirements.txt` for additional dependencies.\r\n\r\n"],"created_at":1606123042000,"updated_at":1608717188000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"In Sagemaker Im tring to load the data set from S3 path as follows\r\n\r\n`train_path = 's3:\/\/xxxxxxxxxx\/xxxxxxxxxx\/train.csv'\r\n valid_path = 's3:\/\/xxxxxxxxxx\/xxxxxxxxxx\/validation.csv'\r\n test_path = 's3:\/\/xxxxxxxxxx\/xxxxxxxxxx\/test.csv'\r\n \r\n data_files = {}\r\n data_files[\"train\"] = train_path\r\n data_files[\"validation\"] = valid_path\r\n data_files[\"test\"] = test_path\r\n extension = train_path.split(\".\")[-1]\r\n datasets = load_dataset(extension, data_files=data_files, s3_enabled=True)\r\n print(datasets)`\r\n\r\n\r\nI getting an error of\r\n\r\n`algo-1-7plil_1 | File \"main.py\", line 21, in <module>\r\nalgo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files)\r\nalgo-1-7plil_1 | File \"\/opt\/conda\/lib\/python3.6\/site-packages\/datasets\/load.py\", line 603, in load_dataset\r\nalgo-1-7plil_1 | **config_kwargs,\r\nalgo-1-7plil_1 | File \"\/opt\/conda\/lib\/python3.6\/site-packages\/datasets\/builder.py\", line 155, in __init__\r\nalgo-1-7plil_1 | **config_kwargs,\r\nalgo-1-7plil_1 | File \"\/opt\/conda\/lib\/python3.6\/site-packages\/datasets\/builder.py\", line 305, in _create_builder_config\r\nalgo-1-7plil_1 | m.update(str(os.path.getmtime(data_file)))\r\nalgo-1-7plil_1 | File \"\/opt\/conda\/lib\/python3.6\/genericpath.py\", line 55, in getmtime\r\nalgo-1-7plil_1 | return os.stat(filename).st_mtime\r\nalgo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3:\/\/lsmv-sagemaker\/pubmedbert\/test.csv`\r\n\r\nBut when im trying with pandas , it is able to load from S3\r\n\r\nDoes the datasets library support S3 path to load","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/878\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/877","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/877\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/877\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/877\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/877","id":748234438,"node_id":"MDU6SXNzdWU3NDgyMzQ0Mzg=","number":877,"title":"DataLoader(datasets) become more and more slowly within 
iterations","user":{"login":"shexuan","id":25664170,"node_id":"MDQ6VXNlcjI1NjY0MTcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25664170?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/shexuan","html_url":"https:\/\/github.com\/shexuan","followers_url":"https:\/\/api.github.com\/users\/shexuan\/followers","following_url":"https:\/\/api.github.com\/users\/shexuan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/shexuan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/shexuan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/shexuan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/shexuan\/orgs","repos_url":"https:\/\/api.github.com\/users\/shexuan\/repos","events_url":"https:\/\/api.github.com\/users\/shexuan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/shexuan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Thanks for reporting.\r\nDo you have the same slowdown when you iterate through the raw dataset object as well ? (no dataloader)\r\nIt would be nice to know whether it comes from the dataloader or not","> Hi ! Thanks for reporting.\r\n> Do you have the same slowdown when you iterate through the raw dataset object as well ? (no dataloader)\r\n> It would be nice to know whether it comes from the dataloader or not\r\n\r\nI did not iter data from raw dataset, maybe I will test later. Now I iter all files directly from `open(file)`, around 20000it\/s."],"created_at":1606048870000,"updated_at":1606664712000,"closed_at":1606664712000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hello, when I for loop my dataloader, the loading speed is becoming more and more slowly!\r\n```\r\ndataset = load_from_disk(dataset_path) # around 21,000,000 lines\r\n\r\nlineloader = tqdm(DataLoader(dataset, batch_size=1))\r\nfor idx, line in enumerate(lineloader):\r\n # do some thing for each line\r\n```\r\nIn the begining, the loading speed is around 2000it\/s, but after 1 minutes later, the speed is much slower, just around 800it\/s.\r\n\r\nAnd when I set `num_workers=4` in DataLoader, the loading speed is much lower, just 130it\/s.\r\n\r\nCould you please help me with this problem?\r\nThanks a lot!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/877\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/876","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/876\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/876\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/876\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/876","id":748195104,"node_id":"MDU6SXNzdWU3NDgxOTUxMDQ=","number":876,"title":"imdb dataset cannot be loaded 
","user":{"login":"rabeehk","id":6278280,"node_id":"MDQ6VXNlcjYyNzgyODA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6278280?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rabeehk","html_url":"https:\/\/github.com\/rabeehk","followers_url":"https:\/\/api.github.com\/users\/rabeehk\/followers","following_url":"https:\/\/api.github.com\/users\/rabeehk\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rabeehk\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rabeehk\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rabeehk\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rabeehk\/orgs","repos_url":"https:\/\/api.github.com\/users\/rabeehk\/repos","events_url":"https:\/\/api.github.com\/users\/rabeehk\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rabeehk\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["It looks like there was an issue while building the imdb dataset.\r\nCould you provide more information about your OS and the version of python and `datasets` ?\r\n\r\nAlso could you try again with \r\n```python\r\ndataset = datasets.load_dataset(\"imdb\", split=\"train\", download_mode=\"force_redownload\")\r\n```\r\nto make sure it's not a corrupted file issue ?","I was using version 1.1.2 and this resolved with version 1.1.3, thanks. "],"created_at":1606033483000,"updated_at":1608831528000,"closed_at":1608831527000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI am trying to load the imdb train dataset\r\n\r\n`dataset = datasets.load_dataset(\"imdb\", split=\"train\")`\r\n\r\ngetting following errors, thanks for your help \r\n```\r\nTraceback (most recent call last): \r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 611, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 476, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 558, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/utils\/info_utils.py\", line 73, in verify_splits\r\n raise NonMatchingSplitsSizesError(str(bad_splits))\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=32660064, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='test', num_bytes=26476338, num_examples=20316, dataset_name='imdb')}, {'expected': SplitInfo(name='train', num_bytes=33442202, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}]\r\n>>> dataset = datasets.load_dataset(\"imdb\", 
split=\"train\")\r\n\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/876\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/875","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/875\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/875\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/875\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/875","id":748194311,"node_id":"MDU6SXNzdWU3NDgxOTQzMTE=","number":875,"title":"bug in boolq dataset loading","user":{"login":"rabeehk","id":6278280,"node_id":"MDQ6VXNlcjYyNzgyODA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6278280?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rabeehk","html_url":"https:\/\/github.com\/rabeehk","followers_url":"https:\/\/api.github.com\/users\/rabeehk\/followers","following_url":"https:\/\/api.github.com\/users\/rabeehk\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rabeehk\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rabeehk\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rabeehk\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rabeehk\/orgs","repos_url":"https:\/\/api.github.com\/users\/rabeehk\/repos","events_url":"https:\/\/api.github.com\/users\/rabeehk\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rabeehk\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I just opened a PR to fix this.\r\nThanks for reporting !"],"created_at":1606033114000,"updated_at":1606212753000,"closed_at":1606212753000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI am trying to load boolq dataset:\r\n\r\n```\r\nimport datasets\r\ndatasets.load_dataset(\"boolq\")\r\n```\r\n\r\nI am getting the following errors, thanks for your help \r\n\r\n```\r\n>>> import datasets\r\n2020-11-22 09:16:30.070332: W tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\r\n2020-11-22 09:16:30.070389: I tensorflow\/stream_executor\/cuda\/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\n>>> datasets.load_dataset(\"boolq\")\r\ncahce dir \/idiap\/temp\/rkarimi\/cache_home\/datasets\r\ncahce dir \/idiap\/temp\/rkarimi\/cache_home\/datasets\r\nUsing custom data configuration default\r\nDownloading and preparing dataset boolq\/default (download: 8.36 MiB, generated: 7.47 MiB, post-processed: Unknown size, total: 15.83 MiB) to \/idiap\/temp\/rkarimi\/cache_home\/datasets\/boolq\/default\/0.1.0\/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11...\r\ncahce dir \/idiap\/temp\/rkarimi\/cache_home\/datasets\r\ncahce dir \/idiap\/temp\/rkarimi\/cache_home\/datasets\/downloads\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 611, in load_dataset\r\n 
ignore_verifications=ignore_verifications,\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 476, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 531, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \" \/idiap\/home\/rkarimi\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/boolq\/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11\/boolq.py\", line 74, in _split_generators\r\n downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy)\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/utils\/download_manager.py\", line 149, in download_custom\r\n custom_download(url, path)\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/tensorflow\/python\/lib\/io\/file_io.py\", line 516, in copy_v2\r\n compat.path_to_bytes(src), compat.path_to_bytes(dst), overwrite)\r\ntensorflow.python.framework.errors_impl.AlreadyExistsError: file already exists\r\n\r\n\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/875\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/874","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/874\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/874\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/874\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/874","id":748193140,"node_id":"MDU6SXNzdWU3NDgxOTMxNDA=","number":874,"title":"trec dataset unavailable ","user":{"login":"rabeehk","id":6278280,"node_id":"MDQ6VXNlcjYyNzgyODA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6278280?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rabeehk","html_url":"https:\/\/github.com\/rabeehk","followers_url":"https:\/\/api.github.com\/users\/rabeehk\/followers","following_url":"https:\/\/api.github.com\/users\/rabeehk\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rabeehk\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rabeehk\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rabeehk\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rabeehk\/orgs","repos_url":"https:\/\/api.github.com\/users\/rabeehk\/repos","events_url":"https:\/\/api.github.com\/users\/rabeehk\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rabeehk\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This was fixed in #740 \r\nCould you try to update `datasets` and try again ?","This has been fixed in datasets 1.1.3"],"created_at":1606032576000,"updated_at":1606485402000,"closed_at":1606485402000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nwhen I try to load the trec dataset I am getting these errors, thanks for 
your help\r\n\r\n`datasets.load_dataset(\"trec\", split=\"train\")\r\n`\r\n```\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 611, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 476, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 531, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \" \/idiap\/home\/rkarimi\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/trec\/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7\/trec.py\", line 140, in _split_generators\r\n dl_files = dl_manager.download_and_extract(_URLs)\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/utils\/download_manager.py\", line 254, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/utils\/download_manager.py\", line 179, in download\r\n num_proc=download_config.num_proc,\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/utils\/py_utils.py\", line 225, in map_nested\r\n _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/utils\/py_utils.py\", line 225, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/utils\/py_utils.py\", line 163, in _single_map_nested\r\n return function(data_struct)\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 308, in cached_path\r\n use_etag=download_config.use_etag,\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 477, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach http:\/\/cogcomp.org\/Data\/QA\/QC\/train_5500.label\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/874\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/873","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/873\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/873\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/873\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/873","id":747959523,"node_id":"MDU6SXNzdWU3NDc5NTk1MjM=","number":873,"title":"load_dataset('cnn_dalymail', '3.0.0') gives a 'Not a directory' 
error","user":{"login":"vishal-burman","id":19861874,"node_id":"MDQ6VXNlcjE5ODYxODc0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19861874?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vishal-burman","html_url":"https:\/\/github.com\/vishal-burman","followers_url":"https:\/\/api.github.com\/users\/vishal-burman\/followers","following_url":"https:\/\/api.github.com\/users\/vishal-burman\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vishal-burman\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vishal-burman\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vishal-burman\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vishal-burman\/orgs","repos_url":"https:\/\/api.github.com\/users\/vishal-burman\/repos","events_url":"https:\/\/api.github.com\/users\/vishal-burman\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vishal-burman\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I get the same error. It was fixed some days ago, but again it appears","Hi @mrm8488 it's working again today without any fix so I am closing this issue.","I see the issue happening again today - \r\n\r\n[nltk_data] Downloading package stopwords to \/root\/nltk_data...\r\n[nltk_data] Package stopwords is already up-to-date!\r\nDownloading and preparing dataset cnn_dailymail\/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to \/root\/.cache\/huggingface\/datasets\/cnn_dailymail\/3.0.0\/3.0.0\/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...\r\n\r\n---------------------------------------------------------------------------\r\n\r\nNotADirectoryError Traceback (most recent call last)\r\n\r\n<ipython-input-9-cd4bf8bea840> in <module>()\r\n 22 \r\n 23 \r\n---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train')\r\n 25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation')\r\n 26 test = load_dataset('cnn_dailymail', '3.0.0', split='test')\r\n\r\n5 frames\r\n\r\n\/root\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/cnn_dailymail\/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602\/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)\r\n 132 else:\r\n 133 logging.fatal(\"Unsupported publisher: %s\", publisher)\r\n--> 134 files = sorted(os.listdir(top_dir))\r\n 135 \r\n 136 ret_files = []\r\n\r\nNotADirectoryError: [Errno 20] Not a directory: '\/root\/.cache\/huggingface\/datasets\/downloads\/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b\/cnn\/stories'\r\n\r\nCan someone please take a look ?","Sometimes happens. Try in a while","It is working now, thank you. 
"],"created_at":1605940245000,"updated_at":1606993455000,"closed_at":1606047485000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"```\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('cnn_dailymail', '3.0.0')\r\n```\r\nStack trace:\r\n```\r\n---------------------------------------------------------------------------\r\n\r\nNotADirectoryError Traceback (most recent call last)\r\n\r\n<ipython-input-6-2e06a8332652> in <module>()\r\n 1 from datasets import load_dataset\r\n----> 2 dataset = load_dataset('cnn_dailymail', '3.0.0')\r\n\r\n5 frames\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)\r\n 608 download_config=download_config,\r\n 609 download_mode=download_mode,\r\n--> 610 ignore_verifications=ignore_verifications,\r\n 611 )\r\n 612 \r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 513 if not downloaded_from_gcs:\r\n 514 self._download_and_prepare(\r\n--> 515 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 516 )\r\n 517 # Sync info\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 568 split_dict = SplitDict(dataset_name=self.name)\r\n 569 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 570 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 571 \r\n 572 # Checksums verification\r\n\r\n\/root\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/cnn_dailymail\/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602\/cnn_dailymail.py in _split_generators(self, dl_manager)\r\n 252 def _split_generators(self, dl_manager):\r\n 253 dl_paths = dl_manager.download_and_extract(_DL_URLS)\r\n--> 254 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN)\r\n 255 # Generate shared vocabulary\r\n 256 \r\n\r\n\/root\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/cnn_dailymail\/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602\/cnn_dailymail.py in _subset_filenames(dl_paths, split)\r\n 153 else:\r\n 154 logging.fatal(\"Unsupported split: %s\", split)\r\n--> 155 cnn = _find_files(dl_paths, \"cnn\", urls)\r\n 156 dm = _find_files(dl_paths, \"dm\", urls)\r\n 157 return cnn + dm\r\n\r\n\/root\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/cnn_dailymail\/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602\/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)\r\n 132 else:\r\n 133 logging.fatal(\"Unsupported publisher: %s\", publisher)\r\n--> 134 files = sorted(os.listdir(top_dir))\r\n 135 \r\n 136 ret_files = []\r\n\r\nNotADirectoryError: [Errno 20] Not a directory: '\/root\/.cache\/huggingface\/datasets\/downloads\/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b\/cnn\/stories'\r\n```\r\nI have ran the code on Google Colab","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/873\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/872","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/872\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/872\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/872\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/872","id":747653697,"node_id":"MDExOlB1bGxSZXF1ZXN0NTI0ODM4NjEx","number":872,"title":"Add IndicGLUE dataset and Metrics","user":{"login":"sumanthd17","id":28291870,"node_id":"MDQ6VXNlcjI4MjkxODcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28291870?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sumanthd17","html_url":"https:\/\/github.com\/sumanthd17","followers_url":"https:\/\/api.github.com\/users\/sumanthd17\/followers","following_url":"https:\/\/api.github.com\/users\/sumanthd17\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sumanthd17\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sumanthd17\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sumanthd17\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sumanthd17\/orgs","repos_url":"https:\/\/api.github.com\/users\/sumanthd17\/repos","events_url":"https:\/\/api.github.com\/users\/sumanthd17\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sumanthd17\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["thanks ! merging now"],"created_at":1605892174000,"updated_at":1606323671000,"closed_at":1606317967000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/872","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/872","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/872.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/872.patch"},"body":"Added IndicGLUE benchmark for evaluating models on 11 Indian Languages. 
The descriptions of the tasks and the corresponding paper can be found [here](https:\/\/indicnlp.ai4bharat.org\/indic-glue\/)\r\n\r\n- [x] Followed the instructions in CONTRIBUTING.md\r\n- [x] Ran the tests successfully \r\n- [x] Created the dummy data","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/872\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/871","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/871\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/871\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/871\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/871","id":747470136,"node_id":"MDU6SXNzdWU3NDc0NzAxMzY=","number":871,"title":"terminate called after throwing an instance of 'google::protobuf::FatalException'","user":{"login":"rabeehk","id":6278280,"node_id":"MDQ6VXNlcjYyNzgyODA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6278280?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rabeehk","html_url":"https:\/\/github.com\/rabeehk","followers_url":"https:\/\/api.github.com\/users\/rabeehk\/followers","following_url":"https:\/\/api.github.com\/users\/rabeehk\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rabeehk\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rabeehk\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rabeehk\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rabeehk\/orgs","repos_url":"https:\/\/api.github.com\/users\/rabeehk\/repos","events_url":"https:\/\/api.github.com\/users\/rabeehk\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rabeehk\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Loading the iwslt2017-en-nl config of iwslt2017 works fine on my side. \r\nMaybe you can open an issue on transformers as well ? And also add more details about your environment (OS, python version, version of transformers and datasets etc.)","closing now, figured out this is because the max length of decoder was set smaller than the input_dimensions. thanks "],"created_at":1605876984000,"updated_at":1607807792000,"closed_at":1607807792000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI am using the dataset \"iwslt2017-en-nl\", and after downloading it I am getting this error when trying to evaluate it on T5-base with seq2seq_trainer.py in the huggingface repo could you assist me please? 
thanks \r\n\r\n\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 63\/63 [02:47<00:00, 2.18s\/it][libprotobuf FATAL \/sentencepiece\/src\/..\/third_party\/protobuf-lite\/google\/protobuf\/repeated_field.h:1505] CHECK failed: (index) >= (0): \r\nterminate called after throwing an instance of 'google::protobuf::FatalException'\r\n what(): CHECK failed: (index) >= (0): \r\nrun_t5_base_eval.sh: line 19: 5795 Aborted ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/871\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/870","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/870\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/870\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/870\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/870","id":747021996,"node_id":"MDU6SXNzdWU3NDcwMjE5OTY=","number":870,"title":"[Feature Request] Add optional parameter in text loading script to preserve linebreaks","user":{"login":"jncasey","id":31020859,"node_id":"MDQ6VXNlcjMxMDIwODU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/31020859?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jncasey","html_url":"https:\/\/github.com\/jncasey","followers_url":"https:\/\/api.github.com\/users\/jncasey\/followers","following_url":"https:\/\/api.github.com\/users\/jncasey\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jncasey\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jncasey\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jncasey\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jncasey\/orgs","repos_url":"https:\/\/api.github.com\/users\/jncasey\/repos","events_url":"https:\/\/api.github.com\/users\/jncasey\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jncasey\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! 
Thanks for your message.\r\nIndeed it's a free feature we can add and that can be useful.\r\nIf you want to contribute, feel free to open a PR to add it to the text dataset script :)"],"created_at":1605829891000,"updated_at":1606484891000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I'm working on a project about rhyming verse using phonetic poetry and song lyrics, and line breaks are a vital part of the data. \r\n\r\nI recently switched over to use the datasets library when my various corpora grew larger than my computer's memory. And so far, it is SO great. \r\n\r\nBut the first time I processed all of my data into a dataset, I hadn't realized the text loader script was processing the source files line-by-line and stripping off the newlines. \r\n\r\nOnce I caught the issue, I made my own data loader by modifying one line in the default text loader (changing `batch = batch.splitlines()` to `batch = batch.splitlines(True)` inside `_generate_tables`). And so I'm all set as far as my project is concerned.\r\n\r\nBut if my use case is more general, it seems like it'd be pretty trivial to add a kwarg to the default text loader called keeplinebreaks or something, which would default to False and get passed to `splitlines()`. ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/870\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/869","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/869\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/869\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/869\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/869","id":746495711,"node_id":"MDExOlB1bGxSZXF1ZXN0NTIzODc3OTkw","number":869,"title":"Update ner datasets infos","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[":+1: Thanks for fixing 
it!"],"created_at":1605785283000,"updated_at":1605795258000,"closed_at":1605795257000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/869","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/869","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/869.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/869.patch"},"body":"Update the dataset_infos.json files for changes made in #850 regarding the ner datasets feature types (and the change to ClassLabel)\r\nI also fixed the ner types of conll2003","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/869\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/868","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/868\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/868\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/868\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/868","id":745889882,"node_id":"MDExOlB1bGxSZXF1ZXN0NTIzMzc2MzQ3","number":868,"title":"Consistent metric outputs","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I keep this PR in stand-by for next week's datasets sprint. 
If the next release is 2.0.0 then we can include it given that it's breaking for many metrics"],"created_at":1605722759000,"updated_at":1606411947000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/868","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/868","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/868.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/868.patch"},"body":"To automate the use of metrics, they should return consistent outputs.\r\nIn particular I'm working on adding a conversion of metrics to keras metrics.\r\nTo achieve this we need two things:\r\n- have each metric return dictionaries of string -> floats since each keras metrics should return one float\r\n- define in the metric info the different fields of the output dictionary\r\n\r\nIn this PR I'm adding these two features.\r\nI also fixed a few bugs in some metrics\r\n\r\n#867 needs to be merged first","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/868\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/867","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/867\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/867\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/867\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/867","id":745773955,"node_id":"MDExOlB1bGxSZXF1ZXN0NTIzMjc4MjI4","number":867,"title":"Fix some metrics feature types","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1605714371000,"updated_at":1605807358000,"closed_at":1605807357000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/867","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/867","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/867.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/867.patch"},"body":"Replace `int` feature type to `int32` since `int` is not a pyarrow dtype in those metrics:\r\n- accuracy\r\n- precision\r\n- recall\r\n- f1\r\nI also added the sklearn citation and used keyword arguments to remove 
future warnings","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/867\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/866","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/866\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/866\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/866\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/866","id":745719222,"node_id":"MDU6SXNzdWU3NDU3MTkyMjI=","number":866,"title":"OSCAR from Inria group","user":{"login":"jchwenger","id":34098722,"node_id":"MDQ6VXNlcjM0MDk4NzIy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/34098722?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jchwenger","html_url":"https:\/\/github.com\/jchwenger","followers_url":"https:\/\/api.github.com\/users\/jchwenger\/followers","following_url":"https:\/\/api.github.com\/users\/jchwenger\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jchwenger\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jchwenger\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jchwenger\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jchwenger\/orgs","repos_url":"https:\/\/api.github.com\/users\/jchwenger\/repos","events_url":"https:\/\/api.github.com\/users\/jchwenger\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jchwenger\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["PR is already open here : #348 \r\nThe only thing remaining is to compute the metadata of each subdataset (one per language + shuffled\/unshuffled).\r\nAs soon as #863 is merged we can start computing them. This will take a bit of time though","Grand, thanks for this!"],"created_at":1605710454000,"updated_at":1605711690000,"closed_at":1605711690000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** *OSCAR* (Open Super-large Crawled ALMAnaCH coRpus), multilingual parsing of Common Crawl (separate crawls for many different languages), [here](https:\/\/oscar-corpus.com\/).\r\n- **Description:** *OSCAR or Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.*\r\n- **Paper:** *[here](https:\/\/hal.inria.fr\/hal-02148693)*\r\n- **Data:** *[here](https:\/\/oscar-corpus.com\/)*\r\n- **Motivation:** *useful for unsupervised tasks in separate languages. 
In an ideal world, your team would be able to obtain the unshuffled version, that could be used to train GPT-2-like models (the shuffled version, I suppose, could be used for translation).*\r\n\r\nI am aware that you do offer the \"colossal\" Common Crawl dataset already, but this has the advantage to be available in many subcorpora for different languages.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/866\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/865","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/865\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/865\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/865\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/865","id":745430497,"node_id":"MDU6SXNzdWU3NDU0MzA0OTc=","number":865,"title":"Have Trouble importing `datasets`","user":{"login":"forest1988","id":2755894,"node_id":"MDQ6VXNlcjI3NTU4OTQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2755894?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/forest1988","html_url":"https:\/\/github.com\/forest1988","followers_url":"https:\/\/api.github.com\/users\/forest1988\/followers","following_url":"https:\/\/api.github.com\/users\/forest1988\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/forest1988\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/forest1988\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/forest1988\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/forest1988\/orgs","repos_url":"https:\/\/api.github.com\/users\/forest1988\/repos","events_url":"https:\/\/api.github.com\/users\/forest1988\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/forest1988\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I'm sorry, this was a problem with my environment.\r\nNow that I have identified the cause of environmental dependency, I would like to fix it and try it.\r\nExcuse me for making a noise."],"created_at":1605686681000,"updated_at":1605687395000,"closed_at":1605687395000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I'm failing to import transformers (v4.0.0-dev), and tracing the cause seems to be failing to import datasets.\r\n\r\nI cloned the newest version of datasets (master branch), and do `pip install -e .`.\r\n\r\nThen, `import datasets` causes the error below.\r\n\r\n```\r\n~\/workspace\/Clone\/datasets\/src\/datasets\/utils\/file_utils.py in <module>\r\n 116 sys.path.append(str(HF_MODULES_CACHE))\r\n 117 \r\n--> 118 os.makedirs(HF_MODULES_CACHE, exist_ok=True)\r\n 119 if not os.path.exists(os.path.join(HF_MODULES_CACHE, \"__init__.py\")):\r\n 120 with open(os.path.join(HF_MODULES_CACHE, \"__init__.py\"), \"w\"):\r\n\r\n~\/.pyenv\/versions\/anaconda3-2020.07\/lib\/python3.8\/os.py in makedirs(name, mode, exist_ok)\r\n 221 return\r\n 222 try:\r\n--> 223 mkdir(name, mode)\r\n 224 except OSError:\r\n 225 # Cannot rely on checking for EEXIST, since the operating system \r\n\r\nFileNotFoundError: [Errno 2] No such file or directory: 
'<MY_HOME_DIRECTORY>\/.cache\/huggingface\/modules'\r\n```\r\n\r\nThe error occurs in `os.makedirs` in `file_utils.py`, even though `exist_ok = True` option is set.\r\n(I use Python 3.8, so `exist_ok` is expected to work.)\r\n\r\nI've checked some environment variables, and they are set as below.\r\n\r\n```\r\n*** NameError: name 'HF_MODULES_CACHE' is not defined\r\n*** NameError: name 'hf_cache_home' is not defined\r\n*** NameError: name 'XDG_CACHE_HOME' is not defined\r\n```\r\n\r\nShould I set some environment variables before using this library?\r\nAnd, do you have any idea why \"No such file or directory\" occurs even though the `exist_ok = True` option is set?\r\n\r\nThank you in advance.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/865\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/864","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/864\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/864\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/864\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/864","id":745322357,"node_id":"MDU6SXNzdWU3NDUzMjIzNTc=","number":864,"title":"Unable to download cnn_dailymail dataset","user":{"login":"rohitashwa1907","id":46031058,"node_id":"MDQ6VXNlcjQ2MDMxMDU4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/46031058?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rohitashwa1907","html_url":"https:\/\/github.com\/rohitashwa1907","followers_url":"https:\/\/api.github.com\/users\/rohitashwa1907\/followers","following_url":"https:\/\/api.github.com\/users\/rohitashwa1907\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rohitashwa1907\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rohitashwa1907\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rohitashwa1907\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rohitashwa1907\/orgs","repos_url":"https:\/\/api.github.com\/users\/rohitashwa1907\/repos","events_url":"https:\/\/api.github.com\/users\/rohitashwa1907\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rohitashwa1907\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the 
library"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Same error here!\r\n","Same here! My kaggle notebook stopped working like yesterday. It's strange because I have fixed version of datasets==1.1.2","I'm looking at it right now","I couldn't reproduce unfortunately. I tried\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"cnn_dailymail\", \"3.0.0\", download_mode=\"force_redownload\")\r\n```\r\nand it worked fine on both my env (python 3.7.2) and colab (python 3.6.9)\r\n\r\nMaybe there was an issue with the google drive download link of the dataset ?\r\nAre you still having the issue ? If so could your give me more info about your python and requests version ?","No, It's working fine now. Very strange. Here are my python and request versions\r\n\r\nrequests 2.24.0\r\nPython 3.8.2","It's working as expected. 
Closing the issue \r\n\r\nThanks everybody."],"created_at":1605674282000,"updated_at":1605849731000,"closed_at":1605849730000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"### Script to reproduce the error\r\n```\r\nfrom datasets import load_dataset\r\n\r\ntrain_dataset = load_dataset(\"cnn_dailymail\", \"3.0.0\", split= 'train[:10%')\r\nvalid_dataset = load_dataset(\"cnn_dailymail\",\"3.0.0\", split=\"validation[:5%]\")\r\n```\r\n\r\n\r\n### Error\r\n```\r\n---------------------------------------------------------------------------\r\nNotADirectoryError Traceback (most recent call last)\r\n<ipython-input-8-47c39c228935> in <module>()\r\n 1 from datasets import load_dataset\r\n 2 \r\n----> 3 train_dataset = load_dataset(\"cnn_dailymail\", \"3.0.0\", split= 'train[:10%')\r\n 4 valid_dataset = load_dataset(\"cnn_dailymail\",\"3.0.0\", split=\"validation[:5%]\")\r\n\r\n5 frames\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)\r\n 609 download_config=download_config,\r\n 610 download_mode=download_mode,\r\n--> 611 ignore_verifications=ignore_verifications,\r\n 612 )\r\n 613 \r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 469 if not downloaded_from_gcs:\r\n 470 self._download_and_prepare(\r\n--> 471 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 472 )\r\n 473 # Sync info\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 524 split_dict = SplitDict(dataset_name=self.name)\r\n 525 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 526 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 527 \r\n 528 # Checksums verification\r\n\r\n\/root\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/cnn_dailymail\/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602\/cnn_dailymail.py in _split_generators(self, dl_manager)\r\n 252 def _split_generators(self, dl_manager):\r\n 253 dl_paths = dl_manager.download_and_extract(_DL_URLS)\r\n--> 254 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN)\r\n 255 # Generate shared vocabulary\r\n 256 \r\n\r\n\/root\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/cnn_dailymail\/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602\/cnn_dailymail.py in _subset_filenames(dl_paths, split)\r\n 153 else:\r\n 154 logging.fatal(\"Unsupported split: %s\", split)\r\n--> 155 cnn = _find_files(dl_paths, \"cnn\", urls)\r\n 156 dm = _find_files(dl_paths, \"dm\", urls)\r\n 157 return cnn + dm\r\n\r\n\/root\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/cnn_dailymail\/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602\/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)\r\n 132 else:\r\n 133 logging.fatal(\"Unsupported publisher: %s\", publisher)\r\n--> 134 files = sorted(os.listdir(top_dir))\r\n 135 \r\n 136 ret_files = []\r\n\r\nNotADirectoryError: [Errno 20] Not a directory: 
'\/root\/.cache\/huggingface\/datasets\/downloads\/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b\/cnn\/stories'\r\n\r\n```\r\n\r\nThanks for any suggestions.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/864\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/863","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/863\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/863\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/863\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/863","id":744954534,"node_id":"MDExOlB1bGxSZXF1ZXN0NTIyNTk0Mjg1","number":863,"title":"Add clear_cache parameter in the test command","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1605635549000,"updated_at":1605710665000,"closed_at":1605710664000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/863","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/863","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/863.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/863.patch"},"body":"For certain datasets like OSCAR #348 there are lots of different configurations and each one of them can take a lot of disk space.\r\n\r\nI added a `--clear_cache` flag to the `datasets-cli test` command to be able to clear the cache after each configuration test to avoid filling up the disk. 
It should enable an easier generation for the `dataset_infos.json` file for OSCAR.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/863\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/862","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/862\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/862\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/862\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/862","id":744906131,"node_id":"MDExOlB1bGxSZXF1ZXN0NTIyNTUzMzY1","number":862,"title":"Update head requests","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1605631746000,"updated_at":1605710633000,"closed_at":1605710630000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/862","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/862","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/862.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/862.patch"},"body":"Get requests and Head requests didn't have the same parameters.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/862\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/861","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/861\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/861\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/861\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/861","id":744753458,"node_id":"MDU6SXNzdWU3NDQ3NTM0NTg=","number":861,"title":"Possible Bug: Small training\/dataset file creates gigantic 
output","user":{"login":"NebelAI","id":7240417,"node_id":"MDQ6VXNlcjcyNDA0MTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7240417?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/NebelAI","html_url":"https:\/\/github.com\/NebelAI","followers_url":"https:\/\/api.github.com\/users\/NebelAI\/followers","following_url":"https:\/\/api.github.com\/users\/NebelAI\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/NebelAI\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/NebelAI\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/NebelAI\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/NebelAI\/orgs","repos_url":"https:\/\/api.github.com\/users\/NebelAI\/repos","events_url":"https:\/\/api.github.com\/users\/NebelAI\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/NebelAI\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"},{"id":1935892912,"node_id":"MDU6TGFiZWwxOTM1ODkyOTEy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/question","name":"question","color":"d876e3","default":true,"description":"Further information is requested"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/r
eceived_events","type":"User","site_admin":false}],"milestone":null,"comments":["The preprocessing tokenizes the input text. Tokenization outputs `input_ids`, `attention_mask`, `token_type_ids` and `special_tokens_mask`. All those are of length`max_seq_length` because of padding. Therefore for each sample it generate 4 *`max_seq_length` integers. Currently they're all saved as int64. This is why the tokenization takes so much space.\r\n\r\nI'm sure we can optimize that though\r\nWhat do you think @sgugger ?","First I think we should disable padding in the dataset processing and let the data collator do it.\r\n\r\nThen I'm wondering if you need attention_mask and token_type_ids at this point ?\r\n\r\nFinally we can also specify the output feature types at this line https:\/\/github.com\/huggingface\/transformers\/blob\/master\/examples\/language-modeling\/run_mlm.py#L280 to use more optimized integer precisions for the output. Maybe something like:\r\n- input_ids: uint16 or uint32\r\n- token_type_ids: uint8 or bool\r\n- attention_mask: bool\r\n- special_tokens_mask: bool\r\n\r\nAlso IMO these changes are all on the `transformers` side. Maybe we should discuss on the `transformers` repo","> First I think we should disable padding in the dataset processing and let the data collator do it.\r\n\r\nNo, you can't do that on TPUs as dynamic shapes will result in a very slow training. The script can however be tweaked to use the `PaddingDataCollator` with a fixed max length instead of dynamic batching.\r\n\r\nFor the other optimizations, they can be done by changing the script directly for each user's use case. Not sure we can find something that is general enough to be in transformers or the examples script.","Oh yes right..\r\nDo you think that a lazy map feature on the `datasets` side could help to avoid storing padded tokenized texts then ?","I think I can do the tweak mentioned above with the data collator as short fix (but fully focused on v4 right now so that will be for later this week, beginning of next week :-) ).\r\nIf it doesn't hurt performance to tokenize on the fly, that would clearly be the long-term solution however!","> Hey guys,\r\n> \r\n> I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + dataets_ (actually using this example script by your team: https:\/\/github.com\/huggingface\/transformers\/blob\/master\/examples\/language-modeling\/run_mlm.py). It was supposed to be a first test with a small 5 GB raw text file but I can't even end the preprocessing handled by datasets because this tiny 5 GB text file becomes more than 1 TB when processing. My system was running out of space and crashed prematurely.\r\n> \r\n> I've done training from scratch via Google's bert repo in the past and I can remember that the resulting pretraining data can become quite big. But 5 GB becoming 1 TB was never the case. Is this considered normal or is it a bug?\r\n> \r\n> I've used the following CMD:\r\n> `python xla_spawn.py --num_cores=8 run_mlm.py --model_type bert --config_name config.json --tokenizer_name tokenizer.json --train_file dataset_full.txt --do_train --output_dir out --max_steps 500000 --save_steps 2500 --save_total_limit 2 --prediction_loss_only --line_by_line --max_seq_length 128 --pad_to_max_length --preprocessing_num_workers 16 --per_device_train_batch_size 128 --overwrite_output_dir --debug`\r\n\r\nIt's actually because of the parameter 'preprocessing_num_worker' when using TPU. 
\r\nI am also planning to have my model trained on the google TPU with a 11gb text corpus. With x8 cores enabled, each TPU core has its own dataset. When not using distributed training, the preprocessed file is about 77gb. On the opposite, if enable xla, the file produced will easily consume all my free space(more than 220gb, I think it will be, in the end, around 600gb ). \r\nSo I think that's maybe where the problem came from. \r\n\r\nIs there any possibility that all of the cores share the same preprocess dataset?\r\n\r\n@sgugger @RammMaschine ","Hi @NebelAI, we have optimized Datasets' disk usage in the latest release v1.5.\r\n\r\nFeel free to update your Datasets version\r\n```shell\r\npip install -U datasets\r\n```\r\nand see if it better suits your needs."],"created_at":1605620939000,"updated_at":1617113044000,"closed_at":1616414695000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hey guys,\r\n\r\nI was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + dataets_ (actually using this example script by your team: https:\/\/github.com\/huggingface\/transformers\/blob\/master\/examples\/language-modeling\/run_mlm.py). It was supposed to be a first test with a small 5 GB raw text file but I can't even end the preprocessing handled by datasets because this tiny 5 GB text file becomes more than 1 TB when processing. My system was running out of space and crashed prematurely.\r\n\r\nI've done training from scratch via Google's bert repo in the past and I can remember that the resulting pretraining data can become quite big. But 5 GB becoming 1 TB was never the case. Is this considered normal or is it a bug?\r\n\r\nI've used the following CMD:\r\n`python xla_spawn.py --num_cores=8 run_mlm.py --model_type bert --config_name config.json --tokenizer_name tokenizer.json --train_file dataset_full.txt --do_train --output_dir out --max_steps 500000 --save_steps 2500 --save_total_limit 2 --prediction_loss_only --line_by_line --max_seq_length 128 --pad_to_max_length --preprocessing_num_workers 16 --per_device_train_batch_size 128 --overwrite_output_dir --debug`\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/861\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/860","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/860\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/860\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/860\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/860","id":744750691,"node_id":"MDU6SXNzdWU3NDQ3NTA2OTE=","number":860,"title":"wmt16 cs-en does not donwload 
","user":{"login":"rabeehk","id":6278280,"node_id":"MDQ6VXNlcjYyNzgyODA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6278280?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rabeehk","html_url":"https:\/\/github.com\/rabeehk","followers_url":"https:\/\/api.github.com\/users\/rabeehk\/followers","following_url":"https:\/\/api.github.com\/users\/rabeehk\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rabeehk\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rabeehk\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rabeehk\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rabeehk\/orgs","repos_url":"https:\/\/api.github.com\/users\/rabeehk\/repos","events_url":"https:\/\/api.github.com\/users\/rabeehk\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rabeehk\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1605620735000,"updated_at":1606484824000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI am trying with wmt16, cs-en pair, thanks for the help, perhaps similar to the ro-en issue. thanks\r\n\r\n split=\"train\", n_obs=data_args.n_train) for task in data_args.task}\r\n File \"finetune_t5_trainer.py\", line 109, in <dictcomp>\r\n split=\"train\", n_obs=data_args.n_train) for task in data_args.task}\r\n File \"\/home\/rabeeh\/internship\/seq2seq\/tasks\/tasks.py\", line 82, in get_dataset\r\n dataset = load_dataset(\"wmt16\", self.pair, split=split)\r\n File \"\/opt\/conda\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 611, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"\/opt\/conda\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 476, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/opt\/conda\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 531, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"\/home\/rabeeh\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/wmt16\/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308\/wmt_utils.py\", line 755, in _split_generators\r\n downloaded_files = dl_manager.download_and_extract(urls_to_download)\r\n File \"\/opt\/conda\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/utils\/download_manager.py\", line 254, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"\/opt\/conda\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/utils\/download_manager.py\", line 179, in download\r\n num_proc=download_config.num_proc,\r\n File \"\/opt\/conda\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/utils\/py_utils.py\", line 225, in map_nested\r\n _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)\r\n File 
\"\/opt\/conda\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/utils\/py_utils.py\", line 225, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)\r\n File \"\/opt\/conda\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/utils\/py_utils.py\", line 181, in _single_map_nested\r\n mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]\r\n File \"\/opt\/conda\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/utils\/py_utils.py\", line 181, in <listcomp>\r\n mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]\r\n File \"\/opt\/conda\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/utils\/py_utils.py\", line 163, in _single_map_nested\r\n return function(data_struct)\r\n File \"\/opt\/conda\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 308, in cached_path\r\n use_etag=download_config.use_etag,\r\n File \"\/opt\/conda\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 475, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach http:\/\/www.statmt.org\/wmt13\/training-parallel-commoncrawl.tgz","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/860\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/859","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/859\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/859\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/859\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/859","id":743917091,"node_id":"MDExOlB1bGxSZXF1ZXN0NTIxNzI4MDM4","number":859,"title":"Integrate file_lock inside the lib for better logging 
control","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1605539619000,"updated_at":1605546404000,"closed_at":1605546402000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/859","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/859","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/859.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/859.patch"},"body":"Previously the locking system of the lib was based on the file_lock package. However as noticed in #812 there were too many logs printed even when the datasets logging was set to warnings or errors.\r\n\r\nFor example\r\n```python\r\nimport logging\r\nlogging.basicConfig(level=logging.INFO)\r\n\r\nimport datasets\r\ndatasets.set_verbosity_warning()\r\ndatasets.load_dataset(\"squad\")\r\n```\r\nwould still log the file lock events:\r\n```\r\nINFO:filelock:Lock 5737989232 acquired on \/Users\/quentinlhoest\/.cache\/huggingface\/datasets\/44801f118d500eff6114bfc56ab4e6def941f1eb14b70ac1ecc052e15cdac49d.85f43de978b9b25921cb78d7a2f2b350c04acdbaedb9ecb5f7101cd7c0950e68.py.lock\r\nINFO:filelock:Lock 5737989232 released on \/Users\/quentinlhoest\/.cache\/huggingface\/datasets\/44801f118d500eff6114bfc56ab4e6def941f1eb14b70ac1ecc052e15cdac49d.85f43de978b9b25921cb78d7a2f2b350c04acdbaedb9ecb5f7101cd7c0950e68.py.lock\r\nINFO:filelock:Lock 4393489968 acquired on \/Users\/quentinlhoest\/.cache\/huggingface\/datasets\/_Users_quentinlhoest_.cache_huggingface_datasets_squad_plain_text_1.0.0_1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41.lock\r\nINFO:filelock:Lock 4393489968 released on \/Users\/quentinlhoest\/.cache\/huggingface\/datasets\/_Users_quentinlhoest_.cache_huggingface_datasets_squad_plain_text_1.0.0_1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41.lock\r\nINFO:filelock:Lock 4393490808 acquired on \/Users\/quentinlhoest\/.cache\/huggingface\/datasets\/_Users_quentinlhoest_.cache_huggingface_datasets_squad_plain_text_1.0.0_1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41.lock\r\nReusing dataset squad (\/Users\/quentinlhoest\/.cache\/huggingface\/datasets\/squad\/plain_text\/1.0.0\/1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41)\r\nINFO:filelock:Lock 4393490808 released on 
\/Users\/quentinlhoest\/.cache\/huggingface\/datasets\/_Users_quentinlhoest_.cache_huggingface_datasets_squad_plain_text_1.0.0_1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41.lock\r\n```\r\n\r\nWith the integration of file_lock in the library, the ouput is much cleaner:\r\n```\r\nReusing dataset squad (\/Users\/quentinlhoest\/.cache\/huggingface\/datasets\/squad\/plain_text\/1.0.0\/1244d044b266a5e4dbd4174d23cb995eead372fbca31a03edc3f8a132787af41)\r\n```\r\n\r\nSince the file_lock package is only a 450 lines file I think it's fine to have it inside the lib.\r\n\r\nFix #812 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/859\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/858","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/858\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/858\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/858\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/858","id":743904516,"node_id":"MDExOlB1bGxSZXF1ZXN0NTIxNzE3ODQ4","number":858,"title":"Add SemEval-2010 task 8","user":{"login":"JoelNiklaus","id":3775944,"node_id":"MDQ6VXNlcjM3NzU5NDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3775944?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JoelNiklaus","html_url":"https:\/\/github.com\/JoelNiklaus","followers_url":"https:\/\/api.github.com\/users\/JoelNiklaus\/followers","following_url":"https:\/\/api.github.com\/users\/JoelNiklaus\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JoelNiklaus\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JoelNiklaus\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JoelNiklaus\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JoelNiklaus\/orgs","repos_url":"https:\/\/api.github.com\/users\/JoelNiklaus\/repos","events_url":"https:\/\/api.github.com\/users\/JoelNiklaus\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JoelNiklaus\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Added dummy data and encoding to open(). Now everything should be fine, hopefully :)"],"created_at":1605538677000,"updated_at":1606411735000,"closed_at":1606411735000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/858","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/858","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/858.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/858.patch"},"body":"Hi,\r\nI don't know how to add dummy data, since I create the validation set out of the last 1000 examples of the train set. 
If you have a suggestion, I am happy to implement it.\r\nCheers,\r\nJoel","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/858\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/857","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/857\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/857\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/857\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/857","id":743863214,"node_id":"MDExOlB1bGxSZXF1ZXN0NTIxNjg0ODIx","number":857,"title":"Use pandas reader in csv","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1605535545000,"updated_at":1605807340000,"closed_at":1605807338000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/857","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/857","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/857.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/857.patch"},"body":"The pyarrow CSV reader has issues that the pandas one doesn't (see #836 ).\r\nTo fix that I switched to the pandas csv reader.\r\nThe new reader is compatible with all the pandas parameters to read csv files.\r\nMoreover it reads csv by chunk in order to save RAM, while the pyarrow one loads everything in memory.\r\n\r\nFix #836 \r\nFix #794 \r\n\r\nBreaking: now all the parameters to read to csv file can be used in the `load_dataset` kwargs when loading csv, and the previous pyarrow objects `pyarrow.csv.ReadOptions`, `pyarrow.csv.ParseOptions` and `pyarrow.csv.ConvertOptions` are not used anymore.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/857\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/856","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/856\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/856\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/856\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/856","id":743799239,"node_id":"MDExOlB1bGxSZXF1ZXN0NTIxNjMzNTYz","number":856,"title":"Add open book corpus","user":{"login":"vblagoje","id":458335,"node_id":"MDQ6VXNlcjQ1ODMzNQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/458335?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vblagoje","html_url":"https:\/\/github.com\/vblagoje","followers_url":"https:\/\/api.github.com\/users\/vblagoje\/followers","following_url":"https:\/\/api.github.com\/users\/vblagoje\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vblagoje\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vblagoje\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vblagoje\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vblagoje\/orgs","repos_url":"https:\/\/api.github.com\/users\/vblagoje\/repos","events_url":"https:\/\/api.github.com\/users\/vblagoje\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vblagoje\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq I fixed issues except for the dummy_data zip file. But I think I know why is it happening. So when unzipping dummy_data.zip it gets save in \/tmp directory where glob doesn't pick it up. For regular downloads, the archive gets unzipped in ~\/.cache\/huggingface. Could that be a reason?","Nice thanks :)\r\n\r\nWhen testing with the dummy data, the `download_manager.download_and_extract()` call returns the path to the unzipped dummy_data.zip archive. Therefore glob should be able to find your dummy .epub.txt file","@lhoestq I understand but for some reason, it is not happening. I added logs to see where dummy_data.zip gets unzipped in \/tmp but I suppose when the test process finishes that tmp is gone. I also tried to glob anything in _generate_examples from that directory using \/* instead of **\/*.epub.txt and nothing is being returned. Always an empty array. ","Ok weird ! I can take a look tomorrow if you want","Please do, I will take a fresh look as well. 
","In _generate_examples_ I wrote the following:\r\n```\r\nglob_target = os.path.join(directory, \"**\/*.epub.txt\")\r\nprint(f\"Glob target {glob_target }\")\r\n```\r\n\r\nAnd here is the test failure:\r\n\r\n\r\n========================================================================================== FAILURES ===========================================================================================\r\n________________________________________________________________ LocalDatasetTest.test_load_dataset_all_configs_bookcorpusopen ________________________________________________________________\r\n\r\nself = <tests.test_dataset_common.LocalDatasetTest testMethod=test_load_dataset_all_configs_bookcorpusopen>, dataset_name = 'bookcorpusopen'\r\n\r\n @slow\r\n def test_load_dataset_all_configs(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True)\r\n\r\ntests\/test_dataset_common.py:232: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\ntests\/test_dataset_common.py:193: in check_load_dataset\r\n self.parent.assertTrue(len(dataset[split]) > 0)\r\nE AssertionError: False is not true\r\n------------------------------------------------------------------------------------ Captured stdout call -------------------------------------------------------------------------------------\r\nDownloading and preparing dataset book_corpus_open\/plain_text (download: 1.00 MiB, generated: 1.00 MiB, post-processed: Unknown size, total: 2.00 MiB) to \/var\/folders\/y_\/6k6zhblx0k9dsdz5nd_z9x5c0000gp\/T\/tmpmuu0_ln2\/book_corpus_open\/plain_text\/1.0.0...\r\nGlob target \/var\/folders\/y_\/6k6zhblx0k9dsdz5nd_z9x5c0000gp\/T\/tmpm6tpvb3f\/extracted\/d953b414cceb4fe3985eeaf68aec2f4435f166b2edf66863d805e3825b7d336b\/dummy_data\/**\/*.epub.txt\r\nDataset book_corpus_open downloaded and prepared to \/var\/folders\/y_\/6k6zhblx0k9dsdz5nd_z9x5c0000gp\/T\/tmpmuu0_ln2\/book_corpus_open\/plain_text\/1.0.0. Subsequent calls will reuse this data.\r\n------------------------------------------------------------------------------------ Captured stderr call -------------------------------------------------------------------------------------\r\n \r\n","And when I do os.listdir on the given directory I get:\r\n\r\n glob_target = os.path.join(directory, \"**\/*.epub.txt\")\r\n print(f\"Glob target {glob_target }\")\r\n> print(os.listdir(path=directory))\r\nE FileNotFoundError: [Errno 2] No such file or directory: '\/var\/folders\/y_\/6k6zhblx0k9dsdz5nd_z9x5c0000gp\/T\/tmpbu_aom5q\/extracted\/d953b414cceb4fe3985eeaf68aec2f4435f166b2edf66863d805e3825b7d336b\/dummy_data'\r\n","Thanks for the info, I'm looking at it right now","Ok found the issue !\r\n\r\nThe dummy_data.zip file must be an archive of a folder named dummy_data. Currently the dummy_data.zip is an archive of a folder named book1. In order to have a valid dummy_data.zip file you must first take the dummy book1 folder, place it inside a folder named dummy_data and then compress the dummy_data folder to get dummy_data.zip","Excellent, I am on it @lhoestq ","> Awesome thank you so much for adding it :)\r\n\r\nYou're welcome, ok all tests are green now! I needed it asap as well. Thanks for your help @lhoestq .","I just wanted to say thank you to everyone involved in making this happen! 
I was certain that I would have to add bookcorpusnew myself, but then @vblagoje came along and did it, and @lhoestq gave some great support in a timely fashion.\r\n\r\nBy the way @vblagoje, are you on Twitter? I'm https:\/\/twitter.com\/theshawwn if you'd like to DM and say hello. Once again, thanks for doing this!\r\n\r\nI'll mention over at https:\/\/github.com\/soskek\/bookcorpus\/issues\/27 that this was merged.","Thank you Shawn. You did all the heavy lifting ;-)","@vblagoje Would you be interested in adding books3 as well? https:\/\/twitter.com\/theshawwn\/status\/1320282149329784833\r\n\r\nHuggingface is interested and asked me to add it, but I had a bit of trouble during setup (https:\/\/github.com\/huggingface\/datasets\/issues\/790) and never got around to it. At this point you have much more experience than I do with the datasets lib.\r\n\r\nIt *seems* like it might simply be a matter of copy-pasting this PR, changing books1 to books3, and possibly trimming off the leading paths -- each book is at e.g. the-eye\/Books\/Bibliotok\/J\/Jurassic Park.epub.txt, which is rather lengthy compared to just the filename -- but the full path is probably fine, so feel free to do the least amount of work that gets the job done. Otherwise I suppose I'll get around to it eventually; thanks again!","@shawwn I'll take a look as soon as I clear my work queue. TBH, I would likely work on making sure HF datasets has all the datasets used to train https:\/\/github.com\/alexa\/bort\/ and these are: Wikipedia, Wiktionary, OpenWebText (Gokaslan and Cohen, 2019), UrbanDictionary, Onel Billion Words (Chelba et al., 2014), the news subset of Common Crawl (Nagel, 2016)10, and Bookcorpus. cc @lhoestq "],"created_at":1605529802000,"updated_at":1605701026000,"closed_at":1605626538000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/856","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/856","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/856.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/856.patch"},"body":"Adds book corpus based on Shawn Presser's [work](https:\/\/github.com\/soskek\/bookcorpus\/issues\/27) @richarddwang, the author of the original BookCorpus dataset, suggested it should be named [OpenBookCorpus](https:\/\/github.com\/huggingface\/datasets\/issues\/486). I named it BookCorpusOpen to be easily located alphabetically. But, of course, we can rename it if needed. \r\n\r\nIt contains 17868 dataset items; each item contains two fields: title and text. The title is the name of the book (just the file name) while the text contains unprocessed book text. Note that bookcorpus is pre-segmented into a sentence while this bookcorpus is not. This is intentional (see https:\/\/github.com\/huggingface\/datasets\/issues\/486) as some users might want to further process the text themselves. \r\n\r\n@lhoestq and others please review this PR thoroughly. 
cc @shawwn ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/856\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/855","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/855\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/855\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/855\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/855","id":743690839,"node_id":"MDExOlB1bGxSZXF1ZXN0NTIxNTQ2Njkx","number":855,"title":"Fix kor nli csv reader","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1605520421000,"updated_at":1605535154000,"closed_at":1605535152000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/855","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/855","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/855.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/855.patch"},"body":"The kor_nli dataset had an issue with the csv reader that was not able to parse the lines correctly. Some lines were merged together for some reason.\r\nI fixed that by iterating through the lines directly instead of using a csv reader.\r\nI also changed the feature names to match the other NLI datasets (i.e. 
use \"premise\", \"hypothesis\", \"label\" features)\r\n\r\nFix #821 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/855\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/854","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/854\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/854\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/854\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/854","id":743675376,"node_id":"MDU6SXNzdWU3NDM2NzUzNzY=","number":854,"title":"wmt16 does not download ","user":{"login":"rabeehk","id":6278280,"node_id":"MDQ6VXNlcjYyNzgyODA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6278280?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rabeehk","html_url":"https:\/\/github.com\/rabeehk","followers_url":"https:\/\/api.github.com\/users\/rabeehk\/followers","following_url":"https:\/\/api.github.com\/users\/rabeehk\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rabeehk\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rabeehk\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rabeehk\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rabeehk\/orgs","repos_url":"https:\/\/api.github.com\/users\/rabeehk\/repos","events_url":"https:\/\/api.github.com\/users\/rabeehk\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rabeehk\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi,I also posted it to the forum, but this is a bug, perhaps it needs to be reported here? thanks ","It looks like the official OPUS server for WMT16 doesn't provide the data files anymore (503 error).\r\nI searched a bit and couldn't find a mirror except maybe http:\/\/nlp.ffzg.hr\/resources\/corpora\/setimes\/ (the data are a cleaned version of the original ones though)\r\nShould we consider replacing the old urls with these ones even though it's not the exact same data ?","The data storage is down at the moment. Sorry. Hopefully, it will come back soon. Apologies for the inconvenience ...","Dear great huggingface team, this is not working yet, I really appreciate some temporary fix on this, I need this for my project and this is time sensitive and I will be grateful for your help on this. ","We have reached out to the OPUS team which is currently working on making the data available again. Cc @jorgtied ","thank you @thomwolf and HuggingFace team for the help. ","OPUS is still down - hopefully back tomorrow.","Hi, this is still down, I would be really grateful if you could ping them one more time. thank you so much. 
","Hi\r\nI am trying with multiple setting of wmt datasets and all failed so far, I need to have at least one dataset working for testing somecodes, and this is really time sensitive, I greatly appreciate letting me know of one translation datasets currently working. thanks ","It is still down, unfortunately. I'm sorry for that. It should come up again later today or tomorrow at the latest if no additional complications will happen.","Hi all, \r\nI pulled a request that fix this issue by replacing urls. \r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/pull\/1901\r\n\r\nThanks!\r\n","It's still down for the wmt."],"created_at":1605519111000,"updated_at":1614222909000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi, I appreciate your help with the following error, thanks \r\n\r\n>>> from datasets import load_dataset\r\n>>> dataset = load_dataset(\"wmt16\", \"ro-en\", split=\"train\")\r\nDownloading and preparing dataset wmt16\/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/root\/.cache\/huggingface\/datasets\/wmt16\/ro-en\/1.0.0\/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308...\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/root\/anaconda3\/envs\/pytorch\/lib\/python3.6\/site-packages\/datasets\/load.py\", line 611, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"\/root\/anaconda3\/envs\/pytorch\/lib\/python3.6\/site-packages\/datasets\/builder.py\", line 476, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/root\/anaconda3\/envs\/pytorch\/lib\/python3.6\/site-packages\/datasets\/builder.py\", line 531, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"\/root\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/wmt16\/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308\/wmt_utils.py\", line 755, in _split_generators\r\n downloaded_files = dl_manager.download_and_extract(urls_to_download)\r\n File \"\/root\/anaconda3\/envs\/pytorch\/lib\/python3.6\/site-packages\/datasets\/utils\/download_manager.py\", line 254, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"\/root\/anaconda3\/envs\/pytorch\/lib\/python3.6\/site-packages\/datasets\/utils\/download_manager.py\", line 179, in download\r\n num_proc=download_config.num_proc,\r\n File \"\/root\/anaconda3\/envs\/pytorch\/lib\/python3.6\/site-packages\/datasets\/utils\/py_utils.py\", line 225, in map_nested\r\n _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)\r\n File \"\/root\/anaconda3\/envs\/pytorch\/lib\/python3.6\/site-packages\/datasets\/utils\/py_utils.py\", line 225, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)\r\n File \"\/root\/anaconda3\/envs\/pytorch\/lib\/python3.6\/site-packages\/datasets\/utils\/py_utils.py\", line 181, in _single_map_nested\r\n mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]\r\n File \"\/root\/anaconda3\/envs\/pytorch\/lib\/python3.6\/site-packages\/datasets\/utils\/py_utils.py\", line 181, in <listcomp>\r\n mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]\r\n File 
\"\/root\/anaconda3\/envs\/pytorch\/lib\/python3.6\/site-packages\/datasets\/utils\/py_utils.py\", line 163, in _single_map_nested\r\n return function(data_struct)\r\n File \"\/root\/anaconda3\/envs\/pytorch\/lib\/python3.6\/site-packages\/datasets\/utils\/file_utils.py\", line 308, in cached_path\r\n use_etag=download_config.use_etag,\r\n File \"\/root\/anaconda3\/envs\/pytorch\/lib\/python3.6\/site-packages\/datasets\/utils\/file_utils.py\", line 475, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach http:\/\/opus.nlpl.eu\/download.php?f=SETIMES\/v2\/tmx\/en-ro.tmx.gz","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/854\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/853","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/853\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/853\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/853\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/853","id":743426583,"node_id":"MDU6SXNzdWU3NDM0MjY1ODM=","number":853,"title":"concatenate_datasets support axis=0 or 1 \uff1f","user":{"login":"renqingcolin","id":12437751,"node_id":"MDQ6VXNlcjEyNDM3NzUx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12437751?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/renqingcolin","html_url":"https:\/\/github.com\/renqingcolin","followers_url":"https:\/\/api.github.com\/users\/renqingcolin\/followers","following_url":"https:\/\/api.github.com\/users\/renqingcolin\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/renqingcolin\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/renqingcolin\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/renqingcolin\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/renqingcolin\/orgs","repos_url":"https:\/\/api.github.com\/users\/renqingcolin\/repos","events_url":"https:\/\/api.github.com\/users\/renqingcolin\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/renqingcolin\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"},{"id":1935892884,"node_id":"MDU6TGFiZWwxOTM1ODkyODg0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/help%20wanted","name":"help wanted","color":"008672","default":true,"description":"Extra attention is needed"},{"id":1935892912,"node_id":"MDU6TGFiZWwxOTM1ODkyOTEy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/question","name":"question","color":"d876e3","default":true,"description":"Further information is 
requested"}],"state":"closed","locked":false,"assignee":{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false},"assignees":[{"login":"albertvillanova","id":8515462,"node_id":"MDQ6VXNlcjg1MTU0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8515462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/albertvillanova","html_url":"https:\/\/github.com\/albertvillanova","followers_url":"https:\/\/api.github.com\/users\/albertvillanova\/followers","following_url":"https:\/\/api.github.com\/users\/albertvillanova\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/albertvillanova\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/albertvillanova\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/albertvillanova\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/albertvillanova\/orgs","repos_url":"https:\/\/api.github.com\/users\/albertvillanova\/repos","events_url":"https:\/\/api.github.com\/users\/albertvillanova\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/albertvillanova\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Unfortunately `concatenate_datasets` only supports concatenating the rows, while what you want to achieve is concatenate the columns.\r\nCurrently to add more columns to a dataset, one must use `map`.\r\nWhat you can do is somehting like this:\r\n```python\r\n# suppose you have datasets d1, d2, d3\r\ndef add_columns(example, index):\r\n example.update(d2[index])\r\n example.update(d3[index])\r\n return example\r\n\r\nfull_dataset = d1.map(add_columns, with_indices=True)\r\n```","Closing this one, feel free to re-open if you have other questions about this issue","That's not really difficult to add, though, no?\r\nI think it can be done without copy.\r\nMaybe let's add it to the roadmap?","Actually it's doable but requires to update the `Dataset._data_files` schema to support this.\r\nI'm re-opening this since we may want to add this in the future","Hi @lhoestq, I would love to help and add this feature if still needed. My plan is to add an axis variable in the `concatenate_datasets` function in `arrow_dataset.py` and when that is set to 1 concatenate columns instead of rows. ","Hi ! 
I would love to see this feature implemented as well :) Thank you for proposing your help !\r\n\r\nHere is a few things about the current implementation:\r\n- A dataset object is a wrapper of one `pyarrow.Table` that contains the data\r\n- Pyarrow offers an API that allows to transform Table objects. For example there are functions like `concat_tables`, `Table.rename_columns`, `Table.add_column` etc.\r\n\r\nTherefore adding columns from another dataset is possible thanks to the pyarrow API and in particular `Table.add_column` :) \r\n\r\nHowever this breaks some features we have regarding pickle. A dataset object can be pickled and unpickled without loading all the data in memory. It is useful for multiprocessing for example. Pickling a dataset object is possible thanks to the `Dataset._data_files` which defines the list of arrow files that will be used to form the final Table (basically all the data from each files are concatenated on axis 0).\r\n\r\nTherefore to be able to add columns to a Dataset and still be able to work with it in a multiprocessing setup, we need to extend this last aspect to be able to reconstruct a Table object from multiple arrow files that are combined in both axis 0 and 1. Currently this reconstruction mechanism only supports axis 0.\r\n\r\nI'm sure we can figure something out that enables users to add columns from another dataset while keeping the multiprocessing support.","@lhoestq, we have two Pull Requests to implement:\r\n- Dataset.add_item: #1870\r\n- Dataset.add_column: #2145\r\nwhich add a single row or column, repectively.\r\n\r\nThe request here is to implement the concatenation of *multiple* rows\/columns. Am I right?\r\n\r\nWe should agree on the API:\r\n- `concatenate_datasets` with `axis`?\r\n- other Dataset method name?","For the API, I like `concatenate_datasets` with `axis` personally :)\r\nFrom a list of `Dataset` objects, it would concatenate them to a new `Dataset` object backed by a `ConcatenationTable`, that is the concatenation of the tables of each input dataset. The concatenation is either on axis=0 (append rows) or on axis=1 (append columns).\r\n\r\nRegarding what we need to implement:\r\nThe axis=0 is already supported and is the current behavior of `concatenate_datasets`.\r\nAlso `add_item` is not needed to implement axis=1 (though it's an awesome addition to this library).\r\n\r\nTo implement axis=1, we either need `add_column` or a `ConcatenationTable` constructor to concatenate tables horizontally.\r\nI have a preference for using a `ConcatenationTable` constructor because this way we can end up with a `ConcatenationTable` with only 1 additional block per table, while `add_column` would add 1 block per new column.\r\n\r\nMaybe we can simply have an equivalent of `ConcatenationTable.from_tables` but for axis=1 ?\r\n`axis` could also be an argument of `ConcatenationTable.from_tables`","@lhoestq I think I guessed your suggestions in advance... \ud83d\ude09 #2151","Cool ! 
Sorry I missed this one ^^\r\nI'm taking a look ;)"],"created_at":1605494783000,"updated_at":1618848438000,"closed_at":1618848438000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I want to achieve the following result\r\n![image](https:\/\/user-images.githubusercontent.com\/12437751\/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png)\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/853\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/852","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/852\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/852\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/852\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/852","id":743396240,"node_id":"MDU6SXNzdWU3NDMzOTYyNDA=","number":852,"title":"wmt cannot be downloaded ","user":{"login":"rabeehk","id":6278280,"node_id":"MDQ6VXNlcjYyNzgyODA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6278280?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rabeehk","html_url":"https:\/\/github.com\/rabeehk","followers_url":"https:\/\/api.github.com\/users\/rabeehk\/followers","following_url":"https:\/\/api.github.com\/users\/rabeehk\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rabeehk\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rabeehk\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rabeehk\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rabeehk\/orgs","repos_url":"https:\/\/api.github.com\/users\/rabeehk\/repos","events_url":"https:\/\/api.github.com\/users\/rabeehk\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rabeehk\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1605488681000,"updated_at":1605519118000,"closed_at":1605519118000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi, I appreciate your help with the following error, thanks \r\n\r\n>>> from datasets import load_dataset\r\n>>> dataset = load_dataset(\"wmt16\", \"ro-en\", split=\"train\")\r\nDownloading and preparing dataset wmt16\/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/root\/.cache\/huggingface\/datasets\/wmt16\/ro-en\/1.0.0\/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308...\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/root\/anaconda3\/envs\/pytorch\/lib\/python3.6\/site-packages\/datasets\/load.py\", line 611, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"\/root\/anaconda3\/envs\/pytorch\/lib\/python3.6\/site-packages\/datasets\/builder.py\", line 476, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 
File \"\/root\/anaconda3\/envs\/pytorch\/lib\/python3.6\/site-packages\/datasets\/builder.py\", line 531, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"\/root\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/wmt16\/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308\/wmt_utils.py\", line 755, in _split_generators\r\n downloaded_files = dl_manager.download_and_extract(urls_to_download)\r\n File \"\/root\/anaconda3\/envs\/pytorch\/lib\/python3.6\/site-packages\/datasets\/utils\/download_manager.py\", line 254, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"\/root\/anaconda3\/envs\/pytorch\/lib\/python3.6\/site-packages\/datasets\/utils\/download_manager.py\", line 179, in download\r\n num_proc=download_config.num_proc,\r\n File \"\/root\/anaconda3\/envs\/pytorch\/lib\/python3.6\/site-packages\/datasets\/utils\/py_utils.py\", line 225, in map_nested\r\n _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)\r\n File \"\/root\/anaconda3\/envs\/pytorch\/lib\/python3.6\/site-packages\/datasets\/utils\/py_utils.py\", line 225, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)\r\n File \"\/root\/anaconda3\/envs\/pytorch\/lib\/python3.6\/site-packages\/datasets\/utils\/py_utils.py\", line 181, in _single_map_nested\r\n mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]\r\n File \"\/root\/anaconda3\/envs\/pytorch\/lib\/python3.6\/site-packages\/datasets\/utils\/py_utils.py\", line 181, in <listcomp>\r\n mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]\r\n File \"\/root\/anaconda3\/envs\/pytorch\/lib\/python3.6\/site-packages\/datasets\/utils\/py_utils.py\", line 163, in _single_map_nested\r\n return function(data_struct)\r\n File \"\/root\/anaconda3\/envs\/pytorch\/lib\/python3.6\/site-packages\/datasets\/utils\/file_utils.py\", line 308, in cached_path\r\n use_etag=download_config.use_etag,\r\n File \"\/root\/anaconda3\/envs\/pytorch\/lib\/python3.6\/site-packages\/datasets\/utils\/file_utils.py\", line 475, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach http:\/\/opus.nlpl.eu\/download.php?f=SETIMES\/v2\/tmx\/en-ro.tmx.gz","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/852\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/851","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/851\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/851\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/851\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/851","id":743343278,"node_id":"MDU6SXNzdWU3NDMzNDMyNzg=","number":851,"title":"Add support for other languages for 
rouge","user":{"login":"alexyalunin","id":23011284,"node_id":"MDQ6VXNlcjIzMDExMjg0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23011284?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/alexyalunin","html_url":"https:\/\/github.com\/alexyalunin","followers_url":"https:\/\/api.github.com\/users\/alexyalunin\/followers","following_url":"https:\/\/api.github.com\/users\/alexyalunin\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/alexyalunin\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/alexyalunin\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/alexyalunin\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/alexyalunin\/orgs","repos_url":"https:\/\/api.github.com\/users\/alexyalunin\/repos","events_url":"https:\/\/api.github.com\/users\/alexyalunin\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/alexyalunin\/received_events","type":"User","site_admin":false},"labels":[{"id":2067400959,"node_id":"MDU6TGFiZWwyMDY3NDAwOTU5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/Metric%20discussion","name":"Metric discussion","color":"d722e8","default":false,"description":"Discussions on the metrics"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@alexyalunin \r\n\r\nI did something similar for others languages.\r\n\r\n[Repo: rouge-metric](https:\/\/github.com\/m3hrdadfi\/rouge-metric)"],"created_at":1605473865000,"updated_at":1622970472000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I calculate rouge with\r\n```\r\nfrom datasets import load_metric\r\nrouge = load_metric(\"rouge\")\r\nrouge_output = rouge.compute(predictions=['\u0442\u0435\u0441\u0442 \u0442\u0435\u0441\u0442 \u043f\u0440\u0438\u0432\u0435\u0442'], references=['\u0442\u0435\u0441\u0442 \u0442\u0435\u0441\u0442 \u043f\u043e\u043a\u0430'], rouge_types=[\r\n \"rouge2\"])[\"rouge2\"].mid\r\nprint(rouge_output)\r\n```\r\nthe result is\r\n`Score(precision=0.0, recall=0.0, fmeasure=0.0)`\r\nIt seems like the `rouge_score` library that this metric uses filters all non-alphanueric latin characters \r\nin `rouge_scorer\/tokenize.py` with `text = re.sub(r\"[^a-z0-9]+\", \" \", six.ensure_str(text))`.\r\nPlease add support for other languages. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/851\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/850","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/850\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/850\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/850\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/850","id":742369419,"node_id":"MDExOlB1bGxSZXF1ZXN0NTIwNTE0MDY3","number":850,"title":"Create ClassLabel for labelling tasks datasets","user":{"login":"jplu","id":959590,"node_id":"MDQ6VXNlcjk1OTU5MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/959590?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jplu","html_url":"https:\/\/github.com\/jplu","followers_url":"https:\/\/api.github.com\/users\/jplu\/followers","following_url":"https:\/\/api.github.com\/users\/jplu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jplu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jplu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jplu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jplu\/orgs","repos_url":"https:\/\/api.github.com\/users\/jplu\/repos","events_url":"https:\/\/api.github.com\/users\/jplu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jplu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq Better?"],"created_at":1605265642000,"updated_at":1605522725000,"closed_at":1605522718000,"author_association":"COLLABORATOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/850","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/850","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/850.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/850.patch"},"body":"This PR adds a specific `ClassLabel` for the datasets that are about a labelling task such as POS, NER or Chunking.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/850\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/849","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/849\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/849\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/849\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/849","id":742263333,"node_id":"MDU6SXNzdWU3NDIyNjMzMzM=","number":849,"title":"Load amazon 
dataset","user":{"login":"bhavitvyamalik","id":19718818,"node_id":"MDQ6VXNlcjE5NzE4ODE4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19718818?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bhavitvyamalik","html_url":"https:\/\/github.com\/bhavitvyamalik","followers_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/followers","following_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/orgs","repos_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/repos","events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bhavitvyamalik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting !\r\nWe plan to show information about the different configs of the datasets on the website, with the corresponding `load_dataset` calls.\r\n\r\nAlso I think the bullet points formatting has been fixed"],"created_at":1605256464000,"updated_at":1605597779000,"closed_at":1605597779000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\nI was going through amazon_us_reviews dataset and found that example API usage given on website is different from the API usage while loading dataset. \r\n\r\nEg. what API usage is on the [website](https:\/\/huggingface.co\/datasets\/amazon_us_reviews) \r\n```\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"amazon_us_reviews\")\r\n```\r\nHow it is when I tried (the error generated does point me to the right direction though)\r\n```\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"amazon_us_reviews\", 'Books_v1_00')\r\n``` \r\nAlso, there is some issue with formatting as it's not showing bullet list in description with new line. 
Can I work on it?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/849\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/848","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/848\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/848\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/848\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/848","id":742240942,"node_id":"MDU6SXNzdWU3NDIyNDA5NDI=","number":848,"title":"Error when concatenate_datasets","user":{"login":"shexuan","id":25664170,"node_id":"MDQ6VXNlcjI1NjY0MTcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25664170?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/shexuan","html_url":"https:\/\/github.com\/shexuan","followers_url":"https:\/\/api.github.com\/users\/shexuan\/followers","following_url":"https:\/\/api.github.com\/users\/shexuan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/shexuan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/shexuan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/shexuan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/shexuan\/orgs","repos_url":"https:\/\/api.github.com\/users\/shexuan\/repos","events_url":"https:\/\/api.github.com\/users\/shexuan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/shexuan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["As you can see in the error the test checks if `indices_mappings_in_memory` is True or not, which is different from the test you do in your script. In a dataset, both the data and the indices mapping can be either on disk or in memory.\r\n\r\nThe indices mapping correspond to a mapping on top of the data table that is used to re-order\/select a sample of the original data table. For example if you do `dataset.train_test_split`, then the resulting train and test datasets will have both an indices mapping to tell which examples are in train and which ones in test.\r\n\r\nBefore saving your datasets on disk, you should call `dataset.flatten_indices()` to remove the indices mapping. It should fix your issue. Under the hood it will create a new data table using the indices mapping. The new data table is going to be a subset of the old one (for example taking only the test set examples), and since the indices mapping will be gone you'll be able to concatenate your datasets.\r\n","> As you can see in the error the test checks if `indices_mappings_in_memory` is True or not, which is different from the test you do in your script. In a dataset, both the data and the indices mapping can be either on disk or in memory.\r\n> \r\n> The indices mapping correspond to a mapping on top of the data table that is used to re-order\/select a sample of the original data table. For example if you do `dataset.train_test_split`, then the resulting train and test datasets will have both an indices mapping to tell which examples are in train and which ones in test.\r\n> \r\n> Before saving your datasets on disk, you should call `dataset.flatten_indices()` to remove the indices mapping. 
It should fix your issue. Under the hood it will create a new data table using the indices mapping. The new data table is going to be a subset of the old one (for example taking only the test set examples), and since the indices mapping will be gone you'll be able to concatenate your datasets.\r\n\r\n`dataset.flatten_indices()` solved my problem, thanks so much!","@lhoestq we can add a mention of `dataset.flatten_indices()` in the error message (no rush, just put it on your TODO list or I can do it when I come at it)","Yup I agree ! And in the docs as well"],"created_at":1605254162000,"updated_at":1605289259000,"closed_at":1605282910000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hello, when I concatenate two dataset loading from disk, I encountered a problem:\r\n```\r\ntest_dataset = load_from_disk('data\/test_dataset')\r\ntrn_dataset = load_from_disk('data\/train_dataset')\r\n\r\ntrain_dataset = concatenate_datasets([trn_dataset, test_dataset])\r\n```\r\nAnd it reported ValueError blow:\r\n```\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-38-74fa525512ca> in <module>\r\n----> 1 train_dataset = concatenate_datasets([trn_dataset, test_dataset])\r\n\r\n\/opt\/miniconda3\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py in concatenate_datasets(dsets, info, split)\r\n 2547 \"However datasets' indices {} come from memory and datasets' indices {} come from disk.\".format(\r\n 2548 [i for i in range(len(dsets)) if indices_mappings_in_memory[i]],\r\n-> 2549 [i for i in range(len(dsets)) if not indices_mappings_in_memory[i]],\r\n 2550 )\r\n 2551 )\r\n\r\nValueError: Datasets' indices should ALL come from memory, or should ALL come from disk.\r\nHowever datasets' indices [1] come from memory and datasets' indices [0] come from disk.\r\n```\r\n\r\nBut it's curious both of my datasets loading from disk, so I check the source code in `arrow_dataset.py` about the Error:\r\n```\r\ntrn_dataset._data_files\r\n# output\r\n[{'filename': 'data\/train_dataset\/csv-train.arrow', 'skip': 0, 'take': 593264}]\r\n\r\ntest_dataset._data_files\r\n# output\r\n[{'filename': 'data\/test_dataset\/csv-test.arrow', 'skip': 0, 'take': 424383}]\r\n\r\nprint([not dset._data_files for dset in [trn_dataset, test_dataset]])\r\n# [False, False]\r\n\r\n# And I tested the code the same as arrow_dataset, but nothing happened\r\ndsets = [trn_dataset, test_dataset]\r\ndsets_in_memory = [not dset._data_files for dset in dsets]\r\nif any(dset_in_memory != dsets_in_memory[0] for dset_in_memory in dsets_in_memory):\r\n raise ValueError(\r\n \"Datasets should ALL come from memory, or should ALL come from disk.\\n\"\r\n \"However datasets {} come from memory and datasets {} come from disk.\".format(\r\n [i for i in range(len(dsets)) if dsets_in_memory[i]],\r\n [i for i in range(len(dsets)) if not dsets_in_memory[i]],\r\n )\r\n )\r\n```\r\n\r\nAny suggestions would be greatly appreciated! 
\r\nThanks!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/848\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/847","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/847\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/847\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/847\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/847","id":742179495,"node_id":"MDU6SXNzdWU3NDIxNzk0OTU=","number":847,"title":"multiprocessing in dataset map \"can only test a child process\"","user":{"login":"timothyjlaurent","id":2000204,"node_id":"MDQ6VXNlcjIwMDAyMDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2000204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/timothyjlaurent","html_url":"https:\/\/github.com\/timothyjlaurent","followers_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/followers","following_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/orgs","repos_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/repos","events_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["It looks like an issue with wandb\/tqdm here.\r\nWe're using the `multiprocess` library instead of the `multiprocessing` builtin python package to support various types of mapping functions. 
Maybe there's some sort of incompatibility.\r\n\r\nCould you make a minimal script to reproduce or a google colab ?","hi facing the same issue here - \r\n\r\n`AssertionError: Caught AssertionError in DataLoader worker process 0.\r\nOriginal Traceback (most recent call last):\r\n File \"\/usr\/lib\/python3.6\/logging\/__init__.py\", line 996, in emit\r\n stream.write(msg)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/wandb\/sdk\/lib\/redirect.py\", line 100, in new_write\r\n cb(name, data)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/wandb\/sdk\/wandb_run.py\", line 723, in _console_callback\r\n self._backend.interface.publish_output(name, data)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/wandb\/sdk\/interface\/interface.py\", line 153, in publish_output\r\n self._publish_output(o)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/wandb\/sdk\/interface\/interface.py\", line 158, in _publish_output\r\n self._publish(rec)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/wandb\/sdk\/interface\/interface.py\", line 456, in _publish\r\n if self._process and not self._process.is_alive():\r\n File \"\/usr\/lib\/python3.6\/multiprocessing\/process.py\", line 134, in is_alive\r\n assert self._parent_pid == os.getpid(), 'can only test a child process'\r\nAssertionError: can only test a child process\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/torch\/utils\/data\/_utils\/worker.py\", line 198, in _worker_loop\r\n data = fetcher.fetch(index)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/torch\/utils\/data\/_utils\/fetch.py\", line 44, in fetch\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/torch\/utils\/data\/_utils\/fetch.py\", line 44, in <listcomp>\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"<ipython-input-8-a4d9a08d114e>\", line 20, in __getitem__\r\n return_token_type_ids=True\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/transformers\/tokenization_utils_base.py\", line 2405, in encode_plus\r\n **kwargs,\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/transformers\/tokenization_utils_base.py\", line 2125, in _get_padding_truncation_strategies\r\n \"Truncation was not explicitly activated but `max_length` is provided a specific value, \"\r\n File \"\/usr\/lib\/python3.6\/logging\/__init__.py\", line 1320, in warning\r\n self._log(WARNING, msg, args, **kwargs)\r\n File \"\/usr\/lib\/python3.6\/logging\/__init__.py\", line 1444, in _log\r\n self.handle(record)\r\n File \"\/usr\/lib\/python3.6\/logging\/__init__.py\", line 1454, in handle\r\n self.callHandlers(record)\r\n File \"\/usr\/lib\/python3.6\/logging\/__init__.py\", line 1516, in callHandlers\r\n hdlr.handle(record)\r\n File \"\/usr\/lib\/python3.6\/logging\/__init__.py\", line 865, in handle\r\n self.emit(record)\r\n File \"\/usr\/lib\/python3.6\/logging\/__init__.py\", line 1000, in emit\r\n self.handleError(record)\r\n File \"\/usr\/lib\/python3.6\/logging\/__init__.py\", line 917, in handleError\r\n sys.stderr.write('--- Logging error ---\\n')\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/wandb\/sdk\/lib\/redirect.py\", line 100, in new_write\r\n cb(name, data)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/wandb\/sdk\/wandb_run.py\", line 723, in _console_callback\r\n self._backend.interface.publish_output(name, 
data)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/wandb\/sdk\/interface\/interface.py\", line 153, in publish_output\r\n self._publish_output(o)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/wandb\/sdk\/interface\/interface.py\", line 158, in _publish_output\r\n self._publish(rec)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/wandb\/sdk\/interface\/interface.py\", line 456, in _publish\r\n if self._process and not self._process.is_alive():\r\n File \"\/usr\/lib\/python3.6\/multiprocessing\/process.py\", line 134, in is_alive\r\n assert self._parent_pid == os.getpid(), 'can only test a child process'\r\nAssertionError: can only test a child process`\r\n","It looks like this warning : \r\n\"Truncation was not explicitly activated but max_length is provided a specific value, \"\r\nis not handled well by wandb.\r\n\r\nThe error occurs when calling the tokenizer.\r\nMaybe you can try to specify `truncation=True` when calling the tokenizer to remove the warning ?\r\nOtherwise I don't know why wandb would fail on a warning. Maybe one of its logging handlers have some issues with the logging of tokenizers. Maybe @n1t0 knows more about this ?","I'm having a similar issue but when I try to do multiprocessing with the `DataLoader`\r\n\r\nCode to reproduce:\r\n\r\n```\r\nfrom datasets import load_dataset\r\n\r\nbook_corpus = load_dataset('bookcorpus', 'plain_text', cache_dir='\/home\/ad\/Desktop\/bookcorpus', split='train[:1%]')\r\nbook_corpus = book_corpus.map(encode, batched=True, num_proc=20, load_from_cache_file=True, batch_size=5000)\r\nbook_corpus.set_format(type='torch', columns=['text', \"input_ids\", \"attention_mask\", \"token_type_ids\"])\r\n\r\nfrom transformers import DataCollatorForWholeWordMask\r\nfrom transformers import Trainer, TrainingArguments\r\n\r\ndata_collator = DataCollatorForWholeWordMask(\r\n tokenizer=tokenizer, mlm=True, mlm_probability=0.15)\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\".\/mobile_linear_att_8L_128_128_03layerdrop_shared\",\r\n overwrite_output_dir=True,\r\n num_train_epochs=1,\r\n per_device_train_batch_size=64,\r\n save_steps=50,\r\n save_total_limit=2,\r\n logging_first_step=True,\r\n warmup_steps=100,\r\n logging_steps=50,\r\n gradient_accumulation_steps=1,\r\n fp16=True,\r\n **dataloader_num_workers=10**,\r\n)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=book_corpus,\r\n tokenizer=tokenizer)\r\n\r\ntrainer.train()\r\n```\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nAssertionError Traceback (most recent call last)\r\n<timed eval> in <module>\r\n\r\n~\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/transformers\/trainer.py in train(self, model_path, trial)\r\n 869 self.control = self.callback_handler.on_epoch_begin(self.args, self.state, self.control)\r\n 870 \r\n--> 871 for step, inputs in enumerate(epoch_iterator):\r\n 872 \r\n 873 # Skip past any already trained steps if resuming training\r\n\r\n~\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/torch\/utils\/data\/dataloader.py in __next__(self)\r\n 433 if self._sampler_iter is None:\r\n 434 self._reset()\r\n--> 435 data = self._next_data()\r\n 436 self._num_yielded += 1\r\n 437 if self._dataset_kind == _DatasetKind.Iterable and \\\r\n\r\n~\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/torch\/utils\/data\/dataloader.py in _next_data(self)\r\n 1083 else:\r\n 1084 del self._task_info[idx]\r\n-> 1085 return 
self._process_data(data)\r\n 1086 \r\n 1087 def _try_put_index(self):\r\n\r\n~\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/torch\/utils\/data\/dataloader.py in _process_data(self, data)\r\n 1109 self._try_put_index()\r\n 1110 if isinstance(data, ExceptionWrapper):\r\n-> 1111 data.reraise()\r\n 1112 return data\r\n 1113 \r\n\r\n~\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/torch\/_utils.py in reraise(self)\r\n 426 # have message field\r\n 427 raise self.exc_type(message=msg)\r\n--> 428 raise self.exc_type(msg)\r\n 429 \r\n 430 \r\n\r\nAssertionError: Caught AssertionError in DataLoader worker process 0.\r\nOriginal Traceback (most recent call last):\r\n File \"\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/torch\/utils\/data\/_utils\/worker.py\", line 198, in _worker_loop\r\n data = fetcher.fetch(index)\r\n File \"\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/torch\/utils\/data\/_utils\/fetch.py\", line 44, in fetch\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/torch\/utils\/data\/_utils\/fetch.py\", line 44, in <listcomp>\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/datasets\/arrow_dataset.py\", line 1087, in __getitem__\r\n format_kwargs=self._format_kwargs,\r\n File \"\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/datasets\/arrow_dataset.py\", line 1074, in _getitem\r\n format_kwargs=format_kwargs,\r\n File \"\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/datasets\/arrow_dataset.py\", line 890, in _convert_outputs\r\n v = map_nested(command, v, **map_nested_kwargs)\r\n File \"\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/datasets\/utils\/py_utils.py\", line 225, in map_nested\r\n return function(data_struct)\r\n File \"\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/datasets\/arrow_dataset.py\", line 851, in command\r\n return torch.tensor(x, **format_kwargs)\r\n File \"\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/warnings.py\", line 101, in _showwarnmsg\r\n _showwarnmsg_impl(msg)\r\n File \"\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/warnings.py\", line 30, in _showwarnmsg_impl\r\n file.write(text)\r\n File \"\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/wandb\/sdk\/lib\/redirect.py\", line 100, in new_write\r\n cb(name, data)\r\n File \"\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/wandb\/sdk\/wandb_run.py\", line 723, in _console_callback\r\n self._backend.interface.publish_output(name, data)\r\n File \"\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/wandb\/sdk\/interface\/interface.py\", line 153, in publish_output\r\n self._publish_output(o)\r\n File \"\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/wandb\/sdk\/interface\/interface.py\", line 158, in _publish_output\r\n self._publish(rec)\r\n File \"\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/wandb\/sdk\/interface\/interface.py\", line 456, in _publish\r\n if self._process and not self._process.is_alive():\r\n File \"\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/multiprocessing\/process.py\", line 134, in is_alive\r\n assert self._parent_pid == os.getpid(), 'can only test a child process'\r\nAssertionError: can only test a child process\r\n```\r\n\r\nAs a workaround I have commented line 456 and 457 in 
`\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/wandb\/sdk\/interface\/interface.py`","Isn't it more the pytorch warning on the use of non-writable memory for tensor that trigger this here @lhoestq? (since it seems to be a warning triggered in `torch.tensor()`","Yep this time this is a warning from pytorch that causes wandb to not work properly.\r\nCould this by a wandb issue ?","Hi @timothyjlaurent @gaceladri \r\nIf you're running `transformers` from `master` you can try setting the env var `WAND_DISABLE=true` (from https:\/\/github.com\/huggingface\/transformers\/pull\/9896) and try again ?\r\nThis issue might be related to https:\/\/github.com\/huggingface\/transformers\/issues\/9623 ","I have commented the lines that cause my code break. I'm now seeing my reports on Wandb and my code does not break. I am training now, so I will check probably in 6 hours. I suppose that setting wandb disable will work as well."],"created_at":1605247264000,"updated_at":1612198408000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook.\r\n\r\n``` \r\ndef tokenizer_fn(example):\r\n return tokenizer.batch_encode_plus(example['text'])\r\n\r\nds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text'])\r\n```\r\n\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nRemoteTraceback Traceback (most recent call last)\r\nRemoteTraceback: \r\n\"\"\"\r\nTraceback (most recent call last):\r\n File \"\/home\/jovyan\/share\/users\/tlaurent\/invitae-bert\/ve\/lib\/python3.6\/site-packages\/multiprocess\/pool.py\", line 119, in worker\r\n result = (True, func(*args, **kwds))\r\n File \"\/home\/jovyan\/share\/users\/tlaurent\/invitae-bert\/ve\/lib\/python3.6\/site-packages\/datasets\/arrow_dataset.py\", line 156, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"\/home\/jovyan\/share\/users\/tlaurent\/invitae-bert\/ve\/lib\/python3.6\/site-packages\/datasets\/fingerprint.py\", line 163, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"\/home\/jovyan\/share\/users\/tlaurent\/invitae-bert\/ve\/lib\/python3.6\/site-packages\/datasets\/arrow_dataset.py\", line 1510, in _map_single\r\n for i in pbar:\r\n File \"\/home\/jovyan\/share\/users\/tlaurent\/invitae-bert\/ve\/lib\/python3.6\/site-packages\/tqdm\/notebook.py\", line 228, in __iter__\r\n for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs):\r\n File \"\/home\/jovyan\/share\/users\/tlaurent\/invitae-bert\/ve\/lib\/python3.6\/site-packages\/tqdm\/std.py\", line 1186, in __iter__\r\n self.close()\r\n File \"\/home\/jovyan\/share\/users\/tlaurent\/invitae-bert\/ve\/lib\/python3.6\/site-packages\/tqdm\/notebook.py\", line 251, in close\r\n super(tqdm_notebook, self).close(*args, **kwargs)\r\n File \"\/home\/jovyan\/share\/users\/tlaurent\/invitae-bert\/ve\/lib\/python3.6\/site-packages\/tqdm\/std.py\", line 1291, in close\r\n fp_write('')\r\n File \"\/home\/jovyan\/share\/users\/tlaurent\/invitae-bert\/ve\/lib\/python3.6\/site-packages\/tqdm\/std.py\", line 1288, in fp_write\r\n self.fp.write(_unicode(s))\r\n File \"\/home\/jovyan\/share\/users\/tlaurent\/invitae-bert\/ve\/lib\/python3.6\/site-packages\/wandb\/sdk\/lib\/redirect.py\", line 91, in new_write\r\n cb(name, data)\r\n File 
\"\/home\/jovyan\/share\/users\/tlaurent\/invitae-bert\/ve\/lib\/python3.6\/site-packages\/wandb\/sdk\/wandb_run.py\", line 598, in _console_callback\r\n self._backend.interface.publish_output(name, data)\r\n File \"\/home\/jovyan\/share\/users\/tlaurent\/invitae-bert\/ve\/lib\/python3.6\/site-packages\/wandb\/sdk\/interface\/interface.py\", line 146, in publish_output\r\n self._publish_output(o)\r\n File \"\/home\/jovyan\/share\/users\/tlaurent\/invitae-bert\/ve\/lib\/python3.6\/site-packages\/wandb\/sdk\/interface\/interface.py\", line 151, in _publish_output\r\n self._publish(rec)\r\n File \"\/home\/jovyan\/share\/users\/tlaurent\/invitae-bert\/ve\/lib\/python3.6\/site-packages\/wandb\/sdk\/interface\/interface.py\", line 431, in _publish\r\n if self._process and not self._process.is_alive():\r\n File \"\/usr\/lib\/python3.6\/multiprocessing\/process.py\", line 134, in is_alive\r\n assert self._parent_pid == os.getpid(), 'can only test a child process'\r\nAssertionError: can only test a child process\r\n\"\"\"\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/847\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/846","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/846\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/846\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/846\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/846","id":741885174,"node_id":"MDU6SXNzdWU3NDE4ODUxNzQ=","number":846,"title":"Add HoVer multi-hop fact verification dataset","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @yjernite I'm new but wanted to contribute. Has anyone already taken this problem and do you think it is suitable for newbies?","Hi @tenjjin! This dataset is still up for grabs! Here's the link with the guide to add it. 
You should play around with the library first (download and look at a few datasets), then follow the steps here:\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md","Closed by #1399 "],"created_at":1605210946000,"updated_at":1607636853000,"closed_at":1607636853000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** HoVer\r\n- **Description:** https:\/\/twitter.com\/YichenJiang9\/status\/1326954363806429186 contains 20K claim verification examples\r\n- **Paper:** https:\/\/arxiv.org\/abs\/2011.03088\r\n- **Data:** https:\/\/hover-nlp.github.io\/\r\n- **Motivation:** There are still few multi-hop information extraction benchmarks (HotpotQA, which dataset wase based off, notwithstanding)\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/huggingface.co\/docs\/datasets\/share_dataset.html).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/846\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/845","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/845\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/845\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/845\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/845","id":741841350,"node_id":"MDExOlB1bGxSZXF1ZXN0NTIwMDg1NDMy","number":845,"title":"amazon description fields as bullets","user":{"login":"joeddav","id":9353833,"node_id":"MDQ6VXNlcjkzNTM4MzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9353833?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/joeddav","html_url":"https:\/\/github.com\/joeddav","followers_url":"https:\/\/api.github.com\/users\/joeddav\/followers","following_url":"https:\/\/api.github.com\/users\/joeddav\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/joeddav\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/joeddav\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/joeddav\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/joeddav\/orgs","repos_url":"https:\/\/api.github.com\/users\/joeddav\/repos","events_url":"https:\/\/api.github.com\/users\/joeddav\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/joeddav\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1605207041000,"updated_at":1605207054000,"closed_at":1605207054000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/845","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/845","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/845.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/845.patch"},"body":"One more minor formatting change to amazon reviews's description (in addition to #844). 
Just reformatting the fields to display as a bulleted list in markdown.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/845\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/844","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/844\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/844\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/844\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/844","id":741835661,"node_id":"MDExOlB1bGxSZXF1ZXN0NTIwMDgwNzM5","number":844,"title":"add newlines to amazon desc","user":{"login":"joeddav","id":9353833,"node_id":"MDQ6VXNlcjkzNTM4MzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9353833?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/joeddav","html_url":"https:\/\/github.com\/joeddav","followers_url":"https:\/\/api.github.com\/users\/joeddav\/followers","following_url":"https:\/\/api.github.com\/users\/joeddav\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/joeddav\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/joeddav\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/joeddav\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/joeddav\/orgs","repos_url":"https:\/\/api.github.com\/users\/joeddav\/repos","events_url":"https:\/\/api.github.com\/users\/joeddav\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/joeddav\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1605206480000,"updated_at":1605206545000,"closed_at":1605206541000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/844","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/844","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/844.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/844.patch"},"body":"Just a quick formatting fix to hopefully make it render nicer on Viewer","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/844\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/843","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/843\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/843\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/843\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/843","id":741531121,"node_id":"MDU6SXNzdWU3NDE1MzExMjE=","number":843,"title":"use_custom_baseline still produces errors for 
bertscore","user":{"login":"penatbater","id":37921244,"node_id":"MDQ6VXNlcjM3OTIxMjQ0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/37921244?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/penatbater","html_url":"https:\/\/github.com\/penatbater","followers_url":"https:\/\/api.github.com\/users\/penatbater\/followers","following_url":"https:\/\/api.github.com\/users\/penatbater\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/penatbater\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/penatbater\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/penatbater\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/penatbater\/orgs","repos_url":"https:\/\/api.github.com\/users\/penatbater\/repos","events_url":"https:\/\/api.github.com\/users\/penatbater\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/penatbater\/received_events","type":"User","site_admin":false},"labels":[{"id":2067393914,"node_id":"MDU6TGFiZWwyMDY3MzkzOTE0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/metric%20bug","name":"metric bug","color":"25b21e","default":false,"description":"A bug in a metric script"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting ! That's a bug indeed\r\nIf you want to contribute, feel free to fix this issue and open a PR :)","This error is because of a mismatch between `datasets` and `bert_score`. With `datasets=1.1.2` and `bert_score>=0.3.6` it works ok. So `pip install -U bert_score` should fix the problem. ","Thanks for the heads up @pvl and for the PR as well :)","Hello everyone,\r\n\r\nI think the problem is not solved: \r\n\r\n```\r\nfrom datasets import load_metric\r\nmetric=load_metric('bertscore')\r\nmetric.compute(\r\n predictions=predictions,\r\n references=references,\r\n lang='fr',\r\n rescale_with_baseline=True\r\n)\r\nTypeError: get_hash() missing 2 required positional arguments: 'use_custom_baseline' and 'use_fast_tokenizer'\r\n```\r\nThis code is produced using `Python 3.6.9 datasets==1.1.2 and bert_score==0.3.10`","Hi ! 
This has been fixed by https:\/\/github.com\/huggingface\/datasets\/pull\/2770, we'll do a new release soon to make the fix available :)\r\n\r\nIn the meantime please use an older version of `bert_score`"],"created_at":1605181472000,"updated_at":1630404404000,"closed_at":1612880508000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"`metric = load_metric('bertscore')`\r\n`a1 = \"random sentences\"`\r\n`b1 = \"random sentences\"`\r\n`metric.compute(predictions = [a1], references = [b1], lang = 'en')`\r\n\r\n`Traceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/home\/stephen_chan\/.local\/lib\/python3.6\/site-packages\/datasets\/metric.py\", line 393, in compute\r\n output = self._compute(predictions=predictions, references=references, **kwargs)\r\n File \"\/home\/stephen_chan\/.cache\/huggingface\/modules\/datasets_modules\/metrics\/bertscore\/361e597a01a41d6cf95d94bbfb01dea16261687abc0c6c74cc9930f80488f363\/bertscore.py\", line 108, in _compute\r\n hashcode = bert_score.utils.get_hash(model_type, num_layers, idf, rescale_with_baseline)\r\nTypeError: get_hash() missing 1 required positional argument: 'use_custom_baseline'`\r\n\r\nAdding 'use_custom_baseline = False' as an argument produces this error\r\n\r\n`Traceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/home\/stephen_chan\/.local\/lib\/python3.6\/site-packages\/datasets\/metric.py\", line 393, in compute\r\n output = self._compute(predictions=predictions, references=references, **kwargs)\r\nTypeError: _compute() got an unexpected keyword argument 'use_custom_baseline'`\r\n\r\nThis is on Ubuntu 18.04, Python 3.6.9, datasets version 1.1.2","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/843\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/842","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/842\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/842\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/842\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/842","id":741208428,"node_id":"MDU6SXNzdWU3NDEyMDg0Mjg=","number":842,"title":"How to enable `.map()` pre-processing pipelines to support multi-node 
parallelism?","user":{"login":"shangw-nvidia","id":66387198,"node_id":"MDQ6VXNlcjY2Mzg3MTk4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/66387198?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/shangw-nvidia","html_url":"https:\/\/github.com\/shangw-nvidia","followers_url":"https:\/\/api.github.com\/users\/shangw-nvidia\/followers","following_url":"https:\/\/api.github.com\/users\/shangw-nvidia\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/shangw-nvidia\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/shangw-nvidia\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/shangw-nvidia\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/shangw-nvidia\/orgs","repos_url":"https:\/\/api.github.com\/users\/shangw-nvidia\/repos","events_url":"https:\/\/api.github.com\/users\/shangw-nvidia\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/shangw-nvidia\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Right now multiprocessing only runs on single node.\r\n\r\nHowever it's probably possible to extend it to support multi nodes. Indeed we're using the `multiprocess` library from the `pathos` project to do multiprocessing in `datasets`, and `pathos` is made to support parallelism on several nodes. More info about pathos [on the pathos repo](https:\/\/github.com\/uqfoundation\/pathos).\r\n\r\nIf you're familiar with pathos or if you want to give it a try, it could be a nice addition to the library :)"],"created_at":1605146678000,"updated_at":1605223707000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\n\r\nCurrently, multiprocessing can be enabled for the `.map()` stages on a single node. 
However, in the case of multi-node training, (since more than one node would be available) I'm wondering if it's possible to extend the parallel processing among nodes, instead of only 1 node running the `.map()` while the other node is waiting for it to finish?\r\n\r\nThanks!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/842\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/841","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/841\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/841\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/841\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/841","id":740737448,"node_id":"MDU6SXNzdWU3NDA3Mzc0NDg=","number":841,"title":"Can not reuse datasets already downloaded","user":{"login":"jc-hou","id":30210529,"node_id":"MDQ6VXNlcjMwMjEwNTI5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/30210529?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jc-hou","html_url":"https:\/\/github.com\/jc-hou","followers_url":"https:\/\/api.github.com\/users\/jc-hou\/followers","following_url":"https:\/\/api.github.com\/users\/jc-hou\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jc-hou\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jc-hou\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jc-hou\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jc-hou\/orgs","repos_url":"https:\/\/api.github.com\/users\/jc-hou\/repos","events_url":"https:\/\/api.github.com\/users\/jc-hou\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jc-hou\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["It seems the process needs '\/datasets.huggingface.co\/datasets\/datasets\/wikipedia\/wikipedia.py'\r\nWhere and how to assign this ```wikipedia.py``` after I manually download it ?","\r\ndownload the ```wikipedia.py``` at the working directory and go with ```dataset = load_dataset('wikipedia.py', '20200501.en')``` works."],"created_at":1605098535000,"updated_at":1605118636000,"closed_at":1605118636000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hello,\r\nI need to connect to a frontal node (with http proxy, no gpu) before connecting to a gpu node (but no http proxy, so can not use wget so on).\r\nI successfully downloaded and reuse the wikipedia datasets in a frontal node. \r\nWhen I connect to the gpu node, I supposed to use the downloaded datasets from cache, but failed and end with time out error.\r\n\r\nOn frontal node:\r\n```\r\n>>> from datasets import load_dataset\r\n>>> dataset = load_dataset('wikipedia', '20200501.en')\r\nReusing dataset wikipedia (\/linkhome\/rech\/genini01\/uua34ms\/.cache\/huggingface\/datasets\/wikipedia\/20200501.en\/1.0.0\/f92599dfccab29832c442b82870fa8f6983e5b4ebbf5e6e2dcbe894e325339cd)\r\n\/linkhome\/rech\/genini01\/uua34ms\/work\/anaconda3\/envs\/pytorch_pip170_cuda102\/lib\/python3.6\/site-packages\/torch\/cuda\/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. 
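A sketch of the workaround described in the comments of the wikipedia-cache thread above: fetch `wikipedia.py` once from a machine with internet access, place it in the working directory of the GPU node, and point `load_dataset` at the local script so no request to S3 is attempted. The raw-file URL is assumed from the repository layout at the time of the thread.

```python
# On the frontal node (behind the HTTP proxy), fetch the loading script once;
# the raw-file path below is an assumption based on the repo layout back then:
#   wget https://raw.githubusercontent.com/huggingface/datasets/master/datasets/wikipedia/wikipedia.py

# On the GPU node, run from the directory containing wikipedia.py so that
# load_dataset resolves the local script instead of contacting S3:
from datasets import load_dataset

dataset = load_dataset('wikipedia.py', '20200501.en')
```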
Please check that you have an NVIDIA GPU and installed a driver from http:\/\/www.nvidia.com\/Download\/index.aspx (Triggered internally at \/pytorch\/c10\/cuda\/CUDAFunctions.cpp:100.)\r\n return torch._C._cuda_getDeviceCount() > 0\r\n```\r\n\r\nOn gpu node:\r\n```\r\n>>> from datasets import load_dataset\r\n>>> dataset = load_dataset('wikipedia', '20200501.en')\r\nTraceback (most recent call last):\r\n File \"\/linkhome\/rech\/genini01\/uua34ms\/work\/anaconda3\/envs\/pytorch_pip170_cuda102\/lib\/python3.6\/site-packages\/urllib3\/connection.py\", line 160, in _new_conn\r\n (self._dns_host, self.port), self.timeout, **extra_kw\r\n File \"\/linkhome\/rech\/genini01\/uua34ms\/work\/anaconda3\/envs\/pytorch_pip170_cuda102\/lib\/python3.6\/site-packages\/urllib3\/util\/connection.py\", line 84, in create_connection\r\n raise err\r\n File \"\/linkhome\/rech\/genini01\/uua34ms\/work\/anaconda3\/envs\/pytorch_pip170_cuda102\/lib\/python3.6\/site-packages\/urllib3\/util\/connection.py\", line 74, in create_connection\r\n sock.connect(sa)\r\nTimeoutError: [Errno 110] Connection timed out\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"\/linkhome\/rech\/genini01\/uua34ms\/work\/anaconda3\/envs\/pytorch_pip170_cuda102\/lib\/python3.6\/site-packages\/urllib3\/connectionpool.py\", line 677, in urlopen\r\n chunked=chunked,\r\n File \"\/linkhome\/rech\/genini01\/uua34ms\/work\/anaconda3\/envs\/pytorch_pip170_cuda102\/lib\/python3.6\/site-packages\/urllib3\/connectionpool.py\", line 381, in _make_request\r\n self._validate_conn(conn)\r\n File \"\/linkhome\/rech\/genini01\/uua34ms\/work\/anaconda3\/envs\/pytorch_pip170_cuda102\/lib\/python3.6\/site-packages\/urllib3\/connectionpool.py\", line 978, in _validate_conn\r\n conn.connect()\r\n File \"\/linkhome\/rech\/genini01\/uua34ms\/work\/anaconda3\/envs\/pytorch_pip170_cuda102\/lib\/python3.6\/site-packages\/urllib3\/connection.py\", line 309, in connect\r\n conn = self._new_conn()\r\n File \"\/linkhome\/rech\/genini01\/uua34ms\/work\/anaconda3\/envs\/pytorch_pip170_cuda102\/lib\/python3.6\/site-packages\/urllib3\/connection.py\", line 172, in _new_conn\r\n self, \"Failed to establish a new connection: %s\" % e\r\nurllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x14b7b73e4908>: Failed to establish a new connection: [Errno 110] Connection timed out\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"\/linkhome\/rech\/genini01\/uua34ms\/work\/anaconda3\/envs\/pytorch_pip170_cuda102\/lib\/python3.6\/site-packages\/requests\/adapters.py\", line 449, in send\r\n timeout=timeout\r\n File \"\/linkhome\/rech\/genini01\/uua34ms\/work\/anaconda3\/envs\/pytorch_pip170_cuda102\/lib\/python3.6\/site-packages\/urllib3\/connectionpool.py\", line 727, in urlopen\r\n method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]\r\n File \"\/linkhome\/rech\/genini01\/uua34ms\/work\/anaconda3\/envs\/pytorch_pip170_cuda102\/lib\/python3.6\/site-packages\/urllib3\/util\/retry.py\", line 446, in increment\r\n raise MaxRetryError(_pool, url, error or ResponseError(cause))\r\nurllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: \/datasets.huggingface.co\/datasets\/datasets\/wikipedia\/wikipedia.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x14b7b73e4908>: Failed to establish a new 
connection: [Errno 110] Connection timed out',))\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/linkhome\/rech\/genini01\/uua34ms\/work\/anaconda3\/envs\/pytorch_pip170_cuda102\/lib\/python3.6\/site-packages\/datasets\/load.py\", line 590, in load_dataset\r\n path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True\r\n File \"\/linkhome\/rech\/genini01\/uua34ms\/work\/anaconda3\/envs\/pytorch_pip170_cuda102\/lib\/python3.6\/site-packages\/datasets\/load.py\", line 264, in prepare_module\r\n head_hf_s3(path, filename=name, dataset=dataset)\r\n File \"\/linkhome\/rech\/genini01\/uua34ms\/work\/anaconda3\/envs\/pytorch_pip170_cuda102\/lib\/python3.6\/site-packages\/datasets\/utils\/file_utils.py\", line 200, in head_hf_s3\r\n return requests.head(hf_bucket_url(identifier=identifier, filename=filename, use_cdn=use_cdn, dataset=dataset))\r\n File \"\/linkhome\/rech\/genini01\/uua34ms\/work\/anaconda3\/envs\/pytorch_pip170_cuda102\/lib\/python3.6\/site-packages\/requests\/api.py\", line 104, in head\r\n return request('head', url, **kwargs)\r\n File \"\/linkhome\/rech\/genini01\/uua34ms\/work\/anaconda3\/envs\/pytorch_pip170_cuda102\/lib\/python3.6\/site-packages\/requests\/api.py\", line 61, in request\r\n return session.request(method=method, url=url, **kwargs)\r\n File \"\/linkhome\/rech\/genini01\/uua34ms\/work\/anaconda3\/envs\/pytorch_pip170_cuda102\/lib\/python3.6\/site-packages\/requests\/sessions.py\", line 530, in request\r\n resp = self.send(prep, **send_kwargs)\r\n File \"\/linkhome\/rech\/genini01\/uua34ms\/work\/anaconda3\/envs\/pytorch_pip170_cuda102\/lib\/python3.6\/site-packages\/requests\/sessions.py\", line 643, in send\r\n r = adapter.send(request, **kwargs)\r\n File \"\/linkhome\/rech\/genini01\/uua34ms\/work\/anaconda3\/envs\/pytorch_pip170_cuda102\/lib\/python3.6\/site-packages\/requests\/adapters.py\", line 516, in send\r\n raise ConnectionError(e, request=request)\r\nrequests.exceptions.ConnectionError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: \/datasets.huggingface.co\/datasets\/datasets\/wikipedia\/wikipedia.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x14b7b73e4908>: Failed to establish a new connection: [Errno 110] Connection timed out',))\r\n\r\n```\r\n\r\nAny advice?Thanks!\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/841\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/840","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/840\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/840\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/840\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/840","id":740632771,"node_id":"MDExOlB1bGxSZXF1ZXN0NTE5MDg2NDUw","number":840,"title":"Update 
squad_v2.py","user":{"login":"Javier-Jimenez99","id":38747614,"node_id":"MDQ6VXNlcjM4NzQ3NjE0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38747614?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Javier-Jimenez99","html_url":"https:\/\/github.com\/Javier-Jimenez99","followers_url":"https:\/\/api.github.com\/users\/Javier-Jimenez99\/followers","following_url":"https:\/\/api.github.com\/users\/Javier-Jimenez99\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Javier-Jimenez99\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Javier-Jimenez99\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Javier-Jimenez99\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Javier-Jimenez99\/orgs","repos_url":"https:\/\/api.github.com\/users\/Javier-Jimenez99\/repos","events_url":"https:\/\/api.github.com\/users\/Javier-Jimenez99\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Javier-Jimenez99\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["With this change all the checks are passed.","Good"],"created_at":1605088721000,"updated_at":1605108574000,"closed_at":1605108395000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/840","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/840","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/840.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/840.patch"},"body":"Change lines 100 and 102 to prevent overwriting ```predictions``` variable.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/840\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/839","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/839\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/839\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/839\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/839","id":740355270,"node_id":"MDU6SXNzdWU3NDAzNTUyNzA=","number":839,"title":"XSum dataset missing spaces between 
sentences","user":{"login":"loganlebanoff","id":10007282,"node_id":"MDQ6VXNlcjEwMDA3Mjgy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10007282?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/loganlebanoff","html_url":"https:\/\/github.com\/loganlebanoff","followers_url":"https:\/\/api.github.com\/users\/loganlebanoff\/followers","following_url":"https:\/\/api.github.com\/users\/loganlebanoff\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/loganlebanoff\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/loganlebanoff\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/loganlebanoff\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/loganlebanoff\/orgs","repos_url":"https:\/\/api.github.com\/users\/loganlebanoff\/repos","events_url":"https:\/\/api.github.com\/users\/loganlebanoff\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/loganlebanoff\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1605054883000,"updated_at":1605054883000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I noticed that the XSum dataset has no space between sentences. This could lead to worse results for anyone training or testing on it. Here's an example (0th entry in the test set):\r\n\r\n`The London trio are up for best UK act and best album, as well as getting two nominations in the best song category.\"We got told like this morning 'Oh I think you're nominated'\", said Dappy.\"And I was like 'Oh yeah, which one?' And now we've got nominated for four awards. I mean, wow!\"Bandmate Fazer added: \"We thought it's best of us to come down and mingle with everyone and say hello to the cameras. And now we find we've got four nominations.\"The band have two shots at the best song prize, getting the nod for their Tynchy Stryder collaboration Number One, and single Strong Again.Their album Uncle B will also go up against records by the likes of Beyonce and Kanye West.N-Dubz picked up the best newcomer Mobo in 2007, but female member Tulisa said they wouldn't be too disappointed if they didn't win this time around.\"At the end of the day we're grateful to be where we are in our careers.\"If it don't happen then it don't happen - live to fight another day and keep on making albums and hits for the fans.\"Dappy also revealed they could be performing live several times on the night.The group will be doing Number One and also a possible rendition of the War Child single, I Got Soul.The charity song is a re-working of The Killers' All These Things That I've Done and is set to feature artists like Chipmunk, Ironik and Pixie Lott.This year's Mobos will be held outside of London for the first time, in Glasgow on 30 September.N-Dubz said they were looking forward to performing for their Scottish fans and boasted about their recent shows north of the border.\"We just done Edinburgh the other day,\" said Dappy.\"We smashed up an N-Dubz show over there. We done Aberdeen about three or four months ago - we smashed up that show over there! 
Everywhere we go we smash it up!\"`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/839\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/838","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/838\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/838\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/838\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/838","id":740328382,"node_id":"MDExOlB1bGxSZXF1ZXN0NTE4ODM0NTE5","number":838,"title":"CNN\/Dailymail Dataset Card","user":{"login":"mcmillanmajora","id":26722925,"node_id":"MDQ6VXNlcjI2NzIyOTI1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26722925?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mcmillanmajora","html_url":"https:\/\/github.com\/mcmillanmajora","followers_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/followers","following_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/orgs","repos_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/repos","events_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1605052603000,"updated_at":1606338591000,"closed_at":1606338590000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/838","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/838","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/838.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/838.patch"},"body":"Link to the card page: https:\/\/github.com\/mcmillanmajora\/datasets\/tree\/cnn_dailymail_card\/datasets\/cnn_dailymail\n\nOne of the questions this dataset brings up is how we want to handle versioning of the cards to mirror versions of the dataset. The different versions of this dataset are used for different tasks (which may not be reflected in the versions that we currently have in the repo?), but it's only the structure that's changing rather than the content in this particular case, at least between versions 2.0.0 and 3.0.0. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/838\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/837","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/837\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/837\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/837\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/837","id":740250215,"node_id":"MDExOlB1bGxSZXF1ZXN0NTE4NzcwNDM5","number":837,"title":"AlloCin\u00e9 dataset card","user":{"login":"mcmillanmajora","id":26722925,"node_id":"MDQ6VXNlcjI2NzIyOTI1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26722925?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mcmillanmajora","html_url":"https:\/\/github.com\/mcmillanmajora","followers_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/followers","following_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/orgs","repos_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/repos","events_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1605043193000,"updated_at":1606341387000,"closed_at":1606341387000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/837","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/837","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/837.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/837.patch"},"body":"Link to the card page: https:\/\/github.com\/mcmillanmajora\/datasets\/blob\/allocine_card\/datasets\/allocine\/README.md\n\nThere wasn't as much information available for this dataset, so I'm wondering what's the best way to address open questions about the dataset. For example, where did the list of films that the dataset creator used come from?\n\nI'm also wondering how best to go about talking about limitations when so little is known about the data. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/837\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/836","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/836\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/836\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/836\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/836","id":740187613,"node_id":"MDU6SXNzdWU3NDAxODc2MTM=","number":836,"title":"load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas","user":{"login":"randubin","id":8919490,"node_id":"MDQ6VXNlcjg5MTk0OTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8919490?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/randubin","html_url":"https:\/\/github.com\/randubin","followers_url":"https:\/\/api.github.com\/users\/randubin\/followers","following_url":"https:\/\/api.github.com\/users\/randubin\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/randubin\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/randubin\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/randubin\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/randubin\/orgs","repos_url":"https:\/\/api.github.com\/users\/randubin\/repos","events_url":"https:\/\/api.github.com\/users\/randubin\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/randubin\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Which version of pyarrow do you have ? Could you try to update pyarrow and try again ?","Thanks for the fast response. I have the latest version '2.0.0' (I tried to update)\r\nI am working with Python 3.8.5","I think that the issue is similar to this one:https:\/\/issues.apache.org\/jira\/browse\/ARROW-9612\r\nThe problem is in arrow when the column data contains long strings.\r\nAny ideas on how to bypass this?","We should expose the [`block_size` argument](https:\/\/arrow.apache.org\/docs\/python\/generated\/pyarrow.csv.ReadOptions.html#pyarrow.csv.ReadOptions) of Apache Arrow csv `ReadOptions` in the [script](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/csv\/csv.py).\r\n\r\n\r\nIn the meantime you can specify yourself the `ReadOptions` config like this:\r\n```python\r\nimport pyarrow.csv as pac # PyArrow is installed with `datasets`\r\n\r\nread_options = pac.ReadOptions(block_size=1e9) # try to find the right value for your use-case\r\ndataset = load_dataset('csv', data_files=files, read_options=read_options)\r\n```\r\n","This did help to load the data. 
But the problem now is that I get:\r\nArrowInvalid: CSV parse error: Expected 5 columns, got 187\r\n\r\nIt seems that this change the parsing so I changed the table to tab-separated and tried to load it directly from pyarrow\r\nBut I got a similar error, again it loaded fine in pandas so I am not sure what to do.\r\n\r\n\r\n\r\n","Got almost the same error loading a ~5GB TSV file, first got the same error as OP, then tried giving it my own ReadOptions and also got the same CSV parse error."],"created_at":1605036940000,"updated_at":1605807338000,"closed_at":1605807338000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi All\r\nI am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly:\r\ndataset = load_dataset('csv', data_files=files)\r\nWhen I run it I get:\r\n\r\nDownloading and preparing dataset csv\/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) tocache\/huggingface\/datasets\/csv\/default-35575a1051604c88\/0.0.0\/49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4...\r\n\r\nI am getting this error:\r\n6a4ac4\/csv.py in _generate_tables(self, files)\r\n 78 def _generate_tables(self, files):\r\n 79 for i, file in enumerate(files):\r\n---> 80 pa_table = pac.read_csv(\r\n 81 file,\r\n 82 read_options=self.config.pa_read_options,\r\n\r\n~\/anaconda2\/envs\/nlp\/lib\/python3.8\/site-packages\/pyarrow\/_csv.pyx in pyarrow._csv.read_csv()\r\n\r\n~\/anaconda2\/envs\/nlp\/lib\/python3.8\/site-packages\/pyarrow\/error.pxi in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\n~\/anaconda2\/envs\/nlp\/lib\/python3.8\/site-packages\/pyarrow\/error.pxi in pyarrow.lib.check_status()\r\n\r\n**ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)**\r\n\r\n\r\n\r\nThe size of the file is 3.5 GB. When I try smaller files I do not have an issue. When I load it with 'text' parser I can see all data but it is not what I need.\r\nThere is no issue reading the file with pandas. 
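For readers following this thread: the `straddling object straddles two block boundaries` error comes from pyarrow's CSV reader hitting a value that spans two read blocks, and a parse error like `Expected 5 columns, got 187` usually points at unescaped delimiters, quotes, or newlines inside a field rather than at the block size. A minimal sketch of reading such a file directly with pyarrow, combining the larger `block_size` suggested in the comments above with an explicit tab delimiter (the file path is a placeholder):

```python
import pyarrow.csv as pac  # pyarrow is installed alongside `datasets`

# Placeholder path; block_size is in bytes (1 << 30 is roughly 1 GiB per read block).
table = pac.read_csv(
    "my_large_file.tsv",
    read_options=pac.ReadOptions(block_size=1 << 30),
    parse_options=pac.ParseOptions(delimiter="\t"),
)
print(table.num_rows, table.num_columns)
```

If the parse error persists with a large block size, inspecting the offending rows for stray tabs or unbalanced quotes (which pandas tolerates more readily) is the next step.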
any idea what could be the issue?\r\nWhen I am running a different CSV I do not get this line:\r\n (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size)\r\n\r\nAny ideas?\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/836\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/835","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/835\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/835\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/835\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/835","id":740102210,"node_id":"MDU6SXNzdWU3NDAxMDIyMTA=","number":835,"title":"Wikipedia postprocessing","user":{"login":"bminixhofer","id":13353204,"node_id":"MDQ6VXNlcjEzMzUzMjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13353204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bminixhofer","html_url":"https:\/\/github.com\/bminixhofer","followers_url":"https:\/\/api.github.com\/users\/bminixhofer\/followers","following_url":"https:\/\/api.github.com\/users\/bminixhofer\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bminixhofer\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bminixhofer\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bminixhofer\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bminixhofer\/orgs","repos_url":"https:\/\/api.github.com\/users\/bminixhofer\/repos","events_url":"https:\/\/api.github.com\/users\/bminixhofer\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bminixhofer\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @bminixhofer ! Parsing WikiMedia is notoriously difficult: this processing used [mwparserfromhell](https:\/\/github.com\/earwig\/mwparserfromhell) which is pretty good but not perfect.\r\n\r\nAs an alternative, you can also use the Wiki40b dataset which was pre-processed using an un-released Google internal tool","Ok, thanks! I'll try the Wiki40b dataset.","If anyone else is concerned about this, `wiki40b` does indeed seem very well cleaned."],"created_at":1605029198000,"updated_at":1605032600000,"closed_at":1605030561000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi, thanks for this library!\r\n\r\nRunning this code:\r\n\r\n```py\r\nimport datasets\r\nwikipedia = datasets.load_dataset(\"wikipedia\", \"20200501.de\")\r\nprint(wikipedia['train']['text'][0])\r\n```\r\n\r\nI get:\r\n\r\n```\r\nmini|Ricardo Flores Mag\u00f3n\r\nmini|Mexikanische Revolution\u00e4re, Mag\u00f3n in der Mitte anf\u00fchrend, gegen die Diktatur von Porfirio Diaz, Ausschnitt des Gem\u00e4lde \u201eTierra y Libertad\u201c von Idelfonso Carrara (?) von 1930.\r\n\r\nRicardo Flores Mag\u00f3n (* 16. September 1874 in San Antonio Eloxochitl\u00e1n im mexikanischen Bundesstaat Oaxaca; \u2020 22. 
November 1922 im Bundesgef\u00e4ngnis Leavenworth im US-amerikanischen Bundesstaat Kansas) war als Journalist, Gewerkschafter und Literat ein f\u00fchrender anarchistischer Theoretiker und Aktivist, der die revolution\u00e4re mexikanische Bewegung radikal beeinflusste. Mag\u00f3n war Gr\u00fcnder der Partido Liberal Mexicano und Mitglied der Industrial Workers of the World.\r\n\r\nPolitische Biografie \r\nJournalistisch und politisch k\u00e4mpfte er und sein Bruder sehr kompromisslos gegen die Diktatur Porfirio Diaz. Philosophisch und politisch orientiert an radikal anarchistischen Idealen und den Erfahrungen seiner indigenen Vorfahren bei der gemeinschaftlichen Bewirtschaftung des Gemeindelandes, machte er die Forderung \u201eLand und Freiheit\u201c (Tierra y Libertad) popul\u00e4r. Besonders Francisco Villa und Emiliano Zapata griffen die Forderung Land und Freiheit auf. Seine Philosophie hatte gro\u00dfen Einfluss auf die Landarbeiter. 1904 floh er in die USA und gr\u00fcndete 1906 die Partido Liberal Mexicano. Im Exil lernte er u. a. Emma Goldman kennen. Er verbrachte die meiste Zeit seines Lebens in Gef\u00e4ngnissen und im Exil und wurde 1918 in den USA wegen \u201eBehinderung der Kriegsanstrengungen\u201c zu zwanzig Jahren Gef\u00e4ngnis verurteilt. Zu seinem Tod gibt es drei verschiedene Theorien. Offiziell starb er an Herzversagen. Librado Rivera, der die Leiche mit eigenen Augen gesehen hat, geht davon aus, dass Mag\u00f3n von einem Mitgefangenen erdrosselt wurde. Die staatstreue Gewerkschaftszeitung CROM ver\u00f6ffentlichte 1923 einen Beitrag, nachdem Mag\u00f3n von einem Gef\u00e4ngnisw\u00e4rter erschlagen wurde.\r\nmini|Die Br\u00fcder Ricardo (links) und Enrique Flores Mag\u00f3n (rechts) vor dem Los Angeles County Jail, 1917\r\n\r\n[...]\r\n```\r\n\r\nso some Markup like `mini|` is still left. 
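If switching to `wiki40b` (as suggested in the comments above) is not an option, a minimal post-processing sketch for the residue shown here is to drop the caption lines that the parser left behind. This only covers the `mini|...` thumbnail prefixes visible in this sample and may miss other leftover markup:

```python
import re

def strip_thumbnail_prefixes(text: str) -> str:
    # Remove lines that are image-caption remnants such as "mini|Ricardo Flores Magón".
    cleaned_lines = []
    for line in text.splitlines():
        if re.match(r"^\s*mini\|", line):
            continue  # whole line is a thumbnail/caption fragment
        cleaned_lines.append(line)
    return "\n".join(cleaned_lines)

# e.g. strip_thumbnail_prefixes(wikipedia["train"]["text"][0]) with the snippet above
```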
Should I run another parser on this text before feeding it to an ML model or is this a known imperfection of parsing Wiki markup?\r\n\r\nApologies if this has been asked before.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/835\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/834","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/834\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/834\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/834\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/834","id":740082890,"node_id":"MDU6SXNzdWU3NDAwODI4OTA=","number":834,"title":"[GEM] add WikiLingua cross-lingual abstractive summarization dataset","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hey @yjernite. This is a very interesting dataset. Would love to work on adding it but I see that the link to the data is to a gdrive folder. Can I just confirm wether dlmanager can handle gdrive urls or would this have to be a manual dl?","Hi @KMFODA ! A version of WikiLingua is actually already accessible in the [GEM dataset](https:\/\/huggingface.co\/datasets\/gem)\r\n\r\nYou can use it for example to load the French to English translation with:\r\n```python\r\nfrom datasets import load_dataset\r\nwikilingua = load_dataset(\"gem\", \"wiki_lingua_french_fr\")\r\n```\r\n\r\nClosed by https:\/\/github.com\/huggingface\/datasets\/pull\/1807"],"created_at":1605027643000,"updated_at":1618488249000,"closed_at":1618488098000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** WikiLingua\r\n- **Description:** The dataset includes ~770k article and summary pairs in 18 languages from WikiHow. 
The gold-standard article-summary alignments across languages were extracted by aligning the images that are used to describe each how-to step in an article.\r\n- **Paper:** https:\/\/arxiv.org\/pdf\/2010.03093.pdf\r\n- **Data:** https:\/\/github.com\/esdurmus\/Wikilingua\r\n- **Motivation:** Included in the GEM shared task. Multilingual.\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/huggingface.co\/docs\/datasets\/share_dataset.html).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/834\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/833","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/833\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/833\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/833\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/833","id":740079692,"node_id":"MDU6SXNzdWU3NDAwNzk2OTI=","number":833,"title":"[GEM] add ASSET text simplification dataset","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1605027390000,"updated_at":1607002695000,"closed_at":1607002695000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** ASSET\r\n- **Description:** ASSET is a crowdsourced\r\nmulti-reference corpus for assessing sentence simplification in English where each simplification was produced by executing several rewriting transformations.\r\n- **Paper:** https:\/\/www.aclweb.org\/anthology\/2020.acl-main.424.pdf\r\n- **Data:** https:\/\/github.com\/facebookresearch\/asset\r\n- **Motivation:** Included in the GEM shared task\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/huggingface.co\/docs\/datasets\/share_dataset.html).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/833\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/832","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/832\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/832\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/832\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/832","id":740077228,"node_id":"MDU6SXNzdWU3NDAwNzcyMjg=","number":832,"title":"[GEM] add WikiAuto text simplification dataset","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1605027203000,"updated_at":1607002688000,"closed_at":1607002688000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** WikiAuto\r\n- **Description:** Sentences in English Wikipedia and their corresponding sentences in Simple English Wikipedia that are written with simpler grammar and word choices. A lot of lexical and syntactic paraphrasing. 
\r\n- **Paper:** https:\/\/www.aclweb.org\/anthology\/2020.acl-main.709.pdf\r\n- **Data:** https:\/\/github.com\/chaojiang06\/wiki-auto\r\n- **Motivation:** Included in the GEM shared task\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/huggingface.co\/docs\/datasets\/share_dataset.html).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/832\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/831","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/831\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/831\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/831\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/831","id":740071697,"node_id":"MDU6SXNzdWU3NDAwNzE2OTc=","number":831,"title":"[GEM] Add WebNLG dataset","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1605026808000,"updated_at":1607002681000,"closed_at":1607002681000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** WebNLG\r\n- **Description:** WebNLG consists of Data\/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation of these triples (16,095 data inputs and 42,873 data-text pairs). 
The data is available in English and Russian\r\n- **Paper:** https:\/\/www.aclweb.org\/anthology\/P17-1017.pdf\r\n- **Data:** https:\/\/webnlg-challenge.loria.fr\/download\/\r\n- **Motivation:** Included in the GEM shared task, multilingual\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/huggingface.co\/docs\/datasets\/share_dataset.html).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/831\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/830","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/830\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/830\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/830\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/830","id":740065376,"node_id":"MDU6SXNzdWU3NDAwNjUzNzY=","number":830,"title":"[GEM] add ToTTo Table-to-text dataset","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["closed via #1098 "],"created_at":1605026314000,"updated_at":1607605562000,"closed_at":1607605561000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** ToTTo\r\n- **Description:** ToTTo is an open-domain English table-to-text dataset with over 120,000 training examples that proposes a controlled generation task: given a Wikipedia table and a set of highlighted table cells, produce a one-sentence description.\r\n- **Paper:** https:\/\/arxiv.org\/abs\/2004.14373\r\n- **Data:** https:\/\/github.com\/google-research-datasets\/totto\r\n- **Motivation:** Included in the GEM shared task\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/huggingface.co\/docs\/datasets\/share_dataset.html).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/830\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/829","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/829\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/829\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/829\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/829","id":740061699,"node_id":"MDU6SXNzdWU3NDAwNjE2OTk=","number":829,"title":"[GEM] add Schema-Guided Dialogue","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1605026024000,"updated_at":1607002670000,"closed_at":1607002670000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** The Schema-Guided Dialogue Dataset\r\n- **Description:** The Schema-Guided Dialogue (SGD) dataset consists of over 20k annotated multi-domain, task-oriented conversations between a human and a virtual assistant. 
These conversations involve interactions with services and APIs spanning 20 domains, ranging from banks and events to media, calendar, travel, and weather.\r\n- **Paper:** https:\/\/arxiv.org\/pdf\/2002.01359.pdf https:\/\/arxiv.org\/pdf\/2004.15006.pdf\r\n- **Data:** https:\/\/github.com\/google-research-datasets\/dstc8-schema-guided-dialogue\r\n- **Motivation:** Included in the GEM shared task\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/huggingface.co\/docs\/datasets\/share_dataset.html).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/829\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/828","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/828\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/828\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/828\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/828","id":740008683,"node_id":"MDExOlB1bGxSZXF1ZXN0NTE4NTcwMjY3","number":828,"title":"Add writer_batch_size attribute to GeneratorBasedBuilder","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1605022099000,"updated_at":1605025656000,"closed_at":1605025656000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/828","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/828","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/828.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/828.patch"},"body":"As specified in #741 one would need to specify a custom ArrowWriter batch size to avoid filling the RAM. 
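For illustration, a sketch of a builder that lowers the writer batch size follows. The class attribute name `DEFAULT_WRITER_BATCH_SIZE` is the one exposed in recent versions of the library and is an assumption with respect to this PR; the dataset itself is a made-up placeholder:

```python
import datasets

class MyMultimodalDataset(datasets.GeneratorBasedBuilder):
    # Flush Arrow record batches every 100 examples instead of the default,
    # so large image/video examples do not accumulate in RAM before being written.
    DEFAULT_WRITER_BATCH_SIZE = 100

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"video_path": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={})]

    def _generate_examples(self):
        for idx in range(1000):
            yield idx, {"video_path": f"clip_{idx}.mp4"}
```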
Indeed the defaults buffer size is 10 000 examples but for multimodal datasets that contain images or videos we may want to reduce that.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/828\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/827","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/827\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/827\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/827\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/827","id":739983024,"node_id":"MDU6SXNzdWU3Mzk5ODMwMjQ=","number":827,"title":"[GEM] MultiWOZ dialogue dataset","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @yjernite can I help in adding this dataset? \r\n\r\nI am excited about this because this will be my first contribution to the datasets library as well as to hugginface."],"created_at":1605020270000,"updated_at":1607780550000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** MultiWOZ (Multi-Domain Wizard-of-Oz)\r\n- **Description:** 10k annotated human-human dialogues. Each dialogue consists of a goal, multiple user and system utterances as well as a belief state. 
Only system utterances are annotated with dialogue acts \u2013 there are no annotations from the user side.\r\n- **Paper:** https:\/\/arxiv.org\/pdf\/2007.12720.pdf\r\n- **Data:** https:\/\/github.com\/budzianowski\/multiwoz\r\n- **Motivation:** Will likely be part of the GEM shared task\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/huggingface.co\/docs\/datasets\/share_dataset.html).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/827\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/826","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/826\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/826\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/826\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/826","id":739976716,"node_id":"MDU6SXNzdWU3Mzk5NzY3MTY=","number":826,"title":"[GEM] Add E2E dataset","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1605019840000,"updated_at":1607002677000,"closed_at":1607002677000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** E2E NLG dataset (for End-to-end natural language generation)\r\n- **Description:**a dataset for training end-to-end, datadriven natural language generation systems in the restaurant domain, the datasets consists of 5,751 dialogue-act Meaning Representations (structured data) and 8.1 reference free-text utterances per dialogue-act on average\r\n- **Paper:** https:\/\/arxiv.org\/pdf\/1706.09254.pdf https:\/\/arxiv.org\/abs\/1901.07931\r\n- **Data:** http:\/\/www.macs.hw.ac.uk\/InteractionLab\/E2E\/#data\r\n- **Motivation:** This dataset will likely be included in the GEM shared task\r\n\r\nInstructions to add a new dataset can be found 
[here](https:\/\/huggingface.co\/docs\/datasets\/share_dataset.html).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/826\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/825","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/825\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/825\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/825\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/825","id":739925960,"node_id":"MDExOlB1bGxSZXF1ZXN0NTE4NTAyNjgx","number":825,"title":"Add accuracy, precision, recall and F1 metrics","user":{"login":"jplu","id":959590,"node_id":"MDQ6VXNlcjk1OTU5MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/959590?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jplu","html_url":"https:\/\/github.com\/jplu","followers_url":"https:\/\/api.github.com\/users\/jplu\/followers","following_url":"https:\/\/api.github.com\/users\/jplu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jplu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jplu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jplu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jplu\/orgs","repos_url":"https:\/\/api.github.com\/users\/jplu\/repos","events_url":"https:\/\/api.github.com\/users\/jplu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jplu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1605016235000,"updated_at":1605122628000,"closed_at":1605122623000,"author_association":"COLLABORATOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/825","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/825","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/825.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/825.patch"},"body":"This PR adds several single metrics, namely:\r\n\r\n- Accuracy\r\n- Precision\r\n- Recall\r\n- F1\r\n\r\nThey all uses under the hood the sklearn metrics of the same name. They allow different useful features when training a multilabel\/multiclass model:\r\n- have a macro\/micro\/per label\/weighted\/binary\/per sample score\r\n- score only the selected labels (usually what we call the positive labels) and ignore the negative ones. 
For example in case of a Named Entity Recognition task, positive labels are (`PERSON`, `LOCATION` or `ORGANIZATION`) and the negative one is `O`.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/825\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/824","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/824\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/824\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/824\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/824","id":739896526,"node_id":"MDU6SXNzdWU3Mzk4OTY1MjY=","number":824,"title":"Discussion using datasets in offline mode","user":{"login":"mandubian","id":77193,"node_id":"MDQ6VXNlcjc3MTkz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/77193?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mandubian","html_url":"https:\/\/github.com\/mandubian","followers_url":"https:\/\/api.github.com\/users\/mandubian\/followers","following_url":"https:\/\/api.github.com\/users\/mandubian\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mandubian\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mandubian\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mandubian\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mandubian\/orgs","repos_url":"https:\/\/api.github.com\/users\/mandubian\/repos","events_url":"https:\/\/api.github.com\/users\/mandubian\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mandubian\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"},{"id":2067400324,"node_id":"MDU6TGFiZWwyMDY3NDAwMzI0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/generic%20discussion","name":"generic discussion","color":"c5def5","default":false,"description":"Generic discussion on the library"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["No comments ?","I think it would be very cool. I'm currently working on a cluster from Compute Canada, and I have internet access only when I'm not in the nodes where I run the scripts. So I was expecting to be able to use the wmt14 dataset until I realized I needed internet connection even if I downloaded the data already. I'm going to try option 2 you mention for now though! Thanks ;)","Requiring online connection is a deal breaker in some cases unfortunately so it'd be great if offline mode is added similar to how `transformers` loads models offline fine.\r\n\r\n@mandubian's second bullet point suggests that there's a workaround allowing you to use your offline (custom?) dataset with `datasets`. Could you please elaborate on how that should look like?","here is my way to load a dataset offline, but it **requires** an online machine\r\n1. (online machine)\r\n```\r\nimport datasets\r\ndata = datasets.load_dataset(...)\r\ndata.save_to_disk(\/YOUR\/DATASET\/DIR)\r\n```\r\n2. copy the dir from online to the offline machine\r\n3. 
(offline machine)\r\n```\r\nimport datasets\r\ndata = datasets.load_from_disk(\/SAVED\/DATA\/DIR)\r\n```\r\n\r\nHTH.","> here is my way to load a dataset offline, but it **requires** an online machine\n> \n> 1. (online machine)\n> \n> ```\n> \n> import datasets\n> \n> data = datasets.load_dataset(...)\n> \n> data.save_to_disk(\/YOUR\/DATASET\/DIR)\n> \n> ```\n> \n> 2. copy the dir from online to the offline machine\n> \n> 3. (offline machine)\n> \n> ```\n> \n> import datasets\n> \n> data = datasets.load_from_disk(\/SAVED\/DATA\/DIR)\n> \n> ```\n> \n> \n> \n> HTH.\n\n","I opened a PR that allows to reload modules that have already been loaded once even if there's no internet.\r\n\r\nLet me know if you know other ways that can make the offline mode experience better. I'd be happy to add them :) \r\n\r\nI already note the \"freeze\" modules option, to prevent local modules updates. It would be a cool feature.\r\n\r\n----------\r\n\r\n> @mandubian's second bullet point suggests that there's a workaround allowing you to use your offline (custom?) dataset with `datasets`. Could you please elaborate on how that should look like?\r\n\r\nIndeed `load_dataset` allows to load remote dataset script (squad, glue, etc.) but also you own local ones.\r\nFor example if you have a dataset script at `.\/my_dataset\/my_dataset.py` then you can do\r\n```python\r\nload_dataset(\".\/my_dataset\")\r\n```\r\nand the dataset script will generate your dataset once and for all.\r\n\r\n----------\r\n\r\nAbout I'm looking into having `csv`, `json`, `text`, `pandas` dataset builders already included in the `datasets` package, so that they are available offline by default, as opposed to the other datasets that require the script to be downloaded.\r\ncf #1724 ","The local dataset builders (csv, text , json and pandas) are now part of the `datasets` package since #1726 :)\r\nYou can now use them offline\r\n```python\r\ndatasets = load_dataset('text', data_files=data_files)\r\n```\r\n\r\nWe'll do a new release soon"],"created_at":1605013851000,"updated_at":1611151504000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"`datasets.load_dataset(\"csv\", ...)` breaks if you have no connection (There is already this issue https:\/\/github.com\/huggingface\/datasets\/issues\/761 about it). It seems to be the same for metrics too.\r\n\r\nI create this ticket to discuss a bit and gather what you have in mind or other propositions.\r\n\r\nHere are some points to open discussion:\r\n- if you want to prepare your code\/datasets on your machine (having internet connexion) but run it on another offline machine (not having internet connexion), it won't work as is, even if you have all files locally on this machine.\r\n- AFAIK, you can make it work if you manually put the python files (csv.py for example) on this offline machine and change your code to `datasets.load_dataset(\"MY_PATH\/csv.py\", ...)`. But it would be much better if you could run ths same code without modification if files are available locally.\r\n- I've also been considering the requirement of downloading Python code and execute on your machine to use datasets. This can be an issue in a professional context. Downloading a CSV\/H5 file is acceptable, downloading an executable script can open many security issues. 
We certainly need a mechanism to at least \"freeze\" the dataset code you retrieved once so that you can review it if you want and then be sure you use this one everywhere and not a version dowloaded from internet.\r\n \r\nWDYT? (thks)\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/824\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/823","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/823\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/823\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/823\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/823","id":739815763,"node_id":"MDU6SXNzdWU3Mzk4MTU3NjM=","number":823,"title":"how processing in batch works in datasets ","user":{"login":"rabeehkarimimahabadi","id":73364383,"node_id":"MDQ6VXNlcjczMzY0Mzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/73364383?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi","html_url":"https:\/\/github.com\/rabeehkarimimahabadi","followers_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/followers","following_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/orgs","repos_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/repos","events_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi I don\u2019t think this is a request for a dataset like you labeled it.\r\n\r\nI also think this would be better suited for the forum at https:\/\/discuss.huggingface.co. we try to keep the issue for the repo for bug reports and new features\/dataset requests and have usage questions discussed on the forum. Thanks.","Hi Thomas,\nwhat I do not get from documentation is that why when you set batched=True,\nthis is processed in batch, while data is not divided to batched\nbeforehand, basically this is a question on the documentation and I do not\nget the batched=True, but sure, if you think this is more appropriate in\nforum I will post it there.\nthanks\nBest\nRabeeh\n\nOn Tue, Nov 10, 2020 at 12:21 PM Thomas Wolf <notifications@github.com>\nwrote:\n\n> Hi I don\u2019t think this is a request for a dataset like you labeled it.\n>\n> I also think this would be better suited for the forum at\n> https:\/\/discuss.huggingface.co. 
we try to keep the issue for the repo for\n> bug reports and new features\/dataset requests and have usage questions\n> discussed on the forum. Thanks.\n>\n> \u2014\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https:\/\/github.com\/huggingface\/datasets\/issues\/823#issuecomment-724639476>,\n> or unsubscribe\n> <https:\/\/github.com\/notifications\/unsubscribe-auth\/ARPXHH4FIPFHVVUHANAE4F3SPEO2JANCNFSM4TQQVEXQ>\n> .\n>\n","Yes the forum is perfect for that. You can post in the `datasets` section.\r\nThanks a lot!"],"created_at":1605006677000,"updated_at":1605013870000,"closed_at":1605013869000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\nI need to process my datasets before it is passed to dataloader in batch, \r\nhere is my codes \r\n\r\n```\r\nclass AbstractTask(ABC):\r\n task_name: str = NotImplemented\r\n preprocessor: Callable = NotImplemented\r\n split_to_data_split: Mapping[str, str] = NotImplemented\r\n tokenizer: Callable = NotImplemented\r\n max_source_length: str = NotImplemented\r\n max_target_length: str = NotImplemented\r\n # TODO: should not be a task item, but cannot see other ways.\r\n tpu_num_cores: int = None\r\n\r\n # The arguments set are for all tasks and needs to be kept common.\r\n def __init__(self, config):\r\n self.max_source_length = config['max_source_length']\r\n self.max_target_length = config['max_target_length']\r\n self.tokenizer = config['tokenizer']\r\n self.tpu_num_cores = config['tpu_num_cores']\r\n\r\n def _encode(self, batch) -> Dict[str, torch.Tensor]:\r\n batch_encoding = self.tokenizer.prepare_seq2seq_batch(\r\n [x[\"src_texts\"] for x in batch],\r\n tgt_texts=[x[\"tgt_texts\"] for x in batch],\r\n max_length=self.max_source_length,\r\n max_target_length=self.max_target_length,\r\n padding=\"max_length\" if self.tpu_num_cores is not None else \"longest\", # TPU hack\r\n return_tensors=\"pt\"\r\n )\r\n return batch_encoding.data\r\n\r\n\r\n def data_split(self, split):\r\n return self.split_to_data_split[split]\r\n\r\n def get_dataset(self, split, n_obs=None):\r\n split = self.data_split(split)\r\n if n_obs is not None:\r\n split = split+\"[:{}]\".format(n_obs)\r\n dataset = load_dataset(self.task_name, split=split)\r\n dataset = dataset.map(self.preprocessor, remove_columns=dataset.column_names)\r\n dataset = dataset.map(lambda batch: self._encode(batch), batched=True)\r\n dataset.set_format(type=\"torch\", columns=['input_ids', 'token_type_ids', 'attention_mask', 'label'])\r\n return dataset\r\n\r\n```\r\n\r\nI call it like \r\n\r\n`AutoTask.get(task, train_dataset_config).get_dataset(split=\"train\", n_obs=data_args.n_train) \r\n`\r\n\r\nThis gives the following error, to me because the data inside the dataset = dataset.map(lambda batch: self._encode(batch), batched=True) is not processed in batch, could you tell me how I can process dataset in batch inside my function? 
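For reference: with `batched=True`, `map` passes the mapped function a dict from column name to a list of values, not a list of row dicts, so `for x in batch` iterates over column names (strings), which is what produces the `TypeError: string indices must be integers` shown in the traceback below. A minimal sketch of the same encoding step written against that calling convention (the column names `src_texts`/`tgt_texts` and the `prepare_seq2seq_batch` call are taken from the snippet above; the model name and length values are placeholders):

```python
from datasets import Dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")  # placeholder model

def encode_batch(batch, max_source_length=512, max_target_length=128):
    # `batch` is a dict of column name -> list of values when batched=True.
    encoding = tokenizer.prepare_seq2seq_batch(
        batch["src_texts"],
        tgt_texts=batch["tgt_texts"],
        max_length=max_source_length,
        max_target_length=max_target_length,
        padding="longest",
        return_tensors="pt",
    )
    return encoding.data

# Tiny in-memory example so the sketch is runnable end to end.
dataset = Dataset.from_dict(
    {"src_texts": ["hello world"], "tgt_texts": ["bonjour le monde"]}
)
dataset = dataset.map(encode_batch, batched=True)
```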
thanks \r\n\r\n File \"finetune_multitask_trainer.py\", line 192, in main\r\n if training_args.do_train else None\r\n File \"finetune_multitask_trainer.py\", line 191, in <dictcomp>\r\n split=\"train\", n_obs=data_args.n_train) for task in data_args.task}\r\n File \"\/remote\/idiap.svm\/user.active\/rkarimi\/dev\/internship\/seq2seq\/tasks.py\", line 56, in get_dataset\r\n dataset = dataset.map(lambda batch: self._encode(batch), batched=True)\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 1236, in map\r\n update_data = does_function_return_dict(test_inputs, test_indices)\r\n File \"\/idiap\/user\/rkarimi\/libs\/anaconda3\/envs\/internship\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py\", line 1207, in does_function_return_dict\r\n function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)\r\n File \"\/remote\/idiap.svm\/user.active\/rkarimi\/dev\/internship\/seq2seq\/tasks.py\", line 56, in <lambda>\r\n dataset = dataset.map(lambda batch: self._encode(batch), batched=True)\r\n File \"\/remote\/idiap.svm\/user.active\/rkarimi\/dev\/internship\/seq2seq\/tasks.py\", line 37, in _encode\r\n [x[\"src_texts\"] for x in batch],\r\n File \"\/remote\/idiap.svm\/user.active\/rkarimi\/dev\/internship\/seq2seq\/tasks.py\", line 37, in <listcomp>\r\n [x[\"src_texts\"] for x in batch],\r\nTypeError: string indices must be integers\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/823\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/822","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/822\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/822\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/822\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/822","id":739579314,"node_id":"MDU6SXNzdWU3Mzk1NzkzMTQ=","number":822,"title":"datasets freezes ","user":{"login":"rabeehkarimimahabadi","id":73364383,"node_id":"MDQ6VXNlcjczMzY0Mzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/73364383?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi","html_url":"https:\/\/github.com\/rabeehkarimimahabadi","followers_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/followers","following_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/orgs","repos_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/repos","events_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset 
bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Pytorch is unable to convert strings to tensors unfortunately.\r\nYou can use `set_format(type=\"torch\")` on columns that can be converted to tensors, such as token ids.\r\n\r\nThis makes me think that we should probably raise an error or at least a warning when one tries to create pytorch tensors out of text columns"],"created_at":1604985019000,"updated_at":1605223383000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi, I want to load these two datasets and convert them to Dataset format in torch and the code freezes for me, could you have a look please? thanks \r\n\r\ndataset1 = load_dataset(\"squad\", split=\"train[:10]\")\r\ndataset1 = dataset1.set_format(type='torch', columns=['context', 'answers', 'question'])\r\n\r\ndataset2 = load_dataset(\"imdb\", split=\"train[:10]\")\r\ndataset2 = dataset2.set_format(type=\"torch\", columns=[\"text\", \"label\"])\r\nprint(len(dataset1))\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/822\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/821","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/821\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/821\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/821\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/821","id":739506859,"node_id":"MDU6SXNzdWU3Mzk1MDY4NTk=","number":821,"title":"`kor_nli` dataset doesn't being loaded properly","user":{"login":"sackoh","id":30492059,"node_id":"MDQ6VXNlcjMwNDkyMDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/30492059?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sackoh","html_url":"https:\/\/github.com\/sackoh","followers_url":"https:\/\/api.github.com\/users\/sackoh\/followers","following_url":"https:\/\/api.github.com\/users\/sackoh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sackoh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sackoh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sackoh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sackoh\/orgs","repos_url":"https:\/\/api.github.com\/users\/sackoh\/repos","events_url":"https:\/\/api.github.com\/users\/sackoh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sackoh\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1604973852000,"updated_at":1605535152000,"closed_at":1605535152000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"There are two issues from `kor_nli` dataset\r\n\r\n1. 
csv.DictReader failed to split features by tab\r\n - Should not exist `None` value in label feature, but there it is.\r\n ```python\r\n kor_nli_train['train'].unique('gold_label')\r\n # ['neutral', 'entailment', 'contradiction', None]\r\n ```\r\n - I found a reason why there is `None` values in label feature as following code\r\n ```python\r\n from datasets import load_dataset\r\n kor_nli_train = load_dataset('kor_nli', 'multi_nli')\r\n \r\n for idx, example in enumerate(kor_nli_train['train']):\r\n if example['gold_label'] is None:\r\n print(idx, example)\r\n break\r\n # 16835 {'gold_label': None, 'sentence1': '\uadf8\ub294 \uc804\uc7c1 \uc804\uc5d0 \uac00\ubcbc\uc6b4 \ubc85\uc2a4\ud0a8 \uc554\ub9d0\uc744 \uac00\uc9c0\uace0 \ub2ec\ub9ac\uae30 \uc704\ud574 \uc6b0\uc720\ucc98\ub7fc \ud558\uc580 \uc2a4\ud130\ub4dc\ub97c \ub123\uc5c8\ub2e4.\\t\uc804\uc7c1 \uc804\uc5d0 \ub2e4\uc778\uc885 \uc5ec\uc131\ub4e4\uacfc \ud568\uaed8 \uc788\ub294 \ubc31\uc778 \ub0a8\uc790\uac00 \uc788\uc5c8\ub2e4.\\tentailment\\n\uc2ac\ub9bc\uc740 \uc7ac\ube68\ub9ac \uc637\uc744 \uc785\uc5c8\uace0, \uc21c\uac04\uc801\uc73c\ub85c \ubbf8\uc9c0\uadfc\ud55c \ubb3c\uc744 \ubfcc\ub9b4 \uc218 \uc788\ub294 \uc544\uce68 \uc138\ud0c1\ubb3c\uc744 \uae30\uaebc\uc774 \uac00\ub450\uc5c8\ub2e4.\\t\uc2ac\ub9bc\uc740 \uc9c1\uc7a5\uc5d0 \ub2a6\uc5c8\ub2e4.\\tneutral\\n\ub274\uc695\uc5d0\uc11c \uadf8 \uc2dd\uc0ac\ub97c \ud574\ubd24\ub294\ub370, \uac70\uae30\uc11c \uc18c\uace0\uae30\uc758 \uba4b\uc9c4 \uc18c\uace0\uae30 \ubd80\ubd84\uc744 \uc694\ub9ac\ud558\uace0 \ubc14\ubca0\ud050\ub85c \ub9cc\ub4e0 \ub110\ube64\uc9c0 \uac19\uc740 \uac78 \uac00\uc838\uc654\ub294\ub370, \uc815\ub9d0 \ub300\ub2e8\ud574.\\t\uadf8\ub4e4\uc774 \uac70\uae30\uc11c \uc694\ub9ac\ud558\ub294 \uc1e0\uace0\uae30\ub294 \uc5ed\uacb9\ub2e4. \uac70\uae30\uc11c \uc808\ub300 \uba39\uc9c0 \ub9c8\ub77c.\\tcontradiction\\n\ud310\ub9e4\uc6d0\uc758 \uc8fd\uc74c\uc5d0\uc11c \ube0c\ub77c\uc774\uc5b8 \ub370\ub124\ud788... \ud06c\ub9ac\uc2a4 \ucf08\ub9ac\\t\ud06c\ub9ac\uc2a4 \ucf08\ub9ac\ub294 \uc138\uc77c\uc988\ub9e8\uc758 \uc8fd\uc74c\uc744 \uc5b8\uae09\ud558\uc9c0 \uc54a\ub294\ub2e4.\\tcontradiction\\n\uadf8\ub7ec\ub294 \ub3d9\uc548 \uc694\ub9ac\uc0ac\ub294 \uadf8\ub0e5 \ud654\uac00 \ub0ac\uc5b4.\\t\uc2a4\ud29c\uac00 \ub053\ub294 \ub3d9\uc548 \uc694\ub9ac\uc0ac\ub294 \ud654\uac00 \ub0ac\ub2e4.\\tneutral\\n\ub9c8\uc9c0\ub9c9 \ub85c\ub9c8\uc758 \ub9f9\uacf5\uaca9 \uc804\ub0a0 \ubc24, 900\uba85 \uc774\uc0c1\uc758 \uc720\ub300\uc778 \uc218\ube44\uc218\ub4e4\uc774 \ub85c\ub9c8\uc778\ub4e4\uc5d0\uac8c \uadf8\ub4e4\uc744 \uc0ac\ub85c\uc7a1\ub294 \uc2b9\ub9ac\ub97c \uc8fc\uae30 \ubcf4\ub2e4\ub294 \ub300\ub7c9 \uc790\uc0b4\uc744 \uc800\uc9c8\ub800\ub2e4.\\t\ub85c\ub9c8\uc778\ub4e4\uc774 \uadf8\ub4e4\uc758 \ud3ec\ud68d\uc5d0 \uc2b9\ub9ac\ud558\ub3c4\ub85d \ub0b4\ubc84\ub824\ub450\uae30 \ubcf4\ub2e4\ub294 900\uba85\uc758 \uc720\ub300\uc778 \uc218\ube44\uc218\ub4e4\uc774 \uc790\uc0b4\ud588\ub2e4.\\tentailment\\n\uc55e\uc73c\ub85c \ubc1c\uc0ac\ud558\ub77c.\\t\ubc1c\uc0ac.\\tneutral\\n\uadf8\ub9ac\uace0 \ub2f9\uc2e0\uc740 \uc6b0\ub9ac \ub545\uc774 \uc5d0\uc774\ucee4\uc5d0 \uc788\ub2e4\ub294 \uac83\uc744 \uc54c\uace0 \uc788\ub2e4. 
\uc6b0\ub9ac \uc0ac\ub78c\ub4e4\uc740 \uc5b4\ub5a4 \uac83\uc774 \uc5bc\ub9c8\ub098 \ub9ce\uc740\uc9c0 \uc774\ud574\ud558\uc9c0 \ubabb\ud560 \uac83\uc774\ub2e4.\\t\ubaa8\ub4e0 \uc0ac\ub78c\ub4e4\uc740 \uc6b0\ub9ac\uc758 \uce21\uc815 \uc2dc\uc2a4\ud15c\uc774 \uc5b4\ub5bb\uac8c \uc791\ub3d9\ud558\ub294\uc9c0 \uc54c\uace0 \uc774\ud574\ud569\ub2c8\ub2e4.\\tcontradiction\\n\uc8fc\ubbf8\uac8c\uc2a4\\tJumiyges\ub294 \ub3c4\uc2dc\uc758 \uc774\ub984\uc774\ub2e4.\\tneutral\\n\uc0ac\ub78c\uc740 \uc790\uae30 \ubbfc\uc871\uc744 \ub3cc\ubd10\uc57c \ud55c\ub2e4...\\t\uc0ac\ub78c\uc740 \uc870\uad6d\uc5d0 \uacf5\uac10\ud574\uc57c \ud55c\ub2e4.\\tentailment\\n\ub610\ud55c PDD 63\uc740 \uc815\ubd80\uc640 \uc5c5\uacc4\uac00 \ucef4\ud4e8\ud130 \uae30\ubc18 \uacf5\uaca9\uc5d0 \ub300\ud574 \uacbd\uace0\ud558\uace0 \ubc29\uc5b4\ud560 \uc900\ube44\ub97c \ub354 \uc798\ud560 \uc218 \uc788\ub3c4\ub85d \uc2dc\uc2a4\ud15c \ucde8\uc57d\uc131, \uc704\ud611, \uce68\uc785 \ubc0f \uc774\uc0c1\uc5d0 \ub300\ud55c \uc815\ubcf4\ub97c \uacf5\uc720\ud558\ub294 \uba54\ucee4\ub2c8\uc998\uc744 \uc218\ub9bd\ud558\ub294 \uac83\uc774 \uc911\uc694\ud558\ub2e4\ub294 \uac83\uc744 \uc778\uc2dd\ud588\uc2b5\ub2c8\ub2e4.\\t\uc815\ubcf4 \uc804\uc1a1 \ud504\ub85c\ud1a0\ucf5c\uc744 \ub9cc\ub4dc\ub294 \uac83\uc740 \uc911\uc694\ud558\ub2e4.\\tentailment\\n\uce74\ud398 \ub9c1 \ud53c\uc544\uc790 \ub378\ub77c \ub808\ud4cc\ube14\ub9ac\uce74 \ubc14\ub85c \ub0a8\ucabd\uc5d0\ub294 \ud53c\ub80c\uccb4\uac00 \uc54c\ub824\uc9c4 \uc9da \uc81c\ud488 \ub54c\ubb38\uc5d0 \ud55c\ub54c \uc2a4\ud2b8\ub85c \ub9c8\ucf13\uc774\ub77c\uace0 \ubd88\ub838\ub358 16\uc138\uae30 \ub85c\uc9c0\uc544\uc778 \uba54\ub974\uce74\ud1a0 \ub204\uc624\ubcf4(Mercato Nuovo)\uac00 \uc788\ub2e4.\\t\ud53c\uc544\uc790 \ub378\ub77c \ub808\ud4cc\ube14\ub9ac\uce74\uc5d0\ub294 \uce74\ud398\uac00 \ub9ce\uc774 \uc788\ub2e4.\\tentailment\\n\uc6b0\ub9ac\uac00 \uc5ec\uae30 \uc788\ub294 \ud55c \ud2b8\ub9b0\ud310\uc774 \ubb58 \uc8fc\uc6e0\ub294\uc9c0 \uc0b4\ud3b4\ubd10\uc57c\uaca0\uc5b4\\t\uc6b0\ub9ac\ub294 \ud2b8\ub9b0\ud310\uc774 \ubb34\uc5c7\uc744 \uc8fc\uc6e0\ub294\uc9c0 \ubcf4\ub294 \ub370 \uc2dc\uac04\uc744 \ub0ad\ube44\ud558\uc9c0 \uc54a\uc744 \uac83\uc774\ub2e4.\\tcontradiction\\n\uadf8\ub7ec\ub098 \ucf08\ud2b8\uc871\uc758 \ubb38\ud654\uc801 \uae30\ubc18\uc744 \uac00\uc9c4 \uc544\uc77c\ub79c\ub4dc \uad50\ud68c\ub294 \uc720\ub7fd\uc758 \uc2e0\ud765 \uae30\ub3c5\uad50 \uc138\uacc4\uc640\ub294 \ub2e4\ub974\uac8c \ubc1c\uc804\ud588\uace0 \uacb0\uad6d \ub85c\ub9c8\uc640 \uc911\uc559\uc9d1\uad8c\uc801 \ud589\uc815\uc73c\ub85c \ub300\uccb4\ub418\uc5c8\ub2e4.\\t\uc544\uc77c\ub79c\ub4dc \uad50\ud68c\uc5d0\ub294 \ucf08\ud2b8\uc871\uc758 \uae30\uc9c0\uac00 \uc788\uc5c8\ub2e4.\\tentailment\\n\uae00\uc384, \ub10c \uc120\ud0dd\uc758 \uc5ec\uc9c0\uac00 \uc5c6\uc5b4\\t\uae00\uc384, \ub108\uc5d0\uac90 \ub9ce\uc740 \uc120\ud0dd\uad8c\uc774 \uc788\uc5b4.\\tcontradiction\\n\uc0ac\uc2e4, \uacf5\uc2dd\uc801\uc778 \ubcf4\uc7a5\uc740 \uc5c6\ub2e4.\\t\ub0b4\uac00 \uc0b0 \ubb3c\uac74\uc5d0 \ub300\ud55c \ubcf4\uc99d\uc774 \uc5c6\uc5c8\ub2e4.\\tneutral\\n\ub35c \ud65c\uae30\ucc28\uae34 \ud558\uc9c0\ub9cc, \uc548\uc2dc\uc640 \ub974 \ubd80\ub974\uc82f\uc758 \uc0ac\ub791\uc2a4\ub7ec\uc6b4 \ud638\uc218\uc5d0\uc11c\ub3c4 \uc0b6\uc740 \ub611\uac19\uc774 \uc0c1\ucf8c\ud558\ub2e4.\\t\uc548\uc2dc\uc640 \ub974 \ubd80\ub974\uac9f\uc5d0\uc11c\ub294 \ud638\uc218\uc5d0\uc11c\uc758 \ud65c\ub3d9\uc774 \uc11c\ub450\ub974\uace0 \ubc14\uc05c \ubd84\uc704\uae30\ub97c \uc5f0\ucd9c\ud55c\ub2e4.\\tcontradiction\\n\uadf8\uc758 \uc5ec\ud589 
\uc18c\uc2dd\uc774 \uc774\ubbf8 \ud37c\uc84c\ub2e4\uba74 \uacf5\uaca9 \uc18c\uc2dd\ub3c4 \ud37c\uc84c\uc744 \ud14c\uc9c0\ub9cc \ub9c8\uc744\uc5d0\uc11c\ub294 \uc804\ud600 \uacf5\ud669\uc758 \uae30\ubbf8\uac00 \ubcf4\uc774\uc9c0 \uc54a\uc558\ub2e4.\\t\uadf8\ub294 \uc65c \ub9c8\uc744\uc774 \ub2f9\ud669\ud558\uc9c0 \uc54a\uc558\ub294\uc9c0 \uc54c \uc218 \uc5c6\uc5c8\ub2e4.\\tneutral\\n\uacfc\uac70\uc5d0\ub294 \uc8fd\uc74c\uc758 \uc704\ud611\uc774 \ud1a0\uc9c0\uc758 \ud310\ub9e4\ub97c \ub9c9\ub294 \ub370 \uac70\uc758 \ub3c4\uc6c0\uc774 \ub418\uc9c0 \uc54a\uc558\ub2e4.\\t\ud1a0\uc9c0 \ud310\ub9e4\ub294 \uc5b4\ub5a0\ud55c \uc704\ud611\ub3c4 \uad50\ud658\ud558\uc9c0 \uc54a\uace0 \uc774\ub8e8\uc5b4\uc9c4\ub2e4.\\tcontradiction\\n\uc5b4\ub290 \uc2dc\uc810\uc5d0 \uc774\ub974\ub7ec \ub098\ub294 \uc9c0\uae08 \ub2e4\uac00\uc624\ub294 \uc0c8\ub85c\uc6b4 \uac83\ub4e4\uacfc \ub098\uc624\ub294 \ub9ce\uc740 \uc0c8\ub85c\uc6b4 \uac83\ub4e4\uc774 \ub0b4\uac00 \ub299\uc5b4\uac00\uace0 \uc788\ub2e4\uace0 \ub9d0\ud558\ub294 \uc2dc\ub300\ub85c \uc811\uc5b4\ub4e4\uace0 \uc788\ub2e4.\\t\ub098\ub294 \uc5ec\uc804\ud788 \ub0b4\uac00 \ubcf4\ub294 \ubaa8\ub4e0 \uc0c8\ub85c\uc6b4 \uac83\uc744 \uc0ac\ub791\ud55c\ub2e4.\\tcontradiction\\n\ub274\uc2a4\uc704\ud06c\ub294 \ubb3c\ub9ac\ud559\uc790\ub4e4\uc774 \uacbd\uae30\uc7a5 \ud589\uc0ac\uc5d0\uc11c \uace0\uc18d\ub3c4\ub85c\uc758 \uc790\ub3d9\ucc28 \uad50\ud1b5\uacfc \ubcf4\ud589\uc790 \uad50\ud1b5\uc744 \uac1c\uc120\ud558\uae30 \uc704\ud574 \uc0c8\ub5bc\uc758 \uc6c0\uc9c1\uc784\uc744 \uc5f0\uad6c\ud558\uace0 \uc788\ub2e4\uace0 \ub9d0\ud55c\ub2e4.\\t\uace0\uc18d\ub3c4\ub85c\uc758 \uc790\ub3d9\ucc28 \uad50\ud1b5 \ud750\ub984\uc744 \uac1c\uc120\ud558\ub294 \uac83\uc740 \ubb3c\ub9ac\ud559\uc790\ub4e4\uc774 \uc0c8\ub5bc\ub97c \uc5f0\uad6c\ud558\ub294 \uc774\uc720 \uc911 \ud558\ub098\uc774\ub2e4.\\tentailment\\n\uc5bc\ub9c8\ub098 \ub2e4\ub978\uac00? 
\uadf8\ub294 \uc7a0\uc2dc \ub9d0\uc744 \uba48\ucd94\uc5c8\ub2e4\uac00 \ub9d0\uc744 \uc774\uc5c8\ub2e4.\\t\uadf8\ub294 \uadf8 \uc18c\ub140\uac00 \uc5b4\ub514\uc5d0 \uc788\ub294\uc9c0 \uc54c\uace0 \uc2f6\uc5c8\ub2e4.\\tentailment\\n\uae00\uc384, \uadf8\uc5d0\uac8c \ub108\ubb34 \ub9ce\uc740 \uac83\uc744 \uc8fc\uc9c0\ub9c8.\\t\uadf8\ub294 \ud6e8\uc52c \ub354 \ub9ce\uc740 \uac83\uc744 \uc694\uad6c\ud560 \uac83\uc774\ub2e4.\\tneutral\\n\uc544\ubb34\ub9ac \uadf8\uc758 \ucc3d\uc791\ubb3c\uc774 \uc644\ubcbd\ud574 \ubcf4\uc778\ub2e4\uace0 \ud574\ub3c4, \uadf8\ub4e4\uc744 \ubbff\ub294 \uac83\uc740 \uc544\ub9c8\ub3c4 \uc88b\uc740 \uc0dd\uac01\uc774 \uc544\ub2d0 \uac83\uc774\ub2e4.\\'\\t\ub3c4\uc790\uae30\ub97c \uc798 \ub9cc\ub4e0\ub2e4\uace0 \ud574\uc11c \ub204\uad70\uac00\ub97c \ubbff\ub294 \uac83\uc740 \uc544\ub9c8 \uc88b\uc9c0 \uc54a\uc744 \uac83\uc774\ub2e4.\\tneutral\\n\ubc84\uc2a4\ud2c0\ub9c1 \uadf8\ub780 \ube44\uc544(Bustling Gran Via)\ub294 \ud638\ud154, \uc0c1\uc810, \uadf9\uc7a5, \ub098\uc774\ud2b8\ud074\ub7fd, \uce74\ud398 \ub4f1\uc774 \uc5b4\uc6b0\ub7ec\uc838 \uc0b0\ucc45\uacfc \ucc3d\uac00\ub97c \ubcfc \uc218 \uc788\ub2e4.\\tGran Via\ub294 \ud638\ud154, \uc0c1\uc810, \uadf9\uc7a5, \ub098\uc774\ud2b8\ud074\ub7fd, \uce74\ud398\uc758 \ubc88\ud654\ud55c \uc870\ud569\uc774\ub2e4.\\tentailment\\n\uc815\ubd80 \uc778\uc1c4\uc18c\\t\uadf8 \uc0ac\ubb34\uc2e4\uc740 \uc6cc\uc2f1\ud134\uc5d0 \uc704\uce58\ud574 \uc788\ub2e4.\\tneutral\\n\uc2e4\uc81c \ubb38\ud654 \uc804\uc7c1\uc774 \uc5b4\ub514 \uc788\ub294\uc9c0 \uc54c\uace0 \uc2f6\ub2e4\uba74 \ud559\uc6d0\uc744 \uc78a\uc5b4\ubc84\ub9ac\uace0 \uc2e4\ub9ac\ucf58 \ubc38\ub9ac\uc640 \ub808\ub4dc\ubaac\ub4dc\ub97c \uc0dd\uac01\ud574 \ubcf4\ub77c.\\t\uc2e4\uc81c \ubb38\ud654 \uc804\uc7c1\uc740 \ub808\ub4dc\ubaac\ub4dc\uc5d0\uc11c \uc77c\uc5b4\ub09c\ub2e4.\\tentailment\\n\uadf8\ub9ac\uace0 \ud398\ub2c8\uc2e4\ub9b0\uc744 \uc8fc\uc9c0 \uc54a\uae30 \uc704\ud574 \uce68\ub300 \uc704\uc5d0 \uc62c\ub824\ub1a8\uc5b4\\t\uadf8\ub140\uc758 \ubc29\uc5d0\ub294 \ud398\ub2c8\uc2e4\ub9b0\uc774 \uc5c6\ub2e4\ub294 \uc9d5\ud6c4\uac00 \uc804\ud600 \uc5c6\uc5c8\ub2e4.\\tcontradiction\\nL.A.\uc758 \uc57c\uc678 \uc2dc\uc7a5\uc744 \ud65c\ubcf4\ud558\ub294 \uac83\uc740 \ub9db\uc788\uace0 \uc800\ub834\ud55c \uadf8\ub8e8\ube0c\ub97c \uc7a1\uace0, \ub05d\uc774 \uc5c6\ub294 \ud587\ube5b\uc744 \uc990\uae30\uace0, \uc2e0\uc120\ud55c \ub18d\uc0b0\ubb3c, \uaf43, \ud5a5, \uadf8\ub9ac\uace0 \uac00\uc82f \uac08\ub85c\uc5b4\ub97c \uad6c\uc785\ud558\uba74\uc11c \ud604\uc9c0\uc778\ub4e4\uacfc \uc5b4\uc6b8\ub9b4 \uc218 \uc788\ub294 \ud6cc\ub96d\ud55c \ubc29\ubc95\uc774\ub2e4.\\tLA\uc758 \uc57c\uc678 \uc2dc\uc7a5\uc744 \ub3cc\uc544\ub2e4\ub2c8\ub294 \uac83\uc740 \uc2dc\uac04 \ub0ad\ube44\ub2e4.\\tcontradiction\\n\uc548\ub098\ub294 \ubc16\uc73c\ub85c \ub098\uc640 \uc548\ub3c4\uc758 \ud55c\uc228\uc744 \ub0b4\uc26c\uc5c8\ub2e4. 
\ub2e8 \ud55c \ubc88, \uadf8\ub9ac\uace0 \ub9c8\ub9ac\ud6c4\uc544\uc26c \ub9db\uc758 \uc220\ub85c \ub05d\ub0b4\uc790\ub294 \uacb0\uc2ec\uc774 \ub4a4\uc11e\uc5ec \uc788\uc5c8\ub2e4.\\t\uc548\ub098\ub294 \uc548\uc2ec\ud558\uace0 \ub9c8\ub9ac\ud6c4\uc544\uc26c \ub9db\uc758 \uc220\uc744 \ub2e4 \ub9c8\uc2dc\uae30\ub85c \uacb0\uc2ec\ud588\ub2e4.\\tentailment\\n5 \uc6d4\uc5d0 Vajpayee\ub294 \ud575 \uc2e4\ud5d8\uc758 \uc131\uacf5\uc801\uc778 \uc644\ub8cc\ub97c \ubc1c\ud45c\ud588\ub294\ub370, \uc778\ub3c4\uc778\ub4e4\uc740 \uc8fc\uad8c\uc758 \ud45c\uc2dc\ub85c \uc120\uc804\ud588\uc9c0\ub9cc \uc774\uc6c3 \uad6d\uac00\uc640 \uc11c\uad6c\uc640\uc758 \uc778\ub3c4 \uad00\uacc4\ub97c \ubcf5\uc7a1\ud558\uac8c \ub9cc\ub4e4 \uc218 \uc788\uc2b5\ub2c8\ub2e4.\\t\uc778\ub3c4\ub294 \uc131\uacf5\uc801\uc778 \ud575\uc2e4\ud5d8\uc744 \ud55c \uc801\uc774 \uc5c6\ub2e4.\\tcontradiction\\n\ud50c\ub77c\ub178 \uc6d0\uc5d0\uc11c \ubcf4\ud1b5 \uc5bc\ub9c8\ub098 \ub9ce\uc740 \uac83\uc744 \uac00\uc9c0\uace0 \uc788\ub294\uac00?\\t\uc800 \uc0ac\ub78c\ub4e4 \uc911\uc5d0 \ud50c\ub77c\ub178 \uc6d0\uc5d0 \uac00\ubcf8 \uc0ac\ub78c \uc788\uc5b4?\\tcontradiction\\n\uadf8\uac83\uc758 \uc804\uccb4\uc801\uc778 \ud615\ud0dc\uc758 \uc6b0\uc544\ud568\uc740 \uc6b4\ud558 \uac74\ub108\ud3b8\uc5d0\uc11c \uac00\uc7a5 \uc798 \ubcfc \uc218 \uc788\ub2e4. \uc65c\ub0d0\ud558\uba74, \ub85c\ub9c8\uc5d0 \uc788\ub294 \uc131 \ubca0\ub4dc\ub85c\ucc98\ub7fc, \ub3d4\uc740 \uae38\ucb49\ud55c \ubcf8\ub2f9 \ub4a4\ub85c \ub354 \uac00\uae4c\uc6b4 \uacf3\uc5d0 \uc0ac\ub77c\uc9c0\uae30 \ub54c\ubb38\uc774\ub2e4.\\t\uc131 \ubca0\ub4dc\ub85c\uc758 \uae38\ucb49\ud55c \ubcf8\ub2f9\uc740 \ub3d4\uc744 \uac00\ub9b0\ub2e4.\\tentailment\\n\ub2f9\uc2e0\uc740 \uc218\ud2f4\uc774 \uc0b4\uc5d0 \uac15\ubc15\uc801\uc778 \uae30\uc068\uc744 \uac00\uc9c0\uace0 \ub204\ub4dc\ub97c \uadf8\ub9b4 \uac83\uc774\ub77c\uace0 \uc0dd\uac01\ud558\uaca0\uc9c0\ub9cc, \uc544\ub2c8\uc624; \uadf8\ub294 \uadf8\uc758 \ubaa8\ub4e0 \uacbd\ub825\uc5d0\uc11c \ub2e8 \ud55c \uc810\ub9cc\uc744 \uadf8\ub838\uace0, \uadf8\uac83\uc740 \uc0ac\uc18c\ud55c \uadf8\ub9bc\uc774\ub2e4.\\t\uadf8\ub294 \uadf8\uac83\uc774 \uadf8\ub97c \ubd88\ud3b8\ud558\uac8c \ub9cc\ub4e4\uc5c8\uae30 \ub54c\ubb38\uc5d0 \ud558\ub098\ub9cc \uadf8\ub838\ub2e4.\\tneutral\\n\uc774 \uc778\uc0c1\uc801\uc778 \ud48d\uacbd\uc740 \uc6d0\ub798 \ub098\ud3ec \ub808\uc628\uc774 \ub8e8\ube0c\ub974 \ubc15\ubb3c\uad00\uc758 \uce68\uc2e4\uc5d0\uc11c \ubcfc \uc218 \uc788\ub3c4\ub85d \uacc4\ud68d\ub418\uc5c8\ub294\ub370, \uadf8 \ub2f9\uc2dc \uad81\uc804\uc774\uc5c8\uc2b5\ub2c8\ub2e4.\\t\ub098\ud3f4\ub808\uc639\uc740 \uadf8\uc758 \ubaa8\ub4e0 \uad81\uc804\uc5d0 \uc788\ub294 \uadf8\uc758 \uce68\uc2e4\uc5d0\uc11c \ubcf4\ub294 \uacbd\uce58\uc5d0 \ub9ce\uc740 \uad00\uc2ec\uc744 \uac00\uc84c\ub2e4.\\tneutral\\n\uadf8\ub294 \uc6b0\ub9ac\uc5d0\uac8c \ubb38 \uc5f4\uc1e0\ub97c \uac74\ub124\uc8fc\uace0\ub294 \uae09\ud788 \ub5a0\ub0ac\ub2e4.\\t\uadf8\ub294 \uae34\uc7a5\ud574\uc11c \uc6b0\ub9ac\uc5d0\uac8c \uc5f4\uc1e0\ub97c \ube68\ub9ac \uc8fc\uc5c8\ub2e4.\\tneutral\\n\uc704\uc6d0\ud68c\ub294 \ub610\ud55c \ucd5c\uc885 \uaddc\uce59\uc744 OMB\uc5d0 \uc81c\ucd9c\ud588\ub2e4.\\t\uc704\uc6d0\ud68c\ub294 \ub610\ud55c \uc774 \uaddc\uce59\uc744 \ub2e4\ub978 \uadf8\ub8f9\uc5d0 \uc81c\ucd9c\ud588\uc9c0\ub9cc \ucd5c\uc885 \uaddc\uce59\uc740 OMB\uac00 \ud3c9\uac00\ud558\uae30 \uc704\ud55c \uac83\uc774 \uc5c8\uc2b5\ub2c8\ub2e4.\\tneutral\\n\uc815\uc6d0\uac00\uac8c\uc5d0 \uac00\ubcf4\uba74 \uc62c\ub9ac\ube44\uc544\uc758 \ubcf5\uc81c \ud654\ud569\ubb3c \uac19\uc740 \uc720\ucf8c\ud55c 
\uc774\ub984\uc744 \uac00\uc9c4 \uc81c\ud488\ub4e4\uc744 \ucc3e\uc744 \uc218 \uc788\uc744 \uac81\ub2c8\ub2e4.\uc774 \uc81c\ud488\uc774 \ubfcc\ub9ac\ub97c \ub0b4\ub9ac\ub3c4\ub85d \ub3d5\uae30 \uc704\ud574 \ucd2c\uc601\uc758 \uc808\ub2e8\ub41c \ub05d\uc5d0 \ub369\ud06c\uc29b\uc744 \ud558\ub294 \ud638\ub974\ubaac\uc758 \ud63c\ud569\ubb3c\uc774\uc8e0.\\t\uc815\uc6d0 \uac00\uafb8\uae30 \uac00\uac8c\uc758 \uc81c\ud488\ub4e4\uc740 \uc885\uc885 \uadf8\ub4e4\uc758 \ubaa9\uc801\uc744 \uc124\uba85\ud558\uae30 \uc704\ud574 \uae30\uc220\uc801\uc73c\ub85c\ub098 \uacfc\ud559\uc801\uc73c\ub85c \ud30c\uc0dd\ub41c \uc774\ub984(\uc62c\ub9ac\ube44\uc544\uc758 \ubcf5\uc81c \ud654\ud569\ubb3c\ucc98\ub7fc)\uc744 \ubd80\uc5ec\ubc1b\ub294\ub2e4.\\tneutral\\n\uc2a4\ud0c0\ub294 \uc2a4\ud2f8 \uc790\uc2e0\uc774\ub098 \uc65c \uadf8\ub140\uc758 \uc774\uc57c\uae30\ub97c \ubc14\uafb8\uc5c8\ub294\uc9c0\uc5d0 \ud6e8\uc52c \ub354 \uad00\uc2ec\uc774 \uc788\uc744 \uac83\uc774\ub2e4.\\t\uc2a4\ud2f8\uc758 \uc774\uc57c\uae30\ub294 \uc870\uae08\ub3c4 \ubcc0\ud558\uc9c0 \uc54a\uc558\ub2e4.\\tcontradiction\\n\ub0a8\ud3b8\uacfc\uc758 \ub9c8\uc9c0\ub9c9 \ub300\uacb0\ub85c \ub9e5\ud2f0\uc5b4\ub294 \ub178\ub77c\uc758 \ubcc0\uc2e0\uc744 \ub108\ubb34\ub098 \ub2a5\uc219\ud558\uac8c \uc608\uace0\ud574 \uc654\uae30 \ub54c\ubb38\uc5d0, \uadf8\ub140\uc5d0\uac8c\ub294 \ub2f9\ud669\uc2a4\ub7ec\uc6b8 \uc815\ub3c4\ub85c \uac11\uc791\uc2a4\ub7ec\uc6b4 \uac83\ucc98\ub7fc \ubcf4\uc774\uc9c0\ub9cc, \uc6b0\ub9ac\uc5d0\uac8c\ub294 \uac10\uc815\uc801\uc73c\ub85c \ubd88\uac00\ud53c\ud574 \ubcf4\uc778\ub2e4.\\t\ub178\ub77c\uc758 \ubcc0\uc2e0\uc740 \ubd84\uba85\ud558\uace0 \ud544\uc5f0\uc801\uc774\uc5c8\ub2e4.\\tcontradiction\\n\uc774\uc9d1\ud2b8 \ucd5c\ub0a8\ub2e8 \ub3c4\uc2dc\uc778 \uc544\uc2a4\uc644\uc740 \uc624\ub79c \uc5ed\uc0ac\ub97c \ud1b5\ud574 \uc911\uc694\ud55c \uc5ed\ud560\uc744 \ud574\uc654\ub2e4.\\t\uc544\uc2a4\uc644\uc740 \uc774\uc9d1\ud2b8 \uad6d\uacbd \ubc14\ub85c \uc704\uc5d0 \uc704\uce58\ud574 \uc788\uc2b5\ub2c8\ub2e4.\\tneutral\\n\uadf8\ub7ec\ub098 \ud6e8\uc52c \ub354 \uc6b0\uc544\ud55c \uac74\ucd95\uc801 \ud130\uce58\ub294 \uc2e0\uc131\ud55c \ucda4\uc778 Bharatanatyam\uc5d0\uc11c \uc218\ud589\ub41c 108 \uac00\uc9c0 \uae30\ubcf8 \ud3ec\uc988\ub97c \uc2dc\ubc14 \ud328\ub110\uc5d0\uc11c \ubcfc \uc218 \uc788\uc2b5\ub2c8\ub2e4.\\t\ud328\ub110\uc5d0 \ub300\ud55c \uc2dc\ubc14\uc758 \ubb18\uc0ac\ub294 \uc77c\ubc18\uc801\uc778 \ubaa8\ud2f0\ube0c\ub2e4.\\tneutral\\n\ud638\ud654\ub86d\uac8c \uc2ec\uc5b4\uc9c4 \uacc4\ub2e8\uc2dd \uc815\uc6d0\uc740 \uc774\ud0c8\ub9ac\uc544 \ud615\uc2dd\uc758 \uac00\uc7a5 \ud6cc\ub96d\ud55c \uc559\uc0c1\ube14 \uc911 \ud558\ub098\uc785\ub2c8\ub2e4.\\t\uc544\ub984\ub2e4\uc6b4 \uc815\uc6d0\uacfc \ud76c\uadc0\ud55c \uaf43\uaf42\uc774 \ubaa8\ub450 \uc774\ud0c8\ub9ac\uc544\uc758 \ud615\uc2dd\uc801\uc778 \uc2a4\ud0c0\uc77c\uc744 \ubcf4\uc5ec\uc900\ub2e4.\\tneutral\\n\uc74c, \uadf8\ub7ac\uc73c\uba74 \uc88b\uc558\uc744 \ud150\ub370\\t\ub098\ub294 \uadf8\uac83\uc744 \ub2e4\ub974\uac8c \ud560 \uae30\ud68c\ub97c \ubab9\uc2dc \uac08\ub9dd\ud55c\ub2e4.\\tentailment\\n\ud3d0\ud5c8\uac00 \ub41c \uc131\uc758 \uae30\uc2ad\uc5d0 \uc790\ub9ac\uc7a1\uace0 \uc788\ub294 \uc608\uc05c \uc911\uc138 \ub3c4\uc2dc \ucf00\uc774\uc11c\uc2a4\ubc84\uadf8\ub294 \ub178\ubca8 \ud3c9\ud654\uc0c1 \uc218\uc0c1\uc790 \uc54c\ubc84\ud2b8 \uc288\ubc14\uc774\ucc98(1875\ub144)\uc758 \ucd9c\uc0dd\uc9c0\ub85c \ub110\ub9ac \uc54c\ub824\uc838 \uc788\ub2e4.\\t\uc54c\ubc84\ud2b8 \uc288\ubc14\uc774\ucc98\ub294 \ub458 \ub2e4 \ucf00\uc774\uc11c\uc2a4\ubc84\uadf8 
\ub9c8\uc744\uc5d0 \uc788\uc5c8\ub2e4.\\tentailment\\n\uace0\uac10\ub3c4\ub294 \ubb38\uc81c\uac00 \uc788\ub294 \ub300\ubd80\ubd84\uc758 \ud658\uc790\ub4e4\uc774 \ubc1c\uacac\ub420 \uac83\uc744 \ubcf4\uc7a5\ud55c\ub2e4.\\t\uc7a5\ube44 \ubbfc\uac10\ub3c4\ub294 \ubb38\uc81c \ud0d0\uc9c0\uc640 \uad00\ub828\uc774 \uc5c6\uc2b5\ub2c8\ub2e4.\\tcontradiction\\n\uc624\ub298\uc740 \ud655\uc2e4\ud788 \ubc18\ubc14\uc9c0 \uac19\uc740 \ub0a0\uc774\uc5c8\uc5b4\\t\uc624\ub298 \uc0ac\ubb34\uc2e4\uc5d0 \uc788\ub294 \ubaa8\ub4e0 \uc0ac\ub78c\ub4e4\uc740 \ubc18\ubc14\uc9c0\ub97c \uc785\uc5c8\ub2e4.\\tneutral\\n\ubabb\uc0dd\uae34 \ud131\uc2dc\ub3c4\ub97c \uc785\uace0.\\t\uadf8\uac83\uc740 \ubd84\ud64d\uc0c9\uacfc \uc8fc\ud669\uc0c9\uc785\ub2c8\ub2e4.\\tneutral\\n\uc774\uc8fc \ub178\ub3d9 \uc218\uc6a9\uc18c \uc624 \ub9c8\uc774 \uac13 \uadf8\ub4e4\uc740 \ud310\uc9c0 \uc0c1\uc790\uc5d0 \uc0b0\ub2e4.\\t\ub178\ub3d9 \uc218\uc6a9\uc18c\uc5d0\ub294 \ud310\uc9c0 \uc0c1\uc790\uc5d0 \uc0ac\ub294 \uc774\uc8fc \ub178\ub3d9\uc790\ub4e4\uc758 \uc0ac\uc9c4\uc774 \uc788\ub2e4.\\tneutral\\n\uadf8\ub798, \uadf8\uac00 \uc804 \uc138\uacc4\ub97c \uc5ec\ud589\ud55c \ud6c4\uc5d0 \uadf8\ub7f0 \uac70\uc57c\\t\uadf8\uac83\uc740 \uc0ac\ub78c\ub4e4\uc758 \uc138\uacc4 \uc5ec\ud589\uc744 \ub530\ub978\ub2e4.\\tentailment\\n\uac74\ub108\ud3b8\uc5d0 \ud06c\uace0 \ud070 \ucc38\ub098\ubb34 \uba87 \uadf8\ub8e8\uac00 \uc788\ub2e4.\\t\uc6b0\ub9ac\ub294 \uc5ec\uae30 \uc624\ud06c\ub098 \uc5b4\ub5a4 \uc885\ub958\uc758 \ubbf8\uad6d \ub098\ubb34\ub3c4 \uc5c6\ub2e4.\\tcontradiction\\nFort-de-France\uc5d0\uc11c \ucd9c\ubc1c\ud558\ub294 \uc790\ub3d9\ucc28\ub098 \uc5ec\uac1d\uc120\uc73c\ub85c, \ub2f9\uc2e0\uc740 \uc548\uc138 ? \ubc14\ub2e4 \ud3ec\ub3c4\uac00 \uadf8\ub298\uc744 \uc81c\uacf5\ud558\ub294 \ucf8c\uc801\ud55c \uac08\uc0c9 \ubaa8\ub798 \ud574\ubcc0\uacfc \ud53c\ud06c\ub2c9 \ud14c\uc774\ube14, \uc5b4\ub9b0\uc774 \ubbf8\ub044\ub7fc\ud2c0, \uc2dd\ub2f9\uc774 \uc788\ub294 \uc548\ub290\uc5d0 \ub3c4\ucc29\ud560 \uc218 \uc788\ub2e4.\\t\ud504\ub791\uc2a4 \uc694\uc0c8\uc5d0\uc11c \uc790\ub3d9\ucc28\ub098 \ud398\ub9ac\ub97c \ud0c0\uace0 \uc548\uc138\ub85c \uac08 \uc218 \uc788\ub2e4.\\tentailment\\n\uadf8\ub9ac\uace0 \uadf8\uac83\uc740 \uc568\ub77c\ubc30\ub9c8\uc8fc\uac00 \uc608\uc0c1\ud588\ub358 \ub300\ub85c \uc608\uc0b0\uc5d0\uc11c 50\ub9cc \ub2ec\ub7ec\ub97c \uc0ad\uac10\ud558\uc9c0 \uc54a\uc744 \uac83\uc774\ub77c\ub294 \uac83\uc744 \uc758\ubbf8\ud55c\ub2e4.\\t\uc568\ub77c\ubc30\ub9c8 \uc8fc\ub294 \uc608\uc0b0 \uc0ad\uac10\uc744 \ud558\uc9c0 \uc54a\uc558\ub2e4. \uc65c\ub0d0\ud558\uba74 \uadf8\ub807\uac8c \ud558\ub294 \uac83\uc5d0 \ub300\ud55c \ucd08\uae30 \uc815\ub2f9\uc131\uc774 \uc815\ubc00 \uc870\uc0ac\uc5d0 \ub9de\uc11c\uc9c0 \uc54a\uc558\uae30 \ub54c\ubb38\uc774\ub2e4.\\tneutral\\n\uc54c\uc558\uc5b4 \uba3c\uc800 \uc5b4 .. \uc5b4 .. 
\ub178\uc778\uc774\ub098 \uac00\uc871\uc744 \uc694\uc591\uc6d0\uc5d0 \ubcf4\ub0b4\ub294 \uac83\uc5d0 \ub300\ud574 \uc5b4\ub5bb\uac8c \uc0dd\uac01\ud558\ub2c8?\\t\uac00\uc871\uc744 \uc694\uc591\uc6d0\uc5d0 \ubcf4\ub0b4\uc11c \uc0ac\ub294 \uac83\uc5d0 \ub300\ud574 \uc5b4\ub5bb\uac8c \uc0dd\uac01\ud558\ub294\uc9c0 \uc54c \ud544\uc694\uac00 \uc5c6\ub2e4.\\tcontradiction\\n\ub098\uba38\uc9c0\ub294 \ub108\uc5d0\uac8c \ub2ec\ub838\uc5b4.\\t\ub098\uba38\uc9c0\ub294 \ub108\uc5d0\uac8c \ub2ec\ub838\uc9c0\ub9cc \uc2dc\uac04\uc774 \ub9ce\uc9c0 \uc54a\ub2e4.\\tneutral\\n\uc74c-\ud760, 3\uc6d4\uc5d0 \ud587\ubcd5\uc5d0 \ud0c0\ub294 \uac83\uc5d0 \ub300\ud574 \uac71\uc815\ud558\uba74 \uc548 \ub41c\ub2e4\ub294 \uac83\uc744 \uc54c\uace0 \uc788\ub294 3\uc6d4\uc774\uc57c.\\t3\uc6d4\uc740 \uadf8\ub807\uac8c \ub365\uc9c0 \uc54a\ub2e4.\\tneutral\\n\uadf8\ub9ac\uace0 \uc5b4, \uadf8\ub7f0 \uc791\uc740 \uac83\ub4e4\ub85c \ub2e4\uc2dc \uc2dc\uc791\ud574\ubd10. \uc544\uc9c1 \ud6e8\uc52c \uc2f8. \uc5b4, \uadf8 \ud2b9\ubcc4\ud55c \ubaa8\ub378 \ucc28\ub294 150\ub2ec\ub7ec\uc57c.\\t\uadf8 \ubaa8\ud615\ucc28\ub294 4\ucc9c \ub2ec\ub7ec\uac00 \ub4e0\ub2e4.\\tcontradiction\\n\ub0b4\uc77c \ub3cc\uc544\uac00\uc57c \ud55c\ub2e4\uba74, \uce7c\uc774 \ub9d0\ud588\ub2e4.\\t\ub3cc\uc544\uac08 \uc218 \uc5c6\uc5b4. \uc624\ub298\uc740 \uc548 \ub3fc. \ub0b4\uc77c\uc740 \uc548 \ub3fc. \uc808\ub300 \uc548 \ub3fc.\" \uce7c\uc774 \ub9d0\ud588\ub2e4.', 'sentence2': 'contradiction'}\r\n ```\r\n\r\n2. (Optional) Preferred to change the name of the features for the compatibility with `run_glue.py` in \ud83e\udd17 Transformers\r\n - `kor_nli` dataset has same data structure of multi_nli, xnli\r\n - Changing the name of features and the feature type of 'gold_label' to ClassLabel might be helpful\r\n ```python\r\n def _info(self):\r\n return datasets.DatasetInfo(\r\n description=_DESCRIPTION,\r\n features=datasets.Features(\r\n {\r\n \"premise\": datasets.Value(\"string\"),\r\n \"hypothesis\": datasets.Value(\"string\"),\r\n \"label\": datasets.features.ClassLabel(names=[\"entailment\", \"neutral\", \"contradiction\"]),\r\n } \r\n ),\r\n ```\r\n\r\nIf you don't mind, I would like to fix this.\r\nThanks!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/821\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/820","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/820\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/820\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/820\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/820","id":739387617,"node_id":"MDExOlB1bGxSZXF1ZXN0NTE4MDYwMjQ0","number":820,"title":"Update quail dataset to 
v1.3","user":{"login":"ngdodd","id":4889636,"node_id":"MDQ6VXNlcjQ4ODk2MzY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4889636?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ngdodd","html_url":"https:\/\/github.com\/ngdodd","followers_url":"https:\/\/api.github.com\/users\/ngdodd\/followers","following_url":"https:\/\/api.github.com\/users\/ngdodd\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ngdodd\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ngdodd\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ngdodd\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ngdodd\/orgs","repos_url":"https:\/\/api.github.com\/users\/ngdodd\/repos","events_url":"https:\/\/api.github.com\/users\/ngdodd\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ngdodd\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1604958566000,"updated_at":1604999195000,"closed_at":1604999195000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/820","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/820","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/820.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/820.patch"},"body":"Updated quail to most recent version, to address the problem originally discussed [here](https:\/\/github.com\/huggingface\/datasets\/issues\/806).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/820\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/819","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/819\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/819\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/819\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/819","id":739250624,"node_id":"MDExOlB1bGxSZXF1ZXN0NTE3OTQ2MjYy","number":819,"title":"Make save function use deterministic global vars 
order","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1604945523000,"updated_at":1605108052000,"closed_at":1605108051000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/819","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/819","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/819.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/819.patch"},"body":"The `dumps` function need to be deterministic for the caching mechanism.\r\nHowever in #816 I noticed that one of dill's method to recursively check the globals of a function may return the globals in different orders each time it's used. To fix that I sort the globals by key in the `globs` dictionary.\r\nI had to add a rectified `save_function` to the saving functions registry of the Pickler to make it work.\r\n\r\nThis should fix #816 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/819\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/818","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/818\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/818\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/818\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/818","id":739173861,"node_id":"MDExOlB1bGxSZXF1ZXN0NTE3ODgzMzk0","number":818,"title":"Fix type hints pickling in python 
3.6","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1604939267000,"updated_at":1604999223000,"closed_at":1604999222000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/818","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/818","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/818.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/818.patch"},"body":"Type hints can't be properly pickled in python 3.6. This was causing errors the `run_mlm.py` script from `transformers` with python 3.6\r\n\r\nHowever Cloupickle proposed a [fix](https:\/\/github.com\/cloudpipe\/cloudpickle\/pull\/318\/files) to make it work anyway.\r\nThe idea is just to implement the pickling\/unpickling of parameterized type hints. There is one detail though: since in python 3.6 we can't use `isinstance` on type hints, then we can't use pickle saving functions registry directly. 
Therefore we just wrap the `save_global` method of the Pickler.\r\n\r\nThis should fix https:\/\/github.com\/huggingface\/transformers\/issues\/8212 for python 3.6 and make `run_mlm.py` support python 3.6\r\n\r\ncc @sgugger ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/818\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/817","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/817\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/817\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/817\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/817","id":739145369,"node_id":"MDU6SXNzdWU3MzkxNDUzNjk=","number":817,"title":"Add MRQA dataset","user":{"login":"VictorSanh","id":16107619,"node_id":"MDQ6VXNlcjE2MTA3NjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16107619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/VictorSanh","html_url":"https:\/\/github.com\/VictorSanh","followers_url":"https:\/\/api.github.com\/users\/VictorSanh\/followers","following_url":"https:\/\/api.github.com\/users\/VictorSanh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/VictorSanh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/VictorSanh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/VictorSanh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/VictorSanh\/orgs","repos_url":"https:\/\/api.github.com\/users\/VictorSanh\/repos","events_url":"https:\/\/api.github.com\/users\/VictorSanh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/VictorSanh\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Done! cf #1117 and #1022"],"created_at":1604937139000,"updated_at":1607096682000,"closed_at":1607096681000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** MRQA\r\n- **Description:** Collection of different (subsets of) QA datasets all converted to the same format to evaluate out-of-domain generalization (the datasets come from different domains, distributions, etc.). Some datasets are used for training and others are used for evaluation. 
This dataset was collected as part of MRQA 2019's shared task \r\n- **Paper:** https:\/\/arxiv.org\/abs\/1910.09753\r\n- **Data:** https:\/\/github.com\/mrqa\/MRQA-Shared-Task-2019\r\n- **Motivation:** Out-of-domain generalization is becoming (has become) a de-factor evaluation for NLU systems\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/huggingface.co\/docs\/datasets\/share_dataset.html).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/817\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/816","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/816\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/816\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/816\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/816","id":739102686,"node_id":"MDU6SXNzdWU3MzkxMDI2ODY=","number":816,"title":"[Caching] Dill globalvars() output order is not deterministic and can cause cache issues.","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["To show the issue:\r\n```\r\npython -c \"from datasets.fingerprint import Hasher; a=[]; func = lambda : len(a); print(Hasher.hash(func))\"\r\n```\r\ndoesn't always return the same ouput since `globs` is a dictionary with \"a\" and \"len\" as keys but sometimes not in the same order"],"created_at":1604934080000,"updated_at":1605108050000,"closed_at":1605108050000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"Dill uses `dill.detect.globalvars` to get the globals used by a function in a recursive dump. `globalvars` returns a dictionary of all the globals that a dumped function needs. 
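As a rough illustration of what gets collected (a minimal sketch for this report, not code from the library):\r\n```python\r\nimport dill\r\n\r\na = []\r\nfunc = lambda: len(a)\r\n# the globals that a dump of `func` needs; the key order is what varies between runs\r\nprint(dill.detect.globalvars(func))\r\n```\r\n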
However the order of the keys in this dict is not deterministic and can cause caching issues.\r\n\r\nTo fix that one could register an implementation of dill's `save_function` in the `datasets` pickler that sorts the globals keys before dumping a function.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/816\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/815","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/815\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/815\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/815\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/815","id":738842092,"node_id":"MDU6SXNzdWU3Mzg4NDIwOTI=","number":815,"title":"Is dataset iterative or not?","user":{"login":"rabeehkarimimahabadi","id":73364383,"node_id":"MDQ6VXNlcjczMzY0Mzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/73364383?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi","html_url":"https:\/\/github.com\/rabeehkarimimahabadi","followers_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/followers","following_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/orgs","repos_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/repos","events_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hello !\r\nCould you give more details ?\r\n\r\nIf you mean iter through one dataset then yes, `Dataset` object does implement the `__iter__` method so you can use \r\n```python\r\nfor example in dataset:\r\n # do something\r\n```\r\n\r\nIf you want to iter through several datasets you can first concatenate them\r\n```python\r\nfrom datasets import concatenate_datasets\r\n\r\nnew_dataset = concatenate_datasets([dataset1, dataset2])\r\n```\r\nLet me know if this helps !","Hi Huggingface\/Datasets team,\nI want to use the datasets inside Seq2SeqDataset here\nhttps:\/\/github.com\/huggingface\/transformers\/blob\/master\/examples\/seq2seq\/utils.py\nand there I need to return back each line from the datasets and I am not\nsure how to access each line and implement this?\nIt seems it also has get_item attribute? so I was not sure if this is\niterative dataset? 
or if this is non-iterable datasets?\nthanks.\n\n\n\nOn Mon, Nov 9, 2020 at 10:18 AM Quentin Lhoest <notifications@github.com>\nwrote:\n\n> Hello !\n> Could you give more details ?\n>\n> If you mean iter through one dataset then yes, Dataset object does\n> implement the __iter__ method so you can use\n>\n> for example in dataset:\n> # do something\n>\n> If you want to iter through several datasets you can first concatenate them\n>\n> from datasets import concatenate_datasets\n> new_dataset = concatenate_datasets([dataset1, dataset2])\n>\n> Let me know if this helps !\n>\n> \u2014\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https:\/\/github.com\/huggingface\/datasets\/issues\/815#issuecomment-723881199>,\n> or unsubscribe\n> <https:\/\/github.com\/notifications\/unsubscribe-auth\/ARPXHHYRLSSYW6NZN2HYDBTSO6XV5ANCNFSM4TPB7OWA>\n> .\n>\n","could you tell me please if datasets also has __getitem__ any idea on how\nto integrate it with Seq2SeqDataset is appreciated thanks\n\nOn Mon, Nov 9, 2020 at 10:22 AM Rabeeh Karimi Mahabadi <rabeeh@google.com>\nwrote:\n\n> Hi Huggingface\/Datasets team,\n> I want to use the datasets inside Seq2SeqDataset here\n> https:\/\/github.com\/huggingface\/transformers\/blob\/master\/examples\/seq2seq\/utils.py\n> and there I need to return back each line from the datasets and I am not\n> sure how to access each line and implement this?\n> It seems it also has get_item attribute? so I was not sure if this is\n> iterative dataset? or if this is non-iterable datasets?\n> thanks.\n>\n>\n>\n> On Mon, Nov 9, 2020 at 10:18 AM Quentin Lhoest <notifications@github.com>\n> wrote:\n>\n>> Hello !\n>> Could you give more details ?\n>>\n>> If you mean iter through one dataset then yes, Dataset object does\n>> implement the __iter__ method so you can use\n>>\n>> for example in dataset:\n>> # do something\n>>\n>> If you want to iter through several datasets you can first concatenate\n>> them\n>>\n>> from datasets import concatenate_datasets\n>> new_dataset = concatenate_datasets([dataset1, dataset2])\n>>\n>> Let me know if this helps !\n>>\n>> \u2014\n>> You are receiving this because you authored the thread.\n>> Reply to this email directly, view it on GitHub\n>> <https:\/\/github.com\/huggingface\/datasets\/issues\/815#issuecomment-723881199>,\n>> or unsubscribe\n>> <https:\/\/github.com\/notifications\/unsubscribe-auth\/ARPXHHYRLSSYW6NZN2HYDBTSO6XV5ANCNFSM4TPB7OWA>\n>> .\n>>\n>\n","`datasets.Dataset` objects implement indeed `__getitem__`. It returns a dictionary with one field per column.\r\n\r\nWe've not added the integration of the datasets library for the seq2seq utilities yet. The current seq2seq utilities are based on text files.\r\n\r\nHowever as soon as you have a `datasets.Dataset` with columns \"tgt_texts\" (str), \"src_texts\" (str), and \"id\" (int) you should be able to implement your own Seq2SeqDataset class that wraps your dataset object. Does that make sense to you ?","Hi\nI am sorry for asking it multiple times but I am not getting the dataloader\ntype, could you confirm if the dataset library returns back an iterable\ntype dataloader or a mapping type one where one has access to __getitem__,\nin the former case, one can iterate with __iter__, and how I can configure\nit to return the data back as the iterative type? 
I am dealing with\nlarge-scale datasets and I do not want to bring all in memory\nthanks for your help\nBest regards\nRabeeh\n\nOn Mon, Nov 9, 2020 at 11:17 AM Quentin Lhoest <notifications@github.com>\nwrote:\n\n> datasets.Dataset objects implement indeed __getitem__. It returns a\n> dictionary with one field per column.\n>\n> We've not added the integration of the datasets library for the seq2seq\n> utilities yet. The current seq2seq utilities are based on text files.\n>\n> However as soon as you have a datasets.Dataset with columns \"tgt_texts\"\n> (str), \"src_texts\" (str), and \"id\" (int) you should be able to implement\n> your own Seq2SeqDataset class that wraps your dataset object. Does that\n> make sense ?\n>\n> \u2014\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https:\/\/github.com\/huggingface\/datasets\/issues\/815#issuecomment-723915556>,\n> or unsubscribe\n> <https:\/\/github.com\/notifications\/unsubscribe-auth\/ARPXHHYOC22EM7F666BZSOTSO66R3ANCNFSM4TPB7OWA>\n> .\n>\n","`datasets.Dataset` objects are both iterative and mapping types: it has both `__iter__` and `__getitem__`\r\nFor example you can do\r\n```python\r\nfor example in dataset:\r\n # do something\r\n```\r\nor\r\n```python\r\nfor i in range(len(dataset)):\r\n example = dataset[i]\r\n # do something\r\n```\r\nWhen you do that, one and only one example is loaded into memory at a time.","Hi there, \r\nHere is what I am trying, this is not working for me in map-style datasets, could you please tell me how to use datasets with being able to access ___getitem__ ? could you assist me please correcting this example? I need map-style datasets which is formed from concatenation of two datasets from your library. thanks \r\n\r\n\r\n```\r\nimport datasets\r\ndataset1 = load_dataset(\"squad\", split=\"train[:10]\")\r\ndataset1 = dataset1.map(lambda example: {\"src_texts\": \"question: {0} context: {1} \".format(\r\n example[\"question\"], example[\"context\"]),\r\n \"tgt_texts\": example[\"answers\"][\"text\"][0]}, remove_columns=dataset1.column_names)\r\ndataset2 = load_dataset(\"imdb\", split=\"train[:10]\")\r\ndataset2 = dataset2.map(lambda example: {\"src_texts\": \"imdb: \" + example[\"text\"],\r\n \"tgt_texts\": str(example[\"label\"])}, remove_columns=dataset2.column_names)\r\ntrain_dataset = datasets.concatenate_datasets([dataset1, dataset2])\r\ntrain_dataset.set_format(type='torch', columns=['src_texts', 'tgt_texts'])\r\ndataloader = torch.utils.data.DataLoader(train_dataset, batch_size=32)\r\nfor id, batch in enumerate(dataloader):\r\n print(batch)\r\n\r\n```","closed since I found this response on the issue https:\/\/github.com\/huggingface\/datasets\/issues\/469"],"created_at":1604913108000,"updated_at":1605005403000,"closed_at":1605005403000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not?\r\ncould you provide me with example how I can use datasets as iterative datasets?\r\nthanks","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/815\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/814","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/814\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/814\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/814\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/814","id":738500443,"node_id":"MDU6SXNzdWU3Mzg1MDA0NDM=","number":814,"title":"Joining multiple datasets ","user":{"login":"rabeehkarimimahabadi","id":73364383,"node_id":"MDQ6VXNlcjczMzY0Mzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/73364383?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi","html_url":"https:\/\/github.com\/rabeehkarimimahabadi","followers_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/followers","following_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/orgs","repos_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/repos","events_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["found a solution here https:\/\/discuss.pytorch.org\/t\/train-simultaneously-on-two-datasets\/649\/35, closed for now, thanks "],"created_at":1604852370000,"updated_at":1604864328000,"closed_at":1604864328000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi\r\nI have multiple iterative datasets from your library with different size and I want to join them in a way that each datasets is sampled equally, so smaller datasets more, larger one less, could you tell me how to implement this in pytorch? 
thanks ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/814\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/813","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/813\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/813\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/813\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/813","id":738489852,"node_id":"MDU6SXNzdWU3Mzg0ODk4NTI=","number":813,"title":"How to implement DistributedSampler with datasets ","user":{"login":"rabeehkarimimahabadi","id":73364383,"node_id":"MDQ6VXNlcjczMzY0Mzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/73364383?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi","html_url":"https:\/\/github.com\/rabeehkarimimahabadi","followers_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/followers","following_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/orgs","repos_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/repos","events_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rabeehkarimimahabadi\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi Apparently I need to shard the data and give one host a chunk, could you provide me please with examples on how to do it? I want to use it jointly with finetune_trainer.py in huggingface repo seq2seq examples. thanks. ","Hey @rabeehkarimimahabadi I'm actually looking for the same feature. Did you manage to get somewhere?"],"created_at":1604849231000,"updated_at":1611150147000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\nI am using your datasets to define my dataloaders, and I am training finetune_trainer.py in huggingface repo on them.\r\nI need a distributedSampler to be able to train the models on TPUs being able to distribute the load across the TPU cores. Could you tell me how I can implement the distribued sampler when using datasets in which datasets are iterative? To give you more context, I have multiple of datasets and I need to write sampler for this case. thanks. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/813\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/812","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/812\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/812\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/812\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/812","id":738340217,"node_id":"MDU6SXNzdWU3MzgzNDAyMTc=","number":812,"title":"Too much logging ","user":{"login":"dspoka","id":6183050,"node_id":"MDQ6VXNlcjYxODMwNTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6183050?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dspoka","html_url":"https:\/\/github.com\/dspoka","followers_url":"https:\/\/api.github.com\/users\/dspoka\/followers","following_url":"https:\/\/api.github.com\/users\/dspoka\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dspoka\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dspoka\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dspoka\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dspoka\/orgs","repos_url":"https:\/\/api.github.com\/users\/dspoka\/repos","events_url":"https:\/\/api.github.com\/users\/dspoka\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dspoka\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Thanks for reporting :) \r\nI agree these one should be hidden when the logging level is warning, we'll fix that","+1, the amount of logging is excessive.\r\n\r\nMost of it indeed comes from `filelock.py`, though there are occasionally messages from other sources too. 
Below is an example (all of these messages were logged after I already called `datasets.logging.set_verbosity_error()`)\r\n\r\n```\r\nI1109 21:26:01.742688 139785006901056 filelock.py:318] Lock 139778216292192 released on \/home\/kitaev\/.cache\/huggingface\/datasets\/9ed4f2e133395826175a892c70611f68522c7bc61a35476e8b51a31afb76e4bf.e6f3e3f3e3875a07469d1cfd32e16e1d06b149616b11eef2d081c43d515b492d.py.lock\r\nI1109 21:26:01.747898 139785006901056 filelock.py:274] Lock 139778216290176 acquired on \/home\/kitaev\/.cache\/huggingface\/datasets\/_home_kitaev_.cache_huggingface_datasets_glue_mnli_1.0.0_7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4.lock\r\nI1109 21:26:01.748258 139785006901056 filelock.py:318] Lock 139778216290176 released on \/home\/kitaev\/.cache\/huggingface\/datasets\/_home_kitaev_.cache_huggingface_datasets_glue_mnli_1.0.0_7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4.lock\r\nI1109 21:26:01.748412 139785006901056 filelock.py:274] Lock 139778215853024 acquired on \/home\/kitaev\/.cache\/huggingface\/datasets\/_home_kitaev_.cache_huggingface_datasets_glue_mnli_1.0.0_7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4.lock\r\nI1109 21:26:01.748497 139785006901056 filelock.py:318] Lock 139778215853024 released on \/home\/kitaev\/.cache\/huggingface\/datasets\/_home_kitaev_.cache_huggingface_datasets_glue_mnli_1.0.0_7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4.lock\r\nI1109 21:07:17.029001 140301730502464 filelock.py:274] Lock 140289479304360 acquired on \/home\/kitaev\/.cache\/huggingface\/datasets\/b16d3a04bf2cad1346896852bf120ba846ea1bebb1cd60255bb3a1a2bbcc3a67.ec871b06a00118091ec63eff0a641fddcb8d3c7cd52e855bbb2be28944df4b82.py.lock\r\nI1109 21:07:17.029341 140301730502464 filelock.py:318] Lock 140289479304360 released on \/home\/kitaev\/.cache\/huggingface\/datasets\/b16d3a04bf2cad1346896852bf120ba846ea1bebb1cd60255bb3a1a2bbcc3a67.ec871b06a00118091ec63eff0a641fddcb8d3c7cd52e855bbb2be28944df4b82.py.lock\r\nI1109 21:07:17.058964 140301730502464 filelock.py:274] Lock 140251889388120 acquired on \/home\/kitaev\/.cache\/huggingface\/metrics\/glue\/mnli\/default_experiment-1-0.arrow.lock\r\nI1109 21:07:17.060933 140301730502464 filelock.py:318] Lock 140251889388120 released on \/home\/kitaev\/.cache\/huggingface\/metrics\/glue\/mnli\/default_experiment-1-0.arrow.lock\r\nI1109 21:07:17.061067 140301730502464 filelock.py:274] Lock 140296072521488 acquired on \/home\/kitaev\/.cache\/huggingface\/metrics\/glue\/mnli\/default_experiment-1-0.arrow.lock\r\nI1109 21:07:17.069736 140301730502464 metric.py:400] Removing \/home\/kitaev\/.cache\/huggingface\/metrics\/glue\/mnli\/default_experiment-1-0.arrow\r\nI1109 21:07:17.069949 140301730502464 filelock.py:318] Lock 140296072521488 released on \/home\/kitaev\/.cache\/huggingface\/metrics\/glue\/mnli\/default_experiment-1-0.arrow.lock\r\n```","So how to solve this problem?","In the latest version of the lib the logs about locks are at the DEBUG level so you won't see them by default.\r\nAlso `set_verbosity_warning` does take into account these logs now.\r\nCan you try to update the lib ?\r\n```\r\npip install --upgrade datasets\r\n```","Thanks. For some reason I have to use the older version. Is that possible I can fix this by some surface-level trick?\r\n\r\nI'm still using 1.13 version datasets.","On older versions you can use\r\n```python\r\nimport logging\r\n\r\nlogging.getLogger(\"filelock\").setLevel(logging.WARNING)\r\n```","Whoa Thank you! 
It works!"],"created_at":1604793390000,"updated_at":1611671494000,"closed_at":1605546402000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I'm doing this in the beginning of my script:\r\n\r\nfrom datasets.utils import logging as datasets_logging\r\ndatasets_logging.set_verbosity_warning()\r\n\r\nbut I'm still getting these logs:\r\n\r\n[2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on \/home\/username\/.cache\/huggingface\/datasets\/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock\r\n\r\n[2020-11-07 15:45:41,909][filelock][INFO] - Lock 139958278886176 released on \/home\/username\/.cache\/huggingface\/datasets\/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock\r\n\r\nusing datasets version = 1.1.2","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/812\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/811","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/811\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/811\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/811\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/811","id":738280132,"node_id":"MDU6SXNzdWU3MzgyODAxMzI=","number":811,"title":"nlp viewer error","user":{"login":"jc-hou","id":30210529,"node_id":"MDQ6VXNlcjMwMjEwNTI5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/30210529?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jc-hou","html_url":"https:\/\/github.com\/jc-hou","followers_url":"https:\/\/api.github.com\/users\/jc-hou\/followers","following_url":"https:\/\/api.github.com\/users\/jc-hou\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jc-hou\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jc-hou\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jc-hou\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jc-hou\/orgs","repos_url":"https:\/\/api.github.com\/users\/jc-hou\/repos","events_url":"https:\/\/api.github.com\/users\/jc-hou\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jc-hou\/received_events","type":"User","site_admin":false},"labels":[{"id":2107841032,"node_id":"MDU6TGFiZWwyMTA3ODQxMDMy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/nlp-viewer","name":"nlp-viewer","color":"94203D","default":false,"description":""}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["and also for 'blog_authorship_corpus'\r\nhttps:\/\/huggingface.co\/nlp\/viewer\/?dataset=blog_authorship_corpus\r\n![image](https:\/\/user-images.githubusercontent.com\/30210529\/98557329-5c182800-22a4-11eb-9b01-5b910fb8fcd4.png)\r\n","Is this the problem of my local computer or ??"],"created_at":1604768938000,"updated_at":1605540383000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hello, \r\nwhen I select amazon_us_reviews in nlp viewer, it shows 
error.\r\nhttps:\/\/huggingface.co\/nlp\/viewer\/?dataset=amazon_us_reviews\r\n![image](https:\/\/user-images.githubusercontent.com\/30210529\/98447334-4aa81200-2124-11eb-9dca-82c3ab34ccc2.png)\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/811\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/810","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/810\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/810\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/810\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/810","id":737878370,"node_id":"MDExOlB1bGxSZXF1ZXN0NTE2ODQzMzQ3","number":810,"title":"Fix seqeval metric","user":{"login":"sgugger","id":35901082,"node_id":"MDQ6VXNlcjM1OTAxMDgy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35901082?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sgugger","html_url":"https:\/\/github.com\/sgugger","followers_url":"https:\/\/api.github.com\/users\/sgugger\/followers","following_url":"https:\/\/api.github.com\/users\/sgugger\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sgugger\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sgugger\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sgugger\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sgugger\/orgs","repos_url":"https:\/\/api.github.com\/users\/sgugger\/repos","events_url":"https:\/\/api.github.com\/users\/sgugger\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sgugger\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1604679103000,"updated_at":1604930669000,"closed_at":1604930668000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/810","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/810","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/810.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/810.patch"},"body":"The current seqeval metric returns the following error when computed:\r\n```\r\n~\/.cache\/huggingface\/modules\/datasets_modules\/metrics\/seqeval\/78a944d83252b5a16c9a2e49f057f4c6e02f18cc03349257025a8c9aea6524d8\/seqeval.py in _compute(self, predictions, references, suffix)\r\n 102 scores = {}\r\n 103 for type_name, score in report.items():\r\n--> 104 scores[type_name][\"precision\"] = score[\"precision\"]\r\n 105 scores[type_name][\"recall\"] = score[\"recall\"]\r\n 106 scores[type_name][\"f1\"] = score[\"f1-score\"]\r\n\r\nKeyError: 'LOC'\r\n```\r\nThis is because the current code basically tries to do:\r\n```\r\nscores = {}\r\nscores[\"LOC\"][\"precision\"] = some_value\r\n```\r\nwhich does not work in python. 
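For reference, a minimal sketch of why the snippet above raises `KeyError` and one way to fix it while keeping the nested result structure (hypothetical values; not necessarily the exact change made in this PR):

```python
# Toy per-entity report, shaped like seqeval's classification_report output (hypothetical values)
report = {"LOC": {"precision": 0.9, "recall": 0.8, "f1-score": 0.85}}

scores = {}
# scores["LOC"]["precision"] = ...  # KeyError: 'LOC' -- the inner dict was never created

# Creating the inner dict for each entity type first keeps the same nested keys:
for type_name, score in report.items():
    scores[type_name] = {
        "precision": score["precision"],
        "recall": score["recall"],
        "f1": score["f1-score"],
    }

print(scores["LOC"]["precision"])  # 0.9
```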
This PR fixes that while keeping the previous nested structure of results, with the same keys.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/810\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/809","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/809\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/809\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/809\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/809","id":737832701,"node_id":"MDU6SXNzdWU3Mzc4MzI3MDE=","number":809,"title":"Add Google Taskmaster dataset","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hey @yjernite. 
Was going to start working on this but found taskmaster 1,2 & 3 in the datasets library already so think this can be closed now?","You are absolutely right :) \r\n\r\nClosed by https:\/\/github.com\/huggingface\/datasets\/pull\/1193 https:\/\/github.com\/huggingface\/datasets\/pull\/1197 https:\/\/github.com\/huggingface\/datasets\/pull\/1213"],"created_at":1604675441000,"updated_at":1618924166000,"closed_at":1618924166000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** Taskmaster\r\n- **Description:** A large dataset of task-oriented dialogue with annotated goals (55K dialogues covering entertainment and travel reservations)\r\n- **Paper:** https:\/\/arxiv.org\/abs\/1909.05358\r\n- **Data:** https:\/\/github.com\/google-research-datasets\/Taskmaster\r\n- **Motivation:** One of few annotated datasets of this size for goal-oriented dialogue\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/huggingface.co\/docs\/datasets\/share_dataset.html).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/809\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/808","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/808\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/808\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/808\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/808","id":737638942,"node_id":"MDExOlB1bGxSZXF1ZXN0NTE2NjQ0NDc0","number":808,"title":"dataset(dgs): initial dataset loading script","user":{"login":"AmitMY","id":5757359,"node_id":"MDQ6VXNlcjU3NTczNTk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5757359?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/AmitMY","html_url":"https:\/\/github.com\/AmitMY","followers_url":"https:\/\/api.github.com\/users\/AmitMY\/followers","following_url":"https:\/\/api.github.com\/users\/AmitMY\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/AmitMY\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/AmitMY\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/AmitMY\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/AmitMY\/orgs","repos_url":"https:\/\/api.github.com\/users\/AmitMY\/repos","events_url":"https:\/\/api.github.com\/users\/AmitMY\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/AmitMY\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @AmitMY, \r\n\r\nWere you able to figure this out?","I did not.\r\nWith all the limitations this repo currently has, I had to create a repo of my own using tfds to mitigate them. 
\r\nhttps:\/\/github.com\/sign-language-processing\/datasets\/tree\/master\/sign_language_datasets\/datasets\/dgs_corpus\r\n\r\nClosing as I don't know how to support this PR further"],"created_at":1604657683000,"updated_at":1616480335000,"closed_at":1616480335000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/808","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/808","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/808.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/808.patch"},"body":"When trying to create dummy data I get:\r\n\r\n> Dataset datasets with config None seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has t o be created with less guidance. Make sure you create the file dummy_data.\r\n\r\nI am not sure how to manually create the dummy_data (what exactly it should contain)\r\n\r\nAlso note, this library says:\r\n> ImportError: To be able to use this dataset, you need to install the following dependencies['pympi'] using 'pip install pympi' for instance'\r\n\r\nWhen you actually need to `pip install pympi-ling`\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/808\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/807","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/807\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/807\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/807\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/807","id":737509954,"node_id":"MDU6SXNzdWU3Mzc1MDk5NTQ=","number":807,"title":"load_dataset for LOCAL CSV files report CONNECTION ERROR","user":{"login":"shexuan","id":25664170,"node_id":"MDQ6VXNlcjI1NjY0MTcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25664170?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/shexuan","html_url":"https:\/\/github.com\/shexuan","followers_url":"https:\/\/api.github.com\/users\/shexuan\/followers","following_url":"https:\/\/api.github.com\/users\/shexuan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/shexuan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/shexuan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/shexuan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/shexuan\/orgs","repos_url":"https:\/\/api.github.com\/users\/shexuan\/repos","events_url":"https:\/\/api.github.com\/users\/shexuan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/shexuan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi !\r\nThe url works on my side.\r\n\r\nIs the url working in your navigator ?\r\nAre you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?","> Hi !\r\n> The url works on my side.\r\n> \r\n> Is the url working in your navigator ?\r\n> Are you connected to internet ? 
Does your network block access to `raw.githubusercontent.com` ?\r\n\r\nI tried another server, it's working now. Thanks a lot.\r\n\r\nAnd I'm curious about why download things from \"github\" when I load dataset from local files ? Dose datasets work if my network crashed?","It seems my network frequently crashed so most time it cannot work.","\r\n\r\n\r\n> > Hi !\r\n> > The url works on my side.\r\n> > Is the url working in your navigator ?\r\n> > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?\r\n> \r\n> I tried another server, it's working now. Thanks a lot.\r\n> \r\n> And I'm curious about why download things from \"github\" when I load dataset from local files ? Dose datasets work if my network crashed?\r\n\r\nI download the scripts `https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.2\/datasets\/csv\/csv.py` and move it to the package dir `*\/datasets\/` solved the problem. Could you please put the file `datasets\/datasets\/csv\/csv.py` to `datasets\/src\/datasets\/`\uff1f \r\n\r\nThanks :D","hello, how did you solve this problems?\r\n\r\n> > > Hi !\r\n> > > The url works on my side.\r\n> > > Is the url working in your navigator ?\r\n> > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?\r\n> > \r\n> > \r\n> > I tried another server, it's working now. Thanks a lot.\r\n> > And I'm curious about why download things from \"github\" when I load dataset from local files ? Dose datasets work if my network crashed?\r\n> \r\n> I download the scripts `https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.2\/datasets\/csv\/csv.py` and move it to the package dir `*\/datasets\/` solved the problem. Could you please put the file `datasets\/datasets\/csv\/csv.py` to `datasets\/src\/datasets\/`\uff1f\r\n> \r\n> Thanks :D\r\n\r\nhello, I tried this. but it still failed. how do you fix this error?","> hello, how did you solve this problems?\r\n> \r\n> > > > Hi !\r\n> > > > The url works on my side.\r\n> > > > Is the url working in your navigator ?\r\n> > > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?\r\n> > > \r\n> > > \r\n> > > I tried another server, it's working now. Thanks a lot.\r\n> > > And I'm curious about why download things from \"github\" when I load dataset from local files ? Dose datasets work if my network crashed?\r\n> > \r\n> > \r\n> > I download the scripts `https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.2\/datasets\/csv\/csv.py` and move it to the package dir `*\/datasets\/` solved the problem. Could you please put the file `datasets\/datasets\/csv\/csv.py` to `datasets\/src\/datasets\/`\uff1f\r\n> > Thanks :D\r\n> \r\n> hello, I tried this. but it still failed. how do you fix this error?\r\n\r\n\u4f60\u628a\u90a3\u4e2a\u811a\u672c\u4e0b\u8f7d\u5230\u4f60\u672c\u5730\u5b89\u88c5\u76ee\u5f55\u4e0b\uff0c\u7136\u540e `load_dataset(csv_script_path, data_fiels)`\r\n\r\n","> > hello, how did you solve this problems?\r\n> > > > > Hi !\r\n> > > > > The url works on my side.\r\n> > > > > Is the url working in your navigator ?\r\n> > > > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?\r\n> > > > \r\n> > > > \r\n> > > > I tried another server, it's working now. Thanks a lot.\r\n> > > > And I'm curious about why download things from \"github\" when I load dataset from local files ? 
Dose datasets work if my network crashed?\r\n> > > \r\n> > > \r\n> > > I download the scripts `https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.2\/datasets\/csv\/csv.py` and move it to the package dir `*\/datasets\/` solved the problem. Could you please put the file `datasets\/datasets\/csv\/csv.py` to `datasets\/src\/datasets\/`\uff1f\r\n> > > Thanks :D\r\n> > \r\n> > \r\n> > hello, I tried this. but it still failed. how do you fix this error?\r\n> \r\n> \u4f60\u628a\u90a3\u4e2a\u811a\u672c\u4e0b\u8f7d\u5230\u4f60\u672c\u5730\u5b89\u88c5\u76ee\u5f55\u4e0b\uff0c\u7136\u540e `load_dataset(csv_script_path, data_fiels)`\r\n\r\n\u597d\u7684\u597d\u7684\uff01\u89e3\u51b3\u4e86\uff0c\u611f\u8c22\u611f\u8c22\uff01\uff01\uff01","> \r\n> \r\n> > hello, how did you solve this problems?\r\n> > > > > Hi !\r\n> > > > > The url works on my side.\r\n> > > > > Is the url working in your navigator ?\r\n> > > > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?\r\n> > > > \r\n> > > > \r\n> > > > I tried another server, it's working now. Thanks a lot.\r\n> > > > And I'm curious about why download things from \"github\" when I load dataset from local files ? Dose datasets work if my network crashed?\r\n> > > \r\n> > > \r\n> > > I download the scripts `https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.2\/datasets\/csv\/csv.py` and move it to the package dir `*\/datasets\/` solved the problem. Could you please put the file `datasets\/datasets\/csv\/csv.py` to `datasets\/src\/datasets\/`\uff1f\r\n> > > Thanks :D\r\n> > \r\n> > \r\n> > hello, I tried this. but it still failed. how do you fix this error?\r\n> \r\n> \u4f60\u628a\u90a3\u4e2a\u811a\u672c\u4e0b\u8f7d\u5230\u4f60\u672c\u5730\u5b89\u88c5\u76ee\u5f55\u4e0b\uff0c\u7136\u540e `load_dataset(csv_script_path, data_fiels)`\r\n\r\n\u6211\u7167\u7740\u505a\u4e86\uff0c\u7136\u540e\u62a5\u9519\u3002\r\nValueError: unable to parse C:\/Software\/Anaconda\/envs\/ptk_gpu2\/Lib\/site-packages\/datasets\\dataset_infos.json as a URL or as a local path\r\n\r\n`---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-5-fd2106a3f053> in <module>\r\n----> 1 dataset = load_dataset('C:\/Software\/Anaconda\/envs\/ptk_gpu2\/Lib\/site-packages\/datasets\/csv.py', data_files='.\/test.csv', delimiter=',', autogenerate_column_names=False)\r\n\r\nC:\\Software\\Anaconda\\envs\\ptk_gpu2\\lib\\site-packages\\datasets\\load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)\r\n 588 # Download\/copy dataset processing script\r\n 589 module_path, hash = prepare_module(\r\n--> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True\r\n 591 )\r\n 592 \r\n\r\nC:\\Software\\Anaconda\\envs\\ptk_gpu2\\lib\\site-packages\\datasets\\load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs)\r\n 296 local_dataset_infos_path = cached_path(\r\n 297 dataset_infos,\r\n--> 298 download_config=download_config,\r\n 299 )\r\n 300 except (FileNotFoundError, ConnectionError):\r\n\r\nC:\\Software\\Anaconda\\envs\\ptk_gpu2\\lib\\site-packages\\datasets\\utils\\file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)\r\n 316 else:\r\n 317 # Something unknown\r\n--> 318 raise 
ValueError(\"unable to parse {} as a URL or as a local path\".format(url_or_filename))\r\n 319 \r\n 320 if download_config.extract_compressed_file and output_path is not None:\r\n\r\nValueError: unable to parse C:\/Software\/Anaconda\/envs\/ptk_gpu2\/Lib\/site-packages\/datasets\\dataset_infos.json as a URL or as a local path\r\n\r\n`","I also experienced this issue this morning. Looks like something specific to windows.\r\nI'm working on a fix","I opened a PR @wn1652400018","> \r\n> \r\n> I opened a PR @wn1652400018\r\n\r\nThanks you!, It works very well."],"created_at":1604644384000,"updated_at":1610328627000,"closed_at":1605331834000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## load_dataset for LOCAL CSV files report CONNECTION ERROR\r\n- **Description:** \r\nA local demo csv file:\r\n```\r\nimport pandas as pd\r\nimport numpy as np\r\nfrom datasets import load_dataset\r\nimport torch\r\nimport transformers\r\n\r\ndf = pd.DataFrame(np.arange(1200).reshape(300,4))\r\ndf.to_csv('test.csv', header=False, index=False)\r\n\r\nprint('datasets version: ', datasets.__version__)\r\nprint('pytorch version: ', torch.__version__)\r\nprint('transformers version: ', transformers.__version__)\r\n\r\n# output:\r\ndatasets version: 1.1.2\r\npytorch version: 1.5.0\r\ntransformers version: 3.2.0\r\n```\r\n\r\nwhen I load data through `dataset`:\r\n```\r\ndataset = load_dataset('csv', data_files='.\/test.csv', delimiter=',', autogenerate_column_names=False)\r\n```\r\nError infos:\r\n```\r\nConnectionError Traceback (most recent call last)\r\n<ipython-input-17-bbdadb9a0c78> in <module>\r\n----> 1 dataset = load_dataset('csv', data_files='.\/test.csv', delimiter=',', autogenerate_column_names=False)\r\n\r\n~\/.conda\/envs\/py36\/lib\/python3.6\/site-packages\/datasets\/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)\r\n 588 # Download\/copy dataset processing script\r\n 589 module_path, hash = prepare_module(\r\n--> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True\r\n 591 )\r\n 592 \r\n\r\n~\/.conda\/envs\/py36\/lib\/python3.6\/site-packages\/datasets\/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs)\r\n 266 file_path = hf_github_url(path=path, name=name, dataset=dataset, version=script_version)\r\n 267 try:\r\n--> 268 local_path = cached_path(file_path, download_config=download_config)\r\n 269 except FileNotFoundError:\r\n 270 if script_version is not None:\r\n\r\n~\/.conda\/envs\/py36\/lib\/python3.6\/site-packages\/datasets\/utils\/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)\r\n 306 user_agent=download_config.user_agent,\r\n 307 local_files_only=download_config.local_files_only,\r\n--> 308 use_etag=download_config.use_etag,\r\n 309 )\r\n 310 elif os.path.exists(url_or_filename):\r\n\r\n~\/.conda\/envs\/py36\/lib\/python3.6\/site-packages\/datasets\/utils\/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag)\r\n 473 elif response is not None and response.status_code == 404:\r\n 474 raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\n--> 475 raise ConnectionError(\"Couldn't reach {}\".format(url))\r\n 476 \r\n 477 # Try a second 
time\r\n\r\nConnectionError: Couldn't reach https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.2\/datasets\/csv\/csv.py\r\n```\r\n\r\nAnd I try to connect to the site with requests:\r\n```\r\nimport requests\r\n\r\nrequests.head(\"https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.2\/datasets\/csv\/csv.py\")\r\n```\r\n\r\nSimilarly Error occurs:\r\n```\r\n---------------------------------------------------------------------------\r\nConnectionRefusedError Traceback (most recent call last)\r\n~\/.conda\/envs\/py36\/lib\/python3.6\/site-packages\/urllib3\/connection.py in _new_conn(self)\r\n 159 conn = connection.create_connection(\r\n--> 160 (self._dns_host, self.port), self.timeout, **extra_kw\r\n 161 )\r\n\r\n~\/.conda\/envs\/py36\/lib\/python3.6\/site-packages\/urllib3\/util\/connection.py in create_connection(address, timeout, source_address, socket_options)\r\n 83 if err is not None:\r\n---> 84 raise err\r\n 85 \r\n\r\n~\/.conda\/envs\/py36\/lib\/python3.6\/site-packages\/urllib3\/util\/connection.py in create_connection(address, timeout, source_address, socket_options)\r\n 73 sock.bind(source_address)\r\n---> 74 sock.connect(sa)\r\n 75 return sock\r\n\r\nConnectionRefusedError: [Errno 111] Connection refused\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nNewConnectionError Traceback (most recent call last)\r\n~\/.conda\/envs\/py36\/lib\/python3.6\/site-packages\/urllib3\/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)\r\n 676 headers=headers,\r\n--> 677 chunked=chunked,\r\n 678 )\r\n\r\n~\/.conda\/envs\/py36\/lib\/python3.6\/site-packages\/urllib3\/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)\r\n 380 try:\r\n--> 381 self._validate_conn(conn)\r\n 382 except (SocketTimeout, BaseSSLError) as e:\r\n\r\n~\/.conda\/envs\/py36\/lib\/python3.6\/site-packages\/urllib3\/connectionpool.py in _validate_conn(self, conn)\r\n 975 if not getattr(conn, \"sock\", None): # AppEngine might not have `.sock`\r\n--> 976 conn.connect()\r\n 977 \r\n\r\n~\/.conda\/envs\/py36\/lib\/python3.6\/site-packages\/urllib3\/connection.py in connect(self)\r\n 307 # Add certificate verification\r\n--> 308 conn = self._new_conn()\r\n 309 hostname = self.host\r\n\r\n~\/.conda\/envs\/py36\/lib\/python3.6\/site-packages\/urllib3\/connection.py in _new_conn(self)\r\n 171 raise NewConnectionError(\r\n--> 172 self, \"Failed to establish a new connection: %s\" % e\r\n 173 )\r\n\r\nNewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nMaxRetryError Traceback (most recent call last)\r\n~\/.conda\/envs\/py36\/lib\/python3.6\/site-packages\/requests\/adapters.py in send(self, request, stream, timeout, verify, cert, proxies)\r\n 448 retries=self.max_retries,\r\n--> 449 timeout=timeout\r\n 450 )\r\n\r\n~\/.conda\/envs\/py36\/lib\/python3.6\/site-packages\/urllib3\/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)\r\n 724 retries = retries.increment(\r\n--> 725 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]\r\n 726 
)\r\n\r\n~\/.conda\/envs\/py36\/lib\/python3.6\/site-packages\/urllib3\/util\/retry.py in increment(self, method, url, response, error, _pool, _stacktrace)\r\n 438 if new_retry.is_exhausted():\r\n--> 439 raise MaxRetryError(_pool, url, error or ResponseError(cause))\r\n 440 \r\n\r\nMaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: \/huggingface\/datasets\/1.1.2\/datasets\/csv\/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',))\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nConnectionError Traceback (most recent call last)\r\n<ipython-input-20-18cc3eb4a049> in <module>\r\n 1 import requests\r\n 2 \r\n----> 3 requests.head(\"https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.2\/datasets\/csv\/csv.py\")\r\n\r\n~\/.conda\/envs\/py36\/lib\/python3.6\/site-packages\/requests\/api.py in head(url, **kwargs)\r\n 102 \r\n 103 kwargs.setdefault('allow_redirects', False)\r\n--> 104 return request('head', url, **kwargs)\r\n 105 \r\n 106 \r\n\r\n~\/.conda\/envs\/py36\/lib\/python3.6\/site-packages\/requests\/api.py in request(method, url, **kwargs)\r\n 59 # cases, and look like a memory leak in others.\r\n 60 with sessions.Session() as session:\r\n---> 61 return session.request(method=method, url=url, **kwargs)\r\n 62 \r\n 63 \r\n\r\n~\/.conda\/envs\/py36\/lib\/python3.6\/site-packages\/requests\/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)\r\n 528 }\r\n 529 send_kwargs.update(settings)\r\n--> 530 resp = self.send(prep, **send_kwargs)\r\n 531 \r\n 532 return resp\r\n\r\n~\/.conda\/envs\/py36\/lib\/python3.6\/site-packages\/requests\/sessions.py in send(self, request, **kwargs)\r\n 641 \r\n 642 # Send the request\r\n--> 643 r = adapter.send(request, **kwargs)\r\n 644 \r\n 645 # Total elapsed time of the request (approximately)\r\n\r\n~\/.conda\/envs\/py36\/lib\/python3.6\/site-packages\/requests\/adapters.py in send(self, request, stream, timeout, verify, cert, proxies)\r\n 514 raise SSLError(e, request=request)\r\n 515 \r\n--> 516 raise ConnectionError(e, request=request)\r\n 517 \r\n 518 except ClosedPoolError as e:\r\n\r\nConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: \/huggingface\/datasets\/1.1.2\/datasets\/csv\/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',))\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/807\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/806","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/806\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/806\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/806\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/806","id":737215430,"node_id":"MDU6SXNzdWU3MzcyMTU0MzA=","number":806,"title":"Quail dataset urls are out of 
date","user":{"login":"ngdodd","id":4889636,"node_id":"MDQ6VXNlcjQ4ODk2MzY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4889636?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ngdodd","html_url":"https:\/\/github.com\/ngdodd","followers_url":"https:\/\/api.github.com\/users\/ngdodd\/followers","following_url":"https:\/\/api.github.com\/users\/ngdodd\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ngdodd\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ngdodd\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ngdodd\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ngdodd\/orgs","repos_url":"https:\/\/api.github.com\/users\/ngdodd\/repos","events_url":"https:\/\/api.github.com\/users\/ngdodd\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ngdodd\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Thanks for reporting.\r\nWe should fix the urls and use quail 1.3.\r\nIf you want to contribute feel free to fix the urls and open a PR :) ","Done! PR [https:\/\/github.com\/huggingface\/datasets\/pull\/820](https:\/\/github.com\/huggingface\/datasets\/pull\/820)\r\n\r\nUpdated links and also regenerated the metadata and dummy data for v1.3 in order to pass verifications as described here: [https:\/\/huggingface.co\/docs\/datasets\/share_dataset.html#adding-tests-and-metadata-to-the-dataset](https:\/\/huggingface.co\/docs\/datasets\/share_dataset.html#adding-tests-and-metadata-to-the-dataset). ","Closing since #820 is merged.\r\nThanks again for fixing the urls :)"],"created_at":1604605219000,"updated_at":1605016971000,"closed_at":1605016971000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"<h3>Code<\/h3>\r\n\r\n```\r\nfrom datasets import load_dataset\r\nquail = load_dataset('quail')\r\n```\r\n\r\n<h3>Error<\/h3>\r\n\r\n```\r\nFileNotFoundError: Couldn't find file at https:\/\/raw.githubusercontent.com\/text-machine-lab\/quail\/master\/quail_v1.2\/xml\/ordered\/quail_1.2_train.xml\r\n```\r\n\r\n\r\nAs per [quail v1.3 commit](https:\/\/github.com\/text-machine-lab\/quail\/commit\/506501cfa34d9ec6c042d31026ba6fea6bcec8ff) it looks like the location and suggested ordering has changed. 
In [https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/quail\/quail.py#L52-L58](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/quail\/quail.py#L52-L58) the quail v1.2 datasets are being pointed to, which don't exist anymore.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/806\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/805","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/805\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/805\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/805\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/805","id":737019360,"node_id":"MDU6SXNzdWU3MzcwMTkzNjA=","number":805,"title":"On loading a metric from datasets, I get the following error","user":{"login":"laibamehnaz","id":36405283,"node_id":"MDQ6VXNlcjM2NDA1Mjgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36405283?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/laibamehnaz","html_url":"https:\/\/github.com\/laibamehnaz","followers_url":"https:\/\/api.github.com\/users\/laibamehnaz\/followers","following_url":"https:\/\/api.github.com\/users\/laibamehnaz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/laibamehnaz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/laibamehnaz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/laibamehnaz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/laibamehnaz\/orgs","repos_url":"https:\/\/api.github.com\/users\/laibamehnaz\/repos","events_url":"https:\/\/api.github.com\/users\/laibamehnaz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/laibamehnaz\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! We support only pyarrow > 0.17.1 so that we have access to the `PyExtensionType` object.\r\nCould you update pyarrow and try again ?\r\n```\r\npip install --upgrade pyarrow\r\n```"],"created_at":1604589278000,"updated_at":1604913155000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"`from datasets import load_metric`\r\n\r\n`metric = load_metric('bleurt')`\r\n\r\nTraceback:\r\n210 class _ArrayXDExtensionType(pa.PyExtensionType):\r\n 211 \r\n 212 ndims: int = None\r\n\r\nAttributeError: module 'pyarrow' has no attribute 'PyExtensionType'\r\n\r\nAny help will be appreciated. Thank you. 
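A quick way to confirm that an outdated pyarrow is the cause (a minimal sketch; the `> 0.17.1` requirement comes from the comment above):

```python
import pyarrow as pa

# datasets needs a pyarrow recent enough to expose PyExtensionType (> 0.17.1 per the comment above)
print(pa.__version__)
print(hasattr(pa, "PyExtensionType"))  # False on old versions; upgrade with: pip install --upgrade pyarrow
```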
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/805\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/804","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/804\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/804\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/804\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/804","id":736858507,"node_id":"MDU6SXNzdWU3MzY4NTg1MDc=","number":804,"title":"Empty output\/answer in TriviaQA test set (both in 'kilt_tasks' and 'trivia_qa')","user":{"login":"PaulLerner","id":25532159,"node_id":"MDQ6VXNlcjI1NTMyMTU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25532159?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/PaulLerner","html_url":"https:\/\/github.com\/PaulLerner","followers_url":"https:\/\/api.github.com\/users\/PaulLerner\/followers","following_url":"https:\/\/api.github.com\/users\/PaulLerner\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/PaulLerner\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/PaulLerner\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/PaulLerner\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/PaulLerner\/orgs","repos_url":"https:\/\/api.github.com\/users\/PaulLerner\/repos","events_url":"https:\/\/api.github.com\/users\/PaulLerner\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/PaulLerner\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["cc @yjernite is this expected ?","Yes: TriviaQA has a private test set for the leaderboard [here](https:\/\/competitions.codalab.org\/competitions\/17208)\r\n\r\nFor the KILT training and validation portions, you need to link the examples from the TriviaQA dataset as detailed here:\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/kilt_tasks\/README.md","Oh ok, I guess I read the paper too fast \ud83d\ude05, thank you for your answer!"],"created_at":1604576281000,"updated_at":1604931299000,"closed_at":1604931298000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"# The issue\r\n\r\nIt's all in the title, it appears to be fine on the train and validation sets.\r\n\r\nIs there some kind of mapping to do like for the questions (see https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/kilt_tasks\/README.md) ? 
\r\n\r\n# How to reproduce\r\n```py\r\nfrom datasets import load_dataset\r\nkilt_tasks = load_dataset(\"kilt_tasks\")\r\ntrivia_qa = load_dataset('trivia_qa', 'unfiltered.nocontext')\r\n# both in \"kilt_tasks\"\r\nIn [18]: any([output['answer'] for output in kilt_tasks['test_triviaqa']['output']]) \r\nOut[18]: False\r\n# and \"trivia_qa\"\r\nIn [13]: all([answer['value'] == '<unk>' for answer in trivia_qa['test']['answer']]) \r\nOut[13]: True\r\n# appears to be fine on the train and validation sets.\r\nIn [14]: all([answer['value'] == '<unk>' for answer in trivia_qa['train']['answer']]) \r\nOut[14]: False\r\n\r\nIn [15]: all([answer['value'] == '<unk>' for answer in trivia_qa['validation']['answer']]) \r\nOut[15]: False\r\n\r\nIn [16]: any([output['answer'] for output in kilt_tasks['train_triviaqa']['output']]) \r\nOut[16]: True\r\n\r\nIn [17]: any([output['answer'] for output in kilt_tasks['validation_triviaqa']['output']]) \r\nOut[17]: True\r\n\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/804\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/803","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/803\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/803\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/803\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/803","id":736818917,"node_id":"MDExOlB1bGxSZXF1ZXN0NTE1OTY1ODE2","number":803,"title":"fix: typos in tutorial to map KILT and TriviaQA","user":{"login":"PaulLerner","id":25532159,"node_id":"MDQ6VXNlcjI1NTMyMTU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25532159?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/PaulLerner","html_url":"https:\/\/github.com\/PaulLerner","followers_url":"https:\/\/api.github.com\/users\/PaulLerner\/followers","following_url":"https:\/\/api.github.com\/users\/PaulLerner\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/PaulLerner\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/PaulLerner\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/PaulLerner\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/PaulLerner\/orgs","repos_url":"https:\/\/api.github.com\/users\/PaulLerner\/repos","events_url":"https:\/\/api.github.com\/users\/PaulLerner\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/PaulLerner\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1604572920000,"updated_at":1604999287000,"closed_at":1604999287000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/803","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/803","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/803.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/803.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/803\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/802","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/802\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/802\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/802\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/802","id":736296343,"node_id":"MDExOlB1bGxSZXF1ZXN0NTE1NTM1MDI0","number":802,"title":"Add XGlue","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Really cool to add XGlue, this will be a nice addition !\r\n\r\nSplits shouldn't depend on the language. There must be configurations for each language, as we're doing for xnli, xtreme, etc.\r\nFor example for XGlue we'll have these configurations: NER.de, NER.en etc."],"created_at":1604510994000,"updated_at":1606838308000,"closed_at":1606838307000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/802","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/802","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/802.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/802.patch"},"body":"Dataset is ready to merge. An important feature of this dataset is that for each config the train data is in English, while dev and test data are in multiple languages. Therefore, @lhoestq and I decided offline that we will give the dataset the following API, *e.g.* for \r\n\r\n```python\r\nload_dataset(\"xglue\", \"ner\") # would give the splits 'train', 'validation.en', 'test.en', 'validation.es', 'test.es', ... 
\r\n```\r\n=> therefore one can load a single language test via\r\n\r\n```python\r\nload_dataset(\"xglue\", \"ner\", split=\"test.es\")\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/802\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/801","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/801\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/801\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/801\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/801","id":735790876,"node_id":"MDU6SXNzdWU3MzU3OTA4NzY=","number":801,"title":"How to join two datasets?","user":{"login":"shangw-nvidia","id":66387198,"node_id":"MDQ6VXNlcjY2Mzg3MTk4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/66387198?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/shangw-nvidia","html_url":"https:\/\/github.com\/shangw-nvidia","followers_url":"https:\/\/api.github.com\/users\/shangw-nvidia\/followers","following_url":"https:\/\/api.github.com\/users\/shangw-nvidia\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/shangw-nvidia\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/shangw-nvidia\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/shangw-nvidia\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/shangw-nvidia\/orgs","repos_url":"https:\/\/api.github.com\/users\/shangw-nvidia\/repos","events_url":"https:\/\/api.github.com\/users\/shangw-nvidia\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/shangw-nvidia\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi this is also my question. thanks ","Hi ! Currently the only way to add new fields to a dataset is by using `.map` and picking items from the other dataset\r\n","Closing this one. Feel free to re-open if you have other questions about this issue.\r\n\r\nAlso linking another discussion about joining datasets: #853 "],"created_at":1604461991000,"updated_at":1608732178000,"closed_at":1608732178000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\n\r\nI'm wondering if it's possible to join two (preprocessed) datasets with the same number of rows but different labels? 
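A minimal sketch of the `.map`-based join suggested in the comments above (toy data and hypothetical column names, not an official join API):

```python
from datasets import Dataset

# Two toy datasets with the same number of rows but different columns (hypothetical names)
ds_a = Dataset.from_dict({"text": ["a", "b", "c"], "label_a": [0, 1, 0]})
ds_b = Dataset.from_dict({"label_b": [1, 1, 0]})

# "using .map and picking items from the other dataset": copy ds_b's column into ds_a row by row
joined = ds_a.map(lambda example, idx: {"label_b": ds_b[idx]["label_b"]}, with_indices=True)
print(joined.column_names)  # ['text', 'label_a', 'label_b']
```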
\r\n\r\nI'm currently trying to create paired sentences for BERT from `wikipedia\/'20200501.en`, and I couldn't figure out a way to create a paired sentence using `.map()` where the second sentence is **not** the next sentence (i.e., from a different article) of the first sentence.\r\n\r\nThanks!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/801\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/800","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/800\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/800\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/800\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/800","id":735772775,"node_id":"MDExOlB1bGxSZXF1ZXN0NTE1MTAyMjc3","number":800,"title":"Update loading_metrics.rst","user":{"login":"ayushidalmia","id":5400513,"node_id":"MDQ6VXNlcjU0MDA1MTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5400513?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ayushidalmia","html_url":"https:\/\/github.com\/ayushidalmia","followers_url":"https:\/\/api.github.com\/users\/ayushidalmia\/followers","following_url":"https:\/\/api.github.com\/users\/ayushidalmia\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ayushidalmia\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ayushidalmia\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ayushidalmia\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ayushidalmia\/orgs","repos_url":"https:\/\/api.github.com\/users\/ayushidalmia\/repos","events_url":"https:\/\/api.github.com\/users\/ayushidalmia\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ayushidalmia\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1604458631000,"updated_at":1605108512000,"closed_at":1605108512000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/800","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/800","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/800.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/800.patch"},"body":"Minor bug","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/800\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/799","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/799\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/799\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/799\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/799","id":735551165,"node_id":"MDExOlB1bGxSZXF1ZXN0NTE0OTIzNDMx","number":799,"title":"switch amazon reviews class label 
order","user":{"login":"joeddav","id":9353833,"node_id":"MDQ6VXNlcjkzNTM4MzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9353833?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/joeddav","html_url":"https:\/\/github.com\/joeddav","followers_url":"https:\/\/api.github.com\/users\/joeddav\/followers","following_url":"https:\/\/api.github.com\/users\/joeddav\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/joeddav\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/joeddav\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/joeddav\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/joeddav\/orgs","repos_url":"https:\/\/api.github.com\/users\/joeddav\/repos","events_url":"https:\/\/api.github.com\/users\/joeddav\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/joeddav\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1604428738000,"updated_at":1604429054000,"closed_at":1604429050000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/799","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/799","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/799.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/799.patch"},"body":"Switches the label order to be more intuitive for amazon reviews, #791.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/799\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/798","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/798\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/798\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/798\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/798","id":735518805,"node_id":"MDU6SXNzdWU3MzU1MTg4MDU=","number":798,"title":"Cannot load TREC dataset: ConnectionError","user":{"login":"kaletap","id":25740957,"node_id":"MDQ6VXNlcjI1NzQwOTU3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25740957?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/kaletap","html_url":"https:\/\/github.com\/kaletap","followers_url":"https:\/\/api.github.com\/users\/kaletap\/followers","following_url":"https:\/\/api.github.com\/users\/kaletap\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/kaletap\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/kaletap\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/kaletap\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/kaletap\/orgs","repos_url":"https:\/\/api.github.com\/users\/kaletap\/repos","events_url":"https:\/\/api.github.com\/users\/kaletap\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/kaletap\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset 
bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Indeed there's an issue with those links.\r\nWe should probably use the target urls of the redirections instead","Hi, the same issue here, could you tell me how to download it through datasets? thanks ","Same issue. ","Actually it's already fixed on the master branch since #740 \r\nI'll do the 1.1.3 release soon","Hi\nthanks, but I did tried to install from the pip install git+... and it does\nnot work for me,. thanks for the help. I have the same issue with wmt16,\n\"ro-en\"\nthanks.\nBest\nRabeeh\n\nOn Mon, Nov 16, 2020 at 10:29 AM Quentin Lhoest <notifications@github.com>\nwrote:\n\n> Actually it's already fixed on the master branch since #740\n> <https:\/\/github.com\/huggingface\/datasets\/pull\/740>\n> I'll do the 1.1.3 release soon\n>\n> \u2014\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https:\/\/github.com\/huggingface\/datasets\/issues\/798#issuecomment-727854736>,\n> or unsubscribe\n> <https:\/\/github.com\/notifications\/unsubscribe-auth\/ABP4ZCEUBJKPOCLABXCKMPDSQDWH3ANCNFSM4TJBUKSA>\n> .\n>\n","I just tested on google colab using\r\n```python\r\n!pip install git+https:\/\/github.com\/huggingface\/datasets.git\r\nfrom datasets import load_dataset\r\nload_dataset(\"trec\")\r\n```\r\nand it works.\r\nCan you detail how you got the issue even when using the latest version on master ?\r\n\r\nAlso about wmt we'll look into it, thanks for reporting !"],"created_at":1604425522000,"updated_at":1605521905000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Problem\r\nI cannot load \"trec\" dataset, it results with ConnectionError as shown below. I've tried on both Google Colab and locally. \r\n* `requests.head('http:\/\/cogcomp.org\/Data\/QA\/QC\/train_5500.label')` returns <Response [302]>. \r\n* `requests.head('http:\/\/cogcomp.org\/Data\/QA\/QC\/train_5500.label', allow_redirects=True)` raises `requests.exceptions.TooManyRedirects: Exceeded 30 redirects.`\r\n* Opening `http:\/\/cogcomp.org\/Data\/QA\/QC\/train_5500.label' in a browser works, but opens a different address\r\n* Increasing max_redirects to 100 doesn't help\r\n\r\nAlso, while debugging I've seen that requesting 'https:\/\/storage.googleapis.com\/huggingface-nlp\/cache\/datasets\/trec\/default\/1.1.0\/dataset_info.json' returns <Response [404]> before, but it doesn't raise any errors. 
Not sure if that's relevant.\r\n\r\n* datasets.__version__ == '1.1.2'\r\n* requests.__version__ == '2.24.0'\r\n\r\n## Error trace\r\n```\r\n>>> import datasets\r\n>>> datasets.__version__\r\n'1.1.2'\r\n>>> dataset = load_dataset(\"trec\", split=\"train\")\r\nUsing custom data configuration default\r\nDownloading and preparing dataset trec\/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to \/home\/przemyslaw\/.cache\/huggingface\/datasets\/trec\/default\/1.1.0\/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7...\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/home\/przemyslaw\/.local\/lib\/python3.6\/site-packages\/datasets\/load.py\", line 611, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"\/home\/przemyslaw\/.local\/lib\/python3.6\/site-packages\/datasets\/builder.py\", line 476, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/home\/przemyslaw\/.local\/lib\/python3.6\/site-packages\/datasets\/builder.py\", line 531, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"\/home\/przemyslaw\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/trec\/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7\/trec.py\", line 140, in _split_generators\r\n dl_files = dl_manager.download_and_extract(_URLs)\r\n File \"\/home\/przemyslaw\/.local\/lib\/python3.6\/site-packages\/datasets\/utils\/download_manager.py\", line 254, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"\/home\/przemyslaw\/.local\/lib\/python3.6\/site-packages\/datasets\/utils\/download_manager.py\", line 179, in download\r\n num_proc=download_config.num_proc,\r\n File \"\/home\/przemyslaw\/.local\/lib\/python3.6\/site-packages\/datasets\/utils\/py_utils.py\", line 225, in map_nested\r\n _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)\r\n File \"\/home\/przemyslaw\/.local\/lib\/python3.6\/site-packages\/datasets\/utils\/py_utils.py\", line 225, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)\r\n File \"\/home\/przemyslaw\/.local\/lib\/python3.6\/site-packages\/datasets\/utils\/py_utils.py\", line 163, in _single_map_nested\r\n return function(data_struct)\r\n File \"\/home\/przemyslaw\/.local\/lib\/python3.6\/site-packages\/datasets\/utils\/file_utils.py\", line 308, in cached_path\r\n use_etag=download_config.use_etag,\r\n File \"\/home\/przemyslaw\/.local\/lib\/python3.6\/site-packages\/datasets\/utils\/file_utils.py\", line 475, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach http:\/\/cogcomp.org\/Data\/QA\/QC\/train_5500.label\r\n```\r\n\r\nI would appreciate some suggestions here. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/798\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/797","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/797\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/797\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/797\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/797","id":735420332,"node_id":"MDU6SXNzdWU3MzU0MjAzMzI=","number":797,"title":"Token classification labels are strings and we don't have the list of labels","user":{"login":"sgugger","id":35901082,"node_id":"MDQ6VXNlcjM1OTAxMDgy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35901082?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sgugger","html_url":"https:\/\/github.com\/sgugger","followers_url":"https:\/\/api.github.com\/users\/sgugger\/followers","following_url":"https:\/\/api.github.com\/users\/sgugger\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sgugger\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sgugger\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sgugger\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sgugger\/orgs","repos_url":"https:\/\/api.github.com\/users\/sgugger\/repos","events_url":"https:\/\/api.github.com\/users\/sgugger\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sgugger\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"},{"id":2067401494,"node_id":"MDU6TGFiZWwyMDY3NDAxNDk0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/Dataset%20discussion","name":"Dataset discussion","color":"72f99f","default":false,"description":"Discussions on the datasets"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Indeed. Pinging @stefan-it here if he want to give an expert opinion :)","Related is https:\/\/github.com\/huggingface\/datasets\/pull\/636","Should definitely be a ClassLabel \ud83d\udc4d "],"created_at":1604417610000,"updated_at":1605017231000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"Not sure if this is an issue we want to fix or not, putting it here so it's not forgotten. Right now, in token classification datasets, the labels for NER, POS and the likes are typed as `Sequence` of `strings`, which is wrong in my opinion. 
These should be `Sequence` of `ClassLabel` or some types that gives easy access to the underlying labels.\r\n\r\nThe main problem for preprocessing those datasets is that the list of possible labels is not stored inside the `Dataset` object which makes converting the labels to IDs quite difficult (you either have to know the list of labels in advance or run a full pass through the dataset to get the list of labels, the `unique` method being useless with the type `Sequence[str]`).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/797\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/796","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/796\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/796\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/796\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/796","id":735414881,"node_id":"MDU6SXNzdWU3MzU0MTQ4ODE=","number":796,"title":"Seq2Seq Metrics QOL: Bleu, Rouge","user":{"login":"sshleifer","id":6045025,"node_id":"MDQ6VXNlcjYwNDUwMjU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6045025?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sshleifer","html_url":"https:\/\/github.com\/sshleifer","followers_url":"https:\/\/api.github.com\/users\/sshleifer\/followers","following_url":"https:\/\/api.github.com\/users\/sshleifer\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sshleifer\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sshleifer\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sshleifer\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sshleifer\/orgs","repos_url":"https:\/\/api.github.com\/users\/sshleifer\/repos","events_url":"https:\/\/api.github.com\/users\/sshleifer\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sshleifer\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! 
Thanks for letting us know your experience :) \r\nWe should at least improve the error messages indeed","So what is the right way to add a batch to compute BLEU?","prediction = [['Hey', 'how', 'are', 'you', '?']] \r\nreference=[['Hey', 'how', 'are', 'you', '?']]\r\nbleu.compute(predictions=prediction,references=reference)\r\n\r\nalso tried this kind of things lol\r\nI definitely need help too","Hi !\r\n\r\nAs described in the documentation for `bleu`:\r\n```\r\nArgs:\r\n predictions: list of translations to score.\r\n Each translation should be tokenized into a list of tokens.\r\n references: list of lists of references for each translation.\r\n Each reference should be tokenized into a list of tokens.\r\n```\r\n\r\nTherefore you can use this metric this way:\r\n```python\r\nfrom datasets import load_metric\r\n\r\npredictions = [\r\n [\"hello\", \"there\", \"general\", \"kenobi\"], # tokenized prediction of the first sample\r\n [\"foo\", \"bar\", \"foobar\"] # tokenized prediction of the second sample\r\n]\r\nreferences = [\r\n [[\"hello\", \"there\", \"general\", \"kenobi\"], [\"hello\", \"there\", \"!\"]], # tokenized references for the first sample (2 references)\r\n [[\"foo\", \"bar\", \"foobar\"]] # tokenized references for the second sample (1 reference)\r\n]\r\n\r\nbleu = load_metric(\"bleu\")\r\nbleu.compute(predictions=predictions, references=references)\r\n# Or you can also add batches before calling compute()\r\n# bleu.add_batch(predictions=predictions, references=references)\r\n# bleu.compute()\r\n```\r\n\r\nHope this helps :)"],"created_at":1604417189000,"updated_at":1611843228000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"Putting all my QOL issues here, idt I will have time to propose fixes, but I didn't want these to be lost, in case they are useful. I tried using `rouge` and `bleu` for the first time and wrote down everything I didn't immediately understand:\r\n\r\n+ Bleu expects tokenization, can I just kwarg it like sacrebleu?\r\n+ different signatures, means that I would have had to add a lot of conditionals + pre and post processing: if I were going to replace the `calculate_rouge` and `calculate_bleu` functions here: https:\/\/github.com\/huggingface\/transformers\/blob\/master\/examples\/seq2seq\/utils.py#L61\r\n\r\n\r\n#### What I tried\r\n\r\n\r\nRouge experience:\r\n```python\r\n\r\nrouge = load_metric('rouge')\r\nrouge.add_batch(['hi im sam'], ['im daniel']) # fails\r\nrouge.add_batch(predictions=['hi im sam'], references=['im daniel']) # works\r\nrouge.compute() # huge messy output, but reasonable. 
Not worth integrating b\/c don't want to rewrite all the postprocessing.\r\n```\r\n\r\nBLEU experience:\r\n```python\r\nbleu = load_metric('bleu')\r\nbleu.add_batch(predictions=['hi im sam'], references=['im daniel'])\r\nbleu.add_batch(predictions=[['hi im sam']], references=[['im daniel']])\r\n\r\nbleu.add_batch(predictions=[['hi im sam']], references=[['im daniel']])\r\n```\r\nAll of these raise `ValueError: Got a string but expected a list instead: 'im daniel'`\r\n\r\n#### Doc Typo\r\nThis says `dataset=load_metric(...)` which seems wrong, will cause `NameError`\r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/6045025\/98004483-ff0d0580-1dbd-11eb-9f35-6f35904611bb.png)\r\n\r\ncc @lhoestq, feel free to ignore.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/796\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/795","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/795\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/795\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/795\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/795","id":735198265,"node_id":"MDU6SXNzdWU3MzUxOTgyNjU=","number":795,"title":"Descriptions of raw and processed versions of wikitext are inverted","user":{"login":"fraboniface","id":16835358,"node_id":"MDQ6VXNlcjE2ODM1MzU4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16835358?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/fraboniface","html_url":"https:\/\/github.com\/fraboniface","followers_url":"https:\/\/api.github.com\/users\/fraboniface\/followers","following_url":"https:\/\/api.github.com\/users\/fraboniface\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/fraboniface\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/fraboniface\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/fraboniface\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/fraboniface\/orgs","repos_url":"https:\/\/api.github.com\/users\/fraboniface\/repos","events_url":"https:\/\/api.github.com\/users\/fraboniface\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/fraboniface\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Yes indeed ! Thanks for reporting"],"created_at":1604399091000,"updated_at":1605017145000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Nothing of importance, but it looks like the descriptions of wikitext-n-v1 and wikitext-n-raw-v1 are inverted for both n=2 and n=103. 
I just verified by loading them and the `<unk>` tokens are present in the non-raw versions, which confirms that it's a mere inversion of the descriptions and not of the datasets themselves.\r\n\r\nAlso it would be nice if those descriptions appeared in the dataset explorer.\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/87bd0864845ea0a1dd7167918dc5f341bf807bd3\/datasets\/wikitext\/wikitext.py#L52","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/795\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/794","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/794\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/794\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/794\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/794","id":735158725,"node_id":"MDU6SXNzdWU3MzUxNTg3MjU=","number":794,"title":"self.options cannot be converted to a Python object for pickling","user":{"login":"hzqjyyx","id":9635713,"node_id":"MDQ6VXNlcjk2MzU3MTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9635713?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hzqjyyx","html_url":"https:\/\/github.com\/hzqjyyx","followers_url":"https:\/\/api.github.com\/users\/hzqjyyx\/followers","following_url":"https:\/\/api.github.com\/users\/hzqjyyx\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hzqjyyx\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hzqjyyx\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hzqjyyx\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hzqjyyx\/orgs","repos_url":"https:\/\/api.github.com\/users\/hzqjyyx\/repos","events_url":"https:\/\/api.github.com\/users\/hzqjyyx\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hzqjyyx\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Thanks for reporting that's a bug on master indeed.\r\nWe'll fix that soon"],"created_at":1604395654000,"updated_at":1605807338000,"closed_at":1605807338000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\n\r\nCurrently I am trying to load csv file with customized read_options. And the latest master seems broken if we pass the ReadOptions object.\r\n\r\nHere is a code snippet\r\n```python\r\nfrom datasets import load_dataset\r\nfrom pyarrow.csv import ReadOptions\r\nload_dataset(\"csv\", data_files=[\"out.csv\"], read_options=ReadOptions(block_size=16*1024*1024))\r\n```\r\nerror is `self.options cannot be converted to a Python object for pickling`\r\nWould you mind to take a look? 
Thanks!\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-28-ab83fec2ded4> in <module>\r\n----> 1 load_dataset(\"csv\", data_files=[\"out.csv\"], read_options=ReadOptions(block_size=16*1024*1024))\r\n\r\n\/tmp\/datasets\/src\/datasets\/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)\r\n 602 hash=hash,\r\n 603 features=features,\r\n--> 604 **config_kwargs,\r\n 605 )\r\n 606 \r\n\r\n\/tmp\/datasets\/src\/datasets\/builder.py in __init__(self, cache_dir, name, hash, features, **config_kwargs)\r\n 162 name,\r\n 163 custom_features=features,\r\n--> 164 **config_kwargs,\r\n 165 )\r\n 166 \r\n\r\n\/tmp\/datasets\/src\/datasets\/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs)\r\n 281 )\r\n 282 else:\r\n--> 283 suffix = Hasher.hash(config_kwargs_to_add_to_suffix)\r\n 284 \r\n 285 if builder_config.data_files is not None:\r\n\r\n\/tmp\/datasets\/src\/datasets\/fingerprint.py in hash(cls, value)\r\n 51 return cls.dispatch[type(value)](cls, value)\r\n 52 else:\r\n---> 53 return cls.hash_default(value)\r\n 54 \r\n 55 def update(self, value):\r\n\r\n\/tmp\/datasets\/src\/datasets\/fingerprint.py in hash_default(cls, value)\r\n 44 @classmethod\r\n 45 def hash_default(cls, value):\r\n---> 46 return cls.hash_bytes(dumps(value))\r\n 47 \r\n 48 @classmethod\r\n\r\n\/tmp\/datasets\/src\/datasets\/utils\/py_utils.py in dumps(obj)\r\n 365 file = StringIO()\r\n 366 with _no_cache_fields(obj):\r\n--> 367 dump(obj, file)\r\n 368 return file.getvalue()\r\n 369 \r\n\r\n\/tmp\/datasets\/src\/datasets\/utils\/py_utils.py in dump(obj, file)\r\n 337 def dump(obj, file):\r\n 338 \"\"\"pickle an object to a file\"\"\"\r\n--> 339 Pickler(file, recurse=True).dump(obj)\r\n 340 return\r\n 341 \r\n\r\n~\/.local\/lib\/python3.6\/site-packages\/dill\/_dill.py in dump(self, obj)\r\n 444 raise PicklingError(msg)\r\n 445 else:\r\n--> 446 StockPickler.dump(self, obj)\r\n 447 stack.clear() # clear record of 'recursion-sensitive' pickled objects\r\n 448 return\r\n\r\n\/usr\/lib\/python3.6\/pickle.py in dump(self, obj)\r\n 407 if self.proto >= 4:\r\n 408 self.framer.start_framing()\r\n--> 409 self.save(obj)\r\n 410 self.write(STOP)\r\n 411 self.framer.end_framing()\r\n\r\n\/usr\/lib\/python3.6\/pickle.py in save(self, obj, save_persistent_id)\r\n 474 f = self.dispatch.get(t)\r\n 475 if f is not None:\r\n--> 476 f(self, obj) # Call unbound method with explicit self\r\n 477 return\r\n 478 \r\n\r\n~\/.local\/lib\/python3.6\/site-packages\/dill\/_dill.py in save_module_dict(pickler, obj)\r\n 931 # we only care about session the first pass thru\r\n 932 pickler._session = False\r\n--> 933 StockPickler.save_dict(pickler, obj)\r\n 934 log.info(\"# D2\")\r\n 935 return\r\n\r\n\/usr\/lib\/python3.6\/pickle.py in save_dict(self, obj)\r\n 819 \r\n 820 self.memoize(obj)\r\n--> 821 self._batch_setitems(obj.items())\r\n 822 \r\n 823 dispatch[dict] = save_dict\r\n\r\n\/usr\/lib\/python3.6\/pickle.py in _batch_setitems(self, items)\r\n 850 k, v = tmp[0]\r\n 851 save(k)\r\n--> 852 save(v)\r\n 853 write(SETITEM)\r\n 854 # else tmp is empty, and we're done\r\n\r\n\/usr\/lib\/python3.6\/pickle.py in save(self, obj, save_persistent_id)\r\n 494 reduce = getattr(obj, \"__reduce_ex__\", None)\r\n 495 if reduce is not None:\r\n--> 496 rv = reduce(self.proto)\r\n 497 else:\r\n 498 reduce = 
getattr(obj, \"__reduce__\", None)\r\n\r\n~\/.local\/lib\/python3.6\/site-packages\/pyarrow\/_csv.cpython-36m-x86_64-linux-gnu.so in pyarrow._csv.ReadOptions.__reduce_cython__()\r\n\r\nTypeError: self.options cannot be converted to a Python object for pickling\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/794\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/793","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/793\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/793\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/793\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/793","id":735105907,"node_id":"MDExOlB1bGxSZXF1ZXN0NTE0NTU2NzY5","number":793,"title":"[Datasets] fix discofuse links","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1604390625000,"updated_at":1604391401000,"closed_at":1604391400000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/793","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/793","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/793.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/793.patch"},"body":"The discofuse links were changed: https:\/\/github.com\/google-research-datasets\/discofuse\/commit\/d27641016eb5b3eb2af03c7415cfbb2cbebe8558. 
\r\nThe old links are broken\r\n\r\nI changed the links and created the new dataset_infos.json.\r\n\r\nPinging @thomwolf @lhoestq for notification.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/793\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/792","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/792\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/792\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/792\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/792","id":734693652,"node_id":"MDU6SXNzdWU3MzQ2OTM2NTI=","number":792,"title":"KILT dataset: empty string in triviaqa input field","user":{"login":"PaulLerner","id":25532159,"node_id":"MDQ6VXNlcjI1NTMyMTU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25532159?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/PaulLerner","html_url":"https:\/\/github.com\/PaulLerner","followers_url":"https:\/\/api.github.com\/users\/PaulLerner\/followers","following_url":"https:\/\/api.github.com\/users\/PaulLerner\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/PaulLerner\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/PaulLerner\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/PaulLerner\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/PaulLerner\/orgs","repos_url":"https:\/\/api.github.com\/users\/PaulLerner\/repos","events_url":"https:\/\/api.github.com\/users\/PaulLerner\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/PaulLerner\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Just found out about https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/kilt_tasks\/README.md\r\n(Not very clear in https:\/\/huggingface.co\/datasets\/kilt_tasks links to http:\/\/github.com\/huggingface\/datasets\/datasets\/kilt_tasks\/README.md which is dead, closing the issue though :))"],"created_at":1604338434000,"updated_at":1604572499000,"closed_at":1604572499000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"# What happened\r\nBoth train and test splits of the triviaqa dataset (part of the KILT benchmark) seem to have empty string in their input field (unlike the natural questions dataset, part of the same benchmark)\r\n\r\n# Versions\r\nKILT version is `1.0.0`\r\n`datasets` version is `1.1.2`\r\n[more here](https:\/\/gist.github.com\/PaulLerner\/3768c8d25f723edbac20d99b6a4056c1)\r\n\r\n# How to reproduce\r\n```py\r\nIn [1]: from datasets import load_dataset\r\nIn [4]: dataset = load_dataset(\"kilt_tasks\") \r\n# everything works fine, removed output for a better readibility\r\nDataset kilt_tasks downloaded and prepared to \/people\/lerner\/.cache\/huggingface\/datasets\/kilt_tasks\/all_tasks\/1.0.0\/821c4295a2c35db2847585918d9c47d7f028f1a26b78825d8e77cd3aeb2621a1. 
Subsequent calls will reuse this data.\r\n\r\n# empty string in triviaqa input field\r\nIn [36]: dataset['train_triviaqa'][0] \r\nOut[36]: \r\n{'id': 'dpql_5197',\r\n 'input': '',\r\n 'meta': {'left_context': '',\r\n 'mention': '',\r\n 'obj_surface': {'text': []},\r\n 'partial_evidence': {'end_paragraph_id': [],\r\n 'meta': [],\r\n 'section': [],\r\n 'start_paragraph_id': [],\r\n 'title': [],\r\n 'wikipedia_id': []},\r\n 'right_context': '',\r\n 'sub_surface': {'text': []},\r\n 'subj_aliases': {'text': []},\r\n 'template_questions': {'text': []}},\r\n 'output': {'answer': ['five \u00a3', '5 \u00a3', '\u00a35', 'five \u00a3'],\r\n 'meta': [],\r\n 'provenance': [{'bleu_score': [1.0],\r\n 'end_character': [248],\r\n 'end_paragraph_id': [30],\r\n 'meta': [],\r\n 'section': ['Section::::Question of legal tender.\\n'],\r\n 'start_character': [246],\r\n 'start_paragraph_id': [30],\r\n 'title': ['Banknotes of the pound sterling'],\r\n 'wikipedia_id': ['270680']}]}}\r\nIn [35]: dataset['train_triviaqa']['input'][:10] \r\nOut[35]: ['', '', '', '', '', '', '', '', '', '']\r\n# same with test set \r\nIn [37]: dataset['test_triviaqa']['input'][:10] \r\nOut[37]: ['', '', '', '', '', '', '', '', '', '']\r\n# works fine with natural questions\r\nIn [34]: dataset['train_nq']['input'][:10] \r\nOut[34]: \r\n['how i.met your mother who is the mother',\r\n 'who had the most wins in the nfl',\r\n 'who played mantis guardians of the galaxy 2',\r\n 'what channel is the premier league on in france',\r\n \"god's not dead a light in the darkness release date\",\r\n 'who is the current president of un general assembly',\r\n 'when do the eclipse supposed to take place',\r\n 'what is the name of the sea surrounding dubai',\r\n 'who holds the nba record for most points in a career',\r\n 'when did the new maze runner movie come out']\r\n```\r\n\r\nStay safe :)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/792\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/791","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/791\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/791\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/791\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/791","id":734656518,"node_id":"MDExOlB1bGxSZXF1ZXN0NTE0MTg0MzU5","number":791,"title":"add amazon 
reviews","user":{"login":"joeddav","id":9353833,"node_id":"MDQ6VXNlcjkzNTM4MzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9353833?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/joeddav","html_url":"https:\/\/github.com\/joeddav","followers_url":"https:\/\/api.github.com\/users\/joeddav\/followers","following_url":"https:\/\/api.github.com\/users\/joeddav\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/joeddav\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/joeddav\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/joeddav\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/joeddav\/orgs","repos_url":"https:\/\/api.github.com\/users\/joeddav\/repos","events_url":"https:\/\/api.github.com\/users\/joeddav\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/joeddav\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@patrickvonplaten Yeah this is adapted from tfds so a lot is just how they wrote the code. Addressed your comments and also simplified the weird `AmazonUSReviewsConfig` definition. Will merge once tests pass.","Thanks for checking this one :) \r\nLooks good to me \r\n\r\nJust one question : is there a particular reason to use `names=[\"Y\", \"N\"]` in this order ? Usually the positive label is at index 1 and the negative one at index 0 for binary classification","> is there a particular reason to use `names=[\"Y\", \"N\"]` in this order ? Usually the positive label is at index 1 and the negative one at index 0 for binary classification\r\n\r\nHmm that's a good point. I'll submit a quick fix.\r\n\r\n"],"created_at":1604335377000,"updated_at":1604434506000,"closed_at":1604421837000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/791","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/791","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/791.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/791.patch"},"body":"Adds the Amazon US Reviews dataset as requested in #353. Converted from [TensorFlow Datasets](https:\/\/www.tensorflow.org\/datasets\/catalog\/amazon_us_reviews). 
cc @clmnt @sshleifer ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/791\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/790","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/790\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/790\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/790\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/790","id":734470197,"node_id":"MDU6SXNzdWU3MzQ0NzAxOTc=","number":790,"title":"Error running pip install -e \".[dev]\" on MacOS 10.13.6: faiss\/python does not exist","user":{"login":"shawwn","id":59632,"node_id":"MDQ6VXNlcjU5NjMy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/59632?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/shawwn","html_url":"https:\/\/github.com\/shawwn","followers_url":"https:\/\/api.github.com\/users\/shawwn\/followers","following_url":"https:\/\/api.github.com\/users\/shawwn\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/shawwn\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/shawwn\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/shawwn\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/shawwn\/orgs","repos_url":"https:\/\/api.github.com\/users\/shawwn\/repos","events_url":"https:\/\/api.github.com\/users\/shawwn\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/shawwn\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I saw that `faiss-cpu` 1.6.4.post2 was released recently to fix the installation on macos. 
It should work now","Closing this one.\r\nFeel free to re-open if you still have issues"],"created_at":1604320595000,"updated_at":1605017102000,"closed_at":1605017102000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I was following along with https:\/\/huggingface.co\/docs\/datasets\/share_dataset.html#adding-tests-and-metadata-to-the-dataset when I ran into this error.\r\n\r\n```sh\r\ngit clone https:\/\/github.com\/huggingface\/datasets\r\ncd datasets\r\nvirtualenv venv -p python3 --system-site-packages\r\nsource venv\/bin\/activate\r\npip install -e \".[dev]\"\r\n```\r\n\r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/59632\/97868518-72871800-1cd5-11eb-9cd2-37d4e9d20b39.png)\r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/59632\/97868592-977b8b00-1cd5-11eb-8f3c-0c409616149c.png)\r\n\r\nPython 3.7.7\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/790\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/789","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/789\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/789\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/789\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/789","id":734237839,"node_id":"MDExOlB1bGxSZXF1ZXN0NTEzODM1MzE0","number":789,"title":"dataset(ncslgr): add initial loading script","user":{"login":"AmitMY","id":5757359,"node_id":"MDQ6VXNlcjU3NTczNTk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5757359?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/AmitMY","html_url":"https:\/\/github.com\/AmitMY","followers_url":"https:\/\/api.github.com\/users\/AmitMY\/followers","following_url":"https:\/\/api.github.com\/users\/AmitMY\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/AmitMY\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/AmitMY\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/AmitMY\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/AmitMY\/orgs","repos_url":"https:\/\/api.github.com\/users\/AmitMY\/repos","events_url":"https:\/\/api.github.com\/users\/AmitMY\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/AmitMY\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @AmitMY, sorry for leaving you hanging for a minute :) \r\n\r\nWe've developed a new pipeline for adding datasets with a few extra steps, including adding a dataset card. You can find the full process [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/ADD_NEW_DATASET.md)\r\n\r\nWould you be up for adding the tags and description in the README.md so we can merge this cool dataset?","@lhoestq should be ready for another review :) ","Awesome thank you !\r\n\r\nIt looks like the PR now includes changes from other PR that were previously merged. 
\r\nFeel free to create another branch and another PR so that we can have a clean diff.\r\n","Closing for #958 "],"created_at":1604299810000,"updated_at":1606830097000,"closed_at":1606830096000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/789","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/789","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/789.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/789.patch"},"body":"Its a small dataset, but its heavily annotated\r\nhttps:\/\/www.bu.edu\/asllrp\/ncslgr.html\r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/5757359\/97838609-3c539380-1ce9-11eb-885b-a15d4c91ea49.png)\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/789\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/788","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/788\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/788\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/788\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/788","id":734136124,"node_id":"MDU6SXNzdWU3MzQxMzYxMjQ=","number":788,"title":"failed to reuse cache","user":{"login":"WangHexie","id":31768052,"node_id":"MDQ6VXNlcjMxNzY4MDUy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/31768052?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/WangHexie","html_url":"https:\/\/github.com\/WangHexie","followers_url":"https:\/\/api.github.com\/users\/WangHexie\/followers","following_url":"https:\/\/api.github.com\/users\/WangHexie\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/WangHexie\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/WangHexie\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/WangHexie\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/WangHexie\/orgs","repos_url":"https:\/\/api.github.com\/users\/WangHexie\/repos","events_url":"https:\/\/api.github.com\/users\/WangHexie\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/WangHexie\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1604284956000,"updated_at":1604319975000,"closed_at":1604319975000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I packed the `load_dataset ` in a function of class, and cached data in a directory. But when I import the class and use the function, the data still have to be downloaded again. 
The information (Downloading and preparing dataset cnn_dailymail\/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to ******) which logged to terminal shows the path is right to the cache directory, but the files still have to be downloaded again.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/788\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/787","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/787\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/787\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/787\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/787","id":734070162,"node_id":"MDExOlB1bGxSZXF1ZXN0NTEzNjk5MTQz","number":787,"title":"Adding nli_tr dataset","user":{"login":"e-budur","id":2246791,"node_id":"MDQ6VXNlcjIyNDY3OTE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2246791?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/e-budur","html_url":"https:\/\/github.com\/e-budur","followers_url":"https:\/\/api.github.com\/users\/e-budur\/followers","following_url":"https:\/\/api.github.com\/users\/e-budur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/e-budur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/e-budur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/e-budur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/e-budur\/orgs","repos_url":"https:\/\/api.github.com\/users\/e-budur\/repos","events_url":"https:\/\/api.github.com\/users\/e-budur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/e-budur\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thank you @lhoestq for the time you take to review our pull request. We appreciate your help.\r\n\r\nWe've made the changes you described. Hope that it is ready for being merged. Please let me know if you have any additional requests for revisions. "],"created_at":1604267384000,"updated_at":1605207962000,"closed_at":1605207962000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/787","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/787","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/787.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/787.patch"},"body":"Hello,\r\n\r\nIn this pull request, we have implemented the necessary interface to add our recent dataset [NLI-TR](https:\/\/github.com\/boun-tabi\/NLI-TR). The datasets will be presented on a full paper at EMNLP 2020 this month. [[arXiv link] ](https:\/\/arxiv.org\/pdf\/2004.14963.pdf)\r\n\r\nThe dataset is the neural machine translation of SNLI and MultiNLI datasets into Turkish. So, we followed a similar format with the original datasets hosted in the HuggingFace datasets hub. \r\n\r\nOur dataset is designed to be accessed as follows by following the interface of the GLUE dataset that provides multiple datasets in a single interface over the HuggingFace datasets hub. 
\r\n\r\n```\r\nfrom datasets import load_dataset\r\nmultinli_tr = load_dataset(\"nli_tr\", \"multinli_tr\")\r\nsnli_tr = load_dataset(\"nli_tr\", \"snli_tr\")\r\n```\r\n\r\nThanks for your help in reviewing our pull request.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/787\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/786","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/786\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/786\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/786\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/786","id":733761717,"node_id":"MDU6SXNzdWU3MzM3NjE3MTc=","number":786,"title":"feat(dataset): multiprocessing _generate_examples","user":{"login":"AmitMY","id":5757359,"node_id":"MDQ6VXNlcjU3NTczNTk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5757359?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/AmitMY","html_url":"https:\/\/github.com\/AmitMY","followers_url":"https:\/\/api.github.com\/users\/AmitMY\/followers","following_url":"https:\/\/api.github.com\/users\/AmitMY\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/AmitMY\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/AmitMY\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/AmitMY\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/AmitMY\/orgs","repos_url":"https:\/\/api.github.com\/users\/AmitMY\/repos","events_url":"https:\/\/api.github.com\/users\/AmitMY\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/AmitMY\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I agree that would be cool :)\r\nRight now the only distributed dataset builder is based on Apache Beam so you can use distributed processing frameworks like Dataflow, Spark, Flink etc. to build your dataset but it's not really well suited for single-worker parallel processing afaik"],"created_at":1604163136000,"updated_at":1604911118000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"forking this out of #741, this issue is only regarding multiprocessing\r\n\r\nI'd love if there was a dataset configuration parameter `workers`, where when it is `1` it behaves as it does right now, and when its `>1` maybe `_generate_examples` can also get the `pool` and return an iterable using the pool.\r\n\r\nIn my use case, I would instead of:\r\n```python\r\nfor datum in data:\r\n yield self.load_datum(datum)\r\n```\r\ndo:\r\n```python\r\nreturn pool.map(self.load_datum, data)\r\n```\r\n\r\nAs the dataset in question, as an example, has **only** 7000 rows, and takes 10 seconds to load each row on average, it takes almost 20 hours to load the entire dataset.\r\nIf this was a larger dataset (and many such datasets exist), it would take multiple days to complete.\r\n\r\nUsing multiprocessing, for example, 40 cores, could speed it up dramatically. 
For this dataset, hopefully to fully load in under an hour.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/786\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/785","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/785\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/785\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/785\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/785","id":733719419,"node_id":"MDExOlB1bGxSZXF1ZXN0NTEzNDMyNTM1","number":785,"title":"feat(aslg_pc12): add dev and test data splits","user":{"login":"AmitMY","id":5757359,"node_id":"MDQ6VXNlcjU3NTczNTk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5757359?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/AmitMY","html_url":"https:\/\/github.com\/AmitMY","followers_url":"https:\/\/api.github.com\/users\/AmitMY\/followers","following_url":"https:\/\/api.github.com\/users\/AmitMY\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/AmitMY\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/AmitMY\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/AmitMY\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/AmitMY\/orgs","repos_url":"https:\/\/api.github.com\/users\/AmitMY\/repos","events_url":"https:\/\/api.github.com\/users\/AmitMY\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/AmitMY\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! I'm not sure we should make this split decision arbitrarily on our side. 
Users can split it afterwards to whatever they want using `dataset.train_test_split` for example.\r\nMoreover it looks like there's already papers that use this dataset and propose their own splits ([here](http:\/\/xanthippi.ceid.upatras.gr\/HealthSign\/resources\/Publications\/sitis_paper_25_10.pdf) 80-20) \r\nWhat do you think ?","I was not aware of the `train_test_split` method, thanks!\r\nSoe ven though it contributes to reproducibility, no need to do this split then."],"created_at":1604150738000,"updated_at":1605022170000,"closed_at":1605022170000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/785","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/785","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/785.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/785.patch"},"body":"For reproducibility sake, it's best if there are defined dev and test splits.\r\n\r\nThe original paper author did not define splits for the entire dataset, not for the sample loaded via this library, so I decided to define:\r\n- 5\/7th for train\r\n- 1\/7th for dev\r\n- 1\/7th for test\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/785\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/784","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/784\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/784\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/784\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/784","id":733700463,"node_id":"MDU6SXNzdWU3MzM3MDA0NjM=","number":784,"title":"Issue with downloading Wikipedia data for low resource language","user":{"login":"SamuelCahyawijaya","id":2826602,"node_id":"MDQ6VXNlcjI4MjY2MDI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2826602?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SamuelCahyawijaya","html_url":"https:\/\/github.com\/SamuelCahyawijaya","followers_url":"https:\/\/api.github.com\/users\/SamuelCahyawijaya\/followers","following_url":"https:\/\/api.github.com\/users\/SamuelCahyawijaya\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SamuelCahyawijaya\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SamuelCahyawijaya\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SamuelCahyawijaya\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SamuelCahyawijaya\/orgs","repos_url":"https:\/\/api.github.com\/users\/SamuelCahyawijaya\/repos","events_url":"https:\/\/api.github.com\/users\/SamuelCahyawijaya\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SamuelCahyawijaya\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hello, maybe you could ty to use another date for the wikipedia dump (see the available [dates](https:\/\/dumps.wikimedia.org\/jvwiki) here for `jv`) ?","@lhoestq\r\n\r\nI've tried `load_dataset('wikipedia', '20200501.zh', beam_runner='DirectRunner')` and got the same `FileNotFoundError` as @SamuelCahyawijaya.\r\n\r\nAlso, using another 
date (e.g. `load_dataset('wikipedia', '20201120.zh', beam_runner='DirectRunner')`) will give the following error message.\r\n\r\n```\r\nValueError: BuilderConfig 20201120.zh not found. Available: ['20200501.aa', '20200501.ab', '20200501.ace', '20200501.ady', '20200501.af', '20200501.ak', '20200501.als', '20200501.am', '20200501.an', '20200501.ang', '20200501.ar', '20200501.arc', '20200501.arz', '20200501.as', '20200501.ast', '20200501.atj', '20200501.av', '20200501.ay', '20200501.az', '20200501.azb', '20200501.ba', '20200501.bar', '20200501.bat-smg', '20200501.bcl', '20200501.be', '20200501.be-x-old', '20200501.bg', '20200501.bh', '20200501.bi', '20200501.bjn', '20200501.bm', '20200501.bn', '20200501.bo', '20200501.bpy', '20200501.br', '20200501.bs', '20200501.bug', '20200501.bxr', '20200501.ca', '20200501.cbk-zam', '20200501.cdo', '20200501.ce', '20200501.ceb', '20200501.ch', '20200501.cho', '20200501.chr', '20200501.chy', '20200501.ckb', '20200501.co', '20200501.cr', '20200501.crh', '20200501.cs', '20200501.csb', '20200501.cu', '20200501.cv', '20200501.cy', '20200501.da', '20200501.de', '20200501.din', '20200501.diq', '20200501.dsb', '20200501.dty', '20200501.dv', '20200501.dz', '20200501.ee', '20200501.el', '20200501.eml', '20200501.en', '20200501.eo', '20200501.es', '20200501.et', '20200501.eu', '20200501.ext', '20200501.fa', '20200501.ff', '20200501.fi', '20200501.fiu-vro', '20200501.fj', '20200501.fo', '20200501.fr', '20200501.frp', '20200501.frr', '20200501.fur', '20200501.fy', '20200501.ga', '20200501.gag', '20200501.gan', '20200501.gd', '20200501.gl', '20200501.glk', '20200501.gn', '20200501.gom', '20200501.gor', '20200501.got', '20200501.gu', '20200501.gv', '20200501.ha', '20200501.hak', '20200501.haw', '20200501.he', '20200501.hi', '20200501.hif', '20200501.ho', '20200501.hr', '20200501.hsb', '20200501.ht', '20200501.hu', '20200501.hy', '20200501.ia', '20200501.id', '20200501.ie', '20200501.ig', '20200501.ii', '20200501.ik', '20200501.ilo', '20200501.inh', '20200501.io', '20200501.is', '20200501.it', '20200501.iu', '20200501.ja', '20200501.jam', '20200501.jbo', '20200501.jv', '20200501.ka', '20200501.kaa', '20200501.kab', '20200501.kbd', '20200501.kbp', '20200501.kg', '20200501.ki', '20200501.kj', '20200501.kk', '20200501.kl', '20200501.km', '20200501.kn', '20200501.ko', '20200501.koi', '20200501.krc', '20200501.ks', '20200501.ksh', '20200501.ku', '20200501.kv', '20200501.kw', '20200501.ky', '20200501.la', '20200501.lad', '20200501.lb', '20200501.lbe', '20200501.lez', '20200501.lfn', '20200501.lg', '20200501.li', '20200501.lij', '20200501.lmo', '20200501.ln', '20200501.lo', '20200501.lrc', '20200501.lt', '20200501.ltg', '20200501.lv', '20200501.mai', '20200501.map-bms', '20200501.mdf', '20200501.mg', '20200501.mh', '20200501.mhr', '20200501.mi', '20200501.min', '20200501.mk', '20200501.ml', '20200501.mn', '20200501.mr', '20200501.mrj', '20200501.ms', '20200501.mt', '20200501.mus', '20200501.mwl', '20200501.my', '20200501.myv', '20200501.mzn', '20200501.na', '20200501.nah', '20200501.nap', '20200501.nds', '20200501.nds-nl', '20200501.ne', '20200501.new', '20200501.ng', '20200501.nl', '20200501.nn', '20200501.no', '20200501.nov', '20200501.nrm', '20200501.nso', '20200501.nv', '20200501.ny', '20200501.oc', '20200501.olo', '20200501.om', '20200501.or', '20200501.os', '20200501.pa', '20200501.pag', '20200501.pam', '20200501.pap', '20200501.pcd', '20200501.pdc', '20200501.pfl', '20200501.pi', '20200501.pih', '20200501.pl', '20200501.pms', '20200501.pnb', '20200501.pnt', '20200501.ps', 
'20200501.pt', '20200501.qu', '20200501.rm', '20200501.rmy', '20200501.rn', '20200501.ro', '20200501.roa-rup', '20200501.roa-tara', '20200501.ru', '20200501.rue', '20200501.rw', '20200501.sa', '20200501.sah', '20200501.sat', '20200501.sc', '20200501.scn', '20200501.sco', '20200501.sd', '20200501.se', '20200501.sg', '20200501.sh', '20200501.si', '20200501.simple', '20200501.sk', '20200501.sl', '20200501.sm', '20200501.sn', '20200501.so', '20200501.sq', '20200501.sr', '20200501.srn', '20200501.ss', '20200501.st', '20200501.stq', '20200501.su', '20200501.sv', '20200501.sw', '20200501.szl', '20200501.ta', '20200501.tcy', '20200501.te', '20200501.tet', '20200501.tg', '20200501.th', '20200501.ti', '20200501.tk', '20200501.tl', '20200501.tn', '20200501.to', '20200501.tpi', '20200501.tr', '20200501.ts', '20200501.tt', '20200501.tum', '20200501.tw', '20200501.ty', '20200501.tyv', '20200501.udm', '20200501.ug', '20200501.uk', '20200501.ur', '20200501.uz', '20200501.ve', '20200501.vec', '20200501.vep', '20200501.vi', '20200501.vls', '20200501.vo', '20200501.wa', '20200501.war', '20200501.wo', '20200501.wuu', '20200501.xal', '20200501.xh', '20200501.xmf', '20200501.yi', '20200501.yo', '20200501.za', '20200501.zea', '20200501.zh', '20200501.zh-classical', '20200501.zh-min-nan', '20200501.zh-yue', '20200501.zu']\r\n```\r\n\r\nI am pretty sure that `https:\/\/dumps.wikimedia.org\/enwiki\/20201120\/dumpstatus.json` exists.","Thanks for reporting I created a PR to make the custom config work (language=\"zh\", date=\"20201120\").","@lhoestq Thanks!"],"created_at":1604144400000,"updated_at":1624584931000,"closed_at":1606318933000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi, I tried to download Sundanese and Javanese wikipedia data with the following snippet\r\n```\r\njv_wiki = datasets.load_dataset('wikipedia', '20200501.jv', beam_runner='DirectRunner')\r\nsu_wiki = datasets.load_dataset('wikipedia', '20200501.su', beam_runner='DirectRunner')\r\n```\r\nAnd I get the following error for these two languages:\r\nJavanese\r\n```\r\nFileNotFoundError: Couldn't find file at https:\/\/dumps.wikimedia.org\/jvwiki\/20200501\/dumpstatus.json\r\n```\r\n\r\nSundanese\r\n```\r\nFileNotFoundError: Couldn't find file at https:\/\/dumps.wikimedia.org\/suwiki\/20200501\/dumpstatus.json\r\n```\r\n\r\nI found from https:\/\/github.com\/huggingface\/datasets\/issues\/577#issuecomment-688435085 that for small languages, they are directly downloaded and parsed from the Wikipedia dump site, but both of `https:\/\/dumps.wikimedia.org\/jvwiki\/20200501\/dumpstatus.json` and `https:\/\/dumps.wikimedia.org\/suwiki\/20200501\/dumpstatus.json` are no longer valid.\r\n\r\n Any suggestions on how to handle this issue? 
Thanks!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/784\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/783","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/783\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/783\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/783\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/783","id":733536254,"node_id":"MDExOlB1bGxSZXF1ZXN0NTEzMzAwODUz","number":783,"title":"updated links to v1.3 of quail, fixed the description","user":{"login":"annargrs","id":1450322,"node_id":"MDQ6VXNlcjE0NTAzMjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1450322?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/annargrs","html_url":"https:\/\/github.com\/annargrs","followers_url":"https:\/\/api.github.com\/users\/annargrs\/followers","following_url":"https:\/\/api.github.com\/users\/annargrs\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/annargrs\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/annargrs\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/annargrs\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/annargrs\/orgs","repos_url":"https:\/\/api.github.com\/users\/annargrs\/repos","events_url":"https:\/\/api.github.com\/users\/annargrs\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/annargrs\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["we're using quail 1.3 now thanks.\r\nclosing this one"],"created_at":1604094453000,"updated_at":1606691119000,"closed_at":1606691118000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/783","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/783","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/783.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/783.patch"},"body":"updated links to v1.3 of quail, fixed the description","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/783\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/782","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/782\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/782\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/782\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/782","id":733316463,"node_id":"MDExOlB1bGxSZXF1ZXN0NTEzMTE2MTM0","number":782,"title":"Fix metric deletion when attribuets are 
missing","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1604074570000,"updated_at":1604076473000,"closed_at":1604076472000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/782","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/782","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/782.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/782.patch"},"body":"When you call `del` on a metric we want to make sure that the arrow attributes are not already deleted.\r\nI just added `if hasattr(...)` to make sure it doesn't crash","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/782\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/781","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/781\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/781\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/781\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/781","id":733168609,"node_id":"MDExOlB1bGxSZXF1ZXN0NTEyOTkyMzQw","number":781,"title":"Add XNLI train 
set","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1604064113000,"updated_at":1604946170000,"closed_at":1604946169000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/781","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/781","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/781.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/781.patch"},"body":"I added the train set that was built using the translated MNLI.\r\nNow you can load the dataset specifying one language:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nxnli_en = load_dataset(\"xnli\", \"en\")\r\nprint(xnli_en[\"train\"][0])\r\n# {'hypothesis': 'Product and geography are what make cream skimming work .', 'label': 1, 'premise': 'Conceptually cream skimming has two basic dimensions - product and geography .'}\r\nprint(xnli_en[\"test\"][0]) \r\n# {'hypothesis': 'I havent spoken to him again.', 'label': 2, 'premise': \"Well, I wasn't even thinking about that, but I was so frustrated, and, I ended up talking to him again.\"}\r\n```\r\n\r\nCc @sgugger ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/781\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/780","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/780\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/780\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/780\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/780","id":732738647,"node_id":"MDExOlB1bGxSZXF1ZXN0NTEyNjM0MzI0","number":780,"title":"Add ASNQ 
dataset","user":{"login":"mkserge","id":2992022,"node_id":"MDQ6VXNlcjI5OTIwMjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2992022?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mkserge","html_url":"https:\/\/github.com\/mkserge","followers_url":"https:\/\/api.github.com\/users\/mkserge\/followers","following_url":"https:\/\/api.github.com\/users\/mkserge\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mkserge\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mkserge\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mkserge\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mkserge\/orgs","repos_url":"https:\/\/api.github.com\/users\/mkserge\/repos","events_url":"https:\/\/api.github.com\/users\/mkserge\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mkserge\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Very nice !\r\nWhat do the `sentence1` and `sentence2` correspond to exactly ?\r\nAlso maybe you could use the `ClassLabel` feature type for the `label` field (see [snli](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/snli\/snli.py) for example)","> What do the `sentence1` and `sentence2` correspond to exactly ?\r\n\r\n`sentence1` is a question, and `sentence2` is a candidate answer sentence. The labels are [1, 2, 3, 4] defining a relation between the answer sentence and the question. For example, label 4 means that the answer sentence is inside the _long_answer_ passage AND that the _short_answer_ is within the answer sentence. All the other labels are the negatives with different characteristics. (the short_answer, long_answer terminology is borrowed from Google's NQ dataset)\r\n\r\nShould I label them simply as `question` and `answer`? I was going more with what I saw in the examples\/run_glue.py script, but I realize now there is no restriction around this.\r\n\r\n> Also maybe you could use the `ClassLabel` feature type for the `label` field (see [snli](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/snli\/snli.py) for example)\r\n\r\nI am finding it difficult to assign names to each class, but perhaps it's possible. Here's the description of each class from the paper.\r\n\r\n1. Sentences from the document that are in the long answer but do not contain the annotated short answers. It is possible that these sentences might contain the short answer.\r\n2. Sentences from the document that are not in the long answer but contain the short answer string, that is, such occurrence is purely accidental.\r\n3. Sentences from the document that are neither in the long answer nor contain the short answer.\r\n4. Sentences from the document that are in the long answer and do contain the annotated short answers.\r\n\r\nAny ideas?\r\n\r\n","Yes it's better to have explicit feature names. Maybe go with question\/answer or question\/sentence.\r\nI read in the paper that 1,2 and 3 are considered negative and 4 positive.\r\nWe could have a binary classification label `label` (either positive of negative) and then two boolean fields `short_answser_in_sentence` and `sentence_in_long_answer`. What do you think ?","> Yes it's better to have explicit feature names. 
Maybe go with question\/answer or question\/sentence.\r\n> I read in the paper that 1,2 and 3 are considered negative and 4 positive.\r\n> We could have a binary classification label `label` (either positive of negative) and then two boolean fields `short_answser_in_sentence` and `sentence_in_long_answer`. What do you think ?\r\n\r\nOk, sounds good. I went with `sentence` to keep it consistent with `short_answer_in_sentence` and `sentence_in_long_answer`. \r\n\r\nI changed it to a ClassLabel with pos and neg classes and added the two above as features. Let me know if this is not what you had in mind.\r\n\r\n"],"created_at":1604014316000,"updated_at":1605000383000,"closed_at":1605000383000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/780","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/780","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/780.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/780.patch"},"body":"This pull request adds the ASNQ dataset. It is a dataset for answer sentence selection derived from Google Natural Questions (NQ) dataset (Kwiatkowski et al. 2019). The dataset details can be found in the paper at https:\/\/arxiv.org\/abs\/1911.04118\r\n\r\nThe dataset is authored by Siddhant Garg, Thuy Vu and Alessandro Moschitti. \r\n\r\n_Please note that I have no affiliation with the authors._\r\n\r\nRepo: https:\/\/github.com\/alexa\/wqa_tanda\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/780\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/779","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/779\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/779\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/779\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/779","id":732514887,"node_id":"MDExOlB1bGxSZXF1ZXN0NTEyNDQzMjY0","number":779,"title":"Feature\/fidelity metrics from emnlp2020 evaluating and characterizing human rationales","user":{"login":"rathoreanirudh","id":11327413,"node_id":"MDQ6VXNlcjExMzI3NDEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11327413?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rathoreanirudh","html_url":"https:\/\/github.com\/rathoreanirudh","followers_url":"https:\/\/api.github.com\/users\/rathoreanirudh\/followers","following_url":"https:\/\/api.github.com\/users\/rathoreanirudh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rathoreanirudh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rathoreanirudh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rathoreanirudh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rathoreanirudh\/orgs","repos_url":"https:\/\/api.github.com\/users\/rathoreanirudh\/repos","events_url":"https:\/\/api.github.com\/users\/rathoreanirudh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rathoreanirudh\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! 
This looks interesting, thanks for adding it :) \r\n\r\nFor metrics there should only be two features fields: references and predictions.\r\nBoth of them can be defined as you want using nested structures if you need to.\r\nAlso I'm not sure what goes into references and what goes into predictions, could you give more details please ?\r\nAll the other computations parameters (model etc.) are fine though. Maybe explain a bit more what they're used for","> Hi ! This looks interesting, thanks for adding it :)\r\n> \r\n> For metrics there should only be two features fields: references and predictions.\r\n> Both of them can be defined as you want using nested structures if you need to.\r\n> Also I'm not sure what goes into references and what goes into predictions, could you give more details please ?\r\n> All the other computations parameters (model etc.) are fine though. Maybe explain a bit more what they're used for\r\n\r\nThe `predictions` are the predicted labels by a model for a particular input. Do you mean making `prob_y_hat` - the probability of the prediction being the predicted label, `prob_y_hat_alpha` - the probability of the prediction being the predicted label when the input is reduced subject to alpha and the `null_difference` is the difference between the probability of the prediction being the predicted label in full information minus the probability in zero information a part of references? Also, I have added the description for other parameters in kwargs_description. I can expand it if that makes sense?","I think every value that is generated by the model (so label, prob_y_hat, prob_y_hat_alpha etc.) should be in `predictions`.\r\nFeel free to add more details in the kwargs_description, this is very useful for the end user.","Hi @lhoestq , I have updated the code according to your feedback. Please, let me know if it looks good and can be merged now."],"created_at":1603992674000,"updated_at":1605291082000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/779","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/779","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/779.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/779.patch"},"body":"This metric computes fidelity (Yu et al. 2019, DeYoung et al. 2019) and normalized fidelity (Carton et al. 
2020).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/779\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/778","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/778\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/778\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/778\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/778","id":732449652,"node_id":"MDU6SXNzdWU3MzI0NDk2NTI=","number":778,"title":"Unexpected behavior when loading cached csv file?","user":{"login":"dcfidalgo","id":15979778,"node_id":"MDQ6VXNlcjE1OTc5Nzc4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15979778?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dcfidalgo","html_url":"https:\/\/github.com\/dcfidalgo","followers_url":"https:\/\/api.github.com\/users\/dcfidalgo\/followers","following_url":"https:\/\/api.github.com\/users\/dcfidalgo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dcfidalgo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dcfidalgo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dcfidalgo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dcfidalgo\/orgs","repos_url":"https:\/\/api.github.com\/users\/dcfidalgo\/repos","events_url":"https:\/\/api.github.com\/users\/dcfidalgo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dcfidalgo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Thanks for reporting.\r\nThe same issue was reported in #730 (but with the encodings instead of the delimiter). It was fixed by #770 .\r\nThe fix will be available in the next release :)","Thanks for the prompt reply and terribly sorry for the spam! \r\nLooking forward to the new release! "],"created_at":1603987570000,"updated_at":1604006487000,"closed_at":1604006487000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I read a csv file from disk and forgot so specify the right delimiter. When i read the csv file again specifying the right delimiter it had no effect since it was using the cached dataset. I am not sure if this is unwanted behavior since i can always specify `download_mode=\"force_redownload\"`. But i think it would be nice if the information what `delimiter` or what `column_names` were used would influence the identifier of the cached dataset.\r\n\r\nSmall snippet to reproduce the behavior:\r\n```python\r\nimport datasets\r\n\r\nwith open(\"dummy_data.csv\", \"w\") as file:\r\n file.write(\"test,this;text\\n\")\r\n\r\nprint(datasets.load_dataset(\"csv\", data_files=\"dummy_data.csv\", split=\"train\").column_names)\r\n# [\"test\", \"this;text\"]\r\n\r\nprint(datasets.load_dataset(\"csv\", data_files=\"dummy_data.csv\", split=\"train\", delimiter=\";\").column_names)\r\n# still [\"test\", \"this;text\"]\r\n```\r\n\r\nBy the way, thanks a lot for this amazing library! 
:)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/778\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/777","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/777\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/777\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/777\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/777","id":732376648,"node_id":"MDExOlB1bGxSZXF1ZXN0NTEyMzI2ODM2","number":777,"title":"Better error message for uninitialized metric","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1603982570000,"updated_at":1603984706000,"closed_at":1603984704000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/777","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/777","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/777.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/777.patch"},"body":"When calling `metric.compute()` without having called `metric.add` or `metric.add_batch` at least once, the error was quite cryptic. 
I added a better error message\r\n\r\nFix #729 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/777\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/776","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/776\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/776\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/776\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/776","id":732343550,"node_id":"MDExOlB1bGxSZXF1ZXN0NTEyMjk5NzQx","number":776,"title":"Allow custom split names in text dataset","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Awesome! This will make the behaviour much more intuitive for some non-standard code.\r\n\r\nThanks!"],"created_at":1603980246000,"updated_at":1604065605000,"closed_at":1604064232000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/776","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/776","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/776.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/776.patch"},"body":"The `text` dataset used to return only splits like train, test and validation. 
Other splits were ignored.\r\nNow any split name is allowed.\r\n\r\nI did the same for `json`, `pandas` and `csv`\r\n\r\nFix #735 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/776\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/775","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/775\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/775\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/775\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/775","id":732287504,"node_id":"MDExOlB1bGxSZXF1ZXN0NTEyMjUyODI3","number":775,"title":"Properly delete metrics when a process is killed","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1603975927000,"updated_at":1603980080000,"closed_at":1603980079000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/775","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/775","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/775.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/775.patch"},"body":"Tests are flaky when using metrics in distributed setup.\r\nThere is because of one test that make sure that using two possibly incompatible metric computation (same exp id) either works or raises the right error.\r\nHowever if the error is raised, all the processes of the metric are killed, and the open files (arrow + lock files) are not closed correctly. 
This causes PermissionError on Windows when deleting the temporary directory.\r\n\r\nTo fix that I added a `finally` clause in the function passed to multiprocess to properly close the files when the process exits.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/775\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/774","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/774\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/774\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/774\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/774","id":732265741,"node_id":"MDExOlB1bGxSZXF1ZXN0NTEyMjM0NjA0","number":774,"title":"[ROUGE] Add description to Rouge metric","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1603973972000,"updated_at":1603994150000,"closed_at":1603994148000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/774","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/774","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/774.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/774.patch"},"body":"Add information about case sensitivity to ROUGE.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/774\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/773","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/773\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/773\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/773\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/773","id":731684153,"node_id":"MDU6SXNzdWU3MzE2ODQxNTM=","number":773,"title":"Adding CC-100: Monolingual Datasets from Web Crawl 
Data","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false},"assignees":[{"login":"abhishekkrthakur","id":1183441,"node_id":"MDQ6VXNlcjExODM0NDE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1183441?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abhishekkrthakur","html_url":"https:\/\/github.com\/abhishekkrthakur","followers_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/followers","following_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/orgs","repos_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/repos","events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abhishekkrthakur\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["cc @aconneau ;) 
"],"created_at":1603909241000,"updated_at":1607941208000,"closed_at":1607941207000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** CC-100: Monolingual Datasets from Web Crawl Data\r\n- **Description:** https:\/\/twitter.com\/alex_conneau\/status\/1321507120848625665\r\n- **Paper:** https:\/\/arxiv.org\/abs\/1911.02116\r\n- **Data:** http:\/\/data.statmt.org\/cc-100\/\r\n- **Motivation:** A large scale multi-lingual language modeling dataset. Text is de-duplicated and filtered by how \"Wikipedia-like\" it is, hopefully helping avoid some of the worst parts of the common crawl.\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/huggingface.co\/docs\/datasets\/share_dataset.html).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/773\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/772","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/772\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/772\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/772\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/772","id":731612430,"node_id":"MDExOlB1bGxSZXF1ZXN0NTExNjg4ODMx","number":772,"title":"Fix metric with cache dir","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1603903393000,"updated_at":1603964084000,"closed_at":1603964083000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/772","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/772","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/772.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/772.patch"},"body":"The cache_dir provided by the user was concatenated twice and therefore causing FileNotFound errors.\r\nThe tests didn't cover the case of providing `cache_dir=` for metrics because of a stupid issue (it was not using the right parameter).\r\n\r\nI remove the double concatenation and I fixed the tests\r\n\r\nFix #728 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/772\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/771","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/771\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/771\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/771\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/771","id":731482213,"node_id":"MDU6SXNzdWU3MzE0ODIyMTM=","number":771,"title":"Using `Dataset.map` with `n_proc>1` print multiple progress bars","user":{"login":"sgugger","id":35901082,"node_id":"MDQ6VXNlcjM1OTAxMDgy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35901082?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sgugger","html_url":"https:\/\/github.com\/sgugger","followers_url":"https:\/\/api.github.com\/users\/sgugger\/followers","following_url":"https:\/\/api.github.com\/users\/sgugger\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sgugger\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sgugger\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sgugger\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sgugger\/orgs","repos_url":"https:\/\/api.github.com\/users\/sgugger\/repos","events_url":"https:\/\/api.github.com\/users\/sgugger\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sgugger\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Yes it allows to monitor the speed of each process. Currently each process takes care of one shard of the dataset.\r\n\r\nAt one point we can consider using streaming batches to a pool of processes instead of sharding the dataset in `num_proc` parts. At that point it will be easy to use only one progress bar"],"created_at":1603894407000,"updated_at":1603894697000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"When using `Dataset.map` with `n_proc > 1`, only one of the processes should print a progress bar (to make the output readable). 
Right now, `n_proc` progress bars are printed.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/771\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/770","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/770\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/770\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/770\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/770","id":731445222,"node_id":"MDExOlB1bGxSZXF1ZXN0NTExNTQ5MTg1","number":770,"title":"Fix custom builder caching","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1603891944000,"updated_at":1603964163000,"closed_at":1603964161000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/770","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/770","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/770.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/770.patch"},"body":"The cache directory of a dataset didn't take into account additional parameters that the user could specify such as `features` or any parameter of the builder configuration kwargs (ex: `encoding` for the `text` dataset).\r\n\r\nTo fix that, the cache directory name now has a suffix that depends on all of them.\r\n\r\nFix #730\r\nFix #750 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/770\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/769","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/769\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/769\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/769\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/769","id":731257104,"node_id":"MDU6SXNzdWU3MzEyNTcxMDQ=","number":769,"title":"How to choose proper download_mode in function 
load_dataset?","user":{"login":"jzq2000","id":48550398,"node_id":"MDQ6VXNlcjQ4NTUwMzk4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/48550398?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jzq2000","html_url":"https:\/\/github.com\/jzq2000","followers_url":"https:\/\/api.github.com\/users\/jzq2000\/followers","following_url":"https:\/\/api.github.com\/users\/jzq2000\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jzq2000\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jzq2000\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jzq2000\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jzq2000\/orgs","repos_url":"https:\/\/api.github.com\/users\/jzq2000\/repos","events_url":"https:\/\/api.github.com\/users\/jzq2000\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jzq2000\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["`download_mode=datasets.GenerateMode.FORCE_REDOWNLOAD` should work.\r\nThis makes me think we we should rename this to DownloadMode.FORCE_REDOWNLOAD. Currently that's confusing","Can we just use `features=...` in `load_dataset` for this @lhoestq?","Indeed you should use `features` in this case. \r\n```python\r\nfeatures = Features({'text': Value('string'), 'label': Value('float32')})\r\ndataset = load_dataset('csv', data_files=['sst_test.csv'], features=features)\r\n```\r\nNote that because of an issue with the caching when you change the features (see #750 ) you still need to specify the `FORCE_REDOWNLOAD ` flag. I'm working on a fix for this one"],"created_at":1603876579000,"updated_at":1603881299000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi, I am a beginner to datasets and I try to use datasets to load my csv file.\r\nmy csv file looks like this\r\n\r\n``` \r\ntext,label\r\n\"Effective but too-tepid biopic\",3\r\n\"If you sometimes like to go to the movies to have fun , Wasabi is a good place to start .\",4\r\n\"Emerges as something rare , an issue movie that 's so honest and keenly observed that it does n't feel like one .\",5\r\n```\r\n\r\nFirst I try to use this command to load my csv file . \r\n\r\n``` python\r\ndataset=load_dataset('csv', data_files=['sst_test.csv'])\r\n```\r\n\r\nIt seems good, but when i try to overwrite the convert_options to convert 'label' columns from int64 to float32 like this.\r\n\r\n``` python\r\nimport pyarrow as pa\r\nfrom pyarrow import csv\r\nread_options = csv.ReadOptions(block_size=1024*1024)\r\nparse_options = csv.ParseOptions()\r\nconvert_options = csv.ConvertOptions(column_types={'text': pa.string(), 'label': pa.float32()})\r\ndataset = load_dataset('csv', data_files=['sst_test.csv'], read_options=read_options,\r\n parse_options=parse_options, convert_options=convert_options)\r\n```\r\n\r\nIt keeps the same:\r\n\r\n```shell\r\nDataset(features: {'text': Value(dtype='string', id=None), 'label': Value(dtype='int64', id=None)}, num_rows: 2210)\r\n```\r\n\r\nI think this issue is caused by the parameter \"download_mode\" Default to REUSE_DATASET_IF_EXISTS because after I delete the cache_dir, it seems right.\r\n\r\nIs it a bug? 
How to choose proper download_mode to avoid this issue?\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/769\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/768","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/768\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/768\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/768\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/768","id":730908060,"node_id":"MDU6SXNzdWU3MzA5MDgwNjA=","number":768,"title":"Add a `lazy_map` method to `Dataset` and `DatasetDict`","user":{"login":"sgugger","id":35901082,"node_id":"MDQ6VXNlcjM1OTAxMDgy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35901082?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sgugger","html_url":"https:\/\/github.com\/sgugger","followers_url":"https:\/\/api.github.com\/users\/sgugger\/followers","following_url":"https:\/\/api.github.com\/users\/sgugger\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sgugger\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sgugger\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sgugger\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sgugger\/orgs","repos_url":"https:\/\/api.github.com\/users\/sgugger\/repos","events_url":"https:\/\/api.github.com\/users\/sgugger\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sgugger\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This is cool! I think some aspects to think about and decide in terms of API are:\r\n- do we allow several methods (chained i guess)\r\n- how do we inspect the currently set method(s)\r\n- how do we control\/reset them"],"created_at":1603837983000,"updated_at":1603875493000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"The library is great, but it would be even more awesome with a `lazy_map` method implemented on `Dataset` and `DatasetDict`. This would apply a function on a give item but when the item is requested. Two use cases:\r\n\r\n1. load image on the fly\r\n2. 
apply a random function and get different outputs at each epoch (like data augmentation or randomly masking a part of a sentence for BERT-like objectives).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/768\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/767","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/767\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/767\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/767\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/767","id":730771610,"node_id":"MDU6SXNzdWU3MzA3NzE2MTA=","number":767,"title":"Add option for named splits when using ds.train_test_split","user":{"login":"nateraw","id":32437151,"node_id":"MDQ6VXNlcjMyNDM3MTUx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32437151?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nateraw","html_url":"https:\/\/github.com\/nateraw","followers_url":"https:\/\/api.github.com\/users\/nateraw\/followers","following_url":"https:\/\/api.github.com\/users\/nateraw\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nateraw\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nateraw\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nateraw\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nateraw\/orgs","repos_url":"https:\/\/api.github.com\/users\/nateraw\/repos","events_url":"https:\/\/api.github.com\/users\/nateraw\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nateraw\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Yes definitely we should give more flexibility to control the name of the splits outputted by `train_test_split`.\r\n\r\nRelated is the very interesting feedback from @bramvanroy on how we should improve this method: https:\/\/discuss.huggingface.co\/t\/how-to-split-main-dataset-into-train-dev-test-as-datasetdict\/1090\/5\r\n\r\nAnd in particular that it should advantageously be able to split in 3 splits as well instead of just 2 like we copied from sklearn."],"created_at":1603828784000,"updated_at":1605017121000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"### Feature Request \ud83d\ude80 \r\n\r\nCan we add a way to name your splits when using the `.train_test_split` function?\r\n\r\nIn almost every use case I've come across, I have a `train` and a `test` split in my `DatasetDict`, and I want to create a `validation` split. 
Therefore, its kinda useless to get a `test` split back from `train_test_split`, as it'll just overwrite my real `test` split that I intended to keep.\r\n\r\n### Workaround\r\n\r\nthis is my hack for dealin with this, for now :slightly_smiling_face:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\u200b\r\n\u200b\r\nds = load_dataset('imdb')\r\nds['train'], ds['validation'] = ds['train'].train_test_split(.1).values()\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/767\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/766","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/766\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/766\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/766\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/766","id":730669596,"node_id":"MDU6SXNzdWU3MzA2Njk1OTY=","number":766,"title":"[GEM] add DART data-to-text generation dataset","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Is this a duplicate of #924 ?","Yup, closing! 
Haven't been keeping track of the solved issues during the sprint."],"created_at":1603820044000,"updated_at":1607002638000,"closed_at":1607002638000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** DART\r\n- **Description:** DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set.\r\n- **Paper:** https:\/\/arxiv.org\/abs\/2007.02871v1\r\n- **Data:** https:\/\/github.com\/Yale-LILY\/dart\r\n- **Motivation:** the dataset will likely be included in the GEM benchmark\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/huggingface.co\/docs\/datasets\/share_dataset.html).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/766\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/765","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/765\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/765\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/765\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/765","id":730668332,"node_id":"MDU6SXNzdWU3MzA2NjgzMzI=","number":765,"title":"[GEM] Add DART data-to-text generation dataset","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1603819943000,"updated_at":1603820061000,"closed_at":1603820061000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"## Adding a Dataset\r\n- **Name:** DART\r\n- **Description:** DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set.\r\n- **Paper:** https:\/\/arxiv.org\/abs\/2007.02871v1\r\n- **Data:** 
https:\/\/github.com\/Yale-LILY\/dart\r\n- **Motivation:** It will likely be included in the GEM generation evaluation benchmark\r\n\r\nInstructions to add a new dataset can be found [here](https:\/\/huggingface.co\/docs\/datasets\/share_dataset.html).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/765\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/764","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/764\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/764\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/764\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/764","id":730617828,"node_id":"MDExOlB1bGxSZXF1ZXN0NTEwODkyMTk2","number":764,"title":"Adding Issue Template for Dataset Requests","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1603816628000,"updated_at":1603819526000,"closed_at":1603819525000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/764","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/764","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/764.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/764.patch"},"body":"adding .github\/ISSUE_TEMPLATE\/add-dataset.md","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/764\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/763","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/763\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/763\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/763\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/763","id":730593631,"node_id":"MDExOlB1bGxSZXF1ZXN0NTEwODcyMDYx","number":763,"title":"Fixed errors in bertscore related to custom 
baseline","user":{"login":"juanjucm","id":36761132,"node_id":"MDQ6VXNlcjM2NzYxMTMy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36761132?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/juanjucm","html_url":"https:\/\/github.com\/juanjucm","followers_url":"https:\/\/api.github.com\/users\/juanjucm\/followers","following_url":"https:\/\/api.github.com\/users\/juanjucm\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/juanjucm\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/juanjucm\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/juanjucm\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/juanjucm\/orgs","repos_url":"https:\/\/api.github.com\/users\/juanjucm\/repos","events_url":"https:\/\/api.github.com\/users\/juanjucm\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/juanjucm\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1603814915000,"updated_at":1603907965000,"closed_at":1603907965000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/763","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/763","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/763.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/763.patch"},"body":"[bertscore version 0.3.6 ](https:\/\/github.com\/Tiiiger\/bert_score) added support for custom baseline files. This update added extra argument `baseline_path` to BERTScorer class as well as some extra boolean parameters `use_custom_baseline` in functions like `get_hash(model, num_layers, idf, rescale_with_baseline, use_custom_baseline)`.\r\n\r\nThis PR fix those matching errors in bertscore metric implementation.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/763\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/762","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/762\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/762\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/762\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/762","id":730586972,"node_id":"MDU6SXNzdWU3MzA1ODY5NzI=","number":762,"title":"[GEM] Add Czech Restaurant data-to-text generation 
dataset","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1603814447000,"updated_at":1607002664000,"closed_at":1607002664000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"- Paper: https:\/\/www.aclweb.org\/anthology\/W19-8670.pdf\r\n- Data: https:\/\/github.com\/UFAL-DSG\/cs_restaurant_dataset\r\n- The dataset will likely be part of the GEM benchmark","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/762\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/761","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/761\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/761\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/761\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/761","id":729898867,"node_id":"MDU6SXNzdWU3Mjk4OTg4Njc=","number":761,"title":"Downloaded datasets are not usable offline","user":{"login":"ghazi-f","id":25091538,"node_id":"MDQ6VXNlcjI1MDkxNTM4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25091538?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ghazi-f","html_url":"https:\/\/github.com\/ghazi-f","followers_url":"https:\/\/api.github.com\/users\/ghazi-f\/followers","following_url":"https:\/\/api.github.com\/users\/ghazi-f\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ghazi-f\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ghazi-f\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ghazi-f\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ghazi-f\/orgs","repos_url":"https:\/\/api.github.com\/users\/ghazi-f\/repos","events_url":"https:\/\/api.github.com\/users\/ghazi-f\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ghazi-f\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Yes currently you need 
an internet connection because the lib tries to check for the etag of the dataset script online to see if you don't have it locally already.\r\n\r\nIf we add a way to store the etag\/hash locally after the first download, it would allow users to first download the dataset with an internet connection, and still have it working without an internet connection.\r\n\r\nI'll let you know when we add this feature."],"created_at":1603745686000,"updated_at":1603807469000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I've been trying to use the IMDB dataset offline, but after downloading it and turning off the internet it still raises an error from the ```requests``` library trying to reach for the online dataset.\r\nIs this the intended behavior ?\r\n(Sorry, I wrote the the first version of this issue while still on nlp 0.3.0).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/761\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/760","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/760\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/760\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/760\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/760","id":729637917,"node_id":"MDU6SXNzdWU3Mjk2Mzc5MTc=","number":760,"title":"Add meta-data to the HANS dataset","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892877,"node_id":"MDU6TGFiZWwxOTM1ODkyODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/good%20first%20issue","name":"good first issue","color":"7057ff","default":true,"description":"Good for newcomers"},{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the 
library"}],"state":"closed","locked":false,"assignee":{"login":"TevenLeScao","id":26709476,"node_id":"MDQ6VXNlcjI2NzA5NDc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26709476?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/TevenLeScao","html_url":"https:\/\/github.com\/TevenLeScao","followers_url":"https:\/\/api.github.com\/users\/TevenLeScao\/followers","following_url":"https:\/\/api.github.com\/users\/TevenLeScao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/TevenLeScao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/TevenLeScao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/TevenLeScao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/TevenLeScao\/orgs","repos_url":"https:\/\/api.github.com\/users\/TevenLeScao\/repos","events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/received_events","type":"User","site_admin":false},"assignees":[{"login":"TevenLeScao","id":26709476,"node_id":"MDQ6VXNlcjI2NzA5NDc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26709476?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/TevenLeScao","html_url":"https:\/\/github.com\/TevenLeScao","followers_url":"https:\/\/api.github.com\/users\/TevenLeScao\/followers","following_url":"https:\/\/api.github.com\/users\/TevenLeScao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/TevenLeScao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/TevenLeScao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/TevenLeScao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/TevenLeScao\/orgs","repos_url":"https:\/\/api.github.com\/users\/TevenLeScao\/repos","events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1603724213000,"updated_at":1607002714000,"closed_at":1607002714000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"The current version of the [HANS dataset](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/hans\/hans.py) is missing the additional information provided for each example, including the sentence parses, heuristic and subcase.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/760\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/759","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/759\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/759\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/759\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/759","id":729046916,"node_id":"MDU6SXNzdWU3MjkwNDY5MTY=","number":759,"title":"(Load dataset failure) ConnectionError: Couldn\u2019t reach 
https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.2\/datasets\/cnn_dailymail\/cnn_dailymail.py","user":{"login":"AI678","id":63541083,"node_id":"MDQ6VXNlcjYzNTQxMDgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/63541083?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/AI678","html_url":"https:\/\/github.com\/AI678","followers_url":"https:\/\/api.github.com\/users\/AI678\/followers","following_url":"https:\/\/api.github.com\/users\/AI678\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/AI678\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/AI678\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/AI678\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/AI678\/orgs","repos_url":"https:\/\/api.github.com\/users\/AI678\/repos","events_url":"https:\/\/api.github.com\/users\/AI678\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/AI678\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Are you running the script on a machine with an internet connection ?","Yes , I can browse the url through Google Chrome.","Does this HEAD request return 200 on your machine ?\r\n```python\r\nimport requests \r\nrequests.head(\"https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.2\/datasets\/cnn_dailymail\/cnn_dailymail.py\")\r\n```\r\n\r\nIf it returns 200, could you try again to load the dataset ?","Thank you very much for your response.\r\nWhen I run \r\n``` \r\nimport requests \r\nrequests.head(\"https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.2\/datasets\/cnn_dailymail\/cnn_dailymail.py\")\r\n```\r\nIt returns 200.\r\n\r\nAnd I try again to load the dataset. I got the following errors again. 
\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\load.py\", line 608, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 475, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 531, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"C:\\Users\\666666\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\cnn_dailymail\\0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602\\cnn_dailymail.py\", line 253, in _split_generators\r\n dl_paths = dl_manager.download_and_extract(_DL_URLS)\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\download_manager.py\", line 254, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\download_manager.py\", line 175, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\py_utils.py\", line 224, in map_nested\r\n mapped = [\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\py_utils.py\", line 225, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\py_utils.py\", line 163, in _single_map_nested\r\n return function(data_struct)\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\file_utils.py\", line 300, in cached_path\r\n output_path = get_from_cache(\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\file_utils.py\", line 475, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https:\/\/drive.google.com\/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ\r\n\r\nConnection error happened but the url was different.\r\n\r\nI add the following code.\r\n```\r\nrequests.head(\"https:\/\/drive.google.com\/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ\")\r\n```\r\nThis didn't return 200\r\nIt returned like this:\r\n\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\connection.py\", line 159, in _new_conn\r\n conn = connection.create_connection(\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\util\\connection.py\", line 84, in create_connection\r\n raise err\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\util\\connection.py\", line 74, in create_connection\r\n sock.connect(sa)\r\nTimeoutError: [WinError 10060] \r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File 
\"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\connectionpool.py\", line 670, in urlopen\r\n httplib_response = self._make_request(\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\connectionpool.py\", line 381, in _make_request\r\n self._validate_conn(conn)\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\connectionpool.py\", line 978, in _validate_conn\r\n conn.connect()\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\connection.py\", line 309, in connect\r\n conn = self._new_conn()\r\n File \"C:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\urllib3\\connection.py\", line 171, in _new_conn\r\n raise NewConnectionError(\r\nurllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x000001F6060618E0>: Failed to establish a new connection: [WinError 10060] ","Is google drive blocked on your network ?\r\nFor me \r\n```python\r\nrequests.head(\"https:\/\/drive.google.com\/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ\")\r\n```\r\nreturns 200","I can browse the google drive through google chrome. It's weird. I can download the dataset through google drive manually.","Could you try to update `requests` maybe ?\r\nIt works with 2.23.0 on my side","My ```requests``` is 2.24.0 . It still can't return 200.","Is it possible I download the dataset manually from google drive and use it for further test ? How can I do this ? I want to reproduce the model in this link https:\/\/huggingface.co\/patrickvonplaten\/bert2bert-cnn_dailymail-fp16. But I can't download the dataset through load_dataset method . I have tried many times and the connection error always happens .\r\n","The head request should definitely work, not sure what's going on on your side.\r\nIf you find a way to make it work, please post it here since other users might encounter the same issue.\r\n\r\nIf you don't manage to fix it you can use `load_dataset` on google colab and then save it using `dataset.save_to_disk(\"path\/to\/dataset\")`.\r\nThen you can download the directory on your machine and do\r\n```python\r\nfrom datasets import load_from_disk\r\ndataset = load_from_disk(\"path\/to\/local\/dataset\")\r\n```","Hi\r\nI want to know if this problem has been solved because I encountered a similar issue. Thanks.\r\n`train_data = datasets.load_dataset(\"xsum\", `split=\"train\")`\r\n`ConnectionError:` Couldn't reach https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.3\/datasets\/xsum\/xsum.py`","Hi @smile0925 ! Do you have an internet connection ? Are you using some kind of proxy that may block the access to this file ?\r\n\r\nOtherwise you can try to update `datasets` since we introduced retries for http requests in the 1.2.0 version\r\n```\r\npip install --upgrade datasets\r\n```\r\nLet me know if that helps.","Hi @lhoestq \r\nOh, may be you are right. I find that my server uses some kind of proxy that block the access to this file.\r\n![image](https:\/\/user-images.githubusercontent.com\/46243662\/106456211-2ca24180-64c8-11eb-831e-47e9b40e7da4.png)\r\n\r\n","> Hi @lhoestq\r\n> Oh, may be you are right. 
I find that my server uses some kind of proxy that block the access to this file.\r\n> ![image](https:\/\/user-images.githubusercontent.com\/46243662\/106456211-2ca24180-64c8-11eb-831e-47e9b40e7da4.png)\r\n\r\nI have the same problem, have you solved it? Many thanks","Hi @ZhengxiangShi \r\nYou can first try whether your network can access these files. I need to use VPN to access these files, so I download the files that cannot be accessed to the local in advance, and then use them in the code. Like this,\r\n`train_data = datasets.load_dataset(\"xsum.py\", split=\"train\")`"],"created_at":1603640097000,"updated_at":1628100609000,"closed_at":1628100609000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hey, I want to load the cnn-dailymail dataset for fine-tune.\r\nI write the code like this\r\nfrom datasets import load_dataset\r\n\r\ntest_dataset = load_dataset(\u201ccnn_dailymail\u201d, \u201c3.0.0\u201d, split=\u201ctrain\u201d)\r\n\r\nAnd I got the following errors.\r\n\r\nTraceback (most recent call last):\r\nFile \u201ctest.py\u201d, line 7, in\r\ntest_dataset = load_dataset(\u201ccnn_dailymail\u201d, \u201c3.0.0\u201d, split=\u201ctest\u201d)\r\nFile \u201cC:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\load.py\u201d, line 589, in load_dataset\r\nmodule_path, hash = prepare_module(\r\nFile \u201cC:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\load.py\u201d, line 268, in prepare_module\r\nlocal_path = cached_path(file_path, download_config=download_config)\r\nFile \u201cC:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\file_utils.py\u201d, line 300, in cached_path\r\noutput_path = get_from_cache(\r\nFile \u201cC:\\Users\\666666\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\utils\\file_utils.py\u201d, line 475, in get_from_cache\r\nraise ConnectionError(\u201cCouldn\u2019t reach {}\u201d.format(url))\r\nConnectionError: Couldn\u2019t reach https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.1.2\/datasets\/cnn_dailymail\/cnn_dailymail.py\r\n\r\nHow can I fix this ?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/759\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/758","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/758\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/758\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/758\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/758","id":728638559,"node_id":"MDU6SXNzdWU3Mjg2Mzg1NTk=","number":758,"title":"Process 0 very slow when using num_procs with map to 
tokenizer","user":{"login":"ksjae","id":17930170,"node_id":"MDQ6VXNlcjE3OTMwMTcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17930170?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ksjae","html_url":"https:\/\/github.com\/ksjae","followers_url":"https:\/\/api.github.com\/users\/ksjae\/followers","following_url":"https:\/\/api.github.com\/users\/ksjae\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ksjae\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ksjae\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ksjae\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ksjae\/orgs","repos_url":"https:\/\/api.github.com\/users\/ksjae\/repos","events_url":"https:\/\/api.github.com\/users\/ksjae\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ksjae\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Thanks for reporting.\r\nIs the distribution of text length of your data evenly distributed across your dataset ? I mean, could it be because the examples in the first part of your dataset are slower to process ?\r\nAlso could how many CPUs can you use for multiprocessing ?\r\n```python\r\nimport multiprocessing\r\nprint(multiprocessing.cpu_count())\r\n```\r\nWhich tokenizer are you using ?","Using pre trained HF tokenizer. The result is the same with tokenizer multiprocessing off and on.\r\nI have (absolutely) no idea about the distribution, but since this issue occurs on all of my datasets(regardless of files), I don't think distribution is the problems.\r\n\r\nI can use up to 16 cores.","Ok weird, I don't manage to reproduce this issue on my side.\r\nDoes it happen even with `num_proc=2` for example ?\r\nAlso could you provide more details about your OS and the versions of tokenizers\/datasets\/multiprocess that you're using ?","Yes, I can confirm it also happens with ```num_proc=2```.\r\n```\r\ntokenizers 0.9.2\r\ndatasets 1.1.2\r\nmultiprocess 0.70.10\r\n```\r\n```\r\nLinux nipa2020-0629 4.4.0-178-generic #208-Ubuntu SMP Sun Apr 5 23:45:10 UTC 2020 x86_64 x86_64 x86_64 GNU\/Linux\r\n```","I can't reproduce on my side unfortunately with the same versions.\r\n\r\nDo you have issues when doing multiprocessing with python ?\r\n```python\r\nfrom tqdm.auto import tqdm\r\nfrom multiprocess import Pool, RLock\r\n\r\ndef process_data(shard):\r\n # implement\r\n\r\nnum_proc = 8\r\nshards = [] # implement, this must be a list of size num_proc\r\n\r\nwith Pool(num_proc, initargs=(RLock(),), initializer=tqdm.set_lock) as pool:\r\n results = [pool.apply_async(process_data, shard=shard) for shard in shards]\r\n transformed_shards = [r.get() for r in results]\r\n```","Nah, I'll just wait a few hours. 
Thank you for helping, though."],"created_at":1603507220000,"updated_at":1603857586000,"closed_at":1603857585000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"<img width=\"721\" alt=\"image\" src=\"https:\/\/user-images.githubusercontent.com\/17930170\/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png\">\r\nThe code I am using is\r\n```\r\n\r\n dataset = load_dataset(\"text\", data_files=[file_path], split='train')\r\n dataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True,\r\n truncation=True, max_length=args.block_size), num_proc=8)\r\n dataset.set_format(type='torch', columns=['input_ids'])\r\n dataset.save_to_disk(file_path+'.arrow')\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/758\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/757","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/757\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/757\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/757\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/757","id":728241494,"node_id":"MDU6SXNzdWU3MjgyNDE0OTQ=","number":757,"title":"CUDA out of memory","user":{"login":"li1117heex","id":47059217,"node_id":"MDQ6VXNlcjQ3MDU5MjE3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47059217?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/li1117heex","html_url":"https:\/\/github.com\/li1117heex","followers_url":"https:\/\/api.github.com\/users\/li1117heex\/followers","following_url":"https:\/\/api.github.com\/users\/li1117heex\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/li1117heex\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/li1117heex\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/li1117heex\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/li1117heex\/orgs","repos_url":"https:\/\/api.github.com\/users\/li1117heex\/repos","events_url":"https:\/\/api.github.com\/users\/li1117heex\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/li1117heex\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Could you provide more details ? 
What's the code you ran ?","```python\r\ntokenizer = FunnelTokenizer.from_pretrained('funnel-transformer\/small')\r\n\r\ndef tokenize(batch):\r\n return tokenizer(batch['text'], padding='max_length', truncation=True,max_length=512)\r\n\r\ndataset = load_dataset(\"bookcorpus\",split='train[:1000]').shuffle()\r\ndataset = dataset.map(tokenize, batched=True, batch_size=512)\r\n\r\n# dataset = LineByLineTextDataset(\r\n# tokenizer=tokenizer,\r\n# file_path=\".\/wiki1000.txt\",\r\n# block_size=128\r\n# )\r\n\r\ndata_collator = DataCollatorForLanguageModeling(\r\n tokenizer=tokenizer, mlm=True, mlm_probability=0.15\r\n)\r\n\r\nconfig=FunnelConfig(\r\n return_dict=True\r\n)\r\n\r\nmodel= FunnelForMaskedLM(config=config)\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\".\/checkpoints\",\r\n overwrite_output_dir=True,\r\n do_train=True,\r\n num_train_epochs=1,\r\n per_device_train_batch_size=16,\r\n per_device_eval_batch_size=16,\r\n save_steps=10000,\r\n logging_dir='.\/ptlogs'\r\n)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=dataset,\r\n)\r\ntrainer.train()\r\n```","`RuntimeError: CUDA out of memory. Tried to allocate 954.00 MiB (GPU 0; 15.90 GiB total capacity; 14.35 GiB already allocated; 753.75 MiB free; 14.39 GiB reserved in total by PyTorch)\r\nException raised from malloc at \/pytorch\/c10\/cuda\/CUDACachingAllocator.cpp:272 (most recent call first):`\r\n\r\npart of error output","from funnel model to bert model : error still happened\r\n\r\nfrom your dataset to LineByLineTextDataset : error disapeared","notice i just loaded 1000 rows of data","the error happens when executing loss.backward()","Since you're using a data collator you don't need to tokenizer the dataset using `map`. Could you try not to use `map` and only the data collator instead ? 
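Editor's note: one way to act on the padding part of that suggestion, as a rough sketch based on the script quoted above. It tokenizes with truncation only and lets the collator pad each batch to its own longest sequence; per-batch padding is the behaviour of `DataCollatorForLanguageModeling` in reasonably recent `transformers` releases, so treat that as an assumption for older versions.

```python
from datasets import load_dataset
from transformers import DataCollatorForLanguageModeling, FunnelTokenizer

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small")

def tokenize(batch):
    # Truncate only; no fixed-length padding, so cached examples stay short.
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = load_dataset("bookcorpus", split="train[:1000]").shuffle()
dataset = dataset.map(tokenize, batched=True, batch_size=512)

# The collator pads each batch to its longest member instead of a fixed 512,
# keeping the tensors handed to the GPU as small as possible.
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)
```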
The data collator is supposed to pad to the longest sequence in each batch afaik, instead of padding to 512.\r\n\r\nAlso cc @sgugger ","Closing this one.\r\nFeel free to re-open if you have other questions about this issue"],"created_at":1603461420000,"updated_at":1608732389000,"closed_at":1608732389000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"In your dataset ,cuda run out of memory as long as the trainer begins:\r\nhowever, without changing any other element\/parameter,just switch dataset to `LineByLineTextDataset`,everything becames OK.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/757\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/756","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/756\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/756\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/756\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/756","id":728211373,"node_id":"MDExOlB1bGxSZXF1ZXN0NTA4OTYwNTc3","number":756,"title":"Start community-provided dataset docs ","user":{"login":"sshleifer","id":6045025,"node_id":"MDQ6VXNlcjYwNDUwMjU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6045025?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sshleifer","html_url":"https:\/\/github.com\/sshleifer","followers_url":"https:\/\/api.github.com\/users\/sshleifer\/followers","following_url":"https:\/\/api.github.com\/users\/sshleifer\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sshleifer\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sshleifer\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sshleifer\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sshleifer\/orgs","repos_url":"https:\/\/api.github.com\/users\/sshleifer\/repos","events_url":"https:\/\/api.github.com\/users\/sshleifer\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sshleifer\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Oh, really cool @sshleifer!"],"created_at":1603459061000,"updated_at":1603716920000,"closed_at":1603716919000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/756","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/756","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/756.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/756.patch"},"body":"Continuation of #736 with clean fork.\r\n\r\n#### Old description\r\nThis is what I did to get the pseudo-labels updated. Not sure if it generalizes, but I figured I would write it down. 
It was pretty easy because all I had to do was make properly formatted directories and change URLs.\r\n\r\nIn slack @thomwolf called it a user-namespace dataset, but the docs call it community dataset.\r\nI think the first naming is clearer, but I didn't address that here.\r\n\r\nI didn't add metadata, will try that.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/756\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/755","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/755\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/755\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/755\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/755","id":728203821,"node_id":"MDExOlB1bGxSZXF1ZXN0NTA4OTU0NDI2","number":755,"title":"Start community-provided dataset docs V2","user":{"login":"sshleifer","id":6045025,"node_id":"MDQ6VXNlcjYwNDUwMjU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6045025?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sshleifer","html_url":"https:\/\/github.com\/sshleifer","followers_url":"https:\/\/api.github.com\/users\/sshleifer\/followers","following_url":"https:\/\/api.github.com\/users\/sshleifer\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sshleifer\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sshleifer\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sshleifer\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sshleifer\/orgs","repos_url":"https:\/\/api.github.com\/users\/sshleifer\/repos","events_url":"https:\/\/api.github.com\/users\/sshleifer\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sshleifer\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1603458450000,"updated_at":1603458937000,"closed_at":1603458937000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/755","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/755","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/755.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/755.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/755\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/754","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/754\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/754\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/754\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/754","id":727863105,"node_id":"MDExOlB1bGxSZXF1ZXN0NTA4NjczNzM2","number":754,"title":"Use full released xsum 
dataset","user":{"login":"jbragg","id":2238344,"node_id":"MDQ6VXNlcjIyMzgzNDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2238344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jbragg","html_url":"https:\/\/github.com\/jbragg","followers_url":"https:\/\/api.github.com\/users\/jbragg\/followers","following_url":"https:\/\/api.github.com\/users\/jbragg\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jbragg\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jbragg\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jbragg\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jbragg\/orgs","repos_url":"https:\/\/api.github.com\/users\/jbragg\/repos","events_url":"https:\/\/api.github.com\/users\/jbragg\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jbragg\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq I took a shot at addressing your comments but the build scripts seem to be complaining about not being able to open dummy files. How do I resolve those errors without copying the full dataset into the dummy dir?","Could you check that the names of the dummy data files are right ?\r\nYou can use \r\n```\r\ndatasets-cli dummy_data .\/datasets\/xum\r\n```\r\nto print the expected file names","Ok @lhoestq looks like I got the tests to pass :)"],"created_at":1603423789000,"updated_at":1609470716000,"closed_at":1603717018000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/754","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/754","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/754.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/754.patch"},"body":"#672 Fix xsum to expand coverage and include IDs\r\nCode based on parser from older version of `datasets\/xsum\/xsum.py`\r\n@lhoestq ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/754\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/753","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/753\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/753\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/753\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/753","id":727434935,"node_id":"MDExOlB1bGxSZXF1ZXN0NTA4MzI4ODM0","number":753,"title":"Fix doc links to 
viewer","user":{"login":"Pierrci","id":5020707,"node_id":"MDQ6VXNlcjUwMjA3MDc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5020707?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Pierrci","html_url":"https:\/\/github.com\/Pierrci","followers_url":"https:\/\/api.github.com\/users\/Pierrci\/followers","following_url":"https:\/\/api.github.com\/users\/Pierrci\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Pierrci\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Pierrci\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Pierrci\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Pierrci\/orgs","repos_url":"https:\/\/api.github.com\/users\/Pierrci\/repos","events_url":"https:\/\/api.github.com\/users\/Pierrci\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Pierrci\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1603376416000,"updated_at":1603442531000,"closed_at":1603442531000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/753","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/753","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/753.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/753.patch"},"body":"It seems #733 forgot some links in the doc :)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/753\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/752","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/752\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/752\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/752\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/752","id":726917801,"node_id":"MDU6SXNzdWU3MjY5MTc4MDE=","number":752,"title":"Clicking on a metric in the search page points to datasets page giving \"Missing dataset\" warning","user":{"login":"ogabrielluiz","id":24829397,"node_id":"MDQ6VXNlcjI0ODI5Mzk3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24829397?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ogabrielluiz","html_url":"https:\/\/github.com\/ogabrielluiz","followers_url":"https:\/\/api.github.com\/users\/ogabrielluiz\/followers","following_url":"https:\/\/api.github.com\/users\/ogabrielluiz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ogabrielluiz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ogabrielluiz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ogabrielluiz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ogabrielluiz\/orgs","repos_url":"https:\/\/api.github.com\/users\/ogabrielluiz\/repos","events_url":"https:\/\/api.github.com\/users\/ogabrielluiz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ogabrielluiz\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for the 
report, can reproduce. Will fix","Fixed now @ogabrielluiz "],"created_at":1603320983000,"updated_at":1603383582000,"closed_at":1603383582000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi! Sorry if this isn't the right place to talk about the website, I just didn't exactly where to write this.\r\n\r\nSearching a metric in https:\/\/huggingface.co\/metrics gives the right results but clicking on a metric (E.g ROUGE) points to https:\/\/huggingface.co\/datasets\/rouge. Clicking on a metric without searching points to the right page.\r\n\r\nThanks for all the great work!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/752\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/751","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/751\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/751\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/751\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/751","id":726820191,"node_id":"MDU6SXNzdWU3MjY4MjAxOTE=","number":751,"title":"Error loading ms_marco v2.1 using load_dataset()","user":{"login":"JainSahit","id":30478979,"node_id":"MDQ6VXNlcjMwNDc4OTc5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/30478979?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JainSahit","html_url":"https:\/\/github.com\/JainSahit","followers_url":"https:\/\/api.github.com\/users\/JainSahit\/followers","following_url":"https:\/\/api.github.com\/users\/JainSahit\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JainSahit\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JainSahit\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JainSahit\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JainSahit\/orgs","repos_url":"https:\/\/api.github.com\/users\/JainSahit\/repos","events_url":"https:\/\/api.github.com\/users\/JainSahit\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JainSahit\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["There was a similar issue in #294 \r\nClearing the cache and download again the dataset did the job. 
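Editor's note: the resolution here was to clear the cached (apparently truncated) download and fetch the files again. Below is a hedged sketch of two ways to do that; the cache path and the string form of `download_mode` are assumptions about the installed `datasets` version.

```python
import shutil
from pathlib import Path

from datasets import load_dataset

# Option 1: ask datasets to ignore any cached files and download from scratch.
dataset = load_dataset("ms_marco", "v2.1", download_mode="force_redownload")

# Option 2: delete the cached copy by hand, then load normally.
cache_dir = Path.home() / ".cache" / "huggingface" / "datasets"
shutil.rmtree(cache_dir / "ms_marco", ignore_errors=True)
dataset = load_dataset("ms_marco", "v2.1")
```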
Could you try to clear your cache and download the dataset again ?","I was able to load the dataset successfully, I'm pretty sure it's just a cache issue that you have.\r\nLet me know if clearing your cache fixes the problem","Yes, it indeed was a cache issue!\r\nThanks for reaching out!!"],"created_at":1603310083000,"updated_at":1604539917000,"closed_at":1604539917000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Code:\r\n`dataset = load_dataset('ms_marco', 'v2.1')`\r\n\r\nError:\r\n```\r\n`---------------------------------------------------------------------------\r\nJSONDecodeError Traceback (most recent call last)\r\n<ipython-input-16-34378c057212> in <module>()\r\n 9 \r\n 10 # Downloading and loading a dataset\r\n---> 11 dataset = load_dataset('ms_marco', 'v2.1')\r\n\r\n10 frames\r\n\/usr\/lib\/python3.6\/json\/decoder.py in raw_decode(self, s, idx)\r\n 353 \"\"\"\r\n 354 try:\r\n--> 355 obj, end = self.scan_once(s, idx)\r\n 356 except StopIteration as err:\r\n 357 raise JSONDecodeError(\"Expecting value\", s, err.value) from None\r\n\r\nJSONDecodeError: Unterminated string starting at: line 1 column 388988661 (char 388988660)\r\n`\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/751\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/750","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/750\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/750\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/750\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/750","id":726589446,"node_id":"MDU6SXNzdWU3MjY1ODk0NDY=","number":750,"title":"load_dataset doesn't include `features` in its hash","user":{"login":"sgugger","id":35901082,"node_id":"MDQ6VXNlcjM1OTAxMDgy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35901082?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sgugger","html_url":"https:\/\/github.com\/sgugger","followers_url":"https:\/\/api.github.com\/users\/sgugger\/followers","following_url":"https:\/\/api.github.com\/users\/sgugger\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sgugger\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sgugger\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sgugger\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sgugger\/orgs","repos_url":"https:\/\/api.github.com\/users\/sgugger\/repos","events_url":"https:\/\/api.github.com\/users\/sgugger\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sgugger\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1603293401000,"updated_at":1603964161000,"closed_at":1603964161000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"It looks like the function `load_dataset` does not include what's passed in the `features` argument when creating a hash for a given dataset. 
As a result, if a user includes new features from an already downloaded dataset, those are ignored.\r\n\r\nExample: some models on the hub have a different ordering for the labels than what `datasets` uses for MNLI so I'd like to do something along the lines of:\r\n```\r\ndataset = load_dataset(\"glue\", \"mnli\")\r\nfeatures = dataset[\"train\"].features\r\nfeatures[\"label\"] = ClassLabel(names = ['entailment', 'contradiction', 'neutral']) # new label order\r\ndataset = load_dataset(\"glue\", \"mnli\", features=features)\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/750\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/749","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/749\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/749\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/749\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/749","id":726366062,"node_id":"MDU6SXNzdWU3MjYzNjYwNjI=","number":749,"title":"[XGLUE] Adding new dataset","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new 
dataset"}],"state":"closed","locked":false,"assignee":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"assignees":[{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Amazing! ","Small poll @thomwolf @yjernite @lhoestq @JetRunner @qiweizhen .\r\n\r\nAs stated in the XGLUE paper: https:\/\/arxiv.org\/pdf\/2004.01401.pdf , for each of the 11 down-stream tasks training data is only available in English, whereas development and test data is available in multiple different language *cf.* here: \r\n\r\n![Screenshot from 2020-11-04 15-02-17](https:\/\/user-images.githubusercontent.com\/23423619\/98120893-d7499a80-1eae-11eb-9d0b-57dfe5d4ee68.png)\r\n\r\nSo, I'd suggest to have exactly 11 \"language-independent\" configs: \"ner\", \"pos\", ... and give the sample in each dataset in the config a \"language\" label being one of \"ar\", \"bg\", .... => To me this makes more sense than making languaga specific config, *e.g.* \"ner-de\", ...especially because training data is only available in English. Do you guys agree? ","In this case we should have named splits, so config `ner` has splits `train`, `validation`, `test-en`, `test-ar`, `test-bg`, etc...\r\n\r\nThis is more in the spirit of the task afaiu, and will avoid making users do the filtering step themselves when testing different models or different configurations of the same model.","I see your point! \r\n\r\nI think this would be quite feasible to do and makes sense to me as well! 
In the paper results are reported per language, so it seems more natural to do it this way. \r\n\r\nGood for me @yjernite ! What do the others think? @lhoestq \r\n","I agree with Yacine on this!","Okey actually not that easy to add things like `test-de` to `datasets` => this would be the first dataset to have this.\r\nSee: https:\/\/github.com\/huggingface\/datasets\/pull\/802","IMO we should have one config per language. That's what we're doing for xnli, xtreme etc.\r\nHaving split names that depend on the language seems wrong. We should try to avoid split names that are not train\/val\/test.\r\nSorry for late response on this one","@lhoestq agreed on having one config per language, but we also need to be able to have different split names and people are going to want to use hyphens, so we should at the very least warn them why it's failing :) E.g. for ANLI with different stages of data (currently using underscores) or https:\/\/www.tau-nlp.org\/commonsenseqa with their train-sanity or dev-sanity splits","Yes sure ! Could you open a separate issue for that ?","Really cool dataset \ud83d\udc4d btw. does Transformers support all 11 tasks \ud83e\udd14 would be awesome to have a xglue script (like the \"normal\" glue one)","Just to make sure this is what we want here. If we add one config per language, \r\n\r\nthis means that this dataset ends up with well over 100 different configs most of which will have the same `train` split. The train split is always in English. Also, I'm not sure whether it's better for the user to be honest. \r\n\r\nI think it could be quite confusing for the user to have\r\n\r\n```python\r\ntrain_dataset = load_dataset(\"xglue\", \"ner-de\", split=\"train\")\r\n```\r\n\r\nin English even though it's `ner-de`.\r\n\r\nTo be honest, I'd prefer:\r\n\r\n```python\r\ntrain_dataset = load_dataset(\"xglue\", \"ner\", split=\"train\")\r\ntest_dataset_de = load_dataset(\"xglue\", \"ner\", split=\"test-de\")\r\ntest_dataset_fr = load_dataset(\"xglue\", \"ner\", split=\"test-fr\")\r\n```\r\n\r\nhere","Oh yes right I didn't notice the train set was always in english sorry.\r\nMoreover it seems that the way this dataset is used is to pick a pretrained multilingual model, fine-tune it on the english train set and then evaluate on each test set (one per language).\r\nSo to better fit the usual usage of this dataset, I agree that it's better to have one test split per language. \r\n\r\nSomething like your latest example patrick is fine imo :\r\n```python\r\ntrain_dataset = load_dataset(\"xglue\", \"ner\", split=\"train\")\r\ntest_dataset_de = load_dataset(\"xglue\", \"ner\", split=\"test.de\")\r\n```\r\n\r\nI just replace test-de with test.de since `-` is not allowed for split names (it has to follow the `\\w+` regex), and usually we specify the language after a point. 
","Closing since XGLUE has been added in #802 , thanks patrick :) "],"created_at":1603277496000,"updated_at":1609927376000,"closed_at":1609927375000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"XGLUE is a multilingual GLUE like dataset propesed in this [paper](https:\/\/arxiv.org\/pdf\/2004.01401.pdf).\r\n\r\nI'm planning on adding the dataset to the library myself in a couple of weeks.\r\nAlso tagging @JetRunner @qiweizhen in case I need some guidance ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/749\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/748","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/748\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/748\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/748\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/748","id":726196589,"node_id":"MDExOlB1bGxSZXF1ZXN0NTA3MzAyNjE3","number":748,"title":"New version of CompGuessWhat?! with refined annotations","user":{"login":"aleSuglia","id":1479733,"node_id":"MDQ6VXNlcjE0Nzk3MzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1479733?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/aleSuglia","html_url":"https:\/\/github.com\/aleSuglia","followers_url":"https:\/\/api.github.com\/users\/aleSuglia\/followers","following_url":"https:\/\/api.github.com\/users\/aleSuglia\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/aleSuglia\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/aleSuglia\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/aleSuglia\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/aleSuglia\/orgs","repos_url":"https:\/\/api.github.com\/users\/aleSuglia\/repos","events_url":"https:\/\/api.github.com\/users\/aleSuglia\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/aleSuglia\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["No worries. Always happy to help and thanks for your support in fixing the issue :)"],"created_at":1603263341000,"updated_at":1603270362000,"closed_at":1603269979000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/748","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/748","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/748.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/748.patch"},"body":"This pull request introduces a few fixes to the annotations for VisualGenome in the CompGuessWhat?! 
original split.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/748\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/747","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/747\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/747\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/747\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/747","id":725884704,"node_id":"MDExOlB1bGxSZXF1ZXN0NTA3MDQ3MDE4","number":747,"title":"Add Quail question answering dataset","user":{"login":"sai-prasanna","id":3595526,"node_id":"MDQ6VXNlcjM1OTU1MjY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3595526?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sai-prasanna","html_url":"https:\/\/github.com\/sai-prasanna","followers_url":"https:\/\/api.github.com\/users\/sai-prasanna\/followers","following_url":"https:\/\/api.github.com\/users\/sai-prasanna\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sai-prasanna\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sai-prasanna\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sai-prasanna\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sai-prasanna\/orgs","repos_url":"https:\/\/api.github.com\/users\/sai-prasanna\/repos","events_url":"https:\/\/api.github.com\/users\/sai-prasanna\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sai-prasanna\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1603222394000,"updated_at":1603269315000,"closed_at":1603269315000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/747","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/747","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/747.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/747.patch"},"body":"QuAIL is a multi-domain RC dataset featuring news, blogs, fiction and user stories. Each domain is represented by 200 texts, which gives us a 4-way data split. The texts are 300-350 word excerpts from CC-licensed texts that were hand-picked so as to make sense to human readers without larger context. Domain diversity mitigates the issue of possible overlap between training and test data of large pre-trained models, which the current SOTA systems are based on. 
For instance, BERT is trained on Wikipedia + BookCorpus, and was tested on Wikipedia-based SQuAD (Devlin, Chang, Lee, & Toutanova, 2019).\r\n\r\nhttps:\/\/text-machine-lab.github.io\/blog\/2020\/quail\/ @annargrs","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/747\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/746","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/746\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/746\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/746\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/746","id":725627235,"node_id":"MDExOlB1bGxSZXF1ZXN0NTA2ODMzNDMw","number":746,"title":"dataset(ngt): add ngt dataset initial loading script","user":{"login":"AmitMY","id":5757359,"node_id":"MDQ6VXNlcjU3NTczNTk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5757359?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/AmitMY","html_url":"https:\/\/github.com\/AmitMY","followers_url":"https:\/\/api.github.com\/users\/AmitMY\/followers","following_url":"https:\/\/api.github.com\/users\/AmitMY\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/AmitMY\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/AmitMY\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/AmitMY\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/AmitMY\/orgs","repos_url":"https:\/\/api.github.com\/users\/AmitMY\/repos","events_url":"https:\/\/api.github.com\/users\/AmitMY\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/AmitMY\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1603202698000,"updated_at":1616480378000,"closed_at":1616480378000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/746","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/746","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/746.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/746.patch"},"body":"Currently only making the paths to the annotation ELAN (eaf) file and videos available.\r\nThis is the first accessible way to download this dataset, which is not manual file-by-file.\r\n\r\nOnly downloading the necessary files, the annotation files are very small, 20MB for all of them, but the video files are large, 100GB in total, saved in `mpg` format. 
\r\nI do not intend to actually store these as an uncompressed array of frames, because it will be huge.\r\n\r\nFuture updates may add pose estimation files for all videos, making it easier to work with this data","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/746\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/745","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/745\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/745\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/745\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/745","id":725589352,"node_id":"MDExOlB1bGxSZXF1ZXN0NTA2ODAxMTI0","number":745,"title":"Fix emotion description","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hello, I probably have a silly question but the labels of the emotion dataset are in the form of numbers and not string, so I can not use the function classification_report because it mixes numbers and string (prediction). How can I access the label in the form of a string and not a number? 
\r\nThank you in advance."],"created_at":1603200519000,"updated_at":1619102851000,"closed_at":1603269507000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/745","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/745","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/745.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/745.patch"},"body":"Fixes the description of the emotion dataset to reflect the class names observed in the data, not the ones described in the paper.\r\n\r\nI also took the liberty to make use of `ClassLabel` for the emotion labels.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/745\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/744","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/744\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/744\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/744\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/744","id":724918448,"node_id":"MDU6SXNzdWU3MjQ5MTg0NDg=","number":744,"title":"Dataset Explorer Doesn't Work for squad_es and squad_it","user":{"login":"gaotongxiao","id":22607038,"node_id":"MDQ6VXNlcjIyNjA3MDM4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22607038?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gaotongxiao","html_url":"https:\/\/github.com\/gaotongxiao","followers_url":"https:\/\/api.github.com\/users\/gaotongxiao\/followers","following_url":"https:\/\/api.github.com\/users\/gaotongxiao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gaotongxiao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gaotongxiao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gaotongxiao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gaotongxiao\/orgs","repos_url":"https:\/\/api.github.com\/users\/gaotongxiao\/repos","events_url":"https:\/\/api.github.com\/users\/gaotongxiao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gaotongxiao\/received_events","type":"User","site_admin":false},"labels":[{"id":2107841032,"node_id":"MDU6TGFiZWwyMTA3ODQxMDMy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/nlp-viewer","name":"nlp-viewer","color":"94203D","default":false,"description":""}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Oups wrong click.\r\nThis one is for you @srush"],"created_at":1603136052000,"updated_at":1603730177000,"closed_at":1603730177000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"https:\/\/huggingface.co\/nlp\/viewer\/?dataset=squad_es\r\nhttps:\/\/huggingface.co\/nlp\/viewer\/?dataset=squad_it\r\n\r\nBoth pages show \"OSError: [Errno 28] No space left on device\".","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/744\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/743","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/743\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/743\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/743\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/743","id":724703980,"node_id":"MDU6SXNzdWU3MjQ3MDM5ODA=","number":743,"title":"load_dataset for CSV files not working","user":{"login":"iliemihai","id":2815308,"node_id":"MDQ6VXNlcjI4MTUzMDg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2815308?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/iliemihai","html_url":"https:\/\/github.com\/iliemihai","followers_url":"https:\/\/api.github.com\/users\/iliemihai\/followers","following_url":"https:\/\/api.github.com\/users\/iliemihai\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/iliemihai\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/iliemihai\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/iliemihai\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/iliemihai\/orgs","repos_url":"https:\/\/api.github.com\/users\/iliemihai\/repos","events_url":"https:\/\/api.github.com\/users\/iliemihai\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/iliemihai\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thank you !\r\nCould you provide a csv file that reproduces the error ?\r\nIt doesn't have to be one of your dataset. As long as it reproduces the error\r\nThat would help a lot !","I think another good example is the following:\r\n`\r\nfrom datasets import load_dataset\r\n`\r\n`\r\ndataset = load_dataset(\"csv\", data_files=[\".\/sts-dev.csv\"], delimiter=\"\\t\", column_names=[\"one\", \"two\", \"three\", \"four\", \"score\", \"sentence1\", \"sentence2\"], script_version=\"master\")`\r\n`\r\n\r\nDisplayed error `CSV parse error: Expected 7 columns, got 6` even tough I put 7 columns. First four columns from the csv don't have a name, so I've named them by default. The csv file is the .dev file from STSb benchmark dataset.\r\n\r\n","Hi, seems I also can't read csv file. I was trying with a dummy csv with only three rows.\r\n\r\n```\r\ntext,label\r\nI hate google,negative\r\nI love Microsoft,positive\r\nI don't like you,negative\r\n```\r\nI was using the HuggingFace image in Paperspace Gradient (datasets==1.1.3). The following code doesn't work:\r\n\r\n```\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', script_version=\"master\", data_files=['test_data.csv'], delimiter=\",\")\r\n```\r\nIt outputs the following:\r\n```\r\nUsing custom data configuration default\r\nDownloading and preparing dataset csv\/default-3b6254ff4dd403e5 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/root\/.cache\/huggingface\/datasets\/csv\/default-3b6254ff4dd403e5\/0.0.0\/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2...\r\nDataset csv downloaded and prepared to \/root\/.cache\/huggingface\/datasets\/csv\/default-3b6254ff4dd403e5\/0.0.0\/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2. 
Subsequent calls will reuse this data.\r\n```\r\nBut `len(dataset)` gives `1` and I can't access rows with indexing `dataset[0]` (it gives `KeyError: 0`).\r\n\r\nHowever, loading from pandas dataframe is working.\r\n```\r\nfrom datasets import Dataset\r\nimport pandas as pd\r\ndf = pd.read_csv('test_data.csv')\r\ndataset = Dataset.from_pandas(df)\r\n```\r\n\r\n","This is because load_dataset without `split=` returns a dictionary of split names (train\/validation\/test) to dataset.\r\nYou can do\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', script_version=\"master\", data_files=['test_data.csv'], delimiter=\",\")\r\nprint(dataset[\"train\"][0])\r\n```\r\n\r\nOr if you want to directly get the train split:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', script_version=\"master\", data_files=['test_data.csv'], delimiter=\",\", split=\"train\")\r\nprint(dataset[0])\r\n```\r\n","Good point\r\n\r\nDesign question for us, though: should `load_dataset` when no split is specified and only one split is present in the dataset (common use case with CSV\/text\/JSON datasets) return a `Dataset` instead of a `DatsetDict`? I feel like it's often what the user is expecting. I break a bit the paradigm of a unique return type but since this library is designed for widespread DS people more than CS people usage I would tend to think that UX should take precedence over CS reasons. What do you think?","In this case the user expects to get only one dataset object instead of the dictionary of datasets since only one csv file was specified without any split specifications.\r\nI'm ok with returning the dataset object if no split specifications are given for text\/json\/csv\/pandas.\r\n\r\nFor the other datasets ton the other hand the user doesn't know in advance the splits so I would keep the dictionary by default. What do you think ?","Thanks for your quick response! I'm fine with specifying the split as @lhoestq suggested. My only concern is when I'm loading from python dict or pandas, the library returns a dataset instead of a dictionary of datasets when no split is specified. I know that they use a different function `Dataset.from_dict` or `Dataset.from_pandas` but the text\/csv files use `load_dataset()`. However, to the user, they do the same task and we probably expect them to have the same behavior.","```\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', data_files='.\/amazon_data\/Video_Games_5.csv', delimiter=\",\", split=['train', 'test'])\r\n```\r\nI was running the above line, but got this error.\r\n\r\n```ValueError: Unknown split \"test\". Should be one of ['train'].```\r\n\r\nThe data is amazon product data. I load the Video_Games_5.json.gz data into pandas and save it as csv file. and then load the csv file using the above code. I thought, ```split=['train', 'test']``` would split the data into train and test. did I misunderstood?\r\n\r\nThank you!\r\n\r\n","Hi ! 
the `split` argument in `load_dataset` is used to select the splits you want among the available splits.\r\nHowever when loading a csv with a single file as you did, only a `train` split is available by default.\r\n\r\nIndeed since `data_files='.\/amazon_data\/Video_Games_5.csv'` is equivalent to `data_files={\"train\": '.\/amazon_data\/Video_Games_5.csv'}`, you can get a dataset with \r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', data_files='.\/amazon_data\/Video_Games_5.csv', delimiter=\",\", split=\"train\")\r\n```\r\n\r\nAnd then to get both a train and test split you can do\r\n```python\r\ndataset = dataset.train_test_split()\r\nprint(dataset.keys())\r\n# ['train', 'test']\r\n```\r\n\r\n\r\nAlso note that a csv dataset may have several available splits if it is defined this way:\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', data_files={\r\n \"train\": '.\/amazon_data\/Video_Games_5_train.csv',\r\n \"test\": '.\/amazon_data\/Video_Games_5_test.csv'\r\n})\r\nprint(dataset.keys())\r\n# ['train', 'test']\r\n```\r\n","> In this case the user expects to get only one dataset object instead of the dictionary of datasets since only one csv file was specified without any split specifications.\r\n> I'm ok with returning the dataset object if no split specifications are given for text\/json\/csv\/pandas.\r\n> \r\n> For the other datasets ton the other hand the user doesn't know in advance the splits so I would keep the dictionary by default. What do you think ?\r\n\r\nYes maybe this would be good. I think having to select 'train' from the resulting object why the user gave no split information is a confusing and not intuitive behavior.","> Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.\r\n> \r\n> `from datasets import load_dataset`\r\n> `dataset = load_dataset(\"csv\", data_files=[\".\/sample_data.csv\"], delimiter=\"\\t\", column_names=[\"title\", \"text\"], script_version=\"master\")`\r\n> \r\n> Displayed error:\r\n> `... 
ArrowInvalid: CSV parse error: Expected 2 columns, got 1`\r\n\r\nI'm also facing the same issue when trying to load from a csv file locally:\r\n\r\n```python\r\nfrom nlp import load_dataset\r\ndataset = load_dataset('csv', data_files='sample_data.csv')\r\n```\r\n\r\nError when executed from Google Colab:\r\n```python\r\nArrowInvalid Traceback (most recent call last)\r\n<ipython-input-34-79a8d4f65ed6> in <module>()\r\n 1 from nlp import load_dataset\r\n----> 2 dataset = load_dataset('csv', data_files='sample_data.csv')\r\n\r\n9 frames\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/nlp\/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 547 # Download and prepare data\r\n 548 builder_instance.download_and_prepare(\r\n--> 549 download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,\r\n 550 )\r\n 551 \r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/nlp\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 461 if not downloaded_from_gcs:\r\n 462 self._download_and_prepare(\r\n--> 463 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 464 )\r\n 465 # Sync info\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/nlp\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 535 try:\r\n 536 # Prepare split will record examples associated to the split\r\n--> 537 self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 538 except OSError:\r\n 539 raise OSError(\"Cannot find data file. \" + (self.manual_download_instructions or \"\"))\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/nlp\/builder.py in _prepare_split(self, split_generator)\r\n 863 \r\n 864 generator = self._generate_tables(**split_generator.gen_kwargs)\r\n--> 865 for key, table in utils.tqdm(generator, unit=\" tables\", leave=False):\r\n 866 writer.write_table(table)\r\n 867 num_examples, num_bytes = writer.finalize()\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/tqdm\/notebook.py in __iter__(self, *args, **kwargs)\r\n 213 def __iter__(self, *args, **kwargs):\r\n 214 try:\r\n--> 215 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs):\r\n 216 # return super(tqdm...) 
will not catch exception\r\n 217 yield obj\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/tqdm\/std.py in __iter__(self)\r\n 1102 fp_write=getattr(self.fp, 'write', sys.stderr.write))\r\n 1103 \r\n-> 1104 for obj in iterable:\r\n 1105 yield obj\r\n 1106 # Update and possibly print the progressbar.\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/nlp\/datasets\/csv\/ede98314803c971fef04bcee45d660c62f3332e8a74491e0b876106f3d99bd9b\/csv.py in _generate_tables(self, files)\r\n 78 read_options=self.config.pa_read_options,\r\n 79 parse_options=self.config.pa_parse_options,\r\n---> 80 convert_options=self.config.convert_options,\r\n 81 )\r\n 82 yield i, pa_table\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/pyarrow\/_csv.pyx in pyarrow._csv.read_csv()\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/pyarrow\/error.pxi in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\n\/usr\/local\/lib\/python3.7\/dist-packages\/pyarrow\/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowInvalid: CSV parse error: Expected 1 columns, got 8\r\n```\r\n\r\nVersion:\r\n```\r\nnlp==0.4.0\r\n```","Hi @kauvinlucas\r\n\r\nYou can use the latest versions of `datasets` to do this.\r\nTo do so, just `pip install datasets` instead of `nlp` (the library was renamed) and then\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', data_files='sample_data.csv')","Hi \r\nI'm having a different problem with loading local csv. \r\n```Python\r\nfrom datasets import load_dataset \r\ndataset = load_dataset('csv', data_files='sample.csv') \r\n``` \r\n\r\ngives `ValueError: Specified named and prefix; you can only specify one.` error \r\n\r\nversions: \r\n- datasets: 1.1.3 \r\n- python: 3.9.6 \r\n- pyarrow: 2.0.0 ","Oh.. I figured it out. According to issue #[42387](https:\/\/github.com\/pandas-dev\/pandas\/issues\/42387) from pandas, this new version does not accept None for both parameters (which was being done by the repo I'm testing). Dowgrading Pandas==1.0.4 and Python==3.8 worked","Hi, \r\nI got an `OSError: Cannot find data file. ` when I tried to use load_dataset with tsv files. I have checked the paths, and they are correct. 
\r\n\r\nversions\r\n- python: 3.7.9\r\n- datasets: 1.1.3\r\n- pyarrow: 2.0.0\r\n- transformers: 4.2.2\r\n\r\n~~~\r\ndata_files = {\"train\": \"train.tsv\", \"test\",: \"test.tsv\"}\r\ndatasets = load_dataset(\"csv\", data_files=data_files, delimiter=\"\\t\")\r\n~~~\r\n\r\nThe entire Error message is on below:\r\n\r\n```08\/14\/2021 16:55:44 - INFO - __main__ - load a local file for train: \/project\/media-framing\/transformer4\/data\/0\/val\/label1.tsv\r\n08\/14\/2021 16:55:44 - INFO - __main__ - load a local file for test: \/project\/media-framing\/transformer4\/data\/unlabel\/test.tsv\r\nUsing custom data configuration default\r\nDownloading and preparing dataset csv\/default-00a4200ae8507533 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/usr4\/cs542sp\/hey1\/.cache\/huggingface\/datasets\/csv\/default-00a4200ae8507533\/0.0.0\/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2...\r\nTraceback (most recent call last):\r\n File \"\/projectnb2\/media-framing\/env-trans4\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 592, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/projectnb2\/media-framing\/env-trans4\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 944, in _prepare_split\r\n num_examples, num_bytes = writer.finalize()\r\n File \"\/projectnb2\/media-framing\/env-trans4\/lib\/python3.7\/site-packages\/datasets\/arrow_writer.py\", line 307, in finalize\r\n self.stream.close()\r\n File \"pyarrow\/io.pxi\", line 132, in pyarrow.lib.NativeFile.close\r\n File \"pyarrow\/error.pxi\", line 99, in pyarrow.lib.check_status\r\nOSError: error closing file\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"run_glue.py\", line 484, in <module>\r\n main()\r\n File \"run_glue.py\", line 243, in main\r\n datasets = load_dataset(\"csv\", data_files=data_files, delimiter=\"\\t\")\r\n File \"\/projectnb2\/media-framing\/env-trans4\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 610, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"\/projectnb2\/media-framing\/env-trans4\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 515, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/projectnb2\/media-framing\/env-trans4\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 594, in _download_and_prepare\r\n raise OSError(\"Cannot find data file. \" + (self.manual_download_instructions or \"\"))\r\nOSError: Cannot find data file. ```","Hi ! It looks like the error stacktrace doesn't match with your code snippet.\r\n\r\nWhat error do you get when running this ?\r\n```\r\ndata_files = {\"train\": \"train.tsv\", \"test\",: \"test.tsv\"}\r\ndatasets = load_dataset(\"csv\", data_files=data_files, delimiter=\"\\t\")\r\n```\r\ncan you check that both tsv files are in the same folder as the current working directory of your shell ?","Hi @lhoestq, Below is the entire error message after I move both tsv files to the same directory. It's the same with I got before.\r\n```\r\n\/projectnb2\/media-framing\/env-trans4\/lib\/python3.7\/site-packages\/torch\/cuda\/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. 
Please check that you have an NVIDIA GPU and installed a driver from http:\/\/www.nvidia.com\/Download\/index.aspx (Triggered internally at \/pytorch\/c10\/cuda\/CUDAFunctions.cpp:100.)\r\n return torch._C._cuda_getDeviceCount() > 0\r\n08\/29\/2021 22:56:43 - WARNING - __main__ - Process rank: -1, device: cpu, n_gpu: 0distributed training: False, 16-bits training: False\r\n08\/29\/2021 22:56:43 - INFO - __main__ - Training\/evaluation parameters TrainingArguments(output_dir=\/projectnb\/media-framing\/pred_result\/label1\/, overwrite_output_dir=True, do_train=True, do_eval=False, do_predict=True, evaluation_strategy=EvaluationStrategy.NO, prediction_loss_only=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=8.0, max_steps=-1, lr_scheduler_type=SchedulerType.LINEAR, warmup_steps=0, logging_dir=runs\/Aug29_22-56-43_scc1, logging_first_step=False, logging_steps=500, save_steps=3000, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level=O1, fp16_backend=auto, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name=\/projectnb\/media-framing\/pred_result\/label1\/, disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=False, deepspeed=None, label_smoothing_factor=0.0, adafactor=False, _n_gpu=0)\r\n08\/29\/2021 22:56:43 - INFO - __main__ - load a local file for train: \/project\/media-framing\/transformer4\/temp_train.tsv\r\n08\/29\/2021 22:56:43 - INFO - __main__ - load a local file for test: \/project\/media-framing\/transformer4\/temp_test.tsv\r\n08\/29\/2021 22:56:43 - WARNING - datasets.builder - Using custom data configuration default-df627c23ac0e98ec\r\nDownloading and preparing dataset csv\/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/usr4\/cs542sp\/hey1\/.cache\/huggingface\/datasets\/csv\/default-df627c23ac0e98ec\/0.0.0\/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff...\r\nTraceback (most recent call last):\r\n File \"\/projectnb2\/media-framing\/env-trans4\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 693, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/projectnb2\/media-framing\/env-trans4\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 1166, in _prepare_split\r\n num_examples, num_bytes = writer.finalize()\r\n File \"\/projectnb2\/media-framing\/env-trans4\/lib\/python3.7\/site-packages\/datasets\/arrow_writer.py\", line 428, in finalize\r\n self.stream.close()\r\n File \"pyarrow\/io.pxi\", line 132, in pyarrow.lib.NativeFile.close\r\n File \"pyarrow\/error.pxi\", line 99, in pyarrow.lib.check_status\r\nOSError: error closing file\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"run_glue.py\", line 487, in <module>\r\n main()\r\n File \"run_glue.py\", line 244, in main\r\n datasets = load_dataset(\"csv\", data_files=data_files, delimiter=\"\\t\")\r\n File \"\/projectnb2\/media-framing\/env-trans4\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 852, in load_dataset\r\n 
use_auth_token=use_auth_token,\r\n File \"\/projectnb2\/media-framing\/env-trans4\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 616, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/projectnb2\/media-framing\/env-trans4\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 699, in _download_and_prepare\r\n + str(e)\r\nOSError: Cannot find data file. \r\nOriginal error:\r\nerror closing file\r\n```","Hi !\r\nCan you try running this into a python shell directly ?\r\n\r\n```python\r\nimport os\r\nfrom datasets import load_dataset\r\n\r\ndata_files = {\"train\": \"train.tsv\", \"test\": \"test.tsv\"}\r\nassert all(os.path.isfile(data_file) for data_file in data_files.values()), \"Couln't find files\"\r\n\r\ndatasets = load_dataset(\"csv\", data_files=data_files, delimiter=\"\\t\")\r\nprint(\"success !\")\r\n```\r\n\r\nThis way all the code from `run_glue.py` doesn't interfere with our tests :)","Hi @lhoestq, \r\n\r\nBelow is what I got from terminal after I copied and run your code. I think the files themselves are good since there is no assertion error. \r\n\r\n```\r\nUsing custom data configuration default-df627c23ac0e98ec\r\nDownloading and preparing dataset csv\/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/usr4\/cs542sp\/hey1\/.cache\/huggingface\/datasets\/csv\/default-df627c23ac0e98ec\/0.0.0\/9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff...\r\nTraceback (most recent call last):\r\n File \"\/projectnb2\/media-framing\/env-trans4\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 693, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/projectnb2\/media-framing\/env-trans4\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 1166, in _prepare_split\r\n num_examples, num_bytes = writer.finalize()\r\n File \"\/projectnb2\/media-framing\/env-trans4\/lib\/python3.7\/site-packages\/datasets\/arrow_writer.py\", line 428, in finalize\r\n self.stream.close()\r\n File \"pyarrow\/io.pxi\", line 132, in pyarrow.lib.NativeFile.close\r\n File \"pyarrow\/error.pxi\", line 99, in pyarrow.lib.check_status\r\nOSError: error closing file\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"test.py\", line 7, in <module>\r\n datasets = load_dataset(\"csv\", data_files=data_files, delimiter=\"\\t\")\r\n File \"\/projectnb2\/media-framing\/env-trans4\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 852, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"\/projectnb2\/media-framing\/env-trans4\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 616, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/projectnb2\/media-framing\/env-trans4\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 699, in _download_and_prepare\r\n + str(e)\r\nOSError: Cannot find data file. \r\nOriginal error:\r\nerror closing file\r\n```","Hi, could this be a permission error ? 
I think it fails to close the arrow file that contains the data from your CSVs in the cache.\r\n\r\nBy default datasets are cached in `~\/.cache\/huggingface\/datasets`, could you check that you have the right permissions ?\r\nYou can also try to change the cache directory by passing `cache_dir=\"path\/to\/my\/cache\/dir\"` to `load_dataset`.","Thank you!! @lhoestq\r\n\r\nFor some reason, I don't have the default path for datasets to cache, maybe because I work from a remote system. The issue solved after I pass the `cache_dir` argument to the function. Thank you very much!!"],"created_at":1603119231000,"updated_at":1631212006000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.\r\n\r\n`\r\nfrom datasets import load_dataset\r\n`\r\n`\r\ndataset = load_dataset(\"csv\", data_files=[\".\/sample_data.csv\"], delimiter=\"\\t\", column_names=[\"title\", \"text\"], script_version=\"master\")\r\n`\r\n\r\nDisplayed error:\r\n`\r\n...\r\nArrowInvalid: CSV parse error: Expected 2 columns, got 1\r\n`\r\n\r\nI should mention that when I've tried to read data from `https:\/\/github.com\/lhoestq\/transformers\/tree\/custom-dataset-in-rag-retriever\/examples\/rag\/test_data\/my_knowledge_dataset.csv` it worked without a problem. I've read that there might be some problems with \/r character, so I've removed them from the custom dataset, but the problem still remains.\r\n\r\nI've added a colab reproducing the bug, but unfortunately I cannot provide the dataset.\r\nhttps:\/\/colab.research.google.com\/drive\/1Qzu7sC-frZVeniiWOwzoCe_UHZsrlxu8?usp=sharing\r\n\r\nAre there any work around for it ?\r\nThank you","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/743\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/742","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/742\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/742\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/742\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/742","id":724509974,"node_id":"MDExOlB1bGxSZXF1ZXN0NTA1ODgzNjI3","number":742,"title":"Add OCNLI, a new CLUE 
dataset","user":{"login":"JetRunner","id":22514219,"node_id":"MDQ6VXNlcjIyNTE0MjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22514219?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JetRunner","html_url":"https:\/\/github.com\/JetRunner","followers_url":"https:\/\/api.github.com\/users\/JetRunner\/followers","following_url":"https:\/\/api.github.com\/users\/JetRunner\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JetRunner\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JetRunner\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JetRunner\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JetRunner\/orgs","repos_url":"https:\/\/api.github.com\/users\/JetRunner\/repos","events_url":"https:\/\/api.github.com\/users\/JetRunner\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JetRunner\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks :) merging it"],"created_at":1603105593000,"updated_at":1603383589000,"closed_at":1603383588000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/742","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/742","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/742.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/742.patch"},"body":"OCNLI stands for Original Chinese Natural Language Inference. It is a corpus for\r\n Chinese Natural Language Inference, collected following closely the procedures of MNLI,\r\n but with enhanced strategies aiming for more challenging inference pairs. 
We want to\r\n emphasize we did not use human\/machine translation in creating the dataset, and thus\r\n our Chinese texts are original and not translated.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/742\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/741","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/741\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/741\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/741\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/741","id":723924275,"node_id":"MDU6SXNzdWU3MjM5MjQyNzU=","number":741,"title":"Creating dataset consumes too much memory","user":{"login":"AmitMY","id":5757359,"node_id":"MDQ6VXNlcjU3NTczNTk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5757359?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/AmitMY","html_url":"https:\/\/github.com\/AmitMY","followers_url":"https:\/\/api.github.com\/users\/AmitMY\/followers","following_url":"https:\/\/api.github.com\/users\/AmitMY\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/AmitMY\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/AmitMY\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/AmitMY\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/AmitMY\/orgs","repos_url":"https:\/\/api.github.com\/users\/AmitMY\/repos","events_url":"https:\/\/api.github.com\/users\/AmitMY\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/AmitMY\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting.\r\nIn theory since the dataset script is just made to yield examples to write them into an arrow file, it's not supposed to create memory issues.\r\n\r\nCould you please try to run this exact same loop in a separate script to see if it's not an issue with `PIL` ?\r\nYou can just copy paste what's inside `_generate_examples` and remove all the code for `datasets` (remove yield).\r\n\r\nIf the RAM usage stays low after 600 examples it means that it comes from some sort of memory leak in the library, or with pyarrow.","Here's an equivalent loading code:\r\n```python\r\nimages_path = \"PHOENIX-2014-T-release-v3\/PHOENIX-2014-T\/features\/fullFrame-210x260px\/train\"\r\n\r\nfor dir_path in tqdm(os.listdir(images_path)):\r\n frames_path = os.path.join(images_path, dir_path)\r\n np_frames = []\r\n for frame_name in os.listdir(frames_path):\r\n frame_path = os.path.join(frames_path, frame_name)\r\n im = Image.open(frame_path)\r\n np_frames.append(np.asarray(im))\r\n im.close()\r\n```\r\n\r\nThe process takes 0.3% of memory, even after 1000 examples on the small machine with 120GB RAM.\r\n\r\nI guess something in the datasets library doesn't release the reference to the objects I'm yielding, but no idea how to test for this","I've had similar issues with Arrow once. I'll investigate...\r\n\r\nFor now maybe we can simply use the images paths in the dataset you want to add. I don't expect to fix this memory issue until 1-2 weeks unfortunately. Then we can just update the dataset with the images. 
What do you think ?","If it's just 1-2 weeks, I think it's best if we wait. I don't think it is very urgent to add it, and it will be much more useful with the images loaded rather than not (the images are low resolution, and thus papers using this dataset actually fit the entire video into memory anyway)\r\n\r\nI'll keep working on other datasets in the meanwhile :) ","Ok found the issue. This is because the batch size used by the writer is set to 10 000 elements by default so it would load your full dataset in memory (the writer has a buffer that flushes only after each batch). Moreover to write in Apache Arrow we have to use python objects so what's stored inside the ArrowWriter's buffer is actually python integers (32 bits).\r\n\r\nLowering the batch size to 10 should do the job.\r\n\r\nI will add a flag to the DatasetBuilder class of dataset scripts, so that we can customize the batch size.","Thanks, that's awesome you managed to find the problem.\r\n\r\nAbout the 32 bits - really? there isn't a way to serialize the numpy array somehow? 32 bits would take 4 times the memory \/ disk space needed to store these videos.\r\n\r\nPlease let me know when the batch size is customizable and I'll try again!","The 32 bit integrers are only used in the writer's buffer because Arrow doesn't take numpy arrays correctly as input. On disk it's stored as uint8 in arrow format ;)","> I don't expect to fix this memory issue until 1-2 weeks unfortunately.\r\n\r\nHi @lhoestq \r\nnot to rush of course, but I was wondering if you have a new timeline so I know how to plan my work around this :) ","Hi ! Next week for sure :) ","Alright it should be good now.\r\nYou just have to specify `_writer_batch_size = 10` for example as a class attribute of the dataset builder class.","I added it, but still it consumes as much memory\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/pull\/722\/files#diff-2e0d865dd4a60dedd1861d6f8c5ed281ded71508467908e1e0b1dbe7d2d420b1R66\r\n\r\nDid I not do it correctly?","Yes you did it right.\r\nDid you rebase to include the changes of #828 ?\r\n\r\nEDIT: looks like you merged from master in the PR. Not sure why you still have an issue then, I will investigate","Hi @lhoestq, any update on this?\r\nPerhaps even a direction I could try myself?","Sorry for the delay, I was busy with the dataset sprint and the incredible amount of contributions to the library ^^'\r\n\r\nWhat you can try to do to find what's wrong is check at which frequency the arrow writer writes all the examples from its in-memory buffer on disk. This happens [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/arrow_writer.py#L257-L258) in the code.\r\n\r\nThe idea is that `write_on_file` writes the examples every `writer_batch_size` examples and clear the buffer `self. current_rows`. As soon as `writer_batch_size` is small enough you shouldn't have memory issues in theory.\r\n\r\nLet me know if you have questions or if I can help.\r\n\r\nSince the dataset sprint is over and I will also be done with all the PRs soon I will be able to go back at it and take a look.","Thanks. I gave it a try and no success. I'm not sure what's happening there","I had the same issue. It works for me by setting `DEFAULT_WRITER_BATCH_SIZE = 10` of my dataset builder class. (And not `_writer_batch_size` as previously mentioned). 
I guess this is because `_writer_batch_size` is overwritten in `__init__` (see [here](https:\/\/github.com\/huggingface\/datasets\/blob\/0e2563e5d5c2fc193ea27d7c24607bb35607f2d5\/src\/datasets\/builder.py#L934))","Yes the class attribute you can change is `DEFAULT_WRITER_BATCH_SIZE`.\r\nOtherwise in `load_dataset` you can specify `writer_batch_size=`","Ok thanks for the tips. Maybe the documentation should be updated accordingly https:\/\/huggingface.co\/docs\/datasets\/add_dataset.html.","Thanks for reporting this mistake in the docs.\r\nI just fixed it at https:\/\/github.com\/huggingface\/datasets\/commit\/85cf7ff920c90ca2e12bedca12b36d2a043c3da2"],"created_at":1603001226000,"updated_at":1617097628000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Moving this issue from https:\/\/github.com\/huggingface\/datasets\/pull\/722 here, because it seems like a general issue.\r\n\r\nGiven the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):\r\n```python\r\n def _generate_examples(self, base_path, split):\r\n \"\"\" Yields examples. \"\"\"\r\n\r\n filepath = os.path.join(base_path, \"annotations\", \"manual\", \"PHOENIX-2014-T.\" + split + \".corpus.csv\")\r\n images_path = os.path.join(base_path, \"features\", \"fullFrame-210x260px\", split)\r\n\r\n with open(filepath, \"r\", encoding=\"utf-8\") as f:\r\n data = csv.DictReader(f, delimiter=\"|\", quoting=csv.QUOTE_NONE)\r\n for row in data:\r\n frames_path = os.path.join(images_path, row[\"video\"])[:-7]\r\n np_frames = []\r\n for frame_name in os.listdir(frames_path):\r\n frame_path = os.path.join(frames_path, frame_name)\r\n im = Image.open(frame_path)\r\n np_frames.append(np.asarray(im))\r\n im.close()\r\n\r\n yield row[\"name\"], {\"video\": np_frames}\r\n```\r\n\r\nThe dataset creation process goes out of memory on a machine with 500GB RAM.\r\nI was under the impression that the \"generator\" here is exactly for that, to avoid memory constraints.\r\n\r\n\r\nHowever, even if you want the entire dataset in memory, it would be in the worst case\r\n`260x210x3 x 400 max length x 7000 samples` in bytes (uint8) = 458.64 gigabytes\r\nSo I'm not sure why it's taking more than 500GB.\r\n\r\nAnd the dataset creation fails after 170 examples on a machine with 120gb RAM, and after 672 examples on a machine with 500GB RAM.\r\n\r\n\r\n---\r\n\r\n## Info that might help:\r\nIterating over examples is extremely slow.\r\n![image](https:\/\/user-images.githubusercontent.com\/5757359\/96359590-3c666780-111d-11eb-9347-1f833ad982a9.png)\r\nIf I perform this iteration in my own, custom loop (Without saving to file), it runs at 8-9 examples\/sec\r\n\r\nAnd you can see at this state it is using 94% of the memory:\r\n![image](https:\/\/user-images.githubusercontent.com\/5757359\/96359606-7afc2200-111d-11eb-8c11-0afbdba1a6a3.png)\r\n\r\nAnd it is only using one CPU core, which is probably why it's so slow:\r\n![image](https:\/\/user-images.githubusercontent.com\/5757359\/96359630-a3841c00-111d-11eb-9ba0-7fd3cdf51d26.png)\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/741\/timeline","performed_via_github_app":null,"is_pull_request":false} 
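The resolution of the memory issue in the record above comes down to the ArrowWriter batch size: as discussed in the comments, setting `DEFAULT_WRITER_BATCH_SIZE` on the builder class (or passing `writer_batch_size=` to `load_dataset`) keeps the writer's in-memory buffer small so large examples get flushed to disk instead of accumulating in RAM. A minimal sketch follows, assuming a hypothetical builder class; the class name, features and generated examples are illustrative and are not the actual PHOENIX-2014-T script from the thread.

```python
# Minimal sketch, assuming a hypothetical builder; not the actual dataset
# script from the issue above.
import datasets


class TinyVideoLikeDataset(datasets.GeneratorBasedBuilder):
    # Flush the ArrowWriter buffer every 10 examples instead of the default
    # 10 000, so examples are written to disk frequently rather than piling
    # up in memory (the fix discussed in the issue above).
    DEFAULT_WRITER_BATCH_SIZE = 10

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"name": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        return [datasets.SplitGenerator(name=datasets.Split.TRAIN)]

    def _generate_examples(self):
        # Placeholder examples; a real script would yield heavy payloads here.
        for i in range(100):
            yield i, {"name": f"example-{i}"}
```

Per the same comments, the equivalent knob without editing the script is to pass `writer_batch_size=10` to `load_dataset` for that dataset.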
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/740","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/740\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/740\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/740\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/740","id":723047958,"node_id":"MDExOlB1bGxSZXF1ZXN0NTA0NzAyNTc0","number":740,"title":"Fix TREC urls","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1602839488000,"updated_at":1603097677000,"closed_at":1603097676000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/740","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/740","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/740.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/740.patch"},"body":"The old TREC urls are now redirections.\r\nI updated the urls to the new ones, since we don't support redirections for downloads.\r\n\r\nFix #737 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/740\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/739","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/739\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/739\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/739\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/739","id":723044066,"node_id":"MDExOlB1bGxSZXF1ZXN0NTA0Njk5NTY3","number":739,"title":"Add wiki dpr multiset 
embeddings","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I still have to compute the dataset_infos, and build + host the indexes","update: I'm computing the metadata, will update the PR soon","Finally all green and ready to merge :)"],"created_at":1602839149000,"updated_at":1606399370000,"closed_at":1606399369000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/739","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/739","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/739.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/739.patch"},"body":"There are two DPR encoders, one trained on Natural Questions and one trained on a multiset\/hybrid dataset.\r\nPreviously only the embeddings from the encoder trained on NQ were available. 
I'm adding the ones from the encoder trained on the multiset\/hybrid dataset.\r\nIn the configuration you can now specify `embeddings_name=\"nq\"` or `embeddings_name=\"multiset\"`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/739\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/738","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/738\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/738\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/738\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/738","id":723033923,"node_id":"MDExOlB1bGxSZXF1ZXN0NTA0NjkxNjM4","number":738,"title":"Replace seqeval code with original classification_report for simplicity","user":{"login":"Hironsan","id":6737785,"node_id":"MDQ6VXNlcjY3Mzc3ODU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6737785?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Hironsan","html_url":"https:\/\/github.com\/Hironsan","followers_url":"https:\/\/api.github.com\/users\/Hironsan\/followers","following_url":"https:\/\/api.github.com\/users\/Hironsan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Hironsan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Hironsan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Hironsan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Hironsan\/orgs","repos_url":"https:\/\/api.github.com\/users\/Hironsan\/repos","events_url":"https:\/\/api.github.com\/users\/Hironsan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Hironsan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hello,\r\n\r\nI ran https:\/\/github.com\/huggingface\/transformers\/blob\/master\/examples\/token-classification\/run.sh\r\n\r\nAnd received this error:\r\n```\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 407\/407 [21:37<00:00, 3.44s\/it]Traceback (most recent call last):\r\n File \"run_ner.py\", line 445, in <module>\r\n main()\r\n File \"run_ner.py\", line 398, in main\r\n results = trainer.evaluate()\r\n File \"\/data\/2021\/transformers\/src\/transformers\/trainer.py\", line 1470, in evaluate\r\n metric_key_prefix=metric_key_prefix,\r\n File \"\/data\/2021\/transformers\/src\/transformers\/trainer.py\", line 1622, in prediction_loop\r\n metrics = self.compute_metrics(EvalPrediction(predictions=preds, label_ids=label_ids))\r\n File \"run_ner.py\", line 345, in compute_metrics\r\n results = metric.compute(predictions=true_predictions, references=true_labels)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/metric.py\", line 398, in compute\r\n output = self._compute(predictions=predictions, references=references, **kwargs)\r\n File \"\/root\/.cache\/huggingface\/modules\/datasets_modules\/metrics\/seqeval\/81eda1ff004361d4fa48754a446ec69bb7aa9cf4d14c7215f407d1475941c5ff\/seqeval.py\", line 97, in _compute\r\n report = classification_report(y_true=references, y_pred=predictions, suffix=suffix, output_dict=True)\r\nTypeError: classification_report() got an unexpected keyword argument 'output_dict'\r\n```\r\n\r\nI'm 
still trying multiple things to see if I can work around this, but I thought it might be useful to mention it here.\r\n\r\n```\r\nName: transformers\r\nVersion: 4.3.0.dev0\r\n\r\nName: datasets\r\nVersion: 1.2.1\r\n```","Hi, can you try to update your local installation of `seqeval` ?\r\n\r\n```\r\npip install --upgrade seqeval\r\n```","@lhoestq thanks for the reply. Indeed it was some issue with my setup. I removed the \"transformers\" and \"datasets\" (that I had previously installed from the source code), cleared the cache and installed everything again. It works great now!"],"created_at":1602838305000,"updated_at":1611245235000,"closed_at":1603103472000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/738","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/738","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/738.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/738.patch"},"body":"Recently, the original seqeval has enabled us to get per type scores and overall scores as a dictionary.\r\n\r\nThis PR replaces the current code with the original function(`classification_report`) to simplify it.\r\n\r\nAlso, the original code has been updated to fix #352.\r\n- Related issue: https:\/\/github.com\/chakki-works\/seqeval\/pull\/38\r\n\r\n\r\n```python\r\nfrom datasets import load_metric\r\nmetric = load_metric(\"seqeval\")\r\ny_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]\r\ny_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]\r\nmetric.compute(predictions=y_pred, references=y_true)\r\n# Output: {'MISC': {'precision': 0.0, 'recall': 0.0, 'f1': 0, 'number': 1}, 'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}, 'overall_precision': 0.5, 'overall_recall': 0.5, 'overall_f1': 0.5, 'overall_accuracy': 0.8}\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/738\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/737","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/737\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/737\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/737\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/737","id":722463923,"node_id":"MDU6SXNzdWU3MjI0NjM5MjM=","number":737,"title":"Trec Dataset Connection 
Error","user":{"login":"aychang95","id":10554495,"node_id":"MDQ6VXNlcjEwNTU0NDk1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10554495?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/aychang95","html_url":"https:\/\/github.com\/aychang95","followers_url":"https:\/\/api.github.com\/users\/aychang95\/followers","following_url":"https:\/\/api.github.com\/users\/aychang95\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/aychang95\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/aychang95\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/aychang95\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/aychang95\/orgs","repos_url":"https:\/\/api.github.com\/users\/aychang95\/repos","events_url":"https:\/\/api.github.com\/users\/aychang95\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/aychang95\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting.\r\nThat's because the download url has changed. The old url now redirects to the new one but we don't support redirection for downloads.\r\n\r\nI'm opening a PR to update the url"],"created_at":1602777473000,"updated_at":1603097676000,"closed_at":1603097676000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"**Datasets Version:**\r\n1.1.2\r\n\r\n**Python Version:**\r\n3.6\/3.7\r\n\r\n\r\n**Code:**\r\n```python\r\nfrom datasets import load_dataset\r\nload_dataset(\"trec\")\r\n```\r\n\r\n**Expected behavior:**\r\nDownload Trec dataset and load Dataset object\r\n\r\n**Current Behavior:**\r\nGet a connection error saying it couldn't reach http:\/\/cogcomp.org\/Data\/QA\/QC\/train_5500.label (but the link doesn't seem broken)\r\n\r\n<details>\r\n <summary>Error Logs<\/summary>\r\n \r\n\r\nUsing custom data configuration default\r\nDownloading and preparing dataset trec\/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to \/root\/.cache\/huggingface\/datasets\/trec\/default\/1.1.0\/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7...\r\n---------------------------------------------------------------------------\r\nConnectionError Traceback (most recent call last)\r\n<ipython-input-8-66bf1242096e> in <module>()\r\n----> 1 load_dataset(\"trec\")\r\n\r\n10 frames\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/utils\/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag)\r\n 473 elif response is not None and response.status_code == 404:\r\n 474 raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\n--> 475 raise ConnectionError(\"Couldn't reach {}\".format(url))\r\n 476 \r\n 477 # Try a second time\r\n\r\nConnectionError: Couldn't reach http:\/\/cogcomp.org\/Data\/QA\/QC\/train_5500.label\r\n\r\n<\/details>","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/737\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/736","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/736\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/736\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/736\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/736","id":722348191,"node_id":"MDExOlB1bGxSZXF1ZXN0NTA0MTE0MjMy","number":736,"title":"Start community-provided dataset docs","user":{"login":"sshleifer","id":6045025,"node_id":"MDQ6VXNlcjYwNDUwMjU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6045025?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sshleifer","html_url":"https:\/\/github.com\/sshleifer","followers_url":"https:\/\/api.github.com\/users\/sshleifer\/followers","following_url":"https:\/\/api.github.com\/users\/sshleifer\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sshleifer\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sshleifer\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sshleifer\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sshleifer\/orgs","repos_url":"https:\/\/api.github.com\/users\/sshleifer\/repos","events_url":"https:\/\/api.github.com\/users\/sshleifer\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sshleifer\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["can you also reference the `--organization` flag like in https:\/\/github.com\/huggingface\/transformers\/blob\/master\/docs\/source\/model_sharing.rst#upload-your-model-with-the-cli ?","done!","Not sure if the changes in `datasets\/wmt_t2t\/wmt_utils.py` are intentional.\r\nIf you want to add more configs to wmt, could you do it in a serapate PR ?","I don't think I changed wmt_utils (I think github is wrong or my setup is poorly configured).\r\n\r\nLocally git diff master --name-only says one file. Master is up to date.\r\nTried to make a new PR #755 and the same thing happened.","Trying new fork."],"created_at":1602769299000,"updated_at":1603458928000,"closed_at":1603458928000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/736","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/736","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/736.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/736.patch"},"body":"This is one I did to get the pseudo-labels updated. Not sure if it generalizes, but I figured I would write it down. 
It was pretty easy because all I had to do was make properly formatted directories and change URLs.\r\n\r\n+ In slack @thomwolf called it a `user-namespace` dataset, but the docs call it `community dataset`.\r\nI think the first naming is clearer, but I didn't address that here.\r\n\r\n\r\n+ I didn't add metadata, will try that.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/736\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/735","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/735\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/735\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/735\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/735","id":722225270,"node_id":"MDU6SXNzdWU3MjIyMjUyNzA=","number":735,"title":"Throw error when an unexpected key is used in data_files","user":{"login":"BramVanroy","id":2779410,"node_id":"MDQ6VXNlcjI3Nzk0MTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2779410?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/BramVanroy","html_url":"https:\/\/github.com\/BramVanroy","followers_url":"https:\/\/api.github.com\/users\/BramVanroy\/followers","following_url":"https:\/\/api.github.com\/users\/BramVanroy\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/BramVanroy\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/BramVanroy\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/BramVanroy\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/BramVanroy\/orgs","repos_url":"https:\/\/api.github.com\/users\/BramVanroy\/repos","events_url":"https:\/\/api.github.com\/users\/BramVanroy\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/BramVanroy\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting !\r\nWe'll add support for other keys"],"created_at":1602759327000,"updated_at":1604064232000,"closed_at":1604064232000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I have found that only \"train\", \"validation\" and \"test\" are valid keys in the `data_files` argument. 
When you use any other ones, those attached files are silently ignored - leading to unexpected behaviour for the users.\r\n\r\nSo the following, unintuitively, returns only one key (namely `train`).\r\n\r\n```python\r\ndatasets = load_dataset(\"text\", data_files={\"train\": train_f, \"valid\": valid_f})\r\nprint(datasets.keys())\r\n# dict_keys(['train'])\r\n```\r\n\r\nwhereas using `validation` instead, does return the expected result:\r\n\r\n```python\r\ndatasets = load_dataset(\"text\", data_files={\"train\": train_f, \"validation\": valid_f})\r\nprint(datasets.keys())\r\n# dict_keys(['train', 'validation'])\r\n```\r\n\r\nI would like to see more freedom in which keys one can use, but if that is not possible at least an error should be thrown when using an unexpected key.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/735\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/734","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/734\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/734\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/734\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/734","id":721767848,"node_id":"MDExOlB1bGxSZXF1ZXN0NTAzNjMwMDcz","number":734,"title":"Fix GLUE metric description","user":{"login":"sgugger","id":35901082,"node_id":"MDQ6VXNlcjM1OTAxMDgy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35901082?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sgugger","html_url":"https:\/\/github.com\/sgugger","followers_url":"https:\/\/api.github.com\/users\/sgugger\/followers","following_url":"https:\/\/api.github.com\/users\/sgugger\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sgugger\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sgugger\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sgugger\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sgugger\/orgs","repos_url":"https:\/\/api.github.com\/users\/sgugger\/repos","events_url":"https:\/\/api.github.com\/users\/sgugger\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sgugger\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1602708254000,"updated_at":1602754063000,"closed_at":1602754062000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/734","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/734","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/734.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/734.patch"},"body":"Small typo: the description says translation instead of prediction.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/734\/timeline","performed_via_github_app":null,"is_pull_request":true} 
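Tying back to issue 735 above: until the library itself raised an error for unexpected `data_files` keys, a small guard in user code made the silent-drop behaviour visible before calling `load_dataset`. The sketch below is purely illustrative; the file paths are made up and the accepted split names simply reflect that discussion.

```python
# Illustrative guard for the behaviour described in issue 735 above; the paths
# are hypothetical and the accepted split names mirror that discussion.
EXPECTED_SPLITS = {"train", "validation", "test"}

data_files = {"train": "train.txt", "valid": "valid.txt"}  # 'valid' would be silently dropped

unexpected = set(data_files) - EXPECTED_SPLITS
if unexpected:
    raise ValueError(
        f"Unexpected data_files keys {sorted(unexpected)}; "
        f"use one of {sorted(EXPECTED_SPLITS)} (e.g. 'validation' instead of 'valid')."
    )
```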
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/733","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/733\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/733\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/733\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/733","id":721366744,"node_id":"MDExOlB1bGxSZXF1ZXN0NTAzMjk2NDQw","number":733,"title":"Update link to dataset viewer","user":{"login":"negedng","id":12969168,"node_id":"MDQ6VXNlcjEyOTY5MTY4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12969168?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/negedng","html_url":"https:\/\/github.com\/negedng","followers_url":"https:\/\/api.github.com\/users\/negedng\/followers","following_url":"https:\/\/api.github.com\/users\/negedng\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/negedng\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/negedng\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/negedng\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/negedng\/orgs","repos_url":"https:\/\/api.github.com\/users\/negedng\/repos","events_url":"https:\/\/api.github.com\/users\/negedng\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/negedng\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1602674003000,"updated_at":1602684451000,"closed_at":1602684451000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/733","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/733","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/733.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/733.patch"},"body":"Change 404 error links in quick tour to working ones","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/733\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/732","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/732\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/732\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/732\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/732","id":721359448,"node_id":"MDExOlB1bGxSZXF1ZXN0NTAzMjkwMjEy","number":732,"title":"dataset(wlasl): initial loading 
script","user":{"login":"AmitMY","id":5757359,"node_id":"MDQ6VXNlcjU3NTczNTk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5757359?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/AmitMY","html_url":"https:\/\/github.com\/AmitMY","followers_url":"https:\/\/api.github.com\/users\/AmitMY\/followers","following_url":"https:\/\/api.github.com\/users\/AmitMY\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/AmitMY\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/AmitMY\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/AmitMY\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/AmitMY\/orgs","repos_url":"https:\/\/api.github.com\/users\/AmitMY\/repos","events_url":"https:\/\/api.github.com\/users\/AmitMY\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/AmitMY\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Followup: \r\nFrom the info in https:\/\/github.com\/huggingface\/datasets\/pull\/722, I probably should load the videos as array of frames directly into the database. \r\nThis will make the dataset generation time very long, but will make working with the dataset much easier.","When I run:\r\n```\r\npython datasets-cli dummy_data datasets\/wlasl\r\n```\r\n\r\nI get:\r\n```\r\nChecking datasets\/wlasl\/wlasl.py for additional imports. \r\nFound main folder for dataset datasets\/wlasl\/wlasl.py at \/home\/nlp\/amit\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/wlasl \r\nFound specific version folder for dataset datasets\/wlasl\/wlasl.py at \/home\/nlp\/amit\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/wlasl\/f0cad785350d770804f20c471b0cff2d7e3c7b210b5c3a228c393abb95a04786 \r\nFound script file from datasets\/wlasl\/wlasl.py to \/home\/nlp\/amit\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/wlasl\/f0cad785350d770804f20c471b0cff2d7e3c7b210b5c3a228c393abb95a04786\/wlasl.py \r\nFound dataset infos file from datasets\/wlasl\/dataset_infos.json to \/home\/nlp\/amit\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/wlasl\/f0cad785350d770804f20c471b0cff2d7e3c7b210b5c3a228c393abb95a04786\/dataset_infos.json \r\nFound metadata file for dataset datasets\/wlasl\/wlasl.py at \/home\/nlp\/amit\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/wlasl\/f0cad785350d770804f20c471b0cff2d7e3c7b210b5c3a228c393abb95a04786\/wlasl.json \r\nUsing custom data configuration default \r\nLoading Dataset Infos from \/home\/nlp\/amit\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/wlasl\/f0cad785350d770804f20c471b0cff2d7e3c7b210b5c3a228c393abb95a04786\r\nCreating dummy folder structure for datasets\/wlasl\/dummy\/0.3.0... \r\nDataset datasets with config None seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has to be created with less guidance. Make sure you create the file dummy_data. 
\r\nTraceback (most recent call last): \r\nFile \"datasets-cli\", line 36, in \r\nservice.run() File \"\/home\/nlp\/amit\/anaconda2\/envs\/meta-scholar\/lib\/python3.7\/site-packages\/datasets-1.1.2-py3.7.egg\/datasets\/commands\/dummy_data.py\", line 73, in run \r\nfor split in generator_splits: \r\nUnboundLocalError: local variable 'generator_splits' referenced before assignment\r\n```"],"created_at":1602673302000,"updated_at":1616480383000,"closed_at":1616480383000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/732","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/732","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/732.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/732.patch"},"body":"takes like 9-10 hours to download all of the videos for the dataset, but it does finish :) ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/732\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/731","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/731\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/731\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/731\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/731","id":721142985,"node_id":"MDExOlB1bGxSZXF1ZXN0NTAzMTExNzc4","number":731,"title":"dataset(aslg_pc12): initial loading script","user":{"login":"AmitMY","id":5757359,"node_id":"MDQ6VXNlcjU3NTczNTk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5757359?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/AmitMY","html_url":"https:\/\/github.com\/AmitMY","followers_url":"https:\/\/api.github.com\/users\/AmitMY\/followers","following_url":"https:\/\/api.github.com\/users\/AmitMY\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/AmitMY\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/AmitMY\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/AmitMY\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/AmitMY\/orgs","repos_url":"https:\/\/api.github.com\/users\/AmitMY\/repos","events_url":"https:\/\/api.github.com\/users\/AmitMY\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/AmitMY\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks @lhoestq \r\nAre there any guidelines for the dummy data?\r\nIn this particular case for example, the dataset fetches from two hardcoded URLs. \r\nDo I just `head -n 10` both files and zip them?\r\n\r\n","> Thanks @lhoestq\r\n> Are there any guidelines for the dummy data?\r\n> In this particular case for example, the dataset fetches from two hardcoded URLs.\r\n> Do I just `head -n 10` both files and zip them?\r\n\r\nYes the idea is just to have a few examples to properly test the script and make sure it keeps working in the long run.\r\n\r\nAnd FYI there's a command to help you name the dummy data files correctly. 
More info in the documentation [here](https:\/\/huggingface.co\/docs\/datasets\/share_dataset.html#adding-dummy-data)","@lhoestq passes all tests"],"created_at":1602652477000,"updated_at":1603898826000,"closed_at":1603898826000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/731","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/731","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/731.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/731.patch"},"body":"This contains the only current public part of this corpus.\r\n\r\nThe rest of the corpus is not yet been made public, but this sample is still being used by researchers.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/731\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/730","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/730\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/730\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/730\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/730","id":721073812,"node_id":"MDU6SXNzdWU3MjEwNzM4MTI=","number":730,"title":"Possible caching bug","user":{"login":"ArneBinder","id":3375489,"node_id":"MDQ6VXNlcjMzNzU0ODk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3375489?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ArneBinder","html_url":"https:\/\/github.com\/ArneBinder","followers_url":"https:\/\/api.github.com\/users\/ArneBinder\/followers","following_url":"https:\/\/api.github.com\/users\/ArneBinder\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ArneBinder\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ArneBinder\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ArneBinder\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ArneBinder\/orgs","repos_url":"https:\/\/api.github.com\/users\/ArneBinder\/repos","events_url":"https:\/\/api.github.com\/users\/ArneBinder\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ArneBinder\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting. 
That's a bug indeed.\r\nApparently only the `data_files` parameter is taken into account right now in `DatasetBuilder._create_builder_config` but it should also be the case for `config_kwargs` (or at least the instantiated `builder_config`)"],"created_at":1602640954000,"updated_at":1603964161000,"closed_at":1603964161000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"The following code with `test1.txt` containing just \"\ud83e\udd17\ud83e\udd17\ud83e\udd17\":\r\n```\r\ndataset = datasets.load_dataset('text', data_files=['test1.txt'], split=\"train\", encoding=\"latin_1\")\r\nprint(dataset[0])\r\ndataset = datasets.load_dataset('text', data_files=['test1.txt'], split=\"train\", encoding=\"utf-8\")\r\nprint(dataset[0])\r\n``` \r\nproduces this output:\r\n```\r\nDownloading and preparing dataset text\/default-15600e4d83254059 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/home\/arne\/.cache\/huggingface\/datasets\/text\/default-15600e4d83254059\/0.0.0\/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155...\r\nDataset text downloaded and prepared to \/home\/arne\/.cache\/huggingface\/datasets\/text\/default-15600e4d83254059\/0.0.0\/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155. Subsequent calls will reuse this data.\r\n{'text': '\u00f0\\x9f\u00a4\\x97\u00f0\\x9f\u00a4\\x97\u00f0\\x9f\u00a4\\x97'}\r\nUsing custom data configuration default\r\nReusing dataset text (\/home\/arne\/.cache\/huggingface\/datasets\/text\/default-15600e4d83254059\/0.0.0\/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155)\r\n{'text': '\u00f0\\x9f\u00a4\\x97\u00f0\\x9f\u00a4\\x97\u00f0\\x9f\u00a4\\x97'}\r\n```\r\nJust changing the order (and deleting the temp files):\r\n```\r\ndataset = datasets.load_dataset('text', data_files=['test1.txt'], split=\"train\", encoding=\"utf-8\")\r\nprint(dataset[0])\r\ndataset = datasets.load_dataset('text', data_files=['test1.txt'], split=\"train\", encoding=\"latin_1\")\r\nprint(dataset[0])\r\n```\r\nproduces this:\r\n```\r\nUsing custom data configuration default\r\nDownloading and preparing dataset text\/default-15600e4d83254059 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/home\/arne\/.cache\/huggingface\/datasets\/text\/default-15600e4d83254059\/0.0.0\/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155...\r\nDataset text downloaded and prepared to \/home\/arne\/.cache\/huggingface\/datasets\/text\/default-15600e4d83254059\/0.0.0\/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155. 
Subsequent calls will reuse this data.\r\n{'text': '\ud83e\udd17\ud83e\udd17\ud83e\udd17'}\r\nUsing custom data configuration default\r\nReusing dataset text (\/home\/arne\/.cache\/huggingface\/datasets\/text\/default-15600e4d83254059\/0.0.0\/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155)\r\n{'text': '\ud83e\udd17\ud83e\udd17\ud83e\udd17'}\r\n```\r\n\r\nIs it intended that the cache path does not depend on the config entries?\r\n\r\ntested with datasets==1.1.2 and python==3.8.5","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/730\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/729","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/729\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/729\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/729\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/729","id":719558876,"node_id":"MDU6SXNzdWU3MTk1NTg4NzY=","number":729,"title":"Better error message when one forgets to call `add_batch` before `compute`","user":{"login":"sgugger","id":35901082,"node_id":"MDQ6VXNlcjM1OTAxMDgy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35901082?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sgugger","html_url":"https:\/\/github.com\/sgugger","followers_url":"https:\/\/api.github.com\/users\/sgugger\/followers","following_url":"https:\/\/api.github.com\/users\/sgugger\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sgugger\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sgugger\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sgugger\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sgugger\/orgs","repos_url":"https:\/\/api.github.com\/users\/sgugger\/repos","events_url":"https:\/\/api.github.com\/users\/sgugger\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sgugger\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1602525562000,"updated_at":1603984704000,"closed_at":1603984704000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"When using metrics, if for some reason a user forgets to call `add_batch` to a metric before `compute` (with no arguments), the error message is a bit cryptic and could probably be made clearer.\r\n\r\n## Reproducer\r\n\r\n```python\r\nimport datasets\r\nimport torch\r\nfrom datasets import Metric\r\n\r\nclass GatherMetric(Metric):\r\n def _info(self):\r\n return datasets.MetricInfo(\r\n description=\"description\",\r\n citation=\"citation\",\r\n inputs_description=\"kwargs\",\r\n features=datasets.Features({\r\n 'predictions': datasets.Value('int64'),\r\n 'references': datasets.Value('int64'),\r\n }),\r\n codebase_urls=[],\r\n reference_urls=[],\r\n format='numpy'\r\n )\r\n\r\n def _compute(self, predictions, references):\r\n return {\"predictions\": predictions, \"labels\": references}\r\n\r\nmetric = GatherMetric(cache_dir=\"test-metric\")\r\ninputs = torch.randint(0, 2, (1024,))\r\ntargets = torch.randint(0, 2, (1024,))\r\n\r\nbatch_size = 8\r\nfor i in range(0, 1024, batch_size):\r\n pass 
# User forgets to call `add_batch`\r\nresult = metric.compute()\r\n```\r\n\r\n## Stack trace:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-13-267729d187fa> in <module>\r\n 3 pass\r\n 4 # metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size])\r\n----> 5 result = metric.compute()\r\n\r\n~\/git\/datasets\/src\/datasets\/metric.py in compute(self, *args, **kwargs)\r\n 380 if predictions is not None:\r\n 381 self.add_batch(predictions=predictions, references=references)\r\n--> 382 self._finalize()\r\n 383 \r\n 384 self.cache_file_name = None\r\n\r\n~\/git\/datasets\/src\/datasets\/metric.py in _finalize(self)\r\n 343 elif self.process_id == 0:\r\n 344 # Let's acquire a lock on each node files to be sure they are finished writing\r\n--> 345 file_paths, filelocks = self._get_all_cache_files()\r\n 346 \r\n 347 # Read the predictions and references\r\n\r\n~\/git\/datasets\/src\/datasets\/metric.py in _get_all_cache_files(self)\r\n 280 filelocks = []\r\n 281 for process_id, file_path in enumerate(file_paths):\r\n--> 282 filelock = FileLock(file_path + \".lock\")\r\n 283 try:\r\n 284 filelock.acquire(timeout=self.timeout)\r\n\r\nTypeError: unsupported operand type(s) for +: 'NoneType' and 'str'\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/729\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/728","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/728\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/728\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/728\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/728","id":719555780,"node_id":"MDU6SXNzdWU3MTk1NTU3ODA=","number":728,"title":"Passing `cache_dir` to a metric does not work","user":{"login":"sgugger","id":35901082,"node_id":"MDQ6VXNlcjM1OTAxMDgy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35901082?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sgugger","html_url":"https:\/\/github.com\/sgugger","followers_url":"https:\/\/api.github.com\/users\/sgugger\/followers","following_url":"https:\/\/api.github.com\/users\/sgugger\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sgugger\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sgugger\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sgugger\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sgugger\/orgs","repos_url":"https:\/\/api.github.com\/users\/sgugger\/repos","events_url":"https:\/\/api.github.com\/users\/sgugger\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sgugger\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1602525314000,"updated_at":1603964082000,"closed_at":1603964082000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"When passing `cache_dir` to a custom metric, the folder is concatenated to itself at some point and this results in a FileNotFoundError:\r\n\r\n## 
Reproducer\r\n\r\n```python\r\nimport datasets\r\nimport torch\r\nfrom datasets import Metric\r\n\r\nclass GatherMetric(Metric):\r\n def _info(self):\r\n return datasets.MetricInfo(\r\n description=\"description\",\r\n citation=\"citation\",\r\n inputs_description=\"kwargs\",\r\n features=datasets.Features({\r\n 'predictions': datasets.Value('int64'),\r\n 'references': datasets.Value('int64'),\r\n }),\r\n codebase_urls=[],\r\n reference_urls=[],\r\n format='numpy'\r\n )\r\n\r\n def _compute(self, predictions, references):\r\n return {\"predictions\": predictions, \"labels\": references}\r\n\r\nmetric = GatherMetric(cache_dir=\"test-metric\")\r\ninputs = torch.randint(0, 2, (1024,))\r\ntargets = torch.randint(0, 2, (1024,))\r\n\r\nbatch_size = 8\r\nfor i in range(0, 1024, batch_size):\r\n metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size])\r\nresult = metric.compute()\r\n```\r\n\r\n## Stack trace:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nFileNotFoundError Traceback (most recent call last)\r\n~\/git\/datasets\/src\/datasets\/metric.py in _finalize(self)\r\n 349 reader = ArrowReader(path=self.data_dir, info=DatasetInfo(features=self.features))\r\n--> 350 self.data = Dataset(**reader.read_files([{\"filename\": f} for f in file_paths]))\r\n 351 except FileNotFoundError:\r\n\r\n~\/git\/datasets\/src\/datasets\/arrow_reader.py in read_files(self, files, original_instructions)\r\n 227 # Prepend path to filename\r\n--> 228 pa_table = self._read_files(files)\r\n 229 files = copy.deepcopy(files)\r\n\r\n~\/git\/datasets\/src\/datasets\/arrow_reader.py in _read_files(self, files)\r\n 166 for f_dict in files:\r\n--> 167 pa_table: pa.Table = self._get_dataset_from_filename(f_dict)\r\n 168 pa_tables.append(pa_table)\r\n\r\n~\/git\/datasets\/src\/datasets\/arrow_reader.py in _get_dataset_from_filename(self, filename_skip_take)\r\n 291 )\r\n--> 292 mmap = pa.memory_map(filename)\r\n 293 f = pa.ipc.open_stream(mmap)\r\n\r\n~\/.pyenv\/versions\/3.7.9\/envs\/base\/lib\/python3.7\/site-packages\/pyarrow\/io.pxi in pyarrow.lib.memory_map()\r\n\r\n~\/.pyenv\/versions\/3.7.9\/envs\/base\/lib\/python3.7\/site-packages\/pyarrow\/io.pxi in pyarrow.lib.MemoryMappedFile._open()\r\n\r\n~\/.pyenv\/versions\/3.7.9\/envs\/base\/lib\/python3.7\/site-packages\/pyarrow\/error.pxi in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\n~\/.pyenv\/versions\/3.7.9\/envs\/base\/lib\/python3.7\/site-packages\/pyarrow\/error.pxi in pyarrow.lib.check_status()\r\n\r\nFileNotFoundError: [Errno 2] Failed to open local file 'test-metric\/gather_metric\/default\/test-metric\/gather_metric\/default\/default_experiment-1-0.arrow'. 
Detail: [errno 2] No such file or directory\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nValueError Traceback (most recent call last)\r\n<ipython-input-17-e42d43cc981f> in <module>\r\n 2 for i in range(0, 1024, batch_size):\r\n 3 metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size])\r\n----> 4 result = metric.compute()\r\n\r\n~\/git\/datasets\/src\/datasets\/metric.py in compute(self, *args, **kwargs)\r\n 380 if predictions is not None:\r\n 381 self.add_batch(predictions=predictions, references=references)\r\n--> 382 self._finalize()\r\n 383 \r\n 384 self.cache_file_name = None\r\n\r\n~\/git\/datasets\/src\/datasets\/metric.py in _finalize(self)\r\n 351 except FileNotFoundError:\r\n 352 raise ValueError(\r\n--> 353 \"Error in finalize: another metric instance is already using the local cache file. \"\r\n 354 \"Please specify an experiment_id to avoid colision between distributed metric instances.\"\r\n 355 )\r\n\r\nValueError: Error in finalize: another metric instance is already using the local cache file. Please specify an experiment_id to avoid colision between distributed metric instances.\r\n```\r\n\r\nThe code works when we remove the `cache_dir=...` from the metric.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/728\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/727","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/727\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/727\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/727\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/727","id":719386366,"node_id":"MDU6SXNzdWU3MTkzODYzNjY=","number":727,"title":"Parallel downloads progress bar flickers","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1602509765000,"updated_at":1602509765000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"When there are parallel downloads using the download manager, the tqdm progress bar flickers since all the progress bars are on the same line.\r\n\r\nTo fix that we could simply specify `position=i` for i=0 to n the number of files to download when instantiating the tqdm progress bar. 
\r\n\r\nAnother way would be to have one \"master\" progress bar that tracks the number of finished downloads, and then one progress bar per process that shows the current downloads.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/727\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/726","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/726\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/726\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/726\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/726","id":719313754,"node_id":"MDU6SXNzdWU3MTkzMTM3NTQ=","number":726,"title":"\"Checksums didn't match for dataset source files\" error while loading openwebtext dataset","user":{"login":"SparkJiao","id":16469472,"node_id":"MDQ6VXNlcjE2NDY5NDcy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16469472?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SparkJiao","html_url":"https:\/\/github.com\/SparkJiao","followers_url":"https:\/\/api.github.com\/users\/SparkJiao\/followers","following_url":"https:\/\/api.github.com\/users\/SparkJiao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SparkJiao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SparkJiao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SparkJiao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SparkJiao\/orgs","repos_url":"https:\/\/api.github.com\/users\/SparkJiao\/repos","events_url":"https:\/\/api.github.com\/users\/SparkJiao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SparkJiao\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi, please try to provide more information.\r\n\r\nExample code in a colab to reproduce the error, details on what you are trying to do and what you expected, and details on your environment (OS, PyPI package versions).","> Hi, please try to provide more information.\r\n> \r\n> Example code in a colab to reproduce the error, details on what you are trying to do and what you expected, and details on your environment (OS, PyPI package versions).\r\n\r\nI have updated the description, sorry for posting the incomplete issue by mistake.","Hi, I have manually downloaded the compressed dataset `openwebtext.tar.xz` and used the following command to preprocess the examples:\r\n```\r\n>>> dataset = load_dataset('\/home\/admin\/workspace\/datasets\/datasets-master\/datasets-master\/datasets\/openwebtext', data_dir='\/home\/admin\/workspace\/datasets')\r\nUsing custom data configuration default\r\nDownloading and preparing dataset openwebtext\/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/home\/admin\/.cache\/huggingface\/datasets\/openwebtext\/default\/0.0.0\/5c636399c7155da97c982d0d70ecdce30fbca66a4eb4fc768ad91f8331edac02...\r\nDataset openwebtext downloaded and prepared to \/home\/admin\/.cache\/huggingface\/datasets\/openwebtext\/default\/0.0.0\/5c636399c7155da97c982d0d70ecdce30fbca66a4eb4fc768ad91f8331edac02. 
Subsequent calls will reuse this data.\r\n>>> len(dataset['train'])\r\n74571\r\n>>>\r\n```\r\nThe size of the pre-processed example file is only 354 MB; however, the processed bookcorpus dataset is 4.6 GB. Are there any problems?","NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n\r\nI got this issue when I tried to work on my own datasets. Kindly tell me where I can get the checksums of the train and dev files in my GitHub repo.","Hi, I got a similar issue for the xnli dataset while working on Colab with Python 3.7. \r\n\r\n`nlp.load_dataset(path = 'xnli')`\r\n\r\nThe above command resulted in the following issue: \r\n```\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https:\/\/www.nyu.edu\/projects\/bowman\/xnli\/XNLI-1.0.zip']\r\n```\r\n\r\nAny idea how to fix this?"],"created_at":1602503110000,"updated_at":1630485792000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\nI have encountered this problem while loading the openwebtext dataset:\r\n```\r\n>>> dataset = load_dataset('openwebtext')\r\nDownloading and preparing dataset openwebtext\/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to \/home\/admin\/.cache\/huggingface\/datasets\/openwebtext\/plain_text\/1.0.0\/5c636399c7155da97c982d0d70ecdce30fbca66a4eb4fc768ad91f8331edac02...\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/home\/admin\/workspace\/anaconda3\/envs\/torch1.6-py3.7\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 611, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"\/home\/admin\/workspace\/anaconda3\/envs\/torch1.6-py3.7\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 476, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/home\/admin\/workspace\/anaconda3\/envs\/torch1.6-py3.7\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 536, in _download_and_prepare\r\n self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), \"dataset source files\"\r\n File \"\/home\/admin\/workspace\/anaconda3\/envs\/torch1.6-py3.7\/lib\/python3.7\/site-packages\/datasets\/utils\/info_utils.py\", line 39, in verify_checksums\r\n raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https:\/\/zenodo.org\/record\/3834942\/files\/openwebtext.tar.xz']\r\n```\r\nI think this problem is caused by a change in the released dataset. 
Or should I download the dataset manually?\r\n\r\nSorry for releasing the unfinished issue by mistake.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/726\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/725","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/725\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/725\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/725\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/725","id":718985641,"node_id":"MDExOlB1bGxSZXF1ZXN0NTAxMjUxODI1","number":725,"title":"pretty print dataset objects","user":{"login":"stas00","id":10676103,"node_id":"MDQ6VXNlcjEwNjc2MTAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10676103?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stas00","html_url":"https:\/\/github.com\/stas00","followers_url":"https:\/\/api.github.com\/users\/stas00\/followers","following_url":"https:\/\/api.github.com\/users\/stas00\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stas00\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stas00\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stas00\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stas00\/orgs","repos_url":"https:\/\/api.github.com\/users\/stas00\/repos","events_url":"https:\/\/api.github.com\/users\/stas00\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stas00\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Great, as you found it useful I improved the code a bit to automate indentation in the parent class, so that the child repr doesn't need to guess the indentation level, while repr'ing nicely on its own.\r\n\r\n- do we want indent=4 or 2?\r\n- do we want `{` ... `}` or w\/o?\r\n\r\ncurrently it's indent=4 and w\/ curly braces, so it looks like:\r\n\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['text', 'headline', 'title'],\r\n num_rows: 157252\r\n })\r\n validation: Dataset({\r\n features: ['text', 'headline', 'title'],\r\n num_rows: 5599\r\n })\r\n test: Dataset({\r\n features: ['text', 'headline', 'title'],\r\n num_rows: 5577\r\n })\r\n})\r\n```\r\njust the child:\r\n```\r\nDataset({\r\n features: ['text', 'headline', 'title'],\r\n num_rows: 5577\r\n})\r\n```\r\n\r\n","Yes! 
A lot better indeed!"],"created_at":1602468226000,"updated_at":1603470275000,"closed_at":1603443646000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/725","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/725","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/725.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/725.patch"},"body":"Currently, if I do:\r\n```\r\nfrom datasets import load_dataset\r\nload_dataset(\"wikihow\", 'all', data_dir=\"\/hf\/pegasus-datasets\/wikihow\/\")\r\n```\r\nI get:\r\n```\r\n\r\nDatasetDict({'train': Dataset(features: {'text': Value(dtype='string', id=None),\r\n'headline': Value(dtype='string', id=None), 'title': Value(dtype='string',\r\nid=None)}, num_rows: 157252), 'validation': Dataset(features: {'text':\r\nValue(dtype='string', id=None), 'headline': Value(dtype='string', id=None),\r\n'title': Value(dtype='string', id=None)}, num_rows: 5599), 'test':\r\nDataset(features: {'text': Value(dtype='string', id=None), 'headline':\r\nValue(dtype='string', id=None), 'title': Value(dtype='string', id=None)},\r\nnum_rows: 5577)})\r\n```\r\n\r\nThis is not very readable. \r\n\r\nCan we either have a better `__repr__` or have a custom method to nicely pprint the dataset object? \r\n\r\nHere is my very simple attempt. With this PR, it produces:\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['text', 'headline', 'title'],\r\n num_rows: 157252\r\n })\r\n validation: Dataset({\r\n features: ['text', 'headline', 'title'],\r\n num_rows: 5599\r\n })\r\n test: Dataset({\r\n features: ['text', 'headline', 'title'],\r\n num_rows: 5577\r\n })\r\n})\r\n```\r\nI did omit the data types on purpose to make it more readable, but it shouldn't be too difficult to integrate those too.\r\n\r\nnote that this PR also fixes the inconsistency in output that in master misses enclosing `{}` for Dataset, but it is there for `DatasetDict` - or perhaps it was by design.\r\n\r\nI'm totally not attached to this format, just wanting something more readable. One approach could be to serialize to `json.dumps` or something similar. 
It'd make the indentation simpler.\r\n\r\nThank you.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/725\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/724","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/724\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/724\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/724\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/724","id":718947700,"node_id":"MDU6SXNzdWU3MTg5NDc3MDA=","number":724,"title":"need to redirect \/nlp to \/datasets and remove outdated info","user":{"login":"stas00","id":10676103,"node_id":"MDQ6VXNlcjEwNjc2MTAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10676103?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stas00","html_url":"https:\/\/github.com\/stas00","followers_url":"https:\/\/api.github.com\/users\/stas00\/followers","following_url":"https:\/\/api.github.com\/users\/stas00\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stas00\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stas00\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stas00\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stas00\/orgs","repos_url":"https:\/\/api.github.com\/users\/stas00\/repos","events_url":"https:\/\/api.github.com\/users\/stas00\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stas00\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Should be fixed now: \r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/35882\/95917301-040b0600-0d78-11eb-9655-c4ac0e788089.png)\r\n\r\nNot sure I understand what you mean by the second part?\r\n","Thank you!\r\n\r\n> Not sure I understand what you mean by the second part?\r\n\r\nCompare the 2:\r\n* https:\/\/huggingface.co\/datasets\/wikihow\r\n* https:\/\/huggingface.co\/nlp\/viewer\/?dataset=wikihow&config=all\r\nCan you see the difference? 2nd has formatting, 1st doesn't.\r\n","For context, those are two different pages (not an old vs new one), one is from the dataset viewer (you can browse data inside the datasets) while the other is just a basic reference page displayed some metadata about the dataset.\r\n\r\nFor the second one, we'll move to markdown parsing soon, so it'll be formatted better.","I understand. I was just flagging the lack of markup issue."],"created_at":1602457932000,"updated_at":1602694812000,"closed_at":1602694812000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"It looks like the website still has all the `nlp` data, e.g.: https:\/\/huggingface.co\/nlp\/viewer\/?dataset=wikihow&config=all\r\n\r\nshould probably redirect to: https:\/\/huggingface.co\/datasets\/wikihow\r\n\r\nalso for some reason the new information is slightly borked. If you look at the old one it was nicely formatted and had the links marked up, the new one is just a jumble of text in one chunk and no markup for links (i.e. 
not clickable).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/724\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/723","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/723\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/723\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/723\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/723","id":718926723,"node_id":"MDU6SXNzdWU3MTg5MjY3MjM=","number":723,"title":"Adding pseudo-labels to datasets","user":{"login":"sshleifer","id":6045025,"node_id":"MDQ6VXNlcjYwNDUwMjU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6045025?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sshleifer","html_url":"https:\/\/github.com\/sshleifer","followers_url":"https:\/\/api.github.com\/users\/sshleifer\/followers","following_url":"https:\/\/api.github.com\/users\/sshleifer\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sshleifer\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sshleifer\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sshleifer\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sshleifer\/orgs","repos_url":"https:\/\/api.github.com\/users\/sshleifer\/repos","events_url":"https:\/\/api.github.com\/users\/sshleifer\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sshleifer\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"sshleifer","id":6045025,"node_id":"MDQ6VXNlcjYwNDUwMjU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6045025?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sshleifer","html_url":"https:\/\/github.com\/sshleifer","followers_url":"https:\/\/api.github.com\/users\/sshleifer\/followers","following_url":"https:\/\/api.github.com\/users\/sshleifer\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sshleifer\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sshleifer\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sshleifer\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sshleifer\/orgs","repos_url":"https:\/\/api.github.com\/users\/sshleifer\/repos","events_url":"https:\/\/api.github.com\/users\/sshleifer\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sshleifer\/received_events","type":"User","site_admin":false},"assignees":[{"login":"sshleifer","id":6045025,"node_id":"MDQ6VXNlcjYwNDUwMjU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6045025?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sshleifer","html_url":"https:\/\/github.com\/sshleifer","followers_url":"https:\/\/api.github.com\/users\/sshleifer\/followers","following_url":"https:\/\/api.github.com\/users\/sshleifer\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sshleifer\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sshleifer\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sshleifer\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sshleifer\/orgs","repos_url":"https:
\/\/api.github.com\/users\/sshleifer\/repos","events_url":"https:\/\/api.github.com\/users\/sshleifer\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sshleifer\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Nice ! :)\r\nIt's indeed the first time we have such contributions so we'll have to figure out the appropriate way to integrate them.\r\nCould you add details on what they could be used for ?\r\n","They can be used as training data for a smaller model.","Sounds just like a regular dataset to me then, no?","A new configuration for those datasets should do the job then.\r\nNote that until now datasets like xsum only had one configuration. It means that users didn't have to specify the configuration name when loading the dataset. If we add new configs, users that update the lib will have to update their code to specify the default\/standard configuration name (not the one with pseudo labels).","Could also be a `user-namespace` dataset maybe?","Oh yes why not. I'm more in favor of this actually since pseudo labels are things that users (not dataset authors in general) can compute by themselves and share with the community","![image](https:\/\/user-images.githubusercontent.com\/6045025\/96045248-b528a380-0e3f-11eb-9124-bd55afa031bb.png)\r\n\r\nI assume I should (for example) rename the xsum dir, change the URL, and put the modified dir somewhere in S3?","You can use the `datasets-cli` to upload the folder with your version of xsum with the pseudo labels.\r\n\r\n```\r\ndatasets-cli upload_dataset path\/to\/xsum\r\n```"],"created_at":1602450345000,"updated_at":1627967511000,"closed_at":1627967511000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"I recently [uploaded pseudo-labels](https:\/\/github.com\/huggingface\/transformers\/blob\/master\/examples\/seq2seq\/precomputed_pseudo_labels.md) for CNN\/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.\r\nSince pseudo-labels are just a large model's generations on an existing dataset, what is the right way to structure this contribution.\r\nI read https:\/\/huggingface.co\/docs\/datasets\/add_dataset.html, but it doesn't really cover this type of contribution.\r\n\r\nI could, for example, make a new directory, `xsum_bart_pseudolabels` for each set of pseudolabels or add some sort of parametrization to `xsum.py`: https:\/\/github.com\/huggingface\/datasets\/blob\/5f4c6e830f603830117877b8990a0e65a2386aa6\/datasets\/xsum\/xsum.py\r\n\r\nWhat do you think @lhoestq ?\r\n\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/723\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/722","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/722\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/722\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/722\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/722","id":718689117,"node_id":"MDExOlB1bGxSZXF1ZXN0NTAxMDI3NjAw","number":722,"title":"datasets(RWTH-PHOENIX-Weather 2014 T): add initial loading 
script","user":{"login":"AmitMY","id":5757359,"node_id":"MDQ6VXNlcjU3NTczNTk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5757359?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/AmitMY","html_url":"https:\/\/github.com\/AmitMY","followers_url":"https:\/\/api.github.com\/users\/AmitMY\/followers","following_url":"https:\/\/api.github.com\/users\/AmitMY\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/AmitMY\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/AmitMY\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/AmitMY\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/AmitMY\/orgs","repos_url":"https:\/\/api.github.com\/users\/AmitMY\/repos","events_url":"https:\/\/api.github.com\/users\/AmitMY\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/AmitMY\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This might be interesting to @kayoyin the author of https:\/\/github.com\/kayoyin\/transformer-slt \u2013 pinging you just in case :)","Thanks Amit, this is a great idea! I'm thinking of porting the SLT models from my paper here as well, having this dataset would be perfect for that :)"],"created_at":1602359048000,"updated_at":1609830411000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/722","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/722","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/722.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/722.patch"},"body":"This is the first sign language dataset in this repo as far as I know.\r\nFollowing an old issue I opened https:\/\/github.com\/huggingface\/datasets\/issues\/302.\r\n\r\nI added the dataset official REAMDE file, but I see it's not very standard, so it can be removed.\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/722\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/721","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/721\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/721\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/721\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/721","id":718647147,"node_id":"MDU6SXNzdWU3MTg2NDcxNDc=","number":721,"title":"feat(dl_manager): add support for ftp 
downloads","user":{"login":"AmitMY","id":5757359,"node_id":"MDQ6VXNlcjU3NTczNTk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5757359?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/AmitMY","html_url":"https:\/\/github.com\/AmitMY","followers_url":"https:\/\/api.github.com\/users\/AmitMY\/followers","following_url":"https:\/\/api.github.com\/users\/AmitMY\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/AmitMY\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/AmitMY\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/AmitMY\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/AmitMY\/orgs","repos_url":"https:\/\/api.github.com\/users\/AmitMY\/repos","events_url":"https:\/\/api.github.com\/users\/AmitMY\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/AmitMY\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["We only support http by default for downloading.\r\nIf you really need to use ftp, then feel free to use a library that allows to download through ftp in your dataset script (I see that you've started working on #722 , that's awesome !). The users will get a message to install the extra library when they load the dataset.\r\n\r\nTo make the download_manager work with a custom downloader, you can call `download_manager.download_custom` instead of `download_manager.download_and_extract`. The expected arguments are the following:\r\n```\r\nurl_or_urls: url or `list`\/`dict` of urls to download and extract. Each\r\n url is a `str`.\r\ncustom_download: Callable with signature (src_url: str, dst_path: str) -> Any\r\n as for example `tf.io.gfile.copy`, that lets you download from google storage\r\n```\r\n","Also maybe it coud be interesting to have a direct support of ftp inside the `datasets` library. Do you know any good libraries that we might consider adding as a (optional ?) dependency ?","Downloading an `ftp` file is as simple as:\r\n```python\r\nimport urllib \r\nurllib.urlretrieve('ftp:\/\/server\/path\/to\/file', 'file')\r\n```\r\n\r\nI believe this should be supported by the library, as its not using any dependency and is trivial amount of code.","I know its unorthodox, but I added `ftp` download support to `file_utils` in the same PR https:\/\/github.com\/huggingface\/datasets\/pull\/722\r\nSo its possible to understand the interaction of the download component with the ftp download ability","Awesome ! I'll take a look :)","@AmitMY Can you now download the Phoenix2014 Dataset?","@hoanganhpham1006 yes.\r\nSee pull request https:\/\/github.com\/huggingface\/datasets\/pull\/722 , it has a loader for this dataset, mostly ready.\r\nThere's one issue that delays it being merged - https:\/\/github.com\/huggingface\/datasets\/issues\/741 - regarding memory consumption.","The problem which I have now is that this dataset seems does not allow to download? 
Can you share it with me, please?","The dataset loader is not yet ready, because of that issue.\r\nIf you just want to download the dataset the old-fashioned way, go to: https:\/\/www-i6.informatik.rwth-aachen.de\/ftp\/pub\/rwth-phoenix\/2016\/phoenix-2014-T.v3.tar.gz (the ftp link is now broken, and it's available over https)","Got it, thank you so much!"],"created_at":1602345020000,"updated_at":1603531473000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I am working on a new dataset (#302) and encountered a problem downloading it.\r\n\r\n```python\r\n# This is the official download link from https:\/\/www-i6.informatik.rwth-aachen.de\/~koller\/RWTH-PHOENIX-2014-T\/\r\n_URL = \"ftp:\/\/wasserstoff.informatik.rwth-aachen.de\/pub\/rwth-phoenix\/2016\/phoenix-2014-T.v3.tar.gz\"\r\n\r\ndl_manager.download_and_extract(_URL)\r\n```\r\n\r\nI get an error:\r\n\r\n> ValueError: unable to parse ftp:\/\/wasserstoff.informatik.rwth-aachen.de\/pub\/rwth-phoenix\/2016\/phoenix-2014-T.v3.tar.gz as a URL or as a local path\r\n\r\nI checked, and indeed you don't consider `ftp` as a remote file.\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/4c2af707a6955cf4b45f83ac67990395327c5725\/src\/datasets\/utils\/file_utils.py#L188\r\n\r\nAdding `ftp` to that list does not immediately solve the issue, so there probably needs to be some extra work.\r\n\r\n\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/721\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/720","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/720\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/720\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/720\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/720","id":716581266,"node_id":"MDU6SXNzdWU3MTY1ODEyNjY=","number":720,"title":"OSError: Cannot find data file when not using the dummy dataset in 
RAG","user":{"login":"josemlopez","id":4112135,"node_id":"MDQ6VXNlcjQxMTIxMzU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4112135?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/josemlopez","html_url":"https:\/\/github.com\/josemlopez","followers_url":"https:\/\/api.github.com\/users\/josemlopez\/followers","following_url":"https:\/\/api.github.com\/users\/josemlopez\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/josemlopez\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/josemlopez\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/josemlopez\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/josemlopez\/orgs","repos_url":"https:\/\/api.github.com\/users\/josemlopez\/repos","events_url":"https:\/\/api.github.com\/users\/josemlopez\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/josemlopez\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Same issue here. I will be digging further, but it looks like the [script](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/wiki_dpr\/wiki_dpr.py#L132) is attempting to open a file that is not downloaded yet. 
\r\n\r\n```\r\n99dcbca09109e58502e6b9271d4d3f3791b43f61f3161a76b25d2775ab1a4498.lock\r\n```\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nUnpicklingError Traceback (most recent call last)\r\n~\/anaconda3\/envs\/eqa\/lib\/python3.7\/site-packages\/numpy\/lib\/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)\r\n 446 try:\r\n--> 447 return pickle.load(fid, **pickle_kwargs)\r\n 448 except Exception:\r\n\r\nUnpicklingError: pickle data was truncated\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nOSError Traceback (most recent call last)\r\n~\/src\/datasets\/src\/datasets\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 559 \r\n--> 560 if verify_infos:\r\n 561 verify_splits(self.info.splits, split_dict)\r\n\r\n~\/src\/datasets\/src\/datasets\/builder.py in _prepare_split(self, split_generator)\r\n 847 writer.write(example)\r\n--> 848 finally:\r\n 849 num_examples, num_bytes = writer.finalize()\r\n\r\n~\/anaconda3\/envs\/eqa\/lib\/python3.7\/site-packages\/tqdm\/notebook.py in __iter__(self, *args, **kwargs)\r\n 227 try:\r\n--> 228 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs):\r\n 229 # return super(tqdm...) will not catch exception\r\n\r\n~\/anaconda3\/envs\/eqa\/lib\/python3.7\/site-packages\/tqdm\/std.py in __iter__(self)\r\n 1132 try:\r\n-> 1133 for obj in iterable:\r\n 1134 yield obj\r\n\r\n\/hdd\/rag\/cache\/huggingface\/modules\/datasets_modules\/datasets\/wiki_dpr\/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2\/wiki_dpr.py in _generate_examples(self, data_file, vectors_files)\r\n 131 break\r\n--> 132 vecs = np.load(open(vectors_files.pop(0), \"rb\"), allow_pickle=True)\r\n 133 vec_idx = 0\r\n\r\n~\/anaconda3\/envs\/eqa\/lib\/python3.7\/site-packages\/numpy\/lib\/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)\r\n 449 raise IOError(\r\n--> 450 \"Failed to interpret file %s as a pickle\" % repr(file))\r\n 451 \r\n\r\nOSError: Failed to interpret file <_io.BufferedReader name='\/hdd\/rag\/downloads\/99dcbca09109e58502e6b9271d4d3f3791b43f61f3161a76b25d2775ab1a4498'> as a pickle\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nOSError Traceback (most recent call last)\r\n<ipython-input-8-24351ff8ce44> in <module>\r\n 4 retriever = RagRetriever.from_pretrained(\"facebook\/rag-sequence-nq\", \r\n 5 index_name=\"exact\",\r\n----> 6 use_dummy_dataset=False)\r\n\r\n~\/src\/transformers\/src\/transformers\/retrieval_rag.py in from_pretrained(cls, retriever_name_or_path, **kwargs)\r\n 321 generator_tokenizer = rag_tokenizer.generator\r\n 322 return cls(\r\n--> 323 config, question_encoder_tokenizer=question_encoder_tokenizer, generator_tokenizer=generator_tokenizer\r\n 324 )\r\n 325 \r\n\r\n~\/src\/transformers\/src\/transformers\/retrieval_rag.py in __init__(self, config, question_encoder_tokenizer, generator_tokenizer)\r\n 310 self.config = config\r\n 311 if self._init_retrieval:\r\n--> 312 self.init_retrieval()\r\n 313 \r\n 314 @classmethod\r\n\r\n~\/src\/transformers\/src\/transformers\/retrieval_rag.py in init_retrieval(self)\r\n 338 \r\n 339 logger.info(\"initializing retrieval\")\r\n--> 340 self.index.init_index()\r\n 341 \r\n 342 def postprocess_docs(self, docs, input_strings, prefix, n_docs, return_tensors=None):\r\n\r\n~\/src\/transformers\/src\/transformers\/retrieval_rag.py in init_index(self)\r\n 248 split=self.dataset_split,\r\n 249 
index_name=self.index_name,\r\n--> 250 dummy=self.use_dummy_dataset,\r\n 251 )\r\n 252 self.dataset.set_format(\"numpy\", columns=[\"embeddings\"], output_all_columns=True)\r\n\r\n~\/src\/datasets\/src\/datasets\/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)\r\n 615 builder_instance.download_and_prepare(\r\n 616 download_config=download_config,\r\n--> 617 download_mode=download_mode,\r\n 618 ignore_verifications=ignore_verifications,\r\n 619 )\r\n\r\n~\/src\/datasets\/src\/datasets\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 481 # Sync info\r\n 482 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())\r\n--> 483 self.info.download_checksums = dl_manager.get_recorded_sizes_checksums()\r\n 484 self.info.size_in_bytes = self.info.dataset_size + self.info.download_size\r\n 485 # Save info\r\n\r\n~\/src\/datasets\/src\/datasets\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 560 if verify_infos:\r\n 561 verify_splits(self.info.splits, split_dict)\r\n--> 562 \r\n 563 # Update the info object with the splits.\r\n 564 self.info.splits = split_dict\r\n\r\nOSError: Cannot find data file.\r\n```\r\n\r\nThank you.","An update on my end. This seems like a transient issue. Reran the script from scratch overnight with no errors. ","Closing this one. Feel free to re-open if you have other questions about this issue"],"created_at":1602080833000,"updated_at":1608732271000,"closed_at":1608732271000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Environment info\r\n\r\n transformers version: 3.3.1\r\n Platform: Linux-4.19\r\n Python version: 3.7.7\r\n PyTorch version (GPU?): 1.6.0\r\n Tensorflow version (GPU?): No\r\n Using GPU in script?: Yes\r\n Using distributed or parallel set-up in script?: No\r\n\r\n## To reproduce\r\n\r\nSteps to reproduce the behaviour:\r\n```\r\nimport os\r\nos.environ['HF_DATASETS_CACHE'] = '\/workspace\/notebooks\/POCs\/cache'\r\n\r\nfrom transformers import RagTokenizer, RagRetriever, RagTokenForGeneration\r\n\r\ntokenizer = RagTokenizer.from_pretrained(\"facebook\/rag-token-nq\")\r\nretriever = RagRetriever.from_pretrained(\"facebook\/rag-token-nq\", index_name=\"exact\", use_dummy_dataset=False) \r\n```\r\n\r\nPlese note that I'm using the whole dataset: **use_dummy_dataset=False**\r\nAfter around 4 hours (downloading and some other things) this is returned:\r\n\r\n```\r\nDownloading and preparing dataset wiki_dpr\/psgs_w100.nq.exact (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/workspace\/notebooks\/POCs\/cache\/wiki_dpr\/psgs_w100.nq.exact\/0.0.0\/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2...\r\n\r\n---------------------------------------------------------------------------\r\nUnpicklingError Traceback (most recent call last)\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/numpy\/lib\/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)\r\n 459 try:\r\n--> 460 return pickle.load(fid, **pickle_kwargs)\r\n 461 except Exception:\r\n\r\nUnpicklingError: pickle data was truncated\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nOSError Traceback (most recent call 
last)\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 552 # Prepare split will record examples associated to the split\r\n--> 553 self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 554 except OSError:\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/builder.py in _prepare_split(self, split_generator)\r\n 840 for key, record in utils.tqdm(\r\n--> 841 generator, unit=\" examples\", total=split_info.num_examples, leave=False, disable=not_verbose\r\n 842 ):\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/tqdm\/notebook.py in __iter__(self, *args, **kwargs)\r\n 217 try:\r\n--> 218 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs):\r\n 219 # return super(tqdm...) will not catch exception\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/tqdm\/std.py in __iter__(self)\r\n 1128 try:\r\n-> 1129 for obj in iterable:\r\n 1130 yield obj\r\n\r\n~\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/wiki_dpr\/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2\/wiki_dpr.py in _generate_examples(self, data_file, vectors_files)\r\n 131 break\r\n--> 132 vecs = np.load(open(vectors_files.pop(0), \"rb\"), allow_pickle=True)\r\n 133 vec_idx = 0\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/numpy\/lib\/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)\r\n 462 raise IOError(\r\n--> 463 \"Failed to interpret file %s as a pickle\" % repr(file))\r\n 464 finally:\r\n\r\nOSError: Failed to interpret file <_io.BufferedReader name='\/workspace\/notebooks\/POCs\/cache\/downloads\/f34d5f091294259b4ca90e813631e69a6ded660d71b6cbedf89ddba50df94448'> as a pickle\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nOSError Traceback (most recent call last)\r\n<ipython-input-10-f28df370ac47> in <module>\r\n 1 # ln -s \/workspace\/notebooks\/POCs\/cache \/root\/.cache\/huggingface\/datasets\r\n----> 2 retriever = RagRetriever.from_pretrained(\"facebook\/rag-token-nq\", index_name=\"exact\", use_dummy_dataset=False)\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/transformers\/retrieval_rag.py in from_pretrained(cls, retriever_name_or_path, **kwargs)\r\n 307 generator_tokenizer = rag_tokenizer.generator\r\n 308 return cls(\r\n--> 309 config, question_encoder_tokenizer=question_encoder_tokenizer, generator_tokenizer=generator_tokenizer\r\n 310 )\r\n 311 \r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/transformers\/retrieval_rag.py in __init__(self, config, question_encoder_tokenizer, generator_tokenizer)\r\n 298 self.config = config\r\n 299 if self._init_retrieval:\r\n--> 300 self.init_retrieval()\r\n 301 \r\n 302 @classmethod\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/transformers\/retrieval_rag.py in init_retrieval(self)\r\n 324 \r\n 325 logger.info(\"initializing retrieval\")\r\n--> 326 self.index.init_index()\r\n 327 \r\n 328 def postprocess_docs(self, docs, input_strings, prefix, n_docs, return_tensors=None):\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/transformers\/retrieval_rag.py in init_index(self)\r\n 238 split=self.dataset_split,\r\n 239 index_name=self.index_name,\r\n--> 240 dummy=self.use_dummy_dataset,\r\n 241 )\r\n 242 self.dataset.set_format(\"numpy\", columns=[\"embeddings\"], output_all_columns=True)\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, 
download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)\r\n 609 download_config=download_config,\r\n 610 download_mode=download_mode,\r\n--> 611 ignore_verifications=ignore_verifications,\r\n 612 )\r\n 613 \r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 474 if not downloaded_from_gcs:\r\n 475 self._download_and_prepare(\r\n--> 476 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 477 )\r\n 478 # Sync info\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 553 self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 554 except OSError:\r\n--> 555 raise OSError(\"Cannot find data file. \" + (self.manual_download_instructions or \"\"))\r\n 556 \r\n 557 if verify_infos:\r\n\r\nOSError: Cannot find data file. \r\n\r\n```\r\n\r\nThanks \r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/720\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/719","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/719\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/719\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/719\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/719","id":716492263,"node_id":"MDExOlB1bGxSZXF1ZXN0NDk5MjE5Mjg2","number":719,"title":"Fix train_test_split output format","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1602074341000,"updated_at":1602077888000,"closed_at":1602077886000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/719","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/719","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/719.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/719.patch"},"body":"There was an issue in the `transmit_format` wrapper that returned bad formats when using train_test_split.\r\nThis was due to `column_names` being handled as a 
List[str] instead of Dict[str, List[str]] when the dataset transform (train_test_split) returns a DatasetDict (one set of column names per split).\r\n\r\nThis should fix @timothyjlaurent 's issue in #620 and fix #676 \r\n\r\nI added tests for `transmit_format` so that it doesn't happen again","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/719\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/718","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/718\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/718\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/718\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/718","id":715694709,"node_id":"MDExOlB1bGxSZXF1ZXN0NDk4NTU5MDcw","number":718,"title":"Don't use tqdm 4.50.0","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1601991953000,"updated_at":1601992164000,"closed_at":1601992162000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/718","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/718","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/718.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/718.patch"},"body":"tqdm 4.50.0 introduced permission errors on windows\r\nsee [here](https:\/\/app.circleci.com\/pipelines\/github\/huggingface\/datasets\/235\/workflows\/cfb6a39f-68eb-4802-8b17-2cd5e8ea7369\/jobs\/1111) for the error details.\r\n\r\nFor now I just added `<4.50.0` in the setup.py\r\nHopefully we can find what's wrong with this version soon","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/718\/timeline","performed_via_github_app":null,"is_pull_request":true} 
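As a side note, the `setup.py` pin mentioned in the pull request body above would look roughly like this in a hypothetical package (a sketch, not the repository's actual file):

```python
from setuptools import setup

setup(
    name="example-package",
    install_requires=[
        # exclude the tqdm release that raises permission errors on Windows
        "tqdm<4.50.0",
    ],
)
```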
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/717","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/717\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/717\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/717\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/717","id":714959268,"node_id":"MDExOlB1bGxSZXF1ZXN0NDk3OTUwOTA2","number":717,"title":"Fixes #712 Error in the Overview.ipynb notebook","user":{"login":"subhrm","id":850012,"node_id":"MDQ6VXNlcjg1MDAxMg==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/850012?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/subhrm","html_url":"https:\/\/github.com\/subhrm","followers_url":"https:\/\/api.github.com\/users\/subhrm\/followers","following_url":"https:\/\/api.github.com\/users\/subhrm\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/subhrm\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/subhrm\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/subhrm\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/subhrm\/orgs","repos_url":"https:\/\/api.github.com\/users\/subhrm\/repos","events_url":"https:\/\/api.github.com\/users\/subhrm\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/subhrm\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1601913041000,"updated_at":1601965903000,"closed_at":1601915141000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/717","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/717","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/717.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/717.patch"},"body":"Fixes #712 Error in the Overview.ipynb notebook by adding `with_details=True` parameter to `list_datasets` function in Cell 3 of **overview** notebook","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/717\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/716","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/716\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/716\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/716\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/716","id":714952888,"node_id":"MDExOlB1bGxSZXF1ZXN0NDk3OTQ1ODAw","number":716,"title":"Fixes #712 Attribute error in cell 3 of the overview 
notebook","user":{"login":"subhrm","id":850012,"node_id":"MDQ6VXNlcjg1MDAxMg==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/850012?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/subhrm","html_url":"https:\/\/github.com\/subhrm","followers_url":"https:\/\/api.github.com\/users\/subhrm\/followers","following_url":"https:\/\/api.github.com\/users\/subhrm\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/subhrm\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/subhrm\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/subhrm\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/subhrm\/orgs","repos_url":"https:\/\/api.github.com\/users\/subhrm\/repos","events_url":"https:\/\/api.github.com\/users\/subhrm\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/subhrm\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Referencing the wrong issue # in the commit message. Closing this to fix it again."],"created_at":1601912529000,"updated_at":1601912798000,"closed_at":1601912792000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/716","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/716","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/716.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/716.patch"},"body":"Fixes the Attribute error in cell 3 of the overview notebook","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/716\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/715","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/715\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/715\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/715\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/715","id":714690192,"node_id":"MDExOlB1bGxSZXF1ZXN0NDk3NzMwMDQ2","number":715,"title":"Use python read for text dataset","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["One thing though, could we try to read the 
files in parallel?","We could but I'm not sure this would help a lot since the bottleneck is the drive IO if the files are big enough.\r\nIt could make sense for very small files.","Looks like windows is not a big fan of this approach\r\nI'm working on a fix","I remember issue https:\/\/github.com\/huggingface\/datasets\/issues\/546 where this was kinda requested (but maybe IO would bottleneck). What do you think?","I think it's worth testing multiprocessing. It could also be something we add to our speed benchmarks","> I remember issue #546 where this was kinda requested (but maybe IO would bottleneck). What do you think?\r\n\r\nIt still would be interesting I think, especially in scenarios where IO is less of an issue (SSDs particularly) and where there are many smaller files. Wrapping this function in a `pool.map` is perhaps an easy thing to try. ","Merging this one for now for the patch release"],"created_at":1601891275000,"updated_at":1601903598000,"closed_at":1601903597000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/715","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/715","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/715.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/715.patch"},"body":"As mentioned in #622 the pandas reader used for text dataset doesn't work properly when there are \\r characters in the text file.\r\n\r\nInstead I switched to pure python using `open` and `read`.\r\nFrom my benchmark on a 100MB text file, it's the same speed as the previous pandas reader.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/715\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/714","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/714\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/714\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/714\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/714","id":714487881,"node_id":"MDExOlB1bGxSZXF1ZXN0NDk3NTYzNjAx","number":714,"title":"Add the official dependabot 
implementation","user":{"login":"ALazyMeme","id":12804673,"node_id":"MDQ6VXNlcjEyODA0Njcz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12804673?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ALazyMeme","html_url":"https:\/\/github.com\/ALazyMeme","followers_url":"https:\/\/api.github.com\/users\/ALazyMeme\/followers","following_url":"https:\/\/api.github.com\/users\/ALazyMeme\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ALazyMeme\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ALazyMeme\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ALazyMeme\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ALazyMeme\/orgs","repos_url":"https:\/\/api.github.com\/users\/ALazyMeme\/repos","events_url":"https:\/\/api.github.com\/users\/ALazyMeme\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ALazyMeme\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1601869785000,"updated_at":1602503361000,"closed_at":1602503361000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/714","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/714","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/714.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/714.patch"},"body":"This will keep dependencies up to date. This will require a pr label `dependencies` being created in order to function correctly.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/714\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/713","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/713\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/713\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/713\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/713","id":714475732,"node_id":"MDExOlB1bGxSZXF1ZXN0NDk3NTUzOTUy","number":713,"title":"Fix reading text files with carriage return 
symbols","user":{"login":"mozharovsky","id":6762769,"node_id":"MDQ6VXNlcjY3NjI3Njk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6762769?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mozharovsky","html_url":"https:\/\/github.com\/mozharovsky","followers_url":"https:\/\/api.github.com\/users\/mozharovsky\/followers","following_url":"https:\/\/api.github.com\/users\/mozharovsky\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mozharovsky\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mozharovsky\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mozharovsky\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mozharovsky\/orgs","repos_url":"https:\/\/api.github.com\/users\/mozharovsky\/repos","events_url":"https:\/\/api.github.com\/users\/mozharovsky\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mozharovsky\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Discussed in #622, fixed in #715. Closing the issue. Thanks @lhoestq, it works now! \ud83d\udc4d "],"created_at":1601867223000,"updated_at":1602223105000,"closed_at":1601905769000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/713","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/713","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/713.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/713.patch"},"body":"The new pandas-based text reader isn't able to work properly with files that contain carriage return symbols (`\\r`). \r\n\r\nIt fails with the following error message:\r\n\r\n```\r\n...\r\n File \"pandas\/_libs\/parsers.pyx\", line 847, in pandas._libs.parsers.TextReader.read\r\n File \"pandas\/_libs\/parsers.pyx\", line 874, in pandas._libs.parsers.TextReader._read_low_memory\r\n File \"pandas\/_libs\/parsers.pyx\", line 918, in pandas._libs.parsers.TextReader._read_rows\r\n File \"pandas\/_libs\/parsers.pyx\", line 905, in pandas._libs.parsers.TextReader._tokenize_rows\r\n File \"pandas\/_libs\/parsers.pyx\", line 2042, in pandas._libs.parsers.raise_parser_error\r\npandas.errors.ParserError: Error tokenizing data. C error: Buffer overflow caught - possible malformed input file.\r\n```\r\n\r\n___\r\nI figured out the pandas uses those symbols as line terminators and this eventually causes the error. Explicitly specifying the `lineterminator` fixes that issue and everything works fine. 
\r\n\r\nPlease, consider this PR as it seems to be a common issue to solve.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/713\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/712","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/712\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/712\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/712\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/712","id":714242316,"node_id":"MDU6SXNzdWU3MTQyNDIzMTY=","number":712,"title":"Error in the notebooks\/Overview.ipynb notebook","user":{"login":"subhrm","id":850012,"node_id":"MDQ6VXNlcjg1MDAxMg==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/850012?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/subhrm","html_url":"https:\/\/github.com\/subhrm","followers_url":"https:\/\/api.github.com\/users\/subhrm\/followers","following_url":"https:\/\/api.github.com\/users\/subhrm\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/subhrm\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/subhrm\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/subhrm\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/subhrm\/orgs","repos_url":"https:\/\/api.github.com\/users\/subhrm\/repos","events_url":"https:\/\/api.github.com\/users\/subhrm\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/subhrm\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Do this:\r\n``` python\r\nsquad_dataset = list_datasets(with_details=True)[datasets.index('squad')]\r\npprint(squad_dataset.__dict__) # It's a simple python dataclass\r\n```","Thanks! This worked. I have created a PR to fix this in the notebook. "],"created_at":1601791111000,"updated_at":1601915140000,"closed_at":1601915140000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\n\r\nI got the following error in **cell number 3** while exploring the **Overview.ipynb** notebook in google colab. I used the [link ](https:\/\/colab.research.google.com\/github\/huggingface\/datasets\/blob\/master\/notebooks\/Overview.ipynb) provided in the main README file to open it in colab. 
\r\n\r\n```python\r\n# You can access various attributes of the datasets before downloading them\r\nsquad_dataset = list_datasets()[datasets.index('squad')]\r\n\r\npprint(squad_dataset.__dict__) # It's a simple python dataclass\r\n```\r\n\r\nError message\r\n```\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n<ipython-input-5-8dc805c4949c> in <module>()\r\n 2 squad_dataset = list_datasets()[datasets.index('squad')]\r\n 3 \r\n ----> 4 pprint(squad_dataset.__dict__) # It's a simple python dataclass\r\n \r\nAttributeError: 'str' object has no attribute '__dict__'\r\n```\r\n\r\nThe object `squad_dataset` is a `str` not a `dataclass` .","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/712\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/710","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/710\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/710\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/710\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/710","id":714186999,"node_id":"MDExOlB1bGxSZXF1ZXN0NDk3MzQ1NjQ0","number":710,"title":"fix README typos\/ consistency","user":{"login":"discdiver","id":7703961,"node_id":"MDQ6VXNlcjc3MDM5NjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7703961?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/discdiver","html_url":"https:\/\/github.com\/discdiver","followers_url":"https:\/\/api.github.com\/users\/discdiver\/followers","following_url":"https:\/\/api.github.com\/users\/discdiver\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/discdiver\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/discdiver\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/discdiver\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/discdiver\/orgs","repos_url":"https:\/\/api.github.com\/users\/discdiver\/repos","events_url":"https:\/\/api.github.com\/users\/discdiver\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/discdiver\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1601763656000,"updated_at":1602928365000,"closed_at":1602928365000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/710","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/710","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/710.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/710.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/710\/timeline","performed_via_github_app":null,"is_pull_request":true} 
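Editor's note (not part of the original records): the exchange in issue #712 above reports that `list_datasets()` returns plain string ids, so `squad_dataset.__dict__` fails, and the suggested fix is to pass `with_details=True`. The sketch below illustrates that fix; it assumes a `datasets` version (around 1.x, as in the issue) where `list_datasets` still exists and accepts `with_details`.

```python
# Minimal sketch of the fix discussed in issue #712 (assumes datasets ~1.x,
# where list_datasets(with_details=True) is available).
from pprint import pprint
from datasets import list_datasets

dataset_ids = list_datasets()                     # list of plain str ids, e.g. "squad"
detailed = list_datasets(with_details=True)       # list of dataset info objects

# Index by id, then inspect the detailed object instead of the bare string.
squad_dataset = detailed[dataset_ids.index("squad")]
pprint(squad_dataset.__dict__)                    # works: it's a simple dataclass-like object
```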
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/709","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/709\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/709\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/709\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/709","id":714067902,"node_id":"MDU6SXNzdWU3MTQwNjc5MDI=","number":709,"title":"How to use similarity settings other then \"BM25\" in Elasticsearch index ?","user":{"login":"nsankar","id":431890,"node_id":"MDQ6VXNlcjQzMTg5MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/431890?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nsankar","html_url":"https:\/\/github.com\/nsankar","followers_url":"https:\/\/api.github.com\/users\/nsankar\/followers","following_url":"https:\/\/api.github.com\/users\/nsankar\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nsankar\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nsankar\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nsankar\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nsankar\/orgs","repos_url":"https:\/\/api.github.com\/users\/nsankar\/repos","events_url":"https:\/\/api.github.com\/users\/nsankar\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nsankar\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Datasets does not use elasticsearch API to define custom similarity. 
If you want to use a custom similarity, the best would be to run a curl request directly to your elasticsearch instance (see sample hereafter, directly from ES documentation), then you should be able to use `my_similarity` in your configuration passed to datasets\r\n\r\n```\r\ncurl -X PUT \"localhost:9200\/index?pretty\" -H 'Content-Type: application\/json' -d'\r\n{\r\n \"settings\": {\r\n \"index\": {\r\n \"similarity\": {\r\n \"my_similarity\": {\r\n \"type\": \"DFR\",\r\n \"basic_model\": \"g\",\r\n \"after_effect\": \"l\",\r\n \"normalization\": \"h2\",\r\n \"normalization.h2.c\": \"3.0\"\r\n }\r\n }\r\n }\r\n }\r\n}\r\n'\r\n\r\n```"],"created_at":1601723929000,"updated_at":1626634975000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"**QUESTION : How should we use other similarity algorithms supported by Elasticsearch other than \"BM25\" ?**\r\n**ES Reference**\r\nhttps:\/\/www.elastic.co\/guide\/en\/elasticsearch\/reference\/current\/index-modules-similarity.html\r\n**HF doc reference:**\r\nhttps:\/\/huggingface.co\/docs\/datasets\/faiss_and_ea.html\r\n\r\n**context :**\r\n========\r\n\r\nI used the latest Elasticsearch server version 7.9.2\r\nWhen I set DFR which is one of the other similarity algorithms supported by elasticsearch in the mapping, I get an error\r\n\r\nFor example DFR that I had tried in the first instance in mappings as below.,\r\n`\"mappings\": {\"properties\": {\"text\": {\"type\": \"text\", \"analyzer\": \"standard\", \"similarity\": \"DFR\"}}},`\r\n\r\nI get the following error \r\nRequestError: RequestError(400, 'mapper_parsing_exception', 'Unknown Similarity type [DFR] for field [text]')\r\n\r\nThe other thing as another option I had tried was to declare \"similarity\": \"my_similarity\" within settings and then assigning \"my_similarity\" inside the mappings as below \r\n\r\n`es_config = {\r\n \"settings\": {\r\n \"number_of_shards\": 1,\r\n **\"similarity\": \"my_similarity\"**: {\r\n \"type\": \"DFR\",\r\n \"basic_model\": \"g\",\r\n \"after_effect\": \"l\",\r\n \"normalization\": \"h2\",\r\n \"normalization.h2.c\": \"3.0\"\r\n } ,\r\n \"analysis\": {\"analyzer\": {\"stop_standard\": {\"type\": \"standard\", \" stopwords\": \"_english_\"}}},\r\n \r\n },\r\n \"mappings\": {\"properties\": {\"text\": {\"type\": \"text\", \"analyzer\": \"standard\", \"similarity\": \"my_similarity\"}}},\r\n }`\r\n\r\nFor this , I got the following error\r\nRequestError: RequestError(400, 'illegal_argument_exception', 'unknown setting [index.similarity] please check that any required plugins are installed, or check the breaking changes documentation for removed settings')\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/709\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/708","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/708\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/708\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/708\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/708","id":714020953,"node_id":"MDU6SXNzdWU3MTQwMjA5NTM=","number":708,"title":"Datasets performance slow? 
- 6.4x slower than in memory dataset","user":{"login":"eugeneware","id":38154,"node_id":"MDQ6VXNlcjM4MTU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38154?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/eugeneware","html_url":"https:\/\/github.com\/eugeneware","followers_url":"https:\/\/api.github.com\/users\/eugeneware\/followers","following_url":"https:\/\/api.github.com\/users\/eugeneware\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/eugeneware\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/eugeneware\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/eugeneware\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/eugeneware\/orgs","repos_url":"https:\/\/api.github.com\/users\/eugeneware\/repos","events_url":"https:\/\/api.github.com\/users\/eugeneware\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/eugeneware\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Facing a similar issue here. My model using SQuAD dataset takes about 1h to process with in memory data and more than 2h with datasets directly.","And if you use in-memory-data with datasets with `load_dataset(..., keep_in_memory=True)`?","Thanks for the tip @thomwolf ! I did not see that flag in the docs. I'll try with that.","We should add it indeed and also maybe a specific section with all the tips for maximal speed. What do you think @lhoestq @SBrandeis @yjernite ?","By default the datasets loaded with `load_dataset` live on disk.\r\nIt's possible to load them in memory by using some transforms like `.map(..., keep_in_memory=True)`.\r\n\r\nSmall correction to @thomwolf 's comment above: currently we don't have the `keep_in_memory` parameter for `load_dataset` AFAIK but it would be nice to add it indeed :)","Yes indeed we should add it!","Great! 
Thanks a lot.\r\n\r\nI did a test using `map(..., keep_in_memory=True)` and also a test using in-memory only data.\r\n\r\n```python\r\nfeatures = dataset.map(tokenize, batched=True, remove_columns=dataset['train'].column_names)\r\nfeatures.set_format(type='torch', columns=['input_ids', 'token_type_ids', 'attention_mask'])\r\n\r\nfeatures_in_memory = dataset.map(tokenize, batched=True, keep_in_memory=True, remove_columns=dataset['train'].column_names)\r\nfeatures_in_memory.set_format(type='torch', columns=['input_ids', 'token_type_ids', 'attention_mask'])\r\n\r\nin_memory = [features['train'][i] for i in range(len(features['train']))]\r\n```\r\n\r\nFor using the features without any tweak, I got **1min17s** for copying the entire DataLoader to CUDA:\r\n\r\n```\r\n%%time\r\n\r\nfor i, batch in enumerate(DataLoader(features['train'], batch_size=16, num_workers=4)):\r\n batch['input_ids'].to(device)\r\n```\r\n\r\nFor using the features mapped with `keep_in_memory=True`, I also got **1min17s** for copying the entire DataLoader to CUDA:\r\n\r\n```\r\n%%time\r\n\r\nfor i, batch in enumerate(DataLoader(features_in_memory['train'], batch_size=16, num_workers=4)):\r\n batch['input_ids'].to(device)\r\n```\r\n\r\nAnd for the case using every element in memory, converted from the original dataset, I got **12.5s**:\r\n\r\n```\r\n%%time\r\n\r\nfor i, batch in enumerate(DataLoader(in_memory, batch_size=16, num_workers=4)):\r\n batch['input_ids'].to(device)\r\n```\r\n\r\nTaking a closer look in my SQuAD code, using a profiler, I see a lot of calls to `posix read` api. It seems that it is really reliying on disk, which results in a very high train time.","I am having the same issue here. When loading from memory I can get the GPU up to 70% util but when loading after mapping I can only get 40%.\r\n\r\nIn disk:\r\n```\r\nbook_corpus = load_dataset('bookcorpus', 'plain_text', cache_dir='\/home\/ad\/Desktop\/bookcorpus', split='train[:20%]')\r\nbook_corpus = book_corpus.map(encode, batched=True, num_proc=20, load_from_cache_file=True, batch_size=2500)\r\nbook_corpus.set_format(type='torch', columns=['text', \"input_ids\", \"attention_mask\", \"token_type_ids\"])\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\".\/mobile_bert_big\",\r\n overwrite_output_dir=True,\r\n num_train_epochs=1,\r\n per_device_train_batch_size=32,\r\n per_device_eval_batch_size=16,\r\n save_steps=50,\r\n save_total_limit=2,\r\n logging_first_step=True,\r\n warmup_steps=100,\r\n logging_steps=50,\r\n eval_steps=100,\r\n no_cuda=False,\r\n gradient_accumulation_steps=16,\r\n fp16=True)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=book_corpus,\r\n tokenizer=tokenizer)\r\n```\r\n\r\nIn disk I can only get 0,17 it\/s:\r\n`[ 13\/28907 01:03 < 46:03:27, 0.17 it\/s, Epoch 0.00\/1] `\r\n\r\nIf I load it with torch.utils.data.Dataset()\r\n```\r\nclass BCorpusDataset(torch.utils.data.Dataset):\r\n def __init__(self, encodings):\r\n self.encodings = encodings\r\n\r\n def __getitem__(self, idx):\r\n item = [torch.tensor(val[idx]) for key, val in self.encodings.items()][0]\r\n return item\r\n\r\n def __len__(self):\r\n length = [len(val) for key, val in self.encodings.items()][0]\r\n return length\r\n\r\n**book_corpus = book_corpus.select([i for i in range(16*2000)])** # filtering to not have 20% of BC in memory...\r\nbook_corpus = book_corpus(book_corpus)\r\n```\r\nI can get:\r\n` [ 5\/62 00:09 < 03:03, 0.31 it\/s, Epoch 0.06\/1]`\r\n\r\nBut obviously I can not get 
BookCorpus in memory xD\r\n\r\nEDIT: it is something weird. If i load in disk 1% of bookcorpus:\r\n```\r\nbook_corpus = load_dataset('bookcorpus', 'plain_text', cache_dir='\/home\/ad\/Desktop\/bookcorpus', split='train[:1%]')\r\n```\r\n\r\nI can get 0.28 it\/s, (the same that in memory) but if I load 20% of bookcorpus:\r\n```\r\nbook_corpus = load_dataset('bookcorpus', 'plain_text', cache_dir='\/home\/ad\/Desktop\/bookcorpus', split='train[:20%]')\r\n```\r\nI get again 0.17 it\/s. \r\n\r\nI am missing something? I think it is something related to size, and not disk or in-memory.","There is a way to increase the batches read from memory? or multiprocessed it? I think that one of two or it is reading with just 1 core o it is reading very small chunks from disk and left my GPU at 0 between batches","My fault! I had not seen the `dataloader_num_workers` in `TrainingArguments` ! Now I can parallelize and go fast! Sorry, and thanks."],"created_at":1601707447000,"updated_at":1613139208000,"closed_at":1613139208000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.\r\n\r\nNow, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower.\r\n\r\nFor example, in the `yelp_polarity` dataset (560000 datapoints, or 17500 batches of 32), it was taking me 3:31 to just get process the data and get it on the GPU (no model involved). Whereas, the equivalent in-memory dataset would finish in just 0:33.\r\n\r\nIs this expected? Given that one of the goals of this project is also accelerate dataset processing, this seems a bit slower than I would expect. I understand the advantages of being able to work on datasets that exceed memory, and that's very exciting to me, but thought I'd open this issue to discuss.\r\n\r\nFor reference I'm running a AMD Ryzen Threadripper 1900X 8-Core Processor CPU, with 128 GB of RAM and an NVME SSD Samsung 960 EVO. I'm running with an RTX Titan 24GB GPU.\r\n\r\nI can see with `iotop` that the dataset gets quickly loaded into the system read buffers, and thus doesn't incur any additional IO reads. Thus in theory, all the data *should* be in RAM, but in my benchmark code below it's still 6.4 times slower.\r\n\r\nWhat am I doing wrong? And is there a way to force the datasets to completely load into memory instead of being memory mapped in cases where you want maximum performance?\r\n\r\nAt 3:31 for 17500 batches, that's 12ms per batch. Does this 12ms just become insignificant as a proportion of forward and backward passes in practice, and thus it's not worth worrying about this in practice?\r\n\r\nIn any case, here's my code `benchmark.py`. 
If you run it with an argument of `memory` it will copy the data into memory before executing the same test.\r\n\r\n``` py\r\nimport sys\r\nfrom datasets import load_dataset\r\nfrom transformers import DataCollatorWithPadding, BertTokenizerFast\r\nfrom torch.utils.data import DataLoader\r\nfrom tqdm import tqdm\r\n\r\nif __name__ == '__main__':\r\n tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')\r\n collate_fn = DataCollatorWithPadding(tokenizer, padding=True)\r\n\r\n ds = load_dataset('yelp_polarity')\r\n\r\n def do_tokenize(x):\r\n return tokenizer(x['text'], truncation=True)\r\n\r\n ds = ds.map(do_tokenize, batched=True)\r\n ds.set_format('torch', ['input_ids', 'token_type_ids', 'attention_mask'])\r\n\r\n if len(sys.argv) == 2 and sys.argv[1] == 'memory':\r\n # copy to memory - probably a faster way to do this - but demonstrates the point\r\n # approximately 530 batches per second - 17500 batches in 0:33\r\n print('using memory')\r\n _ds = [data for data in tqdm(ds['train'])]\r\n else:\r\n # approximately 83 batches per second - 17500 batches in 3:31\r\n print('using datasets')\r\n _ds = ds['train']\r\n\r\n dl = DataLoader(_ds, shuffle=True, collate_fn=collate_fn, batch_size=32, num_workers=4)\r\n\r\n for data in tqdm(dl):\r\n for k, v in data.items():\r\n data[k] = v.to('cuda')\r\n```\r\n\r\nFor reference, my conda environment is [here](https:\/\/gist.github.com\/05b6101518ff70ed42a858b302a0405d)\r\n\r\nOnce again, I'm very excited about this library, and how easy it is to load datasets, and to do so without worrying about system memory constraints.\r\n\r\nThanks for all your great work.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/708\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/707","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/707\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/707\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/707\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/707","id":713954666,"node_id":"MDU6SXNzdWU3MTM5NTQ2NjY=","number":707,"title":"Requirements should specify pyarrow<1","user":{"login":"mathcass","id":918541,"node_id":"MDQ6VXNlcjkxODU0MQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/918541?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mathcass","html_url":"https:\/\/github.com\/mathcass","followers_url":"https:\/\/api.github.com\/users\/mathcass\/followers","following_url":"https:\/\/api.github.com\/users\/mathcass\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mathcass\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mathcass\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mathcass\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mathcass\/orgs","repos_url":"https:\/\/api.github.com\/users\/mathcass\/repos","events_url":"https:\/\/api.github.com\/users\/mathcass\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mathcass\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hello @mathcass I would want to work on this issue. 
May I do the same? ","@punitaojha, certainly. Feel free to work on this. Let me know if you need any help or clarity.","Hello @mathcass \r\n1. I did fork the repository and clone the same on my local system. \r\n\r\n2. Then learnt about how we can publish our package on pypi.org. Also, found some instructions on same in setup.py documentation.\r\n\r\n3. Then I Perplexity document link that you shared above. I created a colab link from there keep both tensorflow and pytorch means a mixed option and tried to run it in colab but I encountered no errors at a point where you mentioned. Can you help me to figure out the issue. \r\n\r\n4.Here is the link of the colab file with my saved responses. \r\nhttps:\/\/colab.research.google.com\/drive\/1hfYz8Ira39FnREbxgwa_goZWpOojp2NH?usp=sharing","Also, please share some links which made you conclude that pyarrow < 1 would help. ","Access granted for the colab link. ","Thanks for looking at this @punitaojha and thanks for sharing the notebook. \r\n\r\nI just tried to reproduce this on my own (based on the environment where I had this issue) and I can't reproduce it somehow. If I run into this again, I'll include some steps to reproduce it. I'll close this as invalid. \r\n\r\nThanks again. ","I am sorry for hijacking this closed issue, but I believe I was able to reproduce this very issue. Strangely enough, it also turned out that running `pip install \"pyarrow<1\" --upgrade` did indeed fix the issue (PyArrow was installed in version `0.14.1` in my case).\r\n\r\nPlease see the Colab below:\r\n\r\nhttps:\/\/colab.research.google.com\/drive\/15QQS3xWjlKW2aK0J74eEcRFuhXUddUST\r\n\r\nThanks!"],"created_at":1601681979000,"updated_at":1607070159000,"closed_at":1601844628000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I was looking at the docs on [Perplexity](https:\/\/huggingface.co\/transformers\/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error,\r\n\r\n```\r\nmodule 'pyarrow' has no attribute 'PyExtensionType'\r\n```\r\nI traced it back to datasets having installed PyArrow 1.0.1 but there's not pinning in the setup file. 
\r\n\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/e86a2a8f869b91654e782c9133d810bb82783200\/setup.py#L68\r\n\r\nDowngrading by installing `pip install \"pyarrow<1\"` resolved the issue.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/707\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/706","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/706\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/706\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/706\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/706","id":713721959,"node_id":"MDExOlB1bGxSZXF1ZXN0NDk2OTkwMDA0","number":706,"title":"Fix config creation for data files with NamedSplit","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1601653609000,"updated_at":1601885700000,"closed_at":1601885699000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/706","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/706","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/706.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/706.patch"},"body":"During config creation, we need to iterate through the data files of all the splits to compute a hash.\r\nTo make sure the hash is unique given a certain combination of files\/splits, we sort the split names.\r\nHowever the `NamedSplit` objects can't be passed to `sorted` and currently it raises an error: we need to sort the string of their names instead.\r\n\r\nFix #705 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/706\/timeline","performed_via_github_app":null,"is_pull_request":true} 
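Editor's note (not part of the original records): PR #706 above fixes the `sorted(data_files.keys())` failure by sorting split names as strings rather than as `NamedSplit` objects. The snippet below is an illustrative sketch of that idea, not the library's actual patch; the `data_files` mapping is a hypothetical example.

```python
# Illustrative sketch of the fix described in PR #706 (not the actual library code):
# NamedSplit objects are not orderable, so sort the split keys by their string names
# before iterating/hashing a data_files mapping.
from datasets import Split

data_files = {Split.TRAIN: ["train.csv"], Split.TEST: ["test.csv"]}  # hypothetical example

# sorted(data_files.keys()) raises:
#   TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit'
for key in sorted(data_files.keys(), key=str):    # sort by the split's string name instead
    print(str(key), data_files[key])
```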
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/705","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/705\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/705\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/705\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/705","id":713709100,"node_id":"MDU6SXNzdWU3MTM3MDkxMDA=","number":705,"title":"TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit'","user":{"login":"pvcastro","id":12713359,"node_id":"MDQ6VXNlcjEyNzEzMzU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12713359?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/pvcastro","html_url":"https:\/\/github.com\/pvcastro","followers_url":"https:\/\/api.github.com\/users\/pvcastro\/followers","following_url":"https:\/\/api.github.com\/users\/pvcastro\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/pvcastro\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/pvcastro\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/pvcastro\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/pvcastro\/orgs","repos_url":"https:\/\/api.github.com\/users\/pvcastro\/repos","events_url":"https:\/\/api.github.com\/users\/pvcastro\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/pvcastro\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/use
rs\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi !\r\nThanks for reporting :) \r\nIndeed this is an issue on the `datasets` side.\r\nI'm creating a PR","Thanks @lhoestq !"],"created_at":1601652475000,"updated_at":1601885699000,"closed_at":1601885699000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Environment info\r\n<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.\r\n Don't forget to fill out the missing fields in that output! -->\r\n \r\n- `transformers` version: 3.3.1 (installed from master)\r\n- `datasets` version: 1.0.2 (installed as a dependency from transformers)\r\n- Platform: Linux-4.15.0-118-generic-x86_64-with-debian-stretch-sid\r\n- Python version: 3.7.9\r\n\r\nI'm testing my own text classification dataset using [this example](https:\/\/github.com\/huggingface\/transformers\/tree\/master\/examples\/text-classification#run-generic-text-classification-script-in-tensorflow) from transformers. The dataset is split into train \/ dev \/ test, and in csv format, containing just a text and a label columns, using comma as sep. Here's a sample:\r\n```\r\ntext,label\r\n\"Registra-se a presen\u00e7a do acad\u00eamico <name> . <REL_SEP> Ao me deparar com a descri\u00e7\u00e3o de dois autores no polo ativo da a\u00e7\u00e3o junto ao PJe , margem esquerda foi informado pela procuradora do reclamante que se trata de uma reclama\u00e7\u00e3o trabalhista individual . <REL_SEP> Diante disso , face a aus\u00eancia injustificada do autor <name> , determina-se o ARQUIVAMENTO do presente processo , com rela\u00e7\u00e3o a este , nos termos do [[ art . 844 da CLT ]] . <REL_SEP> CUSTAS AUTOR - DISPENSADO <REL_SEP> Custas pelo autor no importe de R $326,82 , calculadas sobre R $16.341,03 , dispensadas na forma da lei , em virtude da concess\u00e3o dos benef\u00edcios da Justi\u00e7a Gratuita , ora deferida . <REL_SEP> Cientes os presentes . <REL_SEP> Audi\u00eancia encerrada \u00e0s 8h42min . <REL_SEP> <name> <REL_SEP> Ju\u00edza do Trabalho <REL_SEP> Ata redigida por << <name> >> , Secret\u00e1rio de Audi\u00eancia .\",NO_RELATION\r\n```\r\n\r\nHowever, @Santosh-Gupta reported in #7351 that he had the exact same problem using the ChemProt dataset. His colab notebook is referenced in the following section.\r\n\r\n## To reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. Created a new conda environment using conda env -n transformers python=3.7\r\n2. Cloned transformers master, `cd` into it and installed using pip install --editable . -r examples\/requirements.txt \r\n3. Installed tensorflow with `pip install tensorflow`\r\n3. 
Ran `run_tf_text_classification.py` with the following parameters:\r\n\r\n```\r\n--train_file <DATASET_PATH>\/train.csv \\\r\n--dev_file <DATASET_PATH>\/dev.csv \\ \r\n--test_file <DATASET_PATH>\/test.csv \\\r\n--label_column_id 1 \\\r\n--model_name_or_path neuralmind\/bert-base-portuguese-cased \\\r\n--output_dir <OUTPUT_PATH> \\\r\n--num_train_epochs 4 \\\r\n--per_device_train_batch_size 4 \\\r\n--per_device_eval_batch_size 4 \\\r\n--do_train \\\r\n--do_eval \\\r\n--do_predict \\\r\n--logging_steps 1000 \\\r\n--evaluate_during_training \\\r\n--save_steps 1000 \\\r\n--overwrite_output_dir \\\r\n--overwrite_cache\r\n```\r\n\r\nI have also copied [@Santosh-Gupta 's colab notebook](https:\/\/colab.research.google.com\/drive\/11APei6GjphCZbH5wD9yVlfGvpIkh8pwr?usp=sharing) as a reference.\r\n\r\n<!-- If you have code snippets, error messages, stack traces please provide them here as well.\r\n Important! Use code tags to correctly format your code. See https:\/\/help.github.com\/en\/github\/writing-on-github\/creating-and-highlighting-code-blocks#syntax-highlighting\r\n Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->\r\n\r\nHere is the stack trace:\r\n\r\n```\r\n2020-10-02 07:33:41.622011: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1\r\n\/media\/discoD\/repositorios\/transformers_pedro\/src\/transformers\/training_args.py:333: FutureWarning: The `evaluate_during_training` argument is deprecated in favor of `evaluation_strategy` (which has more options)\r\n FutureWarning,\r\n2020-10-02 07:33:43.471648: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1\r\n2020-10-02 07:33:43.471791: I tensorflow\/stream_executor\/cuda\/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-10-02 07:33:43.472664: I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:1716] Found device 0 with properties: \r\npciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1\r\ncoreClock: 1.7085GHz coreCount: 15 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 238.66GiB\/s\r\n2020-10-02 07:33:43.472684: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1\r\n2020-10-02 07:33:43.472765: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10\r\n2020-10-02 07:33:43.472809: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10\r\n2020-10-02 07:33:43.472848: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10\r\n2020-10-02 07:33:43.474209: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10\r\n2020-10-02 07:33:43.474276: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10\r\n2020-10-02 07:33:43.561219: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7\r\n2020-10-02 07:33:43.561397: I tensorflow\/stream_executor\/cuda\/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at 
least one NUMA node, so returning NUMA node zero\r\n2020-10-02 07:33:43.562345: I tensorflow\/stream_executor\/cuda\/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-10-02 07:33:43.563219: I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:1858] Adding visible gpu devices: 0\r\n2020-10-02 07:33:43.563595: I tensorflow\/core\/platform\/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA\r\nTo enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\r\n2020-10-02 07:33:43.570091: I tensorflow\/core\/platform\/profile_utils\/cpu_utils.cc:104] CPU Frequency: 3591830000 Hz\r\n2020-10-02 07:33:43.570494: I tensorflow\/compiler\/xla\/service\/service.cc:168] XLA service 0x560842432400 initialized for platform Host (this does not guarantee that XLA will be used). Devices:\r\n2020-10-02 07:33:43.570511: I tensorflow\/compiler\/xla\/service\/service.cc:176] StreamExecutor device (0): Host, Default Version\r\n2020-10-02 07:33:43.570702: I tensorflow\/stream_executor\/cuda\/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-10-02 07:33:43.571599: I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:1716] Found device 0 with properties: \r\npciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1\r\ncoreClock: 1.7085GHz coreCount: 15 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 238.66GiB\/s\r\n2020-10-02 07:33:43.571633: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1\r\n2020-10-02 07:33:43.571645: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10\r\n2020-10-02 07:33:43.571654: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10\r\n2020-10-02 07:33:43.571664: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10\r\n2020-10-02 07:33:43.571691: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10\r\n2020-10-02 07:33:43.571704: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10\r\n2020-10-02 07:33:43.571718: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7\r\n2020-10-02 07:33:43.571770: I tensorflow\/stream_executor\/cuda\/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-10-02 07:33:43.572641: I tensorflow\/stream_executor\/cuda\/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-10-02 07:33:43.573475: I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:1858] Adding visible gpu devices: 0\r\n2020-10-02 07:33:47.139227: I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:\r\n2020-10-02 07:33:47.139265: I 
tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:1263] 0 \r\n2020-10-02 07:33:47.139272: I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:1276] 0: N \r\n2020-10-02 07:33:47.140323: I tensorflow\/stream_executor\/cuda\/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-10-02 07:33:47.141248: I tensorflow\/stream_executor\/cuda\/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-10-02 07:33:47.142085: I tensorflow\/stream_executor\/cuda\/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2020-10-02 07:33:47.142854: I tensorflow\/core\/common_runtime\/gpu\/gpu_device.cc:1402] Created TensorFlow device (\/job:localhost\/replica:0\/task:0\/device:GPU:0 with 5371 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)\r\n2020-10-02 07:33:47.146317: I tensorflow\/compiler\/xla\/service\/service.cc:168] XLA service 0x5608b95dc5c0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:\r\n2020-10-02 07:33:47.146336: I tensorflow\/compiler\/xla\/service\/service.cc:176] StreamExecutor device (0): GeForce GTX 1070, Compute Capability 6.1\r\n10\/02\/2020 07:33:47 - INFO - __main__ - n_replicas: 1, distributed training: False, 16-bits training: False\r\n10\/02\/2020 07:33:47 - INFO - __main__ - Training\/evaluation parameters TFTrainingArguments(output_dir='\/media\/discoD\/models\/datalawyer\/pedidos\/transformers_tf', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=True, evaluate_during_training=True, evaluation_strategy=<EvaluationStrategy.STEPS: 'steps'>, prediction_loss_only=False, per_device_train_batch_size=4, per_device_eval_batch_size=4, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=4.0, max_steps=-1, warmup_steps=0, logging_dir='runs\/Oct02_07-33-43_user-XPS-8700', logging_first_step=False, logging_steps=1000, save_steps=1000, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=1000, dataloader_num_workers=0, past_index=-1, run_name='\/media\/discoD\/models\/datalawyer\/pedidos\/transformers_tf', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=False, tpu_name=None, xla=False)\r\n10\/02\/2020 07:33:53 - INFO - filelock - Lock 140407857405776 acquired on \/home\/user\/.cache\/huggingface\/datasets\/e0f1e9ed46db1e2429189f06b479cbd4075c0976104c1aacf8f77d9a53d2ad87.03756fef6da334f50a7ff73608e21b5018229944ca250416ce7352e25d84a552.py.lock\r\n10\/02\/2020 07:33:53 - INFO - filelock - Lock 140407857405776 released on \/home\/user\/.cache\/huggingface\/datasets\/e0f1e9ed46db1e2429189f06b479cbd4075c0976104c1aacf8f77d9a53d2ad87.03756fef6da334f50a7ff73608e21b5018229944ca250416ce7352e25d84a552.py.lock\r\nUsing custom data configuration default\r\nTraceback (most recent call last):\r\n File \"run_tf_text_classification.py\", line 283, in <module>\r\n 
main()\r\n File \"run_tf_text_classification.py\", line 222, in main\r\n max_seq_length=data_args.max_seq_length,\r\n File \"run_tf_text_classification.py\", line 43, in get_tfds\r\n ds = datasets.load_dataset(\"csv\", data_files=files)\r\n File \"\/media\/discoD\/anaconda3\/envs\/transformers\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 604, in load_dataset\r\n **config_kwargs,\r\n File \"\/media\/discoD\/anaconda3\/envs\/transformers\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 158, in __init__\r\n **config_kwargs,\r\n File \"\/media\/discoD\/anaconda3\/envs\/transformers\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 269, in _create_builder_config\r\n for key in sorted(data_files.keys()):\r\nTypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit'\r\n```\r\n\r\n## Expected behavior\r\n\r\nShould be able to run the text-classification example as described in [https:\/\/github.com\/huggingface\/transformers\/tree\/master\/examples\/text-classification#run-generic-text-classification-script-in-tensorflow](https:\/\/github.com\/huggingface\/transformers\/tree\/master\/examples\/text-classification#run-generic-text-classification-script-in-tensorflow)\r\n\r\nOriginally opened this issue at transformers' repository: [https:\/\/github.com\/huggingface\/transformers\/issues\/7535](https:\/\/github.com\/huggingface\/transformers\/issues\/7535). @jplu instructed me to open here, since according to [this](https:\/\/github.com\/huggingface\/transformers\/issues\/7535#issuecomment-702778885) evidence, the problem is from datasets.\r\n\r\nThanks!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/705\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/704","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/704\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/704\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/704\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/704","id":713572556,"node_id":"MDExOlB1bGxSZXF1ZXN0NDk2ODY2NTQ0","number":704,"title":"Fix remote tests for new 
datasets","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1601640484000,"updated_at":1601640722000,"closed_at":1601640721000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/704","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/704","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/704.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/704.patch"},"body":"When adding a new dataset, the remote tests fail because they try to get the new dataset from the master branch (i.e., where the dataset doesn't exist yet)\r\nTo fix that I reverted to the use of the HF API that fetch the available datasets on S3 that is synced with the master branch","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/704\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/703","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/703\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/703\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/703\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/703","id":713559718,"node_id":"MDExOlB1bGxSZXF1ZXN0NDk2ODU1OTQ5","number":703,"title":"Add hotpot 
QA","user":{"login":"ghomasHudson","id":13795113,"node_id":"MDQ6VXNlcjEzNzk1MTEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13795113?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ghomasHudson","html_url":"https:\/\/github.com\/ghomasHudson","followers_url":"https:\/\/api.github.com\/users\/ghomasHudson\/followers","following_url":"https:\/\/api.github.com\/users\/ghomasHudson\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ghomasHudson\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ghomasHudson\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ghomasHudson\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ghomasHudson\/orgs","repos_url":"https:\/\/api.github.com\/users\/ghomasHudson\/repos","events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Awesome :) \r\n\r\nDon't pay attention to the RemoteDatasetTest error, I'm fixing it right now","You can rebase from master to fix the CI test :)","If we're lucky we can even include this dataset in today's release","Just thinking since `type` can only be `comparison` or `bridge` and `level` can only be `easy`, `medium`, `hard` should they be `ClassLabel`?","> Just thinking since `type` can only be `comparison` or `bridge` and `level` can only be `easy`, `medium`, `hard` should they be `ClassLabel`?\r\n\r\nI think it's more a tag than a label. I guess a string is fine\r\n"],"created_at":1601639068000,"updated_at":1601643281000,"closed_at":1601643281000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/703","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/703","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/703.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/703.patch"},"body":"Added the [HotpotQA](https:\/\/github.com\/hotpotqa\/hotpot) multi-hop question answering dataset.\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/703\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/702","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/702\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/702\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/702\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/702","id":713499628,"node_id":"MDExOlB1bGxSZXF1ZXN0NDk2ODA3Mjg4","number":702,"title":"Complete rouge 
kwargs","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1601632741000,"updated_at":1601633464000,"closed_at":1601633463000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/702","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/702","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/702.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/702.patch"},"body":"In #701 we noticed that some kwargs were missing for rouge","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/702\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/701","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/701\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/701\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/701\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/701","id":713485757,"node_id":"MDExOlB1bGxSZXF1ZXN0NDk2Nzk2MTQ1","number":701,"title":"Add rouge 2 and rouge Lsum to rouge metric outputs","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Oups too late, 
sorry"],"created_at":1601631346000,"updated_at":1601632514000,"closed_at":1601632338000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/701","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/701","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/701.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/701.patch"},"body":"Continuation of #700 \r\n\r\nRouge 2 and Rouge Lsum were missing in Rouge's outputs.\r\nRouge Lsum is also useful to evaluate Rouge L for sentences with `\\n`\r\n\r\nFix #617 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/701\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/700","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/700\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/700\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/700\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/700","id":713450295,"node_id":"MDExOlB1bGxSZXF1ZXN0NDk2NzY3MTMz","number":700,"title":"Add rouge-2 in rouge_types for metric calculation","user":{"login":"Shashi456","id":18056781,"node_id":"MDQ6VXNlcjE4MDU2Nzgx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/18056781?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Shashi456","html_url":"https:\/\/github.com\/Shashi456","followers_url":"https:\/\/api.github.com\/users\/Shashi456\/followers","following_url":"https:\/\/api.github.com\/users\/Shashi456\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Shashi456\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Shashi456\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Shashi456\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Shashi456\/orgs","repos_url":"https:\/\/api.github.com\/users\/Shashi456\/repos","events_url":"https:\/\/api.github.com\/users\/Shashi456\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Shashi456\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Indeed there's currently a mismatch between the description and what it rouge actually returns.\r\nThanks for proposing this fix :) \r\n\r\nI think it's better to return rouge 1-2-L.\r\nWas there a reason to only include rouge 1 and rouge L @thomwolf ? ","rougeLsum is also missing, could you add it ?","Adding `RougeLSum` would fix https:\/\/github.com\/huggingface\/datasets\/issues\/617","I am opening a PR with both of them right now actually :)","Also the format of the output isn't exactly ideal, It's usually only the F-1 score that is cared about. \r\n\r\nFormatting the output to reflect how `ROUGE-1-5-5` (the perl version thats usually used and pyrouge is a wrapper over it), would be better.\r\n\r\n","I'll close this since you seem to have already added it in another PR. 
Sorry for the delay in responding to you @lhoestq.","What do you mean by \"Formatting the output to reflect how ROUGE-1-5-5\" @Shashi456 ?","I like the idea of returning all the scores for two reason:\r\n- Rouge's aggregator does sampling and therefore it returns \"low\" \"mid\" and \"high\" scores\r\n- It is interesting to have the precision and recall to see how the F1 score was computed\r\nBut I understand your point that returning only the F1 score makes sense since it's the one that's always used ","@thomwolf the scores now returned look like this:\r\n```\r\n{'rouge1': AggregateScore(low=Score(precision=0.16620308156871524, recall=0.18219819615984395, fmeasure=0.16226017699359463), mid=Score(precision=0.17274338501705871, recall=0.1890957812369246, fmeasure=0.16823877588620403), high=Score(precision=0.17934569582981455, recall=0.1965626706042028, fmeasure=0.17491509794856058)), \r\n'rouge2': AggregateScore(low=Score(precision=0.12478835737689957, recall=0.1362113231755514, fmeasure=0.12055941950062395), mid=Score(precision=0.1303967602691664, recall=0.1423747229852964, fmeasure=0.1258363976151122), high=Score(precision=0.13654527560789362, recall=0.1488071465116122, fmeasure=0.13184989406704056)), \r\n'rougeL': AggregateScore(low=Score(precision=0.16568068818352072, recall=0.1811919016674486, fmeasure=0.1614784523482225), mid=Score(precision=0.17156684723552357, recall=0.1879777628247058, fmeasure=0.16720699286250762), high=Score(precision=0.17788847350584547, recall=0.1948899838530898, fmeasure=0.17316501523379826))}\r\n```\r\n\r\nWhile when computed through the perl rouge script, it looks like:\r\n```\r\nROUGE-1 Average_R: 0.34775 (95%-conf.int. 0.34546 - 0.35025)\r\nROUGE-1 Average_P: 0.19381 (95%-conf.int. 0.19246 - 0.19538)\r\nROUGE-1 Average_F: 0.24070 (95%-conf.int. 0.23925 - 0.24230)\r\n---------------------------------------------\r\nROUGE-2 Average_R: 0.07160 (95%-conf.int. 0.07010 - 0.07298)\r\nROUGE-2 Average_F: 0.04845 (95%-conf.int. 0.04741 - 0.04942)\r\n---------------------------------------------\r\nROUGE-L Average_R: 0.26404 (95%-conf.int. 0.26215 - 0.26598)\r\nROUGE-L Average_P: 0.14696 (95%-conf.int. 0.14576 - 0.14815)\r\nROUGE-L Average_F: 0.18245 (95%-conf.int. 0.18120 - 0.18367)\r\n```\r\nwhile the wrapper returns the much more readable:\r\n```\r\n[2020-07-30 18:13:38,556 INFO] Rouges at step 13000 \r\n>> ROUGE-F(1\/2\/3\/l): 43.43\/20.42\/39.78 \r\nROUGE-R(1\/2\/3\/l): 53.91\/25.34\/49.32\r\n```\r\n\r\nThe formatting allows for easy reading, and although \"low\", \"mid\", \"high\" make sense, this is more concise and effective. \r\n\r\nOne way of changing this might be to return a dictionary that returns values like `rouge_1_precision`, `rouge_1_F1`, `rouge_1_recall`, and maybe also having the ability to get the values you are interested in and keeping `recall` and `F1` as default.","cc: @lhoestq ","Ok I see.\r\nI think it's also important to follow one of the existing output format (there are already too many different formats, let's try not to add another different one)\r\nI'd still stick with the current format and not transform the output of the python implementation of rouge since it's already widely used.\r\nWhat do you think ?","Maybe we could convert the dataclasses in dictionnaries, would that help @Shashi456 ?","@thomwolf yeah I think that would help. I initially didn't understand the high low mid categories. 
Dictionaries could help in this case I guess, and if we allow the user to choose what they want i.e F1 and precision or recall."],"created_at":1601627805000,"updated_at":1601636929000,"closed_at":1601632745000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/700","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/700","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/700.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/700.patch"},"body":"The description of the ROUGE metric says, \r\n```\r\n_KWARGS_DESCRIPTION = \"\"\"\r\nCalculates average rouge scores for a list of hypotheses and references\r\nArgs:\r\n predictions: list of predictions to score. Each predictions\r\n should be a string with tokens separated by spaces.\r\n references: list of reference for each prediction. Each\r\n reference should be a string with tokens separated by spaces.\r\nReturns:\r\n rouge1: rouge_1 f1,\r\n rouge2: rouge_2 f1,\r\n rougeL: rouge_l f1,\r\n rougeLsum: rouge_l precision\r\n\"\"\"\r\n```\r\n\r\nbut the `rouge_types` argument defaults to `rouge_types = [\"rouge1\", \"rougeL\"]`, this PR updates and add `rouge2` to the list so as to reflect the description card.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/700\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/699","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/699\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/699\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/699\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/699","id":713395642,"node_id":"MDU6SXNzdWU3MTMzOTU2NDI=","number":699,"title":"XNLI dataset is not loading ","user":{"login":"imadarsh1001","id":14936525,"node_id":"MDQ6VXNlcjE0OTM2NTI1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14936525?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/imadarsh1001","html_url":"https:\/\/github.com\/imadarsh1001","followers_url":"https:\/\/api.github.com\/users\/imadarsh1001\/followers","following_url":"https:\/\/api.github.com\/users\/imadarsh1001\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/imadarsh1001\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/imadarsh1001\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/imadarsh1001\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/imadarsh1001\/orgs","repos_url":"https:\/\/api.github.com\/users\/imadarsh1001\/repos","events_url":"https:\/\/api.github.com\/users\/imadarsh1001\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/imadarsh1001\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["also i tried below code to solve checksum error \r\n`datasets-cli test .\/datasets\/xnli --save_infos --all_configs`\r\n\r\nand it shows \r\n\r\n```\r\n2020-10-02 07:06:16.588760: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\r\nTraceback (most recent call last):\r\n File 
\"\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 268, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 308, in cached_path\r\n use_etag=download_config.use_etag,\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 474, in get_from_cache\r\n raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\nFileNotFoundError: Couldn't find file at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.0.2\/datasets\/.\/datasets\/xnli\/xnli.py\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 279, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 308, in cached_path\r\n use_etag=download_config.use_etag,\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/utils\/file_utils.py\", line 474, in get_from_cache\r\n raise FileNotFoundError(\"Couldn't find file at {}\".format(url))\r\nFileNotFoundError: Couldn't find file at https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/datasets\/datasets\/.\/datasets\/xnli\/xnli.py\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"\/opt\/conda\/bin\/datasets-cli\", line 36, in <module>\r\n service.run()\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/commands\/test.py\", line 76, in run\r\n module_path, hash = prepare_module(path)\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 283, in prepare_module\r\n combined_path, github_file_path, file_path\r\nFileNotFoundError: Couldn't find file locally at .\/datasets\/xnli\/xnli.py, or remotely at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.0.2\/datasets\/.\/datasets\/xnli\/xnli.py or https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/datasets\/datasets\/.\/datasets\/xnli\/xnli.py\r\n```\r\n\r\n","Hi !\r\nYes the download url changed.\r\nIt's updated on the master branch. 
I'm doing a release today to fix that :)","the issue is fixed with latest release \r\n\r\n"],"created_at":1601621596000,"updated_at":1601747152000,"closed_at":1601747017000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"`dataset = datasets.load_dataset(path='xnli')`\r\n\r\nshowing below error \r\n```\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/nlp\/utils\/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)\r\n 36 if len(bad_urls) > 0:\r\n 37 error_msg = \"Checksums didn't match\" + for_verification_name + \":\\n\"\r\n---> 38 raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\n 39 logger.info(\"All the checksums matched successfully\" + for_verification_name)\r\n 40 \r\n\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https:\/\/www.nyu.edu\/projects\/bowman\/xnli\/XNLI-1.0.zip']\r\n```\r\n\r\nI think URL is now changed to \"https:\/\/cims.nyu.edu\/~sbowman\/xnli\/XNLI-MT-1.0.zip\"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/699\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/697","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/697\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/697\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/697\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/697","id":712979029,"node_id":"MDExOlB1bGxSZXF1ZXN0NDk2MzczNDU5","number":697,"title":"Update README.md","user":{"login":"bishug","id":71011306,"node_id":"MDQ6VXNlcjcxMDExMzA2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/71011306?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bishug","html_url":"https:\/\/github.com\/bishug","followers_url":"https:\/\/api.github.com\/users\/bishug\/followers","following_url":"https:\/\/api.github.com\/users\/bishug\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bishug\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bishug\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bishug\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bishug\/orgs","repos_url":"https:\/\/api.github.com\/users\/bishug\/repos","events_url":"https:\/\/api.github.com\/users\/bishug\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bishug\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1601568162000,"updated_at":1601568720000,"closed_at":1601568720000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/697","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/697","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/697.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/697.patch"},"body":"Hey I was just telling my subscribers to check out your repositories \r\nThank you","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/697\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/696","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/696\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/696\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/696\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/696","id":712942977,"node_id":"MDExOlB1bGxSZXF1ZXN0NDk2MzQzMjEy","number":696,"title":"Elasticsearch index docs","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1601565538000,"updated_at":1601624899000,"closed_at":1601624898000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/696","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/696","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/696.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/696.patch"},"body":"I added the docs for ES indexes.\r\n\r\nI also added a `load_elasticsearch_index` method to load an index that has already been built.\r\n\r\nI checked the tests for the ES index and we have tests that mock ElasticSearch.\r\nI think this is good for now but at some point it would be cool to have an end-to-end test with a real ES running.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/696\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/695","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/695\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/695\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/695\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/695","id":712843949,"node_id":"MDExOlB1bGxSZXF1ZXN0NDk2MjU5NTM0","number":695,"title":"Update XNLI download 
link","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1601558842000,"updated_at":1601560875000,"closed_at":1601560874000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/695","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/695","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/695.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/695.patch"},"body":"The old link isn't working anymore. I updated it with the new official link.\r\nFix #690 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/695\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/694","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/694\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/694\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/694\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/694","id":712827751,"node_id":"MDExOlB1bGxSZXF1ZXN0NDk2MjQ1NzU0","number":694,"title":"Use GitHub instead of aws in remote dataset 
tests","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1601557670000,"updated_at":1601624848000,"closed_at":1601624847000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/694","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/694","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/694.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/694.patch"},"body":"Recently we switched from aws s3 to github to download dataset scripts.\r\nHowever in the tests, the dummy data were still downloaded from s3.\r\nSo I changed that to download them from github instead, in the MockDownloadManager.\r\n\r\nMoreover I noticed that `anli`'s dummy data were quite heavy (18MB compressed, i.e. 
the entire dataset) so I replaced them with dummy data with few examples.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/694\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/693","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/693\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/693\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/693\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/693","id":712822200,"node_id":"MDExOlB1bGxSZXF1ZXN0NDk2MjQxMjUw","number":693,"title":"Rachel ker add dataset\/mlsum","user":{"login":"pdhg","id":32742136,"node_id":"MDQ6VXNlcjMyNzQyMTM2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32742136?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/pdhg","html_url":"https:\/\/github.com\/pdhg","followers_url":"https:\/\/api.github.com\/users\/pdhg\/followers","following_url":"https:\/\/api.github.com\/users\/pdhg\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/pdhg\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/pdhg\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/pdhg\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/pdhg\/orgs","repos_url":"https:\/\/api.github.com\/users\/pdhg\/repos","events_url":"https:\/\/api.github.com\/users\/pdhg\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/pdhg\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["It looks like an outdated PR (we've already added mlsum). 
Closing it"],"created_at":1601557270000,"updated_at":1601571673000,"closed_at":1601571673000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/693","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/693","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/693.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/693.patch"},"body":".","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/693\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/692","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/692\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/692\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/692\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/692","id":712818968,"node_id":"MDExOlB1bGxSZXF1ZXN0NDk2MjM4NzIw","number":692,"title":"Update README.md","user":{"login":"mayank1897","id":62796466,"node_id":"MDQ6VXNlcjYyNzk2NDY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/62796466?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mayank1897","html_url":"https:\/\/github.com\/mayank1897","followers_url":"https:\/\/api.github.com\/users\/mayank1897\/followers","following_url":"https:\/\/api.github.com\/users\/mayank1897\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mayank1897\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mayank1897\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mayank1897\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mayank1897\/orgs","repos_url":"https:\/\/api.github.com\/users\/mayank1897\/repos","events_url":"https:\/\/api.github.com\/users\/mayank1897\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mayank1897\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hacktoberfest spam","To enhance its readability.....not Hacktoberfest spam","How is adding a punctuation to the end of a sentence justified as \"To enhance its readability\". 
\r\nConsidering that this is not your first \"README enhancement '' please don't spam the open source community with useless PR to get a free T-Shirt it just hurts the maintainers.\r\n\r\n\/\/Joey","closed as spam"],"created_at":1601557042000,"updated_at":1601636519000,"closed_at":1601636519000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/692","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/692","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/692.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/692.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/692\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/691","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/691\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/691\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/691\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/691","id":712389499,"node_id":"MDU6SXNzdWU3MTIzODk0OTk=","number":691,"title":"Add UI filter to filter datasets based on task","user":{"login":"praateekmahajan","id":7589415,"node_id":"MDQ6VXNlcjc1ODk0MTU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7589415?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/praateekmahajan","html_url":"https:\/\/github.com\/praateekmahajan","followers_url":"https:\/\/api.github.com\/users\/praateekmahajan\/followers","following_url":"https:\/\/api.github.com\/users\/praateekmahajan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/praateekmahajan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/praateekmahajan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/praateekmahajan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/praateekmahajan\/orgs","repos_url":"https:\/\/api.github.com\/users\/praateekmahajan\/repos","events_url":"https:\/\/api.github.com\/users\/praateekmahajan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/praateekmahajan\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1601513778000,"updated_at":1603812270000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"This is great work, so huge shoutout to contributors and huggingface.\r\n\r\nThe [\/nlp\/viewer](https:\/\/huggingface.co\/nlp\/viewer\/) is great and the [\/datasets](https:\/\/huggingface.co\/datasets) page is great. 
I was wondering if in both or either places we can have a filter that selects if a dataset is good for the following tasks (non exhaustive list)\r\n\r\n- Classification\r\n\t- Multi label\r\n\t- Multi class\r\n- Q&A\r\n- Summarization\r\n- Translation\r\n\r\nI believe this feature might have some value, for folks trying to find datasets for a particular task, and then testing their model capabilities.\r\n\r\nThank you :) ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/691\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/690","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/690\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/690\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/690\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/690","id":712150321,"node_id":"MDU6SXNzdWU3MTIxNTAzMjE=","number":690,"title":"XNLI dataset: NonMatchingChecksumError","user":{"login":"xiey1","id":13307358,"node_id":"MDQ6VXNlcjEzMzA3MzU4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13307358?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/xiey1","html_url":"https:\/\/github.com\/xiey1","followers_url":"https:\/\/api.github.com\/users\/xiey1\/followers","following_url":"https:\/\/api.github.com\/users\/xiey1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/xiey1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/xiey1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/xiey1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/xiey1\/orgs","repos_url":"https:\/\/api.github.com\/users\/xiey1\/repos","events_url":"https:\/\/api.github.com\/users\/xiey1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/xiey1\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting.\r\nThe data file must have been updated by the host.\r\nI'll update the checksum with the new one.","Well actually it looks like the link isn't working anymore :(","The new link is https:\/\/cims.nyu.edu\/~sbowman\/xnli\/XNLI-1.0.zip\r\nI'll update the dataset script","I'll do a release in the next few days to make the fix available for everyone.\r\nIn the meantime you can load `xnli` with\r\n```\r\nxnli = load_dataset('xnli', script_version=\"master\")\r\n```\r\nThis will use the latest version of the xnli script (available on master branch), instead of the old one.","That's awesome! 
Thanks a lot!"],"created_at":1601488203000,"updated_at":1601572508000,"closed_at":1601560874000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\nI tried to download \"xnli\" dataset in colab using \r\n`xnli = load_dataset(path='xnli')`\r\nbut got 'NonMatchingChecksumError' error\r\n\r\n`NonMatchingChecksumError Traceback (most recent call last)\r\n<ipython-input-27-a87bedc82eeb> in <module>()\r\n----> 1 xnli = load_dataset(path='xnli')\r\n\r\n3 frames\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/utils\/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)\r\n 37 if len(bad_urls) > 0:\r\n 38 error_msg = \"Checksums didn't match\" + for_verification_name + \":\\n\"\r\n---> 39 raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\n 40 logger.info(\"All the checksums matched successfully\" + for_verification_name)\r\n 41 \r\n\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https:\/\/www.nyu.edu\/projects\/bowman\/xnli\/XNLI-1.0.zip']`\r\n\r\nThe same code worked well several days ago in colab but stopped working now. Thanks!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/690\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/689","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/689\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/689\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/689\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/689","id":712095262,"node_id":"MDExOlB1bGxSZXF1ZXN0NDk1NjMzNjMy","number":689,"title":"Switch to pandas reader for text dataset","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["If the windows tests in the CI pass, today will be a happy day"],"created_at":1601483292000,"updated_at":1601484332000,"closed_at":1601484331000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/689","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/689","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/689.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/689.patch"},"body":"Following the 
discussion in #622 , it appears that there's no appropriate ways to use the payrrow csv reader to read text files because of the separator.\r\n\r\nIn this PR I switched to pandas to read the file.\r\n\r\nMoreover pandas allows to read the file by chunk, which means that you can build the arrow dataset from a text file that is bigger than RAM (we used to have to shard text files an mentioned in https:\/\/github.com\/huggingface\/datasets\/issues\/610#issuecomment-691672919)\r\n\r\nFrom a test that I did locally on a 1GB text file, the pyarrow reader used to run in 150ms while the new one takes 650ms (multithreading off for pyarrow). This is probably due to chunking since I am having the same speed difference by calling `read()` and calling `read(chunksize)` + `readline()` to read the text file.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/689\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/688","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/688\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/688\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/688\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/688","id":711804828,"node_id":"MDExOlB1bGxSZXF1ZXN0NDk1MzkwMTc1","number":688,"title":"Disable tokenizers parallelism in multiprocessed map","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1601459614000,"updated_at":1601541946000,"closed_at":1601541945000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/688","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/688","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/688.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/688.patch"},"body":"It was reported in #620 that using multiprocessing with a tokenizers shows this message:\r\n```\r\nThe current process just got forked. 
Disabling parallelism to avoid deadlocks...\r\nTo disable this warning, please explicitly set TOKENIZERS_PARALLELISM=(true | false)\r\n```\r\nThis message is shown when TOKENIZERS_PARALLELISM is unset.\r\nMoreover if it is set to `true`, then the program just hangs.\r\n\r\nTo hide the message (if TOKENIZERS_PARALLELISM is unset) and avoid hanging (if TOKENIZERS_PARALLELISM is `true`), then I set TOKENIZERS_PARALLELISM to `false` when forking the process. After forking is gets back to its original value.\r\n\r\nAlso I added a warning if TOKENIZERS_PARALLELISM was `true` and is set to `false`:\r\n```\r\nSetting TOKENIZERS_PARALLELISM=false for forked processes.\r\n```\r\n\r\ncc @n1t0 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/688\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/687","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/687\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/687\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/687\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/687","id":711664810,"node_id":"MDU6SXNzdWU3MTE2NjQ4MTA=","number":687,"title":"`ArrowInvalid` occurs while running `Dataset.map()` function","user":{"login":"peinan","id":5601012,"node_id":"MDQ6VXNlcjU2MDEwMTI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5601012?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/peinan","html_url":"https:\/\/github.com\/peinan","followers_url":"https:\/\/api.github.com\/users\/peinan\/followers","following_url":"https:\/\/api.github.com\/users\/peinan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/peinan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/peinan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/peinan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/peinan\/orgs","repos_url":"https:\/\/api.github.com\/users\/peinan\/repos","events_url":"https:\/\/api.github.com\/users\/peinan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/peinan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi !\r\n\r\nThis is because `encode` expects one single text as input (str), or one tokenized text (List[str]).\r\nI believe that you actually wanted to use `encode_batch` which expects a batch of texts.\r\nHowever this method is only available for our \"fast\" tokenizers (ex: BertTokenizerFast).\r\nBertJapanese is not one of them unfortunately and I don't think it will be added for now (see https:\/\/github.com\/huggingface\/transformers\/pull\/7141)...\r\ncc @thomwolf for confirmation.\r\n\r\nTherefore what I'd suggest for now is disable batching and process one text at a time using `encode`.\r\nNote that you can make it faster by using multiprocessing:\r\n\r\n```python\r\nnum_proc = None # Specify here the number of processes if you want to use multiprocessing. 
ex: num_proc = 4\r\nencoded = train_ds.map(\r\n lambda example: {'tokens': t.encode(example['title'], max_length=1000)}, num_proc=num_proc\r\n)\r\n```\r\n","Thank you very much for the kind and precise suggestion!\r\nI'm looking forward to seeing BertJapaneseTokenizer built into the \"fast\" tokenizers.\r\n\r\nI tried `map` with multiprocessing as follows, and it worked!\r\n\r\n```python\r\n# There was a Pickle problem if I use `lambda` for multiprocessing\r\ndef encode(examples):\r\n return {'tokens': t.encode(examples['title'], max_length=1000)}\r\n\r\nnum_proc = 8\r\nencoded = train_ds.map(encode, num_proc=num_proc)\r\n```\r\n\r\nThank you again!"],"created_at":1601446610000,"updated_at":1601459583000,"closed_at":1601459583000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"It seems to fail to process the final batch. This [colab](https:\/\/colab.research.google.com\/drive\/1_byLZRHwGP13PHMkJWo62Wp50S_Z2HMD?usp=sharing) can reproduce the error.\r\n\r\nCode:\r\n\r\n```python\r\n# train_ds = Dataset(features: {\r\n# 'title': Value(dtype='string', id=None), \r\n# 'score': Value(dtype='float64', id=None)\r\n# }, num_rows: 99999)\r\n\r\n# suggested in #665 \r\nclass PicklableTokenizer(BertJapaneseTokenizer):\r\n def __getstate__(self):\r\n state = dict(self.__dict__)\r\n state['do_lower_case'] = self.word_tokenizer.do_lower_case\r\n state['never_split'] = self.word_tokenizer.never_split\r\n del state['word_tokenizer']\r\n return state\r\n \r\n def __setstate(self):\r\n do_lower_case = state.pop('do_lower_case')\r\n never_split = state.pop('never_split')\r\n self.__dict__ = state\r\n self.word_tokenizer = MecabTokenizer(\r\n do_lower_case=do_lower_case, never_split=never_split\r\n )\r\n\r\nt = PicklableTokenizer.from_pretrained('bert-base-japanese-whole-word-masking')\r\n\r\nencoded = train_ds.map(\r\n lambda examples: {'tokens': t.encode(examples['title'], max_length=1000)}, batched=True, batch_size=1000\r\n)\r\n```\r\n\r\nError Message:\r\n\r\n```\r\n 99% 99\/100 [00:22<00:00, 39.07ba\/s]\r\n---------------------------------------------------------------------------\r\nArrowInvalid Traceback (most recent call last)\r\n<timed exec> in <module>\r\n\r\n\/usr\/local\/lib\/python3.6\/site-packages\/datasets\/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)\r\n 1242 fn_kwargs=fn_kwargs,\r\n 1243 new_fingerprint=new_fingerprint,\r\n-> 1244 update_data=update_data,\r\n 1245 )\r\n 1246 else:\r\n\r\n\/usr\/local\/lib\/python3.6\/site-packages\/datasets\/arrow_dataset.py in wrapper(*args, **kwargs)\r\n 151 \"output_all_columns\": self._output_all_columns,\r\n 152 }\r\n--> 153 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n 154 if new_format[\"columns\"] is not None:\r\n 155 new_format[\"columns\"] = list(set(new_format[\"columns\"]) & set(out.column_names))\r\n\r\n\/usr\/local\/lib\/python3.6\/site-packages\/datasets\/fingerprint.py in wrapper(*args, **kwargs)\r\n 161 # Call actual function\r\n 162 \r\n--> 163 out = func(self, *args, **kwargs)\r\n 164 \r\n 165 # Update fingerprint of in-place transforms + update in-place history of transforms\r\n\r\n\/usr\/local\/lib\/python3.6\/site-packages\/datasets\/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, 
drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, update_data)\r\n 1496 if update_data:\r\n 1497 batch = cast_to_python_objects(batch)\r\n-> 1498 writer.write_batch(batch)\r\n 1499 if update_data:\r\n 1500 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file\r\n\r\n\/usr\/local\/lib\/python3.6\/site-packages\/datasets\/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)\r\n 271 typed_sequence = TypedSequence(batch_examples[col], type=col_type, try_type=col_try_type)\r\n 272 typed_sequence_examples[col] = typed_sequence\r\n--> 273 pa_table = pa.Table.from_pydict(typed_sequence_examples)\r\n 274 self.write_table(pa_table)\r\n 275 \r\n\r\n\/usr\/local\/lib\/python3.6\/site-packages\/pyarrow\/table.pxi in pyarrow.lib.Table.from_pydict()\r\n\r\n\/usr\/local\/lib\/python3.6\/site-packages\/pyarrow\/table.pxi in pyarrow.lib.Table.from_arrays()\r\n\r\n\/usr\/local\/lib\/python3.6\/site-packages\/pyarrow\/table.pxi in pyarrow.lib.Table.validate()\r\n\r\n\/usr\/local\/lib\/python3.6\/site-packages\/pyarrow\/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowInvalid: Column 4 named tokens expected length 999 but got length 1000\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/687\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/686","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/686\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/686\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/686\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/686","id":711385739,"node_id":"MDU6SXNzdWU3MTEzODU3Mzk=","number":686,"title":"Dataset browser url is still https:\/\/huggingface.co\/nlp\/viewer\/","user":{"login":"jarednielsen","id":4564897,"node_id":"MDQ6VXNlcjQ1NjQ4OTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4564897?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jarednielsen","html_url":"https:\/\/github.com\/jarednielsen","followers_url":"https:\/\/api.github.com\/users\/jarednielsen\/followers","following_url":"https:\/\/api.github.com\/users\/jarednielsen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jarednielsen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jarednielsen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jarednielsen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jarednielsen\/orgs","repos_url":"https:\/\/api.github.com\/users\/jarednielsen\/repos","events_url":"https:\/\/api.github.com\/users\/jarednielsen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jarednielsen\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Yes! might do it with @srush one of these days. Hopefully it won't break too many links (we can always redirect from old url to new)","This was fixed but forgot to close the issue. 
cc @lhoestq @yjernite \r\n\r\nThanks @jarednielsen!"],"created_at":1601407312000,"updated_at":1610130566000,"closed_at":1610130566000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Might be worth updating to https:\/\/huggingface.co\/datasets\/viewer\/","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/686\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/685","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/685\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/685\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/685\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/685","id":711182185,"node_id":"MDExOlB1bGxSZXF1ZXN0NDk0ODg1NjIz","number":685,"title":"Add features parameter to CSV","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1601390616000,"updated_at":1601455196000,"closed_at":1601455194000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/685","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/685","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/685.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/685.patch"},"body":"Add support for the `features` parameter when loading a csv dataset:\r\n\r\n```python\r\nfrom datasets import load_dataset, Features\r\n\r\nfeatures = Features({...})\r\ncsv_dataset = load_dataset(\"csv\", data_files=[\"path\/to\/my\/file.csv\"], features=features)\r\n```\r\n\r\nI added tests to make sure that it is also compatible with the caching system\r\n\r\nFix #623 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/685\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/684","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/684\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/684\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/684\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/684","id":711080947,"node_id":"MDExOlB1bGxSZXF1ZXN0NDk0ODA2NjE1","number":684,"title":"Fix column order issue in cast","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1601383753000,"updated_at":1601395006000,"closed_at":1601395005000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/684","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/684","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/684.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/684.patch"},"body":"Previously, the order of the columns in the features passes to `cast_` mattered.\r\nHowever even though features passed to `cast_` had the same order as the dataset features, it could fail because the schema that was built was always in alphabetical order.\r\nThis issue was reported by @lewtun in #623 \r\n\r\nTo fix that I fixed the schema to follow the order of the arrow table columns.\r\nI also added the possibility to give features that are not ordered the same way as the dataset features.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/684\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/683","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/683\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/683\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/683\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/683","id":710942704,"node_id":"MDExOlB1bGxSZXF1ZXN0NDk0NzAwNzY1","number":683,"title":"Fix wrong delimiter in text 
dataset","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1601372604000,"updated_at":1620239071000,"closed_at":1601372646000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/683","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/683","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/683.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/683.patch"},"body":"The delimiter is set to the bell character as it is used nowhere is text files usually.\r\nHowever in the text dataset the delimiter was set to `\\b` which is backspace in python, while the bell character is `\\a`.\r\nI replace \\b by \\a\r\n\r\nHopefully it fixes issues mentioned by some users in #622 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/683\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/682","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/682\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/682\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/682\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/682","id":710325399,"node_id":"MDExOlB1bGxSZXF1ZXN0NDk0MTkzMzEw","number":682,"title":"Update navbar chapter titles 
color","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1601303717000,"updated_at":1601314213000,"closed_at":1601314212000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/682","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/682","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/682.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/682.patch"},"body":"Consistency with the color change that was done in transformers at https:\/\/github.com\/huggingface\/transformers\/pull\/7423\r\nIt makes the background-color of the chapter titles in the docs navbar darker, to differentiate them from the inner sections.\r\n\r\nsee changes [here](https:\/\/691-250213286-gh.circle-artifacts.com\/0\/docs\/_build\/html\/index.html)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/682\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/681","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/681\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/681\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/681\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/681","id":710075721,"node_id":"MDExOlB1bGxSZXF1ZXN0NDkzOTkwMjEz","number":681,"title":"Adding missing @property (+2 small flake8 
fixes).","user":{"login":"Narsil","id":204321,"node_id":"MDQ6VXNlcjIwNDMyMQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/204321?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Narsil","html_url":"https:\/\/github.com\/Narsil","followers_url":"https:\/\/api.github.com\/users\/Narsil\/followers","following_url":"https:\/\/api.github.com\/users\/Narsil\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Narsil\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Narsil\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Narsil\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Narsil\/orgs","repos_url":"https:\/\/api.github.com\/users\/Narsil\/repos","events_url":"https:\/\/api.github.com\/users\/Narsil\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Narsil\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1601283233000,"updated_at":1601288773000,"closed_at":1601288769000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/681","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/681","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/681.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/681.patch"},"body":"Fixes #678","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/681\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/680","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/680\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/680\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/680\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/680","id":710066138,"node_id":"MDExOlB1bGxSZXF1ZXN0NDkzOTgyMjY4","number":680,"title":"Fix bug related to boolean in GAP dataset.","user":{"login":"otakumesi","id":14996977,"node_id":"MDQ6VXNlcjE0OTk2OTc3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14996977?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/otakumesi","html_url":"https:\/\/github.com\/otakumesi","followers_url":"https:\/\/api.github.com\/users\/otakumesi\/followers","following_url":"https:\/\/api.github.com\/users\/otakumesi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/otakumesi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/otakumesi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/otakumesi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/otakumesi\/orgs","repos_url":"https:\/\/api.github.com\/users\/otakumesi\/repos","events_url":"https:\/\/api.github.com\/users\/otakumesi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/otakumesi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi !\r\n\r\nGood catch, thanks for creating this PR :)\r\n\r\nCould you also regenerate the metadata for this dataset using \r\n```\r\ndatasets-cli 
test .\/datasets\/gap --save_infos --all_configs\r\n```\r\n\r\nThat'd be awesome","@lhoestq Thank you for your revieing!!!\r\n\r\nI've performed it and have read CONTRIBUTING.md now!"],"created_at":1601282379000,"updated_at":1601394887000,"closed_at":1601394887000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/680","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/680","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/680.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/680.patch"},"body":"### Why I did\r\nThe value in `row[\"A-coref\"]` and `row[\"B-coref\"]` is `'TRUE'` or `'FALSE'`.\r\nThis type is `string`, then `bool('FALSE')` is equal to `True` in Python.\r\nSo, both rows are transformed into `True` now.\r\n\r\nSo, I modified this problem.\r\n\r\n### What I did\r\nI modified `bool(row[\"A-coref\"])` and `bool(row[\"B-coref\"])` to `row[\"A-coref\"] == \"TRUE\"` and `row[\"B-coref\"] == \"TRUE\"`.\r\n\r\nThank you!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/680\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/679","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/679\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/679\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/679\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/679","id":710065838,"node_id":"MDExOlB1bGxSZXF1ZXN0NDkzOTgyMDMx","number":679,"title":"Fix negative ids when slicing with an array","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1601282348000,"updated_at":1601304140000,"closed_at":1601304139000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/679","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/679","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/679.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/679.patch"},"body":"```python\r\nfrom datasets import Dataset\r\n\r\nd = ds.Dataset.from_dict({\"a\": range(10)})\r\nprint(d[[0, -1]])\r\n# OverflowError\r\n```\r\n\r\nraises an error 
because of the negative id.\r\n\r\nThis PR fixes that.\r\nFix #668 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/679\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/678","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/678\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/678\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/678\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/678","id":710060497,"node_id":"MDU6SXNzdWU3MTAwNjA0OTc=","number":678,"title":"The download instructions for c4 datasets are not contained in the error message","user":{"login":"Narsil","id":204321,"node_id":"MDQ6VXNlcjIwNDMyMQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/204321?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Narsil","html_url":"https:\/\/github.com\/Narsil","followers_url":"https:\/\/api.github.com\/users\/Narsil\/followers","following_url":"https:\/\/api.github.com\/users\/Narsil\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Narsil\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Narsil\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Narsil\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Narsil\/orgs","repos_url":"https:\/\/api.github.com\/users\/Narsil\/repos","events_url":"https:\/\/api.github.com\/users\/Narsil\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Narsil\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Good catch !\r\nIndeed the `@property` is missing.\r\n\r\nFeel free to open a PR :)","Also not that C4 is a dataset that needs an Apache Beam runtime to be generated.\r\nFor example Dataflow, Spark, Flink etc.\r\n\r\nUsually we generate the dataset on our side once and for all, but we haven't done it for C4 yet.\r\nMore info about beam datasets [here](https:\/\/huggingface.co\/docs\/datasets\/beam_dataset.html)\r\n\r\nLet me know if you have any questions"],"created_at":1601281854000,"updated_at":1601288769000,"closed_at":1601288769000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"The manual download instructions are not clear \r\n```The dataset c4 with config en requires manual data. \r\n Please follow the manual download instructions: <bound method C4.manual_download_instructions of <datasets_modules.datasets.c4.830b0c218bd41fed439812c8dd19dbd4767d2a3faa385eb695cf8666c982b1b3.c4.C4 object at 0x7ff8c5969760>>. 
\r\n Manual data can be loaded with `datasets.load_dataset(c4, data_dir='<path\/to\/manual\/data>')\r\n```\r\n\r\nEither `@property` could be added to C4.manual_download_instrcutions (or make it a real property), or the manual_download_instructions function needs to be called I think.\r\n\r\nLet me know if you want a PR for this, but I'm not sure which possible fix is the correct one.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/678\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/677","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/677\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/677\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/677\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/677","id":710055239,"node_id":"MDExOlB1bGxSZXF1ZXN0NDkzOTczNDE3","number":677,"title":"Move cache dir root creation in builder's init","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1601281366000,"updated_at":1601304163000,"closed_at":1601304162000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/677","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/677","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/677.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/677.patch"},"body":"We use lock files in the builder initialization but sometimes the cache directory where they're supposed to be was not created. 
To fix that I moved the builder's cache dir root creation in the builder's init.\r\n\r\nFix #671 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/677\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/676","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/676\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/676\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/676\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/676","id":710014319,"node_id":"MDU6SXNzdWU3MTAwMTQzMTk=","number":676,"title":"train_test_split returns empty dataset item","user":{"login":"mojave-pku","id":26648528,"node_id":"MDQ6VXNlcjI2NjQ4NTI4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26648528?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mojave-pku","html_url":"https:\/\/github.com\/mojave-pku","followers_url":"https:\/\/api.github.com\/users\/mojave-pku\/followers","following_url":"https:\/\/api.github.com\/users\/mojave-pku\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mojave-pku\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mojave-pku\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mojave-pku\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mojave-pku\/orgs","repos_url":"https:\/\/api.github.com\/users\/mojave-pku\/repos","events_url":"https:\/\/api.github.com\/users\/mojave-pku\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mojave-pku\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The problem still exists after removing the cache files.","Can you reproduce this example in a Colab so we can investigate? 
(or give more information on your software\/hardware config)","Thanks for reporting.\r\nI just found the issue, I'm creating a PR","We'll do a release pretty soon to include the fix :)\r\nIn the meantime you can install the lib from source if you want to "],"created_at":1601277573000,"updated_at":1602078393000,"closed_at":1602077886000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I try to split my dataset by `train_test_split`, but after that the item in `train` and `test` `Dataset` is empty.\r\nThe codes:\r\n```\r\nyelp_data = datasets.load_from_disk('\/home\/ssd4\/huanglianzhe\/test_yelp')\r\n print(yelp_data[0])\r\n yelp_data = yelp_data.train_test_split(test_size=0.1)\r\n print(yelp_data)\r\n print(yelp_data['test'])\r\n print(yelp_data['test'][0])\r\n```\r\nThe outputs:\r\n```\r\n{'stars': 2.0, 'text': 'xxxx'}\r\nLoading cached split indices for dataset at \/home\/ssd4\/huanglianzhe\/test_yelp\/cache-f9b22d8b9d5a7346.arrow and \/home\/ssd4\/huanglianzhe\/test_yelp\/cache-4aa26fa4005059d1.arrow\r\nDatasetDict({'train': Dataset(features: {'stars': Value(dtype='float64', id=None), 'text': Value(dtype='string', id=None)}, num_rows: 7219009), 'test': Dataset(features: {'stars': Value(dtype='float64', id=None), 'text': Value(dtype='string', id=None)}, num_rows: 802113)})\r\nDataset(features: {'stars': Value(dtype='float64', id=None), 'text': Value(dtype='string', id=None)}, num_rows: 802113)\r\n{} # yelp_data['test'][0] is empty\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/676\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/675","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/675\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/675\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/675\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/675","id":709818725,"node_id":"MDU6SXNzdWU3MDk4MTg3MjU=","number":675,"title":"Add custom dataset to NLP?","user":{"login":"timpal0l","id":6556710,"node_id":"MDQ6VXNlcjY1NTY3MTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6556710?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/timpal0l","html_url":"https:\/\/github.com\/timpal0l","followers_url":"https:\/\/api.github.com\/users\/timpal0l\/followers","following_url":"https:\/\/api.github.com\/users\/timpal0l\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/timpal0l\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/timpal0l\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/timpal0l\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/timpal0l\/orgs","repos_url":"https:\/\/api.github.com\/users\/timpal0l\/repos","events_url":"https:\/\/api.github.com\/users\/timpal0l\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/timpal0l\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Yes you can have a look here: https:\/\/huggingface.co\/docs\/datasets\/loading_datasets.html#csv-files","No activity, 
closing"],"created_at":1601241770000,"updated_at":1603184929000,"closed_at":1603184929000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Is it possible to add a custom dataset such as a .csv to the NLP library?\r\n\r\nThanks.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/675\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/674","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/674\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/674\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/674\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/674","id":709661006,"node_id":"MDU6SXNzdWU3MDk2NjEwMDY=","number":674,"title":"load_dataset() won't download in Windows","user":{"login":"ThisDavehead","id":34422661,"node_id":"MDQ6VXNlcjM0NDIyNjYx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/34422661?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ThisDavehead","html_url":"https:\/\/github.com\/ThisDavehead","followers_url":"https:\/\/api.github.com\/users\/ThisDavehead\/followers","following_url":"https:\/\/api.github.com\/users\/ThisDavehead\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ThisDavehead\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ThisDavehead\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ThisDavehead\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ThisDavehead\/orgs","repos_url":"https:\/\/api.github.com\/users\/ThisDavehead\/repos","events_url":"https:\/\/api.github.com\/users\/ThisDavehead\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ThisDavehead\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I have the same issue. Tried to download a few of them and not a single one is downloaded successfully.\r\n\r\nThis is the output:\r\n```\r\n>>> dataset = load_dataset('blended_skill_talk', split='train')\r\nUsing custom data configuration default <-- This step never ends\r\n```","This was fixed in #644 \r\nI'll do a new release soon :)\r\n\r\nIn the meantime you can run it by installing from source","Closing since version 1.1.0 got released with Windows support :) \r\nLet me know if it works for you now"],"created_at":1601178985000,"updated_at":1601886498000,"closed_at":1601886498000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I don't know if this is just me or Windows. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefinitely without downloading anything. I've waited upwards of 18 hours to download the 'multi-news' dataset (which isn't very big), and still nothing. I've tried running it through different IDE's and the command line, but it had the same behavior. I've also tried it with all virus and malware protection turned off. 
I've made sure python and all IDE's are exceptions to the firewall and all the requisite permissions are enabled.\r\n\r\nAdditionally, I checked to see if other packages could download content such as an nltk corpus, and they could. I've also run the same script using Ubuntu and it downloaded fine (and quickly). When I copied the downloaded datasets from my Ubuntu drive to my Windows .cache folder it worked fine by reusing the already-downloaded dataset, but it's cumbersome to do that for every dataset I want to try in my Windows environment.\r\n\r\nCould this be a bug, or is there something I'm doing wrong or not thinking of?\r\n\r\nThanks.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/674\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/673","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/673\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/673\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/673\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/673","id":709603989,"node_id":"MDU6SXNzdWU3MDk2MDM5ODk=","number":673,"title":"blog_authorship_corpus crashed","user":{"login":"Moshiii","id":7553188,"node_id":"MDQ6VXNlcjc1NTMxODg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7553188?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Moshiii","html_url":"https:\/\/github.com\/Moshiii","followers_url":"https:\/\/api.github.com\/users\/Moshiii\/followers","following_url":"https:\/\/api.github.com\/users\/Moshiii\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Moshiii\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Moshiii\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Moshiii\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Moshiii\/orgs","repos_url":"https:\/\/api.github.com\/users\/Moshiii\/repos","events_url":"https:\/\/api.github.com\/users\/Moshiii\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Moshiii\/received_events","type":"User","site_admin":false},"labels":[{"id":2107841032,"node_id":"MDU6TGFiZWwyMTA3ODQxMDMy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/nlp-viewer","name":"nlp-viewer","color":"94203D","default":false,"description":""}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting !\r\nWe'll free some memory"],"created_at":1601151328000,"updated_at":1601280290000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"This is just to report that When I pick blog_authorship_corpus in \r\nhttps:\/\/huggingface.co\/nlp\/viewer\/?dataset=blog_authorship_corpus\r\nI get this:\r\n![image](https:\/\/user-images.githubusercontent.com\/7553188\/94349542-4364f300-0013-11eb-897d-b25660a449f0.png)\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/673\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/672","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/672\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/672\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/672\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/672","id":709575527,"node_id":"MDU6SXNzdWU3MDk1NzU1Mjc=","number":672,"title":"Questions about XSUM ","user":{"login":"danyaljj","id":2441454,"node_id":"MDQ6VXNlcjI0NDE0NTQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2441454?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/danyaljj","html_url":"https:\/\/github.com\/danyaljj","followers_url":"https:\/\/api.github.com\/users\/danyaljj\/followers","following_url":"https:\/\/api.github.com\/users\/danyaljj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/danyaljj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/danyaljj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/danyaljj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/danyaljj\/orgs","repos_url":"https:\/\/api.github.com\/users\/danyaljj\/repos","events_url":"https:\/\/api.github.com\/users\/danyaljj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/danyaljj\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["We should try to regenerate the data using the official script.\r\nBut iirc that's what we used in the first place, so not sure why it didn't match in the first place.\r\n\r\nI'll let you know when the dataset is updated","Thanks, looking forward to hearing your update on this thread. \r\n\r\nThis is a blocking issue for us; would appreciate any progress on this front. We can also help with the fix, if you deem it appropriately. ","I just started the generation on my side, I'll let you know how it goes :) ","Hmm after a first run I'm still missing 136668\/226711 urls.\r\nI'll relaunch it tomorrow to try to get the remaining ones.","Update: I'm missing 36\/226711 urls but I haven't managed to download them yet","Thanks! That sounds like a reasonable number! ","So I managed to download them all but when parsing only 226,181\/226,711 worked.\r\nNot sure if it's worth digging and debugging parsing at this point :\/ ","Maybe @sshleifer can help, I think he's already played with xsum at one point","Thanks @lhoestq\r\nIt would be great to improve coverage, but IDs are the really crucial part for us. We'd really appreciate an update to the dataset with IDs either way!","I gave up at an even earlier point. The dataset I use has 204,017 train examples.","@lhoestq @sshleifer like @jbragg said earlier, the main issue for us is that the current XSUM dataset (in your package) does not have IDs suggested by the original dataset ([here is the file](https:\/\/raw.githubusercontent.com\/EdinburghNLP\/XSum\/master\/XSum-Dataset\/XSum-TRAINING-DEV-TEST-SPLIT-90-5-5.json).) Would appreciate if you update the XSUM dataset to include the instance IDs. \r\n\r\nThe missing instances is also a problem, but likely not worth pursuing given its relatively small scale. 
",">So I managed to download them all but when parsing only 226,181\/226,711 worked.\r\n\r\n@lhoestq any chance we could update the HF-hosted dataset with the IDs in your new version? Happy to help if there's something I can do.","Well I couldn't parse what I downloaded.\r\nUnfortunately I think I won't be able to take a look at it this week.\r\nI can try to send you what I got if you want to give it a shot @jbragg \r\nOtherwise feel free to re-run the xsum download script, maybe you'll be luckier than me"],"created_at":1601140584000,"updated_at":1603185367000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi there \u270b \r\n\r\nI'm looking into your `xsum` dataset and I have several questions on that. \r\nSo here is how I loaded the data: \r\n```\r\n>>> data = datasets.load_dataset('xsum', version='1.0.1')\r\n>>> data['train']\r\nDataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, num_rows: 204017)\r\n>>> data['test']\r\nDataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, num_rows: 11333)\r\n```\r\n\r\nThe first issue is, the instance counts don\u2019t match what I see on [the dataset's website](https:\/\/github.com\/EdinburghNLP\/XSum\/tree\/master\/XSum-Dataset#what-builds-the-xsum-dataset) (11,333 vs 11,334 for test set; 204,017 vs 204,045 for training set)\r\n```\r\n \u2026 training (90%, 204,045), validation (5%, 11,332), and test (5%, 11,334) set.\r\n```\r\nAny thoughts why? Perhaps @mariamabarham could help here, since she recently had a PR on this dataaset https:\/\/github.com\/huggingface\/datasets\/pull\/289 (reviewed by @patrickvonplaten) \r\n\r\nAnother issue is that the instances don't seem to have IDs. The original datasets provides IDs for the instances: https:\/\/github.com\/EdinburghNLP\/XSum\/blob\/master\/XSum-Dataset\/XSum-TRAINING-DEV-TEST-SPLIT-90-5-5.json but to be able to use them, the dataset sizes need to match. 
\r\n\r\nCC @jbragg \r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/672\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/671","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/671\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/671\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/671\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/671","id":709093151,"node_id":"MDU6SXNzdWU3MDkwOTMxNTE=","number":671,"title":"[BUG] No such file or directory","user":{"login":"jbragg","id":2238344,"node_id":"MDQ6VXNlcjIyMzgzNDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2238344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jbragg","html_url":"https:\/\/github.com\/jbragg","followers_url":"https:\/\/api.github.com\/users\/jbragg\/followers","following_url":"https:\/\/api.github.com\/users\/jbragg\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jbragg\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jbragg\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jbragg\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jbragg\/orgs","repos_url":"https:\/\/api.github.com\/users\/jbragg\/repos","events_url":"https:\/\/api.github.com\/users\/jbragg\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jbragg\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1601051934000,"updated_at":1601304162000,"closed_at":1601304162000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"This happens when both\r\n1. Huggingface datasets cache dir does not exist\r\n2. 
Try to load a local dataset script\r\n\r\nbuilder.py throws an error when trying to create a filelock in a directory (cache\/datasets) that does not exist\r\nhttps:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/datasets\/builder.py#L177\r\n\r\nTested on v1.0.2\r\n\r\n@lhoestq ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/671\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/670","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/670\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/670\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/670\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/670","id":709061231,"node_id":"MDExOlB1bGxSZXF1ZXN0NDkzMTc4OTQw","number":670,"title":"Fix SQuAD metric kwargs description","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1601050137000,"updated_at":1601395059000,"closed_at":1601395058000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/670","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/670","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/670.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/670.patch"},"body":"The `answer_start` field was missing in the kwargs docstring.\r\n\r\nThis should fix #657 \r\n\r\nFYI another fix was proposed by @tshrjn in #658 and suggests to remove this field.\r\nHowever IMO `answer_start` is useful to match the squad dataset format for consistency, even though it is not used in the metric computation. I think it's better to keep it this way, so that you can just give references=squad[\"answers\"] to .compute(). 
\r\n\r\nLet me know what sounds the best for you\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/670\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/669","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/669\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/669\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/669\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/669","id":708857595,"node_id":"MDU6SXNzdWU3MDg4NTc1OTU=","number":669,"title":"How to skip a example when running dataset.map","user":{"login":"xixiaoyao","id":24541791,"node_id":"MDQ6VXNlcjI0NTQxNzkx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24541791?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/xixiaoyao","html_url":"https:\/\/github.com\/xixiaoyao","followers_url":"https:\/\/api.github.com\/users\/xixiaoyao\/followers","following_url":"https:\/\/api.github.com\/users\/xixiaoyao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/xixiaoyao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/xixiaoyao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/xixiaoyao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/xixiaoyao\/orgs","repos_url":"https:\/\/api.github.com\/users\/xixiaoyao\/repos","events_url":"https:\/\/api.github.com\/users\/xixiaoyao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/xixiaoyao\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @xixiaoyao,\r\nDepending on what you want to do you can:\r\n- use a first step of `filter` to filter out the invalid examples: https:\/\/huggingface.co\/docs\/datasets\/processing.html#filtering-rows-select-and-filter\r\n- or directly detect the invalid examples inside the callable used with `map` and return them unchanged or even remove them at the same time if you are using `map` in batched mode. Here is an example where we use `map` in batched mode to add new rows on the fly but you can also use it to remove examples on the fly (that's what `filter` actually do under-the-hood): https:\/\/huggingface.co\/docs\/datasets\/processing.html#augmenting-the-dataset","Closing this one.\r\nFeel free to re-open if you have other questions"],"created_at":1601032673000,"updated_at":1601915293000,"closed_at":1601915293000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"in processing func, I process examples and detect some invalid examples, which I did not want it to be added into train dataset. However I did not find how to skip this recognized invalid example when doing dataset.map. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/669\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/668","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/668\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/668\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/668\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/668","id":708310956,"node_id":"MDU6SXNzdWU3MDgzMTA5NTY=","number":668,"title":"OverflowError when slicing with an array containing negative ids","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1600964834000,"updated_at":1601304139000,"closed_at":1601304139000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"```python\r\nfrom datasets import Dataset\r\n\r\nd = ds.Dataset.from_dict({\"a\": range(10)})\r\n\r\nprint(d[0])\r\n# {'a': 0}\r\n\r\nprint(d[-1])\r\n# {'a': 9}\r\n\r\nprint(d[[0, -1]])\r\n# OverflowError\r\n```\r\nresults in\r\n```\r\n---------------------------------------------------------------------------\r\nOverflowError Traceback (most recent call last)\r\n<ipython-input-5-863dc3555598> in <module>\r\n----> 1 d[[0, -1]]\r\n\r\n~\/Desktop\/hf\/nlp\/src\/datasets\/arrow_dataset.py in __getitem__(self, key)\r\n 1070 format_columns=self._format_columns,\r\n 1071 output_all_columns=self._output_all_columns,\r\n-> 1072 format_kwargs=self._format_kwargs,\r\n 1073 )\r\n 1074 \r\n\r\n~\/Desktop\/hf\/nlp\/src\/datasets\/arrow_dataset.py in _getitem(self, key, format_type, format_columns, output_all_columns, format_kwargs)\r\n 1025 indices = key\r\n 1026 \r\n-> 1027 indices_array = pa.array([int(i) for i in indices], type=pa.uint64())\r\n 1028 \r\n 1029 # Check if we need to convert indices\r\n\r\n~\/.virtualenvs\/hf-datasets\/lib\/python3.7\/site-packages\/pyarrow\/array.pxi in pyarrow.lib.array()\r\n\r\n~\/.virtualenvs\/hf-datasets\/lib\/python3.7\/site-packages\/pyarrow\/array.pxi in pyarrow.lib._sequence_to_array()\r\n\r\nOverflowError: can't convert negative value to unsigned int\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/668\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/667","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/667\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/667\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/667\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/667","id":708258392,"node_id":"MDU6SXNzdWU3MDgyNTgzOTI=","number":667,"title":"Loss not decrease with Datasets and Transformers","user":{"login":"wangcongcong123","id":23032865,"node_id":"MDQ6VXNlcjIzMDMyODY1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23032865?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/wangcongcong123","html_url":"https:\/\/github.com\/wangcongcong123","followers_url":"https:\/\/api.github.com\/users\/wangcongcong123\/followers","following_url":"https:\/\/api.github.com\/users\/wangcongcong123\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/wangcongcong123\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/wangcongcong123\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/wangcongcong123\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/wangcongcong123\/orgs","repos_url":"https:\/\/api.github.com\/users\/wangcongcong123\/repos","events_url":"https:\/\/api.github.com\/users\/wangcongcong123\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/wangcongcong123\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["And I tested it on T5ForConditionalGeneration, that works no problem.","Hi did you manage to fix your issue ?\r\n\r\nIf so feel free to share your fix and close this thread"],"created_at":1600960483000,"updated_at":1609531285000,"closed_at":1609531285000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"HI,\r\n\r\nThe following script is used to fine-tune a BertForSequenceClassification model on SST2.\r\n\r\nThe script is adapted from [this colab](https:\/\/colab.research.google.com\/github\/huggingface\/datasets\/blob\/master\/notebooks\/Overview.ipynb) that presents an example of fine-tuning BertForQuestionAnswering using squad dataset. In that colab, loss works fine. When I adapt it to SST2, the loss fails to decrease as it should. 
I attach the adapted script below and appreciate anyone pointing out what I miss?\r\n\r\n```python\r\nimport torch\r\nfrom datasets import load_dataset\r\nfrom transformers import BertForSequenceClassification\r\nfrom transformers import BertTokenizerFast\r\n# Load our training dataset and tokenizer\r\ndataset = load_dataset(\"glue\", 'sst2')\r\ntokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')\r\ndel dataset[\"test\"] # let's remove it in this demo\r\n\r\n# Tokenize our training dataset\r\ndef convert_to_features(example_batch):\r\n encodings = tokenizer(example_batch[\"sentence\"])\r\n encodings.update({\"labels\": example_batch[\"label\"]})\r\n return encodings\r\n\r\nencoded_dataset = dataset.map(convert_to_features, batched=True)\r\n# Format our dataset to outputs torch.Tensor to train a pytorch model\r\ncolumns = ['input_ids', 'token_type_ids', 'attention_mask', 'labels']\r\nencoded_dataset.set_format(type='torch', columns=columns)\r\n\r\n# Instantiate a PyTorch Dataloader around our dataset\r\n# Let's do dynamic batching (pad on the fly with our own collate_fn)\r\ndef collate_fn(examples):\r\n return tokenizer.pad(examples, return_tensors='pt')\r\n\r\ndataloader = torch.utils.data.DataLoader(encoded_dataset['train'], collate_fn=collate_fn, batch_size=8)\r\n# Now let's train our model\r\ndevice = 'cuda' if torch.cuda.is_available() else 'cpu'\r\n# Let's load a pretrained Bert model and a simple optimizer\r\nmodel = BertForSequenceClassification.from_pretrained('bert-base-cased', return_dict=True)\r\noptimizer = torch.optim.Adam(model.parameters(), lr=1e-5)\r\nmodel.train().to(device)\r\nfor i, batch in enumerate(dataloader):\r\n batch.to(device)\r\n outputs = model(**batch)\r\n loss = outputs.loss\r\n loss.backward()\r\n optimizer.step()\r\n model.zero_grad()\r\n print(f'Step {i} - loss: {loss:.3}')\r\n\r\n\r\n```\r\nIn case needed.\r\n\r\n- datasets == 1.0.2\r\n- transformers == 3.2.0","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/667\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/666","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/666\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/666\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/666\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/666","id":707608578,"node_id":"MDU6SXNzdWU3MDc2MDg1Nzg=","number":666,"title":"Does both 'bookcorpus' and 'wikipedia' belong to the same datasets which Google used for pretraining 
BERT?","user":{"login":"wahab4114","id":31090427,"node_id":"MDQ6VXNlcjMxMDkwNDI3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/31090427?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/wahab4114","html_url":"https:\/\/github.com\/wahab4114","followers_url":"https:\/\/api.github.com\/users\/wahab4114\/followers","following_url":"https:\/\/api.github.com\/users\/wahab4114\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/wahab4114\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/wahab4114\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/wahab4114\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/wahab4114\/orgs","repos_url":"https:\/\/api.github.com\/users\/wahab4114\/repos","events_url":"https:\/\/api.github.com\/users\/wahab4114\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/wahab4114\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["No they are other similar copies but they are not provided by the official Bert models authors."],"created_at":1600887745000,"updated_at":1603811965000,"closed_at":1603811965000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/666\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/665","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/665\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/665\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/665\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/665","id":707037738,"node_id":"MDU6SXNzdWU3MDcwMzc3Mzg=","number":665,"title":"runing dataset.map, it raises TypeError: can't pickle Tokenizer objects","user":{"login":"xixiaoyao","id":24541791,"node_id":"MDQ6VXNlcjI0NTQxNzkx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24541791?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/xixiaoyao","html_url":"https:\/\/github.com\/xixiaoyao","followers_url":"https:\/\/api.github.com\/users\/xixiaoyao\/followers","following_url":"https:\/\/api.github.com\/users\/xixiaoyao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/xixiaoyao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/xixiaoyao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/xixiaoyao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/xixiaoyao\/orgs","repos_url":"https:\/\/api.github.com\/users\/xixiaoyao\/repos","events_url":"https:\/\/api.github.com\/users\/xixiaoyao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/xixiaoyao\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi !\r\nIt works on my side with both the LongFormerTokenizer and the LongFormerTokenizerFast.\r\n\r\nWhich version of transformers\/datasets are you using ?","transformers and datasets are both the latest","Then I guess you need to give us more informations on your setup (OS, python, GPU, 
etc) or a Google Colab reproducing the error for us to be able to debug this error.","And your version of `dill` if possible :)","I have the same issue with `transformers\/BertJapaneseTokenizer`.\r\n\r\n\r\n\r\n```python\r\n# train_ds = Dataset(features: {\r\n# 'title': Value(dtype='string', id=None), \r\n# 'score': Value(dtype='float64', id=None)\r\n# }, num_rows: 99999)\r\n\r\nt = BertJapaneseTokenizer.from_pretrained('bert-base-japanese-whole-word-masking')\r\nencoded = train_ds.map(lambda examples: {'tokens': t.encode(examples['title'])}, batched=True)\r\n```\r\n\r\n<details><summary>Error Message<\/summary>\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-35-2b7d66b291c1> in <module>\r\n 2 \r\n 3 encoded = train_ds.map(lambda examples:\r\n----> 4 {'tokens': t.encode(examples['title'])}, batched=True)\r\n\r\n\/usr\/local\/lib\/python3.6\/site-packages\/datasets\/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)\r\n 1242 fn_kwargs=fn_kwargs,\r\n 1243 new_fingerprint=new_fingerprint,\r\n-> 1244 update_data=update_data,\r\n 1245 )\r\n 1246 else:\r\n\r\n\/usr\/local\/lib\/python3.6\/site-packages\/datasets\/arrow_dataset.py in wrapper(*args, **kwargs)\r\n 151 \"output_all_columns\": self._output_all_columns,\r\n 152 }\r\n--> 153 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n 154 if new_format[\"columns\"] is not None:\r\n 155 new_format[\"columns\"] = list(set(new_format[\"columns\"]) & set(out.column_names))\r\n\r\n\/usr\/local\/lib\/python3.6\/site-packages\/datasets\/fingerprint.py in wrapper(*args, **kwargs)\r\n 156 kwargs_for_fingerprint[\"fingerprint_name\"] = fingerprint_name\r\n 157 kwargs[fingerprint_name] = update_fingerprint(\r\n--> 158 self._fingerprint, transform, kwargs_for_fingerprint\r\n 159 )\r\n 160 \r\n\r\n\/usr\/local\/lib\/python3.6\/site-packages\/datasets\/fingerprint.py in update_fingerprint(fingerprint, transform, transform_args)\r\n 103 for key in sorted(transform_args):\r\n 104 hasher.update(key)\r\n--> 105 hasher.update(transform_args[key])\r\n 106 return hasher.hexdigest()\r\n 107 \r\n\r\n\/usr\/local\/lib\/python3.6\/site-packages\/datasets\/fingerprint.py in update(self, value)\r\n 55 def update(self, value):\r\n 56 self.m.update(f\"=={type(value)}==\".encode(\"utf8\"))\r\n---> 57 self.m.update(self.hash(value).encode(\"utf-8\"))\r\n 58 \r\n 59 def hexdigest(self):\r\n\r\n\/usr\/local\/lib\/python3.6\/site-packages\/datasets\/fingerprint.py in hash(cls, value)\r\n 51 return cls.dispatch[type(value)](cls, value)\r\n 52 else:\r\n---> 53 return cls.hash_default(value)\r\n 54 \r\n 55 def update(self, value):\r\n\r\n\/usr\/local\/lib\/python3.6\/site-packages\/datasets\/fingerprint.py in hash_default(cls, value)\r\n 44 @classmethod\r\n 45 def hash_default(cls, value):\r\n---> 46 return cls.hash_bytes(dumps(value))\r\n 47 \r\n 48 @classmethod\r\n\r\n\/usr\/local\/lib\/python3.6\/site-packages\/datasets\/utils\/py_utils.py in dumps(obj)\r\n 365 file = StringIO()\r\n 366 with _no_cache_fields(obj):\r\n--> 367 dump(obj, file)\r\n 368 return file.getvalue()\r\n 369 \r\n\r\n\/usr\/local\/lib\/python3.6\/site-packages\/datasets\/utils\/py_utils.py in dump(obj, file)\r\n 337 def dump(obj, file):\r\n 338 
\"\"\"pickle an object to a file\"\"\"\r\n--> 339 Pickler(file, recurse=True).dump(obj)\r\n 340 return\r\n 341 \r\n\r\n\/usr\/local\/lib\/python3.6\/site-packages\/dill\/_dill.py in dump(self, obj)\r\n 444 raise PicklingError(msg)\r\n 445 else:\r\n--> 446 StockPickler.dump(self, obj)\r\n 447 stack.clear() # clear record of 'recursion-sensitive' pickled objects\r\n 448 return\r\n\r\n\/usr\/local\/lib\/python3.6\/pickle.py in dump(self, obj)\r\n 407 if self.proto >= 4:\r\n 408 self.framer.start_framing()\r\n--> 409 self.save(obj)\r\n 410 self.write(STOP)\r\n 411 self.framer.end_framing()\r\n\r\n\/usr\/local\/lib\/python3.6\/pickle.py in save(self, obj, save_persistent_id)\r\n 474 f = self.dispatch.get(t)\r\n 475 if f is not None:\r\n--> 476 f(self, obj) # Call unbound method with explicit self\r\n 477 return\r\n 478 \r\n\r\n\/usr\/local\/lib\/python3.6\/site-packages\/dill\/_dill.py in save_function(pickler, obj)\r\n 1436 globs, obj.__name__,\r\n 1437 obj.__defaults__, obj.__closure__,\r\n-> 1438 obj.__dict__, fkwdefaults), obj=obj)\r\n 1439 else:\r\n 1440 _super = ('super' in getattr(obj.func_code,'co_names',())) and (_byref is not None) and getattr(pickler, '_recurse', False)\r\n\r\n\/usr\/local\/lib\/python3.6\/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)\r\n 608 else:\r\n 609 save(func)\r\n--> 610 save(args)\r\n 611 write(REDUCE)\r\n 612 \r\n\r\n\/usr\/local\/lib\/python3.6\/pickle.py in save(self, obj, save_persistent_id)\r\n 474 f = self.dispatch.get(t)\r\n 475 if f is not None:\r\n--> 476 f(self, obj) # Call unbound method with explicit self\r\n 477 return\r\n 478 \r\n\r\n\/usr\/local\/lib\/python3.6\/pickle.py in save_tuple(self, obj)\r\n 749 write(MARK)\r\n 750 for element in obj:\r\n--> 751 save(element)\r\n 752 \r\n 753 if id(obj) in memo:\r\n\r\n\/usr\/local\/lib\/python3.6\/pickle.py in save(self, obj, save_persistent_id)\r\n 474 f = self.dispatch.get(t)\r\n 475 if f is not None:\r\n--> 476 f(self, obj) # Call unbound method with explicit self\r\n 477 return\r\n 478 \r\n\r\n\/usr\/local\/lib\/python3.6\/site-packages\/dill\/_dill.py in save_module_dict(pickler, obj)\r\n 931 # we only care about session the first pass thru\r\n 932 pickler._session = False\r\n--> 933 StockPickler.save_dict(pickler, obj)\r\n 934 log.info(\"# D2\")\r\n 935 return\r\n\r\n\/usr\/local\/lib\/python3.6\/pickle.py in save_dict(self, obj)\r\n 819 \r\n 820 self.memoize(obj)\r\n--> 821 self._batch_setitems(obj.items())\r\n 822 \r\n 823 dispatch[dict] = save_dict\r\n\r\n\/usr\/local\/lib\/python3.6\/pickle.py in _batch_setitems(self, items)\r\n 850 k, v = tmp[0]\r\n 851 save(k)\r\n--> 852 save(v)\r\n 853 write(SETITEM)\r\n 854 # else tmp is empty, and we're done\r\n\r\n\/usr\/local\/lib\/python3.6\/pickle.py in save(self, obj, save_persistent_id)\r\n 519 \r\n 520 # Save the reduce() output and finally memoize the object\r\n--> 521 self.save_reduce(obj=obj, *rv)\r\n 522 \r\n 523 def persistent_id(self, obj):\r\n\r\n\/usr\/local\/lib\/python3.6\/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)\r\n 632 \r\n 633 if state is not None:\r\n--> 634 save(state)\r\n 635 write(BUILD)\r\n 636 \r\n\r\n\/usr\/local\/lib\/python3.6\/pickle.py in save(self, obj, save_persistent_id)\r\n 474 f = self.dispatch.get(t)\r\n 475 if f is not None:\r\n--> 476 f(self, obj) # Call unbound method with explicit self\r\n 477 return\r\n 478 \r\n\r\n\/usr\/local\/lib\/python3.6\/site-packages\/dill\/_dill.py in save_module_dict(pickler, obj)\r\n 931 # we only care about session 
the first pass thru\r\n 932 pickler._session = False\r\n--> 933 StockPickler.save_dict(pickler, obj)\r\n 934 log.info(\"# D2\")\r\n 935 return\r\n\r\n\/usr\/local\/lib\/python3.6\/pickle.py in save_dict(self, obj)\r\n 819 \r\n 820 self.memoize(obj)\r\n--> 821 self._batch_setitems(obj.items())\r\n 822 \r\n 823 dispatch[dict] = save_dict\r\n\r\n\/usr\/local\/lib\/python3.6\/pickle.py in _batch_setitems(self, items)\r\n 845 for k, v in tmp:\r\n 846 save(k)\r\n--> 847 save(v)\r\n 848 write(SETITEMS)\r\n 849 elif n:\r\n\r\n\/usr\/local\/lib\/python3.6\/pickle.py in save(self, obj, save_persistent_id)\r\n 519 \r\n 520 # Save the reduce() output and finally memoize the object\r\n--> 521 self.save_reduce(obj=obj, *rv)\r\n 522 \r\n 523 def persistent_id(self, obj):\r\n\r\n\/usr\/local\/lib\/python3.6\/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)\r\n 632 \r\n 633 if state is not None:\r\n--> 634 save(state)\r\n 635 write(BUILD)\r\n 636 \r\n\r\n\/usr\/local\/lib\/python3.6\/pickle.py in save(self, obj, save_persistent_id)\r\n 474 f = self.dispatch.get(t)\r\n 475 if f is not None:\r\n--> 476 f(self, obj) # Call unbound method with explicit self\r\n 477 return\r\n 478 \r\n\r\n\/usr\/local\/lib\/python3.6\/site-packages\/dill\/_dill.py in save_module_dict(pickler, obj)\r\n 931 # we only care about session the first pass thru\r\n 932 pickler._session = False\r\n--> 933 StockPickler.save_dict(pickler, obj)\r\n 934 log.info(\"# D2\")\r\n 935 return\r\n\r\n\/usr\/local\/lib\/python3.6\/pickle.py in save_dict(self, obj)\r\n 819 \r\n 820 self.memoize(obj)\r\n--> 821 self._batch_setitems(obj.items())\r\n 822 \r\n 823 dispatch[dict] = save_dict\r\n\r\n\/usr\/local\/lib\/python3.6\/pickle.py in _batch_setitems(self, items)\r\n 845 for k, v in tmp:\r\n 846 save(k)\r\n--> 847 save(v)\r\n 848 write(SETITEMS)\r\n 849 elif n:\r\n\r\n\/usr\/local\/lib\/python3.6\/pickle.py in save(self, obj, save_persistent_id)\r\n 494 reduce = getattr(obj, \"__reduce_ex__\", None)\r\n 495 if reduce is not None:\r\n--> 496 rv = reduce(self.proto)\r\n 497 else:\r\n 498 reduce = getattr(obj, \"__reduce__\", None)\r\n\r\nTypeError: can't pickle Tagger objects\r\n```\r\n\r\n<\/details>\r\n\r\ntrainsformers: 2.10.0\r\ndatasets: 1.0.2\r\ndill: 0.3.2\r\npython: 3.6.8\r\n\r\nOS: ubuntu 16.04 (Docker Image) on [Deep Learning VM](https:\/\/console.cloud.google.com\/marketplace\/details\/click-to-deploy-images\/deeplearning) (GCP)\r\nGPU: Tesla P100 (CUDA 10)\r\n","> I have the same issue with `transformers\/BertJapaneseTokenizer`.\r\n\r\nIt looks like it this tokenizer is not supported unfortunately.\r\nThis is because `t.word_tokenizer.mecab` is a `fugashi.fugashi.GenericTagger` which is not compatible with pickle nor dill.\r\n\r\nWe need objects passes to `map` to be picklable for our caching system to work properly.\r\nHere it crashes because the caching system is not able to pickle the GenericTagger.\r\n\r\n\\> Maybe you can create an issue on [fugashi](https:\/\/github.com\/polm\/fugashi\/issues) 's repo and ask to make `fugashi.fugashi.GenericTagger` compatible with pickle ?\r\n\r\nWhat you can do in the meantime is use a picklable wrapper of the tokenizer:\r\n\r\n\r\n```python\r\nfrom transformers import BertJapaneseTokenizer, MecabTokenizer\r\n\r\nclass PicklableTokenizer(BertJapaneseTokenizer):\r\n\r\n def __getstate__(self):\r\n state = dict(self.__dict__)\r\n state[\"do_lower_case\"] = self.word_tokenizer.do_lower_case\r\n state[\"never_split\"] = self.word_tokenizer.never_split \r\n del 
state[\"word_tokenizer\"]\r\n return state\r\n\r\n def __setstate__(self, state):\r\n do_lower_case = state.pop(\"do_lower_case\")\r\n never_split = state.pop(\"never_split\")\r\n self.__dict__ = state\r\n self.word_tokenizer = MecabTokenizer(\r\n do_lower_case=do_lower_case, never_split=never_split)\r\n )\r\n\r\nt = PicklableTokenizer.from_pretrained(\"cl-tohoku\/bert-base-japanese-whole-word-masking\")\r\nencoded = train_ds.map(lambda examples: {'tokens': t.encode(examples['title'])}, batched=True) # it works\r\n```","We can also update the `BertJapaneseTokenizer` in `transformers` as you just shown @lhoestq to make it compatible with pickle. It will be faster than asking on fugashi 's repo and good for the other users of `transformers` as well.\r\n\r\nI'm currently working on `transformers` I'll include it in the https:\/\/github.com\/huggingface\/transformers\/pull\/7141 PR and the next release of `transformers`.","Thank you for the rapid and polite response!\r\n\r\n@lhoestq Thanks for the suggestion! I've passed the pickle phase, but another `ArrowInvalid` problem occored. I created another issue #687 .\r\n\r\n@thomwolf Wow, really fast work. I'm looking forward to the next release \ud83e\udd17"],"created_at":1600835294000,"updated_at":1602149536000,"closed_at":1602149536000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`.\r\n\r\n```\r\ndef convert_to_features(example):\r\n # Tokenize contexts and questions (as pairs of inputs)\r\n input_pairs = [example['question'], example['context']]\r\n encodings = tokenizer.encode_plus(input_pairs, pad_to_max_length=True, max_length=512)\r\n context_encodings = tokenizer.encode_plus(example['context'])\r\n \r\n\r\n # Compute start and end tokens for labels using Transformers's fast tokenizers alignement methodes.\r\n # this will give us the position of answer span in the context text\r\n start_idx, end_idx = get_correct_alignement(example['context'], example['answers'])\r\n start_positions_context = context_encodings.char_to_token(start_idx)\r\n end_positions_context = context_encodings.char_to_token(end_idx-1)\r\n\r\n # here we will compute the start and end position of the answer in the whole example\r\n # as the example is encoded like this <s> question<\/s><\/s> context<\/s>\r\n # and we know the postion of the answer in the context\r\n # we can just find out the index of the sep token and then add that to position + 1 (+1 because there are two sep tokens)\r\n # this will give us the position of the answer span in whole example \r\n sep_idx = encodings['input_ids'].index(tokenizer.sep_token_id)\r\n start_positions = start_positions_context + sep_idx + 1\r\n end_positions = end_positions_context + sep_idx + 1\r\n\r\n if end_positions > 512:\r\n start_positions, end_positions = 0, 0\r\n\r\n encodings.update({'start_positions': start_positions,\r\n 'end_positions': end_positions,\r\n 'attention_mask': encodings['attention_mask']})\r\n return encodings\r\n```\r\n\r\nThen I run `dataset.map(convert_to_features)`, it raise\r\n```\r\nIn [59]: a.map(convert_to_features) \r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-59-c453b508761d> in <module>\r\n----> 1 a.map(convert_to_features)\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py in map(self, function, with_indices, 
input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)\r\n 1242 fn_kwargs=fn_kwargs,\r\n 1243 new_fingerprint=new_fingerprint,\r\n-> 1244 update_data=update_data,\r\n 1245 )\r\n 1246 else:\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/arrow_dataset.py in wrapper(*args, **kwargs)\r\n 151 \"output_all_columns\": self._output_all_columns,\r\n 152 }\r\n--> 153 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n 154 if new_format[\"columns\"] is not None:\r\n 155 new_format[\"columns\"] = list(set(new_format[\"columns\"]) & set(out.column_names))\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/fingerprint.py in wrapper(*args, **kwargs)\r\n 156 kwargs_for_fingerprint[\"fingerprint_name\"] = fingerprint_name\r\n 157 kwargs[fingerprint_name] = update_fingerprint(\r\n--> 158 self._fingerprint, transform, kwargs_for_fingerprint\r\n 159 )\r\n 160 \r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/fingerprint.py in update_fingerprint(fingerprint, transform, transform_args)\r\n 103 for key in sorted(transform_args):\r\n 104 hasher.update(key)\r\n--> 105 hasher.update(transform_args[key])\r\n 106 return hasher.hexdigest()\r\n 107 \r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/fingerprint.py in update(self, value)\r\n 55 def update(self, value):\r\n 56 self.m.update(f\"=={type(value)}==\".encode(\"utf8\"))\r\n---> 57 self.m.update(self.hash(value).encode(\"utf-8\"))\r\n 58 \r\n 59 def hexdigest(self):\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/fingerprint.py in hash(cls, value)\r\n 51 return cls.dispatch[type(value)](cls, value)\r\n 52 else:\r\n---> 53 return cls.hash_default(value)\r\n 54 \r\n 55 def update(self, value):\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/fingerprint.py in hash_default(cls, value)\r\n 44 @classmethod\r\n 45 def hash_default(cls, value):\r\n---> 46 return cls.hash_bytes(dumps(value))\r\n 47 \r\n 48 @classmethod\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/utils\/py_utils.py in dumps(obj)\r\n 365 file = StringIO()\r\n 366 with _no_cache_fields(obj):\r\n--> 367 dump(obj, file)\r\n 368 return file.getvalue()\r\n 369 \r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/utils\/py_utils.py in dump(obj, file)\r\n 337 def dump(obj, file):\r\n 338 \"\"\"pickle an object to a file\"\"\"\r\n--> 339 Pickler(file, recurse=True).dump(obj)\r\n 340 return\r\n 341 \r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/dill\/_dill.py in dump(self, obj)\r\n 444 raise PicklingError(msg)\r\n 445 else:\r\n--> 446 StockPickler.dump(self, obj)\r\n 447 stack.clear() # clear record of 'recursion-sensitive' pickled objects\r\n 448 return\r\n\r\n\/opt\/conda\/lib\/python3.7\/pickle.py in dump(self, obj)\r\n 435 if self.proto >= 4:\r\n 436 self.framer.start_framing()\r\n--> 437 self.save(obj)\r\n 438 self.write(STOP)\r\n 439 self.framer.end_framing()\r\n\r\n\/opt\/conda\/lib\/python3.7\/pickle.py in save(self, obj, save_persistent_id)\r\n 502 f = self.dispatch.get(t)\r\n 503 if f is not None:\r\n--> 504 f(self, obj) # Call unbound method with explicit self\r\n 505 return\r\n 506 \r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/dill\/_dill.py in save_function(pickler, obj)\r\n 1436 globs, obj.__name__,\r\n 1437 obj.__defaults__, obj.__closure__,\r\n-> 1438 obj.__dict__, fkwdefaults), obj=obj)\r\n 
1439 else:\r\n 1440 _super = ('super' in getattr(obj.func_code,'co_names',())) and (_byref is not None) and getattr(pickler, '_recurse', False)\r\n\r\n\/opt\/conda\/lib\/python3.7\/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)\r\n 636 else:\r\n 637 save(func)\r\n--> 638 save(args)\r\n 639 write(REDUCE)\r\n 640 \r\n\r\n\/opt\/conda\/lib\/python3.7\/pickle.py in save(self, obj, save_persistent_id)\r\n 502 f = self.dispatch.get(t)\r\n 503 if f is not None:\r\n--> 504 f(self, obj) # Call unbound method with explicit self\r\n 505 return\r\n 506 \r\n\r\n\/opt\/conda\/lib\/python3.7\/pickle.py in save_tuple(self, obj)\r\n 787 write(MARK)\r\n 788 for element in obj:\r\n--> 789 save(element)\r\n 790 \r\n 791 if id(obj) in memo:\r\n\r\n\/opt\/conda\/lib\/python3.7\/pickle.py in save(self, obj, save_persistent_id)\r\n 502 f = self.dispatch.get(t)\r\n 503 if f is not None:\r\n--> 504 f(self, obj) # Call unbound method with explicit self\r\n 505 return\r\n 506 \r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/dill\/_dill.py in save_module_dict(pickler, obj)\r\n 931 # we only care about session the first pass thru\r\n 932 pickler._session = False\r\n--> 933 StockPickler.save_dict(pickler, obj)\r\n 934 log.info(\"# D2\")\r\n 935 return\r\n\r\n\/opt\/conda\/lib\/python3.7\/pickle.py in save_dict(self, obj)\r\n 857 \r\n 858 self.memoize(obj)\r\n--> 859 self._batch_setitems(obj.items())\r\n 860 \r\n 861 dispatch[dict] = save_dict\r\n\r\n\/opt\/conda\/lib\/python3.7\/pickle.py in _batch_setitems(self, items)\r\n 883 for k, v in tmp:\r\n 884 save(k)\r\n--> 885 save(v)\r\n 886 write(SETITEMS)\r\n 887 elif n:\r\n\r\n\/opt\/conda\/lib\/python3.7\/pickle.py in save(self, obj, save_persistent_id)\r\n 547 \r\n 548 # Save the reduce() output and finally memoize the object\r\n--> 549 self.save_reduce(obj=obj, *rv)\r\n 550 \r\n 551 def persistent_id(self, obj):\r\n\r\n\/opt\/conda\/lib\/python3.7\/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)\r\n 660 \r\n 661 if state is not None:\r\n--> 662 save(state)\r\n 663 write(BUILD)\r\n 664 \r\n\r\n\/opt\/conda\/lib\/python3.7\/pickle.py in save(self, obj, save_persistent_id)\r\n 502 f = self.dispatch.get(t)\r\n 503 if f is not None:\r\n--> 504 f(self, obj) # Call unbound method with explicit self\r\n 505 return\r\n 506 \r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/dill\/_dill.py in save_module_dict(pickler, obj)\r\n 931 # we only care about session the first pass thru\r\n 932 pickler._session = False\r\n--> 933 StockPickler.save_dict(pickler, obj)\r\n 934 log.info(\"# D2\")\r\n 935 return\r\n\r\n\/opt\/conda\/lib\/python3.7\/pickle.py in save_dict(self, obj)\r\n 857 \r\n 858 self.memoize(obj)\r\n--> 859 self._batch_setitems(obj.items())\r\n 860 \r\n 861 dispatch[dict] = save_dict\r\n\r\n\/opt\/conda\/lib\/python3.7\/pickle.py in _batch_setitems(self, items)\r\n 883 for k, v in tmp:\r\n 884 save(k)\r\n--> 885 save(v)\r\n 886 write(SETITEMS)\r\n 887 elif n:\r\n\r\n\/opt\/conda\/lib\/python3.7\/pickle.py in save(self, obj, save_persistent_id)\r\n 547 \r\n 548 # Save the reduce() output and finally memoize the object\r\n--> 549 self.save_reduce(obj=obj, *rv)\r\n 550 \r\n 551 def persistent_id(self, obj):\r\n\r\n\/opt\/conda\/lib\/python3.7\/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)\r\n 660 \r\n 661 if state is not None:\r\n--> 662 save(state)\r\n 663 write(BUILD)\r\n 664 \r\n\r\n\/opt\/conda\/lib\/python3.7\/pickle.py in save(self, obj, save_persistent_id)\r\n 502 f = 
self.dispatch.get(t)\r\n 503 if f is not None:\r\n--> 504 f(self, obj) # Call unbound method with explicit self\r\n 505 return\r\n 506 \r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/dill\/_dill.py in save_module_dict(pickler, obj)\r\n 931 # we only care about session the first pass thru\r\n 932 pickler._session = False\r\n--> 933 StockPickler.save_dict(pickler, obj)\r\n 934 log.info(\"# D2\")\r\n 935 return\r\n\r\n\/opt\/conda\/lib\/python3.7\/pickle.py in save_dict(self, obj)\r\n 857 \r\n 858 self.memoize(obj)\r\n--> 859 self._batch_setitems(obj.items())\r\n 860 \r\n 861 dispatch[dict] = save_dict\r\n\r\n\/opt\/conda\/lib\/python3.7\/pickle.py in _batch_setitems(self, items)\r\n 883 for k, v in tmp:\r\n 884 save(k)\r\n--> 885 save(v)\r\n 886 write(SETITEMS)\r\n 887 elif n:\r\n\r\n\/opt\/conda\/lib\/python3.7\/pickle.py in save(self, obj, save_persistent_id)\r\n 522 reduce = getattr(obj, \"__reduce_ex__\", None)\r\n 523 if reduce is not None:\r\n--> 524 rv = reduce(self.proto)\r\n 525 else:\r\n 526 reduce = getattr(obj, \"__reduce__\", None)\r\n\r\nTypeError: can't pickle Tokenizer objects\r\n```\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/665\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/664","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/664\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/664\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/664\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/664","id":707017791,"node_id":"MDU6SXNzdWU3MDcwMTc3OTE=","number":664,"title":"load_dataset from local squad.py, raise error: TypeError: 'NoneType' object is not callable ","user":{"login":"xixiaoyao","id":24541791,"node_id":"MDQ6VXNlcjI0NTQxNzkx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24541791?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/xixiaoyao","html_url":"https:\/\/github.com\/xixiaoyao","followers_url":"https:\/\/api.github.com\/users\/xixiaoyao\/followers","following_url":"https:\/\/api.github.com\/users\/xixiaoyao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/xixiaoyao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/xixiaoyao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/xixiaoyao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/xixiaoyao\/orgs","repos_url":"https:\/\/api.github.com\/users\/xixiaoyao\/repos","events_url":"https:\/\/api.github.com\/users\/xixiaoyao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/xixiaoyao\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi !\r\nThanks for reporting.\r\nIt looks like no object inherits from `datasets.GeneratorBasedBuilder` (or more generally from `datasets.DatasetBuilder`) in your script.\r\n\r\nCould you check that there exist at least one dataset builder class ?","Hi @xixiaoyao did you manage to fix your issue ?","No activity, closing"],"created_at":1600833216000,"updated_at":1603184773000,"closed_at":1603184773000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"\r\nversion: 
1.0.2\r\n\r\n```\r\ntrain_dataset = datasets.load_dataset('squad') \r\n```\r\n\r\nThe above code can works. However, when I download the squad.py from your server, and saved as `my_squad.py` to local. I run followings raise errors.\r\n```\r\ntrain_dataset = datasets.load_dataset('.\/my_squad.py') \r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-28-25a84b4d1581> in <module>\r\n----> 1 train_dataset = nlp.load_dataset('.\/my_squad.py')\r\n\r\n\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)\r\n 602 hash=hash,\r\n 603 features=features,\r\n--> 604 **config_kwargs,\r\n 605 )\r\n 606 \r\n\r\nTypeError: 'NoneType' object is not callable\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/664\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/663","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/663\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/663\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/663\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/663","id":706732636,"node_id":"MDExOlB1bGxSZXF1ZXN0NDkxMjI3NzUz","number":663,"title":"Created dataset card snli.md","user":{"login":"mcmillanmajora","id":26722925,"node_id":"MDQ6VXNlcjI2NzIyOTI1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26722925?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mcmillanmajora","html_url":"https:\/\/github.com\/mcmillanmajora","followers_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/followers","following_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/orgs","repos_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/repos","events_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/received_events","type":"User","site_admin":false},"labels":[{"id":2067401494,"node_id":"MDU6TGFiZWwyMDY3NDAxNDk0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/Dataset%20discussion","name":"Dataset discussion","color":"72f99f","default":false,"description":"Discussions on the 
datasets"}],"state":"closed","locked":false,"assignee":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"assignees":[{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Adding a direct link to the rendered markdown:\r\nhttps:\/\/github.com\/mcmillanmajora\/datasets\/blob\/add_dataset_documentation\/datasets\/snli\/README.md\r\n","It would be amazing if we ended up with this much information on all of our datasets :) \r\n\r\nI don't think there's too much repetition, everything that is in here is relevant. The main challenge will be to figure out how to structure the sheet so that all of the information can be presented without overwhelming the reader. We'll also want to have as much of it as possible in structured form so it can be easily navigated.","@mcmillanmajora for now can you remove the prompts \/ quoted blocks so we can see what the datasheet would look like on its own?\r\n\r\nWould also love to hear if @sgugger has some first impressions","I removed the prompts. It's definitely a little easier to read without them!","Should we name the file `README.md` for consistency with models?","Asked @sleepinyourhat for some insights too :) ","Thank you for taking the time to look through the card and for all your comments @sleepinyourhat ! I've incorporated them in the latest update. ","Be careful to keep the \u2018sa\u2019 term in the license. 
It\u2019s something we\ninherited from the Flickr captions.\n\nOn Thu, Oct 1, 2020 at 10:09 AM Julien Chaumond <notifications@github.com>\nwrote:\n\n> *@julien-c* commented on this pull request.\n> ------------------------------\n>\n> In datasets\/snli\/README.md\n> <https:\/\/urldefense.proofpoint.com\/v2\/url?u=https-3A__github.com_huggingface_datasets_pull_663-23discussion-5Fr498273172&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=sCzLyHdE8zgQwk2-sKwA1w&m=PHPCew9Xj3CBQrudcaii70ln-wpRtbngE_tj3Ioy3NI&s=WbEkKXCbL6j5Ui3sox_WqvzrbShbJn2WW-51SENL2ZQ&e=>\n> :\n>\n> > +---\n> +language:\n> +- en\n> +task:\n> +- text-classification\n> +purpose:\n> +- NLI\n> +size:\n> +- \">100k\"\n> +language producers:\n> +- crowdsourced\n> +annotation:\n> +- crowdsourced\n> +tags:\n> +- extended-from-other-datasets\n> +license: \"CC BY-SA 4.0\"\n>\n> \u2b07\ufe0f Suggested change\n>\n> -license: \"CC BY-SA 4.0\"\n> +license: cc-by-4.0\n>\n> For models (documented at\n> https:\/\/huggingface.co\/docs#what-metadata-can-i-add-to-my-model-card\n> <https:\/\/urldefense.proofpoint.com\/v2\/url?u=https-3A__huggingface.co_docs-23what-2Dmetadata-2Dcan-2Di-2Dadd-2Dto-2Dmy-2Dmodel-2Dcard&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=sCzLyHdE8zgQwk2-sKwA1w&m=PHPCew9Xj3CBQrudcaii70ln-wpRtbngE_tj3Ioy3NI&s=ck3x8c_ujrwKReDTSGuWWgD9W6REHEPbZaO7S4GFRd4&e=>)\n> we use the License keywords listed by GitHub at\n> https:\/\/docs.github.com\/en\/free-pro-team@latest\/github\/creating-cloning-and-archiving-repositories\/licensing-a-repository#searching-github-by-license-type\n> <https:\/\/urldefense.proofpoint.com\/v2\/url?u=https-3A__docs.github.com_en_free-2Dpro-2Dteam-40latest_github_creating-2Dcloning-2Dand-2Darchiving-2Drepositories_licensing-2Da-2Drepository-23searching-2Dgithub-2Dby-2Dlicense-2Dtype&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=sCzLyHdE8zgQwk2-sKwA1w&m=PHPCew9Xj3CBQrudcaii70ln-wpRtbngE_tj3Ioy3NI&s=dWBP-ZvtMErD-egoBiBTCKA4500mjDXVSk03oW1g16U&e=>\n>\n> (Hopefully we'll plug some sort of form validation for users at some point)\n>\n> \u2014\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https:\/\/urldefense.proofpoint.com\/v2\/url?u=https-3A__github.com_huggingface_datasets_pull_663-23pullrequestreview-2D500386385&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=sCzLyHdE8zgQwk2-sKwA1w&m=PHPCew9Xj3CBQrudcaii70ln-wpRtbngE_tj3Ioy3NI&s=HU2Hwi7HH9W2NtMoCIiQlhXxxEULLi8L9gnWU5PBAPY&e=>,\n> or unsubscribe\n> <https:\/\/urldefense.proofpoint.com\/v2\/url?u=https-3A__github.com_notifications_unsubscribe-2Dauth_AAJZSWL63W2LB7SBICA2GMTSISEPZANCNFSM4RWKAZRA&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=sCzLyHdE8zgQwk2-sKwA1w&m=PHPCew9Xj3CBQrudcaii70ln-wpRtbngE_tj3Ioy3NI&s=086__lKQLxTanHfjE8kOIpaJbaWPzBB9gGIt_prWeH8&e=>\n> .\n>\n","@sleepinyourhat You're right, wrong copy\/paste","Question: Where does this standard come from? 
It looks similar to both\n'Data Statements' and 'Datasheets for Datasets', but it doesn't look quite\nlike either.\n\nOn Mon, Oct 12, 2020 at 4:27 PM Yacine Jernite <notifications@github.com>\nwrote:\n\n> Merged #663\n> <https:\/\/urldefense.proofpoint.com\/v2\/url?u=https-3A__github.com_huggingface_datasets_pull_663&d=DwMCaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=sCzLyHdE8zgQwk2-sKwA1w&m=D34WbiHBTYHOdXsI9JV9wJqSieP6zAPGqGKDziM5uKU&s=s4_X-BSEnTKgGg9rPLBt3cyVptyMX_iWD5Ql3UMBi-I&e=>\n> into master.\n>\n> \u2014\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https:\/\/urldefense.proofpoint.com\/v2\/url?u=https-3A__github.com_huggingface_datasets_pull_663-23event-2D3868180429&d=DwMCaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=sCzLyHdE8zgQwk2-sKwA1w&m=D34WbiHBTYHOdXsI9JV9wJqSieP6zAPGqGKDziM5uKU&s=elcM4umqReQfIrgHhpey9W_wPaq5QRgq7xNlubM47QI&e=>,\n> or unsubscribe\n> <https:\/\/urldefense.proofpoint.com\/v2\/url?u=https-3A__github.com_notifications_unsubscribe-2Dauth_AAJZSWJVGQRCR4OTTV27VTTSKNRBXANCNFSM4RWKAZRA&d=DwMCaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=sCzLyHdE8zgQwk2-sKwA1w&m=D34WbiHBTYHOdXsI9JV9wJqSieP6zAPGqGKDziM5uKU&s=NB6nEROnTPgwNyF3ZklOmHnvP7kOkOm7sEa740KbVCs&e=>\n> .\n>\n","@sleepinyourhat The schema is definitely drawing from Data Statements and Datasheets for Datasets but we also wanted to include some more general information to introduce the dataset to new users. If you have any suggestions for changes to the schema itself, please let us know!"],"created_at":1600813777000,"updated_at":1602608720000,"closed_at":1602534412000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/663","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/663","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/663.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/663.patch"},"body":"First draft of a dataset card using the SNLI corpus as an example.\r\n\r\nThis is mostly based on the [Google Doc draft](https:\/\/docs.google.com\/document\/d\/1dKPGP-dA2W0QoTRGfqQ5eBp0CeSsTy7g2yM8RseHtos\/edit), but I added a few sections and moved some things around. \r\n\r\n- I moved **Who Was Involved** to follow **Language**, both because I thought the authors should be presented more towards the front and because I think it makes sense to present the speakers close to the language so it doesn't have to be repeated.\r\n\r\n- I created a section I called **Data Characteristics** by pulling some things out of the other sections. I was thinking that this would be more about the language use in context of the specific task construction. That name isn't very descriptive though and could probably be improved.\r\n-- Domain and language type out of **Language**. I particularly wanted to keep the Language section as simple and as abstracted from the task as possible.\r\n-- 'How was the data collected' out of **Who Was Involved** \r\n-- Normalization out of **Features\/Dataset Structure** \r\n-- I also added an annotation process section.\r\n\r\n- I kept the **Features** section mostly the same as the Google Doc, but I renamed it **Dataset Structure** to more clearly separate it from the language use, and added some links to the documentation pages. \r\n\r\n- I also kept **Tasks Supported**, **Known Limitations**, and **Licensing Information** mostly the same. 
Looking at it again though, maybe **Tasks Supported** should come before **Data Characteristics**?\r\n\r\nThe trickiest part about writing a dataset card for the SNLI corpus specifically is that it's built on datasets which are themselves built on datasets so I had to dig in a lot of places to find information. I think this will be easier with other datasets and once there is more uptake of dataset cards so they can just link to each other. (Maybe that needs to be an added section?)\r\n\r\nI also made an effort not to repeat information across the sections or to refer to a previous section if the information was relevant in a later one. Is there too much repetition still?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/663\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/662","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/662\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/662\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/662\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/662","id":706689866,"node_id":"MDExOlB1bGxSZXF1ZXN0NDkxMTkyNTM3","number":662,"title":"Created dataset card snli.md","user":{"login":"mcmillanmajora","id":26722925,"node_id":"MDQ6VXNlcjI2NzIyOTI1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26722925?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mcmillanmajora","html_url":"https:\/\/github.com\/mcmillanmajora","followers_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/followers","following_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/orgs","repos_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/repos","events_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mcmillanmajora\/received_events","type":"User","site_admin":false},"labels":[{"id":2067401494,"node_id":"MDU6TGFiZWwyMDY3NDAxNDk0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/Dataset%20discussion","name":"Dataset discussion","color":"72f99f","default":false,"description":"Discussions on the datasets"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Resubmitting on a new fork"],"created_at":1600808417000,"updated_at":1600809981000,"closed_at":1600809981000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/662","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/662","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/662.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/662.patch"},"body":"First draft of a dataset card using the SNLI corpus as an 
example","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/662\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/661","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/661\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/661\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/661\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/661","id":706465936,"node_id":"MDExOlB1bGxSZXF1ZXN0NDkxMDA3NjEw","number":661,"title":"Replace pa.OSFile by open","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1600787159000,"updated_at":1620239076000,"closed_at":1600787725000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/661","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/661","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/661.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/661.patch"},"body":"It should fix #643 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/661\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/660","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/660\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/660\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/660\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/660","id":706324032,"node_id":"MDExOlB1bGxSZXF1ZXN0NDkwODkyMjQ0","number":660,"title":"add 
openwebtext","user":{"login":"richarddwang","id":17963619,"node_id":"MDQ6VXNlcjE3OTYzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17963619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/richarddwang","html_url":"https:\/\/github.com\/richarddwang","followers_url":"https:\/\/api.github.com\/users\/richarddwang\/followers","following_url":"https:\/\/api.github.com\/users\/richarddwang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/richarddwang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/richarddwang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/richarddwang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/richarddwang\/orgs","repos_url":"https:\/\/api.github.com\/users\/richarddwang\/repos","events_url":"https:\/\/api.github.com\/users\/richarddwang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/richarddwang\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["BTW, is there a one-line command to make our building scripts pass flake8 test? (included code quality test), I got like trailing space or mixed space and tab warning and error, and fixed them manually.","> BTW, is there a one-line command to make our building scripts pass flake8 test? (included code quality test), I got like trailing space or mixed space and tab warning and error, and fixed them manually.\r\n\r\nI don't think so.\r\nWe have a command for black and isort but not flake8 as far as I know.","Thanks for your awesome work too.\r\nBTW a little reminder, this solves #132 "],"created_at":1600776322000,"updated_at":1601976010000,"closed_at":1601284046000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/660","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/660","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/660.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/660.patch"},"body":"This adds [The OpenWebText Corpus](https:\/\/skylion007.github.io\/OpenWebTextCorpus\/), which is a clean and large text corpus for nlp pretraining. It is an open source effort to reproduce OpenAI\u2019s WebText dataset used by GPT-2, and it is also needed to reproduce ELECTRA.\r\n\r\nIt solves #132 .\r\n\r\n### Besides dataset building script, I made some changes to the library.\r\n\r\n1. Extract large amount of compressed files with multi processing\r\nI add a `num_proc` argument to `DownloadManager.extract` and pass this `num_proc` to `map_nested`. So I can decompress 20 thousands compressed files faster. `num_proc` I add is default to `None`, so it shouldn't break any other thing.\r\n\r\n2. In `cached_path`, I change the order to deal with different kind of compressed files (zip, tar, gzip)\r\nBecause there is no way to 100% detect a file is a zip file (see [this](https:\/\/stackoverflow.com\/questions\/18194688\/how-can-i-determine-if-a-file-is-a-zip-file)), I found it wrongly detect `'.\/datasets\/downloads\/extracted\/58764bd6898fa339b25d92e7fbbc3d8dbf64fb504edff1a30a1d7d99d1561027\/openwebtext\/urlsf_subset13-630_data.xz'` as a zip and try decompress it with zip, sure it will get error. So I made it detect wheter the file is tar or gzip first and detect zip in the last.\r\n\r\n3. 
`MockDownloadManager.extract`\r\nCuz I pass `num_proc` to `DownloadManager.extract`, I also have to make `MockDownloadManager.extract` to accept extra keywork arguments. So I make it `extract(path, *args, **kwargs)`, but just return the path as original implementation.\r\n\r\n**Note**: If there is better way for points mentioned above, thought I would like to help, unless we can solve point4 (make dataset building fast), I may not be able to afford rebuild the dataset again because of change of the dataset script (Building the dataset cost me 4 days). \r\n\r\n### There is something I think we can improve\r\n\r\n4. Long time to decompress compressed files\r\nEven I decompress those 20 thousands compressed files with 12 process on my 16 core 3.x Ghz server. It still took about 3 ~ 4days to complete dataset building. Most of time spent on decompress those files.\r\n\r\n### Info about the source data\r\nThe source data is an tar.xz file with following structure, files\/directory beyond compressed file is what can we get after decompress it.\r\n```\r\nopenwebtext.tar.xz\r\n |__ openwebtext\r\n |__subset000.xz\r\n | |__ ....txt\r\n | |__ ....txt\r\n | ...\r\n |__ subset001.xz\r\n |\r\n ....\r\n```\r\nAnd this the structure of dummy data, same as the original one.\r\n```\r\ndummy_data.zip\r\n |__ dummy_data\r\n |__ openwebtext\r\n |__fake_subset-1_data-dirxz # actually it is a directory\r\n | |__ ....txt\r\n | |__ ....txt\r\n |__ fake_subset-2_data-dirxz\r\n |__ ....txt\r\n |__ ....txt\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/660\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/659","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/659\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/659\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/659\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/659","id":706231506,"node_id":"MDExOlB1bGxSZXF1ZXN0NDkwODE4NTY1","number":659,"title":"Keep new columns in transmit 
format","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1600768043000,"updated_at":1600769242000,"closed_at":1600769240000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/659","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/659","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/659.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/659.patch"},"body":"When a dataset is formatted with a list of columns that `__getitem__` should return, then calling `map` to add new columns doesn't add the new columns to this list. \r\n\r\nIt caused `KeyError` issues in #620 \r\n\r\nI changed the logic to add those new columns to the list that `__getitem__` should return.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/659\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/658","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/658\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/658\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/658\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/658","id":706206247,"node_id":"MDExOlB1bGxSZXF1ZXN0NDkwNzk4MDc0","number":658,"title":"Fix squad metric's 
Features","user":{"login":"tshrjn","id":8372098,"node_id":"MDQ6VXNlcjgzNzIwOTg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8372098?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tshrjn","html_url":"https:\/\/github.com\/tshrjn","followers_url":"https:\/\/api.github.com\/users\/tshrjn\/followers","following_url":"https:\/\/api.github.com\/users\/tshrjn\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tshrjn\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tshrjn\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tshrjn\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tshrjn\/orgs","repos_url":"https:\/\/api.github.com\/users\/tshrjn\/repos","events_url":"https:\/\/api.github.com\/users\/tshrjn\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tshrjn\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Closing this one in favor of #670 \r\n\r\nThanks again for reporting the issue and proposing this fix !\r\nLet me know if you have other remarks"],"created_at":1600765792000,"updated_at":1601395110000,"closed_at":1601395110000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/658","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/658","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/658.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/658.patch"},"body":"Resolves issue [657](https:\/\/github.com\/huggingface\/datasets\/issues\/657).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/658\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/657","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/657\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/657\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/657\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/657","id":706204383,"node_id":"MDU6SXNzdWU3MDYyMDQzODM=","number":657,"title":"Squad Metric Description & Feature 
Mismatch","user":{"login":"tshrjn","id":8372098,"node_id":"MDQ6VXNlcjgzNzIwOTg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8372098?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tshrjn","html_url":"https:\/\/github.com\/tshrjn","followers_url":"https:\/\/api.github.com\/users\/tshrjn\/followers","following_url":"https:\/\/api.github.com\/users\/tshrjn\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tshrjn\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tshrjn\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tshrjn\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tshrjn\/orgs","repos_url":"https:\/\/api.github.com\/users\/tshrjn\/repos","events_url":"https:\/\/api.github.com\/users\/tshrjn\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tshrjn\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting !\r\nThere indeed a mismatch between the features and the kwargs description\r\n\r\nI believe `answer_start` was added to match the squad dataset format for consistency, even though it is not used in the metric computation. I think I'd rather keep it this way, so that you can just give `references=squad[\"answers\"]` to `.compute()`.\r\nMaybe we can just fix the description then.","But then providing the `answer_start` becomes mandatory since the format of the features is checked against the one provided in the squad [file](https:\/\/github.com\/huggingface\/datasets\/pull\/658\/files)."],"created_at":1600765620000,"updated_at":1602555416000,"closed_at":1601395058000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"The [description](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/metrics\/squad\/squad.py#L39) doesn't mention `answer_start` in squad. However the `datasets.features` require [it](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/metrics\/squad\/squad.py#L68). 
It's also not used in the evaluation.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/657\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/656","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/656\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/656\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/656\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/656","id":705736319,"node_id":"MDExOlB1bGxSZXF1ZXN0NDkwNDEwODAz","number":656,"title":"Use multiprocess from pathos for multiprocessing","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["We can just install multiprocess actually, I'll change that","Just an FYI: I remember that I wanted to try pathos a couple of years back and I ran into issues considering cross-platform; the code would just break on Windows. If I can verify this PR by running CPU tests on Windows, let me know!","That's good to know thanks\r\nI guess we can just wait for #644 to be merged first. 
I'm working on fixing the tests for windows","Looks like all the CI jobs on windows passed !\r\nI also tested locally on my windows and it works great :) \r\n\r\nI think this is ready to merge, let me know if you have any remarks @thomwolf @BramVanroy "],"created_at":1600704739000,"updated_at":1601304340000,"closed_at":1601304339000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/656","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/656","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/656.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/656.patch"},"body":"[Multiprocess](https:\/\/github.com\/uqfoundation\/multiprocess) (from the [pathos](https:\/\/github.com\/uqfoundation\/pathos) project) allows to use lambda functions in multiprocessed map.\r\nIt was suggested to use it by @kandorm.\r\n\r\nWe're already using dill which is its only dependency.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/656\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/655","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/655\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/655\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/655\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/655","id":705672208,"node_id":"MDExOlB1bGxSZXF1ZXN0NDkwMzU4OTQ3","number":655,"title":"added Winogrande debiased subset","user":{"login":"TevenLeScao","id":26709476,"node_id":"MDQ6VXNlcjI2NzA5NDc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26709476?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/TevenLeScao","html_url":"https:\/\/github.com\/TevenLeScao","followers_url":"https:\/\/api.github.com\/users\/TevenLeScao\/followers","following_url":"https:\/\/api.github.com\/users\/TevenLeScao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/TevenLeScao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/TevenLeScao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/TevenLeScao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/TevenLeScao\/orgs","repos_url":"https:\/\/api.github.com\/users\/TevenLeScao\/repos","events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["To fix the CI you just have to copy the dummy data to the 1.1.0 folder, and maybe create the dummy ones for the `debiased` configuration","Fixed! 
Thanks @lhoestq "],"created_at":1600699868000,"updated_at":1600705240000,"closed_at":1600704964000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/655","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/655","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/655.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/655.patch"},"body":"The [Winogrande](https:\/\/arxiv.org\/abs\/1907.10641) paper mentions a `debiased` subset that wasn't in the first release; this PR adds it.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/655\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/654","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/654\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/654\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/654\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/654","id":705511058,"node_id":"MDExOlB1bGxSZXF1ZXN0NDkwMjI1Nzk3","number":654,"title":"Allow empty inputs in metrics","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1600687596000,"updated_at":1601956308000,"closed_at":1600704818000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/654","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/654","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/654.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/654.patch"},"body":"There was an arrow error when trying to compute a metric with empty inputs. 
The error was occurring when reading the arrow file, before calling metric._compute.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/654\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/653","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/653\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/653\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/653\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/653","id":705482391,"node_id":"MDExOlB1bGxSZXF1ZXN0NDkwMjAxOTg4","number":653,"title":"handle data alteration when trying type","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1600684909000,"updated_at":1600704786000,"closed_at":1600704785000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/653","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/653","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/653.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/653.patch"},"body":"Fix #649 \r\n\r\nThe bug came from the type inference that didn't handle a weird case in Pyarrow.\r\nIndeed this code runs without error but alters the data in arrow:\r\n```python\r\nimport pyarrow as pa\r\n\r\ntype = pa.struct({\"a\": pa.struct({\"b\": pa.string()})})\r\narray_with_altered_data = pa.array([{\"a\": {\"b\": \"foo\", \"c\": \"bar\"}}] * 10, type=type)\r\nprint(array_with_altered_data[0].as_py())\r\n# {'a': {'b': 'foo'}} -> the sub-field \"c\" is missing\r\n```\r\n(I don't know if this is intended in pyarrow tbh)\r\n\r\nWe didn't take this case into account during type inference. 
By default it was keeping old features and maybe alter data.\r\nTo fix that I added a line that checks that the first element of the array is not altered.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/653\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/652","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/652\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/652\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/652\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/652","id":705390850,"node_id":"MDExOlB1bGxSZXF1ZXN0NDkwMTI3MjIx","number":652,"title":"handle connection error in download_prepared_from_hf_gcs","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1600676471000,"updated_at":1600676923000,"closed_at":1600676922000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/652","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/652","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/652.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/652.patch"},"body":"Fix #647 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/652\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/651","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/651\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/651\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/651\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/651","id":705212034,"node_id":"MDU6SXNzdWU3MDUyMTIwMzQ=","number":651,"title":"Problem with JSON dataset 
format","user":{"login":"vikigenius","id":12724810,"node_id":"MDQ6VXNlcjEyNzI0ODEw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12724810?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vikigenius","html_url":"https:\/\/github.com\/vikigenius","followers_url":"https:\/\/api.github.com\/users\/vikigenius\/followers","following_url":"https:\/\/api.github.com\/users\/vikigenius\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vikigenius\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vikigenius\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vikigenius\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vikigenius\/orgs","repos_url":"https:\/\/api.github.com\/users\/vikigenius\/repos","events_url":"https:\/\/api.github.com\/users\/vikigenius\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vikigenius\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Currently the `json` dataset doesn't support this format unfortunately.\r\nHowever you could load it with\r\n```python\r\nfrom datasets import Dataset\r\nimport pandas as pd\r\n\r\ndf = pd.read_json(\"path_to_local.json\", orient=\"index\")\r\ndataset = Dataset.from_pandas(df)\r\n```","or you can make a custom dataset script as explained in doc here: https:\/\/huggingface.co\/docs\/datasets\/add_dataset.html"],"created_at":1600646234000,"updated_at":1600690464000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I have a local json dataset with the following form.\r\n\r\n{\r\n 'id01234': {'key1': value1, 'key2': value2, 'key3': value3},\r\n 'id01235': {'key1': value1, 'key2': value2, 'key3': value3},\r\n .\r\n .\r\n .\r\n 'id09999': {'key1': value1, 'key2': value2, 'key3': value3}\r\n}\r\nNote that instead of a list of records it's basically a dictionary of key value pairs with the keys being the record_ids and the values being the corresponding record.\r\n\r\nReading this with json:\r\n\r\n```\r\ndata = datasets.load('json', data_files='path_to_local.json')\r\n```\r\nThrows an error and asks me to chose a field. 
What's the right way to handle this?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/651\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/650","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/650\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/650\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/650\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/650","id":704861844,"node_id":"MDU6SXNzdWU3MDQ4NjE4NDQ=","number":650,"title":"dummy data testing can't test datasets using `dl_manager.extract` in `_split_generators`","user":{"login":"richarddwang","id":17963619,"node_id":"MDQ6VXNlcjE3OTYzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17963619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/richarddwang","html_url":"https:\/\/github.com\/richarddwang","followers_url":"https:\/\/api.github.com\/users\/richarddwang\/followers","following_url":"https:\/\/api.github.com\/users\/richarddwang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/richarddwang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/richarddwang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/richarddwang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/richarddwang\/orgs","repos_url":"https:\/\/api.github.com\/users\/richarddwang\/repos","events_url":"https:\/\/api.github.com\/users\/richarddwang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/richarddwang\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi :) \r\nIn your dummy data zip file you can just have `subset000.xz` as directories instead of compressed files.\r\nLet me know if it helps","Thanks for your comment @lhoestq ,\r\nJust for confirmation, changing dummy data like this won't make dummy test test the functionality to extract `subsetxxx.xz` but actually kind of circumvent it. But since we will test the real data so it is ok ?","Yes it's fine for now. We plan to add a job for slow tests.\r\nAnd at one point we'll also do another pass on the dummy data handling and consider extracting files.","Thanks for the confirmation.\r\nAlso the suggestion works. 
Thank you."],"created_at":1600513623000,"updated_at":1600775650000,"closed_at":1600775649000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi, I recently want to add a dataset whose source data is like this\r\n```\r\nopenwebtext.tar.xz\r\n |__ openwebtext\r\n |__subset000.xz\r\n | |__ ....txt\r\n | |__ ....txt\r\n | ...\r\n |__ subset001.xz\r\n |\r\n ....\r\n```\r\nSo I wrote `openwebtext.py` like this\r\n```\r\n def _split_generators(self, dl_manager):\r\n dl_dir = dl_manager.download_and_extract(_URL)\r\n owt_dir = os.path.join(dl_dir, 'openwebtext')\r\n subset_xzs = [\r\n os.path.join(owt_dir, file_name) for file_name in os.listdir(owt_dir) if file_name.endswith('xz') # filter out ...xz.lock\r\n ]\r\n ex_dirs = dl_manager.extract(subset_xzs, num_proc=round(os.cpu_count()*0.75))\r\n nested_txt_files = [ \r\n [ \r\n os.path.join(ex_dir,txt_file_name) for txt_file_name in os.listdir(ex_dir) if txt_file_name.endswith('txt')\r\n ] for ex_dir in ex_dirs\r\n ]\r\n txt_files = chain(*nested_txt_files)\r\n return [\r\n datasets.SplitGenerator(\r\n name=datasets.Split.TRAIN, gen_kwargs={\"txt_files\": txt_files}\r\n ),\r\n ]\r\n```\r\nAll went good, I can load and use real openwebtext, except when I try to test with dummy data. The problem is `MockDownloadManager.extract` do nothing, so `ex_dirs = dl_manager.extract(subset_xzs)` won't decompress `subset_xxx.xz`s for me.\r\n\r\nHow should I do ? Or you can modify `MockDownloadManager` to make it like a real `DownloadManager` ?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/650\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/649","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/649\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/649\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/649\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/649","id":704838415,"node_id":"MDU6SXNzdWU3MDQ4Mzg0MTU=","number":649,"title":"Inconsistent behavior in map","user":{"login":"krandiash","id":10166085,"node_id":"MDQ6VXNlcjEwMTY2MDg1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10166085?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/krandiash","html_url":"https:\/\/github.com\/krandiash","followers_url":"https:\/\/api.github.com\/users\/krandiash\/followers","following_url":"https:\/\/api.github.com\/users\/krandiash\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/krandiash\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/krandiash\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/krandiash\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/krandiash\/orgs","repos_url":"https:\/\/api.github.com\/users\/krandiash\/repos","events_url":"https:\/\/api.github.com\/users\/krandiash\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/krandiash\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't 
working"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting !\r\n\r\nThis issue must have appeared when we refactored type inference in `nlp`\r\nBy default the library tries to keep the same feature types when applying `map` but apparently it has troubles with nested structures. I'll try to fix that next week"],"created_at":1600504872000,"updated_at":1600704785000,"closed_at":1600704785000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I'm observing inconsistent behavior when applying .map(). This happens specifically when I'm incrementally adding onto a feature that is a nested dictionary. 
Here's a simple example that reproduces the problem.\r\n\r\n```python\r\nimport datasets\r\n\r\n# Dataset with a single feature called 'field' consisting of two examples\r\ndataset = datasets.Dataset.from_dict({'field': ['a', 'b']})\r\nprint(dataset[0])\r\n# outputs\r\n{'field': 'a'}\r\n\r\n# Map this dataset to create another feature called 'otherfield', which is a dictionary containing a key called 'capital'\r\ndataset = dataset.map(lambda example: {'otherfield': {'capital': example['field'].capitalize()}})\r\nprint(dataset[0])\r\n# output is okay\r\n{'field': 'a', 'otherfield': {'capital': 'A'}}\r\n\r\n# Now I want to map again to modify 'otherfield', by adding another key called 'append_x' to the dictionary under 'otherfield'\r\nprint(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x'}})[0])\r\n# printing out the first example after applying the map shows that the new key 'append_x' doesn't get added\r\n# it also messes up the value stored at 'capital'\r\n{'field': 'a', 'otherfield': {'capital': None}}\r\n\r\n# Instead, I try to do the same thing by using a different mapped fn\r\nprint(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x', 'capital': example['otherfield']['capital']}})[0])\r\n# this preserves the value under capital, but still no 'append_x'\r\n{'field': 'a', 'otherfield': {'capital': 'A'}}\r\n\r\n# Instead, I try to pass 'otherfield' to remove_columns\r\nprint(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x', 'capital': example['otherfield']['capital']}}, remove_columns=['otherfield'])[0])\r\n# this still doesn't fix the problem\r\n{'field': 'a', 'otherfield': {'capital': 'A'}}\r\n\r\n# Alternately, here's what happens if I just directly map both 'capital' and 'append_x' on a fresh dataset.\r\n\r\n# Recreate the dataset\r\ndataset = datasets.Dataset.from_dict({'field': ['a', 'b']})\r\n# Now map the entire 'otherfield' dict directly, instead of incrementally as before\r\nprint(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x', 'capital': example['field'].capitalize()}})[0])\r\n# This looks good!\r\n{'field': 'a', 'otherfield': {'append_x': 'ax', 'capital': 'A'}}\r\n```\r\n\r\nThis might be a new issue, because I didn't see this behavior in the `nlp` library. 
\r\n\r\nAny help is appreciated!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/649\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/648","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/648\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/648\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/648\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/648","id":704753123,"node_id":"MDU6SXNzdWU3MDQ3NTMxMjM=","number":648,"title":"offset overflow when multiprocessing batched map on large datasets.","user":{"login":"richarddwang","id":17963619,"node_id":"MDQ6VXNlcjE3OTYzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17963619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/richarddwang","html_url":"https:\/\/github.com\/richarddwang","followers_url":"https:\/\/api.github.com\/users\/richarddwang\/followers","following_url":"https:\/\/api.github.com\/users\/richarddwang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/richarddwang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/richarddwang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/richarddwang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/richarddwang\/orgs","repos_url":"https:\/\/api.github.com\/users\/richarddwang\/repos","events_url":"https:\/\/api.github.com\/users\/richarddwang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/richarddwang\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This should be fixed with #645 ","Feel free to re-open if it still occurs"],"created_at":1600481711000,"updated_at":1600534027000,"closed_at":1600533991000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"It only happened when \"multiprocessing\" + \"batched\" + \"large dataset\" at the same time.\r\n\r\n```\r\ndef bprocess(examples):\r\n examples['len'] = []\r\n for text in examples['text']:\r\n examples['len'].append(len(text))\r\n return examples\r\nwiki.map(brpocess, batched=True, num_proc=8)\r\n```\r\n```\r\n---------------------------------------------------------------------------\r\nRemoteTraceback Traceback (most recent call last)\r\nRemoteTraceback: \r\n\"\"\"\r\nTraceback (most recent call last):\r\n File \"\/home\/yisiang\/miniconda3\/envs\/ml\/lib\/python3.7\/multiprocessing\/pool.py\", line 121, in worker\r\n result = (True, func(*args, **kwds))\r\n File \"\/home\/yisiang\/datasets\/src\/datasets\/arrow_dataset.py\", line 153, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"\/home\/yisiang\/datasets\/src\/datasets\/fingerprint.py\", line 163, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"\/home\/yisiang\/datasets\/src\/datasets\/arrow_dataset.py\", line 1486, in _map_single\r\n batch = self[i : i + batch_size]\r\n File 
\"\/home\/yisiang\/datasets\/src\/datasets\/arrow_dataset.py\", line 1071, in __getitem__\r\n format_kwargs=self._format_kwargs,\r\n File \"\/home\/yisiang\/datasets\/src\/datasets\/arrow_dataset.py\", line 972, in _getitem\r\n data_subset = self._data.take(indices_array)\r\n File \"pyarrow\/table.pxi\", line 1145, in pyarrow.lib.Table.take\r\n File \"\/home\/yisiang\/miniconda3\/envs\/ml\/lib\/python3.7\/site-packages\/pyarrow\/compute.py\", line 268, in take\r\n return call_function('take', [data, indices], options)\r\n File \"pyarrow\/_compute.pyx\", line 298, in pyarrow._compute.call_function\r\n File \"pyarrow\/_compute.pyx\", line 192, in pyarrow._compute.Function.call\r\n File \"pyarrow\/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow\/error.pxi\", line 84, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays\r\n\"\"\"\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nArrowInvalid Traceback (most recent call last)\r\n in \r\n 30 owt = datasets.load_dataset('\/home\/yisiang\/datasets\/datasets\/openwebtext\/openwebtext.py', cache_dir='.\/datasets')['train']\r\n 31 print('load\/create data from OpenWebText Corpus for ELECTRA')\r\n---> 32 e_owt = ELECTRAProcessor(owt, apply_cleaning=False).map(cache_file_name=f\"electra_owt_{c.max_length}.arrow\")\r\n 33 dsets.append(e_owt)\r\n 34 \r\n\r\n~\/Reexamine_Attention\/electra_pytorch\/_utils\/utils.py in map(self, **kwargs)\r\n 126 writer_batch_size=10**4,\r\n 127 num_proc=num_proc,\r\n--> 128 **kwargs\r\n 129 )\r\n 130 \r\n\r\n~\/hugdatafast\/hugdatafast\/transform.py in my_map(self, *args, **kwargs)\r\n 21 if not cache_file_name.endswith('.arrow'): cache_file_name += '.arrow'\r\n 22 if '\/' not in cache_file_name: cache_file_name = os.path.join(self.cache_directory(), cache_file_name)\r\n---> 23 return self.map(*args, cache_file_name=cache_file_name, **kwargs)\r\n 24 \r\n 25 @patch\r\n\r\n~\/datasets\/src\/datasets\/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)\r\n 1285 logger.info(\"Spawning {} processes\".format(num_proc))\r\n 1286 results = [pool.apply_async(self.__class__._map_single, kwds=kwds) for kwds in kwds_per_shard]\r\n-> 1287 transformed_shards = [r.get() for r in results]\r\n 1288 logger.info(\"Concatenating {} shards from multiprocessing\".format(num_proc))\r\n 1289 result = concatenate_datasets(transformed_shards)\r\n\r\n~\/datasets\/src\/datasets\/arrow_dataset.py in (.0)\r\n 1285 logger.info(\"Spawning {} processes\".format(num_proc))\r\n 1286 results = [pool.apply_async(self.__class__._map_single, kwds=kwds) for kwds in kwds_per_shard]\r\n-> 1287 transformed_shards = [r.get() for r in results]\r\n 1288 logger.info(\"Concatenating {} shards from multiprocessing\".format(num_proc))\r\n 1289 result = concatenate_datasets(transformed_shards)\r\n\r\n~\/miniconda3\/envs\/ml\/lib\/python3.7\/multiprocessing\/pool.py in get(self, timeout)\r\n 655 return self._value\r\n 656 else:\r\n--> 657 raise self._value\r\n 658 \r\n 659 def _set(self, i, obj):\r\n\r\nArrowInvalid: offset overflow while concatenating arrays\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/648\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/647","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/647\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/647\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/647\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/647","id":704734764,"node_id":"MDU6SXNzdWU3MDQ3MzQ3NjQ=","number":647,"title":"Cannot download dataset_info.json","user":{"login":"chiyuzhang94","id":33407613,"node_id":"MDQ6VXNlcjMzNDA3NjEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33407613?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/chiyuzhang94","html_url":"https:\/\/github.com\/chiyuzhang94","followers_url":"https:\/\/api.github.com\/users\/chiyuzhang94\/followers","following_url":"https:\/\/api.github.com\/users\/chiyuzhang94\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/chiyuzhang94\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/chiyuzhang94\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/chiyuzhang94\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/chiyuzhang94\/orgs","repos_url":"https:\/\/api.github.com\/users\/chiyuzhang94\/repos","events_url":"https:\/\/api.github.com\/users\/chiyuzhang94\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/chiyuzhang94\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting !\r\nWe should add support for servers without internet connection indeed\r\nI'll do that early next week","Thanks, @lhoestq !\r\nPlease let me know when it is available. ","Right now the recommended way is to create the dataset on a server with internet connection and then to save it and copy the serialized dataset to the server without internet connection.","#652 should allow you to load text\/json\/csv\/pandas datasets without an internet connection **IF** you've the dataset script locally.\r\n\r\nExample: \r\nIf you have `datasets\/text\/text.py` locally, then you can do `load_dataset(\".\/datasets\/text\", data_files=...)`"],"created_at":1600479315000,"updated_at":1600676922000,"closed_at":1600676922000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I am running my job on a cloud server where does not provide for connections from the standard compute nodes to outside resources. Hence, when I use `dataset.load_dataset()` to load data, I got an error like this:\r\n\r\n```\r\nConnectionError: Couldn't reach https:\/\/storage.googleapis.com\/huggingface-nlp\/cache\/datasets\/text\/default-53ee3045f07ba8ca\/0.0.0\/dataset_info.json\r\n```\r\n\r\nI tried to open this link manually, but I cannot access this file. 
How can I download this file and pass it through `dataset.load_dataset()` manually?\r\n\r\nVersions:\r\nPython version 3.7.3\r\nPyTorch version 1.6.0\r\nTensorFlow version 2.3.0\r\ndatasets version: 1.0.1 \r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/647\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/646","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/646\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/646\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/646\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/646","id":704607371,"node_id":"MDExOlB1bGxSZXF1ZXN0NDg5NTAyMTM3","number":646,"title":"Fix docs typos","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1600457547000,"updated_at":1600705854000,"closed_at":1600704852000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/646","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/646","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/646.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/646.patch"},"body":"This PR fixes few typos in the docs and the error in the code snippet in the set_format section in docs\/source\/torch_tensorflow.rst. `torch.utils.data.Dataloader` expects padded batches so it throws an error due to not being able to stack the unpadded tensors. If we follow the Quick tour from the docs where they add the `truncation=True, padding='max_length'` arguments to the tokenizer before passing data to Dataloader, we can easily fix the issue. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/646\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/645","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/645\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/645\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/645\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/645","id":704542234,"node_id":"MDExOlB1bGxSZXF1ZXN0NDg5NDQ5MjAx","number":645,"title":"Don't use take on dataset table in pyarrow 1.0.x","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I tried lower batch sizes and it didn't accelerate filter (quite the opposite actually).\r\nThe slow-down also appears for pyarrow 0.17.1 for some reason, not sure it comes from these changes","I just checked the benchmarks of other PRs and some of them had 300s (!!) for filter. 
This needs some investigation..","Merging this one since it's not the cause of the the slow down"],"created_at":1600450294000,"updated_at":1600533992000,"closed_at":1600533991000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/645","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/645","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/645.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/645.patch"},"body":"Fix #615 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/645\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/644","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/644\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/644\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/644\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/644","id":704534501,"node_id":"MDExOlB1bGxSZXF1ZXN0NDg5NDQzMTk1","number":644,"title":"Better windows support","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This PR is ready :)\r\nIt brings official support for windows.\r\n\r\nSome tests `AWSDatasetTest` are failing.\r\nThis is because I had to fix a few datasets that were not compatible with windows.\r\nThese test will pass once they got merged on master :)"],"created_at":1600449456000,"updated_at":1601042550000,"closed_at":1601042548000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/644","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/644","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/644.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/644.patch"},"body":"There are a few differences in the behavior of python and pyarrow on windows.\r\n\r\nFor example there are restrictions when accessing\/deleting files that are open\r\n\r\nFix #590 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/644\/timeline","performed_via_github_app":null,"is_pull_request":true} 
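The PR #644 description above notes that on Windows "there are restrictions when accessing/deleting files that are open". The following toy example, an illustration of the platform behavior rather than code from the PR's diff, shows the difference with plain Python file handling.

```python
import os
import tempfile

# Toy illustration (not from PR #644): on Windows, removing a file that another
# handle still holds open usually raises PermissionError, while on Linux/macOS
# the unlink succeeds even though the handle remains usable.
path = os.path.join(tempfile.mkdtemp(), "example.arrow")
handle = open(path, "wb")
try:
    os.remove(path)  # succeeds on POSIX, typically fails on Windows
    print("removed while still open (POSIX behavior)")
except PermissionError:
    print("close the handle before removing (Windows behavior)")
finally:
    handle.close()
    if os.path.exists(path):
        os.remove(path)
```

This is why tests that open and then delete temporary Arrow files can pass on Linux yet fail on Windows unless every handle is closed first.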
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/643","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/643\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/643\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/643\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/643","id":704477164,"node_id":"MDU6SXNzdWU3MDQ0NzcxNjQ=","number":643,"title":"Caching processed dataset at wrong folder","user":{"login":"mrm8488","id":3653789,"node_id":"MDQ6VXNlcjM2NTM3ODk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3653789?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mrm8488","html_url":"https:\/\/github.com\/mrm8488","followers_url":"https:\/\/api.github.com\/users\/mrm8488\/followers","following_url":"https:\/\/api.github.com\/users\/mrm8488\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mrm8488\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mrm8488\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mrm8488\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mrm8488\/orgs","repos_url":"https:\/\/api.github.com\/users\/mrm8488\/repos","events_url":"https:\/\/api.github.com\/users\/mrm8488\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mrm8488\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting !\r\nIt uses a temporary file to write the data.\r\nHowever it looks like the temporary file is not placed in the right directory during the processing","Well actually I just tested and the temporary file is placed in the same directory, so it should work as expected.\r\nWhich version of `datasets` are you using ?","`datasets-1.0.1`\r\nHere you can reproduce it here:\r\nhttps:\/\/colab.research.google.com\/drive\/1O0KcepTFsmpkBbrbLLMq42iwTKmQh8d5?usp=sharing\r\n","It looks like a pyarrow issue with google colab.\r\nFor some reason this code increases the disk usage of google colab while it actually writes into google drive:\r\n\r\n```python\r\nimport pyarrow as pa\r\n\r\nstream = pa.OSFile(\"\/content\/drive\/My Drive\/path\/to\/file.arrow\", \"wb\")\r\nwriter = pa.RecordBatchStreamWriter(stream, schema=pa.schema({\"text\": pa.string()}))\r\nwriter.write_table(pa.Table.from_pydict({\"text\": [\"a\"*511 + \"\\n\"] * ((1 << 30) \/\/ 512)})) # 1GiB\r\nwriter.close()\r\nstream.close()\r\n```\r\n\r\nMoreover if I `rm` the file on google drive, it frees disk space on google colab.","It looks like replacing `pa.OSFile` by `open` fixes it, I'm going to open a PR","Ok. 
Thank you so much!","Actually I did more tests it doesn't >.<\r\nI'll let you know if I find a way to fix that","Actually I also have the issue when writing a regular text file\r\n\r\n```python\r\nf = open(\"\/content\/drive\/My Drive\/path\/to\/file\", \"w\")\r\nf.write((\"a\"*511 + \"\\n\") * ((1 << 30) \/\/ 512)) # 1GiB\r\nf.close()\r\n```\r\n\r\nIs that supposed to happen ?","The code you wrote should write a 1GB file in the Google Drive folder. Doesn't it? ","Yes it does, but the disk usage of google colab also increases by 1GB","I could check it and as you say as I write to te Drive disk the colab disk also increases...","To reproduce it: \r\n```bash\r\n!df -h | grep sda1\r\n```\r\n```python\r\nf = open(\"\/content\/drive\/My Drive\/test_to_remove.txt\", \"w\")\r\nf.write((\"a\"*511 + \"\\n\") * ((1 << 30) \/\/ 512)) # 1GiB\r\nf.write((\"a\"*511 + \"\\n\") * ((1 << 30) \/\/ 512)) # 1GiB\r\nf.close()\r\n```\r\n```bash\r\n!ls -lh \/content\/drive\/My\\ Drive\/test_to_remove.txt\r\n\r\n!df -h | grep sda1\r\n\r\n!rm -rf \/content\/drive\/My\\ Drive\/test_to_remove.txt\r\n\r\n```\r\n[Colab](https:\/\/colab.research.google.com\/drive\/1D0UiweCYQwwWZ65EEhuqqbaDDbhJYXfm?usp=sharing)\r\n\r\n\r\n"],"created_at":1600443686000,"updated_at":1601309680000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi guys, I run this on my Colab (PRO):\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('text', data_files='\/content\/corpus.txt', cache_dir='\/content\/drive\/My Drive', split='train')\r\n\r\ndef encode(examples):\r\n return tokenizer(examples['text'], truncation=True, padding='max_length')\r\n\r\ndataset = dataset.map(encode, batched=True)\r\n```\r\nThe file is about 4 GB, so I cannot process it on the Colab HD because there is no enough space. 
So I decided to mount my Google Drive fs and do it on it.\r\nThe dataset is cached in the right place but by processing it (applying `encode` function) seems to use a different folder because Colab HD starts to grow and it crashes when it should be done in the Drive fs.\r\n\r\nWhat gets me crazy, it prints it is processing\/encoding the dataset in the right folder:\r\n```\r\nTesting the mapped function outputs\r\nTesting finished, running the mapping function on the dataset\r\nCaching processed dataset at \/content\/drive\/My Drive\/text\/default-ad3e69d6242ee916\/0.0.0\/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7\/cache-b16341780a59747d.arrow\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/643\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/642","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/642\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/642\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/642\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/642","id":704397499,"node_id":"MDExOlB1bGxSZXF1ZXN0NDg5MzMwMDAx","number":642,"title":"Rename wnut fields","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1600437091000,"updated_at":1600449511000,"closed_at":1600449510000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/642","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/642","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/642.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/642.patch"},"body":"As mentioned in #641 it would be cool to have it follow the naming of the other NER datasets","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/642\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/641","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/641\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/641\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/641\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/641","id":704373940,"node_id":"MDExOlB1bGxSZXF1ZXN0NDg5MzExOTU3","number":641,"title":"Add Polyglot-NER Dataset","user":{"login":"joeddav","id":9353833,"node_id":"MDQ6VXNlcjkzNTM4MzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9353833?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/joeddav","html_url":"https:\/\/github.com\/joeddav","followers_url":"https:\/\/api.github.com\/users\/joeddav\/followers","following_url":"https:\/\/api.github.com\/users\/joeddav\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/joeddav\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/joeddav\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/joeddav\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/joeddav\/orgs","repos_url":"https:\/\/api.github.com\/users\/joeddav\/repos","events_url":"https:\/\/api.github.com\/users\/joeddav\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/joeddav\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @joeddav thanks for adding this! (I did a long webarchive.org session to actually find that dataset a while ago).\r\n\r\nOne question: should we manually correct the labeling scheme to (at least) IOB1?\r\n\r\nThat means \"LOC\" will be converted to \"I-LOC\". IOB1 is not explict. mentioned in the paper, but it is used in the documentation:\r\n\r\nhttps:\/\/polyglot.readthedocs.io\/en\/latest\/NamedEntityRecognition.html","@stefan-it I went back and forth on this. My biggest problem with it is that once you are in IOB, there is the expectation that the beginning of new entities are marked with a `B-` (at least in the case of two back-to-back entities):\r\n```\r\nToday O\r\nAlice I-PER\r\nBob B-PER\r\nand O\r\nI O \r\nate O\r\nlasagna O\r\n```\r\nIf we just prepend `I-` to everything, `Bob` would be incorrectly tagged `I-PER`, meaning `Bob Alice` is a single entity. The current format is bad but is at least clear that it does not contain that information.\r\n\r\nBut I could go either way if someone has a strong opinion.","Indeed I'm not sure we can convert them to IOB because of this issue. I'm fine with keeping it like that","I'll do a release later today, hopefully we can include this dataset in the release :)\r\n\r\nLet me know if you need help with the dummy data","@lhoestq cool thanks, I think I've got it right now \u2013 just zipped them wrong. 
I'm running tests locally now and then will push.","@lhoestq set to merge?","@joeddav I'm fine with keeping the original labeling scheme :) "],"created_at":1600435304000,"updated_at":1600571083000,"closed_at":1600571083000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/641","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/641","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/641.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/641.patch"},"body":"Adds the [Polyglot-NER dataset](https:\/\/sites.google.com\/site\/rmyeid\/projects\/polylgot-ner) with named entity tags for 40 languages. I include separate configs for each language as well as a `combined` config which lumps them all together.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/641\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/640","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/640\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/640\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/640\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/640","id":704311758,"node_id":"MDExOlB1bGxSZXF1ZXN0NDg5MjYwNTc1","number":640,"title":"Make shuffle compatible with temp_seed","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1600429138000,"updated_at":1600429671000,"closed_at":1600429670000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/640","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/640","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/640.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/640.patch"},"body":"This code used to return different dataset at each run\r\n```python\r\nimport dataset as ds\r\n\r\ndataset = ...\r\n\r\nwith ds.temp_seed(42):\r\n shuffled = dataset.shuffle()\r\n```\r\n\r\nNow it returns the same one since the seed is set","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/640\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/639","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/639\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/639\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/639\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/639","id":704217963,"node_id":"MDExOlB1bGxSZXF1ZXN0NDg5MTgxOTY3","number":639,"title":"Update glue QQP checksum","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1600420095000,"updated_at":1600429028000,"closed_at":1600429027000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/639","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/639","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/639.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/639.patch"},"body":"Fix #638 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/639\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/638","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/638\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/638\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/638\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/638","id":704146956,"node_id":"MDU6SXNzdWU3MDQxNDY5NTY=","number":638,"title":"GLUE\/QQP dataset: 
NonMatchingChecksumError","user":{"login":"richarddwang","id":17963619,"node_id":"MDQ6VXNlcjE3OTYzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17963619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/richarddwang","html_url":"https:\/\/github.com\/richarddwang","followers_url":"https:\/\/api.github.com\/users\/richarddwang\/followers","following_url":"https:\/\/api.github.com\/users\/richarddwang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/richarddwang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/richarddwang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/richarddwang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/richarddwang\/orgs","repos_url":"https:\/\/api.github.com\/users\/richarddwang\/repos","events_url":"https:\/\/api.github.com\/users\/richarddwang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/richarddwang\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Sure I'll take a look"],"created_at":1600412950000,"updated_at":1600429027000,"closed_at":1600429027000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi @lhoestq , I know you are busy and there are also other important issues. But if this is easy to be fixed, I am shamelessly wondering if you can give me some help , so I can evaluate my models and restart with my developing cycle asap. \ud83d\ude1a\r\n\r\ndatasets version: editable install of master at 9\/17\r\n\r\n`datasets.load_dataset('glue','qqp', cache_dir='.\/datasets')`\r\n\r\n```\r\nDownloading and preparing dataset glue\/qqp (download: 57.73 MiB, generated: 107.02 MiB, post-processed: Unknown size, total: 164.75 MiB) to .\/datasets\/glue\/qqp\/1.0.0\/7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4...\r\n---------------------------------------------------------------------------\r\nNonMatchingChecksumError Traceback (most recent call last)\r\n in \r\n----> 1 datasets.load_dataset('glue','qqp', cache_dir='.\/datasets')\r\n\r\n~\/datasets\/src\/datasets\/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)\r\n 609 download_config=download_config,\r\n 610 download_mode=download_mode,\r\n--> 611 ignore_verifications=ignore_verifications,\r\n 612 )\r\n 613 \r\n\r\n~\/datasets\/src\/datasets\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 467 if not downloaded_from_gcs:\r\n 468 self._download_and_prepare(\r\n--> 469 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 470 )\r\n 471 # Sync info\r\n\r\n~\/datasets\/src\/datasets\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 527 if verify_infos:\r\n 528 verify_checksums(\r\n--> 529 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), \"dataset source files\"\r\n 530 )\r\n 531 \r\n\r\n~\/datasets\/src\/datasets\/utils\/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)\r\n 37 if len(bad_urls) > 0:\r\n 38 error_msg = \"Checksums didn't match\" + for_verification_name + \":\\n\"\r\n---> 39 raise 
NonMatchingChecksumError(error_msg + str(bad_urls))\r\n 40 logger.info(\"All the checksums matched successfully\" + for_verification_name)\r\n 41 \r\n\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https:\/\/dl.fbaipublicfiles.com\/glue\/data\/QQP-clean.zip']\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/638\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/637","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/637\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/637\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/637\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/637","id":703539909,"node_id":"MDExOlB1bGxSZXF1ZXN0NDg4NjMwNzk4","number":637,"title":"Add MATINF","user":{"login":"JetRunner","id":22514219,"node_id":"MDQ6VXNlcjIyNTE0MjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22514219?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JetRunner","html_url":"https:\/\/github.com\/JetRunner","followers_url":"https:\/\/api.github.com\/users\/JetRunner\/followers","following_url":"https:\/\/api.github.com\/users\/JetRunner\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JetRunner\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JetRunner\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JetRunner\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JetRunner\/orgs","repos_url":"https:\/\/api.github.com\/users\/JetRunner\/repos","events_url":"https:\/\/api.github.com\/users\/JetRunner\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JetRunner\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1600345493000,"updated_at":1600348998000,"closed_at":1600348997000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/637","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/637","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/637.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/637.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/637\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/636","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/636\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/636\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/636\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/636","id":702883989,"node_id":"MDExOlB1bGxSZXF1ZXN0NDg4MDg3OTA5","number":636,"title":"Consistent ner 
features","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1600271785000,"updated_at":1600336379000,"closed_at":1600336378000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/636","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/636","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/636.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/636.patch"},"body":"As discussed in #613 , this PR aims at making NER feature names consistent across datasets.\r\n\r\nI changed the feature names of LinCE and XTREME\/PAN-X","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/636\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/635","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/635\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/635\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/635\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/635","id":702822439,"node_id":"MDExOlB1bGxSZXF1ZXN0NDg4MDM2OTE5","number":635,"title":"Loglevel","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I think it's ready now @stas00, did you want to add 
something else ?\r\nThis PR includes your changes but with the level set to warning","LGTM, thank you, @lhoestq "],"created_at":1600267073000,"updated_at":1600336339000,"closed_at":1600336338000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/635","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/635","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/635.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/635.patch"},"body":"Continuation of #618 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/635\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/634","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/634\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/634\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/634\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/634","id":702676041,"node_id":"MDExOlB1bGxSZXF1ZXN0NDg3OTEzOTk4","number":634,"title":"Add ConLL-2000 dataset","user":{"login":"vblagoje","id":458335,"node_id":"MDQ6VXNlcjQ1ODMzNQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/458335?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vblagoje","html_url":"https:\/\/github.com\/vblagoje","followers_url":"https:\/\/api.github.com\/users\/vblagoje\/followers","following_url":"https:\/\/api.github.com\/users\/vblagoje\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vblagoje\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vblagoje\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vblagoje\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vblagoje\/orgs","repos_url":"https:\/\/api.github.com\/users\/vblagoje\/repos","events_url":"https:\/\/api.github.com\/users\/vblagoje\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vblagoje\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1600254851000,"updated_at":1600339090000,"closed_at":1600339090000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/634","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/634","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/634.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/634.patch"},"body":"Adds ConLL-2000 dataset used for text chunking. 
See https:\/\/www.clips.uantwerpen.be\/conll2000\/chunking\/ for details and [motivation](https:\/\/github.com\/huggingface\/transformers\/pull\/7041#issuecomment-692710948) behind this PR","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/634\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/633","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/633\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/633\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/633\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/633","id":702440484,"node_id":"MDU6SXNzdWU3MDI0NDA0ODQ=","number":633,"title":"Load large text file for LM pre-training resulting in OOM","user":{"login":"leethu2012","id":29704017,"node_id":"MDQ6VXNlcjI5NzA0MDE3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29704017?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/leethu2012","html_url":"https:\/\/github.com\/leethu2012","followers_url":"https:\/\/api.github.com\/users\/leethu2012\/followers","following_url":"https:\/\/api.github.com\/users\/leethu2012\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/leethu2012\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/leethu2012\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/leethu2012\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/leethu2012\/orgs","repos_url":"https:\/\/api.github.com\/users\/leethu2012\/repos","events_url":"https:\/\/api.github.com\/users\/leethu2012\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/leethu2012\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Not sure what could cause that on the `datasets` side. Could this be a `Trainer` issue ? cc @julien-c @sgugger ?","There was a memory leak issue fixed recently in master. You should install from source and see if it fixes your problem.","@lhoestq @sgugger Thanks for your comments. I have install from source code as you told, but the problem is still there.\r\nTo reproduce the issue, just replace [these lines](https:\/\/github.com\/huggingface\/transformers\/blob\/master\/examples\/language-modeling\/run_language_modeling.py#L241-L258) with: \r\n(load_dataset and DataCollatorForDatasetsLanguageModeling as [above mentioned](https:\/\/github.com\/huggingface\/datasets\/issues\/633#issue-702440484))\r\n```python\r\n dataset = load_dataset(\"bookcorpus\")\r\n dataset = dataset.train_test_split(test_size=0.1)\r\n train_dataset = dataset['train']\r\n eval_dataset = dataset['test'] if training_args.do_eval else None\r\n\r\n data_collator = DataCollatorForDatasetsLanguageModeling(\r\n tokenizer=tokenizer,\r\n mlm=data_args.mlm,\r\n mlm_probability=data_args.mlm_probability,\r\n block_size=data_args.block_size\r\n )\r\n```\r\nand run by:\r\n```bash\r\npython run_language_modeling.py\r\n--output_dir=output \\\r\n--model_type=bert \\\r\n--model_name_or_path=bert-base-uncased \\\r\n--do_train \\\r\n--do_eval \\\r\n--mlm \r\n```","Same here. Pre-training on wikitext-103 to do some test. At the end of the training it takes 32GB of RAM + ~30GB of SWAP. 
I installed dataset==1.1.0, not built from source. I will try uninstalling and building from source when it finish.","This seems to be on the `transformers` library side.\r\n\r\nIf you have more informations (pip env) or even better, a colab reproducing the error we can investigate.","It seems like it's solved with freshed versions of transformers. I have tried to replicate the error doing a fresh pip install transformers & datasets on colab and the error doesn't continue. On colab it keeps stable on 5GB! (Y)\r\n\r\nEdit: **Thanks for your great work**. Have a good day.","@gaceladri witch version transformers and datasets are you using now? I want to try again. Thanks.","transformers==3.3.1\r\ndatasets==1.1.0\r\ntokenizers==0.8.1rc2\r\n","doing some modifications to mobilebert\r\nhttps:\/\/colab.research.google.com\/drive\/1ba09ZOpyHGAOQLcsxiQAHRXl10qnMU5o?usp=sharing ","It does not happen to me anymore. Can we close? @leethu2012 ","It's happening to me again. After 4 hours of pre-training, my ram memory gets full and the kernel dies. I am using the last transformers version as today. 4.4.0 and the last version of datasets 1.2.1, both installed from master. The memory consumption keeps increasing.","It looks like it is something from pytorch\/python itself :face_with_head_bandage: https:\/\/github.com\/pytorch\/pytorch\/issues\/13246 ","Thanks for the investigation @gaceladri \r\n\r\nApparently this happens when `num_workers>0` and has to do with objects being copied-on-write.\r\nDid you try setting num_workers to 0 @gaceladri ?\r\nIf the issue doesn't happen with `num_workers=0` then this would confirm that it's indeed related to this python\/pytorch issue.\r\n\r\nSince a `Dataset` object is a wrapper of a pyarrow Table, we should investigate if the data being copied comes from the Table itself or from metadata in the `Dataset` object. If it comes from the metadata in the `Dataset` object, we should be able to implement a workaround. But if it comes from the Table, we'll need to see with the pyarrow team what we can do... 
","@lhoestq I have tried and it keeps increasing also with `dataloader_num_workers=0`","Hmmm so this might come from another issue...\r\nSince it doesn't seem to be related to multiprocessing it should be easier to investigate though.\r\nDo you have some ideas @gaceladri ?","@lhoestq I looked quickly to a previously spoted bug in my env wandb \/sdk\/interface\/interface.py, because sometimes when I load the dataset I got a multiprocessing error at line 510 in wandb...interface.py\r\n\r\nThis bug is reported here https:\/\/github.com\/huggingface\/datasets\/issues\/847\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nAssertionError Traceback (most recent call last)\r\n<timed eval> in <module>\r\n\r\n~\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/transformers\/trainer.py in train(self, model_path, trial)\r\n 877 print(len(epoch_iterator))\r\n 878 \r\n--> 879 for step, inputs in enumerate(epoch_iterator):\r\n 880 \r\n 881 start_step = time.time()\r\n\r\n~\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/torch\/utils\/data\/dataloader.py in __next__(self)\r\n 433 if self._sampler_iter is None:\r\n 434 self._reset()\r\n--> 435 data = self._next_data()\r\n 436 self._num_yielded += 1\r\n 437 if self._dataset_kind == _DatasetKind.Iterable and \\\r\n\r\n~\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/torch\/utils\/data\/dataloader.py in _next_data(self)\r\n 1083 else:\r\n 1084 del self._task_info[idx]\r\n-> 1085 return self._process_data(data)\r\n 1086 \r\n 1087 def _try_put_index(self):\r\n\r\n~\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/torch\/utils\/data\/dataloader.py in _process_data(self, data)\r\n 1109 self._try_put_index()\r\n 1110 if isinstance(data, ExceptionWrapper):\r\n-> 1111 data.reraise()\r\n 1112 return data\r\n 1113 \r\n\r\n~\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/torch\/_utils.py in reraise(self)\r\n 426 # have message field\r\n 427 raise self.exc_type(message=msg)\r\n--> 428 raise self.exc_type(msg)\r\n 429 \r\n 430 \r\n\r\nAssertionError: Caught AssertionError in DataLoader worker process 0.\r\nOriginal Traceback (most recent call last):\r\n File \"\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/torch\/utils\/data\/_utils\/worker.py\", line 198, in _worker_loop\r\n data = fetcher.fetch(index)\r\n File \"\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/torch\/utils\/data\/_utils\/fetch.py\", line 44, in fetch\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/torch\/utils\/data\/_utils\/fetch.py\", line 44, in <listcomp>\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/datasets\/arrow_dataset.py\", line 1083, in __getitem__\r\n format_kwargs=self._format_kwargs,\r\n File \"\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/datasets\/arrow_dataset.py\", line 1070, in _getitem\r\n format_kwargs=format_kwargs,\r\n File \"\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/datasets\/arrow_dataset.py\", line 886, in _convert_outputs\r\n v = map_nested(command, v, **map_nested_kwargs)\r\n File \"\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/datasets\/utils\/py_utils.py\", line 216, in map_nested\r\n return function(data_struct)\r\n File \"\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/datasets\/arrow_dataset.py\", line 
847, in command\r\n return torch.tensor(x, **format_kwargs)\r\n File \"\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/warnings.py\", line 101, in _showwarnmsg\r\n _showwarnmsg_impl(msg)\r\n File \"\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/warnings.py\", line 30, in _showwarnmsg_impl\r\n file.write(text)\r\n File \"\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/wandb\/sdk\/lib\/redirect.py\", line 100, in new_write\r\n cb(name, data)\r\n File \"\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/wandb\/sdk\/wandb_run.py\", line 729, in _console_callback\r\n self._backend.interface.publish_output(name, data)\r\n File \"\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/wandb\/sdk\/interface\/interface.py\", line 186, in publish_output\r\n self._publish_output(o)\r\n File \"\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/wandb\/sdk\/interface\/interface.py\", line 191, in _publish_output\r\n self._publish(rec)\r\n File \"\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/site-packages\/wandb\/sdk\/interface\/interface.py\", line 510, in _publish\r\n if self._process and not self._process.is_alive():\r\n File \"\/home\/ad\/anaconda3\/envs\/tfm\/lib\/python3.6\/multiprocessing\/process.py\", line 134, in is_alive\r\n assert self._parent_pid == os.getpid(), 'can only test a child process'\r\nAssertionError: can only test a child process\r\n```\r\n\r\nMy workaround was to just comment those lines without looking to much into consecuences:\r\n\r\n```\r\ndef _publish(self, record: pb.Record, local: bool = None) -> None:\r\n #if self._process and not self._process.is_alive():\r\n # raise Exception(\"The wandb backend process has shutdown\")\r\n```\r\n\r\nIt worked so far... I need to try running without wandb and see if it could be causing something wrong with multiprocessing. I am going to try to launch the training setting wandb to false and I will let you know again.","@lhoestq But despite this, I got lost into the [class Dataset()](https:\/\/huggingface.co\/docs\/datasets\/_modules\/datasets\/arrow_dataset.html#Dataset) reading the pyarrow files.\r\n\r\nEdit: but you should be rigth, that it does not have to be related to multiprocessing since it keeps happening when `num_workers=0` ","Or maybe wandb uses multiprocessing ? One process for wandb logging and one for actual training ? If this is the case then even setting `num_workers=0` would cause the process to be forked for wandb and therefore cause the memory issue.","@lhoestq could be, but if we set wandb to false this should not happen. I am going to try.","@lhoestq It keeps happening. I have uninstalled wandb from my env, setted `%env WANDB_DISABLED=true` on my notebook, and commented this func:\r\n\r\n```\r\ndef get_available_reporting_integrations():\r\n integrations = []\r\n if is_azureml_available():\r\n integrations.append(\"azure_ml\")\r\n if is_comet_available():\r\n integrations.append(\"comet_ml\")\r\n if is_mlflow_available():\r\n integrations.append(\"mlflow\")\r\n if is_tensorboard_available():\r\n integrations.append(\"tensorboard\")\r\n # if is_wandb_available():\r\n # integrations.append(\"wandb\")\r\n return integrations\r\n```\r\nAs a fast test and it keeps increasing the ram memory. Wandb could not be the blameworthy here.","Thanks for checking @gaceladri . 
Let's investigate the single process setting then.\r\nIf you have some sort of colab notebook with a minimal code example that shows this behavior feel free to share it @gaceladri so that we can play around with it to find what causes this. Otherwise I'll probably try to reproduce on my side at one point","@lhoestq sure. Here you have https:\/\/colab.research.google.com\/drive\/1ba09ZOpyHGAOQLcsxiQAHRXl10qnMU5o?usp=sharing let me know if the link works and it reproduces the issue. To me, it reproduces the issue, since if you start the training the ram memory keeps increasing.\r\n\r\nLet me know. Thanks!","Could the bug be comming from tokenizers?\r\n\r\nI got this warning at the terminal from my jupyter notebook: \r\n```\r\nhuggingface\/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\r\nTo disable this warning, you can either:\r\n\t- Avoid using `tokenizers` before the fork if possible\r\n\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\r\n```","I've never experienced memory issues with tokenizers so I don't know\r\nCc @n1t0 are you aware of any issue that would cause memory to keep increasing when the tokenizer is used in the Data Collator for language modeling ?","@lhoestq Thanks for pointing to n1t0, just to clarify. That warning was doing fine-tuning, without collator:\r\n```\r\n\r\nfrom datasets import load_dataset, load_metric\r\nimport numpy as np\r\n\r\nGLUE_TASKS = [\r\n \"cola\",\r\n \"mnli\",\r\n \"mnli-mm\",\r\n \"mrpc\",\r\n \"qnli\",\r\n \"qqp\",\r\n \"rte\",\r\n \"sst2\",\r\n \"stsb\",\r\n \"wnli\",\r\n]\r\ntask = \"mnli\"\r\nactual_task = \"mnli\" if task == \"mnli-mm\" else task\r\ndataset = load_dataset(\"glue\", actual_task)\r\nmetric = load_metric(\"glue\", actual_task)\r\nbatch_size = 16\r\nattention_type = \"linear\"\r\n\r\nfrom transformers.models.mobilebert_mod import (\r\n MobileBertForSequenceClassification,\r\n MobileBertTokenizerFast,\r\n)\r\nfrom transformers.models.mobilebert_mod.configuration_mobilebert import (\r\n MobileBertConfigMod,\r\n)\r\nfrom transformers import TrainingArguments, Trainer\r\n\r\nnum_labels = 3 if task.startswith(\"mnli\") else 1 if task == \"stsb\" else 2\r\ntokenizer = MobileBertTokenizerFast.from_pretrained(\r\n \"\/media\/ad\/00b5422b-9d54-4449-8b5d-08eab5cdac8c\/training_trfm\/big_linear_layerdrop_shared\/checkpoint-23000\/\",\r\n max_len=512,\r\n)\r\nmodel = MobileBertForSequenceClassification.from_pretrained(\r\n \"\/media\/ad\/00b5422b-9d54-4449-8b5d-08eab5cdac8c\/training_trfm\/big_linear_layerdrop_shared\/checkpoint-23000\/\",\r\n num_labels=num_labels,\r\n)\r\nprint(model.num_parameters())\r\n\r\ntask_to_keys = {\r\n \"cola\": (\"sentence\", None),\r\n \"mnli\": (\"premise\", \"hypothesis\"),\r\n \"mnli-mm\": (\"premise\", \"hypothesis\"),\r\n \"mrpc\": (\"sentence1\", \"sentence2\"),\r\n \"qnli\": (\"question\", \"sentence\"),\r\n \"qqp\": (\"question1\", \"question2\"),\r\n \"rte\": (\"sentence1\", \"sentence2\"),\r\n \"sst2\": (\"sentence\", None),\r\n \"stsb\": (\"sentence1\", \"sentence2\"),\r\n \"wnli\": (\"sentence1\", \"sentence2\"),\r\n}\r\n\r\nsentence1_key, sentence2_key = task_to_keys[task]\r\nif sentence2_key is None:\r\n print(f\"Sentence: {dataset['train'][0][sentence1_key]}\")\r\nelse:\r\n print(f\"Sentence 1: {dataset['train'][0][sentence1_key]}\")\r\n print(f\"Sentence 2: {dataset['train'][0][sentence2_key]}\")\r\n\r\n\r\ndef preprocess_function(examples):\r\n if sentence2_key is None:\r\n 
return tokenizer(examples[sentence1_key], truncation=True)\r\n return tokenizer(examples[sentence1_key], examples[sentence2_key], truncation=True)\r\n\r\n\r\nencoded_dataset = dataset.map(preprocess_function, batched=True)\r\nmetric_name = (\r\n \"pearson\"\r\n if task == \"stsb\"\r\n else \"matthews_correlation\"\r\n if task == \"cola\"\r\n else \"accuracy\"\r\n)\r\n\r\nargs = TrainingArguments(\r\n f\"test-glue\/{task}_{attention_type}\",\r\n evaluation_strategy=\"steps\",\r\n learning_rate=1e-5,\r\n per_device_train_batch_size=batch_size,\r\n per_device_eval_batch_size=batch_size,\r\n logging_steps=200,\r\n num_train_epochs=5,\r\n gradient_accumulation_steps=1,\r\n warmup_steps=10000,\r\n fp16=True,\r\n dataloader_num_workers=10,\r\n weight_decay=0.1,\r\n load_best_model_at_end=True,\r\n metric_for_best_model=metric_name,\r\n)\r\n\r\n\r\ndef compute_metrics(eval_pred):\r\n predictions, labels = eval_pred\r\n if task != \"stsb\":\r\n predictions = np.argmax(predictions, axis=1)\r\n else:\r\n predictions = predictions[:, 0]\r\n return metric.compute(predictions=predictions, references=labels)\r\n\r\n\r\nvalidation_key = (\r\n \"validation_mismatched\"\r\n if task == \"mnli-mm\"\r\n else \"validation_matched\"\r\n if task == \"mnli\"\r\n else \"validation\"\r\n)\r\n\r\ntrainer = Trainer(\r\n model,\r\n args,\r\n train_dataset=encoded_dataset[\"train\"],\r\n eval_dataset=encoded_dataset[validation_key],\r\n tokenizer=tokenizer,\r\n compute_metrics=compute_metrics,\r\n)\r\n\r\ntrainer.train()\r\n```\r\n\r\nNow, I have come back to pre-training. The changes that I think I have done are: not formatting the dataset to torch: ~~`big_dataset.set_format(type='torch', columns=[\"text\", \"input_ids\", \"attention_mask\", \"token_type_ids\"])`~~ so maybe some column is dropped and not freezed in memory and now I have not setted any validation dataset in the trainer. 
\r\n\r\nMy validation dataset before:\r\n```\r\nbook_corpus_eval = load_dataset(\r\n \"bookcorpus\",\r\n \"plain_text\",\r\n cache_dir=\"\/home\/ad\/Desktop\/bookcorpus\",\r\n split=\"train[98:99%]\",\r\n)\r\nbook_corpus_eval = book_corpus_eval.map(encode, batched=True)\r\nbook_corpus_eval.set_format(\r\n type=\"torch\", columns=[\"text\", \"input_ids\", \"attention_mask\", \"token_type_ids\"]\r\n)\r\n**book_corpus_eval = book_corpus_eval.select([i for i in range(1500)])**\r\n```\r\nMaybe _selecting_ or indexing the dataset before feeding it to the trainer, do something strange.\r\n\r\nMy trainer now:\r\n```\r\n\r\nbig_dataset = load_from_disk(\"\/home\/ad\/Desktop\/35percent_data.arrow\/\")\r\n\r\nfrom transformers import DataCollatorForWholeWordMask\r\n\r\ndata_collator = DataCollatorForWholeWordMask(\r\n tokenizer=tokenizer, mlm=True, mlm_probability=0.15)\r\n\r\nfrom transformers import Trainer, TrainingArguments\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\".\/big_linear_layerdrop_shared_silu_secondtry\",\r\n overwrite_output_dir=True,\r\n per_device_train_batch_size=60,\r\n per_device_eval_batch_size=60,\r\n save_steps=500,\r\n save_total_limit=10,\r\n logging_first_step=True,\r\n logging_steps=100,\r\n# evaluation_strategy='steps',\r\n# eval_steps=250,\r\n gradient_accumulation_steps=8,\r\n fp16=True,\r\n dataloader_num_workers=10,\r\n warmup_steps=15000,\r\n learning_rate=6e-4,\r\n adam_epsilon=1e-6,\r\n adam_beta2=0.98,\r\n weight_decay=0.01,\r\n max_grad_norm=1.0,\r\n max_steps=500000, \r\n)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=big_dataset,\r\n# eval_dataset=book_corpus_eval,\r\n tokenizer=tokenizer)\r\n\r\nimport wandb\r\nwandb.login()\r\n\r\ntrainer.train()\r\n```\r\n\r\nAnd surprisingly, the ram now keeps going up and down. The training is up now for 12h without collapse the ram. I don't know what could cause the leakage. :mag: \r\n\r\nEdit: I didn't see the swap memory, that keeps increasing. So the problem persist. ","Thanks for sharing your results.\r\nSo you still had the issue for fine-tuning ?\r\nAnd the issue still appears with a bare-bone dataset from an arrow file...","Yes, on both cases. Fine-tuning a pre-trained model and pre-training from scratch with a local arrow file already pre-processed."],"created_at":1600230795000,"updated_at":1613476921000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. 
My script is almost like this:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n@dataclass\r\nclass DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling):\r\n \"\"\"\r\n Data collator used for language modeling based on DataCollatorForLazyLanguageModeling\r\n - collates batches of tensors, honoring their tokenizer's pad_token\r\n - preprocesses batches for masked language modeling\r\n \"\"\"\r\n\r\n block_size: int = 512\r\n\r\n def __call__(self, examples: List[dict]) -> Dict[str, torch.Tensor]:\r\n examples = [example['text'] for example in examples]\r\n batch, attention_mask = self._tensorize_batch(examples)\r\n if self.mlm:\r\n inputs, labels = self.mask_tokens(batch)\r\n return {\"input_ids\": inputs, \"labels\": labels}\r\n else:\r\n labels = batch.clone().detach()\r\n if self.tokenizer.pad_token_id is not None:\r\n labels[labels == self.tokenizer.pad_token_id] = -100\r\n return {\"input_ids\": batch, \"labels\": labels}\r\n\r\n def _tensorize_batch(self, examples: List[str]) -> Tuple[torch.Tensor, torch.Tensor]:\r\n\r\n if self.tokenizer._pad_token is None:\r\n raise ValueError(\r\n \"You are attempting to pad samples but the tokenizer you are using\"\r\n f\" ({self.tokenizer.__class__.__name__}) does not have one.\"\r\n )\r\n\r\n tensor_examples = self.tokenizer.batch_encode_plus(\r\n [ex for ex in examples if ex],\r\n max_length=self.block_size,\r\n return_tensors=\"pt\",\r\n pad_to_max_length=True,\r\n return_attention_mask=True,\r\n truncation=True,\r\n )\r\n\r\n input_ids, attention_mask = tensor_examples[\"input_ids\"], tensor_examples[\"attention_mask\"]\r\n return input_ids, attention_mask\r\n\r\ndataset = load_dataset('text', data_files='train.txt',cache_dir=\".\/\", , split='train')\r\ndata_collator = DataCollatorForDatasetsLanguageModeling(tokenizer=tokenizer, mlm=True, \r\n mlm_probability=0.15, block_size=tokenizer.max_len)\r\ntrainer = Trainer(model=model, args=args, data_collator=data_collator,\r\n train_dataset=train_dataset, prediction_loss_only=True, )\r\ntrainer.train(model_path=model_path)\r\n```\r\nThis train.txt is about 1.1GB and has 90k lines where each line is a sequence of 4k words. \r\nDuring training, the memory usage increased fast as the following graph and resulted in OOM before the finish of training.\r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/29704017\/93292112-5576b280-f817-11ea-8da2-b2db9bf35665.png)\r\n\r\nCould you please give me any suggestions on why this happened and how to fix it?\r\nThanks. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/633\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/632","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/632\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/632\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/632\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/632","id":702358124,"node_id":"MDExOlB1bGxSZXF1ZXN0NDg3NjQ5OTQ2","number":632,"title":"Fix typos in the loading datasets docs","user":{"login":"mariosasko","id":47462742,"node_id":"MDQ6VXNlcjQ3NDYyNzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47462742?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariosasko","html_url":"https:\/\/github.com\/mariosasko","followers_url":"https:\/\/api.github.com\/users\/mariosasko\/followers","following_url":"https:\/\/api.github.com\/users\/mariosasko\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariosasko\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariosasko\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariosasko\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariosasko\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariosasko\/repos","events_url":"https:\/\/api.github.com\/users\/mariosasko\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariosasko\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["thanks!"],"created_at":1600216061000,"updated_at":1600705871000,"closed_at":1600239164000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/632","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/632","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/632.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/632.patch"},"body":"This PR fixes two typos in the loading datasets docs, one of them being a broken link to the `load_dataset` function.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/632\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/631","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/631\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/631\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/631\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/631","id":701711255,"node_id":"MDExOlB1bGxSZXF1ZXN0NDg3MTE3OTA0","number":631,"title":"Fix text 
delimiter","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Which OS are you using ?@abhi1nandy2","> Which OS are you using ?\r\n\r\nPRETTY_NAME=\"Debian GNU\/Linux 9 (stretch)\"\r\nNAME=\"Debian GNU\/Linux\"\r\nVERSION_ID=\"9\"\r\nVERSION=\"9 (stretch)\"\r\nVERSION_CODENAME=stretch\r\nID=debian\r\nHOME_URL=\"https:\/\/www.debian.org\/\"\r\nSUPPORT_URL=\"https:\/\/www.debian.org\/support\"\r\nBUG_REPORT_URL=\"https:\/\/bugs.debian.org\/\"","Do you mind sharing the data you used (or part of it), so I can try to reproduce ?\r\nOr at least some info about the text file you're using ? (size, n of lines, encoding)","Lot of data, difficult to share. There are 46 shards, each having about 256000 lines. using `file` command gives this - `ASCII text, with very long lines`.","Ok I see, no problem :) \r\nI'll see what I can do\r\n\r\nCould you just test with one single dummy text file with a few lines to see if you're having the issue ?\r\nAlso which version of `datasets` do you have ?"],"created_at":1600157322000,"updated_at":1600786986000,"closed_at":1600158385000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/631","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/631","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/631.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/631.patch"},"body":"I changed the delimiter in the `text` dataset script.\r\nIt should fix the `pyarrow.lib.ArrowInvalid: CSV parse error` from #622 \r\n\r\nI changed the delimiter to an unused ascii character that is not present in text files : `\\b`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/631\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/630","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/630\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/630\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/630\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/630","id":701636350,"node_id":"MDU6SXNzdWU3MDE2MzYzNTA=","number":630,"title":"Text dataset not working with large 
files","user":{"login":"ksjae","id":17930170,"node_id":"MDQ6VXNlcjE3OTMwMTcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17930170?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ksjae","html_url":"https:\/\/github.com\/ksjae","followers_url":"https:\/\/api.github.com\/users\/ksjae\/followers","following_url":"https:\/\/api.github.com\/users\/ksjae\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ksjae\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ksjae\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ksjae\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ksjae\/orgs","repos_url":"https:\/\/api.github.com\/users\/ksjae\/repos","events_url":"https:\/\/api.github.com\/users\/ksjae\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ksjae\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Seems like it works when setting ```block_size=2100000000``` or something arbitrarily large though.","Can you give us some stats on the data files you use as inputs?","Basically ~600MB txt files(UTF-8) * 59. \r\ncontents like ```\uc548\ub155\ud558\uc138\uc694, \uc774\uac83\uc740 \uc608\uc81c\ub85c \ud55c\ubc88 \ub9d0\ud574\ubcf4\ub294 \ud14d\uc2a4\ud2b8\uc785\ub2c8\ub2e4. \uadf8\ub0e5 \uc774\ub807\ub2e4\uace0\uc694.<|endoftext|>\\n```\r\n\r\nAlso, it gets stuck for a loooong time at ```Testing the mapped function outputs```, for more than 12 hours(currently ongoing)","It gets stuck while doing `.map()` ? Are you using multiprocessing ?\r\nIf you could provide a code snippet it could be very useful","From transformers\/examples\/language-modeling\/run-language-modeling.py :\r\n```\r\ndef get_dataset(\r\n args: DataTrainingArguments,\r\n tokenizer: PreTrainedTokenizer,\r\n evaluate: bool = False,\r\n cache_dir: Optional[str] = None,\r\n):\r\n file_path = args.eval_data_file if evaluate else args.train_data_file\r\n if True:\r\n dataset = load_dataset(\"text\", data_files=glob.glob(file_path), split='train', use_threads=True, \r\n ignore_verifications=True, save_infos=True, block_size=104857600)\r\n dataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True,\r\n truncation=True, max_length=args.block_size), batched=True)\r\n dataset.set_format(type='torch', columns=['input_ids'])\r\n return dataset\r\n if args.line_by_line:\r\n return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size)\r\n else:\r\n return TextDataset(\r\n tokenizer=tokenizer,\r\n file_path=file_path,\r\n block_size=args.block_size,\r\n overwrite_cache=args.overwrite_cache,\r\n cache_dir=cache_dir,\r\n )\r\n```\r\n\r\nNo, I'm not using multiprocessing.","I am not able to reproduce on my side :\/\r\n\r\nCould you send the version of `datasets` and `pyarrow` you're using ?\r\nCould you try to update the lib and try again ?\r\nOr do you think you could try to reproduce it on google colab ?","Huh, weird. It's fixed on my side too.\r\nBut now ```Caching processed dataset``` is taking forever - how can I disable it? Any flags?","Right after `Caching processed dataset`, your function is applied to the dataset and there's a progress bar that shows how much time is left. How much time does it take for you ?\r\n\r\nAlso caching isn't supposed to slow down your processing. 
But if you still want to disable it you can do `.map(..., load_from_cache_file=False)`","Ah, it\u2019s much faster now(Takes around 15~20min). \r\nBTW, any way to set default tensor output as plain tensors with distributed training? The ragged tensors are incompatible with tpustrategy :(","> Ah, it\u2019s much faster now(Takes around 15~20min).\r\n\r\nGlad to see that it's faster now. What did you change exactly ?\r\n\r\n> BTW, any way to set default tensor output as plain tensors with distributed training? The ragged tensors are incompatible with tpustrategy :(\r\n\r\nOh I didn't know about that. Feel free to open an issue to mention that.\r\nI guess what you can do for now is set the dataset format to numpy instead of tensorflow, and use a wrapper of the dataset that converts the numpy arrays to tf tensors.\r\n\r\n",">>> Glad to see that it's faster now. What did you change exactly ?\r\nI don't know, it just worked...? Sorry I couldn't be more helpful.\r\n\r\nSetting with numpy array is a great idea! Thanks."],"created_at":1600149756000,"updated_at":1601072503000,"closed_at":1601072503000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"```\r\nTraceback (most recent call last):\r\n File \"examples\/language-modeling\/run_language_modeling.py\", line 333, in <module>\r\n main()\r\n File \"examples\/language-modeling\/run_language_modeling.py\", line 262, in main\r\n get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None\r\n File \"examples\/language-modeling\/run_language_modeling.py\", line 144, in get_dataset\r\n dataset = load_dataset(\"text\", data_files=file_path, split='train+test')\r\n File \"\/home\/ksjae\/.local\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 611, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"\/home\/ksjae\/.local\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 469, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/home\/ksjae\/.local\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 546, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/home\/ksjae\/.local\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 888, in _prepare_split\r\n for key, table in utils.tqdm(generator, unit=\" tables\", leave=False, disable=not_verbose):\r\n File \"\/home\/ksjae\/.local\/lib\/python3.7\/site-packages\/tqdm\/std.py\", line 1129, in __iter__\r\n for obj in iterable:\r\n File \"\/home\/ksjae\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/text\/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7\/text.py\", line 104, in _generate_tables\r\n convert_options=self.config.convert_options,\r\n File \"pyarrow\/_csv.pyx\", line 714, in pyarrow._csv.read_csv\r\n File \"pyarrow\/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow\/error.pxi\", line 84, in pyarrow.lib.check_status\r\n```\r\n\r\n**pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)**\r\n\r\nIt gives the same message for both 200MB, 10GB .tx files but not for 700MB file.\r\nCan't upload due to size & copyright problem. 
sorry.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/630\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/629","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/629\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/629\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/629\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/629","id":701517550,"node_id":"MDU6SXNzdWU3MDE1MTc1NTA=","number":629,"title":"straddling object straddles two block boundaries","user":{"login":"bharaniabhishek123","id":17970177,"node_id":"MDQ6VXNlcjE3OTcwMTc3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17970177?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/bharaniabhishek123","html_url":"https:\/\/github.com\/bharaniabhishek123","followers_url":"https:\/\/api.github.com\/users\/bharaniabhishek123\/followers","following_url":"https:\/\/api.github.com\/users\/bharaniabhishek123\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/bharaniabhishek123\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/bharaniabhishek123\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/bharaniabhishek123\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/bharaniabhishek123\/orgs","repos_url":"https:\/\/api.github.com\/users\/bharaniabhishek123\/repos","events_url":"https:\/\/api.github.com\/users\/bharaniabhishek123\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/bharaniabhishek123\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["sorry it's an apache arrow issue."],"created_at":1600129846000,"updated_at":1600130177000,"closed_at":1600129937000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I am trying to read json data (it's an array with lots of dictionaries) and getting block boundaries issue as below : \r\n\r\nI tried calling read_json with readOptions but no luck .\r\n\r\n```\r\ntable = json.read_json(fn)\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"pyarrow\/_json.pyx\", line 246, in pyarrow._json.read_json\r\n File \"pyarrow\/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow\/error.pxi\", line 84, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/629\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/628","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/628\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/628\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/628\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/628","id":701496053,"node_id":"MDExOlB1bGxSZXF1ZXN0NDg2OTQyNzgx","number":628,"title":"Update docs links in the contribution guideline","user":{"login":"M-Salti","id":9285264,"node_id":"MDQ6VXNlcjkyODUyNjQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9285264?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/M-Salti","html_url":"https:\/\/github.com\/M-Salti","followers_url":"https:\/\/api.github.com\/users\/M-Salti\/followers","following_url":"https:\/\/api.github.com\/users\/M-Salti\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/M-Salti\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/M-Salti\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/M-Salti\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/M-Salti\/orgs","repos_url":"https:\/\/api.github.com\/users\/M-Salti\/repos","events_url":"https:\/\/api.github.com\/users\/M-Salti\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/M-Salti\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks!"],"created_at":1600126039000,"updated_at":1604351003000,"closed_at":1600150775000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/628","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/628","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/628.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/628.patch"},"body":"Fixed the `add a dataset` and `share a dataset` links in the contribution guideline to refer to the new docs website.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/628\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/627","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/627\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/627\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/627\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/627","id":701411661,"node_id":"MDExOlB1bGxSZXF1ZXN0NDg2ODcxMTg2","number":627,"title":"fix (#619) MLQA features 
names","user":{"login":"M-Salti","id":9285264,"node_id":"MDQ6VXNlcjkyODUyNjQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9285264?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/M-Salti","html_url":"https:\/\/github.com\/M-Salti","followers_url":"https:\/\/api.github.com\/users\/M-Salti\/followers","following_url":"https:\/\/api.github.com\/users\/M-Salti\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/M-Salti\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/M-Salti\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/M-Salti\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/M-Salti\/orgs","repos_url":"https:\/\/api.github.com\/users\/M-Salti\/repos","events_url":"https:\/\/api.github.com\/users\/M-Salti\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/M-Salti\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1600116119000,"updated_at":1604351072000,"closed_at":1600239251000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/627","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/627","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/627.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/627.patch"},"body":"Fixed the features names as suggested in (#619) in the `_generate_examples` and `_info` methods in the MLQA loading script and also changed the names in the `dataset_infos.json` file.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/627\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/626","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/626\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/626\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/626\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/626","id":701352605,"node_id":"MDExOlB1bGxSZXF1ZXN0NDg2ODIzMTY1","number":626,"title":"Update GLUE URLs (now hosted on 
FB)","user":{"login":"jeswan","id":57466294,"node_id":"MDQ6VXNlcjU3NDY2Mjk0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/57466294?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jeswan","html_url":"https:\/\/github.com\/jeswan","followers_url":"https:\/\/api.github.com\/users\/jeswan\/followers","following_url":"https:\/\/api.github.com\/users\/jeswan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jeswan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jeswan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jeswan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jeswan\/orgs","repos_url":"https:\/\/api.github.com\/users\/jeswan\/repos","events_url":"https:\/\/api.github.com\/users\/jeswan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jeswan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1600110339000,"updated_at":1600239198000,"closed_at":1600239198000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/626","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/626","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/626.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/626.patch"},"body":"NYU is switching dataset hosting from Google to FB. This PR closes https:\/\/github.com\/huggingface\/datasets\/issues\/608 and is necessary for https:\/\/github.com\/jiant-dev\/jiant\/issues\/161. This PR updates the data URLs based on changes made in https:\/\/github.com\/nyu-mll\/jiant\/pull\/1112.\r\n\r\nNote: rebased on huggingface\/datasets","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/626\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/625","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/625\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/625\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/625\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/625","id":701057799,"node_id":"MDU6SXNzdWU3MDEwNTc3OTk=","number":625,"title":"dtype of tensors should be 
preserved","user":{"login":"BramVanroy","id":2779410,"node_id":"MDQ6VXNlcjI3Nzk0MTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2779410?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/BramVanroy","html_url":"https:\/\/github.com\/BramVanroy","followers_url":"https:\/\/api.github.com\/users\/BramVanroy\/followers","following_url":"https:\/\/api.github.com\/users\/BramVanroy\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/BramVanroy\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/BramVanroy\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/BramVanroy\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/BramVanroy\/orgs","repos_url":"https:\/\/api.github.com\/users\/BramVanroy\/repos","events_url":"https:\/\/api.github.com\/users\/BramVanroy\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/BramVanroy\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Indeed we convert tensors to list to be able to write in arrow format. Because of this conversion we lose the dtype information. We should add the dtype detection when we do type inference. However it would require a bit of refactoring since currently the conversion happens before the type inference..\r\n\r\nAnd then for your information, when reading from arrow format we have to cast from arrow to numpy (which is fast since pyarrow has a numpy integration), and then to torch.\r\n\r\nHowever there's one thing that can help you: we make sure that the dtypes correspond to what is defined in `features`.\r\nTherefore what you can do is provide `features` in `.map(preprocess, feature=...)` to specify the output types.\r\n\r\nFor example in your case:\r\n```python\r\nfrom datasets import Features, Value, Sequence\r\n\r\nfeatures = Features({\r\n \"input_ids\": Sequence(Value(\"int32\")),\r\n \"sembedding\": Sequence(Value(\"float32\"))\r\n})\r\npreprocessed_dataset = dataset.map(preprocess, features=features)\r\n\r\npreprocessed_dataset.set_format(\"torch\", columns=[\"input_ids\", \"sembedding\"])\r\nprint(preprocessed_dataset[0][\"sembedding\"].dtype)\r\n# \"torch.float32\"\r\n```\r\n\r\nLet me know if it helps","If the arrow format is basically lists, why is the intermediate step to numpy necessary? I am a bit confused about that part.\r\n\r\nThanks for your suggestion. as I have currently implemented this, I cast to torch.Tensor in my collate_fn to save disk space (so I do not have to save padded tensors to max_len but can pad up to max batch len in collate_fn) at the cost of a bit slower processing. So for me this is not relevant anymore, but I am sure it is for others!","I'm glad you managed to figure something out :)\r\n\r\nCasting from arrow to numpy can be 100x faster than casting from arrow to list.\r\nThis is because arrow has an integration with numpy that allows it to instantiate numpy arrays with zero-copy from arrow.\r\nOn the other hand to create python lists it is slow since it has to recreate the list object by iterating through each element in python.","Ah that is interesting. I have no direct experience with arrow so I didn't know. ","I encountered a simliar issue: `datasets` converted my float numpy array to `torch.float64` tensors, while many pytorch operations require `torch.float32` inputs and it's very troublesome. 
\r\n\r\nI tried @lhoestq 's solution, but since it's mixed with the preprocess function, it's not very intuitive. \r\n\r\nI just want to share another possible simpler solution: directly cast the dtype of the processed dataset.\r\n\r\nNow I want to change the type of `labels` in `train_dataset` from float64 to float32, I can do this.\r\n\r\n```\r\nfrom datasets import Value, Sequence, Features\r\nfeats = train_dataset.features.copy()\r\nfeats['labels'].feature = Value(dtype='float32')\r\nfeats = Features(feats)\r\ntrain_dataset.cast_(feats)\r\n```\r\n","Reopening since @bhavitvyamalik started looking into it !\r\n\r\nAlso I'm posting here a function that could be helpful to support preserving the dtype of tensors.\r\n\r\nIt's used to build a pyarrow array out of a numpy array and:\r\n- it doesn't convert the numpy array to a python list\r\n- it keeps the precision of the numpy array for the pyarrow array\r\n- it works with multidimensional arrays (while `pa.array` can only take a 1D array as input)\r\n- it builds the pyarrow ListArray from offsets created on-the-fly and values that come from the flattened numpy array\r\n\r\n```python\r\nfrom functools import reduce\r\nfrom operator import mul\r\n\r\nimport numpy as np\r\nimport pyarrow as pa\r\n\r\ndef pa_ndarray(a):\r\n \"\"\"Build a PyArrow ListArray from a multidimensional NumPy array\"\"\"\r\n values = pa.array(a.flatten()) \r\n for i in range(a.ndim - 1): \r\n n_offsets = reduce(mul, a.shape[:a.ndim - i - 1], 1) \r\n step_offsets = a.shape[a.ndim - i - 1] \r\n offsets = pa.array(np.arange(n_offsets + 1) * step_offsets, type=pa.int32()) \r\n values = pa.ListArray.from_arrays(offsets, values) \r\n return values \r\n\r\nnarr = np.arange(42).reshape(7, 2, 3).astype(np.uint8)\r\nparr = pa_ndarray(narr)\r\nassert isinstance(parr, pa.Array)\r\nassert parr.type == pa.list_(pa.list_(pa.uint8()))\r\nassert narr.tolist() == parr.to_pylist()\r\n```\r\n\r\nThe only costly operation is the offsets computations. Since it doesn't iterate on the numpy array values this function is pretty fast.","@lhoestq Have you thought about this further?\r\n\r\nWe have a use case where we're attempting to load data containing numpy arrays using the `datasets` library.\r\n\r\nWhen using one of the \"standard\" methods (`[Value(...)]` or `Sequence()`) we see ~200 samples processed per second during the call to `_prepare_split`. This slowdown is caused by the vast number of calls to `encode_nested_example` (each sequence is converted to a list, and each element in the sequence...). \r\n\r\nUsing the `Feature` `ArrayND` improves this somewhat to ~500\/s as it now uses numpy's `tolist()` rather than iterating over each value in the array and converting them individually.\r\n\r\nHowever, it's still pretty slow and in theory it should be possible to avoid the `numpy -> python -> arrow` dance altogether. To demonstrate this, if you keep the `Feature` set to an `ArrayND` but instead return a `pa_ndarray(...)` in `_generate_examples` it skips the conversion (`return obj, False`) and hits ~11_000\/s. Two orders of magnitude speed up! The problem is this then fails later on when the `ArrowWriter` tries to write the examples to disk :-( \r\n\r\nIt would be nice to have first-class support for user-defined PyArrow objects. Is this a possibility? We have _large_ datasets where even an order of magnitude difference is important so settling on the middle ~500\/s is less than ideal! 
\r\n\r\nIs there a workaround for this or another method that should be used instead that gets near-to or equal performance to returning PyArrow arrays?","Note that manually generating the table using `pyarrow` achieves ~30_000\/s","Hi !\r\n\r\nIt would be awesome to achieve this speed for numpy arrays !\r\nFor now we have to use `encode_nested_example` to convert numpy arrays to python lists since pyarrow doesn't support multidimensional numpy arrays (only 1D).\r\n\r\nMaybe let's start a new PR from your PR @bhavitvyamalik (idk why we didn't answer your PR at that time, sorry about that).\r\nBasically the idea is to allow `TypedSequence` to support numpy arrays as you did, and remove the numpy->python casting in `_cast_to_python_objects`.\r\n\r\nThis is really important since we are starting to have a focus on other modalities than text as well (audio, images).\r\n\r\nThough until then @samgd, there is another feature that may interest you and that may give you the speed you want:\r\n\r\nIn a dataset script you can subclass either a GeneratorBasedBuilder (with the `_generate_examples ` method) or an ArrowBasedBuilder if you want. the ArrowBasedBuilder allows to yield arrow data by implementing the `_generate_tables` method (it's the same as `_generate_examples` except you must yield arrow tables). Since the data are already in arrow format, it doesn't call `encode_nested_example`. Let me know if that helps."],"created_at":1600087085000,"updated_at":1629189004000,"closed_at":1629189004000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https:\/\/discuss.pytorch.org\/t\/is-it-required-that-input-and-hidden-for-gru-have-the-same-dtype-float32\/96221)). \r\n\r\nAs a user I did not expect this bug. I have a `map` function that I call on the Dataset that looks like this:\r\n\r\n```python\r\ndef preprocess(sentences: List[str]):\r\n token_ids = [[vocab.to_index(t) for t in s.split()] for s in sentences]\r\n\r\n sembeddings = stransformer.encode(sentences)\r\n print(sembeddings.dtype)\r\n return {\"input_ids\": token_ids, \"sembedding\": sembeddings}\r\n```\r\n\r\nGiven a list of `sentences` (`List[str]`), it converts those into token_ids on the one hand (list of lists of ints; `List[List[int]]`) and into sentence embeddings on the other (Tensor of dtype `torch.float32`). That means that I actually set the column \"sembedding\" to a tensor that I as a user expect to be a float32.\r\n\r\nIt appears though that behind the scenes, this tensor is converted into a **list**. I did not find this documented anywhere but I might have missed it. From a user's perspective this is incredibly important though, because it means you cannot do any data_type or tensor casting yourself in a mapping function! Furthermore, this can lead to issues, as was my case. \r\n\r\nMy model expected float32 precision, which I thought `sembedding` was because that is what `stransformer.encode` outputs. 
But behind the scenes this tensor is first cast to a list, and when we then set its format, as below, this column is cast not to float32 but to double precision float64.\r\n\r\n```python\r\ndataset.set_format(type=\"torch\", columns=[\"input_ids\", \"sembedding\"])\r\n```\r\n\r\nThis happens because apparently there is an intermediate step of casting to a **numpy** array (?) **whose dtype creation\/deduction is different from torch dtypes** (see the snippet below). As you can see, this means that the dtype is not preserved: if I got it right, the dataset goes from torch.float32 -> list -> float64 (numpy) -> torch.float64. \r\n\r\n```python\r\nimport torch\r\nimport numpy as np\r\n\r\nl = [-0.03010837361216545, -0.035979013890028, -0.016949838027358055]\r\ntorch_tensor = torch.tensor(l)\r\nnp_array = np.array(l)\r\nnp_to_torch = torch.from_numpy(np_array)\r\n\r\nprint(torch_tensor.dtype)\r\n# torch.float32\r\nprint(np_array.dtype)\r\n# float64\r\nprint(np_to_torch.dtype)\r\n# torch.float64\r\n```\r\n\r\nThis might lead to unwanted behaviour. I understand that the whole library is probably built around casting from numpy to other frameworks, so this might be difficult to solve. Perhaps `set_format` should include a `dtypes` option where for each input column the user can specify the wanted precision.\r\n\r\nThe alternative is that the user needs to cast manually after loading data from the dataset but that does not seem user-friendly, makes the dataset less portable, and might use more space in memory as well as on disk than is actually needed.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/625\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/624","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/624\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/624\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/624\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/624","id":700541628,"node_id":"MDU6SXNzdWU3MDA1NDE2Mjg=","number":624,"title":"Add learningq dataset","user":{"login":"krrishdholakia","id":17561003,"node_id":"MDQ6VXNlcjE3NTYxMDAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17561003?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/krrishdholakia","html_url":"https:\/\/github.com\/krrishdholakia","followers_url":"https:\/\/api.github.com\/users\/krrishdholakia\/followers","following_url":"https:\/\/api.github.com\/users\/krrishdholakia\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/krrishdholakia\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/krrishdholakia\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/krrishdholakia\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/krrishdholakia\/orgs","repos_url":"https:\/\/api.github.com\/users\/krrishdholakia\/repos","events_url":"https:\/\/api.github.com\/users\/krrishdholakia\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/krrishdholakia\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset 
request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1599992427000,"updated_at":1600077002000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi, \r\n\r\nThank you again for this amazing repo. \r\n\r\nWould it be possible for y'all to add the LearningQ dataset - https:\/\/github.com\/AngusGLChen\/LearningQ ? \r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/624\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/623","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/623\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/623\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/623\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/623","id":700235308,"node_id":"MDU6SXNzdWU3MDAyMzUzMDg=","number":623,"title":"Custom feature types in `load_dataset` from CSV","user":{"login":"lvwerra","id":8264887,"node_id":"MDQ6VXNlcjgyNjQ4ODc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8264887?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lvwerra","html_url":"https:\/\/github.com\/lvwerra","followers_url":"https:\/\/api.github.com\/users\/lvwerra\/followers","following_url":"https:\/\/api.github.com\/users\/lvwerra\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lvwerra\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lvwerra\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lvwerra\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lvwerra\/orgs","repos_url":"https:\/\/api.github.com\/users\/lvwerra\/repos","events_url":"https:\/\/api.github.com\/users\/lvwerra\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lvwerra\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Currently `csv` doesn't support the `features` attribute (unlike `json`).\r\nWhat you can do for now is cast the features using the in-place transform `cast_`\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset('csv', data_files=file_dict, delimiter=';', column_names=['text', 'label'])\r\ndataset.cast_(emotion_features)\r\n```\r\n","Thanks for the clarification!","Hi @lhoestq we've tried out your suggestion but are now running into the following error:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-163-81ffd5ac18c9> in <module>\r\n----> 1 dataset.cast_(emotion_features)\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/dataset_dict.py in cast_(self, features)\r\n 125 self._check_values_type()\r\n 126 for dataset in self.values():\r\n--> 127 dataset.cast_(features=features)\r\n 128 \r\n 129 def 
remove_columns_(self, column_names: Union[str, List[str]]):\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/fingerprint.py in wrapper(*args, **kwargs)\r\n 161 # Call actual function\r\n 162 \r\n--> 163 out = func(self, *args, **kwargs)\r\n 164 \r\n 165 # Update fingerprint of in-place transforms + update in-place history of transforms\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/datasets\/arrow_dataset.py in cast_(self, features)\r\n 602 self._info.features = features\r\n 603 schema = pa.schema(features.type)\r\n--> 604 self._data = self._data.cast(schema)\r\n 605 \r\n 606 @fingerprint(inplace=True)\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/pyarrow\/table.pxi in pyarrow.lib.Table.cast()\r\n\r\nValueError: Target schema's field names are not matching the table's field names: ['text', 'label'], ['label', 'text']\r\n```\r\n\r\nLooking at the types in `emotion_features` we see that `label` and `text` appear to be swapped in the Arrow table:\r\n\r\n```\r\nemotion_features.type\r\nStructType(struct<label: int64, text: string>)\r\n```\r\n\r\nDid we define the `emotion_features` incorrectly? We just followed the instructions from the [docs](https:\/\/huggingface.co\/docs\/datasets\/features.html?highlight=features#dataset-features), but perhaps we misunderstood something \ud83d\ude2c \r\n\r\n","In general, I don't think there is any hard reason we don't allow to use `features` in the csv script, right @lhoestq?\r\n\r\nShould I add it?","> In general, I don't think there is any hard reason we don't allow to use `features` in the csv script, right @lhoestq?\r\n> \r\n> Should I add it?\r\n\r\nSure let's add it. Setting the convert options should do the job\r\n\r\n> Hi @lhoestq we've tried out your suggestion but are now running into the following error:\r\n> \r\n> ```\r\n> ---------------------------------------------------------------------------\r\n> ValueError Traceback (most recent call last)\r\n> <ipython-input-163-81ffd5ac18c9> in <module>\r\n> ----> 1 dataset.cast_(emotion_features)\r\n>\r\n> \/usr\/local\/lib\/python3.6\/dist-packages\/pyarrow\/table.pxi in pyarrow.lib.Table.cast()\r\n> \r\n> ValueError: Target schema's field names are not matching the table's field names: ['text', 'label'], ['label', 'text']\r\n> ```\r\n>\r\n> Did we define the `emotion_features` incorrectly? We just followed the instructions from the [docs](https:\/\/huggingface.co\/docs\/datasets\/features.html?highlight=features#dataset-features), but perhaps we misunderstood something \ud83d\ude2c\r\n\r\nThanks for reporting, that's a bug :) I'm fixing it right now","PR is open for the `ValueError: Target schema's field names are not matching the table's field names` error.\r\n\r\nI'm adding the features parameter to csv","Thanks a lot for the PR and quick fix @lhoestq!"],"created_at":1599916894000,"updated_at":1601495503000,"closed_at":1601455194000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`. \r\n\r\nI am working with the local files from the emotion dataset. 
To get the data you can use the following code:\r\n\r\n```Python\r\nfrom pathlib import Path\r\nimport wget\r\n\r\nEMOTION_PATH = Path(\".\/data\/emotion\")\r\nDOWNLOAD_URLS = [\r\n \"https:\/\/www.dropbox.com\/s\/1pzkadrvffbqw6o\/train.txt?dl=1\",\r\n \"https:\/\/www.dropbox.com\/s\/2mzialpsgf9k5l3\/val.txt?dl=1\",\r\n \"https:\/\/www.dropbox.com\/s\/ikkqxfdbdec3fuj\/test.txt?dl=1\",\r\n]\r\n\r\nif not Path.is_dir(EMOTION_PATH):\r\n Path.mkdir(EMOTION_PATH)\r\nfor url in DOWNLOAD_URLS:\r\n wget.download(url, str(EMOTION_PATH))\r\n```\r\n\r\nThe first five lines of the train set are:\r\n```\r\ni didnt feel humiliated;sadness\r\ni can go from feeling so hopeless to so damned hopeful just from being around someone who cares and is awake;sadness\r\nim grabbing a minute to post i feel greedy wrong;anger\r\ni am ever feeling nostalgic about the fireplace i will know that it is still on the property;love\r\ni am feeling grouchy;anger\r\n```\r\n\r\nHere the code to reproduce the issue:\r\n```Python\r\nfrom datasets import Features, Value, ClassLabel, load_dataset\r\n\r\nclass_names = [\"sadness\", \"joy\", \"love\", \"anger\", \"fear\", \"surprise\"]\r\nemotion_features = Features({'text': Value('string'), 'label': ClassLabel(names=class_names)})\r\nfile_dict = {'train': EMOTION_PATH\/'train.txt'}\r\n\r\ndataset = load_dataset('csv', data_files=file_dict, delimiter=';', column_names=['text', 'label'], features=emotion_features)\r\n```\r\n\r\n**Observed behaviour:**\r\n```Python\r\ndataset['train'].features\r\n```\r\n```Python\r\n{'text': Value(dtype='string', id=None),\r\n 'label': Value(dtype='string', id=None)}\r\n```\r\n**Expected behaviour:**\r\n```Python\r\ndataset['train'].features\r\n```\r\n```Python\r\n{'text': Value(dtype='string', id=None),\r\n 'label': ClassLabel(num_classes=6, names=['sadness', 'joy', 'love', 'anger', 'fear', 'surprise'], names_file=None, id=None)}\r\n```\r\n\r\n**Things I've tried:**\r\n- deleting the cache\r\n- trying other types such as `int64`\r\n\r\nAm I missing anything? 
Thanks for any pointer in the right direction.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/623\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/622","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/622\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/622\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/622\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/622","id":700225826,"node_id":"MDU6SXNzdWU3MDAyMjU4MjY=","number":622,"title":"load_dataset for text files not working","user":{"login":"BramVanroy","id":2779410,"node_id":"MDQ6VXNlcjI3Nzk0MTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2779410?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/BramVanroy","html_url":"https:\/\/github.com\/BramVanroy","followers_url":"https:\/\/api.github.com\/users\/BramVanroy\/followers","following_url":"https:\/\/api.github.com\/users\/BramVanroy\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/BramVanroy\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/BramVanroy\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/BramVanroy\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/BramVanroy\/orgs","repos_url":"https:\/\/api.github.com\/users\/BramVanroy\/repos","events_url":"https:\/\/api.github.com\/users\/BramVanroy\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/BramVanroy\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the 
library"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Can you give us more information on your os and pip environments (pip list)?","@thomwolf Sure. 
I'll try downgrading to 3.7 now even though Arrow say they support >=3.5.\r\n\r\nLinux (Ubuntu 18.04) - Python 3.8\r\n======================\r\nPackage - Version\r\n---------------------\r\ncertifi 2020.6.20\r\nchardet 3.0.4\r\nclick 7.1.2\r\ndatasets 1.0.1\r\ndill 0.3.2\r\nfasttext 0.9.2\r\nfilelock 3.0.12\r\nfuture 0.18.2\r\nidna 2.10\r\njoblib 0.16.0\r\nnltk 3.5\r\nnumpy 1.19.1\r\npackaging 20.4\r\npandas 1.1.2\r\npip 20.0.2\r\nprotobuf 3.13.0\r\npyarrow 1.0.1\r\npybind11 2.5.0\r\npyparsing 2.4.7\r\npython-dateutil 2.8.1\r\npytz 2020.1\r\nregex 2020.7.14\r\nrequests 2.24.0\r\nsacremoses 0.0.43\r\nscikit-learn 0.23.2\r\nscipy 1.5.2\r\nsentence-transformers 0.3.6\r\nsentencepiece 0.1.91\r\nsetuptools 46.1.3\r\nsix 1.15.0\r\nstanza 1.1.1\r\nthreadpoolctl 2.1.0\r\ntokenizers 0.8.1rc2\r\ntorch 1.6.0+cu101\r\ntqdm 4.48.2\r\ntransformers 3.1.0\r\nurllib3 1.25.10\r\nwheel 0.34.2\r\nxxhash 2.0.0\r\n\r\nWindows 10 - Python 3.8\r\n================\r\nPackage - Version\r\n----------------------------\r\ncertifi 2020.6.20\r\nchardet 3.0.4\r\nclick 7.1.2\r\ndatasets 1.0.1\r\ndill 0.3.2\r\nfasttext 0.9.2\r\nfilelock 3.0.12\r\nfuture 0.18.2\r\nidna 2.10\r\njoblib 0.16.0\r\nnlp 0.4.0\r\nnltk 3.5\r\nnumpy 1.19.1\r\npackaging 20.4\r\npandas 1.1.1\r\npip 20.0.2\r\nprotobuf 3.13.0\r\npyarrow 1.0.1\r\npybind11 2.5.0\r\npyparsing 2.4.7\r\npython-dateutil 2.8.1\r\npytz 2020.1\r\nregex 2020.7.14\r\nrequests 2.24.0\r\nsacremoses 0.0.43\r\nscikit-learn 0.23.2\r\nscipy 1.5.2\r\nsentence-transformers 0.3.5.1\r\nsentencepiece 0.1.91\r\nsetuptools 46.1.3\r\nsix 1.15.0\r\nstanza 1.1.1\r\nthreadpoolctl 2.1.0\r\ntokenizers 0.8.1rc1\r\ntorch 1.6.0+cu101\r\ntqdm 4.48.2\r\ntransformers 3.0.2\r\nurllib3 1.25.10\r\nwheel 0.34.2\r\nxxhash 2.0.0","Downgrading to 3.7 does not help. Here is a dummy text file:\r\n\r\n```text\r\nVerzekering weigert vaker te betalen\r\nBedrijven van verzekeringen erkennen steeds minder arbeidsongevallen .\r\nIn 2012 weigerden de bedrijven te betalen voor 21.055 ongevallen op het werk .\r\nDat is 11,8 % van alle ongevallen op het werk .\r\nNog nooit weigerden verzekeraars zoveel zaken .\r\nIn 2012 hadden 135.118 mensen een ongeval op het werk .\r\nDat zijn elke werkdag 530 mensen .\r\nBij die ongevallen stierven 67 mensen .\r\nBijna 12.000 hebben een handicap na het ongeval .\r\nGeen echt arbeidsongeval Bedrijven moeten een verzekering hebben voor hun werknemers .\r\n```\r\n\r\nA temporary work around for the \"text\" type, is\r\n\r\n```python\r\ndataset = Dataset.from_dict({\"text\": Path(dataset_f).read_text().splitlines()})\r\n```","![image](https:\/\/user-images.githubusercontent.com\/6847024\/92997714-d2add900-f532-11ea-83d4-e3473c2d94d7.png)\r\n![image](https:\/\/user-images.githubusercontent.com\/6847024\/92997724-e22d2200-f532-11ea-951d-b1d8f4582ea3.png)\r\neven i am facing the same issue.","@banunitte Please do not post screenshots in the future but copy-paste your code and the errors. That allows others to copy-and-paste your code and test it. You may also want to provide the Python version that you are using.","I have the exact same problem in Windows 10, Python 3.8.\r\n","I have the same problem on Linux of the script crashing with a CSV error. This may be caused by 'CRLF', when changed 'CRLF' to 'LF', the problem solved.","I pushed a fix for `pyarrow.lib.ArrowInvalid: CSV parse error`. 
Let me know if you still have this issue.\r\n\r\nNot sure about the windows one yet","To complete what @lhoestq is saying, I think that to use the new version of the `text` processing script (which is on master right now) you need to either specify the version of the script to be the `master` one or to install the lib from source (in which case it uses the `master` version of the script by default):\r\n```python\r\ndataset = load_dataset('text', script_version='master', data_files=XXX)\r\n```\r\nWe do versioning by default, i.e. your version of the dataset lib will use the script with the same version by default (i.e. only the `1.0.1` version of the script if you have the PyPI version `1.0.1` of the lib).","![image](https:\/\/user-images.githubusercontent.com\/36957508\/93300760-fa9a8680-f829-11ea-9105-7a6f67ad8373.png)\r\nwin10, py3.6\r\n\r\n\r\n```\r\nfrom datasets import Features, Value, ClassLabel, load_dataset\r\n\r\n\r\nfeatures = Features({'text': Value('string'), 'ctext': Value('string')})\r\nfile_dict = {'train': PATH\/'summary.csv'}\r\n\r\ndataset = load_dataset('csv', data_files=file_dict, script_version='master', delimiter='\\t', column_names=['text', 'ctext'], features=features)\r\n```","```python\r\nTraceback` (most recent call last):\r\n File \"main.py\", line 281, in <module>\r\n main()\r\n File \"main.py\", line 190, in main\r\n train_data, test_data = data_factory(\r\n File \"main.py\", line 129, in data_factory\r\n train_data = load_dataset('text', \r\n File \"\/home\/me\/Downloads\/datasets\/src\/datasets\/load.py\", line 608, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/home\/me\/Downloads\/datasets\/src\/datasets\/builder.py\", line 468, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/home\/me\/Downloads\/datasets\/src\/datasets\/builder.py\", line 546, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/home\/me\/Downloads\/datasets\/src\/datasets\/builder.py\", line 888, in _prepare_split\r\n for key, table in utils.tqdm(generator, unit=\" tables\", leave=False, disable=not_verbose):\r\n File \"\/home\/me\/.local\/lib\/python3.8\/site-packages\/tqdm\/std.py\", line 1130, in __iter__\r\n for obj in iterable:\r\n File \"\/home\/me\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/text\/512f465342e4f4cd07a8791428a629c043bb89d55ad7817cbf7fcc649178b014\/text.py\", line 103, in _generate_tables\r\n pa_table = pac.read_csv(\r\n File \"pyarrow\/_csv.pyx\", line 617, in pyarrow._csv.read_csv\r\n File \"pyarrow\/error.pxi\", line 123, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow\/error.pxi\", line 85, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: CSV parse error: Expected 1 columns, got 2\r\n```\r\n\r\nUnfortunately i am still getting this issue on Linux. 
I installed datasets from source and specified script_version to master.\r\n\r\n","> ![image](https:\/\/user-images.githubusercontent.com\/36957508\/93300760-fa9a8680-f829-11ea-9105-7a6f67ad8373.png)\r\n> win10, py3.6\r\n> \r\n> ```\r\n> from datasets import Features, Value, ClassLabel, load_dataset\r\n> \r\n> \r\n> features = Features({'text': Value('string'), 'ctext': Value('string')})\r\n> file_dict = {'train': PATH\/'summary.csv'}\r\n> \r\n> dataset = load_dataset('csv', data_files=file_dict, script_version='master', delimiter='\\t', column_names=['text', 'ctext'], features=features)\r\n> ```\r\n\r\nSince #644 it should now work on windows @ScottishFold007 \r\n\r\n> Trying the following snippet, I get different problems on Linux and Windows.\r\n> \r\n> ```python\r\n> dataset = load_dataset(\"text\", data_files=\"data.txt\")\r\n> # or \r\n> dataset = load_dataset(\"text\", data_files=[\"data.txt\"])\r\n> ```\r\n>\r\n> Windows just seems to get stuck. Even with a tiny dataset of 10 lines, it has been stuck for 15 minutes already at this message:\r\n> \r\n> ```\r\n> Checking C:\\Users\\bramv\\.cache\\huggingface\\datasets\\b1d50a0e74da9a7b9822cea8ff4e4f217dd892e09eb14f6274a2169e5436e2ea.30c25842cda32b0540d88b7195147decf9671ee442f4bc2fb6ad74016852978e.py for additional imports.\r\n> Found main folder for dataset https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.0.1\/datasets\/text\/text.py at C:\\Users\\bramv\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\text\r\n> Found specific version folder for dataset https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.0.1\/datasets\/text\/text.py at C:\\Users\\bramv\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\text\\7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7\r\n> Found script file from https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.0.1\/datasets\/text\/text.py to C:\\Users\\bramv\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\text\\7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7\\text.py\r\n> Couldn't find dataset infos file at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.0.1\/datasets\/text\\dataset_infos.json\r\n> Found metadata file for dataset https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.0.1\/datasets\/text\/text.py at C:\\Users\\bramv\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\text\\7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7\\text.json\r\n> Using custom data configuration default\r\n> ```\r\n\r\nSame for you @BramVanroy .\r\n\r\nNot sure about the one on linux though","> To complete what @lhoestq is saying, I think that to use the new version of the `text` processing script (which is on master right now) you need to either specify the version of the script to be the `master` one or to install the lib from source (in which case it uses the `master` version of the script by default):\r\n> \r\n> ```python\r\n> dataset = load_dataset('text', script_version='master', data_files=XXX)\r\n> ```\r\n> \r\n> We do versioning by default, i.e. your version of the dataset lib will use the script with the same version by default (i.e. only the `1.0.1` version of the script if you have the PyPI version `1.0.1` of the lib).\r\n\r\nLinux here:\r\n\r\nI was using the 0.4.0 nlp library load_dataset to load a text dataset of 9-10Gb without collapsing the RAM memory. However, today I got the csv error message mentioned in this issue. 
After installing the new (datasets) library from source and specifying the script_verson = 'master' I'm still having this same error message. Furthermore, I cannot use the dictionary \"trick\" to load the dataset since the system kills the process due to a RAM out of memory problem. Is there any other solution to this error? Thank you in advance. ","Hi @raruidol \r\nTo fix the RAM issue you'll need to shard your text files into smaller files (see https:\/\/github.com\/huggingface\/datasets\/issues\/610#issuecomment-691672919 for example)\r\n\r\nI'm not sure why you're having the csv error on linux.\r\nDo you think you could to to reproduce it on google colab for example ?\r\nOr send me a dummy .txt file that reproduces the issue ?","@lhoestq \r\n\r\nThe crash message shows up when loading the dataset:\r\n```\r\nprint('Loading corpus...') \r\nfiles = glob.glob('corpora\/shards\/*') \r\n-> dataset = load_dataset('text', script_version='master', data_files=files) \r\nprint('Corpus loaded.')\r\n```\r\nAnd this is the exact message:\r\n```\r\nTraceback (most recent call last):\r\n File \"run_language_modeling.py\", line 27, in <module>\r\n dataset = load_dataset('text', script_version='master', data_files=files)\r\n File \"\/home\/jupyter-raruidol\/DebatAnalyser\/env\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 611, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"\/home\/jupyter-raruidol\/DebatAnalyser\/env\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 471, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/home\/jupyter-raruidol\/DebatAnalyser\/env\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 548, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/home\/jupyter-raruidol\/DebatAnalyser\/env\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 892, in _prepare_split\r\n for key, table in utils.tqdm(generator, unit=\" tables\", leave=False, disable=not_verbose):\r\n File \"\/home\/jupyter-raruidol\/DebatAnalyser\/env\/lib\/python3.7\/site-packages\/tqdm\/std.py\", line 1130, in __iter__\r\n for obj in iterable:\r\n File \"\/home\/jupyter-raruidol\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/text\/512f465342e4f4cd07a8791428a629c043bb89d55ad7817cbf7fcc649178b014\/text.py\", line 107, in _generate_tables\r\n convert_options=self.config.convert_options,\r\n File \"pyarrow\/_csv.pyx\", line 714, in pyarrow._csv.read_csv\r\n File \"pyarrow\/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow\/error.pxi\", line 84, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: CSV parse error: Expected 1 columns, got 2\r\n```\r\n\r\nAnd these are the pip packages I have atm and their versions:\r\n\r\n```\r\nPackage Version Location \r\n--------------- --------- -------------------------------------------------------------\r\ncertifi 2020.6.20 \r\nchardet 3.0.4 \r\nclick 7.1.2 \r\ndatasets 1.0.2 \r\ndill 0.3.2 \r\nfilelock 3.0.12 \r\nfuture 0.18.2 \r\nidna 2.10 \r\njoblib 0.16.0 \r\nnumpy 1.19.1 \r\npackaging 20.4 \r\npandas 1.1.1 \r\npip 19.0.3 \r\npyarrow 1.0.1 \r\npyparsing 2.4.7 \r\npython-dateutil 2.8.1 \r\npytz 2020.1 \r\nregex 2020.7.14 \r\nrequests 2.24.0 \r\nsacremoses 0.0.43 \r\nsentencepiece 0.1.91 \r\nsetuptools 40.8.0 \r\nsix 1.15.0 \r\ntokenizers 0.8.1rc2 \r\ntorch 1.6.0 \r\ntqdm 4.48.2 \r\ntransformers 3.0.2 
\/home\/jupyter-raruidol\/DebatAnalyser\/env\/src\/transformers\/src\r\n```\r\n\r\n\r\n","I tested on google colab which is also linux using this code:\r\n\r\n- first download an arbitrary text file\r\n```bash\r\nwget https:\/\/raw.githubusercontent.com\/abisee\/cnn-dailymail\/master\/url_lists\/all_train.txt\r\n```\r\n- then run\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nd = load_dataset(\"text\", data_files=\"all_train.txt\", script_version='master')\r\n```\r\nAnd I don't get this issue.\r\n\r\n\\> Could you test on your side if these lines work @raruidol ?\r\n\r\nalso cc @Skyy93 as it seems you have the same issue\r\n\r\nIf it works:\r\nIt could mean that the issue could come from unexpected patterns in the files you want to use.\r\nIn that case we should find a way to handle them.\r\n\r\nAnd if it doesn't work:\r\nIt could mean that it comes from the way pyarrow reads text files on linux.\r\nIn that case we should report it to pyarrow and find a workaround in the meantime\r\n\r\nEither way it should help to find where this bug comes from and fix it :)\r\n\r\nThank you in advance !","Update: also tested the above code in a docker container from [jupyter\/minimal-notebook](https:\/\/hub.docker.com\/r\/jupyter\/minimal-notebook\/) (based on ubuntu) and still not able to reproduce","It looks like with your text input file works without any problem. I have been doing some experiments this morning with my input files and I'm almost certain that the crash is caused by some unexpected pattern in the files. However, I've not been able to spot the main cause of it. What I find strange is that this same corpus was being loaded by the nlp 0.4.0 library without any problem... Where can I find the code where you structure the input text data in order to use it with pyarrow?","Under the hood it does\r\n```python\r\nimport pyarrow as pa\r\nimport pyarrow.csv\r\n\r\n# Use csv reader from Pyarrow with one column for text files\r\n\r\n# To force the one-column setting, we set an arbitrary character\r\n# that is not in text files as delimiter, such as \\b or \\v.\r\n# The bell character, \\b, was used to make beeps back in the days\r\nparse_options = pa.csv.ParseOptions( \r\n delimiter=\"\\b\", \r\n quote_char=False, \r\n double_quote=False, \r\n escape_char=False, \r\n newlines_in_values=False, \r\n ignore_empty_lines=False, \r\n)\r\n\r\nread_options= pa.csv.ReadOptions(use_threads=True, column_names=[\"text\"])\r\n\r\npa_table = pa.csv.read_csv(\"all_train.txt\", read_options=read_options, parse_options=parse_options)\r\n```\r\n\r\nNote that we changed the parse options with datasets 1.0\r\nIn particular the delimiter used to be `\\r` but this delimiter doesn't work on windows.","Could you try with `\\a` instead of `\\b` ? It looks like the bell character is \\a in python and not \\b","I was just exploring if the crash was happening in every shard or not, and which shards were generating the error message. 
With \\b I got the following list of shards crashing:\r\n\r\n```\r\nErrors on files: ['corpora\/shards\/shard_0069', 'corpora\/shards\/shard_0043', 'corpora\/shards\/shard_0014', 'corpora\/shards\/shard_0032', 'corpora\/shards\/shard_0088', 'corpora\/shards\/shard_0018', 'corpora\/shards\/shard_0073', 'corpora\/shards\/shard_0079', 'corpora\/shards\/shard_0038', 'corpora\/shards\/shard_0041', 'corpora\/shards\/shard_0007', 'corpora\/shards\/shard_0004', 'corpora\/shards\/shard_0102', 'corpora\/shards\/shard_0096', 'corpora\/shards\/shard_0030', 'corpora\/shards\/shard_0076', 'corpora\/shards\/shard_0067', 'corpora\/shards\/shard_0052', 'corpora\/shards\/shard_0026', 'corpora\/shards\/shard_0024', 'corpora\/shards\/shard_0064', 'corpora\/shards\/shard_0044', 'corpora\/shards\/shard_0013', 'corpora\/shards\/shard_0062', 'corpora\/shards\/shard_0057', 'corpora\/shards\/shard_0097', 'corpora\/shards\/shard_0094', 'corpora\/shards\/shard_0078', 'corpora\/shards\/shard_0075', 'corpora\/shards\/shard_0039', 'corpora\/shards\/shard_0077', 'corpora\/shards\/shard_0021', 'corpora\/shards\/shard_0040', 'corpora\/shards\/shard_0009', 'corpora\/shards\/shard_0023', 'corpora\/shards\/shard_0095', 'corpora\/shards\/shard_0107', 'corpora\/shards\/shard_0063', 'corpora\/shards\/shard_0086', 'corpora\/shards\/shard_0047', 'corpora\/shards\/shard_0089', 'corpora\/shards\/shard_0037', 'corpora\/shards\/shard_0101', 'corpora\/shards\/shard_0093', 'corpora\/shards\/shard_0082', 'corpora\/shards\/shard_0091', 'corpora\/shards\/shard_0065', 'corpora\/shards\/shard_0020', 'corpora\/shards\/shard_0070', 'corpora\/shards\/shard_0008', 'corpora\/shards\/shard_0058', 'corpora\/shards\/shard_0060', 'corpora\/shards\/shard_0022', 'corpora\/shards\/shard_0059', 'corpora\/shards\/shard_0100', 'corpora\/shards\/shard_0027', 'corpora\/shards\/shard_0072', 'corpora\/shards\/shard_0098', 'corpora\/shards\/shard_0019', 'corpora\/shards\/shard_0066', 'corpora\/shards\/shard_0042', 'corpora\/shards\/shard_0053']\r\n```\r\n\r\nI also tried with \\a and the list decreased but there were still several crashes:\r\n\r\n```\r\nErrors on files: ['corpora\/shards\/shard_0069', 'corpora\/shards\/shard_0055', 'corpora\/shards\/shard_0043', 'corpora\/shards\/shard_0014', 'corpora\/shards\/shard_0073', 'corpora\/shards\/shard_0025', 'corpora\/shards\/shard_0068', 'corpora\/shards\/shard_0102', 'corpora\/shards\/shard_0096', 'corpora\/shards\/shard_0076', 'corpora\/shards\/shard_0067', 'corpora\/shards\/shard_0026', 'corpora\/shards\/shard_0024', 'corpora\/shards\/shard_0044', 'corpora\/shards\/shard_0087', 'corpora\/shards\/shard_0092', 'corpora\/shards\/shard_0074', 'corpora\/shards\/shard_0094', 'corpora\/shards\/shard_0078', 'corpora\/shards\/shard_0039', 'corpora\/shards\/shard_0077', 'corpora\/shards\/shard_0040', 'corpora\/shards\/shard_0009', 'corpora\/shards\/shard_0107', 'corpora\/shards\/shard_0063', 'corpora\/shards\/shard_0103', 'corpora\/shards\/shard_0047', 'corpora\/shards\/shard_0033', 'corpora\/shards\/shard_0089', 'corpora\/shards\/shard_0037', 'corpora\/shards\/shard_0082', 'corpora\/shards\/shard_0071', 'corpora\/shards\/shard_0091', 'corpora\/shards\/shard_0065', 'corpora\/shards\/shard_0070', 'corpora\/shards\/shard_0058', 'corpora\/shards\/shard_0081', 'corpora\/shards\/shard_0060', 'corpora\/shards\/shard_0002', 'corpora\/shards\/shard_0059', 'corpora\/shards\/shard_0027', 'corpora\/shards\/shard_0072', 'corpora\/shards\/shard_0098', 'corpora\/shards\/shard_0019', 'corpora\/shards\/shard_0045', 
'corpora\/shards\/shard_0036', 'corpora\/shards\/shard_0066', 'corpora\/shards\/shard_0053']\r\n```\r\n\r\nWhich means that it is quite possible that the assumption of that some unexpected pattern in the files is causing the crashes is true. If I am able to reach any conclusion I will post It here asap.","Hmmm I was expecting it to work with \\a, not sure why they appear in your text files though","Hi @lhoestq, is there any input length restriction which was not before the update of the nlp library?","No we never set any input length restriction on our side (maybe arrow but I don't think so)","@lhoestq Can you ever be certain that a delimiter character is not present in a plain text file? In other formats (e.g. CSV) , rules are set of what is allowed and what isn't so that it actually constitutes a CSV file. In a text file you basically have \"anything goes\", so I don't think you can ever be entirely sure that the chosen delimiter does not exist in the text file, or am I wrong? \r\n\r\nIf I understand correctly you choose a delimiter that we hope does not exist in the file, so that when the CSV parser starts splitting into columns, it will only ever create one column? Why can't we use a newline character though?","Okay, I have splitted the crashing shards into individual sentences and some examples of the inputs that are causing the crashes are the following ones:\r\n\r\n\r\n_4.\u2003DE L\u2019ORGANITZACI\u00d3 ESTAMENTAL A L\u2019ORGANITZACI\u00d3 EN CLASSES A mesura que es desenvolupava un sistema econ\u00f2mic capitalista i naixia una classe burgesa cada vegada m\u00e9s preparada per a substituir els dirigents de les velles monarquies absolutistes, es q\u00fcestionava l\u2019abund\u00e0ncia de b\u00e9ns amortitzats, que com s\u2019ha dit estaven fora del mercat i no pagaven tributs, pels perjudicis que ocasionaven a les finances p\u00fabliques i a l\u2019economia en general. Aquest estat d\u2019opini\u00f3 revolucionari va desembocar en un conjunt de mesures pr\u00e0ctiques de car\u00e0cter liberal. D\u2019una banda, les que intentaven desposseir les mans mortes del domini de b\u00e9ns acumulats, proc\u00e9s que acostumem a denominar desamortitzaci\u00f3, i que no \u00e9s m\u00e9s que la nacionalitzaci\u00f3 i venda d\u2019aquests b\u00e9ns eclesi\u00e0stics o civils en subhasta p\u00fablica al millor postor. D\u2019altra banda, les que redimien o redu\u00efen els censos i delmes o aixecaven les prohibicions de venda, \u00e9s a dir, les vinculacions. La desamortitzaci\u00f3, que va afectar b\u00e9ns dels ordes religiosos, dels pobles i d\u2019algunes corporacions civils, no va ser un cam\u00ed f\u00e0cil, perqu\u00e8 costava i costa trobar alg\u00fa que sigui indiferent a la p\u00e8rdua de b\u00e9ns, drets i privilegis. I t\u00e9 una gran transcend\u00e8ncia, va privar els antics estaments de les Espanyes, clero i pobles \u2014la noblesa en queda al marge\u2014, de la for\u00e7a econ\u00f2mica que els donaven bona part de les seves terres i, en \u00faltima inst\u00e0ncia, va preparar el terreny per a la substituci\u00f3 de la vella societat estamental per la nova societat classista. En aquesta societat, en teoria, les agrupacions socials s\u00f3n obertes, no tenen cap estatut jur\u00eddic privilegiat i estan definides per la possessi\u00f3 o no d\u2019uns b\u00e9ns econ\u00f2mics que s\u00f3n lliurement alienables. A les Espanyes la transformaci\u00f3 va afectar poc l\u2019aristocr\u00e0cia latifundista, all\u00e0 on n\u2019hi havia. 
Aquesta situaci\u00f3 va afavorir, en part, la persist\u00e8ncia de la vella cultura de la societat estamental en determinats ambients, i aix\u00f2 ha influ\u00eft decisivament en la manca de democr\u00e0cia que caracteritza la majoria de r\u00e8gims pol\u00edtics que s\u2019han anat succeint. Una manera de pensar que sempre sura en un moment o altre, i que de fet no acaba de desapar\u00e8ixer del tot. 5.\u2003INICI DE LA DESAMORTITZACI\u00d3 A LES ESPANYES Durant el segle xviii, dins d\u2019aquesta visi\u00f3 lliberal, va agafar for\u00e7a en alguns cercles de les Espanyes el corrent d\u2019opini\u00f3 contrari a les mans mortes. Durant el regnat de Carles III, s\u2019arbitraren les primeres mesures desamortitzadores proposades per alguns ministres il\u00b7lustrats. Aquestes disposicions foren modestes i poc eficaces, no van aturar l\u2019acumulaci\u00f3 de terres per part dels estaments que constitu\u00efen les mans mortes i varen afectar principalment b\u00e9ns dels pobles. L\u2019Esgl\u00e9sia no va ser tocada, excepte en el cas de 110_\r\n\r\n_la revoluci\u00f3 liberal, perqu\u00e8, encara que havia perdut els seus drets jurisdiccionals, havia conservat la majoria de terres i fins i tot les havia incrementat amb d\u2019altres que procedien de la desamortitzaci\u00f3. En la nova situaci\u00f3, les mans mortes del bosc p\u00fablic eren l\u2019Estat, que no cerca mai l\u2019autofinan\u00e7ament de les despeses de gesti\u00f3; els diners que manquin ja els posar\u00e0 l\u2019Estat. 9.\u2003DEFENSA I INTENTS DE RECUPERACI\u00d3 DELS B\u00c9NS COMUNALS DESAMORTITZATS El proc\u00e9s de centralitzaci\u00f3 no era senzill, perqu\u00e8, d\u2019una banda, la nova organitzaci\u00f3 apartava de la gesti\u00f3 moltes corporacions locals i molts ve\u00efns que l\u2019havien portada des de l\u2019edat mitjana, i, de l\u2019altra, era dif\u00edcil de coordinar la nova silvicultura amb moltes pr\u00e0ctiques forestals i drets tradicionals, com la pastura, fer llenya o tallar un arbre aqu\u00ed i un altre all\u00e0 quan tenia el gruix suficient, les pr\u00e0ctiques que s\u2019havien fet sempre. Les primeres passes de la nova organitzaci\u00f3 centralitzada varen tenir moltes dificultats en aquells indrets en qu\u00e8 els terrenys municipals i comunals tenien un paper important en l\u2019economia local. La desobedi\u00e8ncia a determinades normes imposades varen prendre formes diferents. Algunes institucions, com, per exemple, la Diputaci\u00f3 de Lleida, varen retardar la tramitaci\u00f3 d\u2019alguns expedients i varen evitar la venda de b\u00e9ns municipals. Molts pobles permeteren deixar que els ve\u00efns continuessin amb les seves pr\u00e0ctiques tradicionals, d\u2019altres varen boicotejar les subhastes d\u2019aprofitaments. L\u2019Estat va reaccionar encomanant a la Gu\u00e0rdia Civil el compliment de les noves directrius. Imposar el nou r\u00e8gim va costar a l\u2019Administraci\u00f3 un grapat d\u2019anys, per\u00f2 de mica en mica, amb molta, molta guarderia i gens de negociaci\u00f3, ho va aconseguir. La nova gesti\u00f3 estatal dels b\u00e9ns municipals va deixar, com hem comentat, molta gent sense uns recursos necessaris per a la superviv\u00e8ncia, sobre tot en \u00e0rees on predominaven les grans propietats, i on els pagesos sense terra treballaven de jornalers temporers. 
Aix\u00f2 va afavorir que, a bona part de les Espanyes, les primeres lluites camperoles de la segona meitat del segle xix defensessin la recuperaci\u00f3 dels comunals desamortitzats; per a molts aquella expropiaci\u00f3 i venda dirigida pels governs mon\u00e0rquics era la causa de molta mis\u00e8ria. D\u2019altres, m\u00e9s radicalitzats, varen entendre que l\u2019eliminaci\u00f3 de la propietat col\u00b7lectiva i la gesti\u00f3 estatal dels boscos no desamortitzats suposava una usurpaci\u00f3 pura i dura. En les zones m\u00e9s afectades per la desamortitzaci\u00f3 aix\u00f2 va donar lloc a un imaginari centrat en la defensa del comunal. La Segona Rep\u00fablica va arribar en una conjuntura econ\u00f2mica de crisi, generada pel crac del 1929. Al camp, aquesta situaci\u00f3 va produir una forta caiguda dels preus dels productes agraris i un increment important de l\u2019atur. QUADERNS AGRARIS 42\u2002(juny 2017), p. 105-126_\r\n\r\nI think that the main difference between the crashing samples and the rest is their length. Therefore, couldn't the length be causing the message errors? I hope with these samples you can identify what is causing the crashes considering that the 0.4.0 nlp library was loading them properly.","So we're using the csv reader to read text files because arrow doesn't have a text reader.\r\nTo workaround the fact that text files are just csv with one column, we want to set a delimiter that doesn't appear in text files.\r\nUntil now I thought that it would do the job but unfortunately it looks like even characters like \\a appear in text files.\r\n\r\nSo we have to option:\r\n- find another delimiter that does the job (maybe `\\x1b` esc or `\\x18` cancel)\r\n- don't use the csv reader from arrow but the text reader from pandas instead (or any other reader). The only important thing is that it must be fast (arrow's reader has a nice and fast multithreaded for csv that we're using now but hopefully we can find an alternative)\r\n\r\n\r\n\r\n> @lhoestq Can you ever be certain that a delimiter character is not present in a plain text file? In other formats (e.g. CSV) , rules are set of what is allowed and what isn't so that it actually constitutes a CSV file. In a text file you basically have \"anything goes\", so I don't think you can ever be entirely sure that the chosen delimiter does not exist in the text file, or am I wrong?\r\n\r\nAs long as the text file follows some encoding it wouldn't make sense to have characters such as the bell character. However I agree it can happen.\r\n\r\n> If I understand correctly you choose a delimiter that we hope does not exist in the file, so that when the CSV parser starts splitting into columns, it will only ever create one column? Why can't we use a newline character though?\r\n\r\nExactly. Arrow doesn't allow the newline character unfortunately.","> Okay, I have splitted the crashing shards into individual sentences and some examples of the inputs that are causing the crashes are the following ones\r\n\r\nThanks for digging into it !\r\n\r\nCharacters like \\a or \\b are not shown when printing the text, so as it is I can't tell if it contains unexpected characters.\r\nMaybe could could open the file in python and check if `\"\\b\" in open(\"path\/to\/file\", \"r\").read()` ?\r\n\r\n> I think that the main difference between the crashing samples and the rest is their length. Therefore, couldn't the length be causing the message errors? 
I hope with these samples you can identify what is causing the crashes considering that the 0.4.0 nlp library was loading them properly.\r\n\r\nTo check that you could try to run \r\n\r\n```python\r\nimport pyarrow as pa\r\nimport pyarrow.csv\r\n\r\nopen(\"dummy.txt\", \"w\").write(((\"a\" * 10_000) + \"\\n\") * 4) # 4 lines of 10 000 'a'\r\n\r\nparse_options = pa.csv.ParseOptions( \r\n delimiter=\"\\b\", \r\n quote_char=False, \r\n double_quote=False, \r\n escape_char=False, \r\n newlines_in_values=False, \r\n ignore_empty_lines=False, \r\n)\r\n\r\nread_options= pa.csv.ReadOptions(use_threads=True, column_names=[\"text\"])\r\n\r\npa_table = pa.csv.read_csv(\"dummy.txt\", read_options=read_options, parse_options=parse_options)\r\n```\r\n\r\non my side it runs without error though","That's true, It was my error printing the text that way. Maybe as a workaround, I can force all my input samples to have \"\\b\" at the end?","> That's true, It was my error printing the text that way. Maybe as a workaround, I can force all my input samples to have \"\\b\" at the end?\r\n\r\nI don't think it would work since we only want one column, and \"\\b\" is set to be the delimiter between two columns, so it will raise the same issue again. Pyarrow would think that there is more than one column if the delimiter is found somewhere.\r\n\r\nAnyway, I I'll work on a new text reader if we don't find the right workaround about this delimiter issue."],"created_at":1599914968000,"updated_at":1603883251000,"closed_at":1603883250000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Trying the following snippet, I get different problems on Linux and Windows.\r\n\r\n\r\n```python\r\ndataset = load_dataset(\"text\", data_files=\"data.txt\")\r\n# or \r\ndataset = load_dataset(\"text\", data_files=[\"data.txt\"])\r\n```\r\n\r\n(ps [This example](https:\/\/huggingface.co\/docs\/datasets\/loading_datasets.html#json-files) shows that you can use a string as input for data_files, but the signature is `Union[Dict, List]`.)\r\n\r\nThe problem on Linux is that the script crashes with a CSV error (even though it isn't a CSV file). 
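\r\n\r\n(As an aside, a minimal sketch, not from the original report, for checking whether a text file contains the control characters that the CSV-based `text` loader may be treating as a column delimiter; `data.txt` is the same placeholder file as above:)\r\n\r\n```python\r\n# Hedged check: control characters such as \"\\a\" or \"\\b\" in the file would be\r\n# interpreted as an extra column by a CSV reader that uses them as a delimiter.\r\nwith open(\"data.txt\", \"r\", encoding=\"utf-8\") as f:\r\n    content = f.read()\r\nfor ch in (\"\\a\", \"\\b\"):\r\n    print(repr(ch), \"present\" if ch in content else \"absent\")\r\n```\r\n\r\n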
On Windows the script just seems to freeze or get stuck after loading the config file.\r\n\r\nLinux stack trace:\r\n```\r\nPyTorch version 1.6.0+cu101 available.\r\nChecking \/home\/bram\/.cache\/huggingface\/datasets\/b1d50a0e74da9a7b9822cea8ff4e4f217dd892e09eb14f6274a2169e5436e2ea.30c25842cda32b0540d88b7195147decf9671ee442f4bc2fb6ad74016852978e.py for additional imports.\r\nFound main folder for dataset https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.0.1\/datasets\/text\/text.py at \/home\/bram\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/text\r\nFound specific version folder for dataset https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.0.1\/datasets\/text\/text.py at \/home\/bram\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/text\/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7\r\nFound script file from https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.0.1\/datasets\/text\/text.py to \/home\/bram\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/text\/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7\/text.py\r\nCouldn't find dataset infos file at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.0.1\/datasets\/text\/dataset_infos.json\r\nFound metadata file for dataset https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.0.1\/datasets\/text\/text.py at \/home\/bram\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/text\/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7\/text.json\r\nUsing custom data configuration default\r\nGenerating dataset text (\/home\/bram\/.cache\/huggingface\/datasets\/text\/default-0907112cc6cd2a38\/0.0.0\/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7)\r\nDownloading and preparing dataset text\/default-0907112cc6cd2a38 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/home\/bram\/.cache\/huggingface\/datasets\/text\/default-0907112cc6cd2a38\/0.0.0\/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7...\r\nDataset not on Hf google storage. 
Downloading and preparing it from source\r\nDownloading took 0.0 min\r\nChecksum Computation took 0.0 min\r\nUnable to verify checksums.\r\nGenerating split train\r\nTraceback (most recent call last):\r\n File \"\/home\/bram\/Python\/projects\/dutch-simplification\/utils.py\", line 45, in prepare_data\r\n dataset = load_dataset(\"text\", data_files=dataset_f)\r\n File \"\/home\/bram\/.local\/share\/virtualenvs\/dutch-simplification-NcpPZtDF\/lib\/python3.8\/site-packages\/datasets\/load.py\", line 608, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/home\/bram\/.local\/share\/virtualenvs\/dutch-simplification-NcpPZtDF\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 468, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/home\/bram\/.local\/share\/virtualenvs\/dutch-simplification-NcpPZtDF\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 546, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/home\/bram\/.local\/share\/virtualenvs\/dutch-simplification-NcpPZtDF\/lib\/python3.8\/site-packages\/datasets\/builder.py\", line 888, in _prepare_split\r\n for key, table in utils.tqdm(generator, unit=\" tables\", leave=False, disable=not_verbose):\r\n File \"\/home\/bram\/.local\/share\/virtualenvs\/dutch-simplification-NcpPZtDF\/lib\/python3.8\/site-packages\/tqdm\/std.py\", line 1130, in __iter__\r\n for obj in iterable:\r\n File \"\/home\/bram\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/text\/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7\/text.py\", line 100, in _generate_tables\r\n pa_table = pac.read_csv(\r\n File \"pyarrow\/_csv.pyx\", line 714, in pyarrow._csv.read_csv\r\n File \"pyarrow\/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow\/error.pxi\", line 84, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: CSV parse error: Expected 1 columns, got 2\r\n```\r\n\r\nWindows just seems to get stuck. 
Even with a tiny dataset of 10 lines, it has been stuck for 15 minutes already at this message:\r\n\r\n```\r\nChecking C:\\Users\\bramv\\.cache\\huggingface\\datasets\\b1d50a0e74da9a7b9822cea8ff4e4f217dd892e09eb14f6274a2169e5436e2ea.30c25842cda32b0540d88b7195147decf9671ee442f4bc2fb6ad74016852978e.py for additional imports.\r\nFound main folder for dataset https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.0.1\/datasets\/text\/text.py at C:\\Users\\bramv\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\text\r\nFound specific version folder for dataset https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.0.1\/datasets\/text\/text.py at C:\\Users\\bramv\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\text\\7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7\r\nFound script file from https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.0.1\/datasets\/text\/text.py to C:\\Users\\bramv\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\text\\7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7\\text.py\r\nCouldn't find dataset infos file at https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.0.1\/datasets\/text\\dataset_infos.json\r\nFound metadata file for dataset https:\/\/raw.githubusercontent.com\/huggingface\/datasets\/1.0.1\/datasets\/text\/text.py at C:\\Users\\bramv\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\text\\7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7\\text.json\r\nUsing custom data configuration default\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/622\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/621","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/621\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/621\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/621\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/621","id":700171097,"node_id":"MDExOlB1bGxSZXF1ZXN0NDg1ODQ3ODYz","number":621,"title":"[docs] Index: The native emoji looks kinda ugly in large 
size","user":{"login":"julien-c","id":326577,"node_id":"MDQ6VXNlcjMyNjU3Nw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/326577?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/julien-c","html_url":"https:\/\/github.com\/julien-c","followers_url":"https:\/\/api.github.com\/users\/julien-c\/followers","following_url":"https:\/\/api.github.com\/users\/julien-c\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/julien-c\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/julien-c\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/julien-c\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/julien-c\/orgs","repos_url":"https:\/\/api.github.com\/users\/julien-c\/repos","events_url":"https:\/\/api.github.com\/users\/julien-c\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/julien-c\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1599904120000,"updated_at":1600150803000,"closed_at":1600150802000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/621","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/621","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/621.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/621.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/621\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/620","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/620\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/620\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/620\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/620","id":699815135,"node_id":"MDU6SXNzdWU2OTk4MTUxMzU=","number":620,"title":"map\/filter multiprocessing raises errors and corrupts 
datasets","user":{"login":"timothyjlaurent","id":2000204,"node_id":"MDQ6VXNlcjIwMDAyMDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2000204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/timothyjlaurent","html_url":"https:\/\/github.com\/timothyjlaurent","followers_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/followers","following_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/orgs","repos_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/repos","events_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892857,"node_id":"MDU6TGFiZWwxOTM1ODkyODU3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/bug","name":"bug","color":"d73a4a","default":true,"description":"Something isn't working"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["It seems that I ran into the same problem\r\n```\r\ndef tokenize(cols, example):\r\n for in_col, out_col in cols.items():\r\n example[out_col] = hf_tokenizer.convert_tokens_to_ids(hf_tokenizer.tokenize(example[in_col]))\r\n return example\r\ncola = 
datasets.load_dataset('glue', 'cola')\r\ntokenized_cola = cola.map(partial(tokenize, {'sentence': 'text_idxs'}),\r\n num_proc=2,)\r\n```\r\nand it outpus (exceprts)\r\n```\r\nConcatenating 2 shards from multiprocessing\r\nSet __getitem__(key) output type to python objects for ['idx', 'label', 'sentence', 'text_idxs'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\nTesting the mapped function outputs\r\nTesting finished, running the mapping function on the dataset\r\nDone writing 532 indices in 4256 bytes .\r\nDone writing 531 indices in 4248 bytes .\r\nProcess #0 will write at \/home\/yisiang\/.cache\/huggingface\/datasets\/glue\/cola\/1.0.0\/930e9d141872db65102cabb9fa8ac01c11ffc8a1b72c2e364d8cdda4610df542\/tokenized_test_00000_of_00002.arrow\r\nProcess #1 will write at \/home\/yisiang\/.cache\/huggingface\/datasets\/glue\/cola\/1.0.0\/930e9d141872db65102cabb9fa8ac01c11ffc8a1b72c2e364d8cdda4610df542\/tokenized_test_00001_of_00002.arrow\r\nSpawning 2 processes\r\n```\r\nand then the program never stop.","same problem.\r\n`encoded_dataset = core_data.map(lambda examples: tokenizer(examples[\"query\"], examples[\"document\"], padding=True, truncation='longest_first', return_tensors=\"pt\", max_length=384), num_proc=16, keep_in_memory=True)`\r\nit outputs:\r\n```\r\nSet __getitem__(key) output type to python objects for ['document', 'is_random', 'query'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\nDone writing 1787500 indices in 25568400000 bytes .\r\nSet __getitem__(key) output type to python objects for ['document', 'is_random', 'query'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\nDone writing 1787500 indices in 25568400000 bytes .\r\nSet __getitem__(key) output type to python objects for ['document', 'is_random', 'query'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\nDone writing 1787500 indices in 25568400000 bytes .\r\nSet __getitem__(key) output type to python objects for ['document', 'is_random', 'query'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\nDone writing 1787500 indices in 25568400000 bytes .\r\nSet __getitem__(key) output type to python objects for ['document', 'is_random', 'query'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\nDone writing 1787500 indices in 25568400000 bytes .\r\nSet __getitem__(key) output type to python objects for ['document', 'is_random', 'query'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\nDone writing 1787500 indices in 25568400000 bytes .\r\nSet __getitem__(key) output type to python objects for ['document', 'is_random', 'query'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\nDone writing 1787500 indices in 25568400000 bytes .\r\nSet __getitem__(key) output type to python objects for ['document', 'is_random', 'query'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\nDone writing 1787499 indices in 25568385696 bytes .\r\nSet __getitem__(key) output type to python objects for ['document', 'is_random', 'query'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\nSpawning 16 processes\r\n```","Thanks for reporting.\r\n\r\n\r\nWhich tokenizers are you using ? What platform are you on ? Can you tell me which version of datasets and pyarrow you're using ? 
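\r\n\r\nFor example, something like the following (a minimal sketch, assuming the packages are importable) can be pasted into a report to collect the relevant versions:\r\n\r\n```python\r\nimport platform\r\n\r\nimport datasets\r\nimport pyarrow\r\n\r\n# Print the environment details usually needed to triage these reports\r\nprint(\"python:\", platform.python_version())\r\nprint(\"datasets:\", datasets.__version__)\r\nprint(\"pyarrow:\", pyarrow.__version__)\r\n```\r\n\r\n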
@timothyjlaurent @richarddwang @HuangLianzhe \r\n\r\nAlso if you're able to reproduce the issue on google colab that would be very helpful.\r\n\r\nI tried to run your code @richarddwang with the bert tokenizer and I wasn't able to reproduce","Hi, sorry that I forgot to check what my version was.\r\nBut after updating datasets to master (editable install) and the latest pyarrow, \r\nit works now ~","Sorry, I just noticed this.\r\nI'm running this on macOS; the version of datasets I was using was 1.0.0, but I've also tried it on 1.0.2. `pyarrow==1.0.1`, Python 3.6\r\n\r\nConsider this code:\r\n```python\r\n\r\n loader_path = str(Path(__file__).parent \/ \"prodigy_dataset_builder.py\")\r\n ds = load_dataset(\r\n loader_path, name=\"prodigy-ds\", data_files=list(file_paths), cache_dir=cache_dir\r\n )[\"train\"]\r\n valid_relations = set(vocabulary.relation_types.keys())\r\n\r\n ds = ds.filter(filter_good_rows, fn_kwargs=dict(valid_rel_labels=valid_relations))\r\n\r\n ds = ds.map(map_bpe_encodings, batched=True, fn_kwargs=dict(tokenizer=vocabulary.tokenizer), num_proc=10)\r\n\r\n # add all feature data\r\n ner_ds: Dataset = ds.map(\r\n add_bio_tags,\r\n fn_kwargs=dict(ner_label_map=vocabulary.ner_labels, tokenizer=vocabulary.tokenizer),\r\n )\r\n rel_ds: Dataset = ner_ds.map(\r\n relation_ds_factory,\r\n batched=True,\r\n writer_batch_size=100,\r\n fn_kwargs=dict(tokenizer=vocabulary.tokenizer, vocabulary=vocabulary),\r\n )\r\n```\r\nThe loader is essentially a jsonloader with some extra error handling. The data is in jsonlines format with a text field and a list of span objects and relation objects. \r\n\r\nIn `ner_ds`, a field `ner_labels` is added; this is used in the downstream `relation_ds_factory`. It all runs fine in a single process but I get a KeyError if run with num_proc set:\r\n\r\n```\r\n File \"\/Users\/timothy.laurent\/src\/inv-text2struct\/text2struct\/model\/dataset.py\", line 348, in relation_ds_factory\r\n ner_labels = example[\"ner_labels\"]\r\nKeyError: 'ner_labels'\r\n``` \r\n\r\nThis is just one example of what goes wrong. I've started just saving the dataset as arrow at the end because it takes a long time to map\/filter\/shuffle and the caching isn't working (tracked it down to byte differences in the pickled functions). 
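\r\n\r\n(As a side note, a minimal sketch with placeholder data of how the fingerprint-based cache can be bypassed per call while debugging this kind of issue; `load_from_cache_file` is the relevant `map` argument:)\r\n\r\n```python\r\nfrom datasets import Dataset\r\n\r\nds = Dataset.from_dict({\"text\": [\"a\", \"b\", \"c\"]})\r\n# Skip the cache lookup for this call so stale results are never reused\r\nds = ds.map(lambda ex: {\"length\": len(ex[\"text\"])}, load_from_cache_file=False)\r\n```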
\r\n\r\n^^ Interestingly if I heed the warning from Tokenizers and set the environment variable, `TOKENIZERS_PARALLELISM=true` the map just hangs:\r\n\r\n```\r\n[I 200921 21:43:18 filelock:318] Lock 5694118768 released on \/Users\/timothy.laurent\/.cache\/huggingface\/datasets\/_Users_timothy.laurent_.cache_huggingface_datasets_prodigy_dataset_builder_prodigy-ds-5f34378723c4e83f_0.0.0_e67d9b43d5cd82c50b1eae8f2097daf95b601a04dc03ddd504f2b234a5fa247a.lock\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 1.34ba\/s]\r\n#0: 0%| | 0\/1 [00:00<?, ?ba\/s]\r\n#1: 0%| | 0\/1 [00:00<?, ?ba\/s]\r\n#2: 0%| | 0\/1 [00:00<?, ?ba\/s]\r\n#3: 0%| | 0\/1 [00:00<?, ?ba\/s]\r\n#4: 0%| | 0\/1 [00:00<?, ?ba\/s]\r\n#5: 0%| | 0\/1 [00:00<?, ?ba\/s]\r\n#6: 0%| | 0\/1 [00:00<?, ?ba\/s]\r\n#7: 0%| | 0\/1 [00:00<?, ?ba\/s]\r\n#8: 0%| | 0\/1 [00:00<?, ?ba\/s]\r\n```","Thank you, I was able to reproduce :)\r\nI'm on it","#659 should fix the `KeyError` issue. It was due to the formatting not getting updated the right way","Also maybe @n1t0 knows why setting `TOKENIZERS_PARALLELISM=true` creates deadlock issues when calling `map` with multiprocessing ?","@lhoestq \r\n\r\nThanks for taking a look. 
I pulled the master but I still see the key error.\r\n\r\n```\r\nTo disable this warning, you can either:\r\n - Avoid using `tokenizers` before the fork if possible\r\n - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\r\n#0: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 21.56ba\/s]\r\n#1: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 17.71ba\/s]\r\n#2: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 20.45ba\/s]\r\n#3: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 26.05ba\/s]\r\n#4: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 
26.83ba\/s]\r\n#5: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 27.00ba\/s]\r\n#6: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 27.40ba\/s]\r\n#7: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 25.91ba\/s]\r\n#8: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 22.46ba\/s]\r\n#9: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 20.15ba\/s]\r\n#10: 
100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 26.81ba\/s]\r\n#11: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 27.45ba\/s]\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 322\/322 [00:00<00:00, 1462.85ex\/s]\r\nTraceback (most recent call last): | 0\/1 [00:00<?, ?ba\/s]\r\n File \"text2struct\/run_model.py\", line 372, in <module>\r\n main()\r\n File \"text2struct\/run_model.py\", line 368, in main | 0\/1 [00:00<?, ?ba\/s]\r\n run_model(auto_envvar_prefix=\"GFB_CIES\") # pragma: no cover\r\n File \"\/Users\/timothy.laurent\/.virtualenvs\/inv-text2struct\/lib\/python3.6\/site-packages\/click\/core.py\", line 829, in __call__\r\n return self.main(*args, **kwargs) | 0\/1 [00:00<?, ?ba\/s]\r\n File \"\/Users\/timothy.laurent\/.virtualenvs\/inv-text2struct\/lib\/python3.6\/site-packages\/click\/core.py\", line 782, in main\r\n rv = self.invoke(ctx)\r\n File \"\/Users\/timothy.laurent\/.virtualenvs\/inv-text2struct\/lib\/python3.6\/site-packages\/click\/core.py\", line 1236, in invoke\r\n return Command.invoke(self, ctx)\r\n File \"\/Users\/timothy.laurent\/.virtualenvs\/inv-text2struct\/lib\/python3.6\/site-packages\/click\/core.py\", line 1066, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"\/Users\/timothy.laurent\/.virtualenvs\/inv-text2struct\/lib\/python3.6\/site-packages\/click\/core.py\", line 610, in invoke\r\n return callback(*args, **kwargs)\r\n File \"\/Users\/timothy.laurent\/.virtualenvs\/inv-text2struct\/lib\/python3.6\/site-packages\/click\/decorators.py\", line 21, in new_func\r\n return f(get_current_context(), *args, **kwargs)\r\n File \"text2struct\/run_model.py\", line 136, in run_model\r\n ctx.invoke(ctx.command.commands[config_dict[\"mode\"]])\r\n File 
\"\/Users\/timothy.laurent\/.virtualenvs\/inv-text2struct\/lib\/python3.6\/site-packages\/click\/core.py\", line 610, in invoke\r\n return callback(*args, **kwargs)\r\n File \"\/Users\/timothy.laurent\/.virtualenvs\/inv-text2struct\/lib\/python3.6\/site-packages\/click\/decorators.py\", line 21, in new_func\r\n return f(get_current_context(), *args, **kwargs)\r\n File \"text2struct\/run_model.py\", line 187, in train\r\n run_train_model(_parse_subcommand(ctx))\r\n File \"text2struct\/run_model.py\", line 241, in run_train_model\r\n checkpoint_steps=config.train.checkpoint_steps,\r\n File \"\/Users\/timothy.laurent\/src\/inv-text2struct\/text2struct\/model\/train.py\", line 153, in alternate_training\r\n max_len=config.model.dim.max_len,\r\n File \"\/Users\/timothy.laurent\/src\/inv-text2struct\/text2struct\/model\/dataset.py\", line 466, in load_prodigy_tf_datasets\r\n folder, file_patterns, vocabulary, cache_dir=cache_dir, test_pct=test_pct\r\n File \"\/Users\/timothy.laurent\/src\/inv-text2struct\/text2struct\/model\/dataset.py\", line 447, in load_prodigy_arrow_datasets\r\n fn_kwargs=dict(tokenizer=vocabulary.tokenizer, vocabulary=vocabulary),\r\n File \"\/Users\/timothy.laurent\/.virtualenvs\/inv-text2struct\/lib\/python3.6\/site-packages\/datasets\/arrow_dataset.py\", line 1224, in map\r\n update_data = does_function_return_dict(test_inputs, test_indices)\r\n File \"\/Users\/timothy.laurent\/.virtualenvs\/inv-text2struct\/lib\/python3.6\/site-packages\/datasets\/arrow_dataset.py\", line 1195, in does_function_return_dict\r\n function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)\r\n File \"\/Users\/timothy.laurent\/src\/inv-text2struct\/text2struct\/model\/dataset.py\", line 348, in relation_ds_factory\r\n ner_labels = example[\"ner_labels\"]\r\nKeyError: 'ner_labels'\r\n\r\n```","The parallelism is automatically disabled on `tokenizers` when the process gets forked, while we already used the parallelism capabilities of a tokenizer. We have to do it in order to avoid having the process hang, because we cannot safely fork a multithreaded process (cf https:\/\/github.com\/huggingface\/tokenizers\/issues\/187).\r\nSo if possible, the tokenizers shouldn't be used before the fork, so that each process can then make use of the parallelism. Otherwise using `TOKENIZERS_PARALLELISM=false` is the way to go.","> Thanks for taking a look. I pulled the master but I still see the key error.\r\n\r\nI am no longer able to get the error since #659 was merged. Not sure why you still have it @timothyjlaurent \r\nMaybe it is a cache issue ? Could you try to use `load_from_cache_file=False` in your `.map()` calls ?","> The parallelism is automatically disabled on `tokenizers` when the process gets forked, while we already used the parallelism capabilities of a tokenizer. We have to do it in order to avoid having the process hang, because we cannot safely fork a multithreaded process (cf [huggingface\/tokenizers#187](https:\/\/github.com\/huggingface\/tokenizers\/issues\/187)).\r\n> So if possible, the tokenizers shouldn't be used before the fork, so that each process can then make use of the parallelism. Otherwise using `TOKENIZERS_PARALLELISM=false` is the way to go.\r\n\r\nOk thanks :)\r\n\r\nIs there something we should do on the `datasets` side to avoid that that the program hangs ?\r\n\r\nAlso when doing `.map` with a tokenizer, the tokenizer is called once on the first examples of the dataset to check the function output before spawning the processes. 
Is that compatible with how tokenizers are supposed to be used with multiprocessing ?","#659 fixes the empty dict issue\r\n#688 fixes the hang issue","Hmmm I pulled the latest commit, `b93c5517f70a480533a44e0c42638392fd53d90`, and I'm still seeing both the hanging and the key error. ","Hi @timothyjlaurent \r\n\r\nThe hanging fix just got merged, that's why you still had it.\r\n\r\nFor the key error it's possible that the code you ran reused cached datasets from where the KeyError bug was still there.\r\nCould you try to clear your cache or make sure that it doesn't reuse cached data with `.map(..., load_from_cache=False)` ?\r\nLet me know if it helps","Hi @lhoestq , \r\n\r\nThanks for letting me know about the update.\r\n\r\nSo I don't think it's the caching - because the hashing mechanism isn't stable for me -- but that's a different issue. In any case I `rm -rf ~\/.cache\/huggingface` to make a clean slate.\r\n\r\nI synced with master and I see the key error has gone away, I tried with and without the `TOKENIZERS_PARALLELISM` variable set and see the log line for setting the value false before the map.\r\n\r\nNow I'm seeing an issue with `.train_test_split()` on datasets that are the product of a multiprocess map.\r\n\r\nHere is the stack trace:\r\n\r\n```\r\n File \"\/Users\/timothy.laurent\/src\/inv-text2struct\/text2struct\/model\/dataset.py\", line 451, in load_prodigy_arrow_datasets\r\n ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)\r\n File \"\/Users\/timothy.laurent\/.virtualenvs\/inv-text2struct\/src\/datasets\/src\/datasets\/arrow_dataset.py\", line 168, in wrapper\r\n dataset.set_format(**new_format)\r\n File \"\/Users\/timothy.laurent\/.virtualenvs\/inv-text2struct\/src\/datasets\/src\/datasets\/fingerprint.py\", line 163, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"\/Users\/timothy.laurent\/.virtualenvs\/inv-text2struct\/src\/datasets\/src\/datasets\/arrow_dataset.py\", line 794, in set_format\r\n list(filter(lambda col: col not in self._data.column_names, columns)), self._data.column_names\r\nValueError: Columns ['train', 'test'] not in the dataset. Current columns in the dataset: ['_input_hash', '_task_hash', '_view_id', 'answer', 'encoding__ids', 'encoding__offsets', 'encoding__overflowing', 'encoding__tokens', 'encoding__words', 'ner_ids', 'ner_labels', 'relations', 'spans', 'text', 'tokens']\r\n```\r\n\r\n\r\n","Thanks for reporting.\r\nI'm going to fix that and add a test case so that it doesn't happen again :) \r\nI'll let you know when it's done\r\n\r\nIn the meantime if you could make a google colab that reproduces the issue it would be helpful ! @timothyjlaurent ","Sure thing, @lhoestq.\r\n\r\nhttps:\/\/colab.research.google.com\/drive\/1lg4fbyrUO6m8ssQ2dNdVFaUqMUfA2zZ3?usp=sharing","Thanks @timothyjlaurent ! I just merged a fix on master. I also checked your notebook and it looks like it's working now.\r\nI added some tests to make sure it works as expected now :)","Great, @lhoestq . I'm trying to verify in the colab:\r\nI changed\r\n```\r\n!pip install datasets\r\n```\r\nto \r\n\r\n```\r\n!pip install git+https:\/\/github.com\/huggingface\/datasets@master\r\n```\r\n\r\nBut I'm still seeing the error - I wonder why?","It works on my side @timothyjlaurent on google colab.\r\nDid you try to uninstall datasets first, before updating it to master's version ?","I didn't -- it was a new session --- buuut - looks like it's working today -- woot! I'll close this issue. 
Thanks @lhoestq "],"created_at":1599863406000,"updated_at":1602174707000,"closed_at":1602174706000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.\r\n\r\n```python\r\n ...\r\n ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)\r\n ner_ds_dict[\"validation\"] = ner_ds_dict[\"test\"]\r\n rel_ds_dict = rel_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)\r\n rel_ds_dict[\"validation\"] = rel_ds_dict[\"test\"]\r\n return ner_ds_dict, rel_ds_dict\r\n```\r\n\r\nThe first train_test_split, `ner_ds`\/`ner_ds_dict`, returns a `train` and `test` split that are iterable.\r\nThe second, `rel_ds`\/`rel_ds_dict` in this case, returns a Dataset dict that has rows but if selected from or sliced into into returns an empty dictionary. eg `rel_ds_dict['train'][0] == {}` and `rel_ds_dict['train'][0:100] == {}`.\r\n\r\nOk I think I know the problem -- the rel_ds was mapped though a mapper with `num_proc=12`. If I remove `num_proc`. The dataset loads.\r\n\r\nI also see errors with other map and filter functions when `num_proc` is set.\r\n\r\n```\r\nDone writing 67 indices in 536 bytes .\r\nDone writing 67 indices in 536 bytes .\r\nFatal Python error: PyCOND_WAIT(gil_cond) failed\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/620\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/619","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/619\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/619\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/619\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/619","id":699733612,"node_id":"MDU6SXNzdWU2OTk3MzM2MTI=","number":619,"title":"Mistakes in MLQA features names","user":{"login":"M-Salti","id":9285264,"node_id":"MDQ6VXNlcjkyODUyNjQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9285264?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/M-Salti","html_url":"https:\/\/github.com\/M-Salti","followers_url":"https:\/\/api.github.com\/users\/M-Salti\/followers","following_url":"https:\/\/api.github.com\/users\/M-Salti\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/M-Salti\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/M-Salti\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/M-Salti\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/M-Salti\/orgs","repos_url":"https:\/\/api.github.com\/users\/M-Salti\/repos","events_url":"https:\/\/api.github.com\/users\/M-Salti\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/M-Salti\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Indeed you're right ! 
Thanks for reporting that\r\n\r\nCould you open a PR to fix the features names ?"],"created_at":1599857183000,"updated_at":1600239559000,"closed_at":1600239559000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I think the following features in MLQA shouldn't be named the way they are:\r\n1. `questions` (should be `question`)\r\n2. `ids` (should be `id`)\r\n3. `start` (should be `answer_start`)\r\n\r\nThe reasons I'm suggesting these features be renamed are:\r\n* To make them consistent with other QA datasets like SQuAD, XQuAD, TyDiQA etc. and hence make it easier to concatenate multiple QA datasets.\r\n* The features names are not the same as the ones provided in the original MLQA datasets (it uses the names I suggested).\r\n\r\nI know these columns can be renamed using using `Dataset.rename_column_`, `questions` and `ids` can be easily renamed but `start` on the other hand is annoying to rename since it's nested inside the feature `answers`.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/619\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/618","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/618\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/618\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/618\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/618","id":699684831,"node_id":"MDExOlB1bGxSZXF1ZXN0NDg1NDAxMzI5","number":618,"title":"sync logging utils with transformers","user":{"login":"stas00","id":10676103,"node_id":"MDQ6VXNlcjEwNjc2MTAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10676103?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stas00","html_url":"https:\/\/github.com\/stas00","followers_url":"https:\/\/api.github.com\/users\/stas00\/followers","following_url":"https:\/\/api.github.com\/users\/stas00\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stas00\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stas00\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stas00\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stas00\/orgs","repos_url":"https:\/\/api.github.com\/users\/stas00\/repos","events_url":"https:\/\/api.github.com\/users\/stas00\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stas00\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Also, some downloads and dataset processing can be quite long for large datasets like wikipedia\/pg19\/etc. We probably don't want to user to think that the library is hanging. Happy to reorganize logging between DEBUG\/INFO\/WARNING to make it less verbose by default though.","The problem is that `transformers` imports `datasets` and the latter starts logging on `import`: at least 3 info messages - apache beam\/torch\/tf available - so it injects noise whether one uses the library or not - i.e. 
no choice given to the user.\r\n\r\nWould you be open for me to changing this PR, to keep the initial level at INFO, but to keep the `DATASETS_VERBOSITY` env var it introduces, to let the user control the verbosity?\r\n\r\n","> Also, some downloads and dataset processing can be quite long for large datasets like wikipedia\/pg19\/etc. We probably don't want to user to think that the library is hanging.\r\n\r\nIf you're referring to tqdm progress reports, it's not affected by changing the logging levels. It's not using logging.","> The problem is that `transformers` imports `datasets` and the latter starts logging on `import`: at least 3 info messages - apache beam\/torch\/tf available - so it injects noise whether one uses the library or not - i.e. no choice given to the user.\r\n> \r\n> Would you be open for me to changing this PR, to keep the initial level at INFO, but to keep the `DATASETS_VERBOSITY` env var it introduces, to let the user control the verbosity?\r\n\r\nFor now we can do that, then I'll change some messages to warnings and set the default verbosity at warning as well at that point. Does it sound good to you ?\r\n\r\n> If you're referring to tqdm progress reports, it's not affected by changing the logging levels. It's not using logging.\r\n\r\nActually we configured some progress bars to be disabled depending on the logging level ^^'\r\n","> For now we can do that, then I'll change some messages to warnings and set the default verbosity at warning as well at that point. Does it sound good to you ?\r\n\r\nIf it is logical then by all means. \r\n\r\n> > If you're referring to tqdm progress reports, it's not affected by changing the logging levels. It's not using logging.\r\n> \r\n> Actually we configured some progress bars to be disabled depending on the logging level ^^'\r\n\r\nThis is very smart!\r\n\r\nI reverted s\/WARNINGS\/INFO\/.\r\n\r\nThank you!","Note that it\u2019s the same in `transformers` @stas00, tdqm are also controlled by the logging level there.","> Note that it\u2019s the same in `transformers` @stas00, tdqm are also controlled by the logging level there.\r\n\r\nThat's good to know, @thomwolf - thank you!\r\n\r\nI see that it's controlled in `trainer.py`, but in `examples` it's not - since that's where I usually see the progressbars (and they are great!). But I suppose they aren't API, so `examples` can behave differently.","BTW, this is what I'm talking about:\r\n```\r\npython -c \"import transformers\"\r\n2020-09-14 21:00:58.032658: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1\r\nPyTorch version 1.7.0.dev20200910 available.\r\nTensorFlow version 2.3.0 available.\r\nApache Beam available.\r\n```\r\nwhy does the user need to see this? Especially, if they aren't even using `datasets` directly?","Yes you are right, we should re-think the logging level of various elements.\r\nI also think that the `set_format` messages are confusing when they are the results of our internal operations (as mentioned [here](https:\/\/discuss.huggingface.co\/t\/pipeline-with-custom-dataset-tokenizer-when-to-save-load-manually\/1084\/7?u=thomwolf))","Actually I continued this PR in #635 to set the level to warning and update the logging level of some messages.\r\n\r\nLet me know if it sounds good to you","Closing this one sice #635 got merged","Awesome! 
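\r\n\r\n(For anyone landing here later, a minimal sketch of controlling the verbosity, assuming the `DATASETS_VERBOSITY` env var and the logging helpers discussed in these PRs; the env var is only read at import time:)\r\n\r\n```python\r\nimport os\r\n\r\n# Has to be set before `datasets` is imported for the env var to take effect\r\nos.environ[\"DATASETS_VERBOSITY\"] = \"warning\"\r\n\r\nimport datasets\r\n\r\n# Or change it programmatically afterwards (assumed helper mirroring transformers)\r\ndatasets.logging.set_verbosity_warning()\r\n```\r\n\r\n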
Thank you!\r\n\r\nAny ideas how to eliminate this remaining log line from tensorflow (I know it's not `datasets` related, but perhaps you know).\r\n```\r\npython -c \"import transformers\"\r\n2020-09-17 08:38:34.718410: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1\r\n```"],"created_at":1599853573000,"updated_at":1600357259000,"closed_at":1600336427000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/618","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/618","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/618.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/618.patch"},"body":"sync the docs\/code with the recent changes in transformers' `logging` utils:\r\n1. change the default level to `WARNING`\r\n2. add `DATASETS_VERBOSITY` env var\r\n3. expand docs","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/618\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/617","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/617\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/617\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/617\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/617","id":699472596,"node_id":"MDU6SXNzdWU2OTk0NzI1OTY=","number":617,"title":"Compare different Rouge implementations ","user":{"login":"ibeltagy","id":2287797,"node_id":"MDQ6VXNlcjIyODc3OTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2287797?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ibeltagy","html_url":"https:\/\/github.com\/ibeltagy","followers_url":"https:\/\/api.github.com\/users\/ibeltagy\/followers","following_url":"https:\/\/api.github.com\/users\/ibeltagy\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ibeltagy\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ibeltagy\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ibeltagy\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ibeltagy\/orgs","repos_url":"https:\/\/api.github.com\/users\/ibeltagy\/repos","events_url":"https:\/\/api.github.com\/users\/ibeltagy\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ibeltagy\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Updates - the differences between the following three\r\n(1) https:\/\/github.com\/bheinzerling\/pyrouge (previously popular. The one I trust the most)\r\n(2) https:\/\/github.com\/google-research\/google-research\/tree\/master\/rouge\r\n(3) https:\/\/github.com\/pltrdy\/files2rouge (used in fairseq)\r\ncan be explained by two things, stemming and handling multiple sentences.\r\n\r\nStemming: \r\n(1), (2): default is no stemming. (3): default is with stemming ==> No stemming is the correct default as you did [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/metrics\/rouge\/rouge.py#L84)\r\n\r\nMultiple sentences:\r\n(1) `rougeL` splits text using `\\n`\r\n(2) `rougeL` ignores `\\n`. 
\r\n(2) `rougeLsum` splits text using `\\n`\r\n(3) `rougeL` splits text using `.`\r\n\r\nFor (2), `rougeL` and `rougeLsum` are identical if the sequence doesn't contain `\\n`. With `\\n`, it is `rougeLsum` that matches (1) not `rougeL`. \r\n\r\nOverall, and as far as I understand, for your implementation here https:\/\/github.com\/huggingface\/datasets\/blob\/master\/metrics\/rouge\/rouge.py#L65 to match the default, you only need to change `rougeL` [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/metrics\/rouge\/rouge.py#L86) to `rougeLsum` to correctly compute metrics for text with newlines.\r\n\r\nTagging @sshleifer who might be interested.","Thanks for the clarification !\r\nWe're adding Rouge Lsum in #701 ","This is a real issue, sorry for missing the mention @ibeltagy\r\n\r\nWe implemented a more involved [solution](https:\/\/github.com\/huggingface\/transformers\/blob\/99cb924bfb6c4092bed9232bea3c242e27c6911f\/examples\/seq2seq\/utils.py#L481) that enforces that sentences are split with `\\n` so that rougeLsum scores match papers even if models don't generate newlines. \r\n\r\nUnfortunately, the best\/laziest way I found to do this introduced an `nltk` dependency (For sentence splitting, all sentences don't end in `.`!!!), but this might be avoidable with some effort.\r\n\r\n#### Sidebar: Wouldn't Deterministic Be Better?\r\n\r\n`rouge_scorer.scoring.BootstrapAggregator` is well named but is not deterministic which I would like to change for my mental health, unless there is some really good reason to sample 500 observations before computing f-scores.\r\n\r\nI have a fix on a branch, but I wanted to get some context before introducting a 4th way to compute rouge. Scores are generally within .03 Rouge2 of boostrap after multiplying by 100, e.g 22.05 vs 22.08 Rouge2.\r\n\r\n","> This is a real issue, sorry for missing the mention @ibeltagy\r\n> \r\n> We implemented a more involved [solution](https:\/\/github.com\/huggingface\/transformers\/blob\/99cb924bfb6c4092bed9232bea3c242e27c6911f\/examples\/seq2seq\/utils.py#L481) that enforces that sentences are split with `\\n` so that rougeLsum scores match papers even if models don't generate newlines.\r\n> \r\n> Unfortunately, the best\/laziest way I found to do this introduced an `nltk` dependency (For sentence splitting, all sentences don't end in `.`!!!), but this might be avoidable with some effort.\r\n\r\nThanks for the details, I didn't know about that. Maybe we should consider adding this processing step or at least mention it somewhere in the library or the documentation\r\n\r\n> #### Sidebar: Wouldn't Deterministic Be Better?\r\n> `rouge_scorer.scoring.BootstrapAggregator` is well named but is not deterministic which I would like to change for my mental health, unless there is some really good reason to sample 500 observations before computing f-scores.\r\n> \r\n> I have a fix on a branch, but I wanted to get some context before introducting a 4th way to compute rouge. Scores are generally within .03 Rouge2 of boostrap after multiplying by 100, e.g 22.05 vs 22.08 Rouge2.\r\n\r\nI think the default `n_samples` of the aggregator is 1000. We could increase it or at least allow users to change it if they want more precise results.","Hi, thanks for the solution. 
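\r\n\r\n(For anyone following along, a minimal sketch of the sentence-splitting idea described above, assuming the `rouge_score` and `nltk` packages; `rougeLsum` treats \"\\n\" as a sentence boundary, so inserting newlines is what makes it match the multi-sentence scores:)\r\n\r\n```python\r\nimport nltk\r\nfrom rouge_score import rouge_scorer\r\n\r\nnltk.download(\"punkt\", quiet=True)\r\n\r\ndef split_sentences(text):\r\n    # Insert explicit newlines so rougeLsum sees sentence boundaries\r\n    return \"\\n\".join(nltk.sent_tokenize(text))\r\n\r\nreference = \"The cat sat on the mat. It was tired.\"\r\nprediction = \"A cat was sitting on the mat. The cat was tired.\"\r\n\r\nscorer = rouge_scorer.RougeScorer([\"rougeL\", \"rougeLsum\"], use_stemmer=False)\r\nprint(scorer.score(split_sentences(reference), split_sentences(prediction)))\r\n```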
\r\n\r\nI am not sure if this is a bug, but on line [510](https:\/\/github.com\/huggingface\/transformers\/blob\/99cb924bfb6c4092bed9232bea3c242e27c6911f\/examples\/seq2seq\/utils.py#L510), are pred, tgt supposed to be swapped?","This looks like a bug in an old version of the examples in `transformers`"],"created_at":1599839372000,"updated_at":1617211713000,"closed_at":1601632338000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I used RougeL implementation provided in `datasets` [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/metrics\/rouge\/rouge.py) and it gives numbers that match those reported in the pegasus paper but very different from those reported in other papers, [this](https:\/\/arxiv.org\/pdf\/1909.03186.pdf) for example.\r\nCan you make sure the google-research implementation you are using matches the official perl implementation? \r\nThere are a couple of python wrappers around the perl implementation, [this](https:\/\/pypi.org\/project\/pyrouge\/) has been commonly used, and [this](https:\/\/github.com\/pltrdy\/files2rouge) is used in fairseq). \r\nThere's also a python reimplementation [here](https:\/\/github.com\/pltrdy\/rouge) but its RougeL numbers are way off. \r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/617\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/616","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/616\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/616\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/616\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/616","id":699462293,"node_id":"MDU6SXNzdWU2OTk0NjIyOTM=","number":616,"title":"UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors","user":{"login":"BramVanroy","id":2779410,"node_id":"MDQ6VXNlcjI3Nzk0MTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2779410?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/BramVanroy","html_url":"https:\/\/github.com\/BramVanroy","followers_url":"https:\/\/api.github.com\/users\/BramVanroy\/followers","following_url":"https:\/\/api.github.com\/users\/BramVanroy\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/BramVanroy\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/BramVanroy\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/BramVanroy\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/BramVanroy\/orgs","repos_url":"https:\/\/api.github.com\/users\/BramVanroy\/repos","events_url":"https:\/\/api.github.com\/users\/BramVanroy\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/BramVanroy\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I have the same issue","Same issue here when Trying to load a dataset from disk.","I am also experiencing this issue, and don't know if it's affecting my training.","Same here. 
I hope the dataset is not being modified in-place.","I think the only way to avoid this warning would be to do a copy of the numpy array before providing it.\r\n\r\nThis would slow down a bit the iteration over the dataset but maybe it would be safer. We could disable the copy with a flag on the `set_format` command.\r\n\r\nIn most typical cases of training a NLP model, PyTorch shouldn't modify the input so it's ok to have a non-writable array but I can understand the warning is a bit scary so maybe we could choose the side of non-warning\/slower by default and have an option to speedup.\r\n\r\nWhat do you think @lhoestq ? ","@thomwolf Would it be possible to have the array look writeable, but raise an error if it is actually written to?\r\n\r\nI would like to keep my code free of warning, but I also wouldn't like to slow down the program because of unnecessary copy operations. ","@AndreasMadsen probably not I would guess (no free lunch hahah)","@thomwolf Why not? Writable is checked with `arr.flags.writeable`, and writing is done via magic methods.","Well because I don't know the internal of numpy as well as you I guess hahahah, do you want to try to open a PR proposing a solution?","@thomwolf @AndreasMadsen I think this is a terrible idea, n\/o, and I am very much against it. Modifying internals of an array in such a hacky way is bound to run into other (user) issues down the line. To users it would not be clear at all what is going on e.g. when they check for write access (which will return True) but then they get a warning that the array is not writeable. That's extremely confusing. \r\n\r\nIf your only goal is to get rid of warnings in your code, then you can just use a [simplefilter](https:\/\/docs.python.org\/3.8\/library\/warnings.html#temporarily-suppressing-warnings) for UserWarnings in your own code. Changing the code-base in such an intuitive way does not seem like a good way to go and sets a bad precedent, imo. \r\n\r\n(Feel free to disagree, of course.)\r\n\r\nIMO a warning can stay (as they can be filtered by users anyway), but it can be clarified why the warning takes place.","> To users it would not be clear at all what is going on e.g. when they check for write access (which will return True) but then they get a warning that the array is not writeable. That's extremely confusing.\r\n\r\nConfusion can be resolved with a helpful error message. In this case, that error message can be controlled by huggingface\/datasets. The right argument here is that if code depends on `.flags.writable` being truthful (not just for warnings), then it will cause unavoidable errors. Although, I can't imagine such a use-case.\r\n\r\n> If your only goal is to get rid of warnings in your code, then you can just use a simplefilter for UserWarnings in your own code. Changing the code-base in such an intuitive way does not seem like a good way to go and sets a bad precedent, imo.\r\n\r\nI don't want to ignore all `UserWarnings`, nor all warnings regarding non-writable arrays. Ignoring warnings leads to hard to debug issues.\r\n\r\n> IMO a warning can stay (as they can be filtered by users anyway), but it can be clarified why the warning takes place.\r\n\r\nPlain use cases should really not generate warnings. 
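For readers who want to silence only this specific message rather than every `UserWarning` (the concern raised just above), a narrowly scoped filter is one option. This is a sketch using the standard-library `warnings` module and is not something proposed in the thread; the message pattern is copied from the warning text quoted later in this issue:

```python
# Sketch: ignore only the non-writeable-NumPy-array warning, leaving all other
# UserWarnings visible. The `message` argument is a regex matched against the
# start of the warning text.
import warnings

warnings.filterwarnings(
    "ignore",
    message=r"The given NumPy array is not writeable",
    category=UserWarning,
)
```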
It teaches developers to ignore warnings which is a terrible practice.\r\n\r\n---\r\n\r\nThe best solution would be to allow non-writable arrays in `DataLoader`, but that is a PyTorch issue.","> The right argument here is that if code depends on `.flags.writable` being truthful (not just for warnings), then it will cause unavoidable errors. Although, I can't imagine such a use-case.\r\n\r\nThat's exactly the argument in my first sentence. Too often someone \"cannot think of a use-case\", but you can not foresee the use-cases of a whole research community.\r\n \r\n> I don't want to ignore all `UserWarnings`, nor all warnings regarding non-writable arrays. Ignoring warnings leads to hard to debug issues.\r\n\r\nThat's fair.\r\n\r\n> Plain use cases should really not generate warnings. It teaches developers to ignore warnings which is a terrible practice.\r\n\r\nBut this is not a plain use-case (because Pytorch does not support these read-only tensors). Manually setting the flag to writable will solve the issue on the surface but is basically just a hack to compensate for something that is not allowed in another library. \r\n\r\nWhat about an \"ignore_warnings\" flag in `set_format` that when True wraps the offending code in a block to ignore userwarnings at that specific step in [_convert_outputs](https:\/\/github.com\/huggingface\/datasets\/blob\/880c2c76a8223a00c303eab2909371e857113063\/src\/datasets\/arrow_dataset.py#L821)? Something like:\r\n\r\n```python\r\ndef _convert_outputs(..., ignore_warnings=True):\r\n ...\r\n with warnings.catch_warnings():\r\n if ignore_warnings:\r\n warnings.simplefilter(\"ignore\", UserWarning)\r\n return torch.tensor(...)\r\n# continues without warning filter after context manager...\r\n```","> But this is not a plain use-case (because Pytorch does not support these read-only tensors).\r\n\r\nBy \"plain\", I mean the recommended way to use `datasets` with PyTorch according to the `datasets` documentation.","This error is what I see when I run the first lines of the Pytorch Quickstart. It should also say that it should be ignored and\/or how to fix it. BTW, this is a Pytorch error message -- not a Huggingface error message. My code runs anyway."],"created_at":1599838756000,"updated_at":1626988341000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be removed. When I eventually try to set the format for the columns with `set_format` I am getting this strange Userwarning without a stack trace:\r\n\r\n> Set __getitem__(key) output type to torch for ['input_ids', 'sembedding'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\n> C:\\Users\\bramv\\.virtualenvs\\dutch-simplification-nbNdqK9u\\lib\\site-packages\\datasets\\arrow_dataset.py:835: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. 
(Triggered internally at ..\\torch\\csrc\\utils\\tensor_numpy.cpp:141.)\r\n> return torch.tensor(x, **format_kwargs)\r\n\r\nThe first one might not be related to the warning, but it is odd that it is shown, too. It is unclear whether that is something that I should do or something that that the program is doing at that moment.\r\n\r\nSnippet:\r\n```\r\n dataset = Dataset.from_dict(torch.load(\"data\/dummy.pt.pt\"))\r\n print(dataset)\r\n tokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")\r\n keys_to_retain = {\"input_ids\", \"sembedding\"}\r\n dataset = dataset.map(lambda example: tokenizer(example[\"text\"], padding='max_length'), batched=True)\r\n dataset.remove_columns_(set(dataset.column_names) - keys_to_retain)\r\n\r\n dataset.set_format(type=\"torch\", columns=[\"input_ids\", \"sembedding\"])\r\n dataloader = torch.utils.data.DataLoader(dataset, batch_size=2)\r\n\r\n print(next(iter(dataloader)))\r\n```\r\n\r\nPS: the input type for `remove_columns_` should probably be an Iterable rather than just a List.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/616\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/615","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/615\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/615\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/615\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/615","id":699410773,"node_id":"MDU6SXNzdWU2OTk0MTA3NzM=","number":615,"title":"Offset overflow when slicing a big dataset with an array of indices in Pyarrow >= 1.0.0","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Related: https:\/\/issues.apache.org\/jira\/browse\/ARROW-9773\r\n\r\nIt's definitely a size thing. I took a smaller dataset with 87000 rows and did:\r\n```\r\nfor i in range(10,1000,20):\r\n table = pa.concat_tables([dset._data]*i)\r\n table.take([0])\r\n```\r\nand it broke at around i=300.\r\n\r\nAlso when `_indices` is not None, this breaks indexing by slice. E.g. `dset.shuffle()[:1]` breaks.\r\n\r\nLuckily so far I haven't seen `_indices.column(0).take` break, which means it doesn't break `select` or anything like that which is where the speed really matters, it's just `_getitem`. 
So I'm currently working around it by just doing the arrow v0 method in `_getitem`:\r\n```\r\n#if PYARROW_V0:\r\ndata_subset = pa.concat_tables(\r\n self._data.slice(indices_array[i].as_py(), 1) for i in range(len(indices_array))\r\n)\r\n#else:\r\n #data_subset = self._data.take(indices_array)\r\n```","Let me know if you meet other offset overflow issues @joeddav "],"created_at":1599835838000,"updated_at":1600534060000,"closed_at":1600533991000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"How to reproduce:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nwiki = load_dataset(\"wikipedia\", \"20200501.en\", split=\"train\")\r\nwiki[[0]]\r\n\r\n---------------------------------------------------------------------------\r\nArrowInvalid Traceback (most recent call last)\r\n<ipython-input-13-381aedc9811b> in <module>\r\n----> 1 wikipedia[[0]]\r\n\r\n~\/Desktop\/hf\/nlp\/src\/datasets\/arrow_dataset.py in __getitem__(self, key)\r\n 1069 format_columns=self._format_columns,\r\n 1070 output_all_columns=self._output_all_columns,\r\n-> 1071 format_kwargs=self._format_kwargs,\r\n 1072 )\r\n 1073 \r\n\r\n~\/Desktop\/hf\/nlp\/src\/datasets\/arrow_dataset.py in _getitem(self, key, format_type, format_columns, output_all_columns, format_kwargs)\r\n 1037 )\r\n 1038 else:\r\n-> 1039 data_subset = self._data.take(indices_array)\r\n 1040 \r\n 1041 if format_type is not None:\r\n\r\n~\/.virtualenvs\/hf-datasets\/lib\/python3.7\/site-packages\/pyarrow\/table.pxi in pyarrow.lib.Table.take()\r\n\r\n~\/.virtualenvs\/hf-datasets\/lib\/python3.7\/site-packages\/pyarrow\/compute.py in take(data, indices, boundscheck)\r\n 266 \"\"\"\r\n 267 options = TakeOptions(boundscheck)\r\n--> 268 return call_function('take', [data, indices], options)\r\n 269 \r\n 270 \r\n\r\n~\/.virtualenvs\/hf-datasets\/lib\/python3.7\/site-packages\/pyarrow\/_compute.pyx in pyarrow._compute.call_function()\r\n\r\n~\/.virtualenvs\/hf-datasets\/lib\/python3.7\/site-packages\/pyarrow\/_compute.pyx in pyarrow._compute.Function.call()\r\n\r\n~\/.virtualenvs\/hf-datasets\/lib\/python3.7\/site-packages\/pyarrow\/error.pxi in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\n~\/.virtualenvs\/hf-datasets\/lib\/python3.7\/site-packages\/pyarrow\/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowInvalid: offset overflow while concatenating arrays\r\n```\r\n\r\nIt seems to work fine with small datasets or with pyarrow 0.17.1","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/615\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/614","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/614\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/614\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/614\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/614","id":699177110,"node_id":"MDExOlB1bGxSZXF1ZXN0NDg0OTQ2MzA1","number":614,"title":"[doc] Update 
deploy.sh","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1599822373000,"updated_at":1600073359000,"closed_at":1600073357000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/614","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/614","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/614.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/614.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/614\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/613","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/613\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/613\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/613\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/613","id":699117070,"node_id":"MDExOlB1bGxSZXF1ZXN0NDg0ODkyMTUx","number":613,"title":"Add CoNLL-2003 shared task dataset","user":{"login":"vblagoje","id":458335,"node_id":"MDQ6VXNlcjQ1ODMzNQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/458335?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vblagoje","html_url":"https:\/\/github.com\/vblagoje","followers_url":"https:\/\/api.github.com\/users\/vblagoje\/followers","following_url":"https:\/\/api.github.com\/users\/vblagoje\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vblagoje\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vblagoje\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vblagoje\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vblagoje\/orgs","repos_url":"https:\/\/api.github.com\/users\/vblagoje\/repos","events_url":"https:\/\/api.github.com\/users\/vblagoje\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vblagoje\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I think we should somewhere mention, that is the dataset in IOB2 tagging scheme, whereas the original dataset uses IOB1 :)","Indeed this is something we want 
to mention.\r\n\r\nIf would want to add more details about the IOB1->2 change, feel free to ignore my suggestions and edit the description + update the dataset_info","@lhoestq do you want me to update it or you'll update it. I am ok either way","The best would be to mention this change in the description and then update the dataset_info.json file.\r\nCould you do that if you don't mind ?\r\n\r\nThen it should be ready to merge :)\r\n\r\nThanks again for adding this dataset !","No problem @lhoestq I'll do the update","@lhoestq please check if 847addf is exactly what we want","Is the German task also part of this? If not, can it be accessed via the Datasets library?"],"created_at":1599818550000,"updated_at":1601894585000,"closed_at":1600338998000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/613","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/613","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/613.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/613.patch"},"body":"Please consider adding CoNLL-2003 shared task dataset as it's beneficial for token classification tasks. The motivation behind this PR is the [PR](https:\/\/github.com\/huggingface\/transformers\/pull\/7041) in the transformers project. This dataset would be not only useful for the usual run-of-the-mill NER tasks but also for syntactic chunking and part-of-speech (POS) tagging. ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/613\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/612","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/612\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/612\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/612\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/612","id":699008644,"node_id":"MDExOlB1bGxSZXF1ZXN0NDg0Nzk2Mjg5","number":612,"title":"add multi-proc to dataset 
dict","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1599812293000,"updated_at":1599819613000,"closed_at":1599819611000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/612","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/612","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/612.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/612.patch"},"body":"Add multi-proc to `DatasetDict`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/612\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/611","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/611\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/611\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/611\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/611","id":698863988,"node_id":"MDU6SXNzdWU2OTg4NjM5ODg=","number":611,"title":"ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648","user":{"login":"sangyx","id":32364921,"node_id":"MDQ6VXNlcjMyMzY0OTIx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32364921?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sangyx","html_url":"https:\/\/github.com\/sangyx","followers_url":"https:\/\/api.github.com\/users\/sangyx\/followers","following_url":"https:\/\/api.github.com\/users\/sangyx\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sangyx\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sangyx\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sangyx\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sangyx\/orgs","repos_url":"https:\/\/api.github.com\/users\/sangyx\/repos","events_url":"https:\/\/api.github.com\/users\/sangyx\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sangyx\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Can you give us stats\/information on your pandas DataFrame?","```\r\n<class 
'pandas.core.frame.DataFrame'>\r\nInt64Index: 17136104 entries, 0 to 17136103\r\nData columns (total 6 columns):\r\n # Column Dtype \r\n--- ------ ----- \r\n 0 item_id int64 \r\n 1 item_titl object \r\n 2 start_price float64\r\n 3 shipping_fee float64\r\n 4 picture_url object \r\n 5 embeddings object \r\ndtypes: float64(2), int64(1), object(3)\r\nmemory usage: 915.2+ MB\r\n```","Thanks and some more on the `embeddings` and `picture_url` would be nice as well (type and max lengths of the elements)","`embedding` is `np.array` of shape `(128,)`. `picture_url` is url, such as 'https:\/\/i.ebayimg.com\/00\/s\/MTE5OVgxNjAw\/z\/ZOsAAOSwAG9fHQq5\/$_12.JPG?set_id=880000500F;https:\/\/i.ebayimg.com\/00\/s\/MTE5OVgxNjAw\/z\/OSgAAOSwokBfHQq8\/$_12.JPG?set_id=880000500F'","It looks like a Pyarrow limitation.\r\nI was able to reproduce the error with \r\n\r\n```python\r\nimport pandas as pd\r\nimport numpy as np\r\nimport pyarrow as pa\r\n\r\n n = 1713614\r\ndf = pd.DataFrame.from_dict({\"a\": list(np.zeros((n, 128))), \"b\": range(n)})\r\npa.Table.from_pandas(df)\r\n```\r\n\r\nI also tried with 50% of the dataframe and it actually works.\r\nI created an issue on Apache Arrow's JIRA [here](https:\/\/issues.apache.org\/jira\/browse\/ARROW-9976)\r\n\r\nOne way to fix that would be to chunk the dataframe and concatenate arrow tables.","It looks like it's going to be fixed in pyarrow 2.0.0 :)\r\n\r\nIn the meantime I suggest to chunk big dataframes to create several small datasets, and then concatenate them using [concatenate_datasets](https:\/\/huggingface.co\/docs\/datasets\/package_reference\/main_classes.html?highlight=concatenate#datasets.concatenate_datasets)"],"created_at":1599802152000,"updated_at":1601046895000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi, I'm trying to load a dataset from Dataframe, but I get the error:\r\n```bash\r\n---------------------------------------------------------------------------\r\nArrowCapacityError Traceback (most recent call last)\r\n<ipython-input-7-146b6b495963> in <module>\r\n----> 1 dataset = Dataset.from_pandas(emb)\r\n\r\n~\/miniconda3\/envs\/dev\/lib\/python3.7\/site-packages\/nlp\/arrow_dataset.py in from_pandas(cls, df, features, info, split)\r\n 223 info.features = features\r\n 224 pa_table: pa.Table = pa.Table.from_pandas(\r\n--> 225 df=df, schema=pa.schema(features.type) if features is not None else None\r\n 226 )\r\n 227 return cls(pa_table, info=info, split=split)\r\n\r\n~\/miniconda3\/envs\/dev\/lib\/python3.7\/site-packages\/pyarrow\/table.pxi in pyarrow.lib.Table.from_pandas()\r\n\r\n~\/miniconda3\/envs\/dev\/lib\/python3.7\/site-packages\/pyarrow\/pandas_compat.py in dataframe_to_arrays(df, schema, preserve_index, nthreads, columns, safe)\r\n 591 for i, maybe_fut in enumerate(arrays):\r\n 592 if isinstance(maybe_fut, futures.Future):\r\n--> 593 arrays[i] = maybe_fut.result()\r\n 594 \r\n 595 types = [x.type for x in arrays]\r\n\r\n~\/miniconda3\/envs\/dev\/lib\/python3.7\/concurrent\/futures\/_base.py in result(self, timeout)\r\n 426 raise CancelledError()\r\n 427 elif self._state == FINISHED:\r\n--> 428 return self.__get_result()\r\n 429 \r\n 430 self._condition.wait(timeout)\r\n\r\n~\/miniconda3\/envs\/dev\/lib\/python3.7\/concurrent\/futures\/_base.py in __get_result(self)\r\n 382 def __get_result(self):\r\n 383 if self._exception:\r\n--> 384 raise self._exception\r\n 385 else:\r\n 386 return self._result\r\n\r\n~\/miniconda3\/envs\/dev\/lib\/python3.7\/concurrent\/futures\/thread.py in 
run(self)\r\n 55 \r\n 56 try:\r\n---> 57 result = self.fn(*self.args, **self.kwargs)\r\n 58 except BaseException as exc:\r\n 59 self.future.set_exception(exc)\r\n\r\n~\/miniconda3\/envs\/dev\/lib\/python3.7\/site-packages\/pyarrow\/pandas_compat.py in convert_column(col, field)\r\n 557 \r\n 558 try:\r\n--> 559 result = pa.array(col, type=type_, from_pandas=True, safe=safe)\r\n 560 except (pa.ArrowInvalid,\r\n 561 pa.ArrowNotImplementedError,\r\n\r\n~\/miniconda3\/envs\/dev\/lib\/python3.7\/site-packages\/pyarrow\/array.pxi in pyarrow.lib.array()\r\n\r\n~\/miniconda3\/envs\/dev\/lib\/python3.7\/site-packages\/pyarrow\/array.pxi in pyarrow.lib._ndarray_to_array()\r\n\r\n~\/miniconda3\/envs\/dev\/lib\/python3.7\/site-packages\/pyarrow\/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648\r\n```\r\nMy code is :\r\n```python\r\nfrom nlp import Dataset\r\ndataset = Dataset.from_pandas(emb)\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/611\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/610","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/610\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/610\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/610\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/610","id":698349388,"node_id":"MDU6SXNzdWU2OTgzNDkzODg=","number":610,"title":"Load text file for RoBERTa pre-training. ","user":{"login":"chiyuzhang94","id":33407613,"node_id":"MDQ6VXNlcjMzNDA3NjEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33407613?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/chiyuzhang94","html_url":"https:\/\/github.com\/chiyuzhang94","followers_url":"https:\/\/api.github.com\/users\/chiyuzhang94\/followers","following_url":"https:\/\/api.github.com\/users\/chiyuzhang94\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/chiyuzhang94\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/chiyuzhang94\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/chiyuzhang94\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/chiyuzhang94\/orgs","repos_url":"https:\/\/api.github.com\/users\/chiyuzhang94\/repos","events_url":"https:\/\/api.github.com\/users\/chiyuzhang94\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/chiyuzhang94\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Could you try\r\n```python\r\nload_dataset('text', data_files='test.txt',cache_dir=\".\/\", split=\"train\")\r\n```\r\n?\r\n\r\n`load_dataset` returns a dictionary by default, like {\"train\": your_dataset}","Hi @lhoestq\r\nThanks for your suggestion.\r\n\r\nI tried \r\n```\r\ndataset = load_dataset('text', data_files='test.txt',cache_dir=\".\/\", split=\"train\")\r\nprint(dataset)\r\ndataset.set_format(type='torch',columns=[\"text\"])\r\ndataloader = torch.utils.data.DataLoader(dataset, batch_size=8)\r\nnext(iter(dataloader))\r\n```\r\n\r\nBut it still doesn't work and got 
error:\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-7-388aca337e2f> in <module>\r\n----> 1 next(iter(dataloader))\r\n\r\n\/Library\/Python\/3.7\/site-packages\/torch\/utils\/data\/dataloader.py in __next__(self)\r\n 361 \r\n 362 def __next__(self):\r\n--> 363 data = self._next_data()\r\n 364 self._num_yielded += 1\r\n 365 if self._dataset_kind == _DatasetKind.Iterable and \\\r\n\r\n\/Library\/Python\/3.7\/site-packages\/torch\/utils\/data\/dataloader.py in _next_data(self)\r\n 401 def _next_data(self):\r\n 402 index = self._next_index() # may raise StopIteration\r\n--> 403 data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n 404 if self._pin_memory:\r\n 405 data = _utils.pin_memory.pin_memory(data)\r\n\r\n\/Library\/Python\/3.7\/site-packages\/torch\/utils\/data\/_utils\/fetch.py in fetch(self, possibly_batched_index)\r\n 42 def fetch(self, possibly_batched_index):\r\n 43 if self.auto_collation:\r\n---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]\r\n 45 else:\r\n 46 data = self.dataset[possibly_batched_index]\r\n\r\n\/Library\/Python\/3.7\/site-packages\/torch\/utils\/data\/_utils\/fetch.py in <listcomp>(.0)\r\n 42 def fetch(self, possibly_batched_index):\r\n 43 if self.auto_collation:\r\n---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]\r\n 45 else:\r\n 46 data = self.dataset[possibly_batched_index]\r\n\r\n\/Library\/Python\/3.7\/site-packages\/datasets-0.4.0-py3.7.egg\/datasets\/arrow_dataset.py in __getitem__(self, key)\r\n 1069 format_columns=self._format_columns,\r\n 1070 output_all_columns=self._output_all_columns,\r\n-> 1071 format_kwargs=self._format_kwargs,\r\n 1072 )\r\n 1073 \r\n\r\n\/Library\/Python\/3.7\/site-packages\/datasets-0.4.0-py3.7.egg\/datasets\/arrow_dataset.py in _getitem(self, key, format_type, format_columns, output_all_columns, format_kwargs)\r\n 1056 format_columns=format_columns,\r\n 1057 output_all_columns=output_all_columns,\r\n-> 1058 format_kwargs=format_kwargs,\r\n 1059 )\r\n 1060 return outputs\r\n\r\n\/Library\/Python\/3.7\/site-packages\/datasets-0.4.0-py3.7.egg\/datasets\/arrow_dataset.py in _convert_outputs(self, outputs, format_type, format_columns, output_all_columns, format_kwargs)\r\n 872 continue\r\n 873 if format_columns is None or k in format_columns:\r\n--> 874 v = map_nested(command, v, **map_nested_kwargs)\r\n 875 output_dict[k] = v\r\n 876 return output_dict\r\n\r\n\/Library\/Python\/3.7\/site-packages\/datasets-0.4.0-py3.7.egg\/datasets\/utils\/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types)\r\n 214 # Singleton\r\n 215 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\r\n--> 216 return function(data_struct)\r\n 217 \r\n 218 disable_tqdm = bool(logger.getEffectiveLevel() > INFO)\r\n\r\n\/Library\/Python\/3.7\/site-packages\/datasets-0.4.0-py3.7.egg\/datasets\/arrow_dataset.py in command(x)\r\n 833 if x.dtype == np.object: # pytorch tensors cannot be instantied from an array of objects\r\n 834 return [map_nested(command, i, **map_nested_kwargs) for i in x]\r\n--> 835 return torch.tensor(x, **format_kwargs)\r\n 836 \r\n 837 elif format_type == \"tensorflow\":\r\n\r\nTypeError: new(): invalid data type 'str'\r\n```\r\n\r\nI found type can be ['numpy', 'torch', 'tensorflow', 'pandas'] only, how can I deal with the string type?","You need to tokenize the string inputs to convert them in integers 
before you can feed them to a pytorch dataloader.\r\n\r\nYou can read the quicktour of the datasets or the transformers libraries to know more about that:\r\n- transformers: https:\/\/huggingface.co\/transformers\/quicktour.html\r\n- dataset: https:\/\/huggingface.co\/docs\/datasets\/quicktour.html","Hey @chiyuzhang94, I was also having trouble in loading a large text file (11GB).\r\nBut finally got it working. This is what I did after looking into the documentation.\r\n\r\n1. split the whole dataset file into smaller files\r\n```bash\r\nmkdir .\/shards\r\nsplit -a 4 -l 256000 -d full_raw_corpus.txt .\/shards\/shard_\r\n````\r\n2. Pass paths of small data files to `load_dataset`\r\n```python\r\nfiles = glob.glob('shards\/*')\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('text', data_files=files, split='train')\r\n```\r\n(On passing the whole dataset file (11GB) directly to `load_dataset` was resulting into RAM issue)\r\n\r\n3. Tokenization\r\n```python\r\ndef encode(examples):\r\n return tokenizer(examples['text'], truncation=True, padding='max_length')\r\ndataset = dataset.map(encode, batched=True)\r\ndataset.set_format(type='torch', columns=['input_ids', 'attention_mask'])\r\n```\r\n Now you can pass `dataset` to `Trainer` or `pytorch DataLoader`\r\n```python\r\ndataloader = torch.utils.data.DataLoader(dataset, batch_size=4)\r\nnext(iter(dataloader))\r\n```\r\nHope this helps\r\n","Thanks, @thomwolf and @sipah00 ,\r\n\r\nI tried to implement your suggestions in my scripts. \r\nNow, I am facing some connection time-out error. I am using my local file, I have no idea why the module request s3 database.\r\n\r\nThe log is:\r\n```\r\nTraceback (most recent call last):\r\n File \"\/home\/.local\/lib\/python3.6\/site-packages\/requests\/adapters.py\", line 449, in send\r\n raise err\r\n File \"\/home\/.local\/lib\/python3.6\/site-packages\/urllib3\/util\/connection.py\", line 74, in create_connection\r\n timeout=timeout\r\n File \"\/home\/.local\/lib\/python3.6\/site-packages\/urllib3\/connectionpool.py\", line 720, in urlopen\r\n sock.connect(sa)\r\nTimeoutError: [Errno 110] Connection timed out\r\n\r\nTraceback (most recent call last):\r\n File \"\/home\/.local\/lib\/python3.6\/site-packages\/urllib3\/connectionpool.py\", line 672, in urlopen\r\n method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]\r\n File \"\/home\/.local\/lib\/python3.6\/site-packages\/urllib3\/util\/retry.py\", line 436, in increment\r\n chunked=chunked,\r\n File \"\/home\/.local\/lib\/python3.6\/site-packages\/urllib3\/connectionpool.py\", line 376, in _make_request\r\n raise MaxRetryError(_pool, url, error or ResponseError(cause))\r\nurllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: \/datasets.huggingface.co\/datasets\/datasets\/text\/text.py (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection obj\r\nect at 0x7fff401e0e48>: Failed to establish a new connection: [Errno 110] Connection timed out',))\r\n\r\nTraceback (most recent call last):\r\n File \"\/scratch\/roberta_emohash\/run_language_modeling.py\", line 1019, in <module>\r\n main()\r\n File \"\/scratch\/roberta_emohash\/run_language_modeling.py\", line 962, in main\r\n train_dataset = load_and_cache_examples(args, tokenizer, evaluate=False)\r\n File \"\/scratch\/roberta_emohash\/run_language_modeling.py\", line 177, in load_and_cache_examples\r\n return HG_Datasets(tokenizer, file_path, args)\r\n File 
\"\/scratch\/roberta_emohash\/run_language_modeling.py\", line 117, in HG_Datasets\r\n dataset = load_dataset('text', data_files=files, cache_dir = args.data_cache_dir, split=\"train\")\r\n File \"\/arc\/project\/evn_py36\/datasets\/datasets\/src\/datasets\/load.py\", line 590, in load_dataset\r\n self._validate_conn(conn)\r\n File \"\/home\/.local\/lib\/python3.6\/site-packages\/urllib3\/connectionpool.py\", line 994, in _validate_conn\r\n conn.connect()\r\n File \"\/home\/.local\/lib\/python3.6\/site-packages\/urllib3\/connection.py\", line 300, in connect\r\n conn = self._new_conn()\r\n File \"\/home\/.local\/lib\/python3.6\/site-packages\/urllib3\/connection.py\", line 169, in _new_conn\r\n self, \"Failed to establish a new connection: %s\" % e\r\nurllib3.exceptions.NewConnectionError: <urllib3.connection.VerifiedHTTPSConnection object at 0x7fff401e0da0>: Failed to establish a new connection: [Errno 110] Connection timed out\r\n\r\n``` \r\n\r\nDo you have any experience on this issue?","No, I didn't encounter this problem, it seems to me a network problem","I noticed this is because I use a cloud server where does not provide for connections from our standard compute nodes to outside resources. \r\n\r\nFor the `datasets` package, it seems that if the loading script is not already cached in the library it will attempt to connect to an AWS resource to download the dataset loading script. \r\n\r\nI am wondering why the package works in this way. Do you have any suggestions to solve this issue? ","I solved the above issue by downloading text.py manually and passing the path to the `load_dataset` function. \r\n\r\nNow, I have a new issue with the Read-only file system.\r\n\r\nThe error is: \r\n```\r\nI0916 22:14:38.453380 140737353971520 filelock.py:274] Lock 140734268996072 acquired on \/scratch\/chiyuzh\/roberta\/text.py.lock\r\nFound main folder for dataset \/scratch\/chiyuzh\/roberta\/text.py at \/home\/chiyuzh\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/text\r\nCreating specific version folder for dataset \/scratch\/chiyuzh\/roberta\/text.py at \/home\/chiyuzh\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/text\/512f465342e4f4cd07a8791428a629c043bb89d55ad7817cbf7fcc649178b014\r\nI0916 22:14:38.530371 140737353971520 filelock.py:318] Lock 140734268996072 released on \/scratch\/chiyuzh\/roberta\/text.py.lock\r\nTraceback (most recent call last):\r\n File \"\/scratch\/chiyuzh\/roberta\/run_language_modeling_hg.py\", line 1019, in <module>\r\n main()\r\n File \"\/scratch\/chiyuzh\/roberta\/run_language_modeling_hg.py\", line 962, in main\r\n train_dataset = load_and_cache_examples(args, tokenizer, evaluate=False)\r\n File \"\/scratch\/chiyuzh\/roberta\/run_language_modeling_hg.py\", line 177, in load_and_cache_examples\r\n return HG_Datasets(tokenizer, file_path, args)\r\n File \"\/scratch\/chiyuzh\/roberta\/run_language_modeling_hg.py\", line 117, in HG_Datasets\r\n dataset = load_dataset('\/scratch\/chiyuzh\/roberta\/text.py', data_files=files, cache_dir = args.data_cache_dir, split=\"train\")\r\n File \"\/arc\/project\/chiyuzh\/evn_py36\/datasets\/src\/datasets\/load.py\", line 590, in load_dataset\r\n path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True\r\n File \"\/arc\/project\/chiyuzh\/evn_py36\/datasets\/src\/datasets\/load.py\", line 385, in prepare_module\r\n os.makedirs(hash_folder_path)\r\n File \"\/project\/chiyuzh\/evn_py36\/lib\/python3.6\/os.py\", line 220, in makedirs\r\n mkdir(name, 
mode)\r\nOSError: [Errno 30] Read-only file system: '\/home\/chiyuzh\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/text\/512f465342e4f4cd07a8791428a629c043bb89d55ad7817cbf7fcc649178b014'\r\n\r\n```\r\n\r\nI installed datasets at \/project\/chiyuzh\/evn_py36\/datasets\/src where is a writable directory.\r\nI also tried change the environment variables to the writable directory:\r\n`export HF_MODULES_PATH=\/project\/chiyuzh\/evn_py36\/datasets\/cache_dir\/`\r\n`export HF_DATASETS_CACHE=\/project\/chiyuzh\/evn_py36\/datasets\/cache_dir\/`\r\n \r\nIn my scripts, I also changed to:\r\n`dataset = load_dataset('\/scratch\/chiyuzh\/roberta\/text.py', data_files=files, cache_dir = args.data_cache_dir, split=\"train\")`\r\n`data_cache_dir = $TMPDIR\/data\/` that also a writable directory.\r\n \r\nBut it still try to make directory at \/home\/chiyuzh\/.cache\/huggingface\/modules\/.\r\nDo you have any idea about this issue? @thomwolf \r\n","> Hey @chiyuzhang94, I was also having trouble in loading a large text file (11GB).\r\n> But finally got it working. This is what I did after looking into the documentation.\r\n> \r\n> 1. split the whole dataset file into smaller files\r\n> \r\n> ```shell\r\n> mkdir .\/shards\r\n> split -a 4 -l 256000 -d full_raw_corpus.txt .\/shards\/shard_\r\n> ```\r\n> \r\n> 1. Pass paths of small data files to `load_dataset`\r\n> \r\n> ```python\r\n> files = glob.glob('shards\/*')\r\n> from datasets import load_dataset\r\n> dataset = load_dataset('text', data_files=files, split='train')\r\n> ```\r\n> \r\n> (On passing the whole dataset file (11GB) directly to `load_dataset` was resulting into RAM issue)\r\n> \r\n> 1. Tokenization\r\n> \r\n> ```python\r\n> def encode(examples):\r\n> return tokenizer(examples['text'], truncation=True, padding='max_length')\r\n> dataset = dataset.map(encode, batched=True)\r\n> dataset.set_format(type='torch', columns=['input_ids', 'attention_mask'])\r\n> ```\r\n> \r\n> Now you can pass `dataset` to `Trainer` or `pytorch DataLoader`\r\n> \r\n> ```python\r\n> dataloader = torch.utils.data.DataLoader(dataset, batch_size=4)\r\n> next(iter(dataloader))\r\n> ```\r\n> \r\n> Hope this helps\r\n\r\nWhen I run 'dataset = dataset.map(encode, batched=True)',\r\nI encountered a problem like this:\r\n\r\n> Testing the mapped function outputs\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/datasets\/dataset_dict.py\", line 300, in map\r\n for k, dataset in self.items()\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/datasets\/dataset_dict.py\", line 300, in <dictcomp>\r\n for k, dataset in self.items()\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/datasets\/arrow_dataset.py\", line 1224, in map\r\n update_data = does_function_return_dict(test_inputs, test_indices)\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/datasets\/arrow_dataset.py\", line 1195, in does_function_return_dict\r\n function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)\r\n File \"<stdin>\", line 3, in encode\r\nTypeError: __init__() takes 1 positional argument but 2 were given","> > Hey @chiyuzhang94, I was also having trouble in loading a large text file (11GB).\r\n> > But finally got it working. This is what I did after looking into the documentation.\r\n> > \r\n> > 1. 
split the whole dataset file into smaller files\r\n> > \r\n> > ```shell\r\n> > mkdir .\/shards\r\n> > split -a 4 -l 256000 -d full_raw_corpus.txt .\/shards\/shard_\r\n> > ```\r\n> > \r\n> > \r\n> > \r\n> > 1. Pass paths of small data files to `load_dataset`\r\n> > \r\n> > ```python\r\n> > files = glob.glob('shards\/*')\r\n> > from datasets import load_dataset\r\n> > dataset = load_dataset('text', data_files=files, split='train')\r\n> > ```\r\n> > \r\n> > \r\n> > (On passing the whole dataset file (11GB) directly to `load_dataset` was resulting into RAM issue)\r\n> > \r\n> > 1. Tokenization\r\n> > \r\n> > ```python\r\n> > def encode(examples):\r\n> > return tokenizer(examples['text'], truncation=True, padding='max_length')\r\n> > dataset = dataset.map(encode, batched=True)\r\n> > dataset.set_format(type='torch', columns=['input_ids', 'attention_mask'])\r\n> > ```\r\n> > \r\n> > \r\n> > Now you can pass `dataset` to `Trainer` or `pytorch DataLoader`\r\n> > ```python\r\n> > dataloader = torch.utils.data.DataLoader(dataset, batch_size=4)\r\n> > next(iter(dataloader))\r\n> > ```\r\n> > \r\n> > \r\n> > Hope this helps\r\n> \r\n> When I run 'dataset = dataset.map(encode, batched=True)',\r\n> I encountered a problem like this:\r\n> \r\n> > Testing the mapped function outputs\r\n> > Traceback (most recent call last):\r\n> > File \"\", line 1, in \r\n> > File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/datasets\/dataset_dict.py\", line 300, in map\r\n> > for k, dataset in self.items()\r\n> > File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/datasets\/dataset_dict.py\", line 300, in \r\n> > for k, dataset in self.items()\r\n> > File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/datasets\/arrow_dataset.py\", line 1224, in map\r\n> > update_data = does_function_return_dict(test_inputs, test_indices)\r\n> > File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/datasets\/arrow_dataset.py\", line 1195, in does_function_return_dict\r\n> > function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)\r\n> > File \"\", line 3, in encode\r\n> > TypeError: **init**() takes 1 positional argument but 2 were given\r\n\r\nWhat is your encoder function?","> ```python\r\n> def encode(examples):\r\n> return tokenizer(examples['text'], truncation=True, padding='max_length')\r\n> ```\r\n\r\nIt is the same as suggested:\r\n\r\n> def encode(examples):\r\n return tokenizer(examples['text'], truncation=True, padding='max_length')","> > ```python\r\n> > def encode(examples):\r\n> > return tokenizer(examples['text'], truncation=True, padding='max_length')\r\n> > ```\r\n> \r\n> It is the same as suggested:\r\n> \r\n> > def encode(examples):\r\n> > return tokenizer(examples['text'], truncation=True, padding='max_length')\r\n\r\nDo you use this function in a `class` object? \r\n\r\ninit() takes 1 positional argument but 2 were given. I guess the additional argument is self?","> > > ```python\r\n> > > def encode(examples):\r\n> > > return tokenizer(examples['text'], truncation=True, padding='max_length')\r\n> > > ```\r\n> > \r\n> > \r\n> > It is the same as suggested:\r\n> > > def encode(examples):\r\n> > > return tokenizer(examples['text'], truncation=True, padding='max_length')\r\n> \r\n> Do you use this function in a `class` object?\r\n> \r\n> init() takes 1 positional argument but 2 were given. 
I guess the additional argument is self?\r\n\r\nThanks for your reply.\r\nCould you provide some simple example here?\r\nCurrently, I do not use this function in a class object. \r\nI think you are right and I was wondering how to construct this class.\r\nI try to modify it based on transformers' LineByLineTextDataset. Am I correct?\r\n\r\n> class LineByLineTextDataset(Dataset):\r\n \"\"\"\r\n This will be superseded by a framework-agnostic approach\r\n soon.\r\n \"\"\"\r\n\r\n def __init__(self, tokenizer: PreTrainedTokenizer, file_path: str, block_size: int):\r\n assert os.path.isfile(file_path), f\"Input file path {file_path} not found\"\r\n # Here, we do not cache the features, operating under the assumption\r\n # that we will soon use fast multithreaded tokenizers from the\r\n # `tokenizers` repo everywhere =)\r\n #logger.info(\"Creating features from dataset file at %s\", file_path)\r\n #with open(file_path, encoding=\"utf-8\") as f:\r\n # lines = [line for line in f.read().splitlines() if (len(line) > 0 and not line.isspace())]\r\n #batch_encoding = tokenizer(lines, add_special_tokens=True, truncation=True, max_length=block_size)\r\n\r\n\timport glob\r\n\tfiles = glob.glob('\/home\/mtzhang111\/fairseq\/cs_doc\/shards\/shard_003*')\r\n\tfrom datasets import load_dataset\r\n\tdataset = load_dataset('text', data_files=files)\r\n batch_encoding= dataset.map(encode, batched=True)\r\n self.examples = batch_encoding[\"input_ids\"]\r\n\t\r\n\r\n def encode(examples):\r\n return tokenizer(examples['text'], truncation=True, padding='max_length')\r\n\r\n def __len__(self):\r\n return len(self.examples)\r\n\r\n def __getitem__(self, i) -> torch.Tensor:\r\n return torch.tensor(self.examples[i], dtype=torch.long)\r\n","> > > > ```python\r\n> > > > def encode(examples):\r\n> > > > return tokenizer(examples['text'], truncation=True, padding='max_length')\r\n> > > > ```\r\n> > > \r\n> > > \r\n> > > It is the same as suggested:\r\n> > > > def encode(examples):\r\n> > > > return tokenizer(examples['text'], truncation=True, padding='max_length')\r\n> > \r\n> > \r\n> > Do you use this function in a `class` object?\r\n> > init() takes 1 positional argument but 2 were given. I guess the additional argument is self?\r\n> \r\n> Thanks for your reply.\r\n> Could you provide some simple example here?\r\n> Currently, I do not use this function in a class object.\r\n> I think you are right and I was wondering how to construct this class.\r\n> I try to modify it based on transformers' LineByLineTextDataset. 
Am I correct?\r\n> \r\n> > class LineByLineTextDataset(Dataset):\r\n> > \"\"\"\r\n> > This will be superseded by a framework-agnostic approach\r\n> > soon.\r\n> > \"\"\"\r\n> \r\n> ```\r\n> def __init__(self, tokenizer: PreTrainedTokenizer, file_path: str, block_size: int):\r\n> assert os.path.isfile(file_path), f\"Input file path {file_path} not found\"\r\n> # Here, we do not cache the features, operating under the assumption\r\n> # that we will soon use fast multithreaded tokenizers from the\r\n> # `tokenizers` repo everywhere =)\r\n> #logger.info(\"Creating features from dataset file at %s\", file_path)\r\n> #with open(file_path, encoding=\"utf-8\") as f:\r\n> # lines = [line for line in f.read().splitlines() if (len(line) > 0 and not line.isspace())]\r\n> #batch_encoding = tokenizer(lines, add_special_tokens=True, truncation=True, max_length=block_size)\r\n> \r\n> import glob\r\n> files = glob.glob('\/home\/mtzhang111\/fairseq\/cs_doc\/shards\/shard_003*')\r\n> from datasets import load_dataset\r\n> dataset = load_dataset('text', data_files=files)\r\n> batch_encoding= dataset.map(encode, batched=True)\r\n> self.examples = batch_encoding[\"input_ids\"]\r\n> \r\n> \r\n> def encode(examples):\r\n> return tokenizer(examples['text'], truncation=True, padding='max_length')\r\n> \r\n> def __len__(self):\r\n> return len(self.examples)\r\n> \r\n> def __getitem__(self, i) -> torch.Tensor:\r\n> return torch.tensor(self.examples[i], dtype=torch.long)\r\n> ```\r\n\r\nI am also struggling with this adaptation. \r\nI am not sure whether I am right.\r\n\r\nI think you don't need to construct `class LazyLineByLineTextDataset(Dataset)` at all. \r\ntorch.utils.data.Dataset is a generator. \r\n\r\nNow, we use `dataset = dataset.map(encode, batched=True)` as a generator. So we just pass dataset to torch.utils.data.DataLoader. ","@chiyuzhang94 Thanks for your reply. After some changes, currently, I managed to make the data loading process running.\r\nI published it in case you might want to take a look. Thanks for your help!\r\nhttps:\/\/github.com\/shizhediao\/Transformers_TPU","Hi @shizhediao ,\r\n\r\nThanks! It looks great!\r\n\r\nBut my problem still is the cache directory is a read-only file system. \r\n[As I mentioned](https:\/\/github.com\/huggingface\/datasets\/issues\/610#issuecomment-693912285), I tried to change the cache directory but it didn't work. \r\n\r\nDo you have any suggestions?\r\n\r\n","> I installed datasets at \/project\/chiyuzh\/evn_py36\/datasets\/src where is a writable directory.\r\n> I also tried change the environment variables to the writable directory:\r\n> `export HF_MODULES_PATH=\/project\/chiyuzh\/evn_py36\/datasets\/cache_dir\/`\r\n\r\nI think it is `HF_MODULES_CACHE` and not `HF_MODULES_PATH` @chiyuzhang94 .\r\nCould you try again and let me know if it fixes your issue ?\r\n","We should probably add a section in the doc on the caching system with the env variables in particular.","Hi @thomwolf , @lhoestq ,\r\n\r\nThanks for your suggestions. With the latest version of this package, I can load text data without Internet. \r\n\r\nBut I found the speed of dataset loading is very slow. 
\r\n\r\nMy scrips like this: \r\n```\r\n def token_encode(examples):\r\n tokenizer_out = tokenizer(examples['text'], truncation=True, padding=\"max_length\", add_special_tokens=True, max_length=args.block_size)\r\n return tokenizer_out\r\n \r\n path = Path(file_path)\r\n files = sorted(path.glob('*'))\r\n dataset = load_dataset('.\/text.py', data_files=files, cache_dir = args.data_cache_dir, split=\"train\")\r\n dataset = dataset.map(token_encode, batched=True)\r\n\r\n dataset.set_format(type='torch', columns=['input_ids', 'attention_mask'])\r\n```\r\n\r\nI have 1,123,870,657 lines in my input directory. \r\nI can find the processing speed as following. It is very slow. \r\n```\r\n| 13\/1123871 [00:02<62:37:39, 4.98ba\/s]^M 0%| \r\n| 14\/1123871 [00:03<61:27:31, 5.08ba\/s]^M 0%| \r\n| 15\/1123871 [00:03<66:34:19, 4.69ba\/s]^M 0%| \r\n| 16\/1123871 [00:03<68:25:01, 4.56ba\/s]^M 0%| \r\n| 17\/1123871 [00:03<72:00:03, 4.34ba\/s]^M 0%| \r\n```\r\nDo you have any suggestions to accelerate this loading process?","You can use multiprocessing by specifying `num_proc=` in `.map()`\r\n\r\nAlso it looks like you have `1123871` batches of 1000 elements (default batch size), i.e. 1,123,871,000 lines in total.\r\nAm I right ?","> You can use multiprocessing by specifying `num_proc=` in `.map()`\r\n> \r\n> Also it looks like you have `1123871` batches of 1000 elements (default batch size), i.e. 1,123,871,000 lines in total.\r\n> Am I right ?\r\n\r\nHi @lhoestq ,\r\n\r\nThanks. I will try it.\r\n\r\nYou are right. I have 1,123,870,657 lines totally in the path. I split the large file into 440 small files. Each file has 2,560,000 lines.\r\n\r\nI have another question. Because I am using a cloud server where only allows running a job up to 7 days. Hence, I need to resume my model every week. If the script needs to load and process the dataset every time. It is very low efficient based on the current processing speed. Is it possible that I process the dataset once and use the process cache to in the future work? \r\n","Hi @lhoestq ,\r\n\r\nI tried to use multi-processor, but I got errors as follow: \r\nBecause I am using python distributed training, it seems some conflicts with the distributed job. 
\r\n\r\nDo you have any suggestions?\r\n```\r\nI0925 10:19:35.603023 140737353971520 filelock.py:318] Lock 140737229443368 released on \/tmp\/pbs.1120510.pbsha.ib.sockeye\/cache\/_tmp_pbs.1120510.pbsha.ib.sockeye_cache_text_default-7fb934ed6fac5d01_0.0.0_512f465342e4f4cd07a8791428a629c043bb89d55ad7817cbf7\r\nfcc649178b014.lock\r\nTraceback (most recent call last):\r\n File \"\/scratch\/chiyuzh\/roberta\/run_language_modeling.py\", line 1024, in <module>\r\n main()\r\n File \"\/scratch\/chiyuzh\/roberta\/run_language_modeling.py\", line 967, in main\r\n train_dataset = load_and_cache_examples(args, tokenizer, evaluate=False)\r\n File \"\/scratch\/chiyuzh\/roberta\/run_language_modeling.py\", line 180, in load_and_cache_examples\r\n return HG_Datasets(tokenizer, file_path, args)\r\n File \"\/scratch\/chiyuzh\/roberta\/run_language_modeling.py\", line 119, in HG_Datasets\r\n dataset = dataset.map(token_encode, batched=True, batch_size = 10000, num_proc = 16)\r\n File \"\/project\/chiyuzh\/evn_py36\/lib\/python3.6\/site-packages\/datasets\/arrow_dataset.py\", line 1287, in map\r\n transformed_shards = [r.get() for r in results]\r\n File \"\/project\/chiyuzh\/evn_py36\/lib\/python3.6\/site-packages\/datasets\/arrow_dataset.py\", line 1287, in <listcomp>\r\n transformed_shards = [r.get() for r in results]\r\n File \"\/project\/chiyuzh\/evn_py36\/lib\/python3.6\/multiprocessing\/pool.py\", line 644, in get\r\n raise self._value\r\n File \"\/project\/chiyuzh\/evn_py36\/lib\/python3.6\/multiprocessing\/pool.py\", line 424, in _handle_tasks\r\n put(task)\r\n File \"\/project\/chiyuzh\/evn_py36\/lib\/python3.6\/multiprocessing\/connection.py\", line 206, in send\r\n self._send_bytes(_ForkingPickler.dumps(obj))\r\n File \"\/project\/chiyuzh\/evn_py36\/lib\/python3.6\/multiprocessing\/reduction.py\", line 51, in dumps\r\n cls(buf, protocol).dump(obj)\r\nAttributeError: Can't pickle local object 'HG_Datasets.<locals>.token_encode'\r\n```","For multiprocessing, the function given to `map` must be picklable.\r\nMaybe you could try to define `token_encode` outside `HG_Datasets` ?\r\n\r\nAlso maybe #656 could make functions defined locally picklable for multiprocessing, once it's merged.","> I have another question. Because I am using a cloud server where only allows running a job up to 7 days. Hence, I need to resume my model every week. If the script needs to load and process the dataset every time. It is very low efficient based on the current processing speed. Is it possible that I process the dataset once and use the process cache to in the future work?\r\n\r\nFeel free to save your processed dataset using `dataset.save_to_disk(\"path\/to\/save\/directory\")`.\r\n\r\nThen you'll be able to reload it again using\r\n```python\r\nfrom datasets import load_from_disk\r\n\r\ndataset = load_from_disk(\"path\/to\/save\/directory\")\r\n```","Hi @lhoestq ,\r\n\r\nThanks for your suggestion. \r\nI tried to process the dataset and save it to disk. \r\nI have 1.12B samples in the raw dataset. I used 16 processors.\r\nI run this process job for 7 days. But it didn't finish. I don't why the processing is such slow. \r\n\r\nThe log shows that some processors (\\#12, \\#14, \\#15) are very slow. The different processor has a different speed. These slow processors look like a bottleneck. \r\n\r\nCould you please give me any suggestion to improve the processing speed? \r\n\r\nThanks. 
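The two suggestions above (define the mapped function at module level so it is picklable for `num_proc`, and persist the processed dataset with `save_to_disk` so a resumed job does not re-tokenize) might look roughly like this. It is a sketch under assumptions: the checkpoint, file names and paths are placeholders, and the `batch_size`/`num_proc` values are taken from the thread rather than tuned.

```python
from datasets import load_dataset, load_from_disk
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")  # placeholder checkpoint

# Defined at module level (not inside another function), so worker processes
# can pickle a reference to it instead of a local closure.
def token_encode(examples):
    return tokenizer(examples["text"], truncation=True, padding="max_length", max_length=128)

dataset = load_dataset("text", data_files=["shard_000.txt"], split="train")  # placeholder files
dataset = dataset.map(token_encode, batched=True, batch_size=10000, num_proc=16)
dataset.set_format(type="torch", columns=["input_ids", "attention_mask"])

# Persist the tokenized dataset once ...
dataset.save_to_disk("path/to/save/directory")

# ... and reload it in the next (resumed) job without re-processing:
dataset = load_from_disk("path/to/save/directory")
```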
\r\nChiyu\r\n\r\nHere is my code:\r\n```\r\ndef token_encode(examples):\r\n tokenizer_out = tokenizer(examples['text'], truncation=True, padding=\"max_length\", add_special_tokens=True, max_length=args.block_size)\r\n return tokenizer_out\r\n\r\npath = Path(file_path)\r\nfiles = sorted(path.glob('*'))\r\ndataset = load_dataset('.\/text.py', data_files=files, cache_dir = args.data_cache_dir, split=\"train\")\r\ndataset = dataset.map(token_encode, batched=True, batch_size = 16384, num_proc = 16)\r\ndataset.set_format(type='torch', columns=['input_ids', 'attention_mask'])\r\ndataset.save_to_disk(output_dir)\r\n```\r\nHere is the log. \r\n```\r\n^M#6: 1%|\u258f | 59\/4288 [55:10<66:11:58, 56.35s\/ba]\r\n^M#1: 8%|\u258a | 356\/4288 [55:39<10:40:02, 9.77s\/ba]\r\n^M#2: 5%|\u258d | 210\/4288 [55:33<17:47:19, 15.70s\/ba]\r\n^M#0: 19%|\u2588\u2589 | 836\/4288 [55:53<4:08:56, 4.33s\/ba]\r\n^M#0: 20%|\u2588\u2589 | 837\/4288 [55:57<4:01:52, 4.21s\/ba]\r\n^M#1: 8%|\u258a | 357\/4288 [55:48<10:38:09, 9.74s\/ba]\r\n^M#0: 20%|\u2588\u2589 | 838\/4288 [56:01<4:02:56, 4.23s\/ba]\r\n^M#3: 4%|\u258e | 155\/4288 [55:43<24:41:20, 21.51s\/ba]\r\n^M#0: 20%|\u2588\u2589 | 839\/4288 [56:05<4:04:48, 4.26s\/ba]\r\n^M#12: 1%| | 29\/4288 [54:50<133:20:53, 112.72s\/ba]\r\n^M#2: 5%|\u258d | 211\/4288 [55:48<17:40:33, 15.61s\/ba]\r\n^M#14: 0%| | 2\/4288 [04:24<157:17:50, 132.12s\/ba]\r\n^M#15: 0%| | 1\/4288 [02:24<172:11:37, 144.60s\/ba]\r\n```","Hi !\r\n\r\nAs far as I can tell, there could be several reasons for your processes to have different speeds:\r\n- some parts of your dataset have short passages while some have longer passages, that take more time to be processed\r\n- OR there are other processes running that prevent some of them to run at full speed\r\n- OR the value of `num_proc` is higher than the number of actual processes that you can run in parallel at full speed.\r\n\r\nSo I'd suggest you to check that you have nothing else running in parallel to your processing job, and also maybe take a look at the slow parts of the datasets.\r\nWhen doing multiprocessing, the dataset is sharded in `num_proc` contiguous parts that are processed individually in each process. If you want to take a look at the dataset processed in the 12th shard of 16 for example, you can do:\r\n\r\n```python\r\nmy_shard = dataset.shard(num_shards=16, index=12, contiguous=True)\r\n```\r\n\r\nHope this helps, let me know if you find what is causing this slow down.","Do you use a fast or a slow tokenizer from the `transformers` library @chiyuzhang94?","> Do you use a fast or a slow tokenizer from the `transformers` library @chiyuzhang94?\r\n\r\nHi @thomwolf ,\r\n I use this: \r\n```\r\nfrom transformers import\r\nAutoTokenizer.from_pretrained(args.model_name_or_path, cache_dir=args.cache_dir)\r\n```\r\n\r\nI guess this is a slow one, let me explore the fast tokenizer. 
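For reference, a Rust-backed "fast" tokenizer can be requested explicitly; `roberta-base` below is a placeholder for the actual checkpoint, and whether a fast tokenizer is available depends on the model and the installed `transformers` version.

```python
from transformers import AutoTokenizer, RobertaTokenizerFast

# Either form should return the fast (Rust) tokenizer rather than the
# pure-Python one, which is considerably slower inside large map() jobs.
tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
tokenizer = AutoTokenizer.from_pretrained("roberta-base", use_fast=True)
```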
","> Hi !\r\n> \r\n> As far as I can tell, there could be several reasons for your processes to have different speeds:\r\n> \r\n> * some parts of your dataset have short passages while some have longer passages, that take more time to be processed\r\n> * OR there are other processes running that prevent some of them to run at full speed\r\n> * OR the value of `num_proc` is higher than the number of actual processes that you can run in parallel at full speed.\r\n> \r\n> So I'd suggest you to check that you have nothing else running in parallel to your processing job, and also maybe take a look at the slow parts of the datasets.\r\n> When doing multiprocessing, the dataset is sharded in `num_proc` contiguous parts that are processed individually in each process. If you want to take a look at the dataset processed in the 12th shard of 16 for example, you can do:\r\n> \r\n> ```python\r\n> my_shard = dataset.shard(num_shards=16, index=12, contiguous=True)\r\n> ```\r\n> \r\n> Hope this helps, let me know if you find what is causing this slow down.\r\n\r\nHi @lhoestq ,\r\n\r\nThanks for your suggestions. \r\nI don't think my problem is due to any one of these seasons. \r\n\r\n1. I have 1,123,870,657 lines totally in the path. I split the large file into 440 small files. Each file has 2,560,000 lines. The last file is smaller a little bit. But they are similar. I randomly shuffled all the 1,123,870,657 lines. Hence, the sequences should also be similar across all the files. \r\n\r\n2. I run this script on the entire node. I requested all the resources on the nodes (40 CPUs, 384GB memory). Hence, these were not any other processes. \r\n\r\n 3. As I say, the node has 40 CPUs, but I set num_proc = 16. This should not be a problem.","Hi @thomwolf \r\nI am using `RobertaTokenizerFast` now. \r\n\r\nBut the speed is still imbalanced, some processors are still slow. \r\nHere is the part of the log. #0 is always much fast than lower rank processors. 
\r\n\r\n```\r\n#15: 3%|\u258e | 115\/3513 [3:18:36<98:01:33, 103.85s\/ba]\r\n#2: 24%|\u2588\u2588\u258d | 847\/3513 [3:20:43<11:06:49, 15.01s\/ba]\r\n#1: 37%|\u2588\u2588\u2588\u258b | 1287\/3513 [3:20:52<6:19:02, 10.22s\/ba]\r\n#0: 72%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258f | 2546\/3513 [3:20:52<1:51:03, 6.89s\/ba]\r\n#3: 18%|\u2588\u258a | 617\/3513 [3:20:36<15:50:30, 19.69s\/ba]\r\n#0: 73%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258e | 2547\/3513 [3:20:59<1:50:25, 6.86s\/ba]\r\n#1: 37%|\u2588\u2588\u2588\u258b | 1288\/3513 [3:21:02<6:21:13, 10.28s\/ba]\r\n#7: 7%|\u258b | 252\/3513 [3:20:09<44:09:03, 48.74s\/ba]\r\n#12: 4%|\u258d | 144\/3513 [3:19:19<78:00:54, 83.36s\/ba]\r\n#4: 14%|\u2588\u258d | 494\/3513 [3:20:37<20:46:06, 24.77s\/ba]\r\n#0: 73%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258e | 2548\/3513 [3:21:06<1:49:26, 6.80s\/ba]\r\n#2: 24%|\u2588\u2588\u258d | 848\/3513 [3:20:58<11:06:17, 15.00s\/ba]\r\n```\r\nHere is my script related to the datasets processing, \r\n\r\n```\r\ntokenizer = RobertaTokenizerFast.from_pretrained(args.model_name_or_path, cache_dir=args.cache_dir)\r\n\r\ndef token_encode(examples):\r\n tokenizer_out = tokenizer(examples['text'], truncation=True, padding=\"max_length\", add_special_tokens=True, max_length=128)\r\n return tokenizer_out\r\n\r\ndef HG_Datasets(tokenizer, file_path, args):\r\n \r\n path = Path(file_path)\r\n files = sorted(path.glob('*'))\r\n dataset = load_dataset('.\/text.py', data_files=files, cache_dir = \"\".\/, split=\"train\")\r\n dataset = dataset.map(token_encode, batched=True, batch_size = 20000, num_proc = 16)\r\n\r\n dataset.set_format(type='torch', columns=['input_ids', 'attention_mask'])\r\n return dataset\r\n\r\n```\r\nI have 1,123,870,657 lines totally in the path. I split the large file into 440 small files. Each file has 2,560,000 lines.\r\n\r\nCould you please give any suggestion? Thanks very much!!"],"created_at":1599763298000,"updated_at":1618044244000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I migrate my question from https:\/\/github.com\/huggingface\/transformers\/pull\/4009#issuecomment-690039444\r\n\r\nI tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. \r\nAccording to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file. 
This test.txt is a simple sample where each line is a sentence.\r\n```\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('text', data_files='test.txt',cache_dir=\".\/\")\r\ndataset.set_format(type='torch',columns=[\"text\"])\r\ndataloader = torch.utils.data.DataLoader(dataset, batch_size=8)\r\nnext(iter(dataloader))\r\n```\r\n\r\nBut dataload cannot yield sample and error is:\r\n```\r\n---------------------------------------------------------------------------\r\nKeyError Traceback (most recent call last)\r\n<ipython-input-12-388aca337e2f> in <module>\r\n----> 1 next(iter(dataloader))\r\n\r\n\/Library\/Python\/3.7\/site-packages\/torch\/utils\/data\/dataloader.py in __next__(self)\r\n 361 \r\n 362 def __next__(self):\r\n--> 363 data = self._next_data()\r\n 364 self._num_yielded += 1\r\n 365 if self._dataset_kind == _DatasetKind.Iterable and \\\r\n\r\n\/Library\/Python\/3.7\/site-packages\/torch\/utils\/data\/dataloader.py in _next_data(self)\r\n 401 def _next_data(self):\r\n 402 index = self._next_index() # may raise StopIteration\r\n--> 403 data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n 404 if self._pin_memory:\r\n 405 data = _utils.pin_memory.pin_memory(data)\r\n\r\n\/Library\/Python\/3.7\/site-packages\/torch\/utils\/data\/_utils\/fetch.py in fetch(self, possibly_batched_index)\r\n 42 def fetch(self, possibly_batched_index):\r\n 43 if self.auto_collation:\r\n---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]\r\n 45 else:\r\n 46 data = self.dataset[possibly_batched_index]\r\n\r\n\/Library\/Python\/3.7\/site-packages\/torch\/utils\/data\/_utils\/fetch.py in <listcomp>(.0)\r\n 42 def fetch(self, possibly_batched_index):\r\n 43 if self.auto_collation:\r\n---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]\r\n 45 else:\r\n 46 data = self.dataset[possibly_batched_index]\r\n\r\nKeyError: 0\r\n```\r\n\r\n`dataset.set_format(type='torch',columns=[\"text\"])` returns a log says:\r\n```\r\nSet __getitem__(key) output type to torch for ['text'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\n```\r\n\r\nI noticed the dataset is `DatasetDict({'train': Dataset(features: {'text': Value(dtype='string', id=None)}, num_rows: 44)})`.\r\nEach sample can be accessed by `dataset[\"train\"][\"text\"]` instead of `dataset[\"text\"]`. 
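Given that observation, one way to make integer indexing (and hence the `DataLoader`) work is to request the split when loading, so a `Dataset` rather than a `DatasetDict` is returned. A minimal sketch, assuming the same `test.txt`; in practice the text would still be tokenized with `.map()` before training, as discussed elsewhere in this thread.

```python
import torch
from datasets import load_dataset

# Asking for the split up front returns a Dataset, not a DatasetDict,
# so dataset[0] resolves to a row instead of raising KeyError: 0.
dataset = load_dataset("text", data_files="test.txt", split="train")
print(dataset[0])  # {'text': '<first line of test.txt>'}

# Raw strings are left to the default collate_fn here, which simply
# returns them as a list per batch.
dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)
batch = next(iter(dataloader))  # {'text': [<up to 8 raw lines>]}
```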
\r\n\r\nCould you please give me any suggestions on how to modify this code to load the text file?\r\n\r\nVersions:\r\nPython version 3.7.3\r\nPyTorch version 1.6.0 \r\nTensorFlow version 2.3.0 \r\ndatasets version: 1.0.1","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/610\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/609","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/609\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/609\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/609\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/609","id":698323989,"node_id":"MDExOlB1bGxSZXF1ZXN0NDg0MTc4Nzky","number":609,"title":"Update GLUE URLs (now hosted on FB)","user":{"login":"jeswan","id":57466294,"node_id":"MDQ6VXNlcjU3NDY2Mjk0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/57466294?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jeswan","html_url":"https:\/\/github.com\/jeswan","followers_url":"https:\/\/api.github.com\/users\/jeswan\/followers","following_url":"https:\/\/api.github.com\/users\/jeswan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jeswan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jeswan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jeswan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jeswan\/orgs","repos_url":"https:\/\/api.github.com\/users\/jeswan\/repos","events_url":"https:\/\/api.github.com\/users\/jeswan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jeswan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for opening this PR :) \r\n\r\nWe changed the name of the lib from nlp to datasets yesterday.\r\nCould you rebase from master and re-generate the dataset_info.json file to fix the name changes ?","Rebased changes here: https:\/\/github.com\/huggingface\/datasets\/pull\/626"],"created_at":1599761792000,"updated_at":1600110362000,"closed_at":1600110361000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/609","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/609","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/609.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/609.patch"},"body":"NYU is switching dataset hosting from Google to FB. This PR closes https:\/\/github.com\/huggingface\/datasets\/issues\/608 and is necessary for https:\/\/github.com\/jiant-dev\/jiant\/issues\/161. 
This PR updates the data URLs based on changes made in https:\/\/github.com\/nyu-mll\/jiant\/pull\/1112.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/609\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/608","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/608\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/608\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/608\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/608","id":698291156,"node_id":"MDU6SXNzdWU2OTgyOTExNTY=","number":608,"title":"Don't use the old NYU GLUE dataset URLs","user":{"login":"jeswan","id":57466294,"node_id":"MDQ6VXNlcjU3NDY2Mjk0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/57466294?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jeswan","html_url":"https:\/\/github.com\/jeswan","followers_url":"https:\/\/api.github.com\/users\/jeswan\/followers","following_url":"https:\/\/api.github.com\/users\/jeswan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jeswan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jeswan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jeswan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jeswan\/orgs","repos_url":"https:\/\/api.github.com\/users\/jeswan\/repos","events_url":"https:\/\/api.github.com\/users\/jeswan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jeswan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Feel free to open the PR ;)\r\nThanks for updating the dataset_info.json file !"],"created_at":1599760022000,"updated_at":1600239198000,"closed_at":1600239198000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"NYU is switching dataset hosting from Google to FB. Initial changes to `datasets` are in https:\/\/github.com\/jeswan\/nlp\/commit\/b7d4a071d432592ded971e30ef73330529de25ce. 
What tests do you suggest I run before opening a PR?\r\n\r\nSee: https:\/\/github.com\/jiant-dev\/jiant\/issues\/161 and https:\/\/github.com\/nyu-mll\/jiant\/pull\/1112","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/608\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/607","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/607\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/607\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/607\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/607","id":698094442,"node_id":"MDExOlB1bGxSZXF1ZXN0NDgzOTcyMDg4","number":607,"title":"Add transmit_format wrapper and tests","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1599750230000,"updated_at":1599751308000,"closed_at":1599751307000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/607","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/607","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/607.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/607.patch"},"body":"Same as #605 but using a decorator on-top of dataset transforms that are not in place","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/607\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/606","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/606\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/606\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/606\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/606","id":698050442,"node_id":"MDExOlB1bGxSZXF1ZXN0NDgzOTMzMDA1","number":606,"title":"Quick fix 
:)","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[":heart:"],"created_at":1599748326000,"updated_at":1599754712000,"closed_at":1599754710000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/606","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/606","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/606.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/606.patch"},"body":"`nlp` => `datasets`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/606\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/605","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/605\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/605\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/605\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/605","id":697887401,"node_id":"MDExOlB1bGxSZXF1ZXN0NDgzNzg1Mjc1","number":605,"title":"[Datasets] Transmit format to children","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Closing as #607 was 
merged"],"created_at":1599741018000,"updated_at":1599754521000,"closed_at":1599754521000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/605","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/605","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/605.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/605.patch"},"body":"Transmit format to children obtained when processing a dataset.\r\n\r\nAdded a test.\r\n\r\nWhen concatenating datasets, if the formats are disparate, the concatenated dataset has a format reset to defaults.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/605\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/604","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/604\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/604\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/604\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/604","id":697774581,"node_id":"MDExOlB1bGxSZXF1ZXN0NDgzNjgxNTc0","number":604,"title":"Update bucket prefix","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1599735673000,"updated_at":1599741933000,"closed_at":1599741932000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/604","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/604","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/604.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/604.patch"},"body":"cc @julien-c ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/604\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/603","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/603\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/603\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/603\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/603","id":697758750,"node_id":"MDExOlB1bGxSZXF1ZXN0NDgzNjY2ODk5","number":603,"title":"Set scripts version to master","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1599734864000,"updated_at":1599735725000,"closed_at":1599735724000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/603","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/603","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/603.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/603.patch"},"body":"By default the scripts version is master, so that if the library is installed with \r\n```\r\npip install git+http:\/\/github.com\/huggingface\/nlp.git\r\n```\r\nor\r\n```\r\ngit clone http:\/\/github.com\/huggingface\/nlp.git\r\npip install -e .\/nlp\r\n```\r\n\r\nwill use the latest scripts, and not the ones from the previous version.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/603\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/602","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/602\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/602\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/602\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/602","id":697636605,"node_id":"MDExOlB1bGxSZXF1ZXN0NDgzNTU3NDM0","number":602,"title":"apply offset to indices in multiprocessed 
map","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1599728070000,"updated_at":1599735819000,"closed_at":1599735817000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/602","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/602","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/602.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/602.patch"},"body":"Fix #597 \r\n\r\nI fixed the indices by applying an offset.\r\nI added the case to our tests to make sure it doesn't happen again.\r\n\r\nI also added the message proposed by @thomwolf in #597 \r\n\r\n```python\r\n>>> d.select(range(10)).map(fn, with_indices=True, batched=True, num_proc=2, load_from_cache_file=False)\r\nDone writing 10 indices in 80 bytes .\r\nTesting the mapped function outputs\r\n[0, 1]\r\nTesting finished, running the mapping function on the dataset\r\nDone writing 5 indices in 41 bytes .\r\nDone writing 5 indices in 41 bytes .\r\nSpawning 2 processes\r\n[0, 1, 2, 3, 4]\r\n[5, 6, 7, 8, 9]\r\n#0: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 377.90ba\/s]\r\n#1: 
100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 378.92ba\/s]\r\nConcatenating 2 shards from multiprocessing\r\n\r\n# Dataset(features: {'label': ClassLabel(num_classes=2, names=['neg', 'pos'], names_file=None, id=None), 'text': Value(dtype='string', id=None)}, num_rows: 10)\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/602\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/601","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/601\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/601\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/601\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/601","id":697574848,"node_id":"MDExOlB1bGxSZXF1ZXN0NDgzNTAzMjAw","number":601,"title":"check if trasnformers has 
PreTrainedTokenizerBase","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1599724496000,"updated_at":1599735697000,"closed_at":1599735696000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/601","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/601","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/601.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/601.patch"},"body":"Fix #598 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/601\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/600","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/600\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/600\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/600\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/600","id":697496913,"node_id":"MDU6SXNzdWU2OTc0OTY5MTM=","number":600,"title":"Pickling error when loading dataset","user":{"login":"kandorm","id":17310286,"node_id":"MDQ6VXNlcjE3MzEwMjg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17310286?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/kandorm","html_url":"https:\/\/github.com\/kandorm","followers_url":"https:\/\/api.github.com\/users\/kandorm\/followers","following_url":"https:\/\/api.github.com\/users\/kandorm\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/kandorm\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/kandorm\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/kandorm\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/kandorm\/orgs","repos_url":"https:\/\/api.github.com\/users\/kandorm\/repos","events_url":"https:\/\/api.github.com\/users\/kandorm\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/kandorm\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["When I change from python3.6 to python3.8, it works! 
","Does it work when you install `nlp` from source on python 3.6?","No, still the pickling error.","I wasn't able to reproduce on google colab (python 3.6.9 as well) with \r\n\r\npickle==4.0\r\ndill=0.3.2\r\ntransformers==3.1.0\r\ndatasets=1.0.1 (also tried nlp 0.4.0)\r\n\r\nIf I try\r\n\r\n```python\r\nfrom datasets import load_dataset # or from nlp\r\nfrom transformers import BertTokenizer\r\n\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\ndataset = load_dataset(\"text\", data_files=file_path, split=\"train\")\r\ndataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True,\r\n truncation=True, max_length=512), batched=True)\r\ndataset.set_format(type='torch', columns=['input_ids'])\r\n```\r\nIt runs without error","Closing since it looks like it's working on >= 3.6.9\r\nFeel free to re-open if you have other questions :)"],"created_at":1599719288000,"updated_at":1601044314000,"closed_at":1601044314000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\n\r\nI modified line 136 in the original [run_language_modeling.py](https:\/\/github.com\/huggingface\/transformers\/blob\/master\/examples\/language-modeling\/run_language_modeling.py) as:\r\n\r\n```\r\n# line 136: return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size)\r\ndataset = load_dataset(\"text\", data_files=file_path, split=\"train\")\r\ndataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True,\r\n truncation=True, max_length=args.block_size), batched=True)\r\ndataset.set_format(type='torch', columns=['input_ids'])\r\nreturn dataset\r\n```\r\n\r\nWhen I run this with transformers (3.1.0) and nlp (0.4.0), I get the following error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"src\/run_language_modeling.py\", line 319, in <module>\r\n main()\r\n File \"src\/run_language_modeling.py\", line 248, in main\r\n get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None\r\n File \"src\/run_language_modeling.py\", line 139, in get_dataset\r\n dataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True, truncation=True, max_length=args.block_size), batched=True)\r\n File \"\/data\/nlp\/src\/nlp\/arrow_dataset.py\", line 1136, in map\r\n new_fingerprint=new_fingerprint,\r\n File \"\/data\/nlp\/src\/nlp\/fingerprint.py\", line 158, in wrapper\r\n self._fingerprint, transform, kwargs_for_fingerprint\r\n File \"\/data\/nlp\/src\/nlp\/fingerprint.py\", line 105, in update_fingerprint\r\n hasher.update(transform_args[key])\r\n File \"\/data\/nlp\/src\/nlp\/fingerprint.py\", line 57, in update\r\n self.m.update(self.hash(value).encode(\"utf-8\"))\r\n File \"\/data\/nlp\/src\/nlp\/fingerprint.py\", line 53, in hash\r\n return cls.hash_default(value)\r\n File \"\/data\/nlp\/src\/nlp\/fingerprint.py\", line 46, in hash_default\r\n return cls.hash_bytes(dumps(value))\r\n File \"\/data\/nlp\/src\/nlp\/utils\/py_utils.py\", line 362, in dumps\r\n dump(obj, file)\r\n File \"\/data\/nlp\/src\/nlp\/utils\/py_utils.py\", line 339, in dump\r\n Pickler(file, recurse=True).dump(obj)\r\n File \"\/root\/miniconda3\/envs\/py3.6\/lib\/python3.6\/site-packages\/dill\/_dill.py\", line 446, in dump\r\n StockPickler.dump(self, obj)\r\n File \"\/root\/miniconda3\/envs\/py3.6\/lib\/python3.6\/pickle.py\", line 409, in dump\r\n self.save(obj)\r\n File \"\/root\/miniconda3\/envs\/py3.6\/lib\/python3.6\/pickle.py\", line 476, in save\r\n 
f(self, obj) # Call unbound method with explicit self\r\n File \"\/root\/miniconda3\/envs\/py3.6\/lib\/python3.6\/site-packages\/dill\/_dill.py\", line 1438, in save_function\r\n obj.__dict__, fkwdefaults), obj=obj)\r\n File \"\/root\/miniconda3\/envs\/py3.6\/lib\/python3.6\/pickle.py\", line 610, in save_reduce\r\n save(args)\r\n File \"\/root\/miniconda3\/envs\/py3.6\/lib\/python3.6\/pickle.py\", line 476, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/root\/miniconda3\/envs\/py3.6\/lib\/python3.6\/pickle.py\", line 751, in save_tuple\r\n save(element)\r\n File \"\/root\/miniconda3\/envs\/py3.6\/lib\/python3.6\/pickle.py\", line 476, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/root\/miniconda3\/envs\/py3.6\/lib\/python3.6\/pickle.py\", line 736, in save_tuple\r\n save(element)\r\n File \"\/root\/miniconda3\/envs\/py3.6\/lib\/python3.6\/pickle.py\", line 476, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/root\/miniconda3\/envs\/py3.6\/lib\/python3.6\/site-packages\/dill\/_dill.py\", line 1170, in save_cell\r\n pickler.save_reduce(_create_cell, (f,), obj=obj)\r\n File \"\/root\/miniconda3\/envs\/py3.6\/lib\/python3.6\/pickle.py\", line 610, in save_reduce\r\n save(args)\r\n File \"\/root\/miniconda3\/envs\/py3.6\/lib\/python3.6\/pickle.py\", line 476, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/root\/miniconda3\/envs\/py3.6\/lib\/python3.6\/pickle.py\", line 736, in save_tuple\r\n save(element)\r\n File \"\/root\/miniconda3\/envs\/py3.6\/lib\/python3.6\/pickle.py\", line 521, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"\/root\/miniconda3\/envs\/py3.6\/lib\/python3.6\/pickle.py\", line 605, in save_reduce\r\n save(cls)\r\n File \"\/root\/miniconda3\/envs\/py3.6\/lib\/python3.6\/pickle.py\", line 476, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/root\/miniconda3\/envs\/py3.6\/lib\/python3.6\/site-packages\/dill\/_dill.py\", line 1365, in save_type\r\n obj.__bases__, _dict), obj=obj)\r\n File \"\/root\/miniconda3\/envs\/py3.6\/lib\/python3.6\/pickle.py\", line 610, in save_reduce\r\n save(args)\r\n File \"\/root\/miniconda3\/envs\/py3.6\/lib\/python3.6\/pickle.py\", line 476, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/root\/miniconda3\/envs\/py3.6\/lib\/python3.6\/pickle.py\", line 751, in save_tuple\r\n save(element)\r\n File \"\/root\/miniconda3\/envs\/py3.6\/lib\/python3.6\/pickle.py\", line 476, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/root\/miniconda3\/envs\/py3.6\/lib\/python3.6\/site-packages\/dill\/_dill.py\", line 933, in save_module_dict\r\n StockPickler.save_dict(pickler, obj)\r\n File \"\/root\/miniconda3\/envs\/py3.6\/lib\/python3.6\/pickle.py\", line 821, in save_dict\r\n self._batch_setitems(obj.items())\r\n File \"\/root\/miniconda3\/envs\/py3.6\/lib\/python3.6\/pickle.py\", line 847, in _batch_setitems\r\n save(v)\r\n File \"\/root\/miniconda3\/envs\/py3.6\/lib\/python3.6\/pickle.py\", line 476, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/root\/miniconda3\/envs\/py3.6\/lib\/python3.6\/site-packages\/dill\/_dill.py\", line 933, in save_module_dict\r\n StockPickler.save_dict(pickler, obj)\r\n File \"\/root\/miniconda3\/envs\/py3.6\/lib\/python3.6\/pickle.py\", line 821, in save_dict\r\n self._batch_setitems(obj.items())\r\n File \"\/root\/miniconda3\/envs\/py3.6\/lib\/python3.6\/pickle.py\", 
line 847, in _batch_setitems\r\n save(v)\r\n File \"\/root\/miniconda3\/envs\/py3.6\/lib\/python3.6\/pickle.py\", line 507, in save\r\n self.save_global(obj, rv)\r\n File \"\/root\/miniconda3\/envs\/py3.6\/lib\/python3.6\/pickle.py\", line 927, in save_global\r\n (obj, module_name, name))\r\n_pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/600\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/599","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/599\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/599\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/599\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/599","id":697377786,"node_id":"MDExOlB1bGxSZXF1ZXN0NDgzMzI3ODQ5","number":599,"title":"Add MATINF dataset","user":{"login":"JetRunner","id":22514219,"node_id":"MDQ6VXNlcjIyNTE0MjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22514219?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JetRunner","html_url":"https:\/\/github.com\/JetRunner","followers_url":"https:\/\/api.github.com\/users\/JetRunner\/followers","following_url":"https:\/\/api.github.com\/users\/JetRunner\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JetRunner\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JetRunner\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JetRunner\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JetRunner\/orgs","repos_url":"https:\/\/api.github.com\/users\/JetRunner\/repos","events_url":"https:\/\/api.github.com\/users\/JetRunner\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JetRunner\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! sorry for the late response\r\n\r\nCould you try to rebase from master ? We changed the named of the library last week so you have to include this change in your code.\r\n\r\nCan you give me more details about the error you get when running the cli command ?\r\n\r\nNote that in case of a manual download you have to specify the directory where you downloaded the data with `--data_dir <path\/to\/the\/directory>`","I fucked up the Git rebase lol. Closing it."],"created_at":1599708669000,"updated_at":1600345045000,"closed_at":1600345045000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/599","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/599","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/599.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/599.patch"},"body":"@lhoestq The command to create metadata failed. I guess it's because the zip is not downloaded from a remote address? How to solve that? 
Also the CI fails and I don't know how to fix that :(","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/599\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/598","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/598\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/598\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/598\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/598","id":697156501,"node_id":"MDU6SXNzdWU2OTcxNTY1MDE=","number":598,"title":"The current version of the package on github has an error when loading dataset","user":{"login":"zeyuyun1","id":43428393,"node_id":"MDQ6VXNlcjQzNDI4Mzkz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/43428393?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/zeyuyun1","html_url":"https:\/\/github.com\/zeyuyun1","followers_url":"https:\/\/api.github.com\/users\/zeyuyun1\/followers","following_url":"https:\/\/api.github.com\/users\/zeyuyun1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/zeyuyun1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/zeyuyun1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/zeyuyun1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/zeyuyun1\/orgs","repos_url":"https:\/\/api.github.com\/users\/zeyuyun1\/repos","events_url":"https:\/\/api.github.com\/users\/zeyuyun1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/zeyuyun1\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting !\r\nWhich version of transformers are you using ?\r\nIt looks like it doesn't have the PreTrainedTokenizerBase class","I was using transformer 2.9. And I switch to the latest transformer package. Everything works just fine!!\r\n\r\nThanks for helping! I should look more carefully next time. 
Didn't realize loading the data part requires using tokenizer.\r\n","Yes it shouldn\u2019t fail with older version of transformers since this is only a special feature to make caching more efficient when using transformers for tokenization.\r\nWe\u2019ll update this."],"created_at":1599685403000,"updated_at":1599719121000,"closed_at":1599692248000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Instead of downloading the package from pip, downloading the version from source will result in an error when loading dataset (the pip version is completely fine):\r\n\r\nTo recreate the error: \r\nFirst, installing nlp directly from source:\r\n```\r\ngit clone https:\/\/github.com\/huggingface\/nlp.git\r\ncd nlp\r\npip install -e .\r\n```\r\nThen run:\r\n```\r\nfrom nlp import load_dataset\r\ndataset = load_dataset('wikitext', 'wikitext-2-v1',split = 'train') \r\n```\r\nwill give error:\r\n\r\n```\r\n>>> dataset = load_dataset('wikitext', 'wikitext-2-v1',split = 'train')\r\nChecking \/home\/zeyuy\/.cache\/huggingface\/datasets\/84a754b488511b109e2904672d809c041008416ae74e38f9ee0c80a8dffa1383.2e21f48d63b5572d19c97e441fbb802257cf6a4c03fbc5ed8fae3d2c2273f59e.py for additional imports.\r\nFound main folder for dataset https:\/\/raw.githubusercontent.com\/huggingface\/nlp\/0.4.0\/datasets\/wikitext\/wikitext.py at \/home\/zeyuy\/.cache\/huggingface\/modules\/nlp_modules\/datasets\/wikitext\r\nFound specific version folder for dataset https:\/\/raw.githubusercontent.com\/huggingface\/nlp\/0.4.0\/datasets\/wikitext\/wikitext.py at \/home\/zeyuy\/.cache\/huggingface\/modules\/nlp_modules\/datasets\/wikitext\/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d\r\nFound script file from https:\/\/raw.githubusercontent.com\/huggingface\/nlp\/0.4.0\/datasets\/wikitext\/wikitext.py to \/home\/zeyuy\/.cache\/huggingface\/modules\/nlp_modules\/datasets\/wikitext\/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d\/wikitext.py\r\nFound dataset infos file from https:\/\/raw.githubusercontent.com\/huggingface\/nlp\/0.4.0\/datasets\/wikitext\/dataset_infos.json to \/home\/zeyuy\/.cache\/huggingface\/modules\/nlp_modules\/datasets\/wikitext\/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d\/dataset_infos.json\r\nFound metadata file for dataset https:\/\/raw.githubusercontent.com\/huggingface\/nlp\/0.4.0\/datasets\/wikitext\/wikitext.py at \/home\/zeyuy\/.cache\/huggingface\/modules\/nlp_modules\/datasets\/wikitext\/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d\/wikitext.json\r\nLoading Dataset Infos from \/home\/zeyuy\/.cache\/huggingface\/modules\/nlp_modules\/datasets\/wikitext\/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d\r\nOverwrite dataset info from restored data version.\r\nLoading Dataset info from \/home\/zeyuy\/.cache\/huggingface\/datasets\/wikitext\/wikitext-2-v1\/1.0.0\/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d\r\nReusing dataset wikitext (\/home\/zeyuy\/.cache\/huggingface\/datasets\/wikitext\/wikitext-2-v1\/1.0.0\/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d)\r\nConstructing Dataset for split train, from \/home\/zeyuy\/.cache\/huggingface\/datasets\/wikitext\/wikitext-2-v1\/1.0.0\/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/home\/zeyuy\/transformers\/examples\/language-modeling\/nlp\/src\/nlp\/load.py\", line 600, in load_dataset\r\n 
ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications)\r\n File \"\/home\/zeyuy\/transformers\/examples\/language-modeling\/nlp\/src\/nlp\/builder.py\", line 611, in as_dataset\r\n datasets = utils.map_nested(\r\n File \"\/home\/zeyuy\/transformers\/examples\/language-modeling\/nlp\/src\/nlp\/utils\/py_utils.py\", line 216, in map_nested\r\n return function(data_struct)\r\n File \"\/home\/zeyuy\/transformers\/examples\/language-modeling\/nlp\/src\/nlp\/builder.py\", line 631, in _build_single_dataset\r\n ds = self._as_dataset(\r\n File \"\/home\/zeyuy\/transformers\/examples\/language-modeling\/nlp\/src\/nlp\/builder.py\", line 704, in _as_dataset\r\n return Dataset(**dataset_kwargs)\r\n File \"\/home\/zeyuy\/transformers\/examples\/language-modeling\/nlp\/src\/nlp\/arrow_dataset.py\", line 188, in __init__\r\n self._fingerprint = generate_fingerprint(self)\r\n File \"\/home\/zeyuy\/transformers\/examples\/language-modeling\/nlp\/src\/nlp\/fingerprint.py\", line 91, in generate_fingerprint\r\n hasher.update(key)\r\n File \"\/home\/zeyuy\/transformers\/examples\/language-modeling\/nlp\/src\/nlp\/fingerprint.py\", line 57, in update\r\n self.m.update(self.hash(value).encode(\"utf-8\"))\r\n File \"\/home\/zeyuy\/transformers\/examples\/language-modeling\/nlp\/src\/nlp\/fingerprint.py\", line 53, in hash\r\n return cls.hash_default(value)\r\n File \"\/home\/zeyuy\/transformers\/examples\/language-modeling\/nlp\/src\/nlp\/fingerprint.py\", line 46, in hash_default\r\n return cls.hash_bytes(dumps(value))\r\n File \"\/home\/zeyuy\/transformers\/examples\/language-modeling\/nlp\/src\/nlp\/utils\/py_utils.py\", line 361, in dumps\r\n with _no_cache_fields(obj):\r\n File \"\/home\/zeyuy\/miniconda3\/lib\/python3.8\/contextlib.py\", line 113, in __enter__\r\n return next(self.gen)\r\n File \"\/home\/zeyuy\/transformers\/examples\/language-modeling\/nlp\/src\/nlp\/utils\/py_utils.py\", line 348, in _no_cache_fields\r\n if isinstance(obj, tr.PreTrainedTokenizerBase) and hasattr(obj, \"cache\") and isinstance(obj.cache, dict):\r\nAttributeError: module 'transformers' has no attribute 'PreTrainedTokenizerBase'\r\n\r\n```\r\n\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/598\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/597","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/597\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/597\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/597\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/597","id":697112029,"node_id":"MDU6SXNzdWU2OTcxMTIwMjk=","number":597,"title":"Indices incorrect with 
multiprocessing","user":{"login":"joeddav","id":9353833,"node_id":"MDQ6VXNlcjkzNTM4MzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9353833?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/joeddav","html_url":"https:\/\/github.com\/joeddav","followers_url":"https:\/\/api.github.com\/users\/joeddav\/followers","following_url":"https:\/\/api.github.com\/users\/joeddav\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/joeddav\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/joeddav\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/joeddav\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/joeddav\/orgs","repos_url":"https:\/\/api.github.com\/users\/joeddav\/repos","events_url":"https:\/\/api.github.com\/users\/joeddav\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/joeddav\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["I fixed a bug that could cause this issue earlier today. Could you pull the latest version and try again ?","Still the case on master.\r\nI guess we should have an offset in the multi-procs indeed (hopefully it's enough).\r\n\r\nAlso, side note is that we should add some logging before the \"test\" to say we are testing the function otherwise its confusing for the user to see two outputs I think. 
Proposal (see the \"Testing the mapped function outputs:\" lines):\r\n```\r\n>>> d.select(range(10)).map(fn, with_indices=True, batched=True, num_proc=2)\r\nDone writing 10 indices in 80 bytes .\r\nDone writing 5 indices in 41 bytes .\r\nDone writing 5 indices in 41 bytes .\r\nSpawning 2 processes\r\nTesting the mapped function outputs:\r\ninds: [0, 1]\r\ninds: [0, 1]\r\nTesting finished, running the mapped function on the dataset:\r\n#0: 0%| | 0\/1 [00:00<?, ?ba\/s]\r\ninds: [0, 1, 2, 3, 4] inds: [0, 1, 2, 3, 4] | 0\/1 [00:00<?, ?ba\/s]\r\n#0: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 1321.04ba\/s]\r\n#1: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 1841.22ba\/s]\r\nConcatenating 2 shards from multiprocessing\r\nDataset(features: {'text': Value(dtype='string', id=None), 'label': ClassLabel(num_classes=2, names=['neg', 'pos'], names_file=None, id=None)}, num_rows: 10)\r\n```"],"created_at":1599681056000,"updated_at":1599735817000,"closed_at":1599735817000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"When `num_proc` > 1, the indices argument passed to the map function is incorrect:\r\n\r\n```python\r\nd = load_dataset('imdb', split='test[:1%]')\r\n\r\ndef fn(x, inds):\r\n print(inds)\r\n return x\r\n\r\nd.select(range(10)).map(fn, with_indices=True, batched=True)\r\n# [0, 1]\r\n# [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\r\n\r\nd.select(range(10)).map(fn, with_indices=True, batched=True, num_proc=2)\r\n# [0, 1]\r\n# [0, 1]\r\n# [0, 1, 2, 3, 4]\r\n# [0, 1, 2, 3, 4]\r\n```\r\n\r\nAs you can see, the subset passed to each thread is indexed from 0 to N which doesn't reflect their positions in 
`d`.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/597\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/596","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/596\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/596\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/596\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/596","id":696928139,"node_id":"MDExOlB1bGxSZXF1ZXN0NDgyOTM5MTgw","number":596,"title":"[style\/quality] Moving to isort 5.0.0 + style\/quality on datasets and metrics","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Ready for review @lhoestq, just updated a few 156 files here"],"created_at":1599666441000,"updated_at":1599732304000,"closed_at":1599732303000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/596","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/596","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/596.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/596.patch"},"body":"Move the repo to isort 5.0.0.\r\n\r\nAlso start testing style\/quality on datasets and metrics.\r\n\r\nSpecific rule: we allow F401 (unused imports) in metrics to be able to add imports to detect early on missing dependencies.\r\nMaybe we could add this in datasets but while cleaning this I've seen many example of really unused imports in dataset so maybe it's better to have it as a line-by-line nova instead of a general rule like in metrics.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/596\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/595","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/595\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/595\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/595\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/595","id":696892304,"node_id":"MDU6SXNzdWU2OTY4OTIzMDQ=","number":595,"title":"`Dataset`\/`DatasetDict` has no attribute 'save_to_disk'","user":{"login":"sudarshan85","id":488428,"node_id":"MDQ6VXNlcjQ4ODQyOA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/488428?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sudarshan85","html_url":"https:\/\/github.com\/sudarshan85","followers_url":"https:\/\/api.github.com\/users\/sudarshan85\/followers","following_url":"https:\/\/api.github.com\/users\/sudarshan85\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sudarshan85\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sudarshan85\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sudarshan85\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sudarshan85\/orgs","repos_url":"https:\/\/api.github.com\/users\/sudarshan85\/repos","events_url":"https:\/\/api.github.com\/users\/sudarshan85\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sudarshan85\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["`pip install git+https:\/\/github.com\/huggingface\/nlp.git` should have done the job.\r\n\r\nDid you uninstall `nlp` before installing from github ?","> Did you uninstall `nlp` before installing from github ?\r\n\r\nI did not. I created a new environment and installed `nlp` directly from `github` and it worked!\r\n\r\nThanks.\r\n"],"created_at":1599663712000,"updated_at":1599668419000,"closed_at":1599668418000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\n\r\nAs the title indicates, both `Dataset` and `DatasetDict` classes don't seem to have the `save_to_disk` method. While the file [`arrow_dataset.py`](https:\/\/github.com\/huggingface\/nlp\/blob\/34bf0b03bfe03e7f77b8fec1cd48f5452c4fc7c1\/src\/nlp\/arrow_dataset.py) in the repo here has the method, the file `arrow_dataset.py` which is saved after `pip install nlp -U` in my `conda` environment DOES NOT contain the `save_to_disk` method. I even tried `pip install git+https:\/\/github.com\/huggingface\/nlp.git ` and still no luck. 
Do I need to install the library in another way?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/595\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/594","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/594\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/594\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/594\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/594","id":696816893,"node_id":"MDExOlB1bGxSZXF1ZXN0NDgyODQ1OTc5","number":594,"title":"Fix germeval url","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1599658175000,"updated_at":1599658475000,"closed_at":1599658474000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/594","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/594","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/594.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/594.patch"},"body":"Continuation of #593 but without the dummy data hack","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/594\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/593","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/593\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/593\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/593\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/593","id":696679182,"node_id":"MDExOlB1bGxSZXF1ZXN0NDgyNzI5NTgw","number":593,"title":"GermEval 2014: new download 
urls","user":{"login":"stefan-it","id":20651387,"node_id":"MDQ6VXNlcjIwNjUxMzg3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20651387?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stefan-it","html_url":"https:\/\/github.com\/stefan-it","followers_url":"https:\/\/api.github.com\/users\/stefan-it\/followers","following_url":"https:\/\/api.github.com\/users\/stefan-it\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stefan-it\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stefan-it\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stefan-it\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stefan-it\/orgs","repos_url":"https:\/\/api.github.com\/users\/stefan-it\/repos","events_url":"https:\/\/api.github.com\/users\/stefan-it\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stefan-it\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["\/cc: @vblagoje","Closing this one as #594 is merged (same changes except the dummy data hack)","Awesome @stefan-it ! @lhoestq how soon can I use the fixed GermEval dataset in HF token classification examples?","I've manually updated the script on S3, so you can actually use it right now with\r\n```python\r\nfrom nlp import load_dataset\r\n\r\ngermeval = load_dataset(\"germeval_14\")\r\n```\r\n\r\nnot sure if it's used in token classification examples already","Awesome. Not used yet but I am going to use it now. I've been working on an update for token classification examples and this was a missing piece. Thanks @stefan-it @lhoestq "],"created_at":1599646049000,"updated_at":1599661014000,"closed_at":1599658515000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/593","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/593","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/593.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/593.patch"},"body":"Hi,\r\n\r\nunfortunately, the download links for the GermEval 2014 dataset have changed: they're now located on a Google Drive.\r\n\r\nI changed the URLs and bump version from 1.0.0 to 2.0.0.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/593\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/592","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/592\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/592\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/592\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/592","id":696619986,"node_id":"MDExOlB1bGxSZXF1ZXN0NDgyNjc4MDkw","number":592,"title":"Test in memory and on 
disk","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1599641970000,"updated_at":1599659404000,"closed_at":1599659403000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/592","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/592","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/592.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/592.patch"},"body":"I added test parameters to do every test both in memory and on disk.\r\nI also found a bug in concatenate_dataset thanks to the new tests and fixed it.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/592\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/591","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/591\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/591\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/591\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/591","id":696530413,"node_id":"MDExOlB1bGxSZXF1ZXN0NDgyNjAxMzc1","number":591,"title":"fix #589 (backward 
compat)","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1599636793000,"updated_at":1599641876000,"closed_at":1599641875000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/591","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/591","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/591.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/591.patch"},"body":"Fix #589","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/591\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/590","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/590\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/590\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/590\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/590","id":696501827,"node_id":"MDU6SXNzdWU2OTY1MDE4Mjc=","number":590,"title":"The process cannot access the file because it is being used by another process (windows)","user":{"login":"saareliad","id":22762845,"node_id":"MDQ6VXNlcjIyNzYyODQ1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22762845?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/saareliad","html_url":"https:\/\/github.com\/saareliad","followers_url":"https:\/\/api.github.com\/users\/saareliad\/followers","following_url":"https:\/\/api.github.com\/users\/saareliad\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/saareliad\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/saareliad\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/saareliad\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/saareliad\/orgs","repos_url":"https:\/\/api.github.com\/users\/saareliad\/repos","events_url":"https:\/\/api.github.com\/users\/saareliad\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/saareliad\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi, which version of `nlp` are you using?\r\n\r\nBy the way we'll be releasing today a 
significant update fixing many issues (but also comprising a few breaking changes).\r\nYou can see more informations here #545 and try it by installing from source from the master branch.","I'm using version 0.4.0.\r\n\r\n","Ok, it's probably fixed on master. Otherwise if you can give me a fully self-contained exemple to reproduce the error, I can try to investigate.","I get the same behavior, on Windows, when `map`ping a function to a loaded dataset. \r\nThe error doesn't occur if I re-run the cell a second time though! \r\nI'm on version 1.0.1.","This is going to be fixed by #644 ","@saareliad I got the same issue that troubled me quite a while. Unfortunately, there are no good answers to this issue online, I tried it on Linux and that's absolutely fine. After hacking the source code, I solved this problem as follows.\r\n\r\nIn the source code file: arrow_dataset.py -> _map_single(...)\r\n\r\nchange\r\n```python\r\nif update_data and tmp_file is not None:\r\n shutil.move(tmp_file.name, cache_file_name)\r\n```\r\nto\r\n```python\r\ntmp_file.close()\r\nif update_data and tmp_file is not None:\r\n shutil.move(tmp_file.name, cache_file_name)\r\n```\r\n\r\nThen it works without needing multiple times runs to avoid the permission error.\r\nI know this solution is unusual since it changes the source code. Hopefully, the lib's contributors can have better solutions in the future.\r\n","@wangcongcong123 thanks for sharing.\n(BTW I also solved it locally on windows by putting the problematic line under try except and not using cache... On windows I just needed 1% of the dataset anyway)"],"created_at":1599634896000,"updated_at":1601042548000,"closed_at":1601042548000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi, I consistently get the following error when developing in my PC (windows 10):\r\n\r\n```\r\n train_dataset = train_dataset.map(convert_to_features, batched=True)\r\n File \"C:\\Users\\saareliad\\AppData\\Local\\Continuum\\miniconda3\\envs\\py38\\lib\\site-packages\\nlp\\arrow_dataset.py\", line 970, in map\r\n shutil.move(tmp_file.name, cache_file_name)\r\n File \"C:\\Users\\saareliad\\AppData\\Local\\Continuum\\miniconda3\\envs\\py38\\lib\\shutil.py\", line 803, in move\r\n os.unlink(src)\r\nPermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\\\Users\\\\saareliad\\\\.cache\\\\huggingface\\\\datasets\\\\squad\\\\plain_text\\\\1.0.0\\\\408a8fa46a1e2805445b793f1022e743428ca739a34809fce872f0c7f17b44ab\\\\tmpsau1bep1'\r\n\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/590\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/589","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/589\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/589\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/589\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/589","id":696488447,"node_id":"MDU6SXNzdWU2OTY0ODg0NDc=","number":589,"title":"Cannot use nlp.load_dataset text, AttributeError: module 'nlp.utils' has no attribute 
'logging'","user":{"login":"ksjae","id":17930170,"node_id":"MDQ6VXNlcjE3OTMwMTcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17930170?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ksjae","html_url":"https:\/\/github.com\/ksjae","followers_url":"https:\/\/api.github.com\/users\/ksjae\/followers","following_url":"https:\/\/api.github.com\/users\/ksjae\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ksjae\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ksjae\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ksjae\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ksjae\/orgs","repos_url":"https:\/\/api.github.com\/users\/ksjae\/repos","events_url":"https:\/\/api.github.com\/users\/ksjae\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ksjae\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1599634013000,"updated_at":1599641874000,"closed_at":1599641874000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/root\/anaconda3\/envs\/pytorch\/lib\/python3.7\/site-packages\/nlp\/load.py\", line 533, in load_dataset\r\n builder_cls = import_main_class(module_path, dataset=True)\r\n File \"\/root\/anaconda3\/envs\/pytorch\/lib\/python3.7\/site-packages\/nlp\/load.py\", line 61, in import_main_class\r\n module = importlib.import_module(module_path)\r\n File \"\/root\/anaconda3\/envs\/pytorch\/lib\/python3.7\/importlib\/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1006, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 983, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 967, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 677, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 728, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"\/root\/anaconda3\/envs\/pytorch\/lib\/python3.7\/site-packages\/nlp\/datasets\/text\/5dc629379536c4037d9c2063e1caa829a1676cf795f8e030cd90a537eba20c08\/text.py\", line 9, in <module>\r\n logger = nlp.utils.logging.get_logger(__name__)\r\nAttributeError: module 'nlp.utils' has no attribute 'logging'\r\n```\r\n\r\nOccurs on the following code, or any code including the load_dataset('text'):\r\n```\r\ndataset = load_dataset(\"text\", data_files=file_path, split=\"train\")\r\ndataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True,\r\n truncation=True, max_length=args.block_size), batched=True)\r\ndataset.set_format(type='torch', columns=['input_ids'])\r\nreturn dataset\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/589\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/588","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/588\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/588\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/588\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/588","id":695249809,"node_id":"MDExOlB1bGxSZXF1ZXN0NDgxNTE5NzQx","number":588,"title":"Support pathlike obj in load dataset ","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1599495201000,"updated_at":1599551119000,"closed_at":1599551118000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/588","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/588","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/588.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/588.patch"},"body":"Fix #582 \r\n\r\n(I recreated the PR, I got an issue with git)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/588\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/587","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/587\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/587\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/587\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/587","id":695246018,"node_id":"MDExOlB1bGxSZXF1ZXN0NDgxNTE2Mzkx","number":587,"title":"Support pathlike obj in load 
dataset","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1599494956000,"updated_at":1599495035000,"closed_at":1599495035000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/587","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/587","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/587.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/587.patch"},"body":"Fix #582 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/587\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/586","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/586\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/586\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/586\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/586","id":695237999,"node_id":"MDExOlB1bGxSZXF1ZXN0NDgxNTA5MzU1","number":586,"title":"Better message when data files is 
empty","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1599494397000,"updated_at":1599642009000,"closed_at":1599642008000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/586","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/586","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/586.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/586.patch"},"body":"Fix #581 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/586\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/585","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/585\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/585\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/585\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/585","id":695191209,"node_id":"MDExOlB1bGxSZXF1ZXN0NDgxNDY4NTM4","number":585,"title":"Fix select for pyarrow < 
1.0.0","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1599490972000,"updated_at":1599550997000,"closed_at":1599550995000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/585","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/585","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/585.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/585.patch"},"body":"Fix #583 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/585\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/584","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/584\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/584\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/584\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/584","id":695186652,"node_id":"MDExOlB1bGxSZXF1ZXN0NDgxNDY0NjEz","number":584,"title":"Use github versioning","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I noticed that datasets like `cnn_dailymail` need the `version` parameter to be passed to its `config_kwargs`.\r\nShall we rename the `version` paramater in `load_dataset` ? 
Maybe `repo_version` or `script_version` ?"],"created_at":1599490695000,"updated_at":1599658655000,"closed_at":1599658654000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/584","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/584","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/584.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/584.patch"},"body":"Right now dataset scripts and metrics are downloaded from S3 which is in sync with master. It means that it's not currently possible to pin the dataset\/metric script version.\r\n\r\nTo fix that I changed the download url from S3 to github, and adding a `version` parameter in `load_dataset` and `load_metric` to pin a certain version of the lib, as in #562 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/584\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/583","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/583\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/583\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/583\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/583","id":695166265,"node_id":"MDU6SXNzdWU2OTUxNjYyNjU=","number":583,"title":"ArrowIndexError on Dataset.select","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1599489389000,"updated_at":1599550995000,"closed_at":1599550995000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"If the indices table consists in several chunks, then `dataset.select` results in an `ArrowIndexError` error for pyarrow < 1.0.0\r\n\r\nExample:\r\n\r\n```python\r\nfrom nlp import load_dataset\r\n\r\nmnli = load_dataset(\"glue\", \"mnli\", split=\"train\")\r\nshuffled = mnli.shuffle(seed=42)\r\nmnli.select(list(range(len(mnli))))\r\n```\r\n\r\nraises:\r\n```python\r\n---------------------------------------------------------------------------\r\nArrowIndexError Traceback (most recent call last)\r\n<ipython-input-64-006a5d38d418> in <module>\r\n----> 1 mnli.shuffle(seed=42).select(list(range(len(mnli))))\r\n\r\n~\/Desktop\/hf\/nlp\/src\/nlp\/fingerprint.py in wrapper(*args, **kwargs)\r\n 
161 # Call actual function\r\n 162 \r\n--> 163 out = func(self, *args, **kwargs)\r\n 164 \r\n 165 # Update fingerprint of in-place transforms + update in-place history of transforms\r\n\r\n~\/Desktop\/hf\/nlp\/src\/nlp\/arrow_dataset.py in select(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint)\r\n 1653 if self._indices is not None:\r\n 1654 if PYARROW_V0:\r\n-> 1655 indices_array = self._indices.column(0).chunk(0).take(indices_array)\r\n 1656 else:\r\n 1657 indices_array = self._indices.column(0).take(indices_array)\r\n\r\n~\/.virtualenvs\/hf-datasets\/lib\/python3.7\/site-packages\/pyarrow\/array.pxi in pyarrow.lib.Array.take()\r\n\r\n~\/.virtualenvs\/hf-datasets\/lib\/python3.7\/site-packages\/pyarrow\/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowIndexError: take index out of bounds\r\n```\r\n\r\nThis is because the `take` method is only done on the first chunk which only contains 1000 elements by default (mnli has ~400 000 elements).\r\n\r\nShall we change that to use \r\n```python\r\npa.concat_tables(self._indices._indices.slice(i, 1) for i in indices_array)\r\n```\r\ninstead of `take` ? @thomwolf ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/583\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/582","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/582\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/582\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/582\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/582","id":695126456,"node_id":"MDU6SXNzdWU2OTUxMjY0NTY=","number":582,"title":"Allow for PathLike objects","user":{"login":"BramVanroy","id":2779410,"node_id":"MDQ6VXNlcjI3Nzk0MTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2779410?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/BramVanroy","html_url":"https:\/\/github.com\/BramVanroy","followers_url":"https:\/\/api.github.com\/users\/BramVanroy\/followers","following_url":"https:\/\/api.github.com\/users\/BramVanroy\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/BramVanroy\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/BramVanroy\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/BramVanroy\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/BramVanroy\/orgs","repos_url":"https:\/\/api.github.com\/users\/BramVanroy\/repos","events_url":"https:\/\/api.github.com\/users\/BramVanroy\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/BramVanroy\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1599486891000,"updated_at":1599551117000,"closed_at":1599551117000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Using PathLike objects as input for `load_dataset` does not seem to work. 
The following will throw an error.\r\n\r\n```python\r\nfiles = list(Path(r\"D:\\corpora\\yourcorpus\").glob(\"*.txt\"))\r\ndataset = load_dataset(\"text\", data_files=files)\r\n```\r\n\r\nTraceback:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"C:\/dev\/python\/dutch-simplification\/main.py\", line 7, in <module>\r\n dataset = load_dataset(\"text\", data_files=files)\r\n File \"C:\\Users\\bramv\\.virtualenvs\\dutch-simplification-nbNdqK9u\\lib\\site-packages\\nlp\\load.py\", line 548, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"C:\\Users\\bramv\\.virtualenvs\\dutch-simplification-nbNdqK9u\\lib\\site-packages\\nlp\\builder.py\", line 470, in download_and_prepare\r\n self._save_info()\r\n File \"C:\\Users\\bramv\\.virtualenvs\\dutch-simplification-nbNdqK9u\\lib\\site-packages\\nlp\\builder.py\", line 564, in _save_info\r\n self.info.write_to_directory(self._cache_dir)\r\n File \"C:\\Users\\bramv\\.virtualenvs\\dutch-simplification-nbNdqK9u\\lib\\site-packages\\nlp\\info.py\", line 149, in write_to_directory\r\n self._dump_info(f)\r\n File \"C:\\Users\\bramv\\.virtualenvs\\dutch-simplification-nbNdqK9u\\lib\\site-packages\\nlp\\info.py\", line 156, in _dump_info\r\n file.write(json.dumps(asdict(self)).encode(\"utf-8\"))\r\n File \"c:\\users\\bramv\\appdata\\local\\programs\\python\\python38\\lib\\json\\__init__.py\", line 231, in dumps\r\n return _default_encoder.encode(obj)\r\n File \"c:\\users\\bramv\\appdata\\local\\programs\\python\\python38\\lib\\json\\encoder.py\", line 199, in encode\r\n chunks = self.iterencode(o, _one_shot=True)\r\n File \"c:\\users\\bramv\\appdata\\local\\programs\\python\\python38\\lib\\json\\encoder.py\", line 257, in iterencode\r\n return _iterencode(o, 0)\r\nTypeError: keys must be str, int, float, bool or None, not WindowsPath\r\n```\r\n\r\nWe have to cast to a string explicitly to make this work. 
It would be nicer if we could actually use PathLike objects.\r\n\r\n```python\r\nfiles = [str(f) for f in Path(r\"D:\\corpora\\wablieft\").glob(\"*.txt\")]\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/582\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/581","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/581\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/581\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/581\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/581","id":695120517,"node_id":"MDU6SXNzdWU2OTUxMjA1MTc=","number":581,"title":"Better error message when input file does not exist","user":{"login":"BramVanroy","id":2779410,"node_id":"MDQ6VXNlcjI3Nzk0MTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2779410?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/BramVanroy","html_url":"https:\/\/github.com\/BramVanroy","followers_url":"https:\/\/api.github.com\/users\/BramVanroy\/followers","following_url":"https:\/\/api.github.com\/users\/BramVanroy\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/BramVanroy\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/BramVanroy\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/BramVanroy\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/BramVanroy\/orgs","repos_url":"https:\/\/api.github.com\/users\/BramVanroy\/repos","events_url":"https:\/\/api.github.com\/users\/BramVanroy\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/BramVanroy\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1599486479000,"updated_at":1599642007000,"closed_at":1599642007000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"In the following scenario, when `data_files` is an empty list, the stack trace and error message could be improved. 
This can probably be solved by checking for each file whether it actually exists and\/or whether the argument is not false-y.\r\n\r\n```python\r\ndataset = load_dataset(\"text\", data_files=[])\r\n```\r\n\r\nExample error trace.\r\n\r\n```\r\nUsing custom data configuration default\r\nDownloading and preparing dataset text\/default-d18f9b6611eb8e16 (download: Unknown size, generated: Unknown size, post-processed: Unknown sizetotal: Unknown size) to C:\\Users\\bramv\\.cache\\huggingface\\datasets\\text\\default-d18f9b6611eb8e16\\0.0.0\\3a79870d85f1982d6a2af884fde86a71c771747b4b161fd302d28ad22adf985b...\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\bramv\\.virtualenvs\\dutch-simplification-nbNdqK9u\\lib\\site-packages\\nlp\\builder.py\", line 424, in incomplete_dir\r\n yield tmp_dir\r\n File \"C:\\Users\\bramv\\.virtualenvs\\dutch-simplification-nbNdqK9u\\lib\\site-packages\\nlp\\builder.py\", line 462, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"C:\\Users\\bramv\\.virtualenvs\\dutch-simplification-nbNdqK9u\\lib\\site-packages\\nlp\\builder.py\", line 537, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"C:\\Users\\bramv\\.virtualenvs\\dutch-simplification-nbNdqK9u\\lib\\site-packages\\nlp\\builder.py\", line 813, in _prepare_split\r\n num_examples, num_bytes = writer.finalize()\r\n File \"C:\\Users\\bramv\\.virtualenvs\\dutch-simplification-nbNdqK9u\\lib\\site-packages\\nlp\\arrow_writer.py\", line 217, in finalize\r\n self.pa_writer.close()\r\nAttributeError: 'NoneType' object has no attribute 'close'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"C:\/dev\/python\/dutch-simplification\/main.py\", line 7, in <module>\r\n dataset = load_dataset(\"text\", data_files=files)\r\n File \"C:\\Users\\bramv\\.virtualenvs\\dutch-simplification-nbNdqK9u\\lib\\site-packages\\nlp\\load.py\", line 548, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"C:\\Users\\bramv\\.virtualenvs\\dutch-simplification-nbNdqK9u\\lib\\site-packages\\nlp\\builder.py\", line 470, in download_and_prepare\r\n self._save_info()\r\n File \"c:\\users\\bramv\\appdata\\local\\programs\\python\\python38\\lib\\contextlib.py\", line 131, in __exit__\r\n self.gen.throw(type, value, traceback)\r\n File \"C:\\Users\\bramv\\.virtualenvs\\dutch-simplification-nbNdqK9u\\lib\\site-packages\\nlp\\builder.py\", line 430, in incomplete_dir\r\n shutil.rmtree(tmp_dir)\r\n File \"c:\\users\\bramv\\appdata\\local\\programs\\python\\python38\\lib\\shutil.py\", line 737, in rmtree\r\n return _rmtree_unsafe(path, onerror)\r\n File \"c:\\users\\bramv\\appdata\\local\\programs\\python\\python38\\lib\\shutil.py\", line 615, in _rmtree_unsafe\r\n onerror(os.unlink, fullname, sys.exc_info())\r\n File \"c:\\users\\bramv\\appdata\\local\\programs\\python\\python38\\lib\\shutil.py\", line 613, in _rmtree_unsafe\r\n os.unlink(fullname)\r\nPermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\\\Users\\\\bramv\\\\.cache\\\\huggingface\\\\datasets\\\\text\\\\default-d18f9b6611eb8e16\\\\0.0.0\\\\3a79870d85f1982d6a2af884fde86a71c771747b4b161fd302d28ad22adf985b.incomplete\\\\text-train.arrow'\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/581\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/580","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/580\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/580\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/580\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/580","id":694954551,"node_id":"MDU6SXNzdWU2OTQ5NTQ1NTE=","number":580,"title":"nlp re-creates already-there caches when using a script, but not within a shell","user":{"login":"TevenLeScao","id":26709476,"node_id":"MDQ6VXNlcjI2NzA5NDc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26709476?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/TevenLeScao","html_url":"https:\/\/github.com\/TevenLeScao","followers_url":"https:\/\/api.github.com\/users\/TevenLeScao\/followers","following_url":"https:\/\/api.github.com\/users\/TevenLeScao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/TevenLeScao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/TevenLeScao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/TevenLeScao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/TevenLeScao\/orgs","repos_url":"https:\/\/api.github.com\/users\/TevenLeScao\/repos","events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Couln't reproduce on my side :\/ \r\nlet me know if you manage to reproduce on another env (colab for example)","Fixed with a clean re-install!"],"created_at":1599474230000,"updated_at":1599491949000,"closed_at":1599488801000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"`nlp` keeps creating new caches for the same file when launching `filter` from a script, and behaves correctly from within the shell.\r\n\r\nExample: try running\r\n\r\n```\r\nimport nlp\r\n\r\nhans_easy_data = nlp.load_dataset('hans', split=\"validation\").filter(lambda x: x['label'] == 0)\r\nhans_hard_data = nlp.load_dataset('hans', split=\"validation\").filter(lambda x: x['label'] == 1)\r\n```\r\n\r\ntwice. If launched from a `file.py` script, the cache will be re-created the second time. 
If launched as 3 shell\/`ipython` commands, `nlp` will correctly re-use the cache.\r\nAs observed with @lhoestq.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/580\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/579","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/579\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/579\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/579\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/579","id":694947599,"node_id":"MDExOlB1bGxSZXF1ZXN0NDgxMjU1OTI5","number":579,"title":"Doc metrics","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1599473724000,"updated_at":1599743171000,"closed_at":1599743170000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/579","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/579","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/579.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/579.patch"},"body":"Adding documentation on metrics loading\/using\/sharing","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/579\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/578","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/578\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/578\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/578\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/578","id":694849940,"node_id":"MDExOlB1bGxSZXF1ZXN0NDgxMTczNDE0","number":578,"title":"Add CommonGen 
Dataset","user":{"login":"JetRunner","id":22514219,"node_id":"MDQ6VXNlcjIyNTE0MjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22514219?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JetRunner","html_url":"https:\/\/github.com\/JetRunner","followers_url":"https:\/\/api.github.com\/users\/JetRunner\/followers","following_url":"https:\/\/api.github.com\/users\/JetRunner\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JetRunner\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JetRunner\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JetRunner\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JetRunner\/orgs","repos_url":"https:\/\/api.github.com\/users\/JetRunner\/repos","events_url":"https:\/\/api.github.com\/users\/JetRunner\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JetRunner\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1599466637000,"updated_at":1599479429000,"closed_at":1599479347000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/578","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/578","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/578.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/578.patch"},"body":"CC Authors:\r\n@yuchenlin @MichaelZhouwang","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/578\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/577","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/577\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/577\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/577\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/577","id":694607148,"node_id":"MDU6SXNzdWU2OTQ2MDcxNDg=","number":577,"title":"Some languages in wikipedia dataset are not loading","user":{"login":"gaguilar","id":5833357,"node_id":"MDQ6VXNlcjU4MzMzNTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5833357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gaguilar","html_url":"https:\/\/github.com\/gaguilar","followers_url":"https:\/\/api.github.com\/users\/gaguilar\/followers","following_url":"https:\/\/api.github.com\/users\/gaguilar\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gaguilar\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gaguilar\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gaguilar\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gaguilar\/orgs","repos_url":"https:\/\/api.github.com\/users\/gaguilar\/repos","events_url":"https:\/\/api.github.com\/users\/gaguilar\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gaguilar\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Some wikipedia languages have already been processed by us and are hosted on our google 
storage. This is the case for \"fr\" and \"en\" for example.\r\n\r\nFor other smaller languages (in terms of bytes), they are directly downloaded and parsed from the wikipedia dump site.\r\nParsing can take some time for languages with hundreds of MB of xml.\r\n\r\nLet me know if you encounter an error or if you feel that is is taking too long for you.\r\nWe could process those that really take too much time","Ok, thanks for clarifying, that makes sense. I will time those examples later today and post back here.\r\n\r\nAlso, it seems that not all dumps should use the same date. For instance, I was checking the Spanish dump doing the following:\r\n```\r\ndata = nlp.load_dataset('wikipedia', '20200501.es', beam_runner='DirectRunner', split='train')\r\n```\r\n\r\nI got the error below because this URL does not exist: https:\/\/dumps.wikimedia.org\/eswiki\/20200501\/dumpstatus.json. So I checked the actual available dates here https:\/\/dumps.wikimedia.org\/eswiki\/ and there is no 20200501. If one tries for a date available in the link, then the nlp library does not allow such a request because is not in the list of expected datasets.\r\n\r\n```\r\nDownloading and preparing dataset wikipedia\/20200501.es (download: Unknown size, generated: Unknown size, post-processed: Unknown sizetotal: Unknown size) to \/home\/gaguilar\/.cache\/huggingface\/datasets\/wikipedia\/20200501.es\/1.0.0\/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50...\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/home\/gaguilar\/.conda\/envs\/pytorch\/lib\/python3.8\/site-packages\/nlp\/load.py\", line 548, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/home\/gaguilar\/.conda\/envs\/pytorch\/lib\/python3.8\/site-packages\/nlp\/builder.py\", line 462, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/home\/gaguilar\/.conda\/envs\/pytorch\/lib\/python3.8\/site-packages\/nlp\/builder.py\", line 965, in _download_and_prepare\r\n super(BeamBasedBuilder, self)._download_and_prepare(\r\n File \"\/home\/gaguilar\/.conda\/envs\/pytorch\/lib\/python3.8\/site-packages\/nlp\/builder.py\", line 518, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"\/home\/gaguilar\/.conda\/envs\/pytorch\/lib\/python3.8\/site-packages\/nlp\/datasets\/wikipedia\/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50\/wikipedia.py\", line 422, in _split_generators\r\n downloaded_files = dl_manager.download_and_extract({\"info\": info_url})\r\n File \"\/home\/gaguilar\/.conda\/envs\/pytorch\/lib\/python3.8\/site-packages\/nlp\/utils\/download_manager.py\", line 220, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"\/home\/gaguilar\/.conda\/envs\/pytorch\/lib\/python3.8\/site-packages\/nlp\/utils\/download_manager.py\", line 155, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"\/home\/gaguilar\/.conda\/envs\/pytorch\/lib\/python3.8\/site-packages\/nlp\/utils\/py_utils.py\", line 163, in map_nested\r\n return {\r\n File \"\/home\/gaguilar\/.conda\/envs\/pytorch\/lib\/python3.8\/site-packages\/nlp\/utils\/py_utils.py\", line 164, in <dictcomp>\r\n k: map_nested(\r\n File \"\/home\/gaguilar\/.conda\/envs\/pytorch\/lib\/python3.8\/site-packages\/nlp\/utils\/py_utils.py\", line 191, in map_nested\r\n return function(data_struct)\r\n File 
\"\/home\/gaguilar\/.conda\/envs\/pytorch\/lib\/python3.8\/site-packages\/nlp\/utils\/download_manager.py\", line 156, in <lambda>\r\n lambda url: cached_path(url, download_config=self._download_config,), url_or_urls,\r\n File \"\/home\/gaguilar\/.conda\/envs\/pytorch\/lib\/python3.8\/site-packages\/nlp\/utils\/file_utils.py\", line 191, in cached_path\r\n output_path = get_from_cache(\r\n File \"\/home\/gaguilar\/.conda\/envs\/pytorch\/lib\/python3.8\/site-packages\/nlp\/utils\/file_utils.py\", line 356, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https:\/\/dumps.wikimedia.org\/eswiki\/20200501\/dumpstatus.json\r\n```","Thanks ! This will be very helpful.\r\n\r\nAbout the date issue, I think it's possible to use another date with\r\n\r\n```python\r\nload_dataset(\"wikipedia\", language=\"es\", date=\"...\", beam_runner=\"...\")\r\n```\r\n\r\nHowever we've not processed wikipedia dumps for other dates than 20200501 (yet ?)\r\n\r\nOne more thing that is specific to 20200501.es: it was available once but the `mwparserfromhell` was not able to parse it for some reason, so we didn't manage to get a processed version of 20200501.es (see #321 )","Cool! Thanks for the trick regarding different dates!\r\n\r\nI checked the download\/processing time for retrieving the Arabic Wikipedia dump, and it took about 3.2 hours. I think that this may be a bit impractical when it comes to working with multiple languages (although I understand that storing those datasets in your Google storage may not be very appealing either). \r\n\r\nFor the record, here's what I did:\r\n```python\r\nimport nlp\r\nimport time\r\n\r\ndef timeit(filename):\r\n elapsed = time.time()\r\n data = nlp.load_dataset('wikipedia', filename, beam_runner='DirectRunner', split='train')\r\n elapsed = time.time() - elapsed\r\n print(f\"Loading the '{filename}' data took {elapsed:,.1f} seconds...\")\r\n return data\r\n\r\ndata = timeit('20200501.ar')\r\n```\r\n\r\nHere's the output:\r\n```\r\nDownloading: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 13.0k\/13.0k [00:00<00:00, 8.34MB\/s]\r\nDownloading: 
100%| ... | 28.7k\/28.7k [00:00<00:00, 954kB\/s]\r\nDownloading and preparing dataset wikipedia\/20200501.ar (download: Unknown size, generated: Unknown size, post-processed: Unknown sizetotal: Unknown size) to \/home\/gaguil20\/.cache\/huggingface\/datasets\/wikipedia\/20200501.ar\/1.0.0\/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50...\r\n[... additional download progress bars (47.4k, 79.8M, 171M, 103M, 227M, 140M, 160M, 97.5M) omitted ...]\r\nDownloading: 
100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 222M\/222M [00:42<00:00, 5.21MB\/s]\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [03:16<00:00, 196.39s\/sources]\r\nDataset wikipedia downloaded and prepared to \/home\/gaguil20\/.cache\/huggingface\/datasets\/wikipedia\/20200501.ar\/1.0.0\/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50. Subsequent calls will reuse this data.\r\nLoading the '20200501.ar' data took 11,582.7 seconds...\r\n````","> About the date issue, I think it's possible to use another date with\r\n> ```python\r\n> load_dataset(\"wikipedia\", language=\"es\", date=\"...\", beam_runner=\"...\")\r\n> ```\r\n\r\nI tried your suggestion about the date and the function does not accept the language and date keywords. I tried both on `nlp` v0.4 and the new `datasets` library (v1.0.2):\r\n```\r\nload_dataset(\"wikipedia\", language=\"es\", date=\"20200601\", beam_runner='DirectRunner', split='train')\r\n```\r\nFor now, my quick workaround to keep things moving was to simply change the date inside the library at this line: [https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/wikipedia\/wikipedia.py#L403](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/datasets\/wikipedia\/wikipedia.py#L403)\r\n\r\nNote that the date and languages are valid: [https:\/\/dumps.wikimedia.org\/eswiki\/20200601\/dumpstatus.json](https:\/\/dumps.wikimedia.org\/eswiki\/20200601\/dumpstatus.json)\r\n\r\nAny suggestion is welcome :) @lhoestq \r\n\r\n\r\n## **[UPDATE]**\r\n\r\nThe workaround I mentioned fetched the data, but then I faced another issue (even the log says to report this as bug):\r\n```\r\nERROR:root:mwparserfromhell ParseError: This is a bug and should be reported. 
Info: C tokenizer exited with non-empty token stack.\r\n```\r\n\r\nHere's the full stack (which says that there is a key error caused by this key: `KeyError: '000nbsp'`):\r\n\r\n```Downloading and preparing dataset wikipedia\/20200601.es (download: Unknown size, generated: Unknown size, post-processed: Unknown sizetotal: Unknown size) to \/home\/gustavoag\/.cache\/huggingface\/datasets\/wikipedia\/20200601.es\/1.0.0\/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50...\r\nDownloading: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 74.7k\/74.7k [00:00<00:00, 1.53MB\/s]\r\nDownloading: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 232M\/232M [00:48<00:00, 4.75MB\/s]\r\nDownloading: 
100%| ... | 442M\/442M [01:39<00:00, 4.44MB\/s]\r\n[... additional download progress bars (173M, 344M, 541M, 476M, 545M) omitted ...]\r\nDownloading: 
100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 299M\/299M [01:01<00:00, 4.89MB\/s]\r\nDownloading: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 9.60M\/9.60M [00:01<00:00, 4.84MB\/s]\r\nDownloading: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 423M\/423M [01:36<00:00, 4.38MB\/s]\r\nWARNING:apache_beam.options.pipeline_options:Discarding unparseable args: ['--lang', 'es', '--date', '20200601', '--tokenizer', 'bert-base-multilingual-cased', '--cache', 'train', 'valid', '--max_dataset_length', '200000', '10000']\r\n\r\nERROR:root:mwparserfromhell ParseError: This is a bug and should be reported. 
Info: C tokenizer exited with non-empty token stack.\r\nERROR:root:mwparserfromhell ParseError: This is a bug and should be reported. Info: C tokenizer exited with non-empty token stack.\r\nERROR:root:mwparserfromhell ParseError: This is a bug and should be reported. Info: C tokenizer exited with non-empty token stack.\r\nERROR:root:mwparserfromhell ParseError: This is a bug and should be reported. Info: C tokenizer exited with non-empty token stack.\r\nTraceback (most recent call last):\r\n File \"apache_beam\/runners\/common.py\", line 961, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\/runners\/common.py\", line 553, in apache_beam.runners.common.SimpleInvoker.invoke_process\r\n File \"apache_beam\/runners\/common.py\", line 1095, in apache_beam.runners.common._OutputProcessor.process_outputs\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/site-packages\/nlp\/datasets\/wikipedia\/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50\/wikipedia.py\", line 500, in _clean_content\r\n text = _parse_and_clean_wikicode(raw_content, parser=mwparserfromhell)\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/site-packages\/nlp\/datasets\/wikipedia\/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50\/wikipedia.py\", line 556, in _parse_and_clean_wikicode\r\n section_text.append(section.strip_code().strip())\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/site-packages\/mwparserfromhell\/wikicode.py\", line 643, in strip_code\r\n stripped = node.__strip__(**kwargs)\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/site-packages\/mwparserfromhell\/nodes\/html_entity.py\", line 63, in __strip__\r\n return self.normalize()\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/site-packages\/mwparserfromhell\/nodes\/html_entity.py\", line 178, in normalize\r\n return chrfunc(htmlentities.name2codepoint[self.value])\r\nKeyError: '000nbsp'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/runpy.py\", line 194, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"\/raid\/data\/gustavoag\/projects\/char2subword\/research\/preprocessing\/split_wiki.py\", line 96, in <module>\r\n main()\r\n File \"\/raid\/data\/gustavoag\/projects\/char2subword\/research\/preprocessing\/split_wiki.py\", line 65, in main\r\n data = nlp.load_dataset('wikipedia', f'{args.date}.{args.lang}', beam_runner='DirectRunner', split='train')\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/site-packages\/nlp\/load.py\", line 548, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/site-packages\/nlp\/builder.py\", line 462, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/site-packages\/nlp\/builder.py\", line 969, in _download_and_prepare\r\n pipeline_results = pipeline.run()\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/site-packages\/apache_beam\/pipeline.py\", line 534, in run\r\n return self.runner.run_pipeline(self, self._options)\r\n File 
\"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/site-packages\/apache_beam\/runners\/direct\/direct_runner.py\", line 119, in run_pipeline\r\n return runner.run_pipeline(pipeline, options)\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/site-packages\/apache_beam\/runners\/portability\/fn_api_runner\/fn_runner.py\", line 172, in run_pipeline\r\n self._latest_run_result = self.run_via_runner_api(\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/site-packages\/apache_beam\/runners\/portability\/fn_api_runner\/fn_runner.py\", line 183, in run_via_runner_api\r\n return self.run_stages(stage_context, stages)\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/site-packages\/apache_beam\/runners\/portability\/fn_api_runner\/fn_runner.py\", line 338, in run_stages\r\n stage_results = self._run_stage(\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/site-packages\/apache_beam\/runners\/portability\/fn_api_runner\/fn_runner.py\", line 512, in _run_stage\r\n last_result, deferred_inputs, fired_timers = self._run_bundle(\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/site-packages\/apache_beam\/runners\/portability\/fn_api_runner\/fn_runner.py\", line 556, in _run_bundle\r\n result, splits = bundle_manager.process_bundle(\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/site-packages\/apache_beam\/runners\/portability\/fn_api_runner\/fn_runner.py\", line 940, in process_bundle\r\n for result, split_result in executor.map(execute, zip(part_inputs, # pylint: disable=zip-builtin-not-iterating\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/concurrent\/futures\/_base.py\", line 611, in result_iterator\r\n yield fs.pop().result()\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/concurrent\/futures\/_base.py\", line 439, in result\r\n return self.__get_result()\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/concurrent\/futures\/_base.py\", line 388, in __get_result\r\n raise self._exception\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/site-packages\/apache_beam\/utils\/thread_pool_executor.py\", line 44, in run\r\n self._future.set_result(self._fn(*self._fn_args, **self._fn_kwargs))\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/site-packages\/apache_beam\/runners\/portability\/fn_api_runner\/fn_runner.py\", line 932, in execute\r\n return bundle_manager.process_bundle(\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/site-packages\/apache_beam\/runners\/portability\/fn_api_runner\/fn_runner.py\", line 837, in process_bundle\r\n result_future = self._worker_handler.control_conn.push(process_bundle_req)\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/site-packages\/apache_beam\/runners\/portability\/fn_api_runner\/worker_handlers.py\", line 352, in push\r\n response = self.worker.do_instruction(request)\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/site-packages\/apache_beam\/runners\/worker\/sdk_worker.py\", line 479, in do_instruction\r\n return getattr(self, request_type)(\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/site-packages\/apache_beam\/runners\/worker\/sdk_worker.py\", line 515, in process_bundle\r\n bundle_processor.process_bundle(instruction_id))\r\n File 
\"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/site-packages\/apache_beam\/runners\/worker\/bundle_processor.py\", line 977, in process_bundle\r\n input_op_by_transform_id[element.transform_id].process_encoded(\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/site-packages\/apache_beam\/runners\/worker\/bundle_processor.py\", line 218, in process_encoded\r\n self.output(decoded_value)\r\n File \"apache_beam\/runners\/worker\/operations.py\", line 330, in apache_beam.runners.worker.operations.Operation.output\r\n File \"apache_beam\/runners\/worker\/operations.py\", line 332, in apache_beam.runners.worker.operations.Operation.output\r\n File \"apache_beam\/runners\/worker\/operations.py\", line 195, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive\r\n File \"apache_beam\/runners\/worker\/operations.py\", line 670, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\/runners\/worker\/operations.py\", line 671, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\/runners\/common.py\", line 963, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\/runners\/common.py\", line 1030, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File \"apache_beam\/runners\/common.py\", line 961, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\/runners\/common.py\", line 553, in apache_beam.runners.common.SimpleInvoker.invoke_process\r\n File \"apache_beam\/runners\/common.py\", line 1122, in apache_beam.runners.common._OutputProcessor.process_outputs\r\n File \"apache_beam\/runners\/worker\/operations.py\", line 195, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive\r\n File \"apache_beam\/runners\/worker\/operations.py\", line 670, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\/runners\/worker\/operations.py\", line 671, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\/runners\/common.py\", line 963, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\/runners\/common.py\", line 1030, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File \"apache_beam\/runners\/common.py\", line 961, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\/runners\/common.py\", line 553, in apache_beam.runners.common.SimpleInvoker.invoke_process\r\n File \"apache_beam\/runners\/common.py\", line 1122, in apache_beam.runners.common._OutputProcessor.process_outputs\r\n File \"apache_beam\/runners\/worker\/operations.py\", line 195, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive\r\n File \"apache_beam\/runners\/worker\/operations.py\", line 670, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\/runners\/worker\/operations.py\", line 671, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\/runners\/common.py\", line 963, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\/runners\/common.py\", line 1045, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/site-packages\/future\/utils\/__init__.py\", line 446, in raise_with_traceback\r\n raise exc.with_traceback(traceback)\r\n File \"apache_beam\/runners\/common.py\", line 961, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\/runners\/common.py\", 
line 553, in apache_beam.runners.common.SimpleInvoker.invoke_process\r\n File \"apache_beam\/runners\/common.py\", line 1095, in apache_beam.runners.common._OutputProcessor.process_outputs\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/site-packages\/nlp\/datasets\/wikipedia\/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50\/wikipedia.py\", line 500, in _clean_content\r\n text = _parse_and_clean_wikicode(raw_content, parser=mwparserfromhell)\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/site-packages\/nlp\/datasets\/wikipedia\/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50\/wikipedia.py\", line 556, in _parse_and_clean_wikicode\r\n section_text.append(section.strip_code().strip())\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/site-packages\/mwparserfromhell\/wikicode.py\", line 643, in strip_code\r\n stripped = node.__strip__(**kwargs)\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/site-packages\/mwparserfromhell\/nodes\/html_entity.py\", line 63, in __strip__\r\n return self.normalize()\r\n File \"\/home\/gustavoag\/anaconda3\/envs\/pytorch\/lib\/python3.8\/site-packages\/mwparserfromhell\/nodes\/html_entity.py\", line 178, in normalize\r\n return chrfunc(htmlentities.name2codepoint[self.value])\r\nKeyError: \"000nbsp [while running 'train\/Clean content']\"```","@lhoestq Any updates on this? I have similar issues with the Romanian dump, tnx.","Hey @gaguilar ,\r\n\r\nI just found the [\"char2subword\" paper](https:\/\/arxiv.org\/pdf\/2010.12730.pdf) and I'm really interested in trying it out on own vocabs\/datasets like for historical texts (I've already [trained some lms](https:\/\/github.com\/stefan-it\/europeana-bert) on newspaper articles with OCR errors).\r\n\r\nDo you plan to release the code for your paper or is it possible to get the implementation \ud83e\udd14 Many thanks :hugs: ","Hi @stefan-it! Thanks for your interest in our work! We do plan to release the code, but we will make it available once the paper has been published at a conference. Sorry for the inconvenience!\r\n\r\nHi @lhoestq, do you have any insights for this issue by any chance? Thanks!","This is an issue on the `mwparserfromhell` side. You could try to update `mwparserfromhell` and see if it fixes the issue. If it doesn't we'll have to create an issue on their repo for them to fix it.\r\nBut first let's see if the latest version of `mwparserfromhell` does the job.","I think the work around as suggested in the issue [#886] is not working for several languages, such as `id`. 
For example, I tried all the dates to download dataset for `id` langauge from the following link: (https:\/\/github.com\/huggingface\/datasets\/pull\/886) [https:\/\/dumps.wikimedia.org\/idwiki\/](https:\/\/dumps.wikimedia.org\/idwiki\/ )\r\n\r\n> >>> dataset = load_dataset('wikipedia', language='id', date=\"20210501\", beam_runner='DirectRunner')\r\nWARNING:datasets.builder:Using custom data configuration 20210501.id-date=20210501,language=id\r\nDownloading and preparing dataset wikipedia\/20210501.id (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/Users\/.cache\/huggingface\/datasets\/wikipedia\/20210501.id-date=20210501,language=id\/0.0.0\/2fe8db1405aef67dff9fcc51e133e1f9c5b0106f9d9e9638188176d278fd5ff1...\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/Users\/opt\/anaconda3\/envs\/proj\/lib\/python3.9\/site-packages\/datasets\/load.py\", line 745, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/Users\/opt\/anaconda3\/envs\/proj\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 574, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/Users\/opt\/anaconda3\/envs\/proj\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 1139, in _download_and_prepare\r\n super(BeamBasedBuilder, self)._download_and_prepare(\r\n File \"\/Users\/opt\/anaconda3\/envs\/proj\/lib\/python3.9\/site-packages\/datasets\/builder.py\", line 630, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"\/Users\/.cache\/huggingface\/modules\/datasets_modules\/datasets\/wikipedia\/2fe8db1405aef67dff9fcc51e133e1f9c5b0106f9d9e9638188176d278fd5ff1\/wikipedia.py\", line 420, in _split_generators\r\n downloaded_files = dl_manager.download_and_extract({\"info\": info_url})\r\n File \"\/Users\/opt\/anaconda3\/envs\/proj\/lib\/python3.9\/site-packages\/datasets\/utils\/download_manager.py\", line 287, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"\/Users\/opt\/anaconda3\/envs\/proj\/lib\/python3.9\/site-packages\/datasets\/utils\/download_manager.py\", line 195, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"\/Users\/opt\/anaconda3\/envs\/proj\/lib\/python3.9\/site-packages\/datasets\/utils\/py_utils.py\", line 203, in map_nested\r\n mapped = [\r\n File \"\/Users\/opt\/anaconda3\/envs\/proj\/lib\/python3.9\/site-packages\/datasets\/utils\/py_utils.py\", line 204, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)\r\n File \"\/Users\/opt\/anaconda3\/envs\/proj\/lib\/python3.9\/site-packages\/datasets\/utils\/py_utils.py\", line 142, in _single_map_nested\r\n return function(data_struct)\r\n File \"\/Users\/opt\/anaconda3\/envs\/proj\/lib\/python3.9\/site-packages\/datasets\/utils\/download_manager.py\", line 218, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File \"\/Users\/opt\/anaconda3\/envs\/proj\/lib\/python3.9\/site-packages\/datasets\/utils\/file_utils.py\", line 281, in cached_path\r\n output_path = get_from_cache(\r\n File \"\/Users\/opt\/anaconda3\/envs\/proj\/lib\/python3.9\/site-packages\/datasets\/utils\/file_utils.py\", line 623, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https:\/\/dumps.wikimedia.org\/idwiki\/20210501\/dumpstatus.json\r\n\r\nMoreover 
the downloading speed for `non-en` language is very very slow. And interestingly the download stopped after approx a couple minutes due to the read time-out. I tried numerous times and the results is same. Is there any feasible way to download non-en language using huggingface?\r\n\r\n> File \"\/Users\/miislamg\/opt\/anaconda3\/envs\/proj-semlm\/lib\/python3.9\/site-packages\/requests\/models.py\", line 760, in generate\r\n raise ConnectionError(e)\r\nrequests.exceptions.ConnectionError: HTTPSConnectionPool(host='dumps.wikimedia.org', port=443): Read timed out.\r\nDownloading: 7%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258e | 10.2M\/153M [03:35<50:07, 47.4kB\/s]","Hi ! The link https:\/\/dumps.wikimedia.org\/idwiki\/20210501\/dumpstatus.json seems to be working fine for me.\r\n\r\nRegarding the time outs, it must come either from an issue on the wikimedia host side, or from your internet connection.\r\nFeel free to try again several times.","I was trying to download dataset for `es` language, however I am getting the following error:\r\n```\r\ndataset = load_dataset('wikipedia', language='es', date=\"20210320\", beam_runner='DirectRunner') \r\n```\r\n\r\n```\r\nDownloading and preparing dataset wikipedia\/20210320.es (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/scratch\/user_name\/datasets\/wikipedia\/20210320.es-date=20210320,language=es\/0.0.0\/2fe8db1405aef67dff9fcc51e133e1f9c5b0106f9d9e9638188176d278fd5ff1...\r\nTraceback (most recent call last):\r\n File \"apache_beam\/runners\/common.py\", line 1233, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\/runners\/common.py\", line 581, in apache_beam.runners.common.SimpleInvoker.invoke_process\r\n File \"apache_beam\/runners\/common.py\", line 1368, in apache_beam.runners.common._OutputProcessor.process_outputs\r\n File \"\/scratch\/user_name\/modules\/datasets_modules\/datasets\/wikipedia\/2fe8db1405aef67dff9fcc51e133e1f9c5b0106f9d9e9638188176d278fd5ff1\/wikipedia.py\", line 492, in _clean_content\r\n text = _parse_and_clean_wikicode(raw_content, parser=mwparserfromhell)\r\n File \"\/scratch\/user_name\/modules\/datasets_modules\/datasets\/wikipedia\/2fe8db1405aef67dff9fcc51e133e1f9c5b0106f9d9e9638188176d278fd5ff1\/wikipedia.py\", line 548, in _parse_and_clean_wikicode\r\n section_text.append(section.strip_code().strip())\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/mwparserfromhell\/wikicode.py\", line 639, in strip_code\r\n stripped = node.__strip__(**kwargs)\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/mwparserfromhell\/nodes\/html_entity.py\", line 60, in __strip__\r\n return self.normalize()\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/mwparserfromhell\/nodes\/html_entity.py\", line 150, in normalize\r\n return chr(htmlentities.name2codepoint[self.value])\r\nKeyError: '000nbsp'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"download_dataset_all.py\", line 8, in <module>\r\n dataset = load_dataset('wikipedia', language=language, date=\"20210320\", beam_runner='DirectRunner') \r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/load.py\", line 748, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 575, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File 
\"\/opt\/conda\/lib\/python3.7\/site-packages\/datasets\/builder.py\", line 1152, in _download_and_prepare\r\n pipeline_results = pipeline.run()\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/apache_beam\/pipeline.py\", line 564, in run\r\n return self.runner.run_pipeline(self, self._options)\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/apache_beam\/runners\/direct\/direct_runner.py\", line 131, in run_pipeline\r\n return runner.run_pipeline(pipeline, options)\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/apache_beam\/runners\/portability\/fn_api_runner\/fn_runner.py\", line 190, in run_pipeline\r\n pipeline.to_runner_api(default_environment=self._default_environment))\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/apache_beam\/runners\/portability\/fn_api_runner\/fn_runner.py\", line 200, in run_via_runner_api\r\n return self.run_stages(stage_context, stages)\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/apache_beam\/runners\/portability\/fn_api_runner\/fn_runner.py\", line 366, in run_stages\r\n bundle_context_manager,\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/apache_beam\/runners\/portability\/fn_api_runner\/fn_runner.py\", line 562, in _run_stage\r\n bundle_manager)\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/apache_beam\/runners\/portability\/fn_api_runner\/fn_runner.py\", line 602, in _run_bundle\r\n data_input, data_output, input_timers, expected_timer_output)\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/apache_beam\/runners\/portability\/fn_api_runner\/fn_runner.py\", line 903, in process_bundle\r\n result_future = self._worker_handler.control_conn.push(process_bundle_req)\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/apache_beam\/runners\/portability\/fn_api_runner\/worker_handlers.py\", line 378, in push\r\n response = self.worker.do_instruction(request)\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/apache_beam\/runners\/worker\/sdk_worker.py\", line 610, in do_instruction\r\n getattr(request, request_type), request.instruction_id)\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/apache_beam\/runners\/worker\/sdk_worker.py\", line 647, in process_bundle\r\n bundle_processor.process_bundle(instruction_id))\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/apache_beam\/runners\/worker\/bundle_processor.py\", line 1001, in process_bundle\r\n element.data)\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/apache_beam\/runners\/worker\/bundle_processor.py\", line 229, in process_encoded\r\n self.output(decoded_value)\r\n File \"apache_beam\/runners\/worker\/operations.py\", line 356, in apache_beam.runners.worker.operations.Operation.output\r\n File \"apache_beam\/runners\/worker\/operations.py\", line 358, in apache_beam.runners.worker.operations.Operation.output\r\n File \"apache_beam\/runners\/worker\/operations.py\", line 220, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive\r\n File \"apache_beam\/runners\/worker\/operations.py\", line 717, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\/runners\/worker\/operations.py\", line 718, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\/runners\/common.py\", line 1235, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\/runners\/common.py\", line 1300, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File \"apache_beam\/runners\/common.py\", line 1233, in 
apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\/runners\/common.py\", line 581, in apache_beam.runners.common.SimpleInvoker.invoke_process\r\n File \"apache_beam\/runners\/common.py\", line 1395, in apache_beam.runners.common._OutputProcessor.process_outputs\r\n File \"apache_beam\/runners\/worker\/operations.py\", line 220, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive\r\n File \"apache_beam\/runners\/worker\/operations.py\", line 717, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\/runners\/worker\/operations.py\", line 718, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\/runners\/common.py\", line 1235, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\/runners\/common.py\", line 1300, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File \"apache_beam\/runners\/common.py\", line 1233, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\/runners\/common.py\", line 581, in apache_beam.runners.common.SimpleInvoker.invoke_process\r\n File \"apache_beam\/runners\/common.py\", line 1395, in apache_beam.runners.common._OutputProcessor.process_outputs\r\n File \"apache_beam\/runners\/worker\/operations.py\", line 220, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive\r\n File \"apache_beam\/runners\/worker\/operations.py\", line 717, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\/runners\/worker\/operations.py\", line 718, in apache_beam.runners.worker.operations.DoOperation.process\r\n File \"apache_beam\/runners\/common.py\", line 1235, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\/runners\/common.py\", line 1315, in apache_beam.runners.common.DoFnRunner._reraise_augmented\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/future\/utils\/__init__.py\", line 446, in raise_with_traceback\r\n raise exc.with_traceback(traceback)\r\n File \"apache_beam\/runners\/common.py\", line 1233, in apache_beam.runners.common.DoFnRunner.process\r\n File \"apache_beam\/runners\/common.py\", line 581, in apache_beam.runners.common.SimpleInvoker.invoke_process\r\n File \"apache_beam\/runners\/common.py\", line 1368, in apache_beam.runners.common._OutputProcessor.process_outputs\r\n File \"\/scratch\/user_name\/modules\/datasets_modules\/datasets\/wikipedia\/2fe8db1405aef67dff9fcc51e133e1f9c5b0106f9d9e9638188176d278fd5ff1\/wikipedia.py\", line 492, in _clean_content\r\n text = _parse_and_clean_wikicode(raw_content, parser=mwparserfromhell)\r\n File \"\/scratch\/user_name\/modules\/datasets_modules\/datasets\/wikipedia\/2fe8db1405aef67dff9fcc51e133e1f9c5b0106f9d9e9638188176d278fd5ff1\/wikipedia.py\", line 548, in _parse_and_clean_wikicode\r\n section_text.append(section.strip_code().strip())\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/mwparserfromhell\/wikicode.py\", line 639, in strip_code\r\n stripped = node.__strip__(**kwargs)\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/mwparserfromhell\/nodes\/html_entity.py\", line 60, in __strip__\r\n return self.normalize()\r\n File \"\/opt\/conda\/lib\/python3.7\/site-packages\/mwparserfromhell\/nodes\/html_entity.py\", line 150, in normalize\r\n return chr(htmlentities.name2codepoint[self.value])\r\nKeyError: \"000nbsp [while running 'train\/Clean content']\"\r\n```","Hi ! 
This looks related to this issue: https:\/\/github.com\/huggingface\/datasets\/issues\/1994\r\nBasically the parser that is used (mwparserfromhell) has some issues for some pages in `es`.\r\nWe already reported some issues for `es` on their repo at https:\/\/github.com\/earwig\/mwparserfromhell\/issues\/247 but it looks like there are still a few issues. Might be a good idea to open a new issue on the mwparserfromhell repo"],"created_at":1599441389000,"updated_at":1626364526000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\n\r\nI am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am loading them:\r\n\r\n```\r\nimport nlp\r\n\r\nlangs = ['ar'. 'af', 'an']\r\n\r\nfor lang in langs:\r\n data = nlp.load_dataset('wikipedia', f'20200501.{lang}', beam_runner='DirectRunner', split='train') \r\n print(lang, len(data))\r\n```\r\n\r\nHere's what I see for 'ar' (it gets stuck there):\r\n```\r\nDownloading and preparing dataset wikipedia\/20200501.ar (download: Unknown size, generated: Unknown size, post-processed: Unknown sizetotal: Unknown size) to \/home\/gaguilar\/.cache\/huggingface\/datasets\/wikipedia\/20200501.ar\/1.0.0\/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50...\r\n```\r\n\r\nNote that those languages are indeed in the list of expected languages. Any suggestions on how to work around this? Thanks!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/577\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/576","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/576\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/576\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/576\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/576","id":694348645,"node_id":"MDExOlB1bGxSZXF1ZXN0NDgwNzM3NDQ1","number":576,"title":"Fix the code block in doc","user":{"login":"JetRunner","id":22514219,"node_id":"MDQ6VXNlcjIyNTE0MjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22514219?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JetRunner","html_url":"https:\/\/github.com\/JetRunner","followers_url":"https:\/\/api.github.com\/users\/JetRunner\/followers","following_url":"https:\/\/api.github.com\/users\/JetRunner\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JetRunner\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JetRunner\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JetRunner\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JetRunner\/orgs","repos_url":"https:\/\/api.github.com\/users\/JetRunner\/repos","events_url":"https:\/\/api.github.com\/users\/JetRunner\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JetRunner\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["thanks 
:)"],"created_at":1599392455000,"updated_at":1599464252000,"closed_at":1599464238000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/576","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/576","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/576.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/576.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/576\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/575","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/575\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/575\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/575\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/575","id":693691611,"node_id":"MDU6SXNzdWU2OTM2OTE2MTE=","number":575,"title":"Couldn't reach certain URLs and for the ones that can be reached, code just blocks after downloading.","user":{"login":"sudarshan85","id":488428,"node_id":"MDQ6VXNlcjQ4ODQyOA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/488428?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sudarshan85","html_url":"https:\/\/github.com\/sudarshan85","followers_url":"https:\/\/api.github.com\/users\/sudarshan85\/followers","following_url":"https:\/\/api.github.com\/users\/sudarshan85\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sudarshan85\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sudarshan85\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sudarshan85\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sudarshan85\/orgs","repos_url":"https:\/\/api.github.com\/users\/sudarshan85\/repos","events_url":"https:\/\/api.github.com\/users\/sudarshan85\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sudarshan85\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Update:\r\n\r\nThe imdb download completed after a long time (about 45 mins). Ofcourse once download loading was instantaneous. Also, the loaded object was of type `arrow_dataset`. 
\r\n\r\nThe urls for glue still doesn't work though.","Thanks for the report, I'll give a look!","I am also seeing a similar error when running the following:\r\n\r\n```\r\nimport nlp\r\ndataset = load_dataset('cola')\r\n```\r\nError:\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/home\/js11133\/.conda\/envs\/jiant\/lib\/python3.8\/site-packages\/nlp\/load.py\", line 509, in load_dataset\r\n module_path = prepare_module(path, download_config=download_config, dataset=True)\r\n File \"\/home\/js11133\/.conda\/envs\/jiant\/lib\/python3.8\/site-packages\/nlp\/load.py\", line 248, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"\/home\/js11133\/.conda\/envs\/jiant\/lib\/python3.8\/site-packages\/nlp\/utils\/file_utils.py\", line 191, in cached_path\r\n output_path = get_from_cache(\r\n File \"\/home\/js11133\/.conda\/envs\/jiant\/lib\/python3.8\/site-packages\/nlp\/utils\/file_utils.py\", line 356, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/cola\/cola.py\r\n```","@jeswan `\"cola\"` is not a valid dataset identifier (you can check the up-to-date list on https:\/\/huggingface.co\/datasets) but you can find cola inside glue.","Ah right. Thanks!","Hi. Closing this one since #626 updated the glue urls.\r\n\r\n> 1. Why is it still blocking? Is it still downloading?\r\n\r\nAfter downloading it generates the arrow file by iterating through the examples.\r\nThe number of examples processed by second is shown during the processing (not sure why it was not the case for you)\r\n\r\n> 2. I specified split as train, so why is the test folder being populated?\r\n\r\nIt downloads every split\r\n\r\n\r\n\r\n"],"created_at":1599255985000,"updated_at":1600771296000,"closed_at":1600771296000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\n\r\nI'm following the [quick tour](https:\/\/huggingface.co\/nlp\/quicktour.html) and tried to load the glue dataset:\r\n```\r\n>>> from nlp import load_dataset\r\n>>> dataset = load_dataset('glue', 'mrpc', split='train')\r\n```\r\n\r\nHowever, this ran into a `ConnectionError` saying it could not reach the URL (just pasting the last few lines):\r\n```\r\n\r\n\/net\/vaosl01\/opt\/NFS\/su0\/miniconda3\/envs\/hf\/lib\/python3.7\/site-packages\/nlp\/utils\/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only)\r\n 354 \" to False.\"\r\n 355 )\r\n--> 356 raise ConnectionError(\"Couldn't reach {}\".format(url))\r\n 357 \r\n 358 # From now on, connected is True.\r\n\r\nConnectionError: Couldn't reach https:\/\/firebasestorage.googleapis.com\/v0\/b\/mtl-sentence-representations.appspot.com\/o\/data%2Fmrpc_dev_ids.tsv?alt=media&token=ec5c0836-31d5-48f4-b431-7480817f1adc\r\n```\r\n\r\nI tried glue with cola and sst2. I got the same error, just instead of mrpc in the URL, it was replaced with cola and sst2.\r\n\r\nSince this was not working, I thought I'll try another dataset. 
So I tried downloading the imdb dataset:\r\n```\r\nds = load_dataset('imdb', split='train')\r\n```\r\nThis downloads the data, but it just blocks after that:\r\n```\r\nDownloading: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 4.56k\/4.56k [00:00<00:00, 1.38MB\/s]\r\nDownloading: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2.07k\/2.07k [00:00<00:00, 1.15MB\/s]\r\nDownloading and preparing dataset imdb\/plain_text (download: 80.23 MiB, generated: 127.06 MiB, post-processed: Unknown sizetotal: 207.28 MiB) to \/net\/vaosl01\/opt\/NFS\/su0\/huggingface\/datasets\/imdb\/plain_text\/1.0.0\/76cdbd7249ea3548c928bbf304258dab44d09cd3638d9da8d42480d1d1be3743...\r\nDownloading: 
100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 84.1M\/84.1M [00:07<00:00, 11.1MB\/s]\r\n```\r\n\r\nI checked the folder `$HF_HOME\/datasets\/downloads\/extracted\/<id>\/aclImdb`. This folder is constantly growing in size. When I navigated to the train folder within, there was no file. However, the test folder seemed to be populating. The last time I checked it was 327M. I thought the Imdb dataset was smaller than that. My questions are:\r\n1. Why is it still blocking? Is it still downloading?\r\n2. I specified split as train, so why is the test folder being populated?\r\n3. I read somewhere that after downloading, `nlp` converts the text files into some sort of `arrow` files, which will also take a while. Is this also happening here?\r\n\r\nThanks.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/575\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/574","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/574\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/574\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/574\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/574","id":693364853,"node_id":"MDExOlB1bGxSZXF1ZXN0NDc5ODU5NzQy","number":574,"title":"Add modules 
cache","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["All the tests pass on my side. Not sure if it is a cache issue or a pytest issue or a circleci issue.\r\nEDIT: I have the same error on google colab. Trying to fix that","I think I fixed it (sorry didn't notice you were on it as well)"],"created_at":1599237003000,"updated_at":1600770428000,"closed_at":1599469295000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/574","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/574","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/574.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/574.patch"},"body":"As discusses in #554 , we should use a module cache directory outside of the python packages directory since we may not have write permissions.\r\n\r\nI added a new HF_MODULES_PATH directory that is added to the python path when doing `import nlp`.\r\nIn this directory, a module `nlp_modules` is created so that datasets can be added to `nlp_modules.datasets` and metrics to `nlp_modules.metrics`. 
`nlp_modules` doesn't exist on Pypi.\r\n\r\nIf someone using cloudpickle still wants to have the downloaded dataset\/metrics scripts to be inside the nlp directory, it is still possible to change the environment variable HF_MODULES_CACHE to be a path inside the nlp lib.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/574\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/573","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/573\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/573\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/573\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/573","id":693091790,"node_id":"MDExOlB1bGxSZXF1ZXN0NDc5NjE4Mzc2","number":573,"title":"Faster caching for text dataset","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1599220714000,"updated_at":1599224004000,"closed_at":1599224003000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/573","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/573","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/573.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/573.patch"},"body":"As mentioned in #546 and #548 , hashing `data_files` contents to get the cache directory name for a text dataset can take a long time.\r\n\r\nTo make it faster I changed the hashing so that it takes into account the `path` and the `last modified timestamp` of each data file, instead of iterating through the content of each file to get a hash.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/573\/timeline","performed_via_github_app":null,"is_pull_request":true} 
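The caching change in #573 just above swaps content hashing for a fingerprint built from each data file's path and last-modified time. Below is a minimal sketch of that idea with a hypothetical helper name; it is not the actual `nlp`/`datasets` implementation.

```python
import os
from hashlib import sha256

def files_fingerprint(data_files):
    """Fingerprint data files from their path and mtime, without reading their contents."""
    h = sha256()
    for path in sorted(data_files):
        stat = os.stat(path)
        h.update(path.encode("utf-8"))                # which file
        h.update(str(stat.st_mtime).encode("utf-8"))  # when it was last modified
    return h.hexdigest()

# The digest changes whenever a file is added, renamed, or modified,
# so it stays cheap to compute even for very large text files.
existing = [p for p in ["train.txt", "valid.txt"] if os.path.exists(p)]
cache_key = files_fingerprint(existing)
```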
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/572","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/572\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/572\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/572\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/572","id":692598231,"node_id":"MDExOlB1bGxSZXF1ZXN0NDc5MTgyNDU3","number":572,"title":"Add CLUE Benchmark (11 datasets)","user":{"login":"JetRunner","id":22514219,"node_id":"MDQ6VXNlcjIyNTE0MjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22514219?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JetRunner","html_url":"https:\/\/github.com\/JetRunner","followers_url":"https:\/\/api.github.com\/users\/JetRunner\/followers","following_url":"https:\/\/api.github.com\/users\/JetRunner\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JetRunner\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JetRunner\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JetRunner\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JetRunner\/orgs","repos_url":"https:\/\/api.github.com\/users\/JetRunner\/repos","events_url":"https:\/\/api.github.com\/users\/JetRunner\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JetRunner\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks, @lhoestq! I've addressed the comments. \r\nAlso, I have tried to use `ClassLabel` [when possible](https:\/\/github.com\/huggingface\/nlp\/pull\/572\/files#diff-1026ac7d7b78bf029cb0ebe63162c77dR297). Is there still somewhere else we can use `ClassLabel`? ","I believe CI failure is unrelated.","Great job! 
"],"created_at":1599184660000,"updated_at":1599472751000,"closed_at":1599472750000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/572","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/572","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/572.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/572.patch"},"body":"Add 11 tasks of [CLUE](https:\/\/github.com\/CLUEbenchmark\/CLUE).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/572\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/571","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/571\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/571\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/571\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/571","id":692109287,"node_id":"MDExOlB1bGxSZXF1ZXN0NDc4NzQ2MjMz","number":571,"title":"Serialization","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I've added save\/load for dataset dicts.\r\n\r\nI agree that in the future we should also have a way to save indexes too, and also the in-place history of transforms.\r\n\r\nAlso I understand that it would be cool to have the load function directly at the root of the library, but I'm not sure this should be inside `load_dataset` that loads dataset scripts and data from the dataset repository. Maybe something like `load_from_disk` ?","Yes `load_from_disk` and `save_to_disk` could work as well.","I renamed save\/load to save_to_dick\/load_from_disk, and I added `nlp.load_from_disk`\r\n\r\n`nlp.load_from_disk` can load either a Dataset or a DatasetDict.","Awesome! 
Let's add them to the doc and we're good to go!"],"created_at":1599150098000,"updated_at":1599464768000,"closed_at":1599464767000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/571","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/571","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/571.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/571.patch"},"body":"I added `save` and `load` method to serialize\/deserialize a dataset object in a folder.\r\nIt moves the arrow files there (or write them if the tables were in memory), and saves the pickle state in a json file `state.json`, except the info that are in a separate file `dataset_info.json`.\r\n\r\nExample:\r\n\r\n```python\r\nimport nlp\r\n\r\nsquad = nlp.load_dataset(\"squad\", split=\"train\")\r\nsquad.save(\"tmp\/squad\")\r\nsquad = nlp.Dataset.load(\"tmp\/squad\")\r\n```\r\n\r\n`ls tmp\/squad`\r\n```\r\ndataset_info.json squad-train.arrow state.json\r\n```\r\n\r\n`cat tmp\/squad\/state.json`\r\n```json\r\n{\r\n \"_data\": null,\r\n \"_data_files\": [\r\n {\r\n \"filename\": \"squad-train.arrow\",\r\n \"skip\": 0,\r\n \"take\": 87599\r\n }\r\n ],\r\n \"_fingerprint\": \"61f452797a686bc1\",\r\n \"_format_columns\": null,\r\n \"_format_kwargs\": {},\r\n \"_format_type\": null,\r\n \"_indexes\": {},\r\n \"_indices\": null,\r\n \"_indices_data_files\": [],\r\n \"_inplace_history\": [\r\n {\r\n \"transforms\": []\r\n }\r\n ],\r\n \"_output_all_columns\": false,\r\n \"_split\": \"train\"\r\n}\r\n```\r\n\r\n`cat tmp\/squad\/dataset_info.json`\r\n```json\r\n{\r\n \"builder_name\": \"squad\",\r\n \"citation\": \"@article{2016arXiv160605250R,\\n author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},\\n Konstantin and {Liang}, Percy},\\n title = \\\"{SQuAD: 100,000+ Questions for Machine Comprehension of Text}\\\",\\n journal = {arXiv e-prints},\\n year = 2016,\\n eid = {arXiv:1606.05250},\\n pages = {arXiv:1606.05250},\\narchivePrefix = {arXiv},\\n eprint = {1606.05250},\\n}\\n\",\r\n \"config_name\": \"plain_text\",\r\n \"dataset_size\": 89789763,\r\n \"description\": \"Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.\\n\",\r\n \"download_checksums\": {\r\n \"https:\/\/rajpurkar.github.io\/SQuAD-explorer\/dataset\/dev-v1.1.json\": {\r\n \"checksum\": \"95aa6a52d5d6a735563366753ca50492a658031da74f301ac5238b03966972c9\",\r\n \"num_bytes\": 4854279\r\n },\r\n \"https:\/\/rajpurkar.github.io\/SQuAD-explorer\/dataset\/train-v1.1.json\": {\r\n \"checksum\": \"3527663986b8295af4f7fcdff1ba1ff3f72d07d61a20f487cb238a6ef92fd955\",\r\n \"num_bytes\": 30288272\r\n }\r\n },\r\n \"download_size\": 35142551,\r\n \"features\": {\r\n \"answers\": {\r\n \"_type\": \"Sequence\",\r\n \"feature\": {\r\n \"answer_start\": {\r\n \"_type\": \"Value\",\r\n \"dtype\": \"int32\",\r\n \"id\": null\r\n },\r\n \"text\": {\r\n \"_type\": \"Value\",\r\n \"dtype\": \"string\",\r\n \"id\": null\r\n }\r\n },\r\n \"id\": null,\r\n \"length\": -1\r\n },\r\n \"context\": {\r\n \"_type\": \"Value\",\r\n \"dtype\": \"string\",\r\n \"id\": null\r\n },\r\n \"id\": {\r\n \"_type\": \"Value\",\r\n \"dtype\": \"string\",\r\n \"id\": null\r\n },\r\n \"question\": {\r\n \"_type\": \"Value\",\r\n 
\"dtype\": \"string\",\r\n \"id\": null\r\n },\r\n \"title\": {\r\n \"_type\": \"Value\",\r\n \"dtype\": \"string\",\r\n \"id\": null\r\n }\r\n },\r\n \"homepage\": \"https:\/\/rajpurkar.github.io\/SQuAD-explorer\/\",\r\n \"license\": \"\",\r\n \"post_processed\": {\r\n \"features\": null,\r\n \"resources_checksums\": {\r\n \"train\": {},\r\n \"train[:10%]\": {}\r\n }\r\n },\r\n \"post_processing_size\": 0,\r\n \"size_in_bytes\": 124932314,\r\n \"splits\": {\r\n \"train\": {\r\n \"dataset_name\": \"squad\",\r\n \"name\": \"train\",\r\n \"num_bytes\": 79317110,\r\n \"num_examples\": 87599\r\n },\r\n \"validation\": {\r\n \"dataset_name\": \"squad\",\r\n \"name\": \"validation\",\r\n \"num_bytes\": 10472653,\r\n \"num_examples\": 10570\r\n }\r\n },\r\n \"supervised_keys\": null,\r\n \"version\": {\r\n \"description\": \"New split API (https:\/\/tensorflow.org\/datasets\/splits)\",\r\n \"major\": 1,\r\n \"minor\": 0,\r\n \"nlp_version_to_prepare\": null,\r\n \"patch\": 0,\r\n \"version_str\": \"1.0.0\"\r\n }\r\n}\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/571\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/570","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/570\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/570\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/570\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/570","id":691846397,"node_id":"MDExOlB1bGxSZXF1ZXN0NDc4NTI3OTQz","number":570,"title":"add reuters21578 dataset","user":{"login":"jplu","id":959590,"node_id":"MDQ6VXNlcjk1OTU5MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/959590?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jplu","html_url":"https:\/\/github.com\/jplu","followers_url":"https:\/\/api.github.com\/users\/jplu\/followers","following_url":"https:\/\/api.github.com\/users\/jplu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jplu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jplu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jplu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jplu\/orgs","repos_url":"https:\/\/api.github.com\/users\/jplu\/repos","events_url":"https:\/\/api.github.com\/users\/jplu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jplu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1599128747000,"updated_at":1599130012000,"closed_at":1599130011000,"author_association":"COLLABORATOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/570","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/570","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/570.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/570.patch"},"body":"Reopen a PR this the merge.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/570\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/569","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/569\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/569\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/569\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/569","id":691832720,"node_id":"MDExOlB1bGxSZXF1ZXN0NDc4NTE2Mzc2","number":569,"title":"Revert \"add reuters21578 dataset\"","user":{"login":"jplu","id":959590,"node_id":"MDQ6VXNlcjk1OTU5MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/959590?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jplu","html_url":"https:\/\/github.com\/jplu","followers_url":"https:\/\/api.github.com\/users\/jplu\/followers","following_url":"https:\/\/api.github.com\/users\/jplu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jplu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jplu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jplu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jplu\/orgs","repos_url":"https:\/\/api.github.com\/users\/jplu\/repos","events_url":"https:\/\/api.github.com\/users\/jplu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jplu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1599127576000,"updated_at":1599127633000,"closed_at":1599127632000,"author_association":"COLLABORATOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/569","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/569","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/569.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/569.patch"},"body":"Reverts huggingface\/nlp#471","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/569\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/568","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/568\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/568\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/568\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/568","id":691638656,"node_id":"MDU6SXNzdWU2OTE2Mzg2NTY=","number":568,"title":"`metric.compute` throws `ArrowInvalid` 
error","user":{"login":"ibeltagy","id":2287797,"node_id":"MDQ6VXNlcjIyODc3OTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2287797?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ibeltagy","html_url":"https:\/\/github.com\/ibeltagy","followers_url":"https:\/\/api.github.com\/users\/ibeltagy\/followers","following_url":"https:\/\/api.github.com\/users\/ibeltagy\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ibeltagy\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ibeltagy\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ibeltagy\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ibeltagy\/orgs","repos_url":"https:\/\/api.github.com\/users\/ibeltagy\/repos","events_url":"https:\/\/api.github.com\/users\/ibeltagy\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ibeltagy\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hmm might be related to what we are solving in #564","Could you try to update to `datasets>=1.0.0` (we changed the name of the library) and try again ?\r\nIf is was related to the distributed setup settings it must be fixed.\r\nIf it was related to empty metric inputs it's going to be fixed in #654 ","Closing this one as it was fixed in #654 \r\nFeel free to re-open if you have other questions"],"created_at":1599109017000,"updated_at":1601915633000,"closed_at":1601915633000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I get the following error with `rouge.compute`. It happens only with distributed training, and it occurs randomly I can't easily reproduce it. 
This is using `nlp==0.4.0`\r\n\r\n```\r\n File \"\/home\/beltagy\/trainer.py\", line 92, in validation_step\r\n rouge_scores = rouge.compute(predictions=generated_str, references=gold_str, rouge_types=['rouge2', 'rouge1', 'rougeL'])\r\n File \"\/home\/beltagy\/miniconda3\/envs\/allennlp\/lib\/python3.7\/site-packages\/nlp\/metric.py\", line 224, in compute\r\n self.finalize(timeout=timeout)\r\n File \"\/home\/beltagy\/miniconda3\/envs\/allennlp\/lib\/python3.7\/site-packages\/nlp\/metric.py\", line 213, in finalize\r\n self.data = Dataset(**reader.read_files(node_files))\r\n File \"\/home\/beltagy\/miniconda3\/envs\/allennlp\/lib\/python3.7\/site-packages\/nlp\/arrow_reader.py\", line 217, in read_files\r\n dataset_kwargs = self._read_files(files=files, info=self._info, original_instructions=original_instructions)\r\n File \"\/home\/beltagy\/miniconda3\/envs\/allennlp\/lib\/python3.7\/site-packages\/nlp\/arrow_reader.py\", line 162, in _read_files\r\n pa_table: pa.Table = self._get_dataset_from_filename(f_dict)\r\n File \"\/home\/beltagy\/miniconda3\/envs\/allennlp\/lib\/python3.7\/site-packages\/nlp\/arrow_reader.py\", line 276, in _get_dataset_from_filename\r\n f = pa.ipc.open_stream(mmap)\r\n File \"\/home\/beltagy\/miniconda3\/envs\/allennlp\/lib\/python3.7\/site-packages\/pyarrow\/ipc.py\", line 173, in open_stream\r\n return RecordBatchStreamReader(source)\r\n File \"\/home\/beltagy\/miniconda3\/envs\/allennlp\/lib\/python3.7\/site-packages\/pyarrow\/ipc.py\", line 64, in __init__\r\n self._open(source)\r\n File \"pyarrow\/ipc.pxi\", line 469, in pyarrow.lib._RecordBatchStreamReader._open\r\n File \"pyarrow\/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow\/error.pxi\", line 84, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Tried reading schema message, was null or length 0\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/568\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/567","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/567\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/567\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/567\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/567","id":691430245,"node_id":"MDExOlB1bGxSZXF1ZXN0NDc4MTc2Njgx","number":567,"title":"Fix BLEURT metrics for backward 
compatibility","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1599081755000,"updated_at":1599118192000,"closed_at":1599118190000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/567","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/567","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/567.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/567.patch"},"body":"Fix #565","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/567\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/566","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/566\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/566\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/566\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/566","id":691160208,"node_id":"MDExOlB1bGxSZXF1ZXN0NDc3OTM2NTIz","number":566,"title":"Remove logger pickling to fix gg colab 
issues","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1599063381000,"updated_at":1599150713000,"closed_at":1599150712000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/566","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/566","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/566.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/566.patch"},"body":"A `logger` objects are not picklable in google colab, contrary to `logger` objects in jupyter notebooks or in python shells.\r\nIt creates some issues in google colab right now.\r\n\r\nIndeed by calling any `Dataset` method, the fingerprint update pickles the transform function, and as the logger comes with it, it results in an error (full stacktrace [here](http:\/\/pastebin.fr\/64330)):\r\n\r\n```python\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/zmq\/backend\/cython\/socket.cpython-36m-x86_64-linux-gnu.so in zmq.backend.cython.socket.Socket.__reduce_cython__()\r\n\r\nTypeError: no default __reduce__ due to non-trivial __cinit__\r\n```\r\n\r\nTo fix that I no longer dump the transform (`_map_single`, `select`, etc.), but the full name only (`nlp.arrow_dataset.Dataset._map_single`, `nlp.arrow_dataset.Dataset.select`, etc.)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/566\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/565","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/565\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/565\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/565\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/565","id":691039121,"node_id":"MDU6SXNzdWU2OTEwMzkxMjE=","number":565,"title":"No module named 
'nlp.logging'","user":{"login":"melody-ju","id":66633754,"node_id":"MDQ6VXNlcjY2NjMzNzU0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/66633754?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/melody-ju","html_url":"https:\/\/github.com\/melody-ju","followers_url":"https:\/\/api.github.com\/users\/melody-ju\/followers","following_url":"https:\/\/api.github.com\/users\/melody-ju\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/melody-ju\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/melody-ju\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/melody-ju\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/melody-ju\/orgs","repos_url":"https:\/\/api.github.com\/users\/melody-ju\/repos","events_url":"https:\/\/api.github.com\/users\/melody-ju\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/melody-ju\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting.\r\n\r\nApparently this is a versioning issue: the lib downloaded the `bleurt` script from the master branch where we did this change recently. We'll fix that in a new release this week or early next week. Cc @thomwolf \r\n\r\nUntil that, I'd suggest you to download the right bleurt folder from github ([this one](https:\/\/github.com\/huggingface\/nlp\/tree\/0.4.0\/metrics\/bleurt)) and do\r\n\r\n```python\r\nfrom nlp import load_metric\r\n\r\nbleurt = load_metric(\"path\/to\/bleurt\/folder\")\r\n```\r\n\r\nTo download it you can either clone the repo or download the `bleurt.py` file and place it in a folder named `bleurt` ","Actually we can fix this on our side, this script didn't had to be updated. I'll do it in a few minutes"],"created_at":1599054590000,"updated_at":1599118190000,"closed_at":1599118190000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi, I am using nlp version 0.4.0. Trying to use bleurt as an eval metric, however, the bleurt script imports nlp.logging which creates the following error. 
What am I missing?\r\n\r\n```\r\n>>> import nlp\r\n2020-09-02 13:47:09.210310: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1\r\n>>> bleurt = nlp.load_metric(\"bleurt\")\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/home\/melody\/anaconda3\/envs\/transformers\/lib\/python3.6\/site-packages\/nlp\/load.py\", line 443, in load_metric\r\n metric_cls = import_main_class(module_path, dataset=False)\r\n File \"\/home\/melody\/anaconda3\/envs\/transformers\/lib\/python3.6\/site-packages\/nlp\/load.py\", line 61, in import_main_class\r\n module = importlib.import_module(module_path)\r\n File \"\/home\/melody\/anaconda3\/envs\/transformers\/lib\/python3.6\/importlib\/__init__.py\", line 126, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 994, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 971, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 955, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 665, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 678, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"\/home\/melody\/anaconda3\/envs\/transformers\/lib\/python3.6\/site-packages\/nlp\/metrics\/bleurt\/43448cf2959ea81d3ae0e71c5c8ee31dc15eed9932f197f5f50673cbcecff2b5\/bleurt.py\", line 20, in <module>\r\n from nlp.logging import get_logger\r\nModuleNotFoundError: No module named 'nlp.logging'\r\n```\r\n\r\nJust to show once again that I can't import the logging module:\r\n\r\n```\r\n>>> import nlp\r\n2020-09-02 13:48:38.190621: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1\r\n>>> nlp.__version__\r\n'0.4.0'\r\n>>> from nlp.logging import get_logger\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nModuleNotFoundError: No module named 'nlp.logging'\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/565\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/564","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/564\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/564\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/564\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/564","id":691000020,"node_id":"MDExOlB1bGxSZXF1ZXN0NDc3ODAyMTk2","number":564,"title":"Wait for writing in distributed 
metrics","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I agree this fix the problem for the CI where the files are always created in a new and clean temporary directory.\r\n\r\nHowever, in a general setting of a succession of fast distributed operation, the files could already exist from previous metrics runs but one process may still finish before another has even started in which case it would mix results from separate operations.\r\n\r\nI feel like the most robust way to solve this is to setup a rendez-vous on the first time we write on files and where each process will test and only finish its operation when it cannot acquire a lock on all the other processes (meaning they all have started).\r\n\r\nWhat do you think?","What do you think of this @thomwolf ? 
I check all the locks before finalizing","Ok on my side @lhoestq (cannot add you as a reviewer)","The test doesn't pass if I add:\r\n```python\r\n import time\r\n if self.process_id == 1:\r\n time.sleep(0.5)\r\n```\r\nright before `self.add_batch` in `Metric.compute`.\r\n\r\nI'm investigating why it doesn't work in that case","It looks like the process 1 runs `_check_all_processes_locks` correctly and then finishes and releases its lock before process 0 even managed to to run `_check_all_processes_locks` correctly.","Strange!","I changed the way the rendez-vous is done @thomwolf , let me know what you think.\r\nThe idea is that the master process has an additional lock `rendez_vous_lock` to tell every other process to wait for everyone to be ready before starting to write"],"created_at":1599051530000,"updated_at":1599642803000,"closed_at":1599642802000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/564","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/564","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/564.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/564.patch"},"body":"There were CI bugs where a distributed metric would try to read all the files in process 0 while the other processes haven't started writing.\r\n\r\nTo fix that I added a custom locking mechanism that waits for the file to exist before trying to read it","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/564\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/563","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/563\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/563\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/563\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/563","id":690908674,"node_id":"MDExOlB1bGxSZXF1ZXN0NDc3NzI2MTEz","number":563,"title":"[Large datasets] Speed up download and processing","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Looks all good :)\r\nI rebased from master and added a test for parallel `map_nested`","you're da 
best"],"created_at":1599042714000,"updated_at":1599642213000,"closed_at":1599642212000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/563","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/563","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/563.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/563.patch"},"body":"Various improvements to speed-up creation and processing of large scale datasets.\r\n\r\nCurrently:\r\n- distributed downloads\r\n- remove etag from datafiles hashes to spare a request when restarting a failed download","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/563\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/562","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/562\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/562\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/562\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/562","id":690907604,"node_id":"MDExOlB1bGxSZXF1ZXN0NDc3NzI1MjMx","number":562,"title":"[Reproductibility] Allow to pin versions of datasets\/metrics","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Closing this one in favor of #584 "],"created_at":1599042613000,"updated_at":1599656694000,"closed_at":1599656694000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/562","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/562","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/562.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/562.patch"},"body":"Repurpose the `version` attribute in datasets and metrics to let the user pin a specific version of datasets and metric scripts:\r\n```\r\ndataset = nlp.load_dataset('squad', version='1.0.0')\r\nmetric = nlp.load_metric('squad', version='1.0.0')\r\n```\r\n\r\nNotes:\r\n- version number are the release version of the library\r\n- currently only possible for canonical datasets\/metrics, ie. 
integrated in the GitHub repo of the library","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/562\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/561","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/561\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/561\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/561\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/561","id":690871415,"node_id":"MDExOlB1bGxSZXF1ZXN0NDc3Njk1NDQy","number":561,"title":"Made `share_dataset` more readable","user":{"login":"TevenLeScao","id":26709476,"node_id":"MDQ6VXNlcjI2NzA5NDc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26709476?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/TevenLeScao","html_url":"https:\/\/github.com\/TevenLeScao","followers_url":"https:\/\/api.github.com\/users\/TevenLeScao\/followers","following_url":"https:\/\/api.github.com\/users\/TevenLeScao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/TevenLeScao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/TevenLeScao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/TevenLeScao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/TevenLeScao\/orgs","repos_url":"https:\/\/api.github.com\/users\/TevenLeScao\/repos","events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1599039288000,"updated_at":1599123630000,"closed_at":1599123629000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/561","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/561","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/561.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/561.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/561\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/560","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/560\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/560\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/560\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/560","id":690488764,"node_id":"MDU6SXNzdWU2OTA0ODg3NjQ=","number":560,"title":"Using custom DownloadConfig results in an 
error","user":{"login":"ynouri","id":1789921,"node_id":"MDQ6VXNlcjE3ODk5MjE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1789921?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ynouri","html_url":"https:\/\/github.com\/ynouri","followers_url":"https:\/\/api.github.com\/users\/ynouri\/followers","following_url":"https:\/\/api.github.com\/users\/ynouri\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ynouri\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ynouri\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ynouri\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ynouri\/orgs","repos_url":"https:\/\/api.github.com\/users\/ynouri\/repos","events_url":"https:\/\/api.github.com\/users\/ynouri\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ynouri\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["From my limited understanding, part of the issue seems related to the `prepare_module` and `download_and_prepare` functions each handling the case where no config is passed. For example, `prepare_module` does mutate the object passed and forces the flags `extract_compressed_file` and `force_extract` to `True`.\r\n\r\nSee:\r\n* https:\/\/github.com\/huggingface\/nlp\/blob\/5fb61e1012bda724a9b6b847307d90a1380abfa5\/src\/nlp\/load.py#L227\r\n* https:\/\/github.com\/huggingface\/nlp\/blob\/5fb61e1012bda724a9b6b847307d90a1380abfa5\/src\/nlp\/builder.py#L388\r\n\r\nMaybe a cleaner solution would be to always instantiate a default `DownloadConfig` object at the top-level, have it as non-optional for the lower-level functions and treat it as immutable. ","Thanks for the report, I'll take a look.\r\n\r\nWhat is your specific use-case for providing a DownloadConfig object?\r\n","Thanks. Our use case involves running a training job behind a corporate firewall with no access to any external resources (S3, GCP or other web resources).\r\n\r\nI was thinking about a 2-steps process:\r\n1) Download the resources \/ artifacts using some secure corporate channel, ie run `nlp.load_dataset()` without a specific `DownloadConfig`. After that, collect the files from the `$HF_HOME` folder\r\n2) Copy the `$HF_HOME` folder in the firewalled environment. Run `nlp.load_dataset()` with a custom config `DownloadConfig(local_files_only=True)`\r\n\r\nHowever this ends up a bit clunky in practice, even when solving the `DownloadConfig` issue above. For example, the `filename` hash computed in `get_from_cache()` differs in the `local_files_only=False` vs `local_files_only=True` case (local case defaults `etag` to `None`, which results in a different hash). So effectively step 2) above doesn't work because the hash computed differs from the hash in the cache folder. 
Some hacks \/ workaround are possible but this solution becomes very convoluted.\r\nhttps:\/\/github.com\/huggingface\/nlp\/blob\/c214aa5a4430c1df1bcd0619fd94d6abdf9d2da7\/src\/nlp\/utils\/file_utils.py#L417\r\n\r\nWould you recommend a different path?\r\n","I see.\r\n\r\nProbably the easiest way for you would be that we add simple serialization\/deserialization methods to the Dataset and DatasetDict objects once the data files have been downloaded and all the dataset is processed.\r\n\r\nWhat do you think @lhoestq ?","This use-case will be solved with #571 ","Thank you very much @thomwolf and @lhoestq we will give it a try"],"created_at":1598998982000,"updated_at":1599508257000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"## Version \/ Environment\r\n\r\nUbuntu 18.04\r\nPython 3.6.8\r\nnlp 0.4.0\r\n\r\n## Description\r\n\r\nLoading `imdb` dataset works fine when when I don't specify any `download_config` argument. When I create a custom `DownloadConfig` object and pass it to the `nlp.load_dataset` function, this results in an error.\r\n\r\n## How to reproduce\r\n\r\n### Example without DownloadConfig --> works\r\n\r\n```python\r\nimport os\r\n\r\nos.environ[\"HF_HOME\"] = \"\/data\/hf-test-without-dl-config-01\/\"\r\n\r\nimport logging\r\nimport nlp\r\n\r\nlogging.basicConfig(level=logging.INFO)\r\n\r\nif __name__ == \"__main__\":\r\n imdb = nlp.load_dataset(path=\"imdb\")\r\n```\r\n\r\n### Example with DownloadConfig --> doesn't work\r\n\r\n```python\r\nimport os\r\n\r\nos.environ[\"HF_HOME\"] = \"\/data\/hf-test-with-dl-config-01\/\"\r\n\r\nimport logging\r\nimport nlp\r\nfrom nlp.utils import DownloadConfig\r\n\r\nlogging.basicConfig(level=logging.INFO)\r\n\r\nif __name__ == \"__main__\":\r\n download_config = DownloadConfig()\r\n imdb = nlp.load_dataset(path=\"imdb\", download_config=download_config)\r\n```\r\n\r\nError traceback:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"\/...\/example_with_dl_config.py\", line 13, in <module>\r\n imdb = nlp.load_dataset(path=\"imdb\", download_config=download_config)\r\n File \"\/...\/python3.6\/python3.6\/site-packages\/nlp\/load.py\", line 549, in load_dataset\r\n download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,\r\n File \"\/...\/python3.6\/python3.6\/site-packages\/nlp\/builder.py\", line 463, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/...\/python3.6\/python3.6\/site-packages\/nlp\/builder.py\", line 518, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"\/...\/python3.6\/python3.6\/site-packages\/nlp\/datasets\/imdb\/76cdbd7249ea3548c928bbf304258dab44d09cd3638d9da8d42480d1d1be3743\/imdb.py\", line 86, in _split_generators\r\n arch_path = dl_manager.download_and_extract(_DOWNLOAD_URL)\r\n File \"\/...\/python3.6\/python3.6\/site-packages\/nlp\/utils\/download_manager.py\", line 220, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"\/...\/python3.6\/python3.6\/site-packages\/nlp\/utils\/download_manager.py\", line 158, in download\r\n self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths)\r\n File \"\/...\/python3.6\/python3.6\/site-packages\/nlp\/utils\/download_manager.py\", line 108, in _record_sizes_checksums\r\n self._recorded_sizes_checksums[url] = get_size_checksum_dict(path)\r\n File 
\"\/...\/python3.6\/python3.6\/site-packages\/nlp\/utils\/info_utils.py\", line 79, in get_size_checksum_dict\r\n with open(path, \"rb\") as f:\r\nIsADirectoryError: [Errno 21] Is a directory: '\/data\/hf-test-with-dl-config-01\/datasets\/extracted\/b6802c5b61824b2c1f7dbf7cda6696b5f2e22214e18d171ce1ed3be90c931ce5'\r\n```\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/560\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/559","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/559\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/559\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/559\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/559","id":690411263,"node_id":"MDExOlB1bGxSZXF1ZXN0NDc3MzAzOTM2","number":559,"title":"Adding the KILT knowledge source and tasks","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Feel free to merge when you are happy with it @yjernite :-)"],"created_at":1598990713000,"updated_at":1599242747000,"closed_at":1599242747000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/559","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/559","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/559.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/559.patch"},"body":"This adds Wikipedia pre-processed for KILT, as well as the task data. 
Only the question IDs are provided for TriviaQA, but they can easily be mapped back with:\r\n```\r\nimport nlp\r\n\r\nkilt_wikipedia = nlp.load_dataset('kilt_wikipedia')\r\n\r\nkilt_tasks = nlp.load_dataset('kilt_tasks')\r\ntriviaqa = nlp.load_dataset('trivia_qa', 'unfiltered.nocontext')\r\ntriviaqa_map = {}\r\nfor k in ['train', 'validation', 'test']:\r\n triviaqa_map = dict([(q_id, i) for i, q_id in enumerate(triviaqa[k]['question_id'])])\r\n kilt_tasks[k + '_triviaqa'] = kilt_tasks[k + '_triviaqa'].filter(lambda x: x['id'] in triviaqa_map)\r\n kilt_tasks[k + '_triviaqa'].map(lambda x: {'input': triviaqa[split][triviaqa_map[x['id']]]['question']})\r\n```\r\n\r\nIt would be great to have the dataset by Monday, which is when the paper should land on Arxiv and @fabiopetroni is planning on tweeting about the paper and `facebookresearch` repository for the datasett","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/559\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/558","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/558\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/558\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/558\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/558","id":690318105,"node_id":"MDExOlB1bGxSZXF1ZXN0NDc3MjI2ODA0","number":558,"title":"Rerun pip install -e","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1598981079000,"updated_at":1598981091000,"closed_at":1598981090000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/558","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/558","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/558.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/558.patch"},"body":"Hopefully it fixes the github actions","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/558\/timeline","performed_via_github_app":null,"is_pull_request":true} 
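For reference, a minimal sketch of the TriviaQA re-mapping described in the body of #559 above. It follows the same `filter`/`map` approach as the snippet in that PR, but keeps the datasets returned by `map` (these methods return new datasets rather than modifying in place) and reuses the loop variable consistently; the split names `train_triviaqa`, `validation_triviaqa`, `test_triviaqa` and the `question_id`/`question` columns are taken from that snippet as assumptions, not verified against the released dataset.

```python
import nlp

# Hypothetical sketch: join KILT TriviaQA task ids back to the original questions.
kilt_tasks = nlp.load_dataset("kilt_tasks")
triviaqa = nlp.load_dataset("trivia_qa", "unfiltered.nocontext")

for k in ["train", "validation", "test"]:
    # Map each TriviaQA question_id to its row index in this split.
    id_to_row = {q_id: i for i, q_id in enumerate(triviaqa[k]["question_id"])}
    split_name = k + "_triviaqa"
    # Keep only KILT examples whose id exists in TriviaQA, then attach the question text.
    kilt_tasks[split_name] = kilt_tasks[split_name].filter(lambda x: x["id"] in id_to_row)
    kilt_tasks[split_name] = kilt_tasks[split_name].map(
        lambda x: {"input": triviaqa[k][id_to_row[x["id"]]]["question"]}
    )
```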
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/557","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/557\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/557\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/557\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/557","id":690220135,"node_id":"MDExOlB1bGxSZXF1ZXN0NDc3MTQ1NjAx","number":557,"title":"Fix a few typos","user":{"login":"julien-c","id":326577,"node_id":"MDQ6VXNlcjMyNjU3Nw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/326577?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/julien-c","html_url":"https:\/\/github.com\/julien-c","followers_url":"https:\/\/api.github.com\/users\/julien-c\/followers","following_url":"https:\/\/api.github.com\/users\/julien-c\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/julien-c\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/julien-c\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/julien-c\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/julien-c\/orgs","repos_url":"https:\/\/api.github.com\/users\/julien-c\/repos","events_url":"https:\/\/api.github.com\/users\/julien-c\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/julien-c\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1598972604000,"updated_at":1599032348000,"closed_at":1599032347000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/557","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/557","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/557.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/557.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/557\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/556","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/556\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/556\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/556\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/556","id":690218423,"node_id":"MDExOlB1bGxSZXF1ZXN0NDc3MTQ0MTky","number":556,"title":"Add 
DailyDialog","user":{"login":"julien-c","id":326577,"node_id":"MDQ6VXNlcjMyNjU3Nw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/326577?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/julien-c","html_url":"https:\/\/github.com\/julien-c","followers_url":"https:\/\/api.github.com\/users\/julien-c\/followers","following_url":"https:\/\/api.github.com\/users\/julien-c\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/julien-c\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/julien-c\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/julien-c\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/julien-c\/orgs","repos_url":"https:\/\/api.github.com\/users\/julien-c\/repos","events_url":"https:\/\/api.github.com\/users\/julien-c\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/julien-c\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1598972475000,"updated_at":1599147723000,"closed_at":1599147519000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/556","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/556","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/556.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/556.patch"},"body":"http:\/\/yanran.li\/dailydialog.html\r\n\r\nhttps:\/\/arxiv.org\/pdf\/1710.03957.pdf\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/556\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/555","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/555\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/555\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/555\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/555","id":690197725,"node_id":"MDExOlB1bGxSZXF1ZXN0NDc3MTI2OTIy","number":555,"title":"Upgrade pip in benchmark github 
action","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1598971046000,"updated_at":1598973976000,"closed_at":1598973975000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/555","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/555","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/555.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/555.patch"},"body":"It looks like it fixes the `import nlp` issue we have","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/555\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/554","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/554\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/554\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/554\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/554","id":690173214,"node_id":"MDU6SXNzdWU2OTAxNzMyMTQ=","number":554,"title":"nlp downloads to its module path","user":{"login":"danieldk","id":49398,"node_id":"MDQ6VXNlcjQ5Mzk4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/49398?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/danieldk","html_url":"https:\/\/github.com\/danieldk","followers_url":"https:\/\/api.github.com\/users\/danieldk\/followers","following_url":"https:\/\/api.github.com\/users\/danieldk\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/danieldk\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/danieldk\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/danieldk\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/danieldk\/orgs","repos_url":"https:\/\/api.github.com\/users\/danieldk\/repos","events_url":"https:\/\/api.github.com\/users\/danieldk\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/danieldk\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Indeed this is a known issue arising from the fact that we try to be compatible with cloupickle.\r\n\r\nDoes this also happen if you 
are installing in a virtual environment?","> Indeed this is a know issue with the fact that we try to be compatible with cloupickle.\r\n> \r\n> Does this also happen if you are installing in a virtual environment?\r\n\r\nThen it would work, because the package is in a writable path.","If it's fine for you then this is the recommended way to solve this issue.","> If it's fine for you then this is the recommended way to solve this issue.\r\n\r\nI don't want to use a virtual environment, because Nix is fully reproducible, and virtual environments are not. And I am the maintainer of the `transformers` in nixpkgs, so sooner or later I will have to package `nlp`, since it is becoming a dependency of `transformers` ;).","Ok interesting. We could have another check to see if it's possible to download and import the datasets script at another location than the module path. I think this would probably involve tweaking the python system path dynamically.\r\n\r\nI don't know anything about Nix so if you want to give this a try your self we can guide you or you can give us more information on your general project and how this works.\r\n\r\nRegarding `nlp` and `transformers`, we are not sure `nlp` will become a required dependency for `transformers`. It will probably be used a lot in the examples but I think it probably won't be a required dependency for the main package since we try to keep it as light as possible in terms of deps.\r\n\r\nHappy to help you make all these things work better for your use-case ","@danieldk modules are now installed in a different location (by default in the cache directory of the lib, in `~\/.cache\/huggingface\/modules`). You can also change that using the environment variable `HF_MODULES_PATH`\r\n\r\nFeel free to play with this change from the master branch for now, and let us know if it sounds good for you :)\r\nWe plan to do a release in the next coming days","Awesome! I\u2019ll hopefully have some time in the coming days to try this.","> Feel free to play with this change from the master branch for now, and let us know if it sounds good for you :)\r\n> We plan to do a release in the next coming days\r\n\r\nThanks for making this change! I just packaged the latest commit on master and it works like a charm now! :partying_face: "],"created_at":1598969174000,"updated_at":1599805164000,"closed_at":1599805164000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I am trying to package `nlp` for Nix, because it is now an optional dependency for `transformers`. 
The problem that I encounter is that the `nlp` library downloads to the module path, which is typically not writable in most package management systems:\r\n\r\n```>>> import nlp\r\n>>> squad_dataset = nlp.load_dataset('squad')\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/nix\/store\/2yhik0hhqayksmkkfb0ylqp8cf5wa5wp-python3-3.8.5-env\/lib\/python3.8\/site-packages\/nlp\/load.py\", line 530, in load_dataset\r\n module_path, hash = prepare_module(path, download_config=download_config, dataset=True)\r\n File \"\/nix\/store\/2yhik0hhqayksmkkfb0ylqp8cf5wa5wp-python3-3.8.5-env\/lib\/python3.8\/site-packages\/nlp\/load.py\", line 329, in prepare_module\r\n os.makedirs(main_folder_path, exist_ok=True)\r\n File \"\/nix\/store\/685kq8pyhrvajah1hdsfn4q7gm3j4yd4-python3-3.8.5\/lib\/python3.8\/os.py\", line 223, in makedirs\r\n mkdir(name, mode)\r\nOSError: [Errno 30] Read-only file system: '\/nix\/store\/2yhik0hhqayksmkkfb0ylqp8cf5wa5wp-python3-3.8.5-env\/lib\/python3.8\/site-packages\/nlp\/datasets\/squad'\r\n```\r\n\r\nDo you have any suggested workaround for this issue?\r\n\r\nPerhaps overriding the default value for `force_local_path` of `prepare_module`?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/554\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/553","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/553\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/553\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/553\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/553","id":690143182,"node_id":"MDExOlB1bGxSZXF1ZXN0NDc3MDgxNTg2","number":553,"title":"[Fix GitHub Actions] test adding 
tmate","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1598966883000,"updated_at":1620239078000,"closed_at":1599123673000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/553","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/553","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/553.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/553.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/553\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/552","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/552\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/552\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/552\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/552","id":690079429,"node_id":"MDExOlB1bGxSZXF1ZXN0NDc3MDI4MzMx","number":552,"title":"Add multiprocessing","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Logging looks like\r\n\r\n```\r\nDone writing 21900 indices in 3854400 bytes .\r\nProcess #0 will write at playground\/tmp_00000_of_00004.arrow\r\nDone writing 21900 indices in 3854400 
bytes .\r\nProcess #1 will write at playground\/tmp_00001_of_00004.arrow\r\nDone writing 21900 indices in 3854400 bytes .\r\nProcess #2 will write at playground\/tmp_00002_of_00004.arrow\r\nDone writing 21899 indices in 3854224 bytes .\r\nProcess #3 will write at playground\/tmp_00003_of_00004.arrow\r\nSpawning 4 processes\r\n#3: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 21899\/21899 [00:02<00:00, 8027.41ex\/s]\r\n#0: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 21900\/21900 [00:02<00:00, 7982.87ex\/s]\r\n#1: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 21900\/21900 [00:02<00:00, 7923.89ex\/s]\r\n#2: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 21900\/21900 [00:02<00:00, 7920.04ex\/s]\r\nConcatenating 4 shards from multiprocessing\r\n```","I added tests and improved logging.\r\nBoth `map` and `filter` support multiprocessing","A bit strange that the benchmarks on map\/filter are worth than `master`.\r\n(maybe because they are not done on the same machine)","The benchmark also got worse in other PRs (see [here](https:\/\/github.com\/huggingface\/nlp\/pull\/550#commitcomment-41931609) for example, where we have 16sec for `map fast-tokenizer batched` and 18 sec for `map identity`)","Hi,\r\n\r\nwhen I use the multiprocessing in ```.map```:\r\n```\r\ndataset = load_dataset(\"text\", data_files=file_path, split=\"train\")\r\ndataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True,\r\n truncation=True, max_length=args.block_size), batched=True, num_proc=16)\r\ndataset.set_format(type='torch', columns=['input_ids'])\r\n```\r\nI get the following error:\r\n```\r\nTraceback (most recent call last):\r\n File \"src\/run.py\", line 373, in <module>\r\n main()\r\n File \"src\/run.py\", line 295, in main\r\n get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None\r\n File \"src\/run.py\", line 153, in get_dataset\r\n dataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True,\r\n File \"\/root\/miniconda3\/envs\/py3.8\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 1287, in map\r\n transformed_shards = [r.get() for r in results]\r\n File \"\/root\/miniconda3\/envs\/py3.8\/lib\/python3.8\/site-packages\/datasets\/arrow_dataset.py\", line 1287, in <listcomp>\r\n transformed_shards = [r.get() for r in results]\r\n File \"\/root\/miniconda3\/envs\/py3.8\/lib\/python3.8\/multiprocessing\/pool.py\", line 771, in get\r\n raise self._value\r\n put(task)\r\n File 
\"\/root\/miniconda3\/envs\/py3.8\/lib\/python3.8\/multiprocessing\/connection.py\", line 206, in send\r\n self._send_bytes(_ForkingPickler.dumps(obj))\r\n File \"\/root\/miniconda3\/envs\/py3.8\/lib\/python3.8\/multiprocessing\/reduction.py\", line 51, in dumps\r\n cls(buf, protocol).dump(obj)\r\nAttributeError: Can't pickle local object 'get_dataset.<locals>.<lambda>'\r\n```\r\nI think you should use [pathos](https:\/\/github.com\/uqfoundation\/pathos) to pickle the lambda function and some others!\r\nI change the 30 line of src\/datasets\/arrow_dataset.py as following:\r\n```\r\n# 30 line: from multiprocessing import Pool, RLock\r\nimport pathos\r\nfrom pathos.multiprocessing import Pool\r\nfrom multiprocessing import RLock\r\n```\r\nand it works!","That's very cool indeed !\r\nShall we condiser adding this dependency @thomwolf ?","We already use `dill` so that's definitely a very interesting option indeed!","it gets stuck on debian 9 when num_proc > 1\r\n","Are you using a tokenizer ?\r\nDid you try to set `TOKENIZERS_PARALLELISM=false` ?\r\n\r\nFeel free to discuss it in #620 , we're discussing this issue","I set `TOKENIZERS_PARALLELISM=false`. Just the warning went away. The program was still stuck\r\n"],"created_at":1598961377000,"updated_at":1600787516000,"closed_at":1599040885000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/552","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/552","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/552.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/552.patch"},"body":"Adding multiprocessing to `.map`\r\n\r\nIt works in 3 steps:\r\n- shard the dataset in `num_proc` shards\r\n- spawn one process per shard and call `map` on them\r\n- concatenate the resulting datasets\r\n\r\nExample of usage:\r\n\r\n```python\r\nfrom nlp import load_dataset\r\n\r\ndataset = load_dataset(\"squad\", split=\"train\")\r\n\r\ndef function(x):\r\n return {\"lowered\": x.lower()}\r\n\r\nprocessed = d.map(\r\n function,\r\n input_columns=[\"context\"],\r\n num_proc=4,\r\n cache_file_name=\"playground\/tmp.arrow\",\r\n load_from_cache_file=False\r\n)\r\n```\r\n\r\nHere it writes 4 files depending on the process rank:\r\n- `playground\/tmp_00000_of_00004.arrow`\r\n- `playground\/tmp_00001_of_00004.arrow`\r\n- `playground\/tmp_00002_of_00004.arrow`\r\n- `playground\/tmp_00003_of_00004.arrow`\r\n\r\nThe suffix format can be specified by the user.\r\n\r\nIf the `cache_file_name` is not specified, it writes into separated files depending on the fingerprint, as usual.\r\n\r\nI still need to:\r\n- write tests for this\r\n- try to improve the logging (currently it shows 4 progress bars, but if one finishes before the others, then the following messages are written over the progress bars)\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/552\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/551","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/551\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/551\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/551\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/551","id":690034762,"node_id":"MDExOlB1bGxSZXF1ZXN0NDc2OTkwNjAw","number":551,"title":"added HANS dataset","user":{"login":"TevenLeScao","id":26709476,"node_id":"MDQ6VXNlcjI2NzA5NDc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26709476?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/TevenLeScao","html_url":"https:\/\/github.com\/TevenLeScao","followers_url":"https:\/\/api.github.com\/users\/TevenLeScao\/followers","following_url":"https:\/\/api.github.com\/users\/TevenLeScao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/TevenLeScao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/TevenLeScao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/TevenLeScao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/TevenLeScao\/orgs","repos_url":"https:\/\/api.github.com\/users\/TevenLeScao\/repos","events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1598956922000,"updated_at":1598962630000,"closed_at":1598962630000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/551","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/551","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/551.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/551.patch"},"body":"Adds the [HANS](https:\/\/github.com\/tommccoy1\/hans) dataset to evaluate NLI systems.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/551\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/550","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/550\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/550\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/550\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/550","id":689775914,"node_id":"MDExOlB1bGxSZXF1ZXN0NDc2NzgyNDY1","number":550,"title":"[BUGFIX] Solving mismatched checksum issue for the LinCE dataset 
(#539)","user":{"login":"gaguilar","id":5833357,"node_id":"MDQ6VXNlcjU4MzMzNTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5833357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gaguilar","html_url":"https:\/\/github.com\/gaguilar","followers_url":"https:\/\/api.github.com\/users\/gaguilar\/followers","following_url":"https:\/\/api.github.com\/users\/gaguilar\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gaguilar\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gaguilar\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gaguilar\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gaguilar\/orgs","repos_url":"https:\/\/api.github.com\/users\/gaguilar\/repos","events_url":"https:\/\/api.github.com\/users\/gaguilar\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gaguilar\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks a lot for that!\r\nThe line you are mentioning is a bug indeed, do you mind fixing it at the same time?","No worries! \r\n\r\nI pushed right away the fix, but then I realized that the master branch already had it, so I ended up merging the master branch with lince locally and then overwriting the previous commit in origin\/lince. Hopefully, this is not too messy :)\r\n"],"created_at":1598930823000,"updated_at":1599123961000,"closed_at":1599123961000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/550","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/550","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/550.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/550.patch"},"body":"Hi,\r\n\r\nI have added the updated `dataset_infos.json` file for the LinCE benchmark. This update is to fix the mismatched checksum bug #539 for one of the datasets in the LinCE benchmark. To update the file, I run this command from the nlp root directory:\r\n\r\n```\r\npython nlp-cli test .\/datasets\/lince --save_infos --all_configs\r\n```\r\n\r\n**NOTE**: I needed to change [this line](https:\/\/github.com\/huggingface\/nlp\/blob\/master\/src\/nlp\/commands\/dummy_data.py#L8) from: `from .utils.logging import get_logger` to `from nlp.utils.logging import get_logger`, otherwise the script was not able to import `get_logger`. 
However, I did not include that in this PR since that could have been just my environment (and another PR could be fixing this already if it is actually an issue).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/550\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/549","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/549\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/549\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/549\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/549","id":689766465,"node_id":"MDExOlB1bGxSZXF1ZXN0NDc2Nzc0OTI1","number":549,"title":"Fix bleurt logging import","user":{"login":"jbragg","id":2238344,"node_id":"MDQ6VXNlcjIyMzgzNDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2238344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jbragg","html_url":"https:\/\/github.com\/jbragg","followers_url":"https:\/\/api.github.com\/users\/jbragg\/followers","following_url":"https:\/\/api.github.com\/users\/jbragg\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jbragg\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jbragg\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jbragg\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jbragg\/orgs","repos_url":"https:\/\/api.github.com\/users\/jbragg\/repos","events_url":"https:\/\/api.github.com\/users\/jbragg\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jbragg\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["That\u2019s a good point that we started to discuss internally as well. 
We should pin the dataset en metrics code by default indeed.\r\nLet\u2019s update this in the coming release.","Ok closed this with #567 and we are working on a more general solution to pin dataset version in #562 (should be in the coming release)."],"created_at":1598929285000,"updated_at":1599156286000,"closed_at":1599123860000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/549","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/549","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/549.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/549.patch"},"body":"Bleurt started throwing an error in some code we have.\r\nThis looks like the fix but...\r\n\r\nIt's also unnerving that even a prebuilt docker image with pinned versions can be working 1 day and then fail the next (especially for production systems).\r\n\r\nAny way for us to pin your metrics code so that they are guaranteed not to to change and possibly fail on repository changes?\r\n\r\nThanks (and also for your continued work on the lib...)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/549\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/548","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/548\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/548\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/548\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/548","id":689285996,"node_id":"MDExOlB1bGxSZXF1ZXN0NDc2MzYzMjU1","number":548,"title":"[Breaking] Switch text loading to multi-threaded PyArrow loading","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Awesome !\r\nAlso I was wondering if we should try to make the hashing of the `data_files` faster (it is used to build the cache directory of datasets like `text` or `json`). Right now it reads each file and hashes all of its data. We could simply hash the path and some metadata including the `time last modified` tag no ? 
Apparently we can get this tag with `os.path.getmtime(path)`","I just rebased from master to include the hashing changes from #573 ","I think this is ready to merge, no?","Indeed it's ready to merge :)","Ok added the breaking change info and we can merge indeed.\r\n"],"created_at":1598886941000,"updated_at":1599560398000,"closed_at":1599560397000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/548","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/548","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/548.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/548.patch"},"body":"Test if we can get better performances for large-scale text datasets by using multi-threaded text file loading based on Apache Arrow multi-threaded CSV loader.\r\n\r\nIf it works ok, it would fix #546.\r\n\r\n**Breaking change**:\r\nThe text lines now do not include final line-breaks anymore.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/548\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/547","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/547\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/547\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/547\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/547","id":689268589,"node_id":"MDExOlB1bGxSZXF1ZXN0NDc2MzQ4OTk5","number":547,"title":"[Distributed] Making loading distributed datasets a bit safer","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1598885494000,"updated_at":1598886990000,"closed_at":1598886989000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/547","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/547","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/547.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/547.patch"},"body":"Add some file-locks during dataset loading","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/547\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/546","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/546\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/546\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/546\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/546","id":689186526,"node_id":"MDU6SXNzdWU2ODkxODY1MjY=","number":546,"title":"Very slow data loading on large dataset","user":{"login":"agemagician","id":6087313,"node_id":"MDQ6VXNlcjYwODczMTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6087313?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/agemagician","html_url":"https:\/\/github.com\/agemagician","followers_url":"https:\/\/api.github.com\/users\/agemagician\/followers","following_url":"https:\/\/api.github.com\/users\/agemagician\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/agemagician\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/agemagician\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/agemagician\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/agemagician\/orgs","repos_url":"https:\/\/api.github.com\/users\/agemagician\/repos","events_url":"https:\/\/api.github.com\/users\/agemagician\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/agemagician\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["When you load a text file for the first time with `nlp`, the file is converted into Apache Arrow format. Arrow allows to use memory-mapping, which means that you can load an arbitrary large dataset.\r\n\r\nNote that as soon as the conversion has been done once, the next time you'll load the dataset it will be much faster.\r\n\r\nHowever for a 1TB dataset, the conversion can indeed take time. You could try to load parts of it in parallel, and then use `nlp.concatenate_datasets` to get your full dataset.","Humm, we can give a look at these large scale datasets indeed.\r\n\r\nDo you mind sharing a few stats on your dataset so I can try to test on a similar one?\r\n\r\nIn particular some orders of magnitudes for the number of files, number of lines per files, line lengths.","@lhoestq Yes, I understand that the first time requires more time. The concatenate_datasets seems to be a workaround, but I believe a multi-processing method should be integrated into load_dataset to make it easier and more efficient for users.\r\n\r\n@thomwolf Sure, here are the statistics:\r\nNumber of lines: 4.2 Billion\r\nNumber of files: 6K\r\nNumber of tokens: 800 Billion\r\nThe number of lines is distributed equally across these 6k files.\r\nThe line length varies between 100 tokens to 40k tokens.\r\n","@agemagician you can give a try at a multithreaded version if you want (currently on the #548).\r\n\r\nTo test it, you just need to copy the new `text` processing script which is [here](https:\/\/github.com\/huggingface\/nlp\/blob\/07d92a82b7594498ff702f3cca55c074e2052257\/datasets\/text\/text.py) somewhere on your drive and give it's local path instead of `text` to `load_dataset`. E.g. 
in your example:\r\n```python\r\ntrain_files = glob.glob(\"xxx\/*.txt\",recursive=True)\r\nrandom.shuffle(train_files)\r\n\r\nprint(train_files)\r\n\r\ndataset = nlp.load_dataset('.\/datasets\/text.py', # path to where you've dowloaded the multi-threaded text loading script\r\n data_files=train_files,\r\n name=\"customDataset\",\r\n version=\"1.0.0\",\r\n cache_dir=\"xxx\/nlp\")\r\n```","I have already generated the dataset, but now I tried to reload it and it is still very slow.\r\n\r\nI also have installed your commit and it is slow, even after the dataset was already generated.\r\n`pip install git+https:\/\/github.com\/huggingface\/nlp.git@07d92a82b7594498ff702f3cca55c074e2052257`\r\n\r\nIt uses only a single thread.\r\n\r\nDid I miss something ?","As mentioned in #548 , each time you call `load_dataset` with `data_files=`, they are hashed to get the cache directory name. Hashing can be too slow with 1TB of data. I feel like we should have a faster way of getting a hash that identifies the input data files","I believe this is really a very important feature, otherwise, we will still have the issue of too slow loading problems even if the data cache generation is fast.","Hmm ok then maybe it's the hashing step indeed.\r\n\r\nLet's see if we can improve this as well.\r\n\r\n(you will very likely have to regenerate your dataset if we change this part of the lib though since I expect modifications on this part of the lib to results in new hashes)","Also, @agemagician you have to follow the step I indicate in my previous message [here](https:\/\/github.com\/huggingface\/nlp\/issues\/546#issuecomment-684648927) to use the new text loading script.\r\n\r\nJust doing `pip install git+https:\/\/github.com\/huggingface\/nlp.git@07d92a82b7594498ff702f3cca55c074e2052257` like you did won't use the new script (they are not inside the library but hosted on our hub).","No problem, I will regenerate it. 
This will make us see if we solved both issues and now both the data generation step, as well as the hashing step, is fast.","Any news for the hashing ?","I'm working on it today :)","Ok so now the text files won't be hashed.\r\n\r\nI also updated #548 to include this change.\r\nLet us know if it helps @agemagician :)","Perfect thanks for your amazing work."],"created_at":1598878643000,"updated_at":1599569099000,"closed_at":1599560397000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data.\r\nIt has been 8 hours and still, it is on the loading steps.\r\nIt does work when the text dataset size is small about 1 GB, but it doesn't scale.\r\nIt also uses a single thread during the data loading step.\r\n\r\n```\r\ntrain_files = glob.glob(\"xxx\/*.txt\",recursive=True)\r\nrandom.shuffle(train_files)\r\n\r\nprint(train_files)\r\n\r\ndataset = nlp.load_dataset('text', \r\n data_files=train_files,\r\n name=\"customDataset\",\r\n version=\"1.0.0\",\r\n cache_dir=\"xxx\/nlp\")\r\n```\r\n\r\nIs there something that I am missing ?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/546\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/545","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/545\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/545\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/545\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/545","id":689138878,"node_id":"MDU6SXNzdWU2ODkxMzg4Nzg=","number":545,"title":"New release coming up for this library","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Update: release is planed mid-next week."],"created_at":1598873858000,"updated_at":1610535544000,"closed_at":1610535544000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"Hi all,\r\nA few words on the roadmap for this library.\r\n\r\nThe next release will be a big one and is planed at the end of this week.\r\n\r\nIn addition to the support for indexed datasets (useful for non-parametric models like REALM, RAG, DPR, knn-LM and many other fast dataset retrieval technics), it will:\r\n- have support for 
multi-modal datasets\r\n- include various significant improvements on speed for standard processing (map, shuffling, ...)\r\n- have a better support for metrics (better caching, and a robust API) and a bigger focus on reproductibility\r\n- change the name to the final name (voted by the community): `datasets`\r\n- be the 1.0.0 release as we think the API will be mostly stabilized from now on","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/545\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/544","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/544\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/544\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/544\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/544","id":689062519,"node_id":"MDExOlB1bGxSZXF1ZXN0NDc2MTc4MDM2","number":544,"title":"[Distributed] Fix load_dataset error when multiprocessing + add test","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1598866210000,"updated_at":1598872511000,"closed_at":1598872510000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/544","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/544","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/544.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/544.patch"},"body":"Fix #543 + add test","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/544\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/543","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/543\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/543\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/543\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/543","id":688644407,"node_id":"MDU6SXNzdWU2ODg2NDQ0MDc=","number":543,"title":"nlp.load_dataset is not safe for multi processes when loading from 
local files","user":{"login":"luyug","id":55288513,"node_id":"MDQ6VXNlcjU1Mjg4NTEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/55288513?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/luyug","html_url":"https:\/\/github.com\/luyug","followers_url":"https:\/\/api.github.com\/users\/luyug\/followers","following_url":"https:\/\/api.github.com\/users\/luyug\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/luyug\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/luyug\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/luyug\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/luyug\/orgs","repos_url":"https:\/\/api.github.com\/users\/luyug\/repos","events_url":"https:\/\/api.github.com\/users\/luyug\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/luyug\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I'll take a look!"],"created_at":1598757634000,"updated_at":1598872510000,"closed_at":1598872510000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Loading from local files, e.g., `dataset = nlp.load_dataset('csv', data_files=['file_1.csv', 'file_2.csv'])`\r\nconcurrently from multiple processes, will raise `FileExistsError` from builder's line 430, https:\/\/github.com\/huggingface\/nlp\/blob\/6655008c738cb613c522deb3bd18e35a67b2a7e5\/src\/nlp\/builder.py#L423-L438\r\n\r\nLikely because multiple processes step into download_and_prepare, https:\/\/github.com\/huggingface\/nlp\/blob\/6655008c738cb613c522deb3bd18e35a67b2a7e5\/src\/nlp\/load.py#L550-L554\r\n\r\nThis can happen when launching distributed training with commands like `python -m torch.distributed.launch --nproc_per_node 4` on a new collection of files never loaded before.\r\n\r\nI can create a PR that puts in some file locks. 
It would be helpful if I can be informed of the convention for naming and placement of the lock.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/543\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/542","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/542\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/542\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/542\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/542","id":688555036,"node_id":"MDExOlB1bGxSZXF1ZXN0NDc1NzkyNTY0","number":542,"title":"Add TensorFlow example","user":{"login":"jplu","id":959590,"node_id":"MDQ6VXNlcjk1OTU5MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/959590?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jplu","html_url":"https:\/\/github.com\/jplu","followers_url":"https:\/\/api.github.com\/users\/jplu\/followers","following_url":"https:\/\/api.github.com\/users\/jplu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jplu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jplu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jplu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jplu\/orgs","repos_url":"https:\/\/api.github.com\/users\/jplu\/repos","events_url":"https:\/\/api.github.com\/users\/jplu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jplu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1598715567000,"updated_at":1598867360000,"closed_at":1598867359000,"author_association":"COLLABORATOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/542","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/542","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/542.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/542.patch"},"body":"Update the Quick Tour documentation in order to add the TensorFlow equivalent source code for the classification example. 
Now it is possible to select either the code in PyTorch or in TensorFlow in the Quick tour.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/542\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/541","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/541\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/541\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/541\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/541","id":688521224,"node_id":"MDU6SXNzdWU2ODg1MjEyMjQ=","number":541,"title":"Best practices for training tokenizers with nlp","user":{"login":"moskomule","id":11806234,"node_id":"MDQ6VXNlcjExODA2MjM0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11806234?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/moskomule","html_url":"https:\/\/github.com\/moskomule","followers_url":"https:\/\/api.github.com\/users\/moskomule\/followers","following_url":"https:\/\/api.github.com\/users\/moskomule\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/moskomule\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/moskomule\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/moskomule\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/moskomule\/orgs","repos_url":"https:\/\/api.github.com\/users\/moskomule\/repos","events_url":"https:\/\/api.github.com\/users\/moskomule\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/moskomule\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1598702809000,"updated_at":1598702820000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi, thank you for developing this library. \r\n\r\nWhat do you think are the best practices for training tokenizers using `nlp`? 
In the document and examples, I could only find pre-trained tokenizers used.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/541\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/540","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/540\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/540\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/540\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/540","id":688475884,"node_id":"MDExOlB1bGxSZXF1ZXN0NDc1NzMzNzMz","number":540,"title":"[BUGFIX] Fix Race Dataset Checksum bug","user":{"login":"abarbosa94","id":6608232,"node_id":"MDQ6VXNlcjY2MDgyMzI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6608232?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abarbosa94","html_url":"https:\/\/github.com\/abarbosa94","followers_url":"https:\/\/api.github.com\/users\/abarbosa94\/followers","following_url":"https:\/\/api.github.com\/users\/abarbosa94\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abarbosa94\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abarbosa94\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abarbosa94\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abarbosa94\/orgs","repos_url":"https:\/\/api.github.com\/users\/abarbosa94\/repos","events_url":"https:\/\/api.github.com\/users\/abarbosa94\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abarbosa94\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I'm not sure this would fix #537 .\r\nHowever your point about the missing `middle` data is right and we probably want to include these data as well.\r\nDo you think it would we worth having different configurations for this dataset for users who want to only load part of it (`high school` or `middle` or `all`) ?","This has fixed #537 at least on my machine hahaha.\r\n\r\nNice point! I think it would totally worth it :) What the best implementation approach would you suggest?\r\n\r\nWould it be possible to have `high school`, `middle` and `all` inside each portion of `train`, `validation` and `test`? 
Would this make sense?","I think we could have one dataset configuration for `high school`, one for `middle` and one for `all`.\r\nYou just need to add\r\n```python\r\n BUILDER_CONFIGS = [\r\n nlp.BuilderConfig(\r\n name=\"high school\",\r\n description=\"insert description here\",\r\n ),\r\n nlp.BuilderConfig(\r\n name=\"middle\",\r\n description=\"insert description here\",\r\n ),\r\n nlp.BuilderConfig(\r\n name=\"all\",\r\n description=\"insert description here\",\r\n ),\r\n ]\r\n```\r\nas a class attribute for the `Race` class.\r\n\r\nThen in `generate_examples` you can check the value of `self.config.name` and choose which files to include when generating examples.\r\n\r\nYou can check [mlsum](https:\/\/github.com\/huggingface\/nlp\/blob\/master\/datasets\/mlsum\/mlsum.py) for example if you want to see how it done in general, it's a dataset that has five configurations, and each config has train\/val\/test splits.","Hi @lhoestq sorry for the delay in addressing your comments. Thanks for your assistance :)\r\n\r\nYou were correct as well, as I was using the script without the `datasets\/race\/dataset_infos.json` file, it did not verify the checksum. I already fix it as well :)\r\n\r\nI managed to get everything running smoothly by now. Please let me know if you think that I could improve my solution"],"created_at":1598684410000,"updated_at":1600429340000,"closed_at":1600429340000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/540","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/540","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/540.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/540.patch"},"body":"In #537 I noticed that there was a bug in checksum checking when I have tried to download the race dataset. The reason for this is that the current preprocessing was just considering the `high school` data and it was ignoring the `middle` one. 
This PR just fixes it :)\r\n\r\nMoreover, I have added some descriptions.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/540\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/539","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/539\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/539\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/539\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/539","id":688323602,"node_id":"MDU6SXNzdWU2ODgzMjM2MDI=","number":539,"title":"[Dataset] `NonMatchingChecksumError` due to an update in the LinCE benchmark data","user":{"login":"gaguilar","id":5833357,"node_id":"MDQ6VXNlcjU4MzMzNTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5833357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gaguilar","html_url":"https:\/\/github.com\/gaguilar","followers_url":"https:\/\/api.github.com\/users\/gaguilar\/followers","following_url":"https:\/\/api.github.com\/users\/gaguilar\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gaguilar\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gaguilar\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gaguilar\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gaguilar\/orgs","repos_url":"https:\/\/api.github.com\/users\/gaguilar\/repos","events_url":"https:\/\/api.github.com\/users\/gaguilar\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gaguilar\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @gaguilar \r\n\r\nIf you want to take care of this, it very simple, you just need to regenerate the `dataset_infos.json` file as indicated [in the doc](https:\/\/huggingface.co\/nlp\/share_dataset.html#adding-metadata) by [installing from source](https:\/\/huggingface.co\/nlp\/installation.html#installing-from-source) and running the following command from the root of the repo:\r\n```bash\r\npython nlp-cli test .\/datasets\/lince --save_infos --all_configs\r\n```\r\nAnd then you can open a pull-request with the updated json file.\r\n\r\nOtherwise we'll do it sometime this week.","Hi @thomwolf \r\n\r\nThanks for the details! I just created a PR with the updated `dataset_infos.json` file (#550).","Thanks for updating the json file. Closing this one"],"created_at":1598644551000,"updated_at":1599150842000,"closed_at":1599150841000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\n\r\nThere is a `NonMatchingChecksumError` error for the `lid_msaea` (language identification for Modern Standard Arabic - Egyptian Arabic) dataset from the LinCE benchmark due to a minor update on that dataset. \r\n\r\nHow can I update the checksum of the library to solve this issue? 
The error is below and it also appears in the [nlp viewer](https:\/\/huggingface.co\/nlp\/viewer\/?dataset=lince&config=lid_msaea):\r\n\r\n```python\r\nimport nlp\r\nnlp.load_dataset('lince', 'lid_msaea')\r\n```\r\n\r\nOutput:\r\n```\r\nNonMatchingChecksumError: ['https:\/\/ritual.uh.edu\/lince\/libaccess\/eyJ1c2VybmFtZSI6ICJodWdnaW5nZmFjZSBubHAiLCAidXNlcl9pZCI6IDExMSwgImVtYWlsIjogImR1bW15QGVtYWlsLmNvbSJ9\/lid_msaea.zip']\r\nTraceback:\r\nFile \"\/home\/sasha\/streamlit\/lib\/streamlit\/ScriptRunner.py\", line 322, in _run_script\r\n exec(code, module.__dict__)\r\nFile \"\/home\/sasha\/nlp-viewer\/run.py\", line 196, in <module>\r\n dts, fail = get(str(option.id), str(conf_option.name) if conf_option else None)\r\nFile \"\/home\/sasha\/streamlit\/lib\/streamlit\/caching.py\", line 591, in wrapped_func\r\n return get_or_create_cached_value()\r\nFile \"\/home\/sasha\/streamlit\/lib\/streamlit\/caching.py\", line 575, in get_or_create_cached_value\r\n return_value = func(*args, **kwargs)\r\nFile \"\/home\/sasha\/nlp-viewer\/run.py\", line 150, in get\r\n builder_instance.download_and_prepare()\r\nFile \"\/home\/sasha\/.local\/share\/virtualenvs\/lib-ogGKnCK_\/lib\/python3.7\/site-packages\/nlp\/builder.py\", line 432, in download_and_prepare\r\n download_config.force_download = download_mode == FORCE_REDOWNLOAD\r\nFile \"\/home\/sasha\/.local\/share\/virtualenvs\/lib-ogGKnCK_\/lib\/python3.7\/site-packages\/nlp\/builder.py\", line 469, in _download_and_prepare\r\nFile \"\/home\/sasha\/.local\/share\/virtualenvs\/lib-ogGKnCK_\/lib\/python3.7\/site-packages\/nlp\/utils\/info_utils.py\", line 36, in verify_checksums\r\n raise NonMatchingChecksumError(str(bad_urls))\r\n```\r\n\r\nThank you in advance!\r\n\r\n@lhoestq ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/539\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/538","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/538\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/538\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/538\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/538","id":688015912,"node_id":"MDExOlB1bGxSZXF1ZXN0NDc1MzU3MjY2","number":538,"title":"[logging] Add centralized logging - Bump-up cache loads to 
warnings","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1598614949000,"updated_at":1598874171000,"closed_at":1598874171000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/538","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/538","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/538.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/538.patch"},"body":"Add a `nlp.logging` module to set the global logging level easily. The verbosity level also controls the tqdm bars (disabled when set higher than INFO).\r\n\r\nYou can use:\r\n```\r\nnlp.logging.set_verbosity(verbosity: int)\r\nnlp.logging.set_verbosity_info()\r\nnlp.logging.set_verbosity_warning()\r\nnlp.logging.set_verbosity_debug()\r\nnlp.logging.set_verbosity_error()\r\nnlp.logging.get_verbosity() -> int\r\n```\r\nAnd use the levels:\r\n```\r\nnlp.logging.CRITICAL\r\nnlp.logging.DEBUG\r\nnlp.logging.ERROR\r\nnlp.logging.FATAL\r\nnlp.logging.INFO\r\nnlp.logging.NOTSET\r\nnlp.logging.WARN\r\nnlp.logging.WARNING\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/538\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/537","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/537\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/537\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/537\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/537","id":687614699,"node_id":"MDU6SXNzdWU2ODc2MTQ2OTk=","number":537,"title":"[Dataset] RACE dataset Checksums 
error","user":{"login":"abarbosa94","id":6608232,"node_id":"MDQ6VXNlcjY2MDgyMzI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6608232?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/abarbosa94","html_url":"https:\/\/github.com\/abarbosa94","followers_url":"https:\/\/api.github.com\/users\/abarbosa94\/followers","following_url":"https:\/\/api.github.com\/users\/abarbosa94\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/abarbosa94\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/abarbosa94\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/abarbosa94\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/abarbosa94\/orgs","repos_url":"https:\/\/api.github.com\/users\/abarbosa94\/repos","events_url":"https:\/\/api.github.com\/users\/abarbosa94\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/abarbosa94\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["`NonMatchingChecksumError` means that the checksum of the downloaded file is not the expected one.\r\nEither the file you downloaded was corrupted along the way, or the host updated the file.\r\nCould you try to clear your cache and run `load_dataset` again ? If the error is still there, it means that there was an update in the data, and we may have to update the expected checksum value.","I just cleared the cache an run it again. The error persists ):\r\n\r\n```\r\n nlp (master) $ rm -rf \/Users\/abarbosa\/.cache\/huggingface\/\r\n nlp (master) $ python\r\nPython 3.8.5 (default, Aug 5 2020, 03:39:04)\r\n[Clang 10.0.0 ] :: Anaconda, Inc. 
on darwin\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import nlp\r\n>>> dataset = nlp.load_dataset(\"race\")\r\nDownloading: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 4.39k\/4.39k [00:00<00:00, 661kB\/s]\r\nDownloading: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1.81k\/1.81k [00:00<00:00, 644kB\/s]\r\nUsing custom data configuration default\r\nDownloading and preparing dataset race\/default (download: 84.52 MiB, generated: 132.61 MiB, post-processed: Unknown size, total: 217.13 MiB) to \/Users\/abarbosa\/.cache\/huggingface\/datasets\/race\/default\/0.1.0\/5461327f1a83549ca0d845a3159c806d2baf4f8d0d8f7d657157ce7cdf3899c2...\r\nDownloading: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 25.4M\/25.4M [01:03<00:00, 401kB\/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/Users\/abarbosa\/Documents\/nlp\/src\/nlp\/load.py\", line 550, in load_dataset\r\n 
builder_instance.download_and_prepare(\r\n File \"\/Users\/abarbosa\/Documents\/nlp\/src\/nlp\/builder.py\", line 471, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/Users\/abarbosa\/Documents\/nlp\/src\/nlp\/builder.py\", line 530, in _download_and_prepare\r\n verify_checksums(\r\n File \"\/Users\/abarbosa\/Documents\/nlp\/src\/nlp\/utils\/info_utils.py\", line 38, in verify_checksums\r\n raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\nnlp.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['http:\/\/www.cs.cmu.edu\/~glai1\/data\/race\/RACE.tar.gz']\r\n>>>\r\n```","Dealing with the same issue please update the checksum on nlp library end. The data seems to have changed on their end.","We have a discussion on this datasets here: https:\/\/github.com\/huggingface\/nlp\/pull\/540\r\n\r\nFeel free to participate if you have some opinion on the scope of data which should be included in this dataset.","At least for me, the file that was downloaded from CMU isn't the complete dataset, but a small subset of it (~25MB vs ~85MB). I've previously downloaded the dataset directly, so for my personal needs I could just swap out the corrupted file with the correct one. Perhaps you could host it like you do for the Wikipedia and BookCorpus datasets.\r\n\r\n","> At least for me, the file that was downloaded from CMU isn't the complete dataset, but a small subset of it (~25MB vs ~85MB). I've previously downloaded the dataset directly, so for my personal needs I could just swap out the corrupted file with the correct one. Perhaps you could host it like you do for the Wikipedia and BookCorpus datasets.\r\n\r\nCould you upload this please?","> > At least for me, the file that was downloaded from CMU isn't the complete dataset, but a small subset of it (~25MB vs ~85MB). I've previously downloaded the dataset directly, so for my personal needs I could just swap out the corrupted file with the correct one. Perhaps you could host it like you do for the Wikipedia and BookCorpus datasets.\r\n> \r\n> Could you upload this please?\r\n\r\nNot sure if I can upload it according to their license (\"You agree not to reproduce, duplicate, copy, sell, trade, resell or exploit for any commercial purpose, any portion of the contexts and any portion of derived data.\").","I managed to fix it in #540 :)","Closing since @540 is merged\r\n\r\nThanks again @abarbosa94 "],"created_at":1598572696000,"updated_at":1600430824000,"closed_at":1600430824000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. 
I have performed the following steps:\r\n\r\n```\r\ndataset = nlp.load_dataset(\"race\")\r\nlen(dataset[\"train\"]), len(dataset[\"validation\"])\r\n```\r\n\r\nBut then I got the following error:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nNonMatchingChecksumError Traceback (most recent call last)\r\n<ipython-input-15-8bf7603ce0ed> in <module>\r\n----> 1 dataset = nlp.load_dataset(\"race\")\r\n 2 len(dataset[\"train\"]), len(dataset[\"validation\"])\r\n\r\n~\/miniconda3\/envs\/masters\/lib\/python3.8\/site-packages\/nlp\/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 546 \r\n 547 # Download and prepare data\r\n--> 548 builder_instance.download_and_prepare(\r\n 549 download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,\r\n 550 )\r\n\r\n~\/miniconda3\/envs\/masters\/lib\/python3.8\/site-packages\/nlp\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 460 logger.info(\"Dataset not on Hf google storage. Downloading and preparing it from source\")\r\n 461 if not downloaded_from_gcs:\r\n--> 462 self._download_and_prepare(\r\n 463 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 464 )\r\n\r\n~\/miniconda3\/envs\/masters\/lib\/python3.8\/site-packages\/nlp\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 519 # Checksums verification\r\n 520 if verify_infos:\r\n--> 521 verify_checksums(\r\n 522 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), \"dataset source files\"\r\n 523 )\r\n\r\n~\/miniconda3\/envs\/masters\/lib\/python3.8\/site-packages\/nlp\/utils\/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)\r\n 36 if len(bad_urls) > 0:\r\n 37 error_msg = \"Checksums didn't match\" + for_verification_name + \":\\n\"\r\n---> 38 raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\n 39 logger.info(\"All the checksums matched successfully\" + for_verification_name)\r\n 40 \r\n\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['http:\/\/www.cs.cmu.edu\/~glai1\/data\/race\/RACE.tar.gz']\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/537\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/536","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/536\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/536\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/536\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/536","id":687378332,"node_id":"MDExOlB1bGxSZXF1ZXN0NDc0ODE0NzY1","number":536,"title":"Fingerprint","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I changed the way I implemented fingerprint updates to use decorator functions.\r\n\r\nI also added a new attribute called `_inplace_history` that stores the in-place history of transforms (like cast_, rename_columns, etc.). This history is useful to replay the changes that were done in-place when unpickling a dataset that is memory mapped from a file.\r\n\r\nLet me know what you think @thomwolf "],"created_at":1598545629000,"updated_at":1598883640000,"closed_at":1598883639000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/536","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/536","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/536.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/536.patch"},"body":"This PR is a continuation of #513 , in which many in-place functions were introduced or updated (cast_, flatten_) etc.\r\nHowever the caching didn't handle these changes. Indeed the caching took into account only the previous cache file name of the table, and not the possible in-place transforms of the table.\r\n\r\nTo fix that, I added the concept of dataset fingerprint, that is updated after each transform (in place or not), and stored inside the table metadata.\r\n\r\nWhen a dataset is created, an initial fingerprint is computed. If the dataset is memory-mapped, then the fingerprint generator doesn't read the table and only looks at the filename. However if the table is in-memory, then the fingerprint generator reads the content of the table using a batched non-crypto hashing.\r\n\r\nI added a utility class to compute hashes of arbitrary python objects in `fingerprint.py` : `Hasher`. The API is close to standard hashing tools (`.update`, `.hexdigest`). 
It also supports custom hashing functions depending on object types using a registry like pickle. I added a custom hashing function to hash a `pa.Table` in a batched way, and also for `nlp.DatasetInfo` to leverage its json serialization feature.\r\n\r\nNote about this PR:\r\nThis is a draft PR because #513 needs to be merged first.\r\nThe diff that is shown is for branches fingerprint -> indices (and not master, for now)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/536\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/535","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/535\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/535\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/535\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/535","id":686238315,"node_id":"MDExOlB1bGxSZXF1ZXN0NDczODM3Njg0","number":535,"title":"Benchmarks","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1598440886000,"updated_at":1598517600000,"closed_at":1598517599000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/535","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/535","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/535.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/535.patch"},"body":"Adding some benchmarks with DVC\/CML\r\n\r\nTo add a new tracked benchmark:\r\n- create a new python benchmarking script in `.\/benchmarks\/`. 
The script can use the utilities in `.\/benchmarks\/utils.py` and should output a JSON file with results in `.\/benchmarks\/results\/`.\r\n- add a new pipeline stage in [dvc.yaml](.\/dvc.yaml) with the name of your new benchmark.\r\n\r\nThat's it","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/535\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/534","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/534\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/534\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/534\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/534","id":686115912,"node_id":"MDU6SXNzdWU2ODYxMTU5MTI=","number":534,"title":"`list_datasets()` is broken.","user":{"login":"ashutosh-dwivedi-e3502","id":314169,"node_id":"MDQ6VXNlcjMxNDE2OQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/314169?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ashutosh-dwivedi-e3502","html_url":"https:\/\/github.com\/ashutosh-dwivedi-e3502","followers_url":"https:\/\/api.github.com\/users\/ashutosh-dwivedi-e3502\/followers","following_url":"https:\/\/api.github.com\/users\/ashutosh-dwivedi-e3502\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ashutosh-dwivedi-e3502\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ashutosh-dwivedi-e3502\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ashutosh-dwivedi-e3502\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ashutosh-dwivedi-e3502\/orgs","repos_url":"https:\/\/api.github.com\/users\/ashutosh-dwivedi-e3502\/repos","events_url":"https:\/\/api.github.com\/users\/ashutosh-dwivedi-e3502\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ashutosh-dwivedi-e3502\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting !\r\nThis has been fixed in #475 and the fix will be available in the next release","What you can do instead to get the list of the datasets is call\r\n\r\n```python\r\nprint([dataset.id for dataset in nlp.list_datasets()])\r\n```","Thanks @lhoestq . "],"created_at":1598429941000,"updated_at":1598509871000,"closed_at":1598509871000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"version = '0.4.0'\r\n\r\n`list_datasets()` is broken. 
It results in the following error : \r\n\r\n```\r\nIn [3]: nlp.list_datasets()\r\nOut[3]: ---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n~\/.virtualenvs\/san-lgUCsFg_\/lib\/python3.8\/site-packages\/IPython\/core\/formatters.py in __call__(self, obj)\r\n 700 type_pprinters=self.type_printers,\r\n 701 deferred_pprinters=self.deferred_printers)\r\n--> 702 printer.pretty(obj)\r\n 703 printer.flush()\r\n 704 return stream.getvalue()\r\n\r\n~\/.virtualenvs\/san-lgUCsFg_\/lib\/python3.8\/site-packages\/IPython\/lib\/pretty.py in pretty(self, obj)\r\n 375 if cls in self.type_pprinters:\r\n 376 # printer registered in self.type_pprinters\r\n--> 377 return self.type_pprinters[cls](obj, self, cycle)\r\n 378 else:\r\n 379 # deferred printer\r\n\r\n~\/.virtualenvs\/san-lgUCsFg_\/lib\/python3.8\/site-packages\/IPython\/lib\/pretty.py in inner(obj, p, cycle)\r\n 553 p.text(',')\r\n 554 p.breakable()\r\n--> 555 p.pretty(x)\r\n 556 if len(obj) == 1 and type(obj) is tuple:\r\n 557 # Special case for 1-item tuples.\r\n\r\n~\/.virtualenvs\/san-lgUCsFg_\/lib\/python3.8\/site-packages\/IPython\/lib\/pretty.py in pretty(self, obj)\r\n 392 if cls is not object \\\r\n 393 and callable(cls.__dict__.get('__repr__')):\r\n--> 394 return _repr_pprint(obj, self, cycle)\r\n 395\r\n 396 return _default_pprint(obj, self, cycle)\r\n\r\n~\/.virtualenvs\/san-lgUCsFg_\/lib\/python3.8\/site-packages\/IPython\/lib\/pretty.py in _repr_pprint(obj, p, cycle)\r\n 698 \"\"\"A pprint that just redirects to the normal repr function.\"\"\"\r\n 699 # Find newlines and replace them with p.break_()\r\n--> 700 output = repr(obj)\r\n 701 lines = output.splitlines()\r\n 702 with p.group():\r\n\r\n~\/.virtualenvs\/san-lgUCsFg_\/lib\/python3.8\/site-packages\/nlp\/hf_api.py in __repr__(self)\r\n 110\r\n 111 def __repr__(self):\r\n--> 112 single_line_description = self.description.replace(\"\\n\", \"\")\r\n 113 return f\"nlp.ObjectInfo(id='{self.id}', description='{single_line_description}', files={self.siblings})\"\r\n 114\r\n\r\nAttributeError: 'NoneType' object has no attribute 'replace'\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/534\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/533","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/533\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/533\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/533\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/533","id":685585914,"node_id":"MDExOlB1bGxSZXF1ZXN0NDczMjg4OTgx","number":533,"title":"Fix ArrayXD for pyarrow 0.17.1 by using non fixed length list 
arrays","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1598369564000,"updated_at":1598428944000,"closed_at":1598428943000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/533","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/533","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/533.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/533.patch"},"body":"It should fix the CI problems in #513 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/533\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/532","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/532\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/532\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/532\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/532","id":685540614,"node_id":"MDU6SXNzdWU2ODU1NDA2MTQ=","number":532,"title":"File exists error when used with TPU","user":{"login":"go-inoue","id":20531705,"node_id":"MDQ6VXNlcjIwNTMxNzA1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20531705?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/go-inoue","html_url":"https:\/\/github.com\/go-inoue","followers_url":"https:\/\/api.github.com\/users\/go-inoue\/followers","following_url":"https:\/\/api.github.com\/users\/go-inoue\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/go-inoue\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/go-inoue\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/go-inoue\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/go-inoue\/orgs","repos_url":"https:\/\/api.github.com\/users\/go-inoue\/repos","events_url":"https:\/\/api.github.com\/users\/go-inoue\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/go-inoue\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I am facing probably facing similar issues with \r\n\r\n`wiki40b_en_100_0`","Could you try to run `dataset = load_dataset(\"text\", 
data_files=file_path, split=\"train\")` once before calling the script ?\r\n\r\nIt looks like several processes try to create the dataset in arrow format at the same time. If the dataset is already created it should be fine","Thanks! I tested on 328MB text data on `n1-standard-8 (8 vCPUs, 30 GB memory)`. The main script ran without any issue, but it seems to require a huge space in the drive.\r\n\r\nAs suggested, I ran the following script before running the pre-training command with `xla_spawn.py`.\r\n\r\n```python\r\nfrom nlp import load_dataset\r\n\r\nfile_path=\"your_file_name\"\r\nload_dataset(\"text\", data_files=file_path, split=\"train\")\r\n```\r\nThis will create `text-train.arrow` under the default cache directory. Then, I run the script with `xla_spawn.py`. It will load data from the cached file. My understanding is that there's no other way but to do this two-step process with the current version (0.4) of `nlp`.\r\n\r\nDuring another caching process that happens in the main script:\r\n\r\n```\r\n08\/26\/2020 09:19:51 - INFO - nlp.utils.info_utils - All the checksums matched successfully for post processing resources\r\n08\/26\/2020 09:19:53 - INFO - nlp.arrow_dataset - Caching processed dataset at \/home\/*****\/.cache\/huggingface\/datasets\/text\/default-b0932b2bdbb63283\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\/cache-f90f341e5308a7469\r\n8d872bcc88f9c0e.arrow\r\n```\r\n\r\n`nlp` generates a temporary file per core, each of which is three times larger than the original text data. If each process is actually writing on the disk, you will need a huge amount of space in your drive. (Maybe I'm missing something.)\r\n\r\n```\r\n-rw-r--r-- 1 ***** ***** 674 Aug 26 09:19 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 26 09:19 LICENSE\r\n-rw-r--r-- 1 ***** ***** 332M Aug 26 09:10 text-train.arrow\r\n-rw------- 1 ***** ***** 940M Aug 26 09:31 tmp0k43sazw\r\n-rw------- 1 ***** ***** 940M Aug 26 09:31 tmp7sxs9mj5\r\n-rw------- 1 ***** ***** 939M Aug 26 09:31 tmpbbiqw2vp\r\n-rw------- 1 ***** ***** 937M Aug 26 09:31 tmpjxb5ptyu\r\n-rw------- 1 ***** ***** 933M Aug 26 09:31 tmpk3hkdh0e\r\n-rw------- 1 ***** ***** 944M Aug 26 09:31 tmpnoalwftz\r\n-rw------- 1 ***** ***** 931M Aug 26 09:31 tmpuxdr_dz3\r\n-rw------- 1 ***** ***** 945M Aug 26 09:31 tmpxjyuy6dk\r\n```\r\nAfter the caching process, they seem to be merged into one file.\r\n\r\n```\r\n-rw------- 1 ***** ***** 989M Aug 26 09:32 cache-f90f341e5308a74698d872bcc88f9c0e.arrow\r\n-rw-r--r-- 1 ***** ***** 674 Aug 26 09:19 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 26 09:19 LICENSE\r\n-rw-r--r-- 1 ***** ***** 332M Aug 26 09:10 text-train.arrow\r\n```","Again it looks like every process tries to tokenize the full dataset at the same time.\r\nIf you do the tokenization before calling `xla_spawn.py` once, then each process will then use the tokenized cached file `cache-f90f341e5308a74698d872bcc88f9c0e.arrow` and not recompute it.\r\n\r\nNot sure if there's a better way to do that cc @julien-c @thomwolf ","I wrote a separate script just for preparing a cached file, including tokenization. Each process did use the tokenized cached file.\r\n\r\nCurrently I'm testing the pipeline on 24GB text data. It took about 1.5 hour to create a cached file on `n1-highmem-16 (16 vCPUs, 104 GB memory)`. 
I assume loading this cached file in the main script with `xla_spawn.py` won't be an issue (even if there are 8 processes).\r\n\r\n```\r\ntotal 98G\r\ndrwxr-xr-x 2 ***** ***** 4.0K Aug 26 13:38 .\r\ndrwxr-xr-x 3 ***** ***** 4.0K Aug 26 12:24 ..\r\n-rw------- 1 ***** ***** 74G Aug 26 13:38 cache-a7aa04134ba7b1aff5d9710f14a4e334.arrow\r\n-rw-r--r-- 1 ***** ***** 681 Aug 26 12:24 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 26 12:24 LICENSE\r\n-rw-r--r-- 1 ***** ***** 25G Aug 26 12:24 text-train.arrow\r\n```","Yes loading the cached file should be fine from different processes","Sorry, I thought it was working, but actually the second call doesn't use the cached file that was generated separately, and it will generate another cache-****.arrorw file with a different name. If I run the training script again (with `xla_spawn.py`), it will use the second cached file, which was generated by the training script itself in the previous run.\r\n\r\n```\r\ndrwxr-xr-x 2 ***** ***** 4.0K Aug 26 15:35 .\r\ndrwxr-xr-x 3 ***** ***** 4.0K Aug 26 15:29 ..\r\n-rw------- 1 ***** ***** 99M Aug 26 15:35 cache-0d77dfce704493dbe63f071eed6a5431.arrow\r\n-rw------- 1 ***** ***** 99M Aug 26 15:29 cache-69633651476e943b93c89ace715f9487.arrow\r\n-rw-r--r-- 1 ***** ***** 670 Aug 26 15:33 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 26 15:33 LICENSE\r\n-rw-r--r-- 1 ***** ***** 33M Aug 26 15:29 text-train.arrow\r\n```","So if I understand correctly it means that the cached file generated by your separated script is different by the one used by the training script ?","Yes.\r\n\r\n1. `cache-69633651476e943b93c89ace715f9487.arrow` was generated with a separate script. \r\n2. I ran the entire script with `xla_spawn.py`.\r\n3. `cache-69633651476e943b93c89ace715f9487.arrow` is not used.\r\n4. `cache-0d77dfce704493dbe63f071eed6a5431.arrow` is created.\r\n5. training starts...\r\n\r\nNow, if I kill the process at step 5, and do the step 2 again, it will use `cache-0d77dfce704493dbe63f071eed6a5431.arrow` (cached file created at step 4) without any issue.\r\n\r\nI used the following to generate the first cached file.\r\n```python\r\ndataset = load_dataset(\"text\", data_files=file_path, split=\"train\")\r\ndataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True,\r\n truncation=True, max_length=args.block_size), batched=True)\r\ndataset.set_format(type='torch', columns=['input_ids'])\r\n```","1. Here's the log from the first step.\r\n```\r\nDownloading and preparing dataset text\/default-e84dd29acc4ad9ef (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/home\/*****\/.cache\/huggingface\/datasets\/text\/default-e84dd29acc4ad9ef\/0.0.0\/\r\n447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...\r\nDataset text downloaded and prepared to \/home\/*****\/.cache\/huggingface\/datasets\/text\/default-e84dd29acc4ad9ef\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d. Subsequent calls will reuse this data.\r\n```\r\nThere's a file named `cache-7b1440ba7077af0f0d9035b5a55d01fc.arrow`, so it did create a cached file.\r\n```\r\ndrwxr-xr-x 2 ***** ***** 4.0K Aug 26 15:59 .\r\ndrwxr-xr-x 3 ***** ***** 4.0K Aug 26 15:58 ..\r\n-rw------- 1 ***** ***** 99M Aug 26 15:59 cache-7b1440ba7077af0f0d9035b5a55d01fc.arrow\r\n-rw-r--r-- 1 ***** ***** 670 Aug 26 15:58 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 26 15:58 LICENSE\r\n-rw-r--r-- 1 ***** ***** 33M Aug 26 15:58 text-train.arrow\r\n```\r\n2. 
Ideally, `cache-7b1440ba7077af0f0d9035b5a55d01fc.arrow` should be used in `run_language_modeling.py` (modified version using `nlp`) with `xla_spawn.py`. But it looks like it's creating a new cached file.\r\n\r\n```\r\n08\/26\/2020 16:13:03 - INFO - filelock - Lock 139635836351096 released on \/home\/*****\/.cache\/huggingface\/datasets\/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.202fa4f84f552bff1f5400ae012663839c61efb3de068c6c8722d34ac0ea6192\r\n.py.lock\r\n08\/26\/2020 16:13:03 - WARNING - nlp.builder - Using custom data configuration default\r\n08\/26\/2020 16:13:03 - INFO - nlp.builder - Overwrite dataset info from restored data version.\r\n08\/26\/2020 16:13:03 - INFO - nlp.info - Loading Dataset info from \/home\/*****\/.cache\/huggingface\/datasets\/text\/default-e84dd29acc4ad9ef\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\n08\/26\/2020 16:13:03 - INFO - nlp.builder - Reusing dataset text (\/home\/*****\/.cache\/huggingface\/datasets\/text\/default-e84dd29acc4ad9ef\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)\r\n08\/26\/2020 16:13:03 - INFO - nlp.builder - Constructing Dataset for split train, from \/home\/*****\/.cache\/huggingface\/datasets\/text\/default-e84dd29acc4ad9ef\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\n08\/26\/2020 16:13:03 - INFO - nlp.utils.info_utils - All the checksums matched successfully for post processing resources\r\n08\/26\/2020 16:13:03 - INFO - nlp.builder - Overwrite dataset info from restored data version.\r\n08\/26\/2020 16:13:03 - INFO - nlp.info - Loading Dataset info from \/home\/*****\/.cache\/huggingface\/datasets\/text\/default-e84dd29acc4ad9ef\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\n08\/26\/2020 16:13:03 - INFO - nlp.builder - Reusing dataset text (\/home\/*****\/.cache\/huggingface\/datasets\/text\/default-e84dd29acc4ad9ef\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)\r\n08\/26\/2020 16:13:03 - INFO - nlp.builder - Constructing Dataset for split train, from \/home\/*****\/.cache\/huggingface\/datasets\/text\/default-e84dd29acc4ad9ef\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\n08\/26\/2020 16:13:03 - INFO - nlp.utils.info_utils - All the checksums matched successfully for post processing resources\r\n08\/26\/2020 16:13:05 - INFO - nlp.arrow_dataset - Caching processed dataset at \/home\/*****\/.cache\/huggingface\/datasets\/text\/default-e84dd29acc4ad9ef\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\/cache-0d77dfce704493dbe\r\n63f071eed6a5431.arrow\r\n^M 0%| | 0\/100 [00:00<?, ?it\/s]08\/26\/2020 16:13:05 - INFO - nlp.arrow_dataset - Caching processed dataset at \/home\/*****\/.cache\/huggingface\/datasets\/text\/default-e84dd29acc4ad9ef\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6\r\nfe661fe4d070d380d\/cache-0d77dfce704493dbe63f071eed6a5431.arrow\r\n```\r\n\r\nThere are two cached files in the directory:\r\n\r\n```\r\ndrwxr-xr-x 2 ***** ***** 4.0K Aug 26 16:14 .\r\ndrwxr-xr-x 3 ***** ***** 4.0K Aug 26 15:58 ..\r\n-rw------- 1 ***** ***** 99M Aug 26 16:14 cache-0d77dfce704493dbe63f071eed6a5431.arrow\r\n-rw------- 1 ***** ***** 99M Aug 26 15:59 cache-7b1440ba7077af0f0d9035b5a55d01fc.arrow\r\n-rw-r--r-- 1 ***** ***** 670 Aug 26 16:13 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 26 16:13 LICENSE\r\n-rw-r--r-- 1 ***** ***** 33M Aug 26 15:58 text-train.arrow\r\n```\r\n\r\nIf I kill the process, and run 
it again, it will use the second cached file.\r\n\r\n```\r\n08\/26\/2020 16:19:52 - WARNING - nlp.builder - Using custom data configuration default\r\n08\/26\/2020 16:19:52 - INFO - nlp.builder - Overwrite dataset info from restored data version.\r\n08\/26\/2020 16:19:52 - INFO - nlp.info - Loading Dataset info from \/home\/*****\/.cache\/huggingface\/datasets\/text\/default-e84dd29acc4ad9ef\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\n08\/26\/2020 16:19:52 - INFO - nlp.builder - Reusing dataset text (\/home\/*****\/.cache\/huggingface\/datasets\/text\/default-e84dd29acc4ad9ef\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)\r\n08\/26\/2020 16:19:52 - INFO - nlp.builder - Constructing Dataset for split train, from \/home\/*****\/.cache\/huggingface\/datasets\/text\/default-e84dd29acc4ad9ef\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\n08\/26\/2020 16:19:52 - INFO - nlp.utils.info_utils - All the checksums matched successfully for post processing resources\r\n08\/26\/2020 16:19:53 - INFO - nlp.arrow_dataset - Loading cached processed dataset at \/home\/*****\/.cache\/huggingface\/datasets\/text\/default-e84dd29acc4ad9ef\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\/cache-0d77dfce70\r\n4493dbe63f071eed6a5431.arrow\r\n08\/26\/2020 16:19:53 - INFO - nlp.arrow_dataset - Set __getitem__(key) output type to torch for ['input_ids'] columns (when key is int or slice) and don't output other (un-formatted) columns.\r\n```","Thanks for all the details.\r\nThe two cached files are supposed to be the same. I suspect that the caching has a problem with the tokenizer.\r\nWhich tokenizer did you use ?","I trained a byte-level BPE tokenizer on my data with `tokenziers` library following this [example](https:\/\/github.com\/huggingface\/tokenizers\/blob\/master\/bindings\/python\/examples\/train_bytelevel_bpe.py).\r\n\r\nAnd I put these model files in a directory named `\"model_name\"`. I also put config.json, which is the original RoBERTa config file.\r\n\r\n```bash\r\n%ls model_name\r\nconfig.json merges.txt vocab.json\r\n```\r\n\r\n[This](https:\/\/github.com\/huggingface\/transformers\/blob\/4bd7be9a4268221d2a0000c7e8033aaeb365c03b\/examples\/language-modeling\/run_language_modeling.py#L196) is the line where `run_language_modeling.py` loads the tokenier.\r\n\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, cache_dir=model_args.cache_dir)\r\n```\r\n\r\nI use `\"model_name\"` for `model_args.tokenizer_name`. I don't specify `model_args.cache_dir`. It is 'None' by default.","In my separated script for caching, I'm using `use_fast=True` when initializing a tokenizer.\r\n\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained(args.config_name, use_fast=True)\r\n```\r\nI wasn't using that option in the main script. That could be the reason...","Yea it could definitely explain why you have two different cache files.\r\nLet me know if using the same tokenizers on both sides fixes the issue","It still creates a new file even if I remove `use_fast=True`... 
\r\n\r\nHere's the script used to create a cached file.\r\n```python \r\n#!\/usr\/bin\/env python3\r\n\r\nimport argparse\r\n\r\nfrom transformers import AutoTokenizer\r\n\r\nfrom nlp import load_dataset\r\n\r\n\r\ndef main():\r\n parser = argparse.ArgumentParser(description='description')\r\n parser.add_argument('--config_name', type=str, help='Pretrained config name or path if not the same as model_name')\r\n parser.add_argument('--data_file', type=str, help='The input data file (a text file).')\r\n parser.add_argument('--block_size', type=int, default=-1, help='The training dataset will be truncated in block of this size for training')\r\n args = parser.parse_args()\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(args.config_name)\r\n\r\n dataset = load_dataset(\"text\", data_files=args.data_file, split=\"train\")\r\n dataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True,\r\n truncation=True, max_length=args.block_size), batched=True)\r\n dataset.set_format(type='torch', columns=['input_ids'])\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n\r\nHere's how the data is loaded in the modified `run_language_modeling.py`. [[original function](https:\/\/github.com\/huggingface\/transformers\/blob\/971d1802d009d9996b36a34a34477cee849ef39f\/examples\/language-modeling\/run_language_modeling.py#L128-L135)]\r\n\r\n```python\r\ndef get_dataset(args: DataTrainingArguments, tokenizer: PreTrainedTokenizer, evaluate=False):\r\n file_path = args.eval_data_file if evaluate else args.train_data_file\r\n split = \"validation\" if evaluate else \"train\"\r\n if args.line_by_line:\r\n # return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size)\r\n dataset = load_dataset(\"text\", data_files=file_path, split=\"train\")\r\n dataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True,\r\n truncation=True, max_length=args.block_size), batched=True)\r\n dataset.set_format(type='torch', columns=['input_ids'])\r\n return dataset\r\n\r\n else:\r\n return TextDataset(\r\n tokenizer=tokenizer, file_path=file_path, block_size=args.block_size, overwrite_cache=args.overwrite_cache\r\n )\r\n```\r\n\r\nProbably I don't need this part in the main script,\r\n\r\n```python\r\ndataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True,\r\n truncation=True, max_length=args.block_size), batched=True)\r\n dataset.set_format(type='torch', columns=['input_ids'])\r\n```\r\nand simply do this?\r\n```python\r\ndataset = load_dataset(\"text\", data_files=file_path, split=\"train\")\r\nreturn dataset\r\n```","You need this part in the main script or it will use the dataset that is not tokenized\r\n\r\n","I can see that the tokenizer in `run_language_modeling.py` is not instantiated the same way as in your separated script.\r\nIndeed we can see L196:\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, cache_dir=model_args.cache_dir)\r\n```\r\nCould you try to make it so they are instantiated the exact same way please ?","I updated my separated script, but it's creating a cached file again. 
If I don't use the `model_args.cache_dir`, both will get `None`, so they should be the same.\r\n\r\n```python\r\n#!\/usr\/bin\/env python3\r\nimport argparse\r\n\r\nfrom transformers import AutoTokenizer\r\nfrom nlp import load_dataset\r\n\r\ndef main():\r\n parser = argparse.ArgumentParser(description='description')\r\n parser.add_argument('--tokenizer_name', type=str, help='Pretrained tokenizer name or path if not the same as model_name')\r\n parser.add_argument('--data_file', type=str, help='The input data file (a text file).')\r\n parser.add_argument('--cache_dir', type=str, default=None, help='Where do you want to store the pretrained models downloaded from s3')\r\n parser.add_argument('--block_size', type=int, default=-1, help='The training dataset will be truncated in block of this size for training')\r\n\r\n model_args = parser.parse_args()\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, cache_dir=model_args.cache_dir)\r\n\r\n dataset = load_dataset(\"text\", data_files=model_args.data_file, split=\"train\")\r\n dataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True,\r\n truncation=True, max_length=model_args.block_size), batched=True)\r\n dataset.set_format(type='torch', columns=['input_ids'])\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n\r\nIs there a way to specify the cache file to load, and skip the re-computation?","Could you also check that the `args.block_size` used in the lambda function is the same as well ?","Here's a minimal working example to reproduce this issue.\r\n\r\nAssumption:\r\n- You have access to TPU.\r\n- You have installed `transformers` and `nlp`.\r\n- You have tokenizer files (`config.json`, `merges.txt`, `vocab.json`) under the directory named `model_name`.\r\n- You have `xla_spawn.py` (Download from https:\/\/github.com\/huggingface\/transformers\/blob\/master\/examples\/xla_spawn.py).\r\n- You have saved the following script as `prepare_cached_dataset.py`.\r\n\r\n```python\r\n#!\/usr\/bin\/env python3\r\nimport argparse\r\nfrom transformers import AutoTokenizer\r\nfrom nlp import load_dataset\r\n\r\ndef main():\r\n parser = argparse.ArgumentParser(description='description')\r\n parser.add_argument('--tokenizer_name', type=str, help='Pretrained tokenizer name or path if not the same as model_name')\r\n parser.add_argument('--data_file', type=str, help='The input data file (a text file).')\r\n parser.add_argument('--cache_dir', type=str, default=None, help='Where do you want to store the pretrained models downloaded from s3')\r\n parser.add_argument('--block_size', type=int, default=-1, help='The training dataset will be truncated in block of this size for training')\r\n parser.add_argument('--tpu_num_cores', type=int, default=1, help='Number of TPU cores to use (1 or 8). For xla_apwan.py')\r\n model_args = parser.parse_args()\r\n \r\n tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, cache_dir=model_args.cache_dir, use_fast=True)\r\n \r\n dataset = load_dataset(\"text\", data_files=model_args.data_file, split=\"train\")\r\n dataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True,\r\n truncation=True, max_length=model_args.block_size), batched=True)\r\n dataset.set_format(type='torch', columns=['input_ids'])\r\n\r\ndef _mp_fn(index):\r\n # For xla_spawn (TPUs)\r\n main()\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n\r\n- Run the following command. 
Replace `your_training_data` with some text file.\r\n\r\n```bash\r\nexport TRAIN_DATA=your_training_data\r\n\r\npython prepare_cached_dataset.py \\\r\n--tokenizer_name=model_name \\\r\n--block_size=512 \\\r\n--data_file=$TRAIN_DATA\r\n```\r\n- Check the cached directory.\r\n```bash\r\nls -lha \/home\/*****\/.cache\/huggingface\/datasets\/text\/default-e84dd29acc4ad9ef\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\ntotal 132M\r\ndrwxr-xr-x 2 ***** ***** 4.0K Aug 28 13:08 .\r\ndrwxr-xr-x 3 ***** ***** 4.0K Aug 28 13:08 ..\r\n-rw------- 1 ***** ***** 99M Aug 28 13:08 cache-bfc7cb0702426d19242db5e8c079f04b.arrow\r\n-rw-r--r-- 1 ***** ***** 670 Aug 28 13:08 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 28 13:08 LICENSE\r\n-rw-r--r-- 1 ***** ***** 33M Aug 28 13:08 text-train.arrow\r\n```\r\n\r\n- Run the same script again. (The output should be just `Using custom data configuration default`.)\r\n```\r\npython prepare_cached_dataset.py \\\r\n--tokenizer_name=model_name \\\r\n--block_size=512 \\\r\n--data_file=$TRAIN_DATA\r\n```\r\n- Check the cached directory.\r\n```bash\r\nls -lha \/home\/*****\/.cache\/huggingface\/datasets\/text\/default-e84dd29acc4ad9ef\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\ntotal 132M\r\ndrwxr-xr-x 2 ***** ***** 4.0K Aug 28 13:08 .\r\ndrwxr-xr-x 3 ***** ***** 4.0K Aug 28 13:08 ..\r\n-rw------- 1 ***** ***** 99M Aug 28 13:08 cache-bfc7cb0702426d19242db5e8c079f04b.arrow\r\n-rw-r--r-- 1 ***** ***** 670 Aug 28 13:20 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 28 13:20 LICENSE\r\n-rw-r--r-- 1 ***** ***** 33M Aug 28 13:08 text-train.arrow\r\n```\r\n- The cached file (`cache-bfc7cb0702426d19242db5e8c079f04b.arrow`) is reused.\r\n- Now, run this script with `xla_spawn.py`. Ideally, it should reuse the cached file, however, you will see each process is creating a cache file again.\r\n\r\n```bash\r\npython xla_spawn.py --num_cores 8 \\\r\nprepare_cached_dataset.py \\\r\n--tokenizer_name=model_name \\\r\n--block_size=512 \\\r\n--data_file=$TRAIN_DATA\r\n```\r\n\r\n- Check the cached directory. There are two arrrow files.\r\n```bash\r\nls -lha \/home\/*****\/.cache\/huggingface\/datasets\/text\/default-e84dd29acc4ad9ef\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d\r\ntotal 230M\r\ndrwxr-xr-x 2 ***** ***** 4.0K Aug 28 13:25 .\r\ndrwxr-xr-x 3 ***** ***** 4.0K Aug 28 13:08 ..\r\n-rw------- 1 ***** ***** 99M Aug 28 13:08 cache-bfc7cb0702426d19242db5e8c079f04b.arrow\r\n-rw------- 1 ***** ***** 99M Aug 28 13:25 cache-e0e2313e49c8a110aafcc8133154c19a.arrow\r\n-rw-r--r-- 1 ***** ***** 670 Aug 28 13:24 dataset_info.json\r\n-rw-r--r-- 1 ***** ***** 0 Aug 28 13:24 LICENSE\r\n-rw-r--r-- 1 ***** ***** 33M Aug 28 13:08 text-train.arrow\r\n```\r\n","I ended up specifying the `cache_file_name` argument when I call `map` function.\r\n\r\n```python\r\ndataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True, truncation=True, max_length=args.block_size),\r\n batched=True,\r\n cache_file_name=cache_file_name)\r\n```\r\n\r\nNote:\r\n- `text` dataset in `nlp` does not strip `\"\\n\"`. 
If you want the same output as in [`LineByLineTextDataset`](https:\/\/github.com\/huggingface\/transformers\/blob\/afc4ece462ad83a090af620ff4da099a0272e171\/src\/transformers\/data\/datasets\/language_modeling.py#L88-L111), you would need to create your own dataset class where you replace `line` to `line.strip()` [here](https:\/\/github.com\/huggingface\/nlp\/blob\/master\/datasets\/text\/text.py#L35).\r\n"],"created_at":1598366198000,"updated_at":1598962496000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\n\r\nI'm getting a \"File exists\" error when I use [text dataset](https:\/\/github.com\/huggingface\/nlp\/tree\/master\/datasets\/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).\r\n\r\nI modified [line 131 in the original `run_language_modeling.py`](https:\/\/github.com\/huggingface\/transformers\/blob\/master\/examples\/language-modeling\/run_language_modeling.py#L131) as follows:\r\n\r\n```python\r\n# line 131: return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size)\r\ndataset = load_dataset(\"text\", data_files=file_path, split=\"train\")\r\ndataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True,\r\n truncation=True, max_length=args.block_size), batched=True)\r\ndataset.set_format(type='torch', columns=['input_ids'])\r\nreturn dataset\r\n```\r\n\r\nWhen I run this with [`xla_spawn.py`](https:\/\/github.com\/huggingface\/transformers\/blob\/master\/examples\/xla_spawn.py), I get the following error (it produces one message per core in TPU, which I believe is fine).\r\n\r\nIt seems the current version doesn't take into account distributed training processes as in [this example](https:\/\/github.com\/huggingface\/transformers\/blob\/a573777901e662ec2e565be312ffaeedef6effec\/src\/transformers\/data\/datasets\/language_modeling.py#L35-L38)?\r\n\r\n```\r\n08\/25\/2020 13:59:41 - WARNING - nlp.builder - Using custom data configuration default\r\n08\/25\/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (\/home\/*****\/.cache\/huggingface\/datasets\/text\/default-b0932b2bdbb63283\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)\r\n08\/25\/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (\/home\/*****\/.cache\/huggingface\/datasets\/text\/default-b0932b2bdbb63283\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)\r\n08\/25\/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (\/home\/*****\/.cache\/huggingface\/datasets\/text\/default-b0932b2bdbb63283\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)\r\n08\/25\/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (\/home\/*****\/.cache\/huggingface\/datasets\/text\/default-b0932b2bdbb63283\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)\r\n08\/25\/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (\/home\/*****\/.cache\/huggingface\/datasets\/text\/default-b0932b2bdbb63283\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)\r\n08\/25\/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (\/home\/*****\/.cache\/huggingface\/datasets\/text\/default-b0932b2bdbb63283\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)\r\n08\/25\/2020 13:59:43 - INFO - nlp.builder - Generating dataset text 
(\/home\/*****\/.cache\/huggingface\/datasets\/text\/default-b0932b2bdbb63283\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)\r\n08\/25\/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (\/home\/*****\/.cache\/huggingface\/datasets\/text\/default-b0932b2bdbb63283\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d)\r\nDownloading and preparing dataset text\/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/home\/*****\/.cache\/huggingface\/datasets\/text\/default-b0932b2bdbb63283\/0.0.0\/\r\n447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...\r\nDownloading and preparing dataset text\/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/home\/*****\/.cache\/huggingface\/datasets\/text\/default-b0932b2bdbb63283\/0.0.0\/\r\n447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...\r\nDownloading and preparing dataset text\/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/home\/*****\/.cache\/huggingface\/datasets\/text\/default-b0932b2bdbb63283\/0.0.0\/\r\n447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...\r\nDownloading and preparing dataset text\/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/home\/*****\/.cache\/huggingface\/datasets\/text\/default-b0932b2bdbb63283\/0.0.0\/\r\n447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...\r\nDownloading and preparing dataset text\/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/home\/*****\/.cache\/huggingface\/datasets\/text\/default-b0932b2bdbb63283\/0.0.0\/\r\n447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...\r\nDownloading and preparing dataset text\/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/home\/*****\/.cache\/huggingface\/datasets\/text\/default-b0932b2bdbb63283\/0.0.0\/\r\n447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...\r\nException in device=TPU:6: [Errno 17] File exists: '\/home\/*****\/.cache\/huggingface\/datasets\/text\/default-b0932b2bdbb63283\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'\r\nException in device=TPU:4: [Errno 17] File exists: '\/home\/*****\/.cache\/huggingface\/datasets\/text\/default-b0932b2bdbb63283\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'\r\nException in device=TPU:1: [Errno 17] File exists: '\/home\/*****\/.cache\/huggingface\/datasets\/text\/default-b0932b2bdbb63283\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'\r\nDownloading and preparing dataset text\/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/home\/*****\/.cache\/huggingface\/datasets\/text\/default-b0932b2bdbb63283\/0.0.0\/\r\n447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...\r\nException in device=TPU:7: [Errno 17] File exists: 
'\/home\/*****\/.cache\/huggingface\/datasets\/text\/default-b0932b2bdbb63283\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'\r\nException in device=TPU:3: [Errno 17] File exists: '\/home\/*****\/.cache\/huggingface\/datasets\/text\/default-b0932b2bdbb63283\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'\r\nDownloading and preparing dataset text\/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to \/home\/*****\/.cache\/huggingface\/datasets\/text\/default-b0932b2bdbb63283\/0.0.0\/\r\n447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d...\r\nException in device=TPU:2: [Errno 17] File exists: '\/home\/*****\/.cache\/huggingface\/datasets\/text\/default-b0932b2bdbb63283\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'\r\nException in device=TPU:0: [Errno 17] File exists: '\/home\/*****\/.cache\/huggingface\/datasets\/text\/default-b0932b2bdbb63283\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'\r\nTraceback (most recent call last):\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/torch_xla\/distributed\/xla_multiprocessing.py\", line 231, in _start_fn\r\n fn(gindex, *args)\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/torch_xla\/distributed\/xla_multiprocessing.py\", line 231, in _start_fn\r\n fn(gindex, *args)\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/torch_xla\/distributed\/xla_multiprocessing.py\", line 231, in _start_fn\r\n fn(gindex, *args)\r\n File \"\/home\/*****\/huggingface_roberta\/run_language_modeling.py\", line 300, in _mp_fn\r\n main()\r\n File \"\/home\/*****\/huggingface_roberta\/run_language_modeling.py\", line 300, in _mp_fn\r\n main()\r\n File \"\/home\/*****\/huggingface_roberta\/run_language_modeling.py\", line 300, in _mp_fn\r\n main()\r\n File \"\/home\/*****\/huggingface_roberta\/run_language_modeling.py\", line 240, in main\r\n train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None\r\n File \"\/home\/*****\/huggingface_roberta\/run_language_modeling.py\", line 240, in main\r\n train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None\r\n File \"\/home\/*****\/huggingface_roberta\/run_language_modeling.py\", line 240, in main\r\n train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None\r\n File \"\/home\/*****\/huggingface_roberta\/run_language_modeling.py\", line 134, in get_dataset\r\n dataset = load_dataset(\"text\", data_files=file_path, split=\"train\")\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/nlp\/load.py\", line 546, in load_dataset\r\n download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,\r\n File \"\/home\/*****\/huggingface_roberta\/run_language_modeling.py\", line 134, in get_dataset\r\n dataset = load_dataset(\"text\", data_files=file_path, split=\"train\")\r\n File \"\/home\/*****\/huggingface_roberta\/run_language_modeling.py\", line 134, in get_dataset\r\n dataset = load_dataset(\"text\", data_files=file_path, split=\"train\")\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/nlp\/builder.py\", line 450, in download_and_prepare\r\n with incomplete_dir(self._cache_dir) as tmp_data_dir:\r\nTraceback (most 
recent call last):\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/nlp\/load.py\", line 546, in load_dataset\r\n download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/contextlib.py\", line 81, in __enter__\r\n return next(self.gen)\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/nlp\/load.py\", line 546, in load_dataset\r\n download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/torch_xla\/distributed\/xla_multiprocessing.py\", line 231, in _start_fn\r\n fn(gindex, *args)\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/nlp\/builder.py\", line 450, in download_and_prepare\r\n with incomplete_dir(self._cache_dir) as tmp_data_dir:\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/nlp\/builder.py\", line 422, in incomplete_dir\r\n os.makedirs(tmp_dir)\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/nlp\/builder.py\", line 450, in download_and_prepare\r\n with incomplete_dir(self._cache_dir) as tmp_data_dir:\r\n File \"\/home\/*****\/huggingface_roberta\/run_language_modeling.py\", line 300, in _mp_fn\r\n main()\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/contextlib.py\", line 81, in __enter__\r\n return next(self.gen)\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/os.py\", line 220, in makedirs\r\n mkdir(name, mode)\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/contextlib.py\", line 81, in __enter__\r\n return next(self.gen)\r\n File \"\/home\/*****\/huggingface_roberta\/run_language_modeling.py\", line 240, in main\r\n train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/nlp\/builder.py\", line 422, in incomplete_dir\r\n os.makedirs(tmp_dir)\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/torch_xla\/distributed\/xla_multiprocessing.py\", line 231, in _start_fn\r\n fn(gindex, *args)\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/nlp\/builder.py\", line 422, in incomplete_dir\r\n os.makedirs(tmp_dir)\r\n File \"\/home\/*****\/huggingface_roberta\/run_language_modeling.py\", line 134, in get_dataset\r\n dataset = load_dataset(\"text\", data_files=file_path, split=\"train\")\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/os.py\", line 220, in makedirs\r\n mkdir(name, mode)\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/nlp\/load.py\", line 546, in load_dataset\r\n download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,\r\nFileExistsError: [Errno 17] File exists: '\/home\/*****\/.cache\/huggingface\/datasets\/text\/default-b0932b2bdbb63283\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'\r\n File \"\/home\/*****\/huggingface_roberta\/run_language_modeling.py\", line 300, in _mp_fn\r\n main()\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/nlp\/builder.py\", line 450, in download_and_prepare\r\n with incomplete_dir(self._cache_dir) as tmp_data_dir:\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/os.py\", line 220, in makedirs\r\n mkdir(name, 
mode)\r\nFileExistsError: [Errno 17] File exists: '\/home\/*****\/.cache\/huggingface\/datasets\/text\/default-b0932b2bdbb63283\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'\r\n File \"\/home\/*****\/huggingface_roberta\/run_language_modeling.py\", line 240, in main\r\n train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/contextlib.py\", line 81, in __enter__\r\n return next(self.gen)\r\nFileExistsError: [Errno 17] File exists: '\/home\/*****\/.cache\/huggingface\/datasets\/text\/default-b0932b2bdbb63283\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'\r\n File \"\/home\/*****\/huggingface_roberta\/run_language_modeling.py\", line 134, in get_dataset\r\n dataset = load_dataset(\"text\", data_files=file_path, split=\"train\")\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/nlp\/builder.py\", line 422, in incomplete_dir\r\n os.makedirs(tmp_dir)\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/nlp\/load.py\", line 546, in load_dataset\r\n download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/os.py\", line 220, in makedirs\r\n mkdir(name, mode)\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/nlp\/builder.py\", line 450, in download_and_prepare\r\n with incomplete_dir(self._cache_dir) as tmp_data_dir:\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/contextlib.py\", line 81, in __enter__\r\n return next(self.gen)\r\nFileExistsError: [Errno 17] File exists: '\/home\/*****\/.cache\/huggingface\/datasets\/text\/default-b0932b2bdbb63283\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/nlp\/builder.py\", line 422, in incomplete_dir\r\n os.makedirs(tmp_dir)\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/os.py\", line 220, in makedirs\r\n mkdir(name, mode)\r\nTraceback (most recent call last):\r\nFileExistsError: [Errno 17] File exists: '\/home\/*****\/.cache\/huggingface\/datasets\/text\/default-b0932b2bdbb63283\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'\r\nTraceback (most recent call last):\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/torch_xla\/distributed\/xla_multiprocessing.py\", line 231, in _start_fn\r\n fn(gindex, *args)\r\n File \"\/home\/*****\/huggingface_roberta\/run_language_modeling.py\", line 300, in _mp_fn\r\n main()\r\n File \"\/home\/*****\/huggingface_roberta\/run_language_modeling.py\", line 240, in main\r\n train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None\r\n File \"\/home\/*****\/huggingface_roberta\/run_language_modeling.py\", line 134, in get_dataset\r\n dataset = load_dataset(\"text\", data_files=file_path, split=\"train\")\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/torch_xla\/distributed\/xla_multiprocessing.py\", line 231, in _start_fn\r\n fn(gindex, *args)\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/nlp\/load.py\", line 546, in load_dataset\r\n download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,\r\n File 
\"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/nlp\/builder.py\", line 450, in download_and_prepare\r\n with incomplete_dir(self._cache_dir) as tmp_data_dir:\r\n File \"\/home\/*****\/huggingface_roberta\/run_language_modeling.py\", line 300, in _mp_fn\r\n main()\r\n File \"\/home\/*****\/huggingface_roberta\/run_language_modeling.py\", line 240, in main\r\n train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None\r\n File \"\/home\/*****\/huggingface_roberta\/run_language_modeling.py\", line 134, in get_dataset\r\n dataset = load_dataset(\"text\", data_files=file_path, split=\"train\")\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/nlp\/load.py\", line 546, in load_dataset\r\n download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/nlp\/builder.py\", line 450, in download_and_prepare\r\n with incomplete_dir(self._cache_dir) as tmp_data_dir:\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/contextlib.py\", line 81, in __enter__\r\n return next(self.gen)\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/site-packages\/nlp\/builder.py\", line 422, in incomplete_dir\r\n os.makedirs(tmp_dir)\r\n File \"\/anaconda3\/envs\/torch-xla-1.6\/lib\/python3.6\/os.py\", line 220, in makedirs\r\n mkdir(name, mode)\r\nFileExistsError: [Errno 17] File exists: '\/home\/*****\/.cache\/huggingface\/datasets\/text\/default-b0932b2bdbb63283\/0.0.0\/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete'\r\n```\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/532\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/531","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/531\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/531\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/531\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/531","id":685291036,"node_id":"MDExOlB1bGxSZXF1ZXN0NDczMDM4ODc4","number":531,"title":"add concatenate_datasets to the 
docs","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1598344805000,"updated_at":1598346140000,"closed_at":1598346139000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/531","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/531","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/531.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/531.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/531\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/530","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/530\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/530\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/530\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/530","id":684825612,"node_id":"MDExOlB1bGxSZXF1ZXN0NDcyNjQ5NTk2","number":530,"title":"use ragged tensor by default","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Yes I agree. Maybe something that lets specify different format depending on the column ? 
Especially to better control dtype and shape (and ragged for tf)\r\n\r\nOh and I forgot: this one should also fix the second issue found in #477 for the next release"],"created_at":1598288775000,"updated_at":1598296947000,"closed_at":1598296945000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/530","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/530","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/530.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/530.patch"},"body":"I think it's better if it's clear whether the returned tensor is ragged or not when the type is set to tensorflow.\r\nPreviously it was a tensor (not ragged) if numpy could stack the output (which can change depending on the batch of example you take), which make things difficult to handle, as it may sometimes return a ragged tensor and sometimes not.\r\n\r\nTherefore I reverted this behavior to always return a ragged tensor as we used to do.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/530\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/529","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/529\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/529\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/529\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/529","id":684797157,"node_id":"MDExOlB1bGxSZXF1ZXN0NDcyNjI2MDY4","number":529,"title":"Add MLSUM","user":{"login":"RachelKer","id":36986299,"node_id":"MDQ6VXNlcjM2OTg2Mjk5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36986299?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/RachelKer","html_url":"https:\/\/github.com\/RachelKer","followers_url":"https:\/\/api.github.com\/users\/RachelKer\/followers","following_url":"https:\/\/api.github.com\/users\/RachelKer\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/RachelKer\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/RachelKer\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/RachelKer\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/RachelKer\/orgs","repos_url":"https:\/\/api.github.com\/users\/RachelKer\/repos","events_url":"https:\/\/api.github.com\/users\/RachelKer\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/RachelKer\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Could you test to run the test using the changes in #527 and let me know if it fixes the issue ? If so I'll merge it and we'll be good to go :)","Hello, it does work on the fixing real dataset branch. 
Merci Quentin :)","Nice, glad to hear that :)\r\nde rien !"],"created_at":1598285915000,"updated_at":1598429051000,"closed_at":1598429051000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/529","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/529","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/529.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/529.patch"},"body":"Hello (again :) !), \r\n\r\nSo, I started a new branch because of a [rebase issue](https:\/\/github.com\/huggingface\/nlp\/pull\/463), sorry for the mess. \r\n\r\nHowever, the command `pytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_mlsum` still fails because there is no default language dataset : the script throws an error as a specific config language is necessary. \r\n\r\nI think that setting a default language would be a bad workaround for this so I kept it as it is. Putting all the train files across languages together would also be a bad idea because of the size. \r\n\r\nThanks for your help, \r\n\r\nRachel\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/529\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/528","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/528\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/528\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/528\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/528","id":684673673,"node_id":"MDExOlB1bGxSZXF1ZXN0NDcyNTIzNDI1","number":528,"title":"fix missing variable names in docs","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The problem came from `default: ` that is rendered differently and hides the parameter names. 
I changed `default: ...` to `defaults to ...`"],"created_at":1598275908000,"updated_at":1598346244000,"closed_at":1598346243000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/528","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/528","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/528.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/528.patch"},"body":"fix #524 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/528\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/527","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/527\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/527\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/527\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/527","id":684632930,"node_id":"MDExOlB1bGxSZXF1ZXN0NDcyNDg4MzUy","number":527,"title":"Fix config used for slow test on real dataset","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1598272774000,"updated_at":1598347245000,"closed_at":1598347244000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/527","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/527","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/527.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/527.patch"},"body":"As noticed in #470, #474, #476, #504 , the slow test `test_load_real_dataset` couldn't run on datasets that require config parameters.\r\n\r\nTo fix that I replaced it with one test with the first config of BUILDER_CONFIGS `test_load_real_dataset`, and another test that runs all of the configs in BUILDER_CONFIGS `test_load_real_dataset_all_configs`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/527\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/526","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/526\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/526\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/526\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/526","id":684615455,"node_id":"MDExOlB1bGxSZXF1ZXN0NDcyNDczNjcw","number":526,"title":"Returning None instead of \"python\" if dataset is unformatted","user":{"login":"TevenLeScao","id":26709476,"node_id":"MDQ6VXNlcjI2NzA5NDc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26709476?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/TevenLeScao","html_url":"https:\/\/github.com\/TevenLeScao","followers_url":"https:\/\/api.github.com\/users\/TevenLeScao\/followers","following_url":"https:\/\/api.github.com\/users\/TevenLeScao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/TevenLeScao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/TevenLeScao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/TevenLeScao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/TevenLeScao\/orgs","repos_url":"https:\/\/api.github.com\/users\/TevenLeScao\/repos","events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["We have to change the tests to expect `None` instead of `python` then","Merging!"],"created_at":1598271035000,"updated_at":1598273443000,"closed_at":1598273442000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/526","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/526","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/526.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/526.patch"},"body":"Following the discussion on Slack, this small fix ensures that calling `dataset.set_format(type=dataset.format[\"type\"])` works properly. 
Slightly breaking as calling `dataset.format` when the dataset is unformatted will return `None` instead of `python`.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/526\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/525","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/525\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/525\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/525\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/525","id":683875483,"node_id":"MDU6SXNzdWU2ODM4NzU0ODM=","number":525,"title":"wmt download speed example","user":{"login":"sshleifer","id":6045025,"node_id":"MDQ6VXNlcjYwNDUwMjU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6045025?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sshleifer","html_url":"https:\/\/github.com\/sshleifer","followers_url":"https:\/\/api.github.com\/users\/sshleifer\/followers","following_url":"https:\/\/api.github.com\/users\/sshleifer\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sshleifer\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sshleifer\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sshleifer\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sshleifer\/orgs","repos_url":"https:\/\/api.github.com\/users\/sshleifer\/repos","events_url":"https:\/\/api.github.com\/users\/sshleifer\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sshleifer\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for creating the issue :)\r\nThe download link for wmt-en-de raw looks like a mirror. We should use that instead of the current url.\r\nIs this mirror official ?\r\n\r\nAlso it looks like for `ro-en` it tried to download other languages. If we manage to only download the one that is asked it'd be cool\r\n\r\nAlso cc @patrickvonplaten ","Mirror is not official.","Shall we host the files ourselves or it is fine to use this mirror in your opinion ?","Should we add an argument in `load_dataset` to override some URL with a custom URL (e.g. mirror) or a local path?\r\n\r\nThis could also be used to provide local files instead of the original files as requested by some users (e.g. when you made a dataset with the same format than SQuAD and what to use it instead of the official dataset files).","@lhoestq I think we should host it ourselves. I'll put the subset of wmt (without preprocessed files) that we need on s3 and post a link over the weekend.","Is there a solution yet? The download speed is still too slow. 60-70kbps download for wmt16 and around 100kbps for wmt19. @sshleifer ","I'm working on mirror links which will provide high download speed :)\r\nSee https:\/\/github.com\/huggingface\/datasets\/issues\/1892"],"created_at":1598052546000,"updated_at":1613664967000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"Continuing from the slack 1.0 roadmap thread w @lhoestq , I realized the slow downloads is only a thing sometimes. Here are a few examples, I suspect there are multiple issues. 
All commands were run from the same gcp us-central-1f machine.\r\n\r\n```\r\nimport nlp\r\nnlp.load_dataset('wmt16', 'de-en')\r\n```\r\nDownloads at 49.1 KB\/S\r\n\r\nWhereas \r\n```\r\npip install gdown # download from google drive\r\n!gdown https:\/\/drive.google.com\/uc?id=1iO7um-HWoNoRKDtw27YUSgyeubn9uXqj\r\n```\r\nDownloads at 127 MB\/s. (The file is a copy of wmt-en-de raw).\r\n\r\n\r\n```\r\nnlp.load_dataset('wmt16', 'ro-en')\r\n```\r\ngoes at 27 MB\/s, much faster. \r\n\r\nif we wget the same data from s3 is the same download speed, but \u00bc the file size:\r\n```\r\nwget https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/translation\/wmt_en_ro_packed_200_rand.tgz\r\n```\r\n\r\nFinally,\r\n```\r\nnlp.load_dataset('wmt19', 'zh-en')\r\n```\r\nStarts fast, but broken. (duplicate of #493 )\r\n\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/525\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/524","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/524\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/524\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/524\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/524","id":683686359,"node_id":"MDU6SXNzdWU2ODM2ODYzNTk=","number":524,"title":"Some docs are missing parameter names","user":{"login":"jarednielsen","id":4564897,"node_id":"MDQ6VXNlcjQ1NjQ4OTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4564897?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jarednielsen","html_url":"https:\/\/github.com\/jarednielsen","followers_url":"https:\/\/api.github.com\/users\/jarednielsen\/followers","following_url":"https:\/\/api.github.com\/users\/jarednielsen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jarednielsen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jarednielsen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jarednielsen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jarednielsen\/orgs","repos_url":"https:\/\/api.github.com\/users\/jarednielsen\/repos","events_url":"https:\/\/api.github.com\/users\/jarednielsen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jarednielsen\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Indeed, good catch!"],"created_at":1598028454000,"updated_at":1598346243000,"closed_at":1598346243000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"See https:\/\/huggingface.co\/nlp\/master\/package_reference\/main_classes.html#nlp.Dataset.map. 
I believe this is because the parameter names are enclosed in backticks in the docstrings, maybe it's an old docstring format that doesn't work with the current Sphinx version.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/524\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/523","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/523\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/523\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/523\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/523","id":682573232,"node_id":"MDExOlB1bGxSZXF1ZXN0NDcwNzkxMjA1","number":523,"title":"Speed up Tokenization by optimizing cast_to_python_objects","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I took your comments into account and added tests for `cast_to_python_objects`"],"created_at":1597916522000,"updated_at":1598259255000,"closed_at":1598259254000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/523","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/523","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/523.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/523.patch"},"body":"I changed how `cast_to_python_objects` works to make it faster.\r\nIt is used to cast numpy\/pytorch\/tensorflow\/pandas objects to python lists, and it works recursively.\r\n\r\nTo avoid iterating over possibly long lists, it first checks if the first element that is not None has to be casted.\r\nIf the first element needs to be casted, then all the elements of the list will be casted, otherwise they'll stay the same.\r\nThis trick allows to cast objects that contain tokenizers outputs without iterating over every single token for example.\r\n\r\nSpeed improvement:\r\n\r\n\r\n```python\r\nimport transformers\r\nimport nlp\r\n\r\ntok = transformers.BertTokenizerFast.from_pretrained(\"bert-base-uncased\")\r\ntxt = [\"a \" * 512] * 1000\r\ndataset = nlp.Dataset.from_dict({\"txt\": txt})\r\n\r\n# Tokenization using .map is now faster. 
Previously it was taking 3.5s\r\n%time _ = dataset.map(lambda x: tok(x[\"txt\"]), batched=True, load_from_cache_file=False)\r\n# 450ms\r\n\r\n# for comparison\r\n%time _ = tok(txt)\r\n# 280ms\r\n\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/523\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/522","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/522\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/522\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/522\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/522","id":682478833,"node_id":"MDU6SXNzdWU2ODI0Nzg4MzM=","number":522,"title":"dictionnary typo in docs","user":{"login":"yonigottesman","id":4004127,"node_id":"MDQ6VXNlcjQwMDQxMjc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4004127?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yonigottesman","html_url":"https:\/\/github.com\/yonigottesman","followers_url":"https:\/\/api.github.com\/users\/yonigottesman\/followers","following_url":"https:\/\/api.github.com\/users\/yonigottesman\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yonigottesman\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yonigottesman\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yonigottesman\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yonigottesman\/orgs","repos_url":"https:\/\/api.github.com\/users\/yonigottesman\/repos","events_url":"https:\/\/api.github.com\/users\/yonigottesman\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yonigottesman\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks!"],"created_at":1597907465000,"updated_at":1597909934000,"closed_at":1597909933000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Many places dictionary is spelled dictionnary, not sure if its on purpose or not.\r\nFixed in this pr: \r\nhttps:\/\/github.com\/huggingface\/nlp\/pull\/521 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/522\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/521","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/521\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/521\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/521\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/521","id":682477648,"node_id":"MDExOlB1bGxSZXF1ZXN0NDcwNzEyNzgz","number":521,"title":"Fix dictionnary (dictionary) 
typo","user":{"login":"yonigottesman","id":4004127,"node_id":"MDQ6VXNlcjQwMDQxMjc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4004127?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yonigottesman","html_url":"https:\/\/github.com\/yonigottesman","followers_url":"https:\/\/api.github.com\/users\/yonigottesman\/followers","following_url":"https:\/\/api.github.com\/users\/yonigottesman\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yonigottesman\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yonigottesman\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yonigottesman\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yonigottesman\/orgs","repos_url":"https:\/\/api.github.com\/users\/yonigottesman\/repos","events_url":"https:\/\/api.github.com\/users\/yonigottesman\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yonigottesman\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hahah thanks Yonatan. It was not on purpose, we are just not very good at spelling :)"],"created_at":1597907342000,"updated_at":1597909924000,"closed_at":1597909924000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/521","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/521","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/521.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/521.patch"},"body":"This error happens many times I'm thinking maybe its spelled like this on purpose?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/521\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/520","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/520\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/520\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/520\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/520","id":682264839,"node_id":"MDExOlB1bGxSZXF1ZXN0NDcwNTI4MDE0","number":520,"title":"Transform references for 
sacrebleu","user":{"login":"jbragg","id":2238344,"node_id":"MDQ6VXNlcjIyMzgzNDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2238344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jbragg","html_url":"https:\/\/github.com\/jbragg","followers_url":"https:\/\/api.github.com\/users\/jbragg\/followers","following_url":"https:\/\/api.github.com\/users\/jbragg\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jbragg\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jbragg\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jbragg\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jbragg\/orgs","repos_url":"https:\/\/api.github.com\/users\/jbragg\/repos","events_url":"https:\/\/api.github.com\/users\/jbragg\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jbragg\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I think I agree @lhoestq so I pushed a change.\r\nThanks for your work on the library!"],"created_at":1597883215000,"updated_at":1597915854000,"closed_at":1597915853000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/520","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/520","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/520.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/520.patch"},"body":"Currently it is impossible to use sacrebleu when len(predictions) != the number of references per prediction (very uncommon), due to a strange format expected by sacrebleu. If one passes in the data to `nlp.metric.compute()` in sacrebleu format, `nlp` throws an error due to mismatching lengths between predictions and references. 
If one uses a more standard format where predictions and references are lists of the same length, sacrebleu throws an error.\r\n\r\nThis PR transforms reference data in a more standard format into the [unusual format](https:\/\/github.com\/mjpost\/sacreBLEU#using-sacrebleu-from-python) expected by sacrebleu.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/520\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/519","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/519\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/519\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/519\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/519","id":682193882,"node_id":"MDU6SXNzdWU2ODIxOTM4ODI=","number":519,"title":"[BUG] Metrics throwing new error on master since 0.4.0","user":{"login":"jbragg","id":2238344,"node_id":"MDQ6VXNlcjIyMzgzNDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2238344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jbragg","html_url":"https:\/\/github.com\/jbragg","followers_url":"https:\/\/api.github.com\/users\/jbragg\/followers","following_url":"https:\/\/api.github.com\/users\/jbragg\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jbragg\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jbragg\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jbragg\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jbragg\/orgs","repos_url":"https:\/\/api.github.com\/users\/jbragg\/repos","events_url":"https:\/\/api.github.com\/users\/jbragg\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jbragg\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Update - maybe this is only failing on bleu because I was not tokenizing inputs to the metric","Closing - seems to be just forgetting to tokenize. 
And found the helpful discussion in #137 "],"created_at":1597872555000,"updated_at":1597874680000,"closed_at":1597874680000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"The following error occurs when passing in references of type `List[List[str]]` to metrics like bleu.\r\nWasn't happening on 0.4.0 but happening now on master.\r\n\r\n```\r\n File \"\/usr\/local\/lib\/python3.7\/site-packages\/nlp\/metric.py\", line 226, in compute\r\n self.add_batch(predictions=predictions, references=references)\r\n File \"\/usr\/local\/lib\/python3.7\/site-packages\/nlp\/metric.py\", line 242, in add_batch\r\n batch = self.info.features.encode_batch(batch)\r\n File \"\/usr\/local\/lib\/python3.7\/site-packages\/nlp\/features.py\", line 527, in encode_batch\r\n encoded_batch[key] = [encode_nested_example(self[key], cast_to_python_objects(obj)) for obj in column]\r\n File \"\/usr\/local\/lib\/python3.7\/site-packages\/nlp\/features.py\", line 527, in <listcomp>\r\n encoded_batch[key] = [encode_nested_example(self[key], cast_to_python_objects(obj)) for obj in column]\r\n File \"\/usr\/local\/lib\/python3.7\/site-packages\/nlp\/features.py\", line 456, in encode_nested_example\r\n raise ValueError(\"Got a string but expected a list instead: '{}'\".format(obj))\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/519\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/518","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/518\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/518\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/518\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/518","id":682131165,"node_id":"MDExOlB1bGxSZXF1ZXN0NDcwNDE0ODE1","number":518,"title":"[METRICS, breaking] Refactor caching behavior, pickle\/cloudpickle metrics and dataset, add tests on metrics","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["(test failure is unrelated)","As discussed with @thomwolf merging since the hyperparameter-search has been merged in 
transformers."],"created_at":1597866188000,"updated_at":1598284900000,"closed_at":1598284899000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/518","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/518","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/518.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/518.patch"},"body":"Move the acquisition of the filelock at a later stage during metrics processing so it can be pickled\/cloudpickled after instantiation.\r\n\r\nAlso add some tests on pickling, concurrent but separate metric instances and concurrent and distributed metric instances.\r\n\r\nChanges significantly the caching behavior for the metrics:\r\n- if the metric is used in a non-distributed setup (most common case) we try to find a free cache file using UUID instead of asking for an `experiment_id` if we can't lock the cache file this allows to use several instances of the same metrics in parallel.\r\n- if the metrics is used in a distributed setup we ask for an `experiment_id` if we can't lock the cache file (because all the nodes need to have related cache file names for the final sync.\r\n- after the computation, we free the locks and delete all the cache files.\r\n\r\nBreaking: Some arguments for Metrics initialization have been removed for simplicity (`version`...) and some have been renamed for consistency with the rest of the library (`in_memory` => `keep_in_memory`).\r\n\r\nAlso remove the `_has_transformers` detection in utils to avoid importing transformers everytime during loading.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/518\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/517","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/517\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/517\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/517\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/517","id":681896944,"node_id":"MDU6SXNzdWU2ODE4OTY5NDQ=","number":517,"title":"add MLDoc 
dataset","user":{"login":"jxmorris12","id":13238952,"node_id":"MDQ6VXNlcjEzMjM4OTUy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13238952?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jxmorris12","html_url":"https:\/\/github.com\/jxmorris12","followers_url":"https:\/\/api.github.com\/users\/jxmorris12\/followers","following_url":"https:\/\/api.github.com\/users\/jxmorris12\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jxmorris12\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jxmorris12\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jxmorris12\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jxmorris12\/orgs","repos_url":"https:\/\/api.github.com\/users\/jxmorris12\/repos","events_url":"https:\/\/api.github.com\/users\/jxmorris12\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jxmorris12\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Any updates on this?","This request is still an open issue waiting to be addressed by any community member, @GuillemGSubies."],"created_at":1597848119000,"updated_at":1627970373000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\n\r\nI am recommending that someone add MLDoc, a multilingual news topic classification dataset.\r\n\r\n- Here's a link to the Github: https:\/\/github.com\/facebookresearch\/MLDoc\r\n- and the paper: http:\/\/www.lrec-conf.org\/proceedings\/lrec2018\/pdf\/658.pdf\r\n\r\nLooks like the dataset contains news stories in multiple languages that can be classified into four hierarchical groups: CCAT (Corporate\/Industrial), ECAT (Economics), GCAT (Government\/Social) and MCAT (Markets). 
There are 13 languages: Dutch, French, German, Chinese, Japanese, Russian, Portuguese, Spanish, Latin American Spanish, Italian, Danish, Norwegian, and Swedish","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/517\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/516","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/516\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/516\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/516\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/516","id":681846032,"node_id":"MDExOlB1bGxSZXF1ZXN0NDcwMTY5NTA0","number":516,"title":"[Breaking] Rename formated to formatted","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1597844123000,"updated_at":1597912877000,"closed_at":1597912876000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/516","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/516","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/516.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/516.patch"},"body":"`formated` is not correct but `formatted` is","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/516\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/515","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/515\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/515\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/515\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/515","id":681845619,"node_id":"MDExOlB1bGxSZXF1ZXN0NDcwMTY5MTQ0","number":515,"title":"Fix batched map for formatted 
dataset","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1597844090000,"updated_at":1597955443000,"closed_at":1597955442000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/515","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/515","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/515.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/515.patch"},"body":"If you had a dataset formatted as numpy for example, and tried to do a batched map, then it would crash because one of the elements from the inputs was missing for unchanged columns (ex: batch of length 999 instead of 1000).\r\nThe happened during the creation of the `pa.Table`, since columns had different lengths.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/515\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/514","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/514\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/514\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/514\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/514","id":681256348,"node_id":"MDU6SXNzdWU2ODEyNTYzNDg=","number":514,"title":"dataset.shuffle(keep_in_memory=True) is never 
allowed","user":{"login":"vegarab","id":24683907,"node_id":"MDQ6VXNlcjI0NjgzOTA3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24683907?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vegarab","html_url":"https:\/\/github.com\/vegarab","followers_url":"https:\/\/api.github.com\/users\/vegarab\/followers","following_url":"https:\/\/api.github.com\/users\/vegarab\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vegarab\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vegarab\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vegarab\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vegarab\/orgs","repos_url":"https:\/\/api.github.com\/users\/vegarab\/repos","events_url":"https:\/\/api.github.com\/users\/vegarab\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vegarab\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This seems to be fixed in #513 for the filter function, replacing `cache_file_name` with `indices_cache_file_name` in the assert. Although not for the `map()` function @thomwolf ","Maybe I'm a bit tired but I fail to see the issue here.\r\n\r\nSince `cache_file_name` is `None` by default, if you set `keep_in_memory` to `True`, the assert should pass, no?","I failed to realise that this only applies to `shuffle()`. Whenever `keep_in_memory` is set to True, this is passed on to the `select()` function. However, if `cache_file_name` is None, it will be defined in the `shuffle()` function before it is passed on to `select()`. \r\n\r\nThus, `select()` is called with `keep_in_memory=True` and a not None value for `cache_file_name`. \r\nThis is essentially fixed in #513 \r\n\r\nEasily reproducible:\r\n```python\r\n>>> import nlp\r\n>>> data = nlp.load_dataset(\"cosmos_qa\", split=\"train\")\r\nUsing custom data configuration default\r\n>>> data.shuffle(keep_in_memory=True)\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/home\/vegarab\/.conda\/envs\/torch\/lib\/python3.7\/site-packages\/nlp\/arrow_dataset.py\", line 1398, in shuffle\r\n verbose=verbose,\r\n File \"\/home\/vegarab\/.conda\/envs\/torch\/lib\/python3.7\/site-packages\/nlp\/arrow_dataset.py\", line 1178, in select\r\n ), \"Please use either `keep_in_memory` or `cache_file_name` but not both.\"\r\nAssertionError: Please use either `keep_in_memory` or `cache_file_name` but not both.\r\n>>>data.select([0], keep_in_memory=True)\r\n# No error\r\n```","Oh yes ok got it thanks. Should be fixed if we are happy with #513 indeed.","My bad. This is actually not fixed in #513. Sorry about that...\r\nThe new `indices_cache_file_name` is set to a non-None value in the new `shuffle()` as well. \r\n\r\nThe buffer and caching mechanisms used in the `select()` function are too intricate for me to understand why the check is there at all. I've removed it in my local build and it seems to be working fine for my project, without really considering other implications of the change. 
\r\n\r\n","Ok I'll investigate and add a series of tests on the `keep_in_memory=True` settings which is under-tested atm","Hey, still seeing this issue with the latest version."],"created_at":1597776460000,"updated_at":1627063631000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"As of commit ef4aac2, the usage of the parameter `keep_in_memory=True` is never possible: `dataset.select(keep_in_memory=True)`\r\n\r\nThe commit added the lines\r\n```python\r\n# lines 994-996 in src\/nlp\/arrow_dataset.py\r\n assert (\r\n not keep_in_memory or cache_file_name is None\r\n ), \"Please use either `keep_in_memory` or `cache_file_name` but not both.\"\r\n```\r\n\r\nThis affects both `shuffle()` as `select()` is a sub-routine, and `map()` that has the same check. \r\n\r\nI'd love to fix this myself, but unsure what the intention of the assert is given the rest of the logic in the function concerning `ccache_file_name` and `keep_in_memory`.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/514\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/513","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/513\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/513\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/513\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/513","id":681215612,"node_id":"MDExOlB1bGxSZXF1ZXN0NDY5NjQxMjg1","number":513,"title":"[speedup] Use indices mappings instead of deepcopy for all the samples reordering methods","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Ok I fixed `concatenate_datasets` and added tests\r\nFeel free to merge if it's good for you @thomwolf ","Ok, adding some benchmarks for map\/filters and then I'll merge","Warning from pytorch that we should maybe consider at some point @lhoestq:\r\n```\r\n\/__w\/nlp\/nlp\/src\/nlp\/arrow_dataset.py:648: UserWarning: The given NumPy array is not writeable,\r\nand PyTorch does not support non-writeable tensors. This means you can write to the underlying\r\n(supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to\r\nprotect its data or make it writeable before converting it to a tensor. 
This type of warning will be\r\nsuppressed for the rest of this program.\r\n(Triggered internally at \/pytorch\/torch\/csrc\/utils\/tensor_numpy.cpp:141.)\r\n532\r\n return torch.tensor(x, **format_kwargs)\r\n```","> Warning from pytorch that we should maybe consider at some point @lhoestq:\r\n> \r\n> ```\r\n> \/__w\/nlp\/nlp\/src\/nlp\/arrow_dataset.py:648: UserWarning: The given NumPy array is not writeable,\r\n> and PyTorch does not support non-writeable tensors. This means you can write to the underlying\r\n> (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to\r\n> protect its data or make it writeable before converting it to a tensor. This type of warning will be\r\n> suppressed for the rest of this program.\r\n> (Triggered internally at \/pytorch\/torch\/csrc\/utils\/tensor_numpy.cpp:141.)\r\n> 532\r\n> return torch.tensor(x, **format_kwargs)\r\n> ```\r\n\r\nNot sure why we have that, it's probably linked to zero copy from arrow to numpy"],"created_at":1597772162000,"updated_at":1598604111000,"closed_at":1598604110000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/513","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/513","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/513.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/513.patch"},"body":"Use an indices mapping instead of rewriting the dataset for all the samples re-ordering\/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`).\r\n\r\nAdded a `flatten_indices` method which copy the dataset to a new table to remove the indices mapping with tests.\r\n\r\nAll the samples re-ordering\/selection methods should be a lot faster. The downside is that iterating on very large batch of the dataset might be a little slower when we have changed the order of the samples since with in these case we use `pyarrow.Table.take` instead of `pyarrow.Table.slice`. 
There is no free lunch but the speed of iterating over the dataset is rarely the bottleneck.\r\n\r\n*Backward breaking change*: the `cache_file_name` argument in all the samples re-ordering\/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`) is now called `indices_cache_file_name` on purpose to make it explicit to the user that this caching file is used for caching the indices mapping and not the dataset itself.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/513\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/512","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/512\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/512\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/512\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/512","id":681137164,"node_id":"MDExOlB1bGxSZXF1ZXN0NDY5NTc2NzE3","number":512,"title":"Delete CONTRIBUTING.md","user":{"login":"ChenZehong13","id":56394989,"node_id":"MDQ6VXNlcjU2Mzk0OTg5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/56394989?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ChenZehong13","html_url":"https:\/\/github.com\/ChenZehong13","followers_url":"https:\/\/api.github.com\/users\/ChenZehong13\/followers","following_url":"https:\/\/api.github.com\/users\/ChenZehong13\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ChenZehong13\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ChenZehong13\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ChenZehong13\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ChenZehong13\/orgs","repos_url":"https:\/\/api.github.com\/users\/ChenZehong13\/repos","events_url":"https:\/\/api.github.com\/users\/ChenZehong13\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ChenZehong13\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["\ud83d\ude31","Yeah, this is spammy behavior. 
I've reported the user handle."],"created_at":1597764805000,"updated_at":1597765701000,"closed_at":1597765147000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/512","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/512","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/512.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/512.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/512\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/511","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/511\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/511\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/511\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/511","id":681055553,"node_id":"MDU6SXNzdWU2ODEwNTU1NTM=","number":511,"title":"dataset.shuffle() and select() resets format. Intended?","user":{"login":"vegarab","id":24683907,"node_id":"MDQ6VXNlcjI0NjgzOTA3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24683907?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vegarab","html_url":"https:\/\/github.com\/vegarab","followers_url":"https:\/\/api.github.com\/users\/vegarab\/followers","following_url":"https:\/\/api.github.com\/users\/vegarab\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vegarab\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vegarab\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vegarab\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vegarab\/orgs","repos_url":"https:\/\/api.github.com\/users\/vegarab\/repos","events_url":"https:\/\/api.github.com\/users\/vegarab\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vegarab\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @vegarab yes feel free to open a discussion here.\r\n\r\nThis design choice was not very much thought about.\r\n\r\nSince `dataset.select()` (like all the method without a trailing underscore) is non-destructive and returns a new dataset it has most of its properties initialized from scratch (except the table and infos).\r\n\r\nThinking about it I don't see a strong reason against transmitting the format from the parent dataset to its newly created child. It's probably what's expected by the user in most cases. What do you think @lhoestq?\r\n\r\nBy the way, I've been working today on a refactoring of all the samples re-ordering\/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`). The idea is to speed them up by a lot (like, really a lot) by working as much as possible with an indices mapping table instead of doing a deep copy of the full dataset as we've been doing currently. 
You can give it a look and try it here: https:\/\/github.com\/huggingface\/nlp\/pull\/513\r\nFeedbacks are very much welcome","I think it's ok to keep the format.\r\nIf we want to have this behavior for `.map` too we just have to make sure it doesn't keep a column that's been removed.","Shall we have this in the coming release by the way @lhoestq ?","Yes sure !","Since datasets 1.0.0 the format is not reset anymore.\r\nClosing this one, but feel free to re-open if you have other questions"],"created_at":1597758361000,"updated_at":1600073138000,"closed_at":1600073138000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Calling `dataset.shuffle()` or `dataset.select()` on a dataset resets its format set by `dataset.set_format()`. Is this intended or an oversight?\r\n\r\nWhen working on quite large datasets that require a lot of preprocessing I find it convenient to save the processed dataset to file using `torch.save(\"dataset.pt\")`. Later loading the dataset object using `torch.load(\"dataset.pt\")`, which conserves the defined format before saving. \r\nI do shuffling and selecting (for controlling dataset size) after loading the data from .pt-file, as it's convenient whenever you train multiple models with varying sizes of the same dataset. \r\n\r\nThe obvious workaround for this is to set the format again after using `dataset.select()` or `dataset.shuffle()`.\r\n\r\n_I guess this is more of a discussion on the design philosophy of the functions. Please let me know if this is not the right channel for these kinds of discussions or if they are not wanted at all!_\r\n\r\n#### How to reproduce:\r\n\r\n```python\r\nimport nlp\r\nfrom transformers import T5Tokenizer\r\n\r\ntokenizer = T5Tokenizer.from_pretrained(\"t5-base\")\r\ndef create_features(batch):\r\n context_encoding = tokenizer.batch_encode_plus(batch[\"context\"])\r\n return {\"input_ids\": context_encoding[\"input_ids\"]}\r\n\r\ndataset = nlp.load_dataset(\"cosmos_qa\", split=\"train\")\r\ndataset = dataset.map(create_features, batched=True)\r\ndataset.set_format(type=\"torch\", columns=[\"input_ids\"])\r\ndataset[0]\r\n# {'input_ids': tensor([ 1804, 3525, 1602, ... 0, 0])}\r\n\r\ndataset = dataset.shuffle()\r\ndataset[0]\r\n# {'id': '3Q9(...)20', 'context': \"Good Old War an (...) play ?', 'answer0': 'None of the above choices .', 'answer1': 'This person likes music and likes to see the show , they will see other bands play .', (...) 'input_ids': [1804, 3525, 1602, ... 
, 0, 0]}\r\n\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/511\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/510","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/510\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/510\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/510\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/510","id":680823644,"node_id":"MDU6SXNzdWU2ODA4MjM2NDQ=","number":510,"title":"Version of numpy to use the library","user":{"login":"isspek","id":6966175,"node_id":"MDQ6VXNlcjY5NjYxNzU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6966175?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/isspek","html_url":"https:\/\/github.com\/isspek","followers_url":"https:\/\/api.github.com\/users\/isspek\/followers","following_url":"https:\/\/api.github.com\/users\/isspek\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/isspek\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/isspek\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/isspek\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/isspek\/orgs","repos_url":"https:\/\/api.github.com\/users\/isspek\/repos","events_url":"https:\/\/api.github.com\/users\/isspek\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/isspek\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Seems like this method was added in 1.17. I'll add a requirement on this.","Thank you so much. After upgrading the numpy library, it worked."],"created_at":1597741153000,"updated_at":1597862156000,"closed_at":1597862156000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Thank you so much for your excellent work! I would like to use nlp library in my project. While importing nlp, I am receiving the following error `AttributeError: module 'numpy.random' has no attribute 'Generator'` Numpy version in my project is 1.16.0. 
May I learn which numpy version is used for the nlp library.\r\n\r\nThanks in advance.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/510\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/509","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/509\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/509\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/509\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/509","id":679711585,"node_id":"MDU6SXNzdWU2Nzk3MTE1ODU=","number":509,"title":"Converting TensorFlow dataset example","user":{"login":"saareliad","id":22762845,"node_id":"MDQ6VXNlcjIyNzYyODQ1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22762845?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/saareliad","html_url":"https:\/\/github.com\/saareliad","followers_url":"https:\/\/api.github.com\/users\/saareliad\/followers","following_url":"https:\/\/api.github.com\/users\/saareliad\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/saareliad\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/saareliad\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/saareliad\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/saareliad\/orgs","repos_url":"https:\/\/api.github.com\/users\/saareliad\/repos","events_url":"https:\/\/api.github.com\/users\/saareliad\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/saareliad\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Do you want to convert a dataset script to the tfds format ?\r\nIf so, we currently have a comversion script nlp\/commands\/convert.py but it is a conversion script that goes from tfds to nlp.\r\nI think it shouldn't be too hard to do the changes in reverse (at some manual adjustments).\r\nIf you manage to make it work in reverse, feel free to open a PR to share it with the community :)","In our docs: [Using a Dataset with PyTorch\/Tensorflow](https:\/\/huggingface.co\/docs\/datasets\/torch_tensorflow.html)."],"created_at":1597565120000,"updated_at":1627970478000,"closed_at":1627970477000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\nI want to use TensorFlow datasets with this repo, I noticed you made some conversion script,\r\ncan you give a simple example of using it?\r\n\r\nThanks\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/509\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/508","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/508\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/508\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/508\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/508","id":679705734,"node_id":"MDU6SXNzdWU2Nzk3MDU3MzQ=","number":508,"title":"TypeError: Receiver() takes 
no arguments","user":{"login":"sebastiantomac","id":1225851,"node_id":"MDQ6VXNlcjEyMjU4NTE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1225851?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sebastiantomac","html_url":"https:\/\/github.com\/sebastiantomac","followers_url":"https:\/\/api.github.com\/users\/sebastiantomac\/followers","following_url":"https:\/\/api.github.com\/users\/sebastiantomac\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sebastiantomac\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sebastiantomac\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sebastiantomac\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sebastiantomac\/orgs","repos_url":"https:\/\/api.github.com\/users\/sebastiantomac\/repos","events_url":"https:\/\/api.github.com\/users\/sebastiantomac\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sebastiantomac\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Which version of Apache Beam do you have (can you copy your full environment info here)?","apache-beam==2.23.0\r\nnlp==0.4.0\r\n\r\nFor me this was resolved by running the same python script on Linux (or really WSL). ","Do you manage to run a dummy beam pipeline with python on windows ? \r\nYou can test a dummy pipeline with [this code](https:\/\/github.com\/apache\/beam\/blob\/master\/sdks\/python\/apache_beam\/examples\/wordcount_minimal.py)\r\n\r\nIf you get the same error, it means that the issue comes from apache beam.\r\nOtherwise we'll investigate what went wrong here","Still, same error, so I guess it is on apache beam then. \r\nThanks for the investigation.","Thanks for trying\r\nLet us know if you find clues of what caused this issue, or if you find a fix"],"created_at":1597562296000,"updated_at":1598972013000,"closed_at":1598971743000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I am trying to load a wikipedia data set\r\n\r\n```\r\nimport nlp\r\nfrom nlp import load_dataset\r\n\r\ndataset = load_dataset(\"wikipedia\", \"20200501.en\", split=\"train\", cache_dir=data_path, beam_runner='DirectRunner')\r\n#dataset = load_dataset('wikipedia', '20200501.sv', cache_dir=data_path, beam_runner='DirectRunner')\r\n```\r\n\r\nThis fails in the apache beam runner. 
\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"D:\/ML\/wikiembedding\/gpt2_sv.py\", line 36, in <module>\r\n dataset = load_dataset(\"wikipedia\", \"20200501.en\", split=\"train\", cache_dir=my_cache_dir, beam_runner='DirectRunner')\r\n File \"C:\\Users\\seto\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\nlp\\load.py\", line 548, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"C:\\Users\\seto\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\nlp\\builder.py\", line 462, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"C:\\Users\\seto\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\nlp\\builder.py\", line 969, in _download_and_prepare\r\n pipeline_results = pipeline.run()\r\n File \"C:\\Users\\seto\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\apache_beam\\pipeline.py\", line 534, in run\r\n return self.runner.run_pipeline(self, self._options)\r\n....\r\n File \"C:\\Users\\seto\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\apache_beam\\runners\\worker\\bundle_processor.py\", line 218, in process_encoded\r\n self.output(decoded_value)\r\n File \"C:\\Users\\seto\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\apache_beam\\runners\\worker\\operations.py\", line 332, in output\r\n cython.cast(Receiver, self.receivers[output_index]).receive(windowed_value)\r\n File \"C:\\Users\\seto\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\Cython\\Shadow.py\", line 167, in cast\r\n return type(*args)\r\nTypeError: Receiver() takes no arguments\r\n\r\n```\r\n\r\nThis is run on a Windows 10 machine with python 3.8. I get the same error loading the swedish wikipedia dump.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/508\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/507","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/507\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/507\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/507\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/507","id":679400683,"node_id":"MDU6SXNzdWU2Nzk0MDA2ODM=","number":507,"title":"Errors when I use 
","user":{"login":"mchari","id":30506151,"node_id":"MDQ6VXNlcjMwNTA2MTUx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/30506151?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mchari","html_url":"https:\/\/github.com\/mchari","followers_url":"https:\/\/api.github.com\/users\/mchari\/followers","following_url":"https:\/\/api.github.com\/users\/mchari\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mchari\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mchari\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mchari\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mchari\/orgs","repos_url":"https:\/\/api.github.com\/users\/mchari\/repos","events_url":"https:\/\/api.github.com\/users\/mchari\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mchari\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Looks like an issue with 3.0.2 transformers version. Works fine when I use \"master\" version of transformers."],"created_at":1597439037000,"updated_at":1597441150000,"closed_at":1597441150000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I tried the following example code from https:\/\/huggingface.co\/deepset\/roberta-base-squad2 and got errors \r\nI am using **transformers 3.0.2** code .\r\n\r\n\r\nfrom transformers.pipelines import pipeline\r\nfrom transformers.modeling_auto import AutoModelForQuestionAnswering\r\nfrom transformers.tokenization_auto import AutoTokenizer\r\n\r\nmodel_name = \"deepset\/roberta-base-squad2\"\r\n\r\nnlp = pipeline('question-answering', model=model_name, tokenizer=model_name)\r\nQA_input = {\r\n 'question': 'Why is model conversion important?',\r\n 'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'\r\n}\r\nres = nlp(QA_input)\r\n\r\nThe errors are :\r\n\r\nres = nlp(QA_input)\r\n File \".local\/lib\/python3.6\/site-packages\/transformers\/pipelines.py\", line 1316, in __call__\r\n for s, e, score in zip(starts, ends, scores)\r\n File \".local\/lib\/python3.6\/site-packages\/transformers\/pipelines.py\", line 1316, in <listcomp>\r\n for s, e, score in zip(starts, ends, scores)\r\nKeyError: 0\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/507\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/506","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/506\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/506\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/506\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/506","id":679164788,"node_id":"MDExOlB1bGxSZXF1ZXN0NDY3OTkwNjc2","number":506,"title":"fix dataset.map for function without 
outputs","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1597412422000,"updated_at":1597663479000,"closed_at":1597663478000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/506","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/506","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/506.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/506.patch"},"body":"As noticed in #505 , giving a function that doesn't return anything in `.map` raises an error because of an unreferenced variable.\r\nI fixed that and added tests.\r\n\r\nThanks @avloss for reporting","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/506\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/505","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/505\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/505\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/505\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/505","id":678791400,"node_id":"MDExOlB1bGxSZXF1ZXN0NDY3NjgxMjY4","number":505,"title":"tmp_file referenced before 
assignment","user":{"login":"avloss","id":17853685,"node_id":"MDQ6VXNlcjE3ODUzNjg1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17853685?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/avloss","html_url":"https:\/\/github.com\/avloss","followers_url":"https:\/\/api.github.com\/users\/avloss\/followers","following_url":"https:\/\/api.github.com\/users\/avloss\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/avloss\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/avloss\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/avloss\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/avloss\/orgs","repos_url":"https:\/\/api.github.com\/users\/avloss\/repos","events_url":"https:\/\/api.github.com\/users\/avloss\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/avloss\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting the issue ! I'm creating a new PR to fix it and add tests.\r\n(I'm doing a new PR because I know there's some other place where it needs to be fixed)","I'm closing this one as I created the other PR."],"created_at":1597361253000,"updated_at":1597412566000,"closed_at":1597412566000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/505","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/505","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/505.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/505.patch"},"body":"Just learning about this library - so might've not set up all the flags correctly, but was getting this error about \"tmp_file\".","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/505\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/504","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/504\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/504\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/504\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/504","id":678756211,"node_id":"MDExOlB1bGxSZXF1ZXN0NDY3NjUxOTA5","number":504,"title":"Added downloading to Hyperpartisan news 
detection","user":{"login":"ghomasHudson","id":13795113,"node_id":"MDQ6VXNlcjEzNzk1MTEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13795113?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ghomasHudson","html_url":"https:\/\/github.com\/ghomasHudson","followers_url":"https:\/\/api.github.com\/users\/ghomasHudson\/followers","following_url":"https:\/\/api.github.com\/users\/ghomasHudson\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ghomasHudson\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ghomasHudson\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ghomasHudson\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ghomasHudson\/orgs","repos_url":"https:\/\/api.github.com\/users\/ghomasHudson\/repos","events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thank you @ghomasHudson for making our dataset available! This is great!","The test passes since #527 :)"],"created_at":1597355626000,"updated_at":1598516321000,"closed_at":1598516321000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/504","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/504","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/504.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/504.patch"},"body":"Following the discussion on Slack and #349, I've updated the hyperpartisan dataset to pull directly from Zenodo rather than manual install, which should make this dataset much more accessible. Many thanks to @johanneskiesel !\r\n\r\nCurrently doesn't pass `test_load_real_dataset` - I'm using `self.config.name` which is `default` in this test. Might be related to #474","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/504\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/503","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/503\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/503\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/503\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/503","id":678726538,"node_id":"MDExOlB1bGxSZXF1ZXN0NDY3NjI3MTEw","number":503,"title":"CompGuessWhat?! 
0.2.0","user":{"login":"aleSuglia","id":1479733,"node_id":"MDQ6VXNlcjE0Nzk3MzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1479733?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/aleSuglia","html_url":"https:\/\/github.com\/aleSuglia","followers_url":"https:\/\/api.github.com\/users\/aleSuglia\/followers","following_url":"https:\/\/api.github.com\/users\/aleSuglia\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/aleSuglia\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/aleSuglia\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/aleSuglia\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/aleSuglia\/orgs","repos_url":"https:\/\/api.github.com\/users\/aleSuglia\/repos","events_url":"https:\/\/api.github.com\/users\/aleSuglia\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/aleSuglia\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I don't see any significant change in the dataset script (except the version value update), can you check that again please ?","Hi @aleSuglia , can you check that all the changes you wanted to do are in the dataset script ?","Hey sorry but I'm in the middle of a conference deadline. I'll let you know asap!","Ok np :)\r\nGood luck with your work for the conference","I finally managed to find some time to complete this. The only weird thing about this release is that I had to run the tests with the ignore checksum flag. Could it be because the Dropbox link doesn't change but the file does? Sorry didn't have the time to check the code to see what's happening behind the scenes.\r\n","Yes if the file changed, then the checksum verification won't pass as it expects to see the checksum of the old file.\r\nThe checksum is computed by hashing the complete file.\r\nYou can update the checksum by doing \r\n\r\n```\r\nnlp-cli test .\/datasets\/compguesswhat --save_infos --all_configs\r\n```","Any updates on this?","Hi :)\r\n\r\nI think what's left to do is\r\n1- rebase from master, since we changed the name of the library\r\n2- update the metadata file of the dataset using the command \r\n```\r\ndatasets-cli test .\/datasets\/compguesswhat --save_infos --all_configs --ignore_verifications\r\n```\r\n\r\nThis command should update the checksum of the dropbox file","That's perfect. I'll have a look at it later today!","Nice thanks !","@lhoestq not sure why the quality check doesn't pass. Unfortunately CircleCI doesn't show the actual error. If I run `black` on my machine it works just fine. Ideas?","@lhoestq any updates? :) ","Your version of `black` might be outdated, or you run using `black` instead of `make style` since it reformatted 100+ files.\r\nCould you try to update black, then `make style` ?","Yes I think my versions of isort and black were outdated. Thanks @lhoestq :)\r\n","It still doesn't look right in terms of line-length.\r\nAre you running `black` or `make style` ?","I'm running `make style`. This is the output of the command:\r\n\r\n```\r\nblack --line-length 119 --target-version py36 tests src benchmarks datasets metrics\r\nAll done! 
\u2728 \ud83c\udf70 \u2728\r\n250 files left unchanged.\r\nisort tests src benchmarks datasets metrics\r\n```","Weird I have the same output without file changes with black `20.8b1` and isort `5.6.4` using `make style` too","I think that's because black doesn't revert the changes you first did with the old version.\r\nCould you open a new PR with only the ComGuessWhat files updated ? Hopefully now that black is up to date it should work directly (and to avoid 100+ files changes)","I will have a look at it tomorrow. Thanks for your help!","I'm closing this one and I'll open a new one."],"created_at":1597351886000,"updated_at":1603263269000,"closed_at":1603263269000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/503","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/503","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/503.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/503.patch"},"body":"We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset. ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/503\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/502","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/502\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/502\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/502\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/502","id":678546070,"node_id":"MDExOlB1bGxSZXF1ZXN0NDY3NDc1MDg0","number":502,"title":"Fix tokenizers caching","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This should fix #501 and also the issue you sent me on slack @sgugger 
."],"created_at":1597334017000,"updated_at":1597844239000,"closed_at":1597844238000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/502","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/502","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/502.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/502.patch"},"body":"I've found some cases where the caching didn't work properly for tokenizers:\r\n\r\n1. if a tokenizer has a regex pattern, then the caching would be inconsistent across sessions\r\n2. if a tokenizer has a cache attribute that changes after some calls, the the caching would not work after cache updates\r\n3. if a tokenizer is used inside a function, the caching of this function would result in the same cache file for different tokenizers\r\n4. if `unique_no_split_tokens`'s attribute is not the same across sessions (after loading a tokenizer) then the caching could be inconsistent\r\n\r\nTo fix that, this is what I did:\r\n\r\n1. register a specific `save_regex` function for pickle that makes regex dumps deterministic\r\n2. ignore cache attribute of some tokenizers before dumping\r\n3. enable recursive dump by default for all dumps\r\n4. make `unique_no_split_tokens` deterministic in https:\/\/github.com\/huggingface\/transformers\/pull\/6461\r\n\r\nI also added tests to make sure that tokenizers hashing works as expected.\r\nIn the future we should find a way to test if hashing also works across session (maybe using two CI jobs ? or by hardcoding a tokenizer's hash ?)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/502\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/501","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/501\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/501\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/501\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/501","id":677952893,"node_id":"MDU6SXNzdWU2Nzc5NTI4OTM=","number":501,"title":"Caching doesn't work for map 
(non-deterministic)","user":{"login":"wulu473","id":8149933,"node_id":"MDQ6VXNlcjgxNDk5MzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8149933?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/wulu473","html_url":"https:\/\/github.com\/wulu473","followers_url":"https:\/\/api.github.com\/users\/wulu473\/followers","following_url":"https:\/\/api.github.com\/users\/wulu473\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/wulu473\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/wulu473\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/wulu473\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/wulu473\/orgs","repos_url":"https:\/\/api.github.com\/users\/wulu473\/repos","events_url":"https:\/\/api.github.com\/users\/wulu473\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/wulu473\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for reporting !\r\n\r\nTo store the cache file, we compute a hash of the function given in `.map`, using our own hashing function.\r\nThe hash doesn't seem to stay the same over sessions for the tokenizer.\r\nApparently this is because of the regex at `tokenizer.pat` is not well supported by our hashing function.\r\n\r\nI'm working on a fix","Thanks everyone. 
Works great now."],"created_at":1597263607000,"updated_at":1598286900000,"closed_at":1598286875000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"The caching functionality doesn't work reliably when tokenizing a dataset. Here's a small example to reproduce it. \r\n\r\n```python\r\nimport nlp\r\nimport transformers\r\n\r\ndef main():\r\n ds = nlp.load_dataset(\"reddit\", split=\"train[:500]\")\r\n\r\n tokenizer = transformers.AutoTokenizer.from_pretrained(\"gpt2\")\r\n\r\n def convert_to_features(example_batch):\r\n input_str = example_batch[\"body\"]\r\n encodings = tokenizer(input_str, add_special_tokens=True, truncation=True)\r\n return encodings\r\n\r\n ds = ds.map(convert_to_features, batched=True)\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n\r\nRoughly 3\/10 times, this example recomputes the tokenization.\r\n\r\nIs this expected behaviour?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/501\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/500","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/500\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/500\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/500\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/500","id":677841708,"node_id":"MDExOlB1bGxSZXF1ZXN0NDY2ODk0NTk0","number":500,"title":"Use hnsw in wiki_dpr","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1597251487000,"updated_at":1597910359000,"closed_at":1597910358000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/500","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/500","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/500.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/500.patch"},"body":"The HNSW faiss index is much faster that regular Flat index.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/500\/timeline","performed_via_github_app":null,"is_pull_request":true} 
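PR #500 above replaces the flat FAISS index used by wiki_dpr with an HNSW index, noting that HNSW is much faster than exact flat search. As a rough, self-contained illustration of that difference (a minimal sketch on random vectors, not the actual wiki_dpr setup; the 768-dimensional embeddings and the HNSW neighbor count of 32 are assumptions for the example):

```python
import numpy as np
import faiss  # assumed installed; FAISS provides both index types discussed in #500

d = 768                                   # assumed embedding dimension
xb = np.random.rand(10_000, d).astype("float32")

# Exact flat index: query time grows linearly with the number of stored vectors.
flat_index = faiss.IndexFlatIP(d)
flat_index.add(xb)

# HNSW index: approximate graph-based search, far faster on large collections.
hnsw_index = faiss.IndexHNSWFlat(d, 32)   # 32 neighbors per graph node
hnsw_index.add(xb)

distances, ids = hnsw_index.search(xb[:5], 10)   # top-10 neighbors for 5 queries
```

The trade-off is that HNSW returns approximate neighbors, which is usually acceptable for retrieval over millions of passages.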
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/499","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/499\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/499\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/499\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/499","id":677709938,"node_id":"MDExOlB1bGxSZXF1ZXN0NDY2Nzg1MjAy","number":499,"title":"Narrativeqa (with full text)","user":{"login":"ghomasHudson","id":13795113,"node_id":"MDQ6VXNlcjEzNzk1MTEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13795113?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ghomasHudson","html_url":"https:\/\/github.com\/ghomasHudson","followers_url":"https:\/\/api.github.com\/users\/ghomasHudson\/followers","following_url":"https:\/\/api.github.com\/users\/ghomasHudson\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ghomasHudson\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ghomasHudson\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ghomasHudson\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ghomasHudson\/orgs","repos_url":"https:\/\/api.github.com\/users\/ghomasHudson\/repos","events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I took a look at the dummy data creation for this dataset.\r\n\r\nMaybe it didn't work on your side might be because `master.zip` and `narrativeqa_full_text.zip` are supposed to be directories and not acutal zip files in the dummy data folder.\r\n\r\nI managed to make it work with this `dummy_data.zip` file:\r\nhttps:\/\/drive.google.com\/file\/d\/1G9ZHAjelazNApbFI0ep2dnSAWklXgGMd\/view?usp=sharing","@lhoestq Hmmm wasn't that. Must have been something else I missed.\r\n\r\nHave committed your working version though now.","Ok thanks.\r\nCould you rebase from master to fix the CI please ?","Hi @ghomasHudson, did you get the chance to add the test split and regenerate the dataset_infos.json file ?","> Hi @ghomasHudson, did you get the chance to add the test split and regenerate the dataset_infos.json file ?\r\n\r\nHave added the test set code but getting an OverflowError when trying to regen the dataset_infos.json:\r\n\r\n---\r\nOverflowError: There was an overflow in the <class 'pyarrow.lib.StructArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB\r\n\r\n---\r\n","Thanks for reporting @ghomasHudson , I'll look into it","It looks like it's an issue with Pyarrow.\r\nBy changing the `DEFAULT_MAX_BATCH_SIZE` to 1000 instead of 10 000 in `arrow_writer.py` I was able to run the command.\r\n\r\nBasically it seems that is an Arrow StructArray has more than 1-2GB of data, then it shuffles some of its content.\r\nI can't find any issue on Apache Arrow's JIRA about this problem. It will require more investigation.\r\n\r\nMaybe we can simply automatically decrease the writer's batch size when this happens. We can just check if the arrow array is more than a certain amount of bytes. 
","@lhoestq I've finally got round to regenerating the `dataset_infos.json` for this and adding all 3 splits. I've done this and updated for the new version of datasets.\r\n\r\nThe CI tests still aren't passing though (they pass on my machine). `test_load_dataset_narrativeqa` seems to fail but I have no idea how. Would appreciate if you have any ideas - would be great to finally finish this one!","The dummy data test fails, apparently it's because no examples are yielded for the dummy data.\r\n\r\nAlso it looks like the PR now show changes in many other files than the ones for NarrativeQA, could you create another branch and another PR please ?\r\n\r\nFeel free to ping me on the new PR so we can fi the dummy data together"],"created_at":1597240183000,"updated_at":1607512862000,"closed_at":1607512862000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/499","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/499","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/499.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/499.patch"},"body":"Following the uploading of the full text data in #309, I've added the full text to the narrativeqa dataset.\r\n\r\nFew notes:\r\n- Had some encoding issues using the default `open` so am using `open(encoding=\"latin-1\"...` which seems to fix it. Looks fine.\r\n- Can't get the dummy data to work. Currently putting stuff at: \r\n ```\r\n dummy\r\n |---- 0.0.0\r\n |- dummy_data.zip\r\n |-master.zip\r\n | |- narrativeqa-master\r\n | |- documents.csv\r\n | |- qaps.csv\r\n | |- third_party ......\r\n | \r\n | - narrativeqa_full_text.zip\r\n | | - 001.content\r\n | | - ....\r\n ```\r\n Not sure what I'm messing up here (probably something obvious).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/499\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/498","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/498\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/498\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/498\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/498","id":677597479,"node_id":"MDExOlB1bGxSZXF1ZXN0NDY2Njg5NTcy","number":498,"title":"dont use beam fs to save info for local cache 
dir","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1597230000000,"updated_at":1597411041000,"closed_at":1597411040000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/498","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/498","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/498.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/498.patch"},"body":"If the cache dir is local, then we shouldn't use beam's filesystem to save the dataset info\r\n\r\nFix #490 \r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/498\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/497","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/497\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/497\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/497\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/497","id":677057116,"node_id":"MDExOlB1bGxSZXF1ZXN0NDY2MjQ2NDQ3","number":497,"title":"skip header in 
PAWS-X","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1597166785000,"updated_at":1597830602000,"closed_at":1597830601000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/497","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/497","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/497.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/497.patch"},"body":"This should fix #485 \r\n\r\nI also updated the `dataset_infos.json` file that is used to verify the integrity of the generated splits (the number of examples was reduced by one).\r\n\r\nNote that there are new fields in `dataset_infos.json` introduced in the latest release 0.4.0 corresponding to post processing info. 
I removed them in this case when I ran `nlp-cli .\/datasets\/xtreme --save_infos` to keep backward compatibility (versions 0.3.0 can't load these fields).\r\n\r\nI think I'll change the logic so that `nlp-cli test` doesn't create these fields for dataset with no post processing","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/497\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/496","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/496\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/496\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/496\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/496","id":677016998,"node_id":"MDExOlB1bGxSZXF1ZXN0NDY2MjE1Mjg1","number":496,"title":"fix bad type in overflow check","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1597163098000,"updated_at":1597411775000,"closed_at":1597411774000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/496","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/496","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/496.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/496.patch"},"body":"When writing an arrow file and inferring the features, the overflow check could fail if the first example had a `null` field.\r\nThis is because we were not using the inferred features to do this check, and we could end up with arrays that don't match because of a type mismatch (`null` vs `string` for example).\r\n\r\nThis should fix #482","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/496\/timeline","performed_via_github_app":null,"is_pull_request":true} 
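PR #496 above explains that the overflow check could fail when the first example had a null field: the arrays being compared were typed independently, so a column inferred as `null` from the first example would not match the `string` type inferred elsewhere. A minimal pyarrow sketch of that mismatch, and of resolving it by casting to the intended type (an illustration on made-up data, not the library's writer code):

```python
import pyarrow as pa

# A column whose first examples are all None is inferred as the "null" type...
first_batch = pa.array([None, None])
# ...while the same column inferred from non-null examples is "string".
later_batch = pa.array(["some text", "more text"])

print(first_batch.type)   # null
print(later_batch.type)   # string

# Casting to the intended type (here the assumed target is string) makes the
# arrays comparable again; null values are preserved through the cast.
fixed_batch = first_batch.cast(pa.string())
print(fixed_batch.type)   # string
```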
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/495","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/495\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/495\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/495\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/495","id":676959289,"node_id":"MDExOlB1bGxSZXF1ZXN0NDY2MTY5MTA3","number":495,"title":"stack vectors in pytorch and tensorflow","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1597158773000,"updated_at":1597224649000,"closed_at":1597224648000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/495","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/495","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/495.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/495.patch"},"body":"When the format of a dataset is set to pytorch or tensorflow, and if the dataset has vectors in it, they were not stacked together as tensors when calling `dataset[i:i + batch_size][column]` or `dataset[column]`.\r\n\r\nI added support for stacked tensors for both pytorch and tensorflow.\r\nFor ragged tensors, they are stacked only for tensorflow as pytorch doesn't support ragged tensors.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/495\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/494","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/494\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/494\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/494\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/494","id":676886955,"node_id":"MDExOlB1bGxSZXF1ZXN0NDY2MTExOTQz","number":494,"title":"Fix numpy 
stacking","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This PR also fixed a bug where numpy arrays were returned instead of pytorch tensors when getting with a clumn as a key."],"created_at":1597153230000,"updated_at":1597157810000,"closed_at":1597153792000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/494","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/494","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/494.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/494.patch"},"body":"When getting items using a column name as a key, numpy arrays were not stacked.\r\nI fixed that and added some tests.\r\n\r\nThere is another issue that still needs to be fixed though: when getting items using a column name as a key, pytorch tensors are not stacked (it outputs a list of tensors). 
This PR should help with the to fix this issue.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/494\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/493","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/493\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/493\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/493\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/493","id":676527351,"node_id":"MDExOlB1bGxSZXF1ZXN0NDY1ODIxOTA0","number":493,"title":"Fix wmt zh-en url","user":{"login":"sshleifer","id":6045025,"node_id":"MDQ6VXNlcjYwNDUwMjU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6045025?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sshleifer","html_url":"https:\/\/github.com\/sshleifer","followers_url":"https:\/\/api.github.com\/users\/sshleifer\/followers","following_url":"https:\/\/api.github.com\/users\/sshleifer\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sshleifer\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sshleifer\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sshleifer\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sshleifer\/orgs","repos_url":"https:\/\/api.github.com\/users\/sshleifer\/repos","events_url":"https:\/\/api.github.com\/users\/sshleifer\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sshleifer\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["this doesn't work. 
I can decompress the file after download locally."],"created_at":1597112092000,"updated_at":1597112548000,"closed_at":1597112532000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/493","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/493","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/493.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/493.patch"},"body":"I verified that\r\n```\r\nwget https:\/\/stuncorpusprod.blob.core.windows.net\/corpusfiles\/UNv1.0.en-zh.tar.gz.00\r\n```\r\nruns in 2 minutes.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/493\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/492","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/492\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/492\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/492\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/492","id":676495064,"node_id":"MDU6SXNzdWU2NzY0OTUwNjQ=","number":492,"title":"nlp.Features does not distinguish between nullable and non-nullable types in PyArrow schema","user":{"login":"jarednielsen","id":4564897,"node_id":"MDQ6VXNlcjQ1NjQ4OTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4564897?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jarednielsen","html_url":"https:\/\/github.com\/jarednielsen","followers_url":"https:\/\/api.github.com\/users\/jarednielsen\/followers","following_url":"https:\/\/api.github.com\/users\/jarednielsen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jarednielsen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jarednielsen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jarednielsen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jarednielsen\/orgs","repos_url":"https:\/\/api.github.com\/users\/jarednielsen\/repos","events_url":"https:\/\/api.github.com\/users\/jarednielsen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jarednielsen\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["In 0.4.0, the assertion in `concatenate_datasets ` is on the features, and not the schema.\r\nCould you try to update `nlp` ?\r\n\r\nAlso, since 0.4.0, you can use `dset_wikipedia.cast_(dset_books.features)` to avoid the schema cast hack.","Or maybe the assertion comes from elsewhere ?","I'm using the master branch. The assertion failure comes from the underlying `pa.concat_tables()`, which is in the pyarrow package. That method does check schemas.\r\n\r\nSince `features.type` does not contain information about nullable vs non-nullable features, the `cast_()` method won't resolve the schema mismatch. There is information in a schema which is not stored in features.","I'm doing a refactor of type inference in #363 . Both text fields should match after that","By default nullable will be set to True","It should be good now. 
I was able to run\r\n\r\n```python\r\n>>> from nlp import concatenate_datasets, load_dataset\r\n>>>\r\n>>> bookcorpus = load_dataset(\"bookcorpus\", split=\"train\")\r\n>>> wiki = load_dataset(\"wikipedia\", \"20200501.en\", split=\"train\")\r\n>>> wiki.remove_columns_(\"title\") # only keep the text\r\n>>>\r\n>>> assert bookcorpus.features.type == wiki.features.type\r\n>>> bert_dataset = concatenate_datasets([bookcorpus, wiki])\r\n```","Thanks!"],"created_at":1597105666000,"updated_at":1598458639000,"closed_at":1598458639000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Here's the code I'm trying to run:\r\n\r\n```python\r\ndset_wikipedia = nlp.load_dataset(\"wikipedia\", \"20200501.en\", split=\"train\", cache_dir=args.cache_dir)\r\ndset_wikipedia.drop(columns=[\"title\"])\r\ndset_wikipedia.features.pop(\"title\")\r\ndset_books = nlp.load_dataset(\"bookcorpus\", split=\"train\", cache_dir=args.cache_dir)\r\ndset = nlp.concatenate_datasets([dset_wikipedia, dset_books])\r\n```\r\n\r\nThis fails because they have different schemas, despite having identical features.\r\n\r\n```python\r\nassert dset_wikipedia.features == dset_books.features # True\r\nassert dset_wikipedia._data.schema == dset_books._data.schema # False\r\n```\r\n\r\nThe Wikipedia dataset has 'text: string', while the BookCorpus dataset has 'text: string not null'. Currently I hack together a working schema match with the following line, but it would be better if this was handled in Features themselves.\r\n\r\n```python\r\ndset_wikipedia._data = dset_wikipedia.data.cast(dset_books._data.schema)\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/492\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/491","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/491\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/491\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/491\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/491","id":676486275,"node_id":"MDU6SXNzdWU2NzY0ODYyNzU=","number":491,"title":"No 0.4.0 release on GitHub","user":{"login":"jarednielsen","id":4564897,"node_id":"MDQ6VXNlcjQ1NjQ4OTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4564897?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jarednielsen","html_url":"https:\/\/github.com\/jarednielsen","followers_url":"https:\/\/api.github.com\/users\/jarednielsen\/followers","following_url":"https:\/\/api.github.com\/users\/jarednielsen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jarednielsen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jarednielsen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jarednielsen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jarednielsen\/orgs","repos_url":"https:\/\/api.github.com\/users\/jarednielsen\/repos","events_url":"https:\/\/api.github.com\/users\/jarednielsen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jarednielsen\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I did the release on github, 
and updated the doc :)\r\nSorry for the delay","Thanks!"],"created_at":1597103997000,"updated_at":1597164607000,"closed_at":1597164607000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"0.4.0 was released on PyPi, but not on GitHub. This means [the documentation](https:\/\/huggingface.co\/nlp\/) is still displaying from 0.3.0, and that there's no tag to easily clone the 0.4.0 version of the repo.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/491\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/490","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/490\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/490\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/490\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/490","id":676482242,"node_id":"MDU6SXNzdWU2NzY0ODIyNDI=","number":490,"title":"Loading preprocessed Wikipedia dataset requires apache_beam","user":{"login":"jarednielsen","id":4564897,"node_id":"MDQ6VXNlcjQ1NjQ4OTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4564897?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jarednielsen","html_url":"https:\/\/github.com\/jarednielsen","followers_url":"https:\/\/api.github.com\/users\/jarednielsen\/followers","following_url":"https:\/\/api.github.com\/users\/jarednielsen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jarednielsen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jarednielsen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jarednielsen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jarednielsen\/orgs","repos_url":"https:\/\/api.github.com\/users\/jarednielsen\/repos","events_url":"https:\/\/api.github.com\/users\/jarednielsen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jarednielsen\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1597103210000,"updated_at":1597411040000,"closed_at":1597411040000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Running \r\n\r\n`nlp.load_dataset(\"wikipedia\", \"20200501.en\", split=\"train\", dir=\"\/tmp\/wikipedia\")`\r\n\r\ngives an error if apache_beam is not installed, stemming from\r\n\r\nhttps:\/\/github.com\/huggingface\/nlp\/blob\/38eb2413de54ee804b0be81781bd65ac4a748ced\/src\/nlp\/builder.py#L981-L988\r\n\r\nThis succeeded without the dependency in version 0.3.0. This seems like an unnecessary dependency to process some dataset info if you're using the already-preprocessed version. 
Could it be removed?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/490\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/489","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/489\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/489\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/489\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/489","id":676456257,"node_id":"MDU6SXNzdWU2NzY0NTYyNTc=","number":489,"title":"ug","user":{"login":"timothyjlaurent","id":2000204,"node_id":"MDQ6VXNlcjIwMDAyMDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2000204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/timothyjlaurent","html_url":"https:\/\/github.com\/timothyjlaurent","followers_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/followers","following_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/orgs","repos_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/repos","events_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["whoops","please delete this"],"created_at":1597098783000,"updated_at":1597100114000,"closed_at":1597098820000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/489\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/488","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/488\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/488\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/488\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/488","id":676299993,"node_id":"MDU6SXNzdWU2NzYyOTk5OTM=","number":488,"title":"issues with downloading datasets for wmt16 and 
wmt19","user":{"login":"stas00","id":10676103,"node_id":"MDQ6VXNlcjEwNjc2MTAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10676103?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stas00","html_url":"https:\/\/github.com\/stas00","followers_url":"https:\/\/api.github.com\/users\/stas00\/followers","following_url":"https:\/\/api.github.com\/users\/stas00\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stas00\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stas00\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stas00\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stas00\/orgs","repos_url":"https:\/\/api.github.com\/users\/stas00\/repos","events_url":"https:\/\/api.github.com\/users\/stas00\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stas00\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I found `UNv1.0.en-ru.tar.gz` here: https:\/\/conferences.unite.un.org\/uncorpus\/en\/downloadoverview, so it can be reconstructed with:\r\n```\r\nwget -c https:\/\/stuncorpusprod.blob.core.windows.net\/corpusfiles\/UNv1.0.en-ru.tar.gz.00\r\nwget -c https:\/\/stuncorpusprod.blob.core.windows.net\/corpusfiles\/UNv1.0.en-ru.tar.gz.01\r\nwget -c https:\/\/stuncorpusprod.blob.core.windows.net\/corpusfiles\/UNv1.0.en-ru.tar.gz.02\r\ncat UNv1.0.en-ru.tar.gz.0* > UNv1.0.en-ru.tar.gz\r\n```\r\nit has other languages as well, in case https:\/\/storage.googleapis.com\/tfdataset-data\/downloadataset\/uncorpus\/ is gone","Further, `nlp.load_dataset('wmt19', 'ru-en')` has only the `train` and `val` datasets. `test` is missing.\r\n\r\nFixed locally for summarization needs, by running:\r\n```\r\npip install sacrebleu\r\nsacrebleu -t wmt19 -l ru-en --echo src > test.source\r\nsacrebleu -t wmt19 -l ru-en --echo ref > test.target\r\n```\r\nh\/t @sshleifer "],"created_at":1597080771000,"updated_at":1597122454000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I have encountered multiple issues while trying to:\r\n```\r\nimport nlp\r\ndataset = nlp.load_dataset('wmt16', 'ru-en')\r\nmetric = nlp.load_metric('wmt16')\r\n```\r\n1. I had to do `pip install -e \".[dev]\" ` on master, currently released nlp didn't work (sorry, didn't save the error) - I went back to the released version and now it worked. So it must have been some outdated dependencies that `pip install -e \".[dev]\" ` fixed.\r\n\r\n2. it was downloading at 60kbs - almost 5 hours to get the dataset. It was downloading all pairs and not just the one I asked for. \r\n\r\nI tried the same code with `wmt19` in parallel and it took a few secs to download and it only fetched data for the requested pair. (but it failed too, see below)\r\n\r\n3. 
my machine has crushed and when I retried I got:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \".\/download.py\", line 9, in <module>\r\n dataset = nlp.load_dataset('wmt16', 'ru-en')\r\n File \"\/mnt\/nvme1\/code\/huggingface\/nlp-master\/src\/nlp\/load.py\", line 549, in load_dataset\r\n download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,\r\n File \"\/mnt\/nvme1\/code\/huggingface\/nlp-master\/src\/nlp\/builder.py\", line 449, in download_and_prepare\r\n with incomplete_dir(self._cache_dir) as tmp_data_dir:\r\n File \"\/home\/stas\/anaconda3\/envs\/main\/lib\/python3.7\/contextlib.py\", line 112, in __enter__\r\n return next(self.gen)\r\n File \"\/mnt\/nvme1\/code\/huggingface\/nlp-master\/src\/nlp\/builder.py\", line 422, in incomplete_dir\r\n os.makedirs(tmp_dir)\r\n File \"\/home\/stas\/anaconda3\/envs\/main\/lib\/python3.7\/os.py\", line 221, in makedirs\r\n mkdir(name, mode)\r\nFileExistsError: [Errno 17] File exists: '\/home\/stas\/.cache\/huggingface\/datasets\/wmt16\/ru-en\/1.0.0\/4d8269cdd971ed26984a9c0e4a158e0c7afc8135fac8fb8ee43ceecf38fd422d.incomplete'\r\n```\r\nit can't handle resumes. but neither allows a new start. Had to delete it manually.\r\n\r\n4. and finally when it downloaded the dataset, it then failed to fetch the metrics:\r\n```\r\nTraceback (most recent call last):\r\n File \".\/download.py\", line 15, in <module>\r\n metric = nlp.load_metric('wmt16')\r\n File \"\/mnt\/nvme1\/code\/huggingface\/nlp-master\/src\/nlp\/load.py\", line 442, in load_metric\r\n module_path, hash = prepare_module(path, download_config=download_config, dataset=False)\r\n File \"\/mnt\/nvme1\/code\/huggingface\/nlp-master\/src\/nlp\/load.py\", line 258, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n File \"\/mnt\/nvme1\/code\/huggingface\/nlp-master\/src\/nlp\/utils\/file_utils.py\", line 198, in cached_path\r\n local_files_only=download_config.local_files_only,\r\n File \"\/mnt\/nvme1\/code\/huggingface\/nlp-master\/src\/nlp\/utils\/file_utils.py\", line 356, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/metrics\/wmt16\/wmt16.py\r\n```\r\n\r\n5. 
If I run the same code with `wmt19`, it fails too:\r\n\r\n```\r\nConnectionError: Couldn't reach https:\/\/storage.googleapis.com\/tfdataset-data\/downloadataset\/uncorpus\/UNv1.0.en-ru.tar.gz\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/488\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/487","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/487\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/487\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/487\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/487","id":676143029,"node_id":"MDExOlB1bGxSZXF1ZXN0NDY1NTA1NjQy","number":487,"title":"Fix elasticsearch result ids returning as strings","user":{"login":"sai-prasanna","id":3595526,"node_id":"MDQ6VXNlcjM1OTU1MjY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3595526?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sai-prasanna","html_url":"https:\/\/github.com\/sai-prasanna","followers_url":"https:\/\/api.github.com\/users\/sai-prasanna\/followers","following_url":"https:\/\/api.github.com\/users\/sai-prasanna\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sai-prasanna\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sai-prasanna\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sai-prasanna\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sai-prasanna\/orgs","repos_url":"https:\/\/api.github.com\/users\/sai-prasanna\/repos","events_url":"https:\/\/api.github.com\/users\/sai-prasanna\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sai-prasanna\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["It looks like you need to rebase from master to fix the CI. Could you do that please ?"],"created_at":1597066631000,"updated_at":1598870566000,"closed_at":1598870566000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/487","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/487","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/487.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/487.patch"},"body":"I am using the latest elasticsearch binary and master of nlp. 
For me elasticsearch searches failed because the resultant \"id_\" returned for searches are strings, but our library assumes them to be integers.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/487\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/486","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/486\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/486\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/486\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/486","id":675649034,"node_id":"MDU6SXNzdWU2NzU2NDkwMzQ=","number":486,"title":"Bookcorpus data contains pretokenized text","user":{"login":"orsharir","id":99543,"node_id":"MDQ6VXNlcjk5NTQz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/99543?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/orsharir","html_url":"https:\/\/github.com\/orsharir","followers_url":"https:\/\/api.github.com\/users\/orsharir\/followers","following_url":"https:\/\/api.github.com\/users\/orsharir\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/orsharir\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/orsharir\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/orsharir\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/orsharir\/orgs","repos_url":"https:\/\/api.github.com\/users\/orsharir\/repos","events_url":"https:\/\/api.github.com\/users\/orsharir\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/orsharir\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Yes indeed it looks like some `'` and spaces are missing (for example in `dont` or `didnt`).\r\nDo you know if there exist some copies without this issue ?\r\nHow would you fix this issue on the current data exactly ? I can see that the data is raw text (not tokenized) so I'm not sure I understand how you would do it. Could you provide more details ?","I'm afraid that I don't know how to obtain the original BookCorpus data. I believe this version came from an anonymous Google Drive link posted in another issue.\r\n\r\nGoing through the raw text in this version, it's apparent that NLTK's TreebankWordTokenizer was applied on it (I gave some examples in my original post), followed by:\r\n`' '.join(tokens)`\r\nYou can retrieve the tokenization by splitting on whitespace. You can then \"detokenize\" it with TreebankWordDetokenizer class of NLTK (though, as I suggested, use the fixed version in my repo). This will bring the text closer to its original form, but some steps of TreebankWordTokenizer are destructive, so it wouldn't be one-to-one. 
Something along the lines of the following should work:\r\n```\r\ntreebank_detokenizer = nltk.tokenize.treebank.TreebankWordDetokenizer()\r\ndb = nlp.load_dataset('bookcorpus', split=nlp.Split.TRAIN)\r\ndb = db.map(lambda x: treebank_detokenizer.detokenize(x['text'].split()))\r\n```\r\n\r\nRegarding other issues beyond the above, I'm afraid that I can't help with that.","Ok I get it, that would be very cool indeed\r\n\r\nWhat kinds of patterns the detokenizer can't retrieve ?","The TreebankTokenizer makes some assumptions about whitespace, parentheses, quotation marks, etc. For instance, while tokenizing the following text:\r\n```\r\nDwayne \"The Rock\" Johnson\r\n```\r\nwill result in:\r\n```\r\nDwayne `` The Rock '' Johnson\r\n```\r\nwhere the left and right quotation marks are turned into distinct symbols. Upon reconstruction, we can attach the left part to its token on the right, and respectively for the right part. However, the following texts would be tokenized exactly the same:\r\n```\r\nDwayne \" The Rock \" Johnson\r\nDwayne \" The Rock\" Johnson\r\nDwayne \" The Rock\" Johnson\r\n...\r\n```\r\nIn the above examples, the detokenizer would correct these inputs into the canonical text\r\n```\r\nDwayne \"The Rock\" Johnson\r\n```\r\nHowever, there are cases where there the solution cannot easily be inferred (at least without a true LM - this tokenizer is just a bunch of regexes). For instance, in cases where you have a fragment that contains the end of quote, but not its beginning, plus an accidental space:\r\n```\r\n... and it sounds fantastic, \" he said.\r\n```\r\nIn the above case, the tokenizer would assume that the quotes refer to the next token, and so upon detokenization it will result in the following mistake:\r\n```\r\n... and it sounds fantastic, \"he said.\r\n```\r\n\r\nWhile these are all odd edge cases (the basic assumptions do make sense), in noisy data they can occur, which is why I mentioned that the detokenizer cannot restore the original perfectly.\r\n","To confirm, since this is preprocessed, this was not the exact version of the Book Corpus used to actually train the models described here (particularly Distilbert)? https:\/\/huggingface.co\/datasets\/bookcorpus\r\n\r\nOr does this preprocessing exactly match that of the papers?","I believe these are just artifacts of this particular source. It might be better to crawl it again, or use another preprocessed source, as found here: https:\/\/github.com\/soskek\/bookcorpus ","Yes actually the BookCorpus on hugginface is based on [this](https:\/\/github.com\/soskek\/bookcorpus\/issues\/24#issuecomment-643933352). And I kind of regret naming it as \"BookCorpus\" instead of something like \"BookCorpusLike\".\r\n\r\nBut there is a good news ! @shawwn has replicated BookCorpus in his way, and also provided a link to download the plain text files. see [here](https:\/\/github.com\/soskek\/bookcorpus\/issues\/27). There is chance we can have a \"OpenBookCorpus\" !"],"created_at":1596956004000,"updated_at":1601467264000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"It seem that the bookcoprus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in incompatible ways to how, for instance, BERT's wordpiece tokenizer works. 
For example, \"didn't\" becomes \"did\" + \"n't\", and double quotes are changed to `` and '' for start and end quotes, respectively.\r\n\r\nOn my own projects, I just run the data through NLTK's TreebankWordDetokenizer to reverse the tokenization (as best as possible). I think it would be beneficial to apply this transformation directly on your remote cached copy of the dataset. If you choose to do so, I would also suggest to use my fork of NLTK that fixes several bugs in their detokenizer (I've opened a pull-request, but they've yet to respond): https:\/\/github.com\/nltk\/nltk\/pull\/2575","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/486\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/485","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/485\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/485\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/485\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/485","id":675595393,"node_id":"MDU6SXNzdWU2NzU1OTUzOTM=","number":485,"title":"PAWS dataset first item is header","user":{"login":"jxmorris12","id":13238952,"node_id":"MDQ6VXNlcjEzMjM4OTUy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13238952?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jxmorris12","html_url":"https:\/\/github.com\/jxmorris12","followers_url":"https:\/\/api.github.com\/users\/jxmorris12\/followers","following_url":"https:\/\/api.github.com\/users\/jxmorris12\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jxmorris12\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jxmorris12\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jxmorris12\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jxmorris12\/orgs","repos_url":"https:\/\/api.github.com\/users\/jxmorris12\/repos","events_url":"https:\/\/api.github.com\/users\/jxmorris12\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jxmorris12\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1596924325000,"updated_at":1597830601000,"closed_at":1597830601000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"```\r\nimport nlp\r\ndataset = nlp.load_dataset('xtreme', 'PAWS-X.en')\r\ndataset['test'][0]\r\n```\r\n\r\nprints the following\r\n\r\n```\r\n{'label': 'label', 'sentence1': 'sentence1', 'sentence2': 'sentence2'}\r\n```\r\n\r\ndataset['test'][0] should probably be the first item in the dataset, not just a dictionary mapping the column names to themselves. 
Probably just need to ignore the first row in the dataset by default or something like that.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/485\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/484","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/484\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/484\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/484\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/484","id":675088983,"node_id":"MDExOlB1bGxSZXF1ZXN0NDY0NjY1NTU4","number":484,"title":"update mirror for RT dataset","user":{"login":"jxmorris12","id":13238952,"node_id":"MDQ6VXNlcjEzMjM4OTUy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13238952?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jxmorris12","html_url":"https:\/\/github.com\/jxmorris12","followers_url":"https:\/\/api.github.com\/users\/jxmorris12\/followers","following_url":"https:\/\/api.github.com\/users\/jxmorris12\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jxmorris12\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jxmorris12\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jxmorris12\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jxmorris12\/orgs","repos_url":"https:\/\/api.github.com\/users\/jxmorris12\/repos","events_url":"https:\/\/api.github.com\/users\/jxmorris12\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jxmorris12\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for adding this mirror link :)\r\n\r\nCould you run the following command to update the json file `dataset_infos.json` used to verify the integrity of the downloaded file ?\r\n\r\n```\r\nnlp-cli test .\/datasets\/rotten_tomatoes --save_infos --ignore_verifications\r\n```","done! 
@lhoestq ","the build_doc CI fail comes from master and has been fixed on master","done @thomwolf @lhoestq "],"created_at":1596813945000,"updated_at":1598276017000,"closed_at":1598276017000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/484","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/484","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/484.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/484.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/484\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/483","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/483\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/483\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/483\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/483","id":675080694,"node_id":"MDU6SXNzdWU2NzUwODA2OTQ=","number":483,"title":"rotten tomatoes movie review dataset taken down","user":{"login":"jxmorris12","id":13238952,"node_id":"MDQ6VXNlcjEzMjM4OTUy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13238952?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jxmorris12","html_url":"https:\/\/github.com\/jxmorris12","followers_url":"https:\/\/api.github.com\/users\/jxmorris12\/followers","following_url":"https:\/\/api.github.com\/users\/jxmorris12\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jxmorris12\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jxmorris12\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jxmorris12\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jxmorris12\/orgs","repos_url":"https:\/\/api.github.com\/users\/jxmorris12\/repos","events_url":"https:\/\/api.github.com\/users\/jxmorris12\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jxmorris12\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["found a mirror: https:\/\/storage.googleapis.com\/seldon-datasets\/sentence_polarity_v1\/rt-polaritydata.tar.gz","fixed in #484 ","Closing this one. Thanks again @jxmorris12 for taking care of this :)"],"created_at":1596813121000,"updated_at":1599557794000,"closed_at":1599557793000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"In an interesting twist of events, the individual who created the movie review seems to have left Cornell, and their webpage has been removed, along with the movie review dataset (http:\/\/www.cs.cornell.edu\/people\/pabo\/movie-review-data\/rt-polaritydata.tar.gz). 
It's not downloadable anymore.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/483\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/482","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/482\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/482\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/482\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/482","id":674851147,"node_id":"MDU6SXNzdWU2NzQ4NTExNDc=","number":482,"title":"Bugs : dataset.map() is frozen on ELI5","user":{"login":"ratthachat","id":56621342,"node_id":"MDQ6VXNlcjU2NjIxMzQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/56621342?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ratthachat","html_url":"https:\/\/github.com\/ratthachat","followers_url":"https:\/\/api.github.com\/users\/ratthachat\/followers","following_url":"https:\/\/api.github.com\/users\/ratthachat\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ratthachat\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ratthachat\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ratthachat\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ratthachat\/orgs","repos_url":"https:\/\/api.github.com\/users\/ratthachat\/repos","events_url":"https:\/\/api.github.com\/users\/ratthachat\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ratthachat\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\
/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["This comes from an overflow in pyarrow's array.\r\nIt is stuck inside the loop that reduces the batch size to avoid the overflow.\r\nI'll take a look","I created a PR to fix the issue.\r\nIt was due to an overflow check that handled badly an empty list.\r\n\r\nYou can try the changes by using \r\n```\r\n!pip install git+https:\/\/github.com\/huggingface\/nlp.git@fix-bad-type-in-overflow-check\r\n```\r\n\r\nAlso I noticed that the first 1000 examples have an empty list in the `title_urls` field. The feature type inference in `.map` will consider it `null` because of that, and it will crash when it encounter the next example with a `title_urls` that is not empty.\r\n\r\nTherefore to fix that, what you can do for now is increase the writer batch size so that the feature inference will take into account at least one example with a non-empty `title_urls`:\r\n\r\n```python\r\n# default batch size is 1_000 and it's not enough for feature type inference because of empty lists\r\nvalid_dataset = valid_dataset.map(make_input_target, writer_batch_size=3_000) \r\n```\r\n\r\nI was able to run the frozen cell with these changes.","@lhoestq Perfect and thank you very much!!\r\nClose the issue.","@lhoestq mapping the function `make_input_target` was passed by your fixing.\r\n\r\nHowever, there is another error in the final step of `valid_dataset.map(convert_to_features, batched=True)`\r\n\r\n`ArrowInvalid: Could not convert Thepiratebay.vg with type str: converting to null type`\r\n(The [same colab notebook above with new error message](https:\/\/colab.research.google.com\/drive\/14wttOTv3ky74B_c0kv5WrbgQjCF2fYQk?usp=sharing#scrollTo=5sRrJ3_C8rLt))\r\n\r\nDo you have some ideas? (I am really sorry I could not debug it by myself since I never used `pyarrow` before) \r\nNote that `train_dataset.map(convert_to_features, batched=True)` can be run successfully even though train_dataset is 27x bigger than `valid_dataset` so I believe the problem lies in some field of `valid_dataset` again .","I got this issue too and fixed it by specifying `writer_batch_size=3_000` in `.map`.\r\nThis is because Arrow didn't expect `Thepiratebay.vg` in `title_urls `, as all previous examples have empty lists in `title_urls `","I am clear now . Thank so much again Quentin!"],"created_at":1596788615000,"updated_at":1597241626000,"closed_at":1597190115000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi Huggingface Team!\r\n\r\nThank you guys once again for this amazing repo.\r\n\r\nI have tried to prepare ELI5 to train with T5, based on [this wonderful notebook of Suraj Patil](https:\/\/github.com\/patil-suraj\/exploring-T5\/blob\/master\/T5_on_TPU.ipynb) \r\n\r\nHowever, when I run `dataset.map()` on ELI5 to prepare `input_text, target_text`, `dataset.map` is **frozen** in the first hundreds examples. On the contrary, this works totally fine on SQUAD (80,000 examples). Both `nlp` version 0.3.0 and 0.4.0 cause frozen process . 
Also try various `pyarrow` versions from 0.16.0 \/ 0.17.0 \/ 1.0.0 also have the same frozen process.\r\n\r\nReproducible code can be found on [this colab notebook ](https:\/\/colab.research.google.com\/drive\/14wttOTv3ky74B_c0kv5WrbgQjCF2fYQk?usp=sharing), where I also show that the same mapping function works fine on SQUAD, so the problem is likely due to ELI5 somehow.\r\n\r\n----------------------------------------\r\n**More Info :** instead of `map`, if I run `for` loop and apply function by myself, there's no error and can finish within 10 seconds. However, `nlp dataset` is immutable (I couldn't manually assign a new key-value to `dataset `object)\r\n\r\nI also notice that SQUAD texts are quite clean while ELI5 texts contain many special characters, not sure if this is the cause ?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/482\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/481","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/481\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/481\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/481\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/481","id":674567389,"node_id":"MDExOlB1bGxSZXF1ZXN0NDY0MjM2MTA1","number":481,"title":"Apply utf-8 encoding to all datasets","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Not sure why the AWS test is failing - perhaps I made too many concurrent CI builds \ud83d\ude22. Can someone please rerun the CI to check the error is not on my end?","I pushed an improved docstring and the unit tests now pass, which suggests the previous failure on AWS was simply a timeout error. \r\n\r\nFor some reason the docs are now failing to build, but does not seem related to my changes:\r\n```\r\nWarning, treated as error:\r\n\/home\/circleci\/nlp\/src\/nlp\/dataset_dict.py:docstring of nlp.DatasetDict.filter:27:Inline interpreted text or phrase reference start-string without end-string.\r\nmake: *** [Makefile:20: html] Error 2\r\n```\r\n\r\nAny ideas what's going wrong?","The build_doc fail has been fixed on master.\r\nIt was due to the latest update of sphinx that has some issues, so I pinned the previous version for now.","I noticed that you also changed the Apache Beam `open` to also use utf-8. 
However it doesn't have an `encoding` parameter.\r\nTherefore you should ignore lines like\r\n\r\n```python\r\nbeam.io.filesystems.FileSystems.open(filepath)\r\n```\r\n\r\nI guess you could add a rule to your regex to only include the `open` call that have a space right before it.","Good catch @lhoestq! Your suggestion to match on `open(...)` with a whitespace was a great idea - it allowed me to simplify the regexp considerably \ud83d\ude04.\r\n\r\nI fixed the Apache Beam false positives and also caught a few problems in `json.load()`, e.g.\r\n```python\r\nrelation_name_map = json.load(open(rel_info), encoding='utf-8')\r\n```\r\n\r\nI've tested that the new regexp doesn't reintroduce these false positives, so I think the PR is ready for another review.","Ok to merge this @lhoestq ?"],"created_at":1596744129000,"updated_at":1597911368000,"closed_at":1597911368000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/481","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/481","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/481.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/481.patch"},"body":"## Description\r\nThis PR applies utf-8 encoding for all instances of `with open(...) as f` to all Python files in `datasets\/`. As suggested by @thomwolf in #468 , we use regular expressions and the following function\r\n\r\n```python\r\ndef apply_encoding_on_file_open(filepath: str):\r\n \"\"\"Apply UTF-8 encoding for all instances where a non-binary file is opened.\"\"\"\r\n \r\n with open(filepath, 'r', encoding='utf-8') as input_file:\r\n regexp = re.compile(r\"(?!.*\\b(?:encoding|rb|w|wb|w+|wb+|ab|ab+)\\b)(?<=\\s)(open)\\((.*)\\)\")\r\n input_text = input_file.read()\r\n match = regexp.search(input_text)\r\n \r\n if match:\r\n output = regexp.sub(lambda m: m.group()[:-1]+', encoding=\"utf-8\")', input_text)\r\n with open(filepath, 'w', encoding='utf-8') as output_file:\r\n output_file.write(output)\r\n```\r\n\r\nto perform the replacement. \r\n\r\nNote:\r\n\r\n1. I excluded all _**binary files**_ from the search since it's possible some objects are opened for which the encoding doesn't make sense. Please correct me if I'm wrong and I'll tweak the regexp accordingly\r\n2. There were two edge cases where the regexp failed (e.g. two `open` instances on a single line), but I decided to just fix these manually in the interest of time.\r\n3. I only applied the replacement to files in `datasets\/`. Let me know if this should be extended to other places like `metrics\/`\r\n4. 
I have implemented a unit test that should catch missing encodings in future CI runs\r\n\r\nCloses #468 and possibly #347 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/481\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/480","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/480\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/480\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/480\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/480","id":674245959,"node_id":"MDExOlB1bGxSZXF1ZXN0NDYzOTcwNjQ2","number":480,"title":"Column indexing hotfix","user":{"login":"TevenLeScao","id":26709476,"node_id":"MDQ6VXNlcjI2NzA5NDc2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26709476?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/TevenLeScao","html_url":"https:\/\/github.com\/TevenLeScao","followers_url":"https:\/\/api.github.com\/users\/TevenLeScao\/followers","following_url":"https:\/\/api.github.com\/users\/TevenLeScao\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/TevenLeScao\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/TevenLeScao\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/TevenLeScao\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/TevenLeScao\/orgs","repos_url":"https:\/\/api.github.com\/users\/TevenLeScao\/repos","events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/TevenLeScao\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Looks good to me as well but we'll want to add a test indeed.\r\nYou can add one if you have time @TevenLeScao.\r\nOtherwise, we'll do it when we are back with Quentin. ","I fixed it in #494 "],"created_at":1596713825000,"updated_at":1597221370000,"closed_at":1597221370000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/480","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/480","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/480.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/480.patch"},"body":"As observed for example in #469 , currently `__getitem__` does not convert the data to the dataset format when indexing by column. This is a hotfix that imitates functional 0.3.0. code. 
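For illustration, here is a minimal, hypothetical usage sketch of the intended behaviour (not taken from this PR; it assumes the 0.4.0 `nlp` API with `set_format` and uses the \"glue\"\/\"sst2\" dataset purely as an example):\r\n\r\n```python\r\nimport nlp\r\n\r\n# assumed example dataset; any dataset with a \"label\" column would do\r\ndset = nlp.load_dataset(\"glue\", \"sst2\", split=\"validation\")\r\ndset.set_format(type=\"numpy\", columns=[\"label\"])\r\n# with this hotfix, indexing by column should honor the requested output format\r\nlabels = dset[\"label\"]  # expected: a numpy array rather than a plain Python list\r\n```\r\n\r\n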
In the future it'd probably be nice to have a test there.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/480\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/479","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/479\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/479\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/479\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/479","id":673905407,"node_id":"MDExOlB1bGxSZXF1ZXN0NDYzNjkxMjA0","number":479,"title":"add METEOR metric","user":{"login":"vegarab","id":24683907,"node_id":"MDQ6VXNlcjI0NjgzOTA3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24683907?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vegarab","html_url":"https:\/\/github.com\/vegarab","followers_url":"https:\/\/api.github.com\/users\/vegarab\/followers","following_url":"https:\/\/api.github.com\/users\/vegarab\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vegarab\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vegarab\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vegarab\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vegarab\/orgs","repos_url":"https:\/\/api.github.com\/users\/vegarab\/repos","events_url":"https:\/\/api.github.com\/users\/vegarab\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vegarab\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Really nice !\r\nThanks for adding this one.\r\n\r\nI noticed that there are some '-' that are left in the description in the middle of some workds. It migh come from copy-pasting the pdf paper. ex: `im-provement`. Could you fix that please ?","@lhoestq \r\nLinebreaks have been removed! Note that there are still a few compound words that are hyphenated intentionally. ","I think you just need to rebase from master to fix the CI :)","Yes I made the mistake of simply merging master into this branch. A rebase seems to be neater :) Although all the commits ended up being added twice. I assume you just squash them into a single one on merge anyways?","Yes indeed they'll be squashed"],"created_at":1596669180000,"updated_at":1597844349000,"closed_at":1597844349000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/479","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/479","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/479.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/479.patch"},"body":"Added the METEOR metric. 
Can be used like this:\r\n\r\n```python\r\nimport nlp\r\nmeteor = nlp.load_metric('metrics\/meteor')\r\nmeteor.compute([\"some string\", \"some string\"], [\"some string\", \"some similar string\"])\r\n# {'meteor': 0.6411637931034483}\r\nmeteor.add(\"some string\", \"some string\")\r\nmeteor.add(\"some string\", \"some similar string\")\r\nmeteor.compute()\r\n# {'meteor': 0.6411637931034483}\r\n```\r\n\r\nUses [NLTK's implementation](https:\/\/www.nltk.org\/api\/nltk.translate.html#module-nltk.translate.meteor_score), [(source)](https:\/\/github.com\/nltk\/nltk\/blob\/develop\/nltk\/translate\/meteor_score.py)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/479\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/478","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/478\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/478\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/478\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/478","id":673178317,"node_id":"MDU6SXNzdWU2NzMxNzgzMTc=","number":478,"title":"Export TFRecord to GCP bucket","user":{"login":"astariul","id":43774355,"node_id":"MDQ6VXNlcjQzNzc0MzU1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/43774355?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/astariul","html_url":"https:\/\/github.com\/astariul","followers_url":"https:\/\/api.github.com\/users\/astariul\/followers","following_url":"https:\/\/api.github.com\/users\/astariul\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/astariul\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/astariul\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/astariul\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/astariul\/orgs","repos_url":"https:\/\/api.github.com\/users\/astariul\/repos","events_url":"https:\/\/api.github.com\/users\/astariul\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/astariul\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Nevermind, I restarted my python session and it worked fine...\r\n\r\n---\r\n\r\nI had an authentication error, and I authenticated from another terminal. After that, no more error but it was not working. Restarting the session makes it work :)"],"created_at":1596589712000,"updated_at":1596590497000,"closed_at":1596590496000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Previously, I was writing TFRecords manually to GCP bucket with : `with tf.io.TFRecordWriter('gs:\/\/my_bucket\/x.tfrecord')`\r\n\r\nSince `0.4.0` is out with the `export()` function, I tried it. But it seems TFRecords cannot be directly written to GCP bucket.\r\n\r\n`dataset.export('local.tfrecord')` works fine, \r\nbut `dataset.export('gs:\/\/my_bucket\/x.tfrecord')` does not work. \r\n\r\nThere is no error message, I just can't find the file on my bucket...\r\n\r\n---\r\n\r\nLooking at the code, `nlp` is using `tf.data.experimental.TFRecordWriter`, while I was using `tf.io.TFRecordWriter`. \r\n\r\n**What's the difference between those 2 ? 
How can I write TFRecords files directly to GCP bucket ?**\r\n\r\n@jarednielsen @lhoestq ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/478\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/477","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/477\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/477\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/477\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/477","id":673142143,"node_id":"MDU6SXNzdWU2NzMxNDIxNDM=","number":477,"title":"Overview.ipynb throws exceptions with nlp 0.4.0","user":{"login":"mandy-li","id":23109219,"node_id":"MDQ6VXNlcjIzMTA5MjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23109219?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mandy-li","html_url":"https:\/\/github.com\/mandy-li","followers_url":"https:\/\/api.github.com\/users\/mandy-li\/followers","following_url":"https:\/\/api.github.com\/users\/mandy-li\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mandy-li\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mandy-li\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mandy-li\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mandy-li\/orgs","repos_url":"https:\/\/api.github.com\/users\/mandy-li\/repos","events_url":"https:\/\/api.github.com\/users\/mandy-li\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mandy-li\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting this issue\r\n\r\nThere was a bug where numpy arrays would get returned instead of tensorflow tensors.\r\nThis is fixed on master.\r\n\r\nI tried to re-run the colab and encountered this error instead:\r\n\r\n```\r\nAttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'to_tensor'\r\n```\r\n\r\nThis is because the dataset returns a Tensor and not a RaggedTensor.\r\nBut I think we should always return a RaggedTensor unless the length of the sequence is fixed (it that case they can be stack into a Tensor).","Hi, I got another error (on Colab):\r\n\r\n```python\r\n# You can read a few attributes of the datasets before loading them (they are python dataclasses)\r\nfrom dataclasses import asdict\r\n\r\nfor key, value in asdict(datasets[6]).items():\r\n print('\ud83d\udc49 ' + key + ': ' + str(value))\r\n\r\n---------------------------------------------------------------------------\r\n\r\nTypeError Traceback (most recent call last)\r\n\r\n<ipython-input-6-b8ace6c227a2> in <module>()\r\n 2 from dataclasses import asdict\r\n 3 \r\n----> 4 for key, value in asdict(datasets[6]).items():\r\n 5 print('\ud83d\udc49 ' + key + ': ' + str(value))\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/dataclasses.py in asdict(obj, dict_factory)\r\n 1008 \"\"\"\r\n 1009 if not _is_dataclass_instance(obj):\r\n-> 1010 raise TypeError(\"asdict() should be called on dataclass instances\")\r\n 1011 return _asdict_inner(obj, dict_factory)\r\n 1012 \r\n\r\nTypeError: asdict() should be called on dataclass instances\r\n```","Indeed we'll update the 
cola with the new release coming up this week."],"created_at":1596583095000,"updated_at":1627970535000,"closed_at":1627970535000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"with nlp 0.4.0, the TensorFlow example in Overview.ipynb throws the following exceptions:\r\n\r\n\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n<ipython-input-5-48907f2ad433> in <module>\r\n----> 1 features = {x: train_tf_dataset[x].to_tensor(default_value=0, shape=[None, tokenizer.max_len]) for x in columns[:3]}\r\n 2 labels = {\"output_1\": train_tf_dataset[\"start_positions\"].to_tensor(default_value=0, shape=[None, 1])}\r\n 3 labels[\"output_2\"] = train_tf_dataset[\"end_positions\"].to_tensor(default_value=0, shape=[None, 1])\r\n 4 tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8)\r\n\r\n<ipython-input-5-48907f2ad433> in <dictcomp>(.0)\r\n----> 1 features = {x: train_tf_dataset[x].to_tensor(default_value=0, shape=[None, tokenizer.max_len]) for x in columns[:3]}\r\n 2 labels = {\"output_1\": train_tf_dataset[\"start_positions\"].to_tensor(default_value=0, shape=[None, 1])}\r\n 3 labels[\"output_2\"] = train_tf_dataset[\"end_positions\"].to_tensor(default_value=0, shape=[None, 1])\r\n 4 tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8)\r\n\r\nAttributeError: 'numpy.ndarray' object has no attribute 'to_tensor'","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/477\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/476","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/476\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/476\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/476\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/476","id":672991854,"node_id":"MDExOlB1bGxSZXF1ZXN0NDYyOTMyMTgx","number":476,"title":"CheckList","user":{"login":"marcotcr","id":698010,"node_id":"MDQ6VXNlcjY5ODAxMA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/698010?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/marcotcr","html_url":"https:\/\/github.com\/marcotcr","followers_url":"https:\/\/api.github.com\/users\/marcotcr\/followers","following_url":"https:\/\/api.github.com\/users\/marcotcr\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/marcotcr\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/marcotcr\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/marcotcr\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/marcotcr\/orgs","repos_url":"https:\/\/api.github.com\/users\/marcotcr\/repos","events_url":"https:\/\/api.github.com\/users\/marcotcr\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/marcotcr\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> Also, a little out of my depth there, but would there be a way to have the default pip install checklist command not require mysql and mariadb to be installed? 
Feels like that might be a source of confusion for users.\r\n\r\nI removed the pattern dependency, mysql is not a requirement anymore. I'm not sure where mariadb is coming from. "],"created_at":1596565925000,"updated_at":1599161648000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/476","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/476","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/476.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/476.patch"},"body":"Sorry for the large pull request.\r\n- Added checklists as datasets. I can't run `test_load_real_dataset` (see #474), but I can load the datasets successfully as shown in the example notebook\r\n- Added a checklist wrapper ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/476\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/475","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/475\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/475\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/475\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/475","id":672884595,"node_id":"MDExOlB1bGxSZXF1ZXN0NDYyODQzMzQz","number":475,"title":"misc. bugs and quality of life","user":{"login":"joeddav","id":9353833,"node_id":"MDQ6VXNlcjkzNTM4MzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9353833?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/joeddav","html_url":"https:\/\/github.com\/joeddav","followers_url":"https:\/\/api.github.com\/users\/joeddav\/followers","following_url":"https:\/\/api.github.com\/users\/joeddav\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/joeddav\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/joeddav\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/joeddav\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/joeddav\/orgs","repos_url":"https:\/\/api.github.com\/users\/joeddav\/repos","events_url":"https:\/\/api.github.com\/users\/joeddav\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/joeddav\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Cool thanks, I made those changes. LMK if you think it's ready for merge.","Ok to merge for me"],"created_at":1596555149000,"updated_at":1597698848000,"closed_at":1597698847000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/475","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/475","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/475.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/475.patch"},"body":"A few misc. bugs and QOL improvements that I've come across in using the library. Let me know if you don't like any of them and I can adjust\/remove them.\r\n\r\n1. Printing datasets without a description field throws an error when formatting the `single_line_description`. 
This fixes that, and also adds some formatting to the repr to make it slightly more readable.\r\n```\r\n>>> print(list_datasets()[0])\r\nnlp.ObjectInfo(\r\n\tid='aeslc',\r\n\tdescription='A collection of email messages of employees in the Enron Corporation.There are two features: - email_body: email body text. - subject_line: email subject text.',\r\n\tfiles=[nlp.S3Object('aeslc.py'), nlp.S3Object('dataset_infos.json'), nlp.S3Object('dummy\/1.0.0\/dummy_data-zip-extracted\/dummy_data\/AESLC-master\/enron_subject_line\/dev\/allen-p_inbox_29.subject'), nlp.S3Object('dummy\/1.0.0\/dummy_data-zip-extracted\/dummy_data\/AESLC-master\/enron_subject_line\/test\/allen-p_inbox_24.subject'), nlp.S3Object('dummy\/1.0.0\/dummy_data-zip-extracted\/dummy_data\/AESLC-master\/enron_subject_line\/train\/allen-p_inbox_20.subject'), nlp.S3Object('dummy\/1.0.0\/dummy_data.zip'), nlp.S3Object('urls_checksums\/checksums.txt')]\r\n)\r\n```\r\n\r\n2. Add id-only option to `list_datasets` and `list_metrics` to allow the user to easily print out just the names of the datasets & metrics. I often found myself annoyed that this took so many strokes to do.\r\n\r\n```python\r\n[dataset.id for dataset in list_datasets()] # before\r\nlist_datasets(id_only=True) # after\r\n```\r\n\r\n3. Fix null-seed randomization caching. When using `train_test_split` and `shuffle`, the computation was being cached even without a seed or generator being passed. The result was that calling `.shuffle` more than once on the same dataset didn't do anything without passing a distinct seed or generator. Likewise with `train_test_split`.\r\n\r\n4. Indexing by iterables of bool. I added support for passing an iterable of type bool to `_getitem` as a numpy\/pandas-like indexing method. Let me know if you think it's redundant with `filter` (I know it's not optimal memory-wise), but I think it's nice to have as a lightweight alternative to do simple things without having to create a copy of the entire dataset, e.g.\r\n\r\n```python\r\ndataset[dataset['label'] == 0] # numpy-like bool indexing to look at instances with labels of 0\r\n```\r\n\r\n5. Add an `input_column` argument to `map` and `filter`, which allows you to filter\/map on a particular column rather than passing the whole dict to the function. Also adds `fn_kwargs` to be passed to the function. 
I think these together make mapping much cleaner in many cases such as mono-column tokenization:\r\n\r\n```python\r\n# before\r\ndataset = dataset.map(lambda batch: tokenizer(batch[\"text\"]))\r\n# after\r\ndataset = dataset.map(tokenizer, input_column=\"text\")\r\ndataset = dataset.map(tokenizer, input_column=\"text\", fn_kwargs={\"truncation\": True, \"padding\": True})\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/475\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/474","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/474\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/474\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/474\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/474","id":672407330,"node_id":"MDU6SXNzdWU2NzI0MDczMzA=","number":474,"title":"test_load_real_dataset when config has BUILDER_CONFIGS that matter","user":{"login":"marcotcr","id":698010,"node_id":"MDQ6VXNlcjY5ODAxMA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/698010?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/marcotcr","html_url":"https:\/\/github.com\/marcotcr","followers_url":"https:\/\/api.github.com\/users\/marcotcr\/followers","following_url":"https:\/\/api.github.com\/users\/marcotcr\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/marcotcr\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/marcotcr\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/marcotcr\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/marcotcr\/orgs","repos_url":"https:\/\/api.github.com\/users\/marcotcr\/repos","events_url":"https:\/\/api.github.com\/users\/marcotcr\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/marcotcr\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The `data_dir` parameter has been removed. Now the error is `ValueError: Config name is missing`\r\n\r\nAs mentioned in #470 I think we can have one test with the first config of BUILDER_CONFIGS, and another test that runs all of the configs in BUILDER_CONFIGS","This was fixed in #527 \r\n\r\nClosing this one, but feel free to re-open if you have other questions"],"created_at":1596498396000,"updated_at":1599490393000,"closed_at":1599490393000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"If a dataset has custom `BUILDER_CONFIGS` with non-keyword arguments (or keyword arguments with non-default values), the config is not loaded during the test and causes an error.\r\nI think the problem is that `test_load_real_dataset` calls `load_dataset` with `data_dir=temp_data_dir` ([here](https:\/\/github.com\/huggingface\/nlp\/blob\/master\/tests\/test_dataset_common.py#L200)). This causes [this line](https:\/\/github.com\/huggingface\/nlp\/blob\/master\/src\/nlp\/builder.py#L201) to always be false because `config_kwargs` is not `None`. 
[This line](https:\/\/github.com\/huggingface\/nlp\/blob\/master\/src\/nlp\/builder.py#L222) will be run instead, which doesn't use `BUILDER_CONFIGS`.\r\n\r\nFor an example, you can try running the test for lince:\r\n` RUN_SLOW=1 pytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_lince`\r\nwhich yields\r\n> E TypeError: __init__() missing 3 required positional arguments: 'colnames', 'classes', and 'label_column'","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/474\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/473","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/473\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/473\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/473\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/473","id":672007247,"node_id":"MDExOlB1bGxSZXF1ZXN0NDYyMTIwNzU4","number":473,"title":"add DoQA dataset (ACL 2020)","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1596454012000,"updated_at":1599758351000,"closed_at":1599133455000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/473","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/473","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/473.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/473.patch"},"body":"add DoQA dataset (ACL 2020) http:\/\/ixa.eus\/node\/12931","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/473\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/472","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/472\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/472\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/472\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/472","id":672000745,"node_id":"MDExOlB1bGxSZXF1ZXN0NDYyMTE1MjA4","number":472,"title":"add crd3 dataset","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This PR was already approved by @lhoestq in #456 . 
This one just make style to remove some typos"],"created_at":1596453302000,"updated_at":1596453730000,"closed_at":1596453729000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/472","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/472","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/472.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/472.patch"},"body":"opening new PR for CRD3 dataset (ACL2020) to fix the circle CI problems","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/472\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/471","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/471\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/471\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/471\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/471","id":671996423,"node_id":"MDExOlB1bGxSZXF1ZXN0NDYyMTExNTU1","number":471,"title":"add reuters21578 dataset","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1596452834000,"updated_at":1599127683000,"closed_at":1599127130000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/471","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/471","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/471.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/471.patch"},"body":"new PR to add the reuters21578 dataset and fix the circle CI problems.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/471\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/470","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/470\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/470\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/470\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/470","id":671952276,"node_id":"MDExOlB1bGxSZXF1ZXN0NDYyMDc0MzQ0","number":470,"title":"Adding IWSLT 2017 dataset.","user":{"login":"Narsil","id":204321,"node_id":"MDQ6VXNlcjIwNDMyMQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/204321?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Narsil","html_url":"https:\/\/github.com\/Narsil","followers_url":"https:\/\/api.github.com\/users\/Narsil\/followers","following_url":"https:\/\/api.github.com\/users\/Narsil\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Narsil\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Narsil\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Narsil\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Narsil\/orgs","repos_url":"https:\/\/api.github.com\/users\/Narsil\/repos","events_url":"https:\/\/api.github.com\/users\/Narsil\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Narsil\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Ok I tried to add the dummy dataset (I actually modified the dummy_data command to generate them for me because it was too painful to do that manually).\r\n\r\nThe dummy_data test seems to work:\r\n```bash\r\nRUN_SLOW=1 pytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_iwslt2017\r\n```\r\n\r\nHowever the test on the full data fails, because the `**config_kwargs` don't include `pair, multilingual`.\r\nI could add a default parameter for the Config (but that feels broken, how can one config be the \"default\" ?). If I do I still have errors, saying that something within the downloader is a directory so I'm not sure where that comes from.\r\n\r\nI can share my auto_zip dummy data code if you want (I tried to keep it clean). [Edit: it's [here](https:\/\/github.com\/Narsil\/nlp\/tree\/auto_zip)]. \r\nThe way it works is that it just keeps X line from the beginning of the original files, and Y lines at the end. It's good enough for my usage, but I guess it could work for most data files out there (as long as they're real text and not binary format)","The slow test doesn't support dataset that require config parameters that don't have default values.\r\n\r\nTo improve that we can replace it by two tests:\r\n- one test that loads the default config (it can simply be the first config of the config lists for example)\r\n- one tests that iterate over all configs and load them all one by one\r\n\r\nBy using the configs inside the builder config lists, there is no need to instantiate new configs, so the missing parameter error doesn't happen.\r\n\r\nDoes that sound good to you ?","Seems fair.\r\nHowever I'm unsure what I should do ?\r\n\r\nShould I wait for #527 to pass and rebase and the command will be the same ?\r\nShould I update something ?","I think everything is fine on your side. 
Thanks for adding this dataset :)\r\n\r\nI think it's better to wait for the slow test to be updated if you don't mind.\r\n","Sure ! :)","Thanks for fixing the isort\/black changes :)\r\nFeel free to merge if it's good for you @Narsil "],"created_at":1596448359000,"updated_at":1599482010000,"closed_at":1599482010000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/470","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/470","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/470.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/470.patch"},"body":"Created a [IWSLT 2017](https:\/\/sites.google.com\/site\/iwsltevaluation2017\/TED-tasks) dataset script for the *multilingual data*.\r\n\r\n```\r\nBilingual data: {Arabic, German, French, Japanese, Korean, Chinese} <-> English\r\nMultilingual data: German, English, Italian, Dutch, Romanian. (Any pair)\r\n```\r\n\r\nI'm unsure how to handle bilingual vs multilingual. Given `nlp` architecture a Config option seems to be the way to go, however, it might be a bit confusing to have different language pairs with different option. Using just language pairs is not viable as English to German exists in both.\r\n\r\nAny opinion on how that should be done ?\r\nEDIT: I decided to just omit de-en from multilingual as it's only a subset of the bilingual one. That way only language pairs exist.\r\nEDIT : Could be interesting for #438 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/470\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/469","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/469\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/469\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/469\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/469","id":671876963,"node_id":"MDU6SXNzdWU2NzE4NzY5NjM=","number":469,"title":"invalid data type 'str' at _convert_outputs in arrow_dataset.py","user":{"login":"Murgates","id":30617486,"node_id":"MDQ6VXNlcjMwNjE3NDg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/30617486?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Murgates","html_url":"https:\/\/github.com\/Murgates","followers_url":"https:\/\/api.github.com\/users\/Murgates\/followers","following_url":"https:\/\/api.github.com\/users\/Murgates\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Murgates\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Murgates\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Murgates\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Murgates\/orgs","repos_url":"https:\/\/api.github.com\/users\/Murgates\/repos","events_url":"https:\/\/api.github.com\/users\/Murgates\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Murgates\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! Did you try to set the output format to pytorch ? 
(or tensorflow if you're using tensorflow)\r\nIt can be done with `dataset.set_format(\"torch\", columns=columns)` (or \"tensorflow\").\r\n\r\nNote that for pytorch, string columns can't be converted to `torch.Tensor`, so you have to specify in `columns=` the list of columns you want to keep (`input_ids` for example)","Hello. Yes, I did set the output format as below for the two columns \r\n\r\n `train_dataset.set_format('torch',columns=['Text','Label'])`\r\n ","I think you're having this issue because you try to format strings as pytorch tensors, which is not possible.\r\nIndeed by having \"Text\" in `columns=['Text','Label']`, you try to convert the text values to pytorch tensors.\r\n\r\nInstead I recommend you to first tokenize your dataset using a tokenizer from transformers. For example\r\n\r\n```python\r\nfrom transformers import BertTokenizer\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\n\r\ntrain_dataset = train_dataset.map(lambda x: tokenizer(x[\"Text\"]), batched=True)\r\ntrain_dataset.set_format(\"torch\", columns=[\"input_ids\"])\r\n```\r\n\r\nAnother way to fix your issue would be to not set the format to pytorch, and leave the dataset as it is by default. In that case, the strings are returned normally when you get examples from your dataloader. It means that you would have to tokenize the examples in the training loop (or using a data collator) though.\r\n\r\nLet me know if you have other questions","Hi, actually the thing is I am getting the same error and even after tokenizing them I am passing them through batch_encode_plus.\r\nI dont know what seems to be the problem is. 
I even converted it into 'pt' while passing them through batch_encode_plus but when I am evaluating my model , i am getting this error\r\n> \r\n> TypeError Traceback (most recent call last)\r\n> in ()\r\n> ----> 1 val_loss, predictions, true_val = evaluate(dataloader_validation)\r\n> 2 val_f1 = f1_score_func(predictions, true_val)\r\n> 3 tqdm.write(f'Validation loss: {val_loss}')\r\n> 4 tqdm.write(f'F1 Score (Weighted): {val_f1}')\r\n> \r\n> 6 frames\r\n> \/usr\/local\/lib\/python3.6\/dist-packages\/torch\/utils\/data\/dataset.py in (.0)\r\n> 160\r\n> 161 def **getitem**(self, index):\r\n> --> 162 return tuple(tensor[index] for tensor in self.tensors)\r\n> 163\r\n> 164 def **len**(self):\r\n> \r\n> TypeError: new(): invalid data type 'str'\r\n\r\nI got the same error and fix it .\r\nyou can check your input where there may be string contained.\r\nsuch as\r\n```\r\na = [1,2,3,4,'<unk>']\r\ntorch.tensor(a)\r\n```","I didn't know tokenizers could return strings in the token ids. Which tokenizer are you using to get this @Doragd ?","> I didn't know tokenizers could return strings in the token ids. Which tokenizer are you using to get this @Doragd ?\r\n\r\ni'm sorry that i met this issue in another place (not in huggingface repo). ","@akhilkapil do you have strings in your dataset ? When you set the dataset format to \"pytorch\" you should exclude columns with strings as pytorch can't make tensors out of strings"],"created_at":1596440909000,"updated_at":1603357466000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I trying to build multi label text classifier model using Transformers lib. \r\n\r\nI'm using Transformers NLP to load the data set, while calling trainer.train() method. It throws the following error \r\n\r\nFile \"C:\\***\\arrow_dataset.py\", line 343, in _convert_outputs\r\n v = command(v)\r\nTypeError: new(): invalid data type 'str'\r\n\r\nI'm using pyarrow 1.0.0. And I have simple custom data set with Text and Integer Label. 
\r\nEx: Data\r\n Text , Label #Column Header\r\n I'm facing an Network issue, 1\r\n I forgot my password, 2\r\n\r\nError StackTrace:\r\n\r\nFile \"C:\\**\\transformers\\trainer.py\", line 492, in train\r\n for step, inputs in enumerate(epoch_iterator):\r\n File \"C:\\**\\tqdm\\std.py\", line 1104, in __iter__\r\n for obj in iterable:\r\n File \"C:\\**\\torch\\utils\\data\\dataloader.py\", line 345, in __next__\r\n data = self._next_data()\r\n File \"C:\\**\\torch\\utils\\data\\dataloader.py\", line 385, in _next_data\r\n data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n File \"C:\\**\\torch\\utils\\data\\_utils\\fetch.py\", line 44, in fetch\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"C:\\**\\torch\\utils\\data\\_utils\\fetch.py\", line 44, in <listcomp>\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"C:\\**\\nlp\\arrow_dataset.py\", line 414, in __getitem__\r\n output_all_columns=self._output_all_columns,\r\n File \"C:\\**\\nlp\\arrow_dataset.py\", line 403, in _getitem\r\n outputs, format_type=format_type, format_columns=format_columns, output_all_columns=output_all_columns\r\n File \"C:\\**\\nlp\\arrow_dataset.py\", line 343, in _convert_outputs\r\n v = command(v)\r\nTypeError: new(): invalid data type 'str'\r\n \r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/469\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/468","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/468\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/468\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/468\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/468","id":671622441,"node_id":"MDU6SXNzdWU2NzE2MjI0NDE=","number":468,"title":"UnicodeDecodeError while loading PAN-X task of XTREME dataset","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Indeed. 
Solution 1 is the simplest.\r\n\r\nThis is actually a recurring problem.\r\nI think we should scan all the datasets with regexpr to fix the use of `open()` without encodings.\r\nAnd probably add a test in the CI to forbid using this in the future.","I'm happy to tackle the broader problem - will open a PR when it's ready!","That would be awesome!","I've created a simple function that seems to do the trick:\r\n\r\n```python\r\ndef apply_encoding_on_file_open(filepath: str):\r\n \"\"\"Apply UTF-8 encoding for all instances where a non-binary file is opened.\"\"\"\r\n \r\n with open(filepath, 'r', encoding='utf-8') as input_file:\r\n regexp = re.compile(r\"\"\"\r\n (?!.*\\b(?:encoding|rb|wb|wb+|ab|ab+)\\b)\r\n (open)\r\n \\((.*)\\)\r\n \"\"\")\r\n input_text = input_file.read()\r\n match = regexp.search(input_text)\r\n \r\n if match:\r\n print('Found match!', match.group())\r\n # append utf-8 encoding to matching groups in-place\r\n output = regexp.sub(lambda m: m.group()[:-1]+', encoding=\"utf-8\")', input_text)\r\n with open(filepath, 'w', encoding='utf-8') as output_file:\r\n output_file.write(output)\r\n else:\r\n print(\"No match found!\")\r\n```\r\n\r\nThe regexp does a negative lookahead to avoid matching on cases where the encoding is already specified or when binary files are involved.\r\n\r\nFrom an implementation perspective:\r\n\r\n* Would it make sense to include this function in `nlp-cli` so that we can run something like\r\n```\r\nnlp-cli fix_encoding path\/to\/folder\r\n```\r\nand the command recursively fixes all files in the target?\r\n* What is the desired behaviour in the CI test? Here we could either have a simple script that we run as a `job` in the CI and raises an error if a missing encoding is detected. Alternatively we could incorporate this behaviour into the CLI and run that in the CI.\r\n\r\nPlease let me know what you prefer among the alternatives.\r\n","I realised I was overthinking the problem, so decided to just run the regexp over the codebase and make the PR. 
In other words, we can ignore my comments about using the CLI \ud83d\ude38 "],"created_at":1596377110000,"updated_at":1597911368000,"closed_at":1597911368000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"Hi \ud83e\udd17 team!\r\n\r\n## Description of the problem\r\nI'm running into a `UnicodeDecodeError` while trying to load the PAN-X subset the XTREME dataset: \r\n\r\n```\r\n---------------------------------------------------------------------------\r\nUnicodeDecodeError Traceback (most recent call last)\r\n<ipython-input-5-1d61f439b843> in <module>\r\n----> 1 dataset = load_dataset(\"xtreme\", \"PAN-X.en\", data_dir='.\/data')\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 528 ignore_verifications = ignore_verifications or save_infos\r\n 529 # Download\/copy dataset processing script\r\n--> 530 module_path, hash = prepare_module(path, download_config=download_config, dataset=True)\r\n 531 \r\n 532 # Get dataset builder class from the processing script\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/load.py in prepare_module(path, download_config, dataset, force_local_path, **download_kwargs)\r\n 265 \r\n 266 # Download external imports if needed\r\n--> 267 imports = get_imports(local_path)\r\n 268 local_imports = []\r\n 269 library_imports = []\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/load.py in get_imports(file_path)\r\n 156 lines = []\r\n 157 with open(file_path, mode=\"r\") as f:\r\n--> 158 lines.extend(f.readlines())\r\n 159 \r\n 160 logger.info(\"Checking %s for additional imports.\", file_path)\r\n\r\n\/usr\/lib\/python3.6\/encodings\/ascii.py in decode(self, input, final)\r\n 24 class IncrementalDecoder(codecs.IncrementalDecoder):\r\n 25 def decode(self, input, final=False):\r\n---> 26 return codecs.ascii_decode(input, self.errors)[0]\r\n 27 \r\n 28 class StreamWriter(Codec,codecs.StreamWriter):\r\n\r\nUnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 111: ordinal not in range(128)\r\n```\r\n\r\n## Steps to reproduce\r\nInstall from nlp's master branch\r\n```python\r\npip install git+https:\/\/github.com\/huggingface\/nlp.git\r\n```\r\nthen run\r\n```python\r\nfrom nlp import load_dataset\r\n# AmazonPhotos.zip is located in data\/\r\ndataset = load_dataset(\"xtreme\", \"PAN-X.en\", data_dir='.\/data')\r\n```\r\n\r\n## OS \/ platform details\r\n\r\n- `nlp` version: latest from master\r\n- Platform: Linux-4.15.0-72-generic-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.6.9\r\n- PyTorch version (GPU?): 1.4.0 (True)\r\n- Tensorflow version (GPU?): 2.1.0 (True)\r\n- Using GPU in script?: True\r\n- Using distributed or parallel set-up in script?: False\r\n\r\n## Proposed solution\r\nEither change [line 762](https:\/\/github.com\/huggingface\/nlp\/blob\/7ada00b1d62f94eee22a7df38c6b01e3f27194b7\/datasets\/xtreme\/xtreme.py#L762) in `xtreme.py` to include UTF-8 encoding:\r\n\r\n```\r\n# old\r\nwith open(filepath) as f\r\n# new\r\nwith open(filepath, encoding='utf-8') as f\r\n```\r\n\r\nor raise a warning that suggests setting the locale explicitly, e.g.\r\n```python\r\nimport locale\r\nlocale.setlocale(locale.LC_ALL, 'C.UTF-8')\r\n```\r\nI have a preference for the first solution. 
Let me know if you agree and I'll be happy to implement the simple fix!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/468\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/467","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/467\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/467\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/467\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/467","id":671580010,"node_id":"MDExOlB1bGxSZXF1ZXN0NDYxNzgwMzUy","number":467,"title":"DOCS: Fix typo","user":{"login":"Bharat123rox","id":13381361,"node_id":"MDQ6VXNlcjEzMzgxMzYx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13381361?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Bharat123rox","html_url":"https:\/\/github.com\/Bharat123rox","followers_url":"https:\/\/api.github.com\/users\/Bharat123rox\/followers","following_url":"https:\/\/api.github.com\/users\/Bharat123rox\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Bharat123rox\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Bharat123rox\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Bharat123rox\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Bharat123rox\/orgs","repos_url":"https:\/\/api.github.com\/users\/Bharat123rox\/repos","events_url":"https:\/\/api.github.com\/users\/Bharat123rox\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Bharat123rox\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks!"],"created_at":1596358777000,"updated_at":1596376347000,"closed_at":1596359934000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/467","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/467","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/467.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/467.patch"},"body":"Fix typo from dictionnary -> dictionary","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/467\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/466","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/466\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/466\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/466\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/466","id":670766891,"node_id":"MDExOlB1bGxSZXF1ZXN0NDYxMDEzOTM0","number":466,"title":"[METRICS] Various improvements on 
metrics","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The cast function is now called inside `features.encode_example`.\r\nI also added `encode_batch` that was missing.\r\n\r\nMoreover I used the cast function in `Dataset.map` to support torch\/tensorflow tensors or numpy arrays inputs.\r\n\r\nThere are tests for tensors inputs in metrics and in .map","I think we can merge"],"created_at":1596279825000,"updated_at":1597677300000,"closed_at":1597677299000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/466","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/466","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/466.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/466.patch"},"body":"- Disallow the use of positional arguments to avoid `predictions` vs `references` mistakes\r\n- Allow to directly feed numpy\/pytorch\/tensorflow\/pandas objects in metrics","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/466\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/465","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/465\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/465\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/465\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/465","id":669889779,"node_id":"MDExOlB1bGxSZXF1ZXN0NDYwMjEwODYw","number":465,"title":"Keep features after 
transform","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["One note on features inference:\r\n\r\nif an arrow type is `struct of items` where each item is a `list`, then we return a `dict` in which each item is a `Sequence`.\r\nIt means that we don't use the Sequence <-> dict swap when we infer features.\r\n\r\nIt's fine because the swap is generally used in dataset scripts, in which features are defined (inferred features are discarded)","If it's fine for you @thomwolf we can merge this one :) ","Yes this is fine I think!"],"created_at":1596206601000,"updated_at":1596220053000,"closed_at":1596220052000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/465","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/465","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/465.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/465.patch"},"body":"When applying a transform like `map`, some features were lost (and inferred features were used).\r\nIt was the case for ClassLabel, Translation, etc.\r\n\r\nTo fix that, I did some modifications in the `ArrowWriter`:\r\n\r\n- added the `update_features` parameter. When it's `True`, then the features specified by the user (if any) can be updated with inferred features if their type don't match. `map` transform sets `update_features=True` when writing to cache file or buffer. Features won't change by default in `map`.\r\n\r\n- added the `with_metadata` parameter. 
If `True`, the `features` (after update) will be written inside the metadata of the schema in this format:\r\n```\r\n{\r\n \"huggingface\": {\"features\" : <serialized Features exactly like dataset_info.json>}\r\n} \r\n```\r\nThen, once a dataset is instantiated without info\/features, these metadata are used to set the features of the dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/465\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/464","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/464\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/464\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/464\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/464","id":669767381,"node_id":"MDExOlB1bGxSZXF1ZXN0NDYwMTAxNDYz","number":464,"title":"Add rename, remove and cast in-place operations","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1596198621000,"updated_at":1596210602000,"closed_at":1596210600000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/464","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/464","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/464.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/464.patch"},"body":"Add a bunch of in-place operation leveraging the Arrow back-end to rename and remove columns and cast to new features without using the more expensive `map` method.\r\n\r\nThese methods are added to `Dataset` as well as `DatasetDict`.\r\n\r\nAdded tests for these new methods and add the methods to the doc.\r\n\r\nNaming follows the new pattern with a trailing underscore indicating in-place methods.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/464\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/463","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/463\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/463\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/463\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/463","id":669735455,"node_id":"MDExOlB1bGxSZXF1ZXN0NDYwMDcyNjQ1","number":463,"title":"Add dataset\/mlsum","user":{"login":"RachelKer","id":36986299,"node_id":"MDQ6VXNlcjM2OTg2Mjk5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36986299?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/RachelKer","html_url":"https:\/\/github.com\/RachelKer","followers_url":"https:\/\/api.github.com\/users\/RachelKer\/followers","following_url":"https:\/\/api.github.com\/users\/RachelKer\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/RachelKer\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/RachelKer\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/RachelKer\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/RachelKer\/orgs","repos_url":"https:\/\/api.github.com\/users\/RachelKer\/repos","events_url":"https:\/\/api.github.com\/users\/RachelKer\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/RachelKer\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I think the problem is related to `wiki_dpr` dataset which is making the circle CI failed as you can see:\r\n```\r\nFAILED tests\/test_dataset_common.py::AWSDatasetTest::test_load_dataset_wiki_dpr\r\nFAILED tests\/test_hf_gcp.py::TestDatasetOnHfGcp::test_script_synced_with_s3_wiki_dpr\/dummy_psgs_w100_no_embeddings\r\nFAILED tests\/test_hf_gcp.py::TestDatasetOnHfGcp::test_script_synced_with_s3_wiki_dpr\/dummy_psgs_w100_with_nq_embeddings\r\nFAILED tests\/test_hf_gcp.py::TestDatasetOnHfGcp::test_script_synced_with_s3_wiki_dpr\/psgs_w100_no_embeddings\r\nFAILED tests\/test_hf_gcp.py::TestDatasetOnHfGcp::test_script_synced_with_s3_wiki_dpr\/psgs_w100_with_nq_embeddings\r\n\r\n```\r\nI'm facing the same issues with my last commits, I tried to rebase from master but it still not working. Maybe @lhoestq can help with.","Hello, I am confused about the next steps I need to do. Did the forced merge solve the issue ?","Hello :)\r\nI think you can just rebase from master and it should solve the CI error"],"created_at":1596196252000,"updated_at":1598280882000,"closed_at":1598280882000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/463","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/463","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/463.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/463.patch"},"body":"New pull request that should correct the previous errors. 
\r\n\r\nThe load_real_data test still fails because it is looking for a default dataset URL that does not exist; this does not happen when loading the dataset with load_dataset","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/463\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/462","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/462\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/462\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/462\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/462","id":669715547,"node_id":"MDExOlB1bGxSZXF1ZXN0NDYwMDU0NDgz","number":462,"title":"add DoQA (ACL 2020) dataset","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1596194756000,"updated_at":1596454107000,"closed_at":1596454107000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/462","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/462","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/462.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/462.patch"},"body":"adds DoQA (ACL 2020) dataset","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/462\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/461","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/461\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/461\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/461\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/461","id":669703508,"node_id":"MDExOlB1bGxSZXF1ZXN0NDYwMDQzNDY5","number":461,"title":"Doqa","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1596193872000,"updated_at":1596193995000,"closed_at":1596193995000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/461","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/461","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/461.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/461.patch"},"body":"add DoQA (ACL 2020) dataset","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/461\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/460","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/460\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/460\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/460\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/460","id":669585256,"node_id":"MDExOlB1bGxSZXF1ZXN0NDU5OTM2OTU2","number":460,"title":"Fix KeyboardInterrupt in map and bad indices in 
select","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks @TevenLeScao for finding this issue","Thanks @lhoestq for catching this \u2764\ufe0f"],"created_at":1596185835000,"updated_at":1596195139000,"closed_at":1596195138000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/460","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/460","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/460.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/460.patch"},"body":"If you interrupted a map function while it was writing, the cached file was not discarded.\r\nTherefore the next time you called map, it was loading an incomplete arrow file.\r\n\r\nWe had the same issue with select if there was a bad indice at one point.\r\n\r\nTo fix that I used temporary files that are renamed once everything is finished.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/460\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/459","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/459\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/459\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/459\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/459","id":669545437,"node_id":"MDExOlB1bGxSZXF1ZXN0NDU5OTAxMjEy","number":459,"title":"[Breaking] Update Dataset and DatasetDict 
API","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1596183093000,"updated_at":1598430516000,"closed_at":1598430515000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/459","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/459","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/459.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/459.patch"},"body":"This PR contains a few breaking changes so it's probably good to keep it for the next (major) release:\r\n- rename the `flatten`, `drop` and `dictionary_encode_column` methods in `flatten_`, `drop_` and `dictionary_encode_column_` to indicate that these methods have in-place effects as discussed in #166. From now on we should keep the convention of having a trailing underscore for methods which have an in-place effet. I also adopt the conversion of not returning the (self) dataset for these methods. This is different than what PyTorch does for instance (`model.to()` is in-place but return the self model) but I feel like it's a safer approach in terms of UX.\r\n- remove the `dataset.columns` property which returns a low-level Apache Arrow object and should not be used by users. Similarly, remove `dataset. 
nbytes` which we don't really want to expose in this bare-bone format.\r\n- add a few more properties and methods to `DatasetDict`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/459\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/458","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/458\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/458\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/458\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/458","id":668972666,"node_id":"MDExOlB1bGxSZXF1ZXN0NDU5Mzk5ODg2","number":458,"title":"Install CoVal metric from github","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1596128365000,"updated_at":1596203793000,"closed_at":1596203793000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/458","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/458","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/458.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/458.patch"},"body":"Changed the import statements in `coval.py` to direct the user to install the original package from github if it's not already installed (the warning will only display properly after merging [PR455](https:\/\/github.com\/huggingface\/nlp\/pull\/455))\r\n\r\nAlso changed the function call to use named rather than positional arguments.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/458\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/457","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/457\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/457\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/457\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/457","id":668898386,"node_id":"MDExOlB1bGxSZXF1ZXN0NDU5MzMyOTM1","number":457,"title":"add set_format to DatasetDict + 
tests","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1596124400000,"updated_at":1596130476000,"closed_at":1596130474000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/457","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/457","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/457.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/457.patch"},"body":"Add the `set_format` and `formated_as` and `reset_format` to `DatasetDict`.\r\nAdd tests to these for `Dataset` and `DatasetDict`.\r\nFix some bugs uncovered by the tests for `pandas` formating.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/457\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/456","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/456\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/456\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/456\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/456","id":668723785,"node_id":"MDExOlB1bGxSZXF1ZXN0NDU5MTc1MTY0","number":456,"title":"add crd3(ACL 2020) 
dataset","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1596115715000,"updated_at":1596454132000,"closed_at":1596454132000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/456","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/456","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/456.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/456.patch"},"body":"This PR adds the **Critical Role Dungeons and Dragons Dataset** published at ACL 2020","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/456\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/455","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/455\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/455\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/455\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/455","id":668037965,"node_id":"MDExOlB1bGxSZXF1ZXN0NDU4NTk4NTUw","number":455,"title":"Add bleurt","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Sorry one nit: Could we use 
named arguments for the call to BLEURT?\r\n\r\ni.e. \r\n scores = self.scorer.score(references=references, candidates=predictions)\r\n\r\n(i.e. so it is less bug prone)\r\n","Following up on Ankur's comment---we are going to drop support for\npositional (not named) arguments in the future releases because it seems to\ncause bugs and confusion. I hope it doesn't create too much of a mess.\n\nLe jeu. 30 juil. 2020 \u00e0 10:44, ankparikh <notifications@github.com> a\n\u00e9crit :\n\n> Sorry one nit: Could we use named arguments for the call to BLEURT?\n>\n> i.e.\n> scores = self.scorer.score(references=references, candidates=predictions)\n>\n> (i.e. so it is less bug prone)\n>\n> \u2014\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https:\/\/github.com\/huggingface\/nlp\/pull\/455#issuecomment-666414514>, or\n> unsubscribe\n> <https:\/\/github.com\/notifications\/unsubscribe-auth\/ABTMRNGAN2PMECS5K4DIHJDR6GBMLANCNFSM4PL323FA>\n> .\n>\n","> Following up on Ankur's comment---we are going to drop support for positional (not named) arguments in the future releases because it seems to cause bugs and confusion. I hope it doesn't create too much of a mess. Le jeu. 30 juil. 2020 \u00e0 10:44, ankparikh <notifications@github.com> a \u00e9crit :\r\n> [\u2026](#)\r\n> Sorry one nit: Could we use named arguments for the call to BLEURT? i.e. scores = self.scorer.score(references=references, candidates=predictions) (i.e. so it is less bug prone) \u2014 You are receiving this because you were mentioned. Reply to this email directly, view it on GitHub <[#455 (comment)](https:\/\/github.com\/huggingface\/nlp\/pull\/455#issuecomment-666414514)>, or unsubscribe <https:\/\/github.com\/notifications\/unsubscribe-auth\/ABTMRNGAN2PMECS5K4DIHJDR6GBMLANCNFSM4PL323FA> .\r\n\r\nChanged @ankparikh @tsellam, thanks for taking a look!","We should avoid positional arguments in metrics on our side as well. It's a dangerous source of errors indeed."],"created_at":1596046112000,"updated_at":1596203774000,"closed_at":1596203774000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/455","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/455","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/455.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/455.patch"},"body":"This PR adds the BLEURT metric to the library.\r\n\r\nThe BLEURT `Metric` downloads a TF checkpoint corresponding to its `config_name` at creation (in the `_info` function). Default is set to `bleurt-base-128`.\r\n\r\nNote that the default in the original package is `bleurt-tiny-128`, but they throw a warning and recommend using `bleurt-base-128` instead. 
I think it's safer to have our users have a functioning metric when they call the default behavior, we'll address discrepancies in the issues\/discussions if it comes up.\r\n\r\nIn addition to the BLEURT file, `load.py` was changed so we can ask users to pip install the required packages from git when they have a `setup.py` but are not on PyPL\r\n\r\ncc @ankparikh @tsellam","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/455\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/454","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/454\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/454\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/454\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/454","id":668011577,"node_id":"MDExOlB1bGxSZXF1ZXN0NDU4NTc3MzA3","number":454,"title":"Create SECURITY.md","user":{"login":"ChenZehong13","id":56394989,"node_id":"MDQ6VXNlcjU2Mzk0OTg5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/56394989?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ChenZehong13","html_url":"https:\/\/github.com\/ChenZehong13","followers_url":"https:\/\/api.github.com\/users\/ChenZehong13\/followers","following_url":"https:\/\/api.github.com\/users\/ChenZehong13\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ChenZehong13\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ChenZehong13\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ChenZehong13\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ChenZehong13\/orgs","repos_url":"https:\/\/api.github.com\/users\/ChenZehong13\/repos","events_url":"https:\/\/api.github.com\/users\/ChenZehong13\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ChenZehong13\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1596043414000,"updated_at":1596059152000,"closed_at":1596059152000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/454","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/454","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/454.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/454.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/454\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/453","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/453\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/453\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/453\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/453","id":667728247,"node_id":"MDExOlB1bGxSZXF1ZXN0NDU4MzQwNzky","number":453,"title":"add builder 
tests","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1596018127000,"updated_at":1596021246000,"closed_at":1596021245000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/453","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/453","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/453.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/453.patch"},"body":"I added `as_dataset` and `download_and_prepare` to the tests","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/453\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/452","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/452\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/452\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/452\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/452","id":667498295,"node_id":"MDExOlB1bGxSZXF1ZXN0NDU4MTUzNjQy","number":452,"title":"Guardian authorship dataset","user":{"login":"malikaltakrori","id":25109412,"node_id":"MDQ6VXNlcjI1MTA5NDEy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25109412?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/malikaltakrori","html_url":"https:\/\/github.com\/malikaltakrori","followers_url":"https:\/\/api.github.com\/users\/malikaltakrori\/followers","following_url":"https:\/\/api.github.com\/users\/malikaltakrori\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/malikaltakrori\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/malikaltakrori\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/malikaltakrori\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/malikaltakrori\/orgs","repos_url":"https:\/\/api.github.com\/users\/malikaltakrori\/repos","events_url":"https:\/\/api.github.com\/users\/malikaltakrori\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/malikaltakrori\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! 
Glad you managed to fix the version issue.\r\n\r\nThe command `\r\npython nlp-cli dummy_data datasets\/guardian_authorship --save_infos --all_configs` is supposed to generate a json file `dataset_infos.json` next to your dataset script, but I can't see it in the PR.\r\nCan you make sure you have the json file on your side and that you have pushed it ?","Done!","Is there anything else that I should do? And would the new dataset be available via the NLP package now? ","Sorry I forgot to merge this one ! Doing it now","Thanks for the heads up ;)","No worries, this is my first contribution to an online package, and I feel very proud it's part of this library :) Thank you very much!"],"created_at":1595989437000,"updated_at":1597936197000,"closed_at":1597936076000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/452","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/452","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/452.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/452.patch"},"body":"A new dataset: Guardian news articles for authorship attribution\r\n\r\n**tests passed:**\r\npython nlp-cli dummy_data datasets\/guardian_authorship --save_infos --all_configs\r\n\r\nRUN_SLOW=1 pytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_guardian_authorship\r\n\r\n**Tests failed:**\r\nReal data: RUN_SLOW=1 pytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_guardian_authorship\r\noutput: __init__() missing 3 required positional arguments: 'train_folder', 'valid_folder', and 'tes...' \r\n\r\nRemarks: This is the init function of my class. I am not sure why it passes in both my tests and with nlp-cli, but fails here. By the way, I ran this command with another 2 datasets and they failed:\r\n* _glue - OSError: Cannot find data file.\r\n*_newsgroup - FileNotFoundError: Local file datasets\/newsgroup\/dummy\/18828_comp.graphics\/3.0.0\/dummy_data.zip doesn't exist\r\n\r\nThank you for letting us contribute to such a huge and important library! \r\n\r\nEDIT:\r\nI was able to fix the dummy_data issue. This dataset has around 14 configurations. I was testing with only 2, but their versions were not in a sequence, they were V1.0.0 and V.12.0.0. It seems that the testing code generates tests for all the versions from 0 to MAX, and was testing for versions (and dummy_data.zip files) that do not exist. 
I fixed that by changing the versions to 1 and 2.\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/452\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/451","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/451\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/451\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/451\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/451","id":667210468,"node_id":"MDExOlB1bGxSZXF1ZXN0NDU3OTIxNDMx","number":451,"title":"Fix csv\/json\/txt cache dir","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I think this is the way to go but I\u2019m afraid this might be a little slow. I was thinking that we could use a high quality very fast non crypto hash like xxhash for these stuff (hashing data files)","Yep good idea, I'll take a look","I tested the hashing speed [here](https:\/\/colab.research.google.com\/drive\/1hlhP84kLIHmOzMRQN1h8x10hKWpXXyud?usp=sharing).\r\nI was able to get 8x speed with `xxhashlib` (42ms vs 345ms for 100MiB of data).\r\nWhat do you think @thomwolf ?","I added xxhash and some tests"],"created_at":1595953851000,"updated_at":1596031043000,"closed_at":1596031042000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/451","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/451","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/451.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/451.patch"},"body":"The cache dir for csv\/json\/txt datasets was always the same. 
This is an issue because it should be different depending on the data files provided by the user.\r\n\r\nTo fix that, I added a line that use the hash of the data files provided by the user to define the cache dir.\r\n\r\nThis should fix #444 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/451\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/450","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/450\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/450\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/450\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/450","id":667074120,"node_id":"MDExOlB1bGxSZXF1ZXN0NDU3ODA5ODA2","number":450,"title":"add sogou_news","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1595942950000,"updated_at":1596029418000,"closed_at":1596029417000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/450","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/450","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/450.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/450.patch"},"body":"This PR adds the sogou news dataset\r\n#353 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/450\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/449","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/449\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/449\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/449\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/449","id":666898923,"node_id":"MDExOlB1bGxSZXF1ZXN0NDU3NjY0NjYx","number":449,"title":"add reuters21578 
dataset","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> Awesome !\r\n> Good job on parsing these files :O\r\n> \r\n> Do you think it would be hard to get the two other split configurations ?\r\n\r\nIt shouldn't be that hard, I think I can consider different config names for each split ","> > Awesome !\r\n> > Good job on parsing these files :O\r\n> > Do you think it would be hard to get the two other split configurations ?\r\n> \r\n> It shouldn't be that hard, I think I can consider different config names for each split\r\n\r\nYes that would be perfect","closing this PR and opening a new one to fix the circle CI problems"],"created_at":1595926692000,"updated_at":1596453031000,"closed_at":1596453031000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/449","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/449","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/449.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/449.patch"},"body":"This PR adds the `Reuters_21578` dataset https:\/\/kdd.ics.uci.edu\/databases\/reuters21578\/reuters21578.html \r\n#353 \r\n\r\nThe datasets is a lit of `.sgm` files which are a bit different from xml file indeed `xml.etree` couldn't be used to read files. 
I consider them as text file (to avoid using external library) and read line by line (maybe there is a better way to do, happy to get your opinion on it)\r\n\r\nIn the Readme file 3 ways to split the dataset are given.:\r\n\r\n- The Modified Lewis (\"ModLewis\") Split: train, test and unused-set\r\n\r\n- The Modified Apte (\"ModApte\") Split : train, test and unused-set\r\n\r\n- The Modified Hayes (\"ModHayes\") Split: train and test\r\n\r\nHere I consider the last one as the readme file highlight that this split provides the ability to compare results with those of the 2 first splits.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/449\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/448","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/448\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/448\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/448\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/448","id":666893443,"node_id":"MDExOlB1bGxSZXF1ZXN0NDU3NjYwMDU2","number":448,"title":"add aws load metric test","user":{"login":"idoh","id":5303103,"node_id":"MDQ6VXNlcjUzMDMxMDM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5303103?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/idoh","html_url":"https:\/\/github.com\/idoh","followers_url":"https:\/\/api.github.com\/users\/idoh\/followers","following_url":"https:\/\/api.github.com\/users\/idoh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/idoh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/idoh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/idoh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/idoh\/orgs","repos_url":"https:\/\/api.github.com\/users\/idoh\/repos","events_url":"https:\/\/api.github.com\/users\/idoh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/idoh\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Could you run `make style` to fix the code_quality fail ?\r\nYou'll need `black` and `isort` that you can install by doing `pip install -e .[quality]`","Thanks @lhoestq\r\nI fixed the styling","Thank you :)"],"created_at":1595926222000,"updated_at":1595948547000,"closed_at":1595948547000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/448","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/448","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/448.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/448.patch"},"body":"Following issue #445\r\nAdded a test to recognize import errors of all metrics","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/448\/timeline","performed_via_github_app":null,"is_pull_request":true} 
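A rough sketch of the "read the `.sgm` files as plain text, line by line" approach described in PR 449 above, without an XML parser. The `</REUTERS>` delimiter and the latin-1 encoding are assumptions about the raw Reuters-21578 files, not a copy of the actual dataset script:

```python
# Sketch of parsing Reuters-21578 .sgm files as plain text (assumed document delimiter
# and encoding; the real dataset script may differ).
def iter_sgm_documents(path):
    """Yield one raw <REUTERS>...</REUTERS> block at a time, without xml.etree."""
    buffer = []
    with open(path, encoding="latin-1", errors="ignore") as f:
        for line in f:
            buffer.append(line)
            if "</REUTERS>" in line:  # end of one document
                yield "".join(buffer)
                buffer = []
```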
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/447","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/447\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/447\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/447\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/447","id":666842115,"node_id":"MDExOlB1bGxSZXF1ZXN0NDU3NjE2NDA0","number":447,"title":"[BugFix] fix wrong import of DEFAULT_TOKENIZER","user":{"login":"idoh","id":5303103,"node_id":"MDQ6VXNlcjUzMDMxMDM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5303103?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/idoh","html_url":"https:\/\/github.com\/idoh","followers_url":"https:\/\/api.github.com\/users\/idoh\/followers","following_url":"https:\/\/api.github.com\/users\/idoh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/idoh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/idoh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/idoh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/idoh\/orgs","repos_url":"https:\/\/api.github.com\/users\/idoh\/repos","events_url":"https:\/\/api.github.com\/users\/idoh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/idoh\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1595922070000,"updated_at":1595941081000,"closed_at":1595940725000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/447","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/447","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/447.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/447.patch"},"body":"Fixed the path to `DEFAULT_TOKENIZER`\r\n#445","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/447\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/446","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/446\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/446\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/446\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/446","id":666837351,"node_id":"MDExOlB1bGxSZXF1ZXN0NDU3NjEyNTg5","number":446,"title":"[BugFix] fix wrong import of 
DEFAULT_TOKENIZER","user":{"login":"idoh","id":5303103,"node_id":"MDQ6VXNlcjUzMDMxMDM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5303103?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/idoh","html_url":"https:\/\/github.com\/idoh","followers_url":"https:\/\/api.github.com\/users\/idoh\/followers","following_url":"https:\/\/api.github.com\/users\/idoh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/idoh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/idoh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/idoh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/idoh\/orgs","repos_url":"https:\/\/api.github.com\/users\/idoh\/repos","events_url":"https:\/\/api.github.com\/users\/idoh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/idoh\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1595921567000,"updated_at":1595921686000,"closed_at":1595921639000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/446","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/446","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/446.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/446.patch"},"body":"Fixed the path to `DEFAULT_TOKENIZER`\r\n#445 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/446\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/445","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/445\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/445\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/445\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/445","id":666836658,"node_id":"MDU6SXNzdWU2NjY4MzY2NTg=","number":445,"title":"DEFAULT_TOKENIZER import error in sacrebleu","user":{"login":"idoh","id":5303103,"node_id":"MDQ6VXNlcjUzMDMxMDM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5303103?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/idoh","html_url":"https:\/\/github.com\/idoh","followers_url":"https:\/\/api.github.com\/users\/idoh\/followers","following_url":"https:\/\/api.github.com\/users\/idoh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/idoh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/idoh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/idoh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/idoh\/orgs","repos_url":"https:\/\/api.github.com\/users\/idoh\/repos","events_url":"https:\/\/api.github.com\/users\/idoh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/idoh\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This issue was resolved by #447 
"],"created_at":1595921490000,"updated_at":1595941136000,"closed_at":1595941136000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Latest Version 0.3.0\r\n\r\nWhen loading the metric \"sacrebleu\" there is an import error due to the wrong path\r\n![image](https:\/\/user-images.githubusercontent.com\/5303103\/88633063-2c5e5f00-d0bd-11ea-8ca8-4704dc975433.png)\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/445\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/444","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/444\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/444\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/444\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/444","id":666280842,"node_id":"MDU6SXNzdWU2NjYyODA4NDI=","number":444,"title":"Keep loading old file even I specify a new file in load_dataset","user":{"login":"joshhu","id":10594453,"node_id":"MDQ6VXNlcjEwNTk0NDUz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10594453?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/joshhu","html_url":"https:\/\/github.com\/joshhu","followers_url":"https:\/\/api.github.com\/users\/joshhu\/followers","following_url":"https:\/\/api.github.com\/users\/joshhu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/joshhu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/joshhu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/joshhu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/joshhu\/orgs","repos_url":"https:\/\/api.github.com\/users\/joshhu\/repos","events_url":"https:\/\/api.github.com\/users\/joshhu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/joshhu\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the 
library"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Same here !","This is the only fix I could come up with without touching the repo's code.\r\n```python\r\nfrom nlp.builder import FORCE_REDOWNLOAD\r\ndataset = load_dataset('csv', data_file='.\/a.csv', download_mode=FORCE_REDOWNLOAD, version='0.0.1')\r\n```\r\nYou'll have to change the version each time you want to load a different csv file.\r\nIf you're willing to add a ```print```, you can go to ```nlp.load``` and add ```print(builder_instance.cache_dir)``` right before the ```return ds``` in the ```load_dataset``` method. It'll print the cache folder, and you'll just have to erase it (and then you won't need the change here above)."],"created_at":1595855286000,"updated_at":1596031042000,"closed_at":1596031042000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I used load a file called 'a.csv' by \r\n```\r\ndataset = load_dataset('csv', data_file='.\/a.csv')\r\n```\r\nAnd after a while, I tried to load another csv called 'b.csv'\r\n```\r\ndataset = load_dataset('csv', data_file='.\/b.csv')\r\n```\r\nHowever, the new dataset seems to remain the old 'a.csv' and not loading new csv file.\r\n\r\nEven worse, after I load a.csv, the load_dataset function keeps loading the 'a.csv' afterward. 
\r\n\r\nIs this a cache problem?\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/444\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/443","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/443\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/443\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/443\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/443","id":666246716,"node_id":"MDU6SXNzdWU2NjYyNDY3MTY=","number":443,"title":"Cannot unpickle saved .pt dataset with torch.save()\/load()","user":{"login":"vegarab","id":24683907,"node_id":"MDQ6VXNlcjI0NjgzOTA3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24683907?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vegarab","html_url":"https:\/\/github.com\/vegarab","followers_url":"https:\/\/api.github.com\/users\/vegarab\/followers","following_url":"https:\/\/api.github.com\/users\/vegarab\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vegarab\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vegarab\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vegarab\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vegarab\/orgs","repos_url":"https:\/\/api.github.com\/users\/vegarab\/repos","events_url":"https:\/\/api.github.com\/users\/vegarab\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vegarab\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This seems to be fixed in a non-released version. \r\n\r\nInstalling nlp from source\r\n```\r\ngit clone https:\/\/github.com\/huggingface\/nlp\r\ncd nlp\r\npip install .\r\n```\r\nsolves the issue. "],"created_at":1595852017000,"updated_at":1595855111000,"closed_at":1595855111000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Saving a formatted torch dataset to file using `torch.save()`. Loading the same file fails during unpickling:\r\n\r\n```python\r\n>>> import torch\r\n>>> import nlp\r\n\r\n>>> squad = nlp.load_dataset(\"squad.py\", split=\"train\")\r\n>>> squad\r\nDataset(features: {'source_text': Value(dtype='string', id=None), 'target_text': Value(dtype='string', id=None)}, num_rows: 87599)\r\n>>> squad = squad.map(create_features, batched=True)\r\n>>> squad.set_format(type=\"torch\", columns=[\"source_ids\", \"target_ids\", \"attention_mask\"])\r\n>>> torch.save(squad, \"squad.pt\")\r\n\r\n>>> squad_pt = torch.load(\"squad.pt\")\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/home\/vegarab\/.conda\/envs\/torch\/lib\/python3.7\/site-packages\/torch\/serialization.py\", line 593, in load\r\n return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)\r\n File \"\/home\/vegarab\/.conda\/envs\/torch\/lib\/python3.7\/site-packages\/torch\/serialization.py\", line 773, in _legacy_load\r\n result = unpickler.load()\r\n File \"\/home\/vegarab\/.conda\/envs\/torch\/lib\/python3.7\/site-packages\/nlp\/splits.py\", line 493, in __setitem__\r\n raise ValueError(\"Cannot add elem. 
Use .add() instead.\")\r\nValueError: Cannot add elem. Use .add() instead.\r\n```\r\nwhere `create_features` is a function that tokenizes the data using `batch_encode_plus` and returns a Dict with `input_ids`, `target_ids` and `attention_mask`. \r\n```python\r\ndef create_features(batch):\r\n source_text_encoding = tokenizer.batch_encode_plus(\r\n batch[\"source_text\"],\r\n max_length=max_source_length,\r\n pad_to_max_length=True,\r\n truncation=True)\r\n\r\n target_text_encoding = tokenizer.batch_encode_plus(\r\n batch[\"target_text\"],\r\n max_length=max_target_length,\r\n pad_to_max_length=True,\r\n truncation=True)\r\n\r\n features = {\r\n \"source_ids\": source_text_encoding[\"input_ids\"],\r\n \"target_ids\": target_text_encoding[\"input_ids\"],\r\n \"attention_mask\": source_text_encoding[\"attention_mask\"]\r\n }\r\n\r\n return features\r\n```\r\n\r\nI found a similar issue in [issue 5267 in the huggingface\/transformers repo](https:\/\/github.com\/huggingface\/transformers\/issues\/5267) which was solved by downgrading to `nlp==0.2.0`. That did not solve this problem, however. ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/443\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/442","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/442\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/442\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/442\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/442","id":666201810,"node_id":"MDU6SXNzdWU2NjYyMDE4MTA=","number":442,"title":"[Suggestion] Glue Diagnostic Data with Labels ","user":{"login":"ggbetz","id":3662782,"node_id":"MDQ6VXNlcjM2NjI3ODI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3662782?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ggbetz","html_url":"https:\/\/github.com\/ggbetz","followers_url":"https:\/\/api.github.com\/users\/ggbetz\/followers","following_url":"https:\/\/api.github.com\/users\/ggbetz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ggbetz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ggbetz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ggbetz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ggbetz\/orgs","repos_url":"https:\/\/api.github.com\/users\/ggbetz\/repos","events_url":"https:\/\/api.github.com\/users\/ggbetz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ggbetz\/received_events","type":"User","site_admin":false},"labels":[{"id":2067401494,"node_id":"MDU6TGFiZWwyMDY3NDAxNDk0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/Dataset%20discussion","name":"Dataset discussion","color":"72f99f","default":false,"description":"Discussions on the datasets"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1595847598000,"updated_at":1598282000000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hello! 
First of all, thanks for setting up this useful project!\r\n\r\nI've just realised you provide the the [Glue Diagnostics Data](https:\/\/huggingface.co\/nlp\/viewer\/?dataset=glue&config=ax) without labels, indicating in the `GlueConfig` that you've only a test set.\r\n\r\nYet, the data with labels is available, too (see also [here](https:\/\/gluebenchmark.com\/diagnostics#introduction)):\r\n\r\nhttps:\/\/www.dropbox.com\/s\/ju7d95ifb072q9f\/diagnostic-full.tsv?dl=1 \r\n\r\nHave you considered incorporating it?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/442\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/441","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/441\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/441\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/441\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/441","id":666148413,"node_id":"MDExOlB1bGxSZXF1ZXN0NDU3MDQyMjY3","number":441,"title":"Add features parameter in load dataset","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This one is ready for review now","I changed to using features only, instead of info.\r\nLet mw know if it sounds good to you now @thomwolf "],"created_at":1595843401000,"updated_at":1596113477000,"closed_at":1596113476000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/441","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/441","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/441.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/441.patch"},"body":"Added `features` argument in `nlp.load_dataset`.\r\nIf they don't match the data type, it raises a `ValueError`.\r\n\r\nIt's a draft PR because #440 needs to be merged first.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/441\/timeline","performed_via_github_app":null,"is_pull_request":true} 
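The record above (issue #441) describes a `features` argument for `nlp.load_dataset` that raises a `ValueError` when the declared features do not match the data types. A minimal usage sketch follows; the column names, label names, and the `data_files` keyword spelling are assumptions for illustration only (the csv thread earlier in this dump uses `data_file`), not details taken from the PR itself.

```python
import nlp

# Hypothetical schema for a two-column CSV; the column and label names are
# placeholders chosen purely for illustration.
features = nlp.Features({
    "text": nlp.Value("string"),
    "label": nlp.ClassLabel(names=["neg", "pos"]),
})

# Per the PR description, passing `features` pins the expected types, and a
# mismatch with the data raises a ValueError instead of silently inferring.
dataset = nlp.load_dataset("csv", data_files="./a.csv", features=features)
```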
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/440","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/440\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/440\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/440\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/440","id":666116823,"node_id":"MDExOlB1bGxSZXF1ZXN0NDU3MDE2MjQy","number":440,"title":"Fix user specified features in map","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1595840666000,"updated_at":1595928323000,"closed_at":1595928322000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/440","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/440","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/440.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/440.patch"},"body":"`.map` didn't keep the user specified features because of an issue in the writer.\r\nThe writer used to overwrite the user specified features with inferred features.\r\n\r\nI also added tests to make sure it doesn't happen again.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/440\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/439","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/439\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/439\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/439\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/439","id":665964673,"node_id":"MDU6SXNzdWU2NjU5NjQ2NzM=","number":439,"title":"Issues: Adding a FAISS or Elastic Search index to a 
Dataset","user":{"login":"nsankar","id":431890,"node_id":"MDQ6VXNlcjQzMTg5MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/431890?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nsankar","html_url":"https:\/\/github.com\/nsankar","followers_url":"https:\/\/api.github.com\/users\/nsankar\/followers","following_url":"https:\/\/api.github.com\/users\/nsankar\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nsankar\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nsankar\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nsankar\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nsankar\/orgs","repos_url":"https:\/\/api.github.com\/users\/nsankar\/repos","events_url":"https:\/\/api.github.com\/users\/nsankar\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nsankar\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["`DPRContextEncoder` and `DPRContextEncoderTokenizer` will be available in the next release of `transformers`.\r\n\r\nRight now you can experiment with it by installing `transformers` from the master branch.\r\nYou can also check the docs of DPR [here](https:\/\/huggingface.co\/transformers\/master\/model_doc\/dpr.html).\r\n\r\nMoreover all the indexing features will also be available in the next release of `nlp`.","@lhoestq Thanks for the info ","@lhoestq I tried installing transformer from the master branch. Python imports for DPR again didnt' work. Anyways, Looking forward to trying it in the next release of nlp ","@nsankar have you tried with the latest version of the library?","@yjernite it worked. Thanks"],"created_at":1595823917000,"updated_at":1603849584000,"closed_at":1603849584000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"It seems the DPRContextEncoder, DPRContextEncoderTokenizer cited[ in this documentation](https:\/\/huggingface.co\/nlp\/faiss_and_ea.html) is not implemented ? It didnot work with the standard nlp installation . Also, I couldn't find or use it with the latest nlp install from github in Colab. Is there any dependency on the latest PyArrow 1.0.0 ? 
Is it yet to be made generally available ?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/439\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/438","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/438\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/438\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/438\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/438","id":665865490,"node_id":"MDU6SXNzdWU2NjU4NjU0OTA=","number":438,"title":"New Datasets: IWSLT15+, ITTB","user":{"login":"sshleifer","id":6045025,"node_id":"MDQ6VXNlcjYwNDUwMjU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6045025?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sshleifer","html_url":"https:\/\/github.com\/sshleifer","followers_url":"https:\/\/api.github.com\/users\/sshleifer\/followers","following_url":"https:\/\/api.github.com\/users\/sshleifer\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sshleifer\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sshleifer\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sshleifer\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sshleifer\/orgs","repos_url":"https:\/\/api.github.com\/users\/sshleifer\/repos","events_url":"https:\/\/api.github.com\/users\/sshleifer\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sshleifer\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks Sam, we now have a very detailed tutorial and template on how to add a new dataset to the library. It typically take 1-2 hours to add one. Do you want to give it a try ?\r\nThe tutorial on writing a new dataset loading script is here: https:\/\/huggingface.co\/nlp\/add_dataset.html\r\nAnd the part on how to share a new dataset is here: https:\/\/huggingface.co\/nlp\/share_dataset.html","Hi @sshleifer, I'm trying to add IWSLT using the link you provided but the download urls are not working. Only `[en, de]` pair is working. 
For others language pairs it throws a `404` error.\r\n\r\n"],"created_at":1595799784000,"updated_at":1598281935000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"**Links:**\r\n[iwslt](https:\/\/pytorchnlp.readthedocs.io\/en\/latest\/_modules\/torchnlp\/datasets\/iwslt.html)\r\nDon't know if that link is up to date.\r\n\r\n[ittb](http:\/\/www.cfilt.iitb.ac.in\/iitb_parallel\/)\r\n**Motivation**: replicate mbart finetuning results (table below)\r\n![image](https:\/\/user-images.githubusercontent.com\/6045025\/88490093-0c1c8c00-cf67-11ea-960d-8dcaad2aa8eb.png)\r\n\r\n\r\nFor future readers, we already have the following language pairs in the wmt namespaces:\r\n\r\n```\r\nwmt14: ['cs-en', 'de-en', 'fr-en', 'hi-en', 'ru-en']\r\nwmt15: ['cs-en', 'de-en', 'fi-en', 'fr-en', 'ru-en']\r\nwmt16: ['cs-en', 'de-en', 'fi-en', 'ro-en', 'ru-en', 'tr-en']\r\nwmt17: ['cs-en', 'de-en', 'fi-en', 'lv-en', 'ru-en', 'tr-en', 'zh-en']\r\nwmt18: ['cs-en', 'de-en', 'et-en', 'fi-en', 'kk-en', 'ru-en', 'tr-en', 'zh-en']\r\nwmt19: ['cs-en', 'de-en', 'fi-en', 'gu-en', 'kk-en', 'lt-en', 'ru-en', 'zh-en', 'fr-de']\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/438\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/437","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/437\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/437\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/437\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/437","id":665597176,"node_id":"MDExOlB1bGxSZXF1ZXN0NDU2NjIzNjc3","number":437,"title":"Fix XTREME PAN-X loading","user":{"login":"lvwerra","id":8264887,"node_id":"MDQ6VXNlcjgyNjQ4ODc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8264887?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lvwerra","html_url":"https:\/\/github.com\/lvwerra","followers_url":"https:\/\/api.github.com\/users\/lvwerra\/followers","following_url":"https:\/\/api.github.com\/users\/lvwerra\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lvwerra\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lvwerra\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lvwerra\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lvwerra\/orgs","repos_url":"https:\/\/api.github.com\/users\/lvwerra\/repos","events_url":"https:\/\/api.github.com\/users\/lvwerra\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lvwerra\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["There is an interesting design question here (cc @lhoestq).\r\n\r\nI guess the labels form a closed set so we could also use a [nlp.ClassLabel](https:\/\/huggingface.co\/nlp\/package_reference\/main_classes.html#nlp.ClassLabel) instead of a string. 
The differences will be mainly that:\r\n- the labels are stored as integers and thus ready for training a model\r\n- the string to int conversion methods are handled by the `nlp.ClassLabel` feature (see the [doc](https:\/\/huggingface.co\/nlp\/package_reference\/main_classes.html#nlp.ClassLabel) and [here](https:\/\/huggingface.co\/nlp\/features.html) and [here](https:\/\/huggingface.co\/nlp\/quicktour.html#fine-tuning-a-deep-learning-model)).\r\n\r\nIn my opinion, storing the labels as integers instead of strings makes it:\r\n- slightly less readable when accessing a dataset example (e.g. with `dataset[0]`)\r\n- force you with a specific mapping from string to integers\r\n- more clear that there is a fixed and predefined list of labels\r\n- easier to list all the labels (directly visible in the features).\r\n\r\n=> overall I'm pretty neutral about using one or the other option (`nlp.string` or `nlp.ClassLabel`).\r\n\r\nNote that we can now rather easily convert from one to the other with the map function and something like:\r\n```python\r\ndataset = dataset.map(lambda x: x, features=nlp.Features({'labels': nlp.ClassLabel(MY_LABELS_NAMES)}))\r\ndataset = dataset.map(lambda x: {'labels': dataset.features['labels'].int2str(x['labels'])}, features=nlp.Features({'labels': nlp.Value('string')}))\r\n```\r\n^^ this could probably be made even simpler (in particular for the second case)","I see. This is an interesting question.\r\nMaybe as the dataset doesn't provide the mapping we shouldn't force an arbitrary one, and keep them as strings ?\r\nMoreover for NER the labels are often different from a dataset to the other so it's probably good to keep strings (there is no conventional mapping).\r\nAlso as the column is called \"ner_tags\" (or \"langs\"), you can already assume that there is a fixed and predefined list of labels.","Yes sounds good to me.\r\nThis make me wonder if we don\u2019t want to have a default identity function in `map` so this method could also be used to easily cast features. What do you think?","Yes sounds good. I also noticed that people use map with identity to write a dataset into a specified cache file."],"created_at":1595688297000,"updated_at":1596097695000,"closed_at":1596097695000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/437","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/437","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/437.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/437.patch"},"body":"Hi \ud83e\udd17 \r\nIn response to the discussion in #425 @lewtun and I made some fixes to the repo. In the original XTREME implementation the PAN-X dataset for named entity recognition loaded each word\/tag pair as a single row and the sentence relation was lost. With the fix each row contains the list of all words in a single sentence and their NER tags. 
This is also in agreement with the [NER example](https:\/\/github.com\/huggingface\/transformers\/tree\/master\/examples\/token-classification) in the transformers repo.\r\n\r\nWith the fix the output of the dataset should look as follows:\r\n```python\r\n>>> dataset = load_dataset(\"xtreme\", \"PAN-X.en\", data_dir='.\/data')\r\n>>> dataset['train'][0]\r\n{'words': ['R.H.', 'Saunders', '(', 'St.', 'Lawrence', 'River', ')', '(', '968', 'MW', ')'],\r\n 'ner_tags': ['B-ORG', 'I-ORG', 'O', 'B-ORG', 'I-ORG', 'I-ORG', 'O', 'O', 'O', 'O', 'O'],\r\n 'langs': ['en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en', 'en']}\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/437\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/436","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/436\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/436\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/436\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/436","id":665582167,"node_id":"MDU6SXNzdWU2NjU1ODIxNjc=","number":436,"title":"Google Colab - load_dataset - PyArrow exception","user":{"login":"nsankar","id":431890,"node_id":"MDQ6VXNlcjQzMTg5MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/431890?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nsankar","html_url":"https:\/\/github.com\/nsankar","followers_url":"https:\/\/api.github.com\/users\/nsankar\/followers","following_url":"https:\/\/api.github.com\/users\/nsankar\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nsankar\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nsankar\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nsankar\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nsankar\/orgs","repos_url":"https:\/\/api.github.com\/users\/nsankar\/repos","events_url":"https:\/\/api.github.com\/users\/nsankar\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nsankar\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Indeed, we\u2019ll make a new PyPi release next week to solve this. Cc @lhoestq ","+1! this is the reason our tests are failing at [TextAttack](https:\/\/github.com\/QData\/TextAttack) \r\n\r\n(Though it's worth noting if we fixed the version number of pyarrow to 0.16.0 that would fix our problem too. But in this case we'll just wait for you all to update)","Came to raise this issue, great to see other already have and it's being fixed so soon!\r\n\r\nAs an aside, since no one wrote this already, it seems like the version check only looks at the second part of the version number making sure it is >16, but pyarrow newest version is 1.0.0 so the second past is 0!","> Indeed, we\u2019ll make a new PyPi release next week to solve this. Cc @lhoestq\r\n\r\nYes definitely","please fix this on pypi! @lhoestq ","Is this issue fixed ?","We\u2019ll release the new version later today. 
Apologies for the delay.","I just pushed the new version on pypi :)","Thanks for the update."],"created_at":1595682320000,"updated_at":1597910898000,"closed_at":1597910898000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"With latest PyArrow 1.0.0 installed, I get the following exception . Restarting colab has the same issue\r\n\r\nImportWarning: To use `nlp`, the module `pyarrow>=0.16.0` is required, and the current version of `pyarrow` doesn't match this condition. If you are running this in a Google Colab, you should probably just restart the runtime to use the right version of `pyarrow`.\r\n\r\nThe error goes only when I install version 0.16.0 \r\ni.e. !pip install pyarrow==0.16.0","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/436\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/435","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/435\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/435\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/435\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/435","id":665507141,"node_id":"MDU6SXNzdWU2NjU1MDcxNDE=","number":435,"title":"ImportWarning for pyarrow 1.0.0","user":{"login":"HanGuo97","id":18187806,"node_id":"MDQ6VXNlcjE4MTg3ODA2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/18187806?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/HanGuo97","html_url":"https:\/\/github.com\/HanGuo97","followers_url":"https:\/\/api.github.com\/users\/HanGuo97\/followers","following_url":"https:\/\/api.github.com\/users\/HanGuo97\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/HanGuo97\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/HanGuo97\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/HanGuo97\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/HanGuo97\/orgs","repos_url":"https:\/\/api.github.com\/users\/HanGuo97\/repos","events_url":"https:\/\/api.github.com\/users\/HanGuo97\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/HanGuo97\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This was fixed in #434 \r\nWe'll do a release later this week to include this fix.\r\nThanks for reporting","I dont know if the fix was made but the problem is still present : \r\nInstaled with pip : NLP 0.3.0 \/\/ pyarrow 1.0.0 \r\nOS : archlinux with kernel zen 5.8.5","Yes it was fixed in `nlp>=0.4.0`\r\nYou can update with pip","Sorry, I didn't got the updated version, all is now working perfectly thanks"],"created_at":1595648679000,"updated_at":1599587835000,"closed_at":1596472652000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"The following PR raised ImportWarning at `pyarrow ==1.0.0` https:\/\/github.com\/huggingface\/nlp\/pull\/265\/files","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/435\/timeline","performed_via_github_app":null,"is_pull_request":false} 
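Issues #436 and #435 and PR #434 above concern a pyarrow version gate that effectively looked only at the second component of the version string, so `1.0.0` failed a `>= 0.16.0` requirement. Below is a sketch of a more robust comparison; it assumes nothing about the library's actual implementation beyond the warning text quoted in #436, and uses the third-party `packaging` module for version parsing.

```python
from packaging import version

import pyarrow

# Compare full parsed versions instead of a single numeric component, so
# that pyarrow 1.0.0 correctly satisfies a ">= 0.16.0" requirement.
if version.parse(pyarrow.__version__) < version.parse("0.16.0"):
    raise ImportWarning(
        "To use `nlp`, the module `pyarrow>=0.16.0` is required, and the "
        "current version of `pyarrow` doesn't match this condition."
    )
```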
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/434","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/434\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/434\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/434\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/434","id":665477638,"node_id":"MDExOlB1bGxSZXF1ZXN0NDU2NTM3Njgz","number":434,"title":"Fixed check for pyarrow","user":{"login":"nadahlberg","id":58701810,"node_id":"MDQ6VXNlcjU4NzAxODEw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/58701810?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/nadahlberg","html_url":"https:\/\/github.com\/nadahlberg","followers_url":"https:\/\/api.github.com\/users\/nadahlberg\/followers","following_url":"https:\/\/api.github.com\/users\/nadahlberg\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/nadahlberg\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/nadahlberg\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/nadahlberg\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/nadahlberg\/orgs","repos_url":"https:\/\/api.github.com\/users\/nadahlberg\/repos","events_url":"https:\/\/api.github.com\/users\/nadahlberg\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/nadahlberg\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Great, thanks!"],"created_at":1595636213000,"updated_at":1595658994000,"closed_at":1595658994000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/434","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/434","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/434.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/434.patch"},"body":"Fix check for pyarrow in __init__.py. 
Previously would raise an error for pyarrow >= 1.0.0","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/434\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/433","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/433\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/433\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/433\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/433","id":665311025,"node_id":"MDU6SXNzdWU2NjUzMTEwMjU=","number":433,"title":"How to reuse functionality of a (generic) dataset?","user":{"login":"ArneBinder","id":3375489,"node_id":"MDQ6VXNlcjMzNzU0ODk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3375489?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ArneBinder","html_url":"https:\/\/github.com\/ArneBinder","followers_url":"https:\/\/api.github.com\/users\/ArneBinder\/followers","following_url":"https:\/\/api.github.com\/users\/ArneBinder\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ArneBinder\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ArneBinder\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ArneBinder\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ArneBinder\/orgs","repos_url":"https:\/\/api.github.com\/users\/ArneBinder\/repos","events_url":"https:\/\/api.github.com\/users\/ArneBinder\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ArneBinder\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @ArneBinder, we have a few \"generic\" datasets which are intended to load data files with a predefined format:\r\n- csv: https:\/\/github.com\/huggingface\/nlp\/tree\/master\/datasets\/csv\r\n- json: https:\/\/github.com\/huggingface\/nlp\/tree\/master\/datasets\/json\r\n- text: https:\/\/github.com\/huggingface\/nlp\/tree\/master\/datasets\/text\r\n\r\nYou can find more details about this way to load datasets here in the documentation: https:\/\/huggingface.co\/nlp\/loading_datasets.html#from-local-files\r\n\r\nMaybe your brat loading script could be shared in a similar fashion?","> Maybe your brat loading script could be shared in a similar fashion?\r\n\r\n@thomwolf that was also my first idea and I think I will tackle that in the next days. I separated the code and created a real abstract class `AbstractBrat` to allow to inherit from that (I've just seen that the dataset_loader loads the first non abstract class), now `Brat` is very similar in its functionality to https:\/\/github.com\/huggingface\/nlp\/tree\/master\/datasets\/text but inherits from `AbstractBrat`.\r\n\r\nHowever, it is still not clear to me how to add a specific dataset (as explained in https:\/\/huggingface.co\/nlp\/add_dataset.html) to your repo that uses this format\/abstract class, i.e. re-using the `features` entry of the `DatasetInfo` object and `_generate_examples()`. 
Again, by doing so, the only remaining entries\/functions to define would be `_DESCRIPTION`, `_CITATION`, `homepage` and `_URL` (which is all copy-paste stuff) and `_split_generators()`.\r\n \r\nIn a lack of better ideas, I tried sth like below, but of course it does not work outside `nlp` (`AbstractBrat` is currently defined in [datasets\/brat.py](https:\/\/github.com\/ArneBinder\/nlp\/blob\/5e81fb8710546ee7be3353a7f02a3045e9a8351e\/datasets\/brat\/brat.py)):\r\n```python\r\nfrom __future__ import absolute_import, division, print_function\r\n\r\nimport os\r\n\r\nimport nlp\r\n\r\nfrom datasets.brat.brat import AbstractBrat\r\n\r\n_CITATION = \"\"\"\r\n@inproceedings{lauscher2018b,\r\n title = {An argument-annotated corpus of scientific publications},\r\n booktitle = {Proceedings of the 5th Workshop on Mining Argumentation},\r\n publisher = {Association for Computational Linguistics},\r\n author = {Lauscher, Anne and Glava\\v{s}, Goran and Ponzetto, Simone Paolo},\r\n address = {Brussels, Belgium},\r\n year = {2018},\r\n pages = {40\u201346}\r\n}\r\n\"\"\"\r\n\r\n_DESCRIPTION = \"\"\"\\\r\nThis dataset is an extension of the Dr. Inventor corpus (Fisas et al., 2015, 2016) with an annotation layer containing \r\nfine-grained argumentative components and relations. It is the first argument-annotated corpus of scientific \r\npublications (in English), which allows for joint analyses of argumentation and other rhetorical dimensions of \r\nscientific writing.\r\n\"\"\"\r\n\r\n_URL = \"http:\/\/data.dws.informatik.uni-mannheim.de\/sci-arg\/compiled_corpus.zip\"\r\n\r\n\r\nclass Sciarg(AbstractBrat):\r\n\r\n VERSION = nlp.Version(\"1.0.0\")\r\n\r\n def _info(self):\r\n\r\n brat_features = super()._info().features\r\n return nlp.DatasetInfo(\r\n # This is the description that will appear on the datasets page.\r\n description=_DESCRIPTION,\r\n # nlp.features.FeatureConnectors\r\n features=brat_features,\r\n # If there's a common (input, target) tuple from the features,\r\n # specify them here. They'll be used if as_supervised=True in\r\n # builder.as_dataset.\r\n #supervised_keys=None,\r\n # Homepage of the dataset for documentation\r\n homepage=\"https:\/\/github.com\/anlausch\/ArguminSci\",\r\n citation=_CITATION,\r\n )\r\n\r\n def _split_generators(self, dl_manager):\r\n \"\"\"Returns SplitGenerators.\"\"\"\r\n # TODO: Downloads the data and defines the splits\r\n # dl_manager is a nlp.download.DownloadManager that can be used to\r\n # download and extract URLs\r\n dl_dir = dl_manager.download_and_extract(_URL)\r\n data_dir = os.path.join(dl_dir, \"compiled_corpus\")\r\n print(f'data_dir: {data_dir}')\r\n return [\r\n nlp.SplitGenerator(\r\n name=nlp.Split.TRAIN,\r\n # These kwargs will be passed to _generate_examples\r\n gen_kwargs={\r\n \"directory\": data_dir,\r\n },\r\n ),\r\n ]\r\n``` \r\n\r\nNevertheless, many thanks for tackling the dataset accessibility problem with this great library!","As temporary fix I've created [ArneBinder\/nlp-formats](https:\/\/github.com\/ArneBinder\/nlp-formats) (contributions welcome)."],"created_at":1595611657000,"updated_at":1596190997000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I have written a generic dataset for corpora created with the Brat annotation tool ([specification](https:\/\/brat.nlplab.org\/standoff.html), [dataset code](https:\/\/github.com\/ArneBinder\/nlp\/blob\/brat\/datasets\/brat\/brat.py)). Now I wonder how to use that to create specific dataset instances. 
What's the recommended way to reuse formats and loading functionality for datasets with a common format?\r\n\r\nIn my case, it took a bit of time to create the Brat dataset and I think others would appreciate to not have to think about that again. Also, I assume there are other formats (e.g. conll) that are widely used, so having this would really ease dataset onboarding and adoption of the library.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/433\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/432","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/432\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/432\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/432\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/432","id":665234340,"node_id":"MDExOlB1bGxSZXF1ZXN0NDU2MzQxNDk3","number":432,"title":"Fix handling of config files while loading datasets from multiple processes","user":{"login":"orsharir","id":99543,"node_id":"MDQ6VXNlcjk5NTQz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/99543?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/orsharir","html_url":"https:\/\/github.com\/orsharir","followers_url":"https:\/\/api.github.com\/users\/orsharir\/followers","following_url":"https:\/\/api.github.com\/users\/orsharir\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/orsharir\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/orsharir\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/orsharir\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/orsharir\/orgs","repos_url":"https:\/\/api.github.com\/users\/orsharir\/repos","events_url":"https:\/\/api.github.com\/users\/orsharir\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/orsharir\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Ok for this but I think we may want to use the general `filelock` method we are using at other places in the library instead of filecmp (in particular `filelock` take care of being an atomic operation which is safer for concurrent processes)","Ok I see.\r\nWhy not use filelock in this case then ?","I think we should \ud83d\ude42","Thanks for approving my patch.\n\nI agree that if copying is needed then some locking mechanism should be put in place. But, I don't think a file should be needlessly copied without a check. So I guess the flow should be, lock => copy if needed => unlock, and add locks wherever else that file is being accessed.\n\nI'll also add that my personal experience with filelock on a different project hasn't been that great, and on some occasions a process somehow got through the lock -- I've never gotten to the bottom of that but it tainted my view of that module. 
Perhaps it's been fixed (or I just miss used it), but thought you should know to take steps to test it."],"created_at":1595603457000,"updated_at":1596301902000,"closed_at":1596097528000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/432","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/432","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/432.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/432.patch"},"body":"When loading shards on several processes, each process upon loading the dataset will overwrite dataset_infos.json in <package path>\/datasets\/<dataset name>\/<hash>\/dataset_infos.json. It does so every time, even when the target file already exists and is identical. Because multiple processes rewrite the same file in parallel, it creates a race condition when a process tries to load the file, often resulting in a JSON decoding exception because the file is only partially written.\r\n\r\nThis pull requests partially address this by comparing if the files are already identical before copying over the downloaded copy to the cached destination. There's still a race condition, but now it's less likely to occur if some basic precautions are taken by the library user, e.g., download all datasets to cache before spawning multiple processes.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/432\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/431","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/431\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/431\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/431\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/431","id":665044416,"node_id":"MDExOlB1bGxSZXF1ZXN0NDU2MTgyNDE2","number":431,"title":"Specify split post processing + Add post processing resources downloading","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I was using a hack in `wiki_dpr` to download the index from GCS even for the configurations without the embeddings.\r\nHowever as GCS is something internal, I changed the logic to add a download step for indexes directly in the dataset script, using the 
`DownloadManager`.\r\n\r\nThis change was directly linked to the changes I did to take into account the split name in the post processing, so I included this change in this PR too.\r\n\r\nTo summarize:\r\n\r\nDataset builders can now implement\r\n- `_post_processing_resources(split)`: return a dict `resource_name -> resource_file_name`. It defines the additional resources such as indexes or arrow files that you need in post processing\r\n- `_download_post_processing_resources(split, resource_name, dl_manager))`: if some resources can be downloaded, you can use the download_manager to download them\r\n- `_post_process(dataset, resources_path)`: (main function for post processing) given a dataset, you can apply dataset transforms or add indexes. For resources that have been downloaded, you can load them. For the others, you can generate and save them. The paths to load\/save resources are in `resources_path` which is a dictionary `resource_name -> resource_path`\r\n\r\nAbout the CI:\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests\/test_dataset_common.py::AWSDatasetTest::test_load_dataset_wiki_dpr\r\n```\r\nIt fails because I changed the input of post processing functions (to include the split name)","I started to add metadata in the DatasetInfo.\r\nNote that because there are new fields, **ALL the dataset_info[s].json generated after these changes won't be loadable from older versions of the lib**\r\n\r\nRight now it looks like this:\r\n```json\r\n \"post_processing_resources_checksums\": {\r\n \"train\": {\r\n \"embeddings_index\": {\r\n \"num_bytes\": 30720045,\r\n \"checksum\": \"b04fb4f4f3ab83b9d1b9f6f9eb236f1c04a9fd61bef7cee16b12df8ac911766a\"\r\n }\r\n }\r\n },\r\n \"post_processing_size\": 30720045,\r\n```","Good point. 
Should we anticipate already that we may add other fields in the future and change the code to support the addition of new fields without breaking backward compatibility in the future?","I added:\r\n- post processing features (inside a PostProcessedInfo object)\r\n- backward compatibility for dataset info\r\n- post processing tests (as_dataset and download_and_prepare) for map (change features), select (change number of elements) and add_faiss_index (add indexes)\r\nAnd I fixed a bug in `map` that I found thanks to the new tests\r\n\r\nNow I just have to move `post_processing_resources_checksums` to PostProcessedInfo as well and everything should be good :)\r\nEdit: done"],"created_at":1595582959000,"updated_at":1596186304000,"closed_at":1596186303000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/431","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/431","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/431.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/431.patch"},"body":"Previously if you tried to do\r\n\r\n```python\r\nfrom nlp import load_dataset\r\nwiki = load_dataset(\"wiki_dpr\", \"psgs_w100_with_nq_embeddings\", split=\"train[:100]\", with_index=True)\r\n```\r\nThen you'd get an error `Index size should match Dataset size...`\r\nThis was because it was trying to use the full index (21M elements).\r\n\r\nTo fix that I made it so post processing resources can be named according to the split.\r\n\r\nI'm going to add tests on post processing too.\r\n\r\nNote that the CI will fail as I added a new argument in `_post_processing_resources`: the AWS version of wiki_dpr fails, and there's also an error telling that it is not synced (it'll be synced once it's merged):\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests\/test_dataset_common.py::AWSDatasetTest::test_load_dataset_wiki_dpr\r\nFAILED tests\/test_hf_gcp.py::TestDatasetSynced::test_script_synced_with_s3_wiki_dpr\r\n```\r\n\r\nEDIT: I did a change to ignore the script hash to locate the arrow files on GCS, so I removed the sync test. 
It was there just because of the hash logic for files on GCS","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/431\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/430","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/430\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/430\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/430\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/430","id":664583837,"node_id":"MDExOlB1bGxSZXF1ZXN0NDU1ODAxOTI2","number":430,"title":"add DatasetDict","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I did the changes in the docstrings and I added a type check in each `DatasetDict` method to make sure all values are of type `Dataset`","Awesome, do you mind adding these in the doc as well?","I added it to the docs (processing + main classes)","I'm trying to follow along with the following about datasets from the docs:\r\n\r\nhttps:\/\/huggingface.co\/nlp\/loading_datasets.html\r\nhttps:\/\/huggingface.co\/nlp\/processing.html\r\n\r\nHowever the train_test_split method no longer works as it is expecting a dataset, rather than a datsetdict. How would I got about splitting a CSV into a train and test set? 
\r\n\r\nI'm trying to utilize the Trainer() class, but am having trouble converting my data from a csv into dataset objects to pass in."],"created_at":1595519029000,"updated_at":1596502913000,"closed_at":1596013582000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/430","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/430","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/430.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/430.patch"},"body":"## Add DatasetDict\r\n\r\n### Overview\r\n\r\nWhen you call `load_dataset` it can return a dictionary of datasets if there are several splits (train\/test for example).\r\nIf you wanted to apply dataset transforms you had to iterate over each split and apply the transform.\r\n\r\nInstead of returning a dict, it now returns a `nlp.DatasetDict` object which inherits from dict and contains the same data as before, except that now users can call dataset transforms directly from the output, and they'll be applied on each split.\r\n\r\nBefore:\r\n```python\r\nfrom nlp import load_dataset\r\n\r\nsquad = load_dataset(\"squad\")\r\nprint(squad.keys())\r\n# dict_keys(['train', 'validation'])\r\nsquad = {\r\n split_name: dataset.map(my_func) for split_name, dataset in squad.items()\r\n}\r\nprint(squad.keys())\r\n# dict_keys(['train', 'validation'])\r\n```\r\n\r\nNow:\r\n```python\r\nfrom nlp import load_dataset\r\n\r\nsquad = load_dataset(\"squad\")\r\nprint(squad.keys())\r\n# dict_keys(['train', 'validation'])\r\nsquad = squad.map(my_func)\r\nprint(squad.keys())\r\n# dict_keys(['train', 'validation'])\r\n```\r\n\r\n### Dataset transforms\r\n\r\n`nlp.DatasetDict` implements the following dataset transforms:\r\n- map\r\n- filter\r\n- sort\r\n- shuffle\r\n\r\n### Arguments\r\n\r\nThe arguments of the methods are the same except for split-specific arguments like `cache_file_name`.\r\nFor such arguments, the expected input is a dictionary `{split_name: argument_value}`\r\nIt concerns:\r\n- `cache_file_name` in map, filter, sort, shuffle\r\n- `seed` and `generator` in shuffle","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/430\/timeline","performed_via_github_app":null,"is_pull_request":true} 
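The last comment on PR #430 asks how to split a CSV into train and test sets now that `load_dataset` returns a `DatasetDict` rather than a single `Dataset`. A minimal sketch of one way to do this is shown below, assuming the csv loader yields a single `train` split and that `Dataset.train_test_split` is available (as the commenter implies); the file path and split ratio are placeholders.

```python
from nlp import load_dataset

# load_dataset returns a DatasetDict, so index the split first and then
# call train_test_split on the resulting Dataset.
dsets = load_dataset("csv", data_files="./my_data.csv")
splits = dsets["train"].train_test_split(test_size=0.1)

train_ds, test_ds = splits["train"], splits["test"]
print(train_ds.num_rows, test_ds.num_rows)
```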
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/429","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/429\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/429\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/429\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/429","id":664412137,"node_id":"MDExOlB1bGxSZXF1ZXN0NDU1NjU2MDk5","number":429,"title":"mlsum","user":{"login":"RachelKer","id":36986299,"node_id":"MDQ6VXNlcjM2OTg2Mjk5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36986299?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/RachelKer","html_url":"https:\/\/github.com\/RachelKer","followers_url":"https:\/\/api.github.com\/users\/RachelKer\/followers","following_url":"https:\/\/api.github.com\/users\/RachelKer\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/RachelKer\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/RachelKer\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/RachelKer\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/RachelKer\/orgs","repos_url":"https:\/\/api.github.com\/users\/RachelKer\/repos","events_url":"https:\/\/api.github.com\/users\/RachelKer\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/RachelKer\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks @RachelKer for this PR.\r\n\r\nI think the dummy_data structure does not also match. In the `_split_generator` you have something like `os.path.join(downloaded_files[\"validation\"], lang+'_val.jsonl')` but in you dummy_data you have `os.path.join(downloaded_files[\"validation\"], lang+\"_val.zip\", lang+'_val.jsonl')`. I think ` jsonl` files should be directly in the `dummy_data` folder without the sub-folder \r\n\r\n@lhoestq ","Hi @RachelKer :)\r\nThanks for adding MLSUM !\r\n\r\nTo fix the CI I think you just have to rebase from master","Great, I think it is working now. Thanks :)","It looks like your PR does tons of changes in other datasets. \r\nMaybe this is because of the merge from master ?","Hmm, I see, sorry I messed up somewhere. Maybe it's easier if we close the pull request and I do another one ?","Yea if it's easier for you feel free to re-open a PR"],"created_at":1595505159000,"updated_at":1596195980000,"closed_at":1596195980000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/429","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/429","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/429.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/429.patch"},"body":"Hello, \r\n\r\nThe tests for the load_real_data fail, as there is no default language subset to download it looks for a file that does not exist. This bug does not happen when using the load_dataset function, as it asks you to specify a language if you do not, so I submit this PR anyway. 
The dataset is avalaible on : https:\/\/gitlab.lip6.fr\/scialom\/mlsum_data","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/429\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/428","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/428\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/428\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/428\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/428","id":664367086,"node_id":"MDExOlB1bGxSZXF1ZXN0NDU1NjE3Nzcy","number":428,"title":"fix concatenate_datasets","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1595500259000,"updated_at":1595500500000,"closed_at":1595500498000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/428","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/428","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/428.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/428.patch"},"body":"`concatenate_datatsets` used to test that the different`nlp.Dataset.schema` match, but this attribute was removed in #423 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/428\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/427","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/427\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/427\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/427\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/427","id":664341623,"node_id":"MDExOlB1bGxSZXF1ZXN0NDU1NTk1Nzc3","number":427,"title":"Allow sequence features for beam + add processed Natural 
Questions","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1595497961000,"updated_at":1595509770000,"closed_at":1595509769000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/427","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/427","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/427.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/427.patch"},"body":"## Allow Sequence features for Beam Datasets + add Natural Questions\r\n\r\n### The issue\r\n\r\nThe steps of beam datasets processing is the following:\r\n- download the source files and send them in a remote storage (gcs)\r\n- process the files using a beam runner (dataflow)\r\n- save output in remote storage (gcs)\r\n- convert output to arrow in remote storage (gcs)\r\n\r\nHowever it wasn't possible to process `natural_questions` because apache beam's processing outputs parquet files, and it's not yet possible to read parquet files with list features.\r\n\r\n### The proposed solution\r\n\r\nTo allow sequence features for beam I added a workaround that serializes the values using `json.dumps`, so that we end up with strings instead of the original features. Then when the arrow file is created, the serialized objects are transformed back to normal with `json.loads`. Not sure if there's a better way to do it.\r\n\r\n### Natural Questions\r\n\r\nI was able to process NQ with it, and so I added the json infos file in this PR too.\r\nThe processed arrow files are also stored in gcs.\r\nIt allows you to load NQ with\r\n\r\n```python\r\nfrom nlp import load_dataset\r\nnq = load_dataset(\"natural_questions\") # download the 90GB arrow files from gcs and return the dataset\r\n```\r\n\r\n### Tests\r\n\r\nI added a test case to make sure it works as expected.\r\nNote that the CI will fail because I am updating `natural_questions.py`: it's not synced with the script on S3. 
It will be synced as soon as this PR is merged.\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests\/test_hf_gcp.py::TestDatasetOnHfGcp::test_script_synced_with_s3_natural_questions\/default\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/427\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/426","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/426\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/426\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/426\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/426","id":664203897,"node_id":"MDU6SXNzdWU2NjQyMDM4OTc=","number":426,"title":"[FEATURE REQUEST] Multiprocessing with for dataset.map, dataset.filter","user":{"login":"timothyjlaurent","id":2000204,"node_id":"MDQ6VXNlcjIwMDAyMDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2000204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/timothyjlaurent","html_url":"https:\/\/github.com\/timothyjlaurent","followers_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/followers","following_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/orgs","repos_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/repos","events_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Yes that's definitely something we plan to add ^^","Yes, that would be nice. We could take a look at what tensorflow `tf.data` does under the hood for instance.","So `tf.data.Dataset.map()` returns a `ParallelMapDataset` if `num_parallel_calls is not None` [link](https:\/\/github.com\/tensorflow\/tensorflow\/blob\/2b96f3662bd776e277f86997659e61046b56c315\/tensorflow\/python\/data\/ops\/dataset_ops.py#L1623).\r\n\r\nThere, `num_parallel_calls` is turned into a tensor and and fed to `gen_dataset_ops.parallel_map_dataset` where it looks like tensorflow takes over.\r\n\r\nWe could start with something simple like a thread or process pool that `imap`s over some shards.\r\n ","Multiprocessing was added in #552 . You can set the number of processes with `.map(..., num_proc=...)`. It also works for `filter`\r\n\r\nClosing this one, but feel free to reo-open if you have other questions","@lhoestq Great feature implemented! 
Do you have plans to add it to official tutorials [Processing data in a Dataset](https:\/\/huggingface.co\/docs\/datasets\/processing.html?highlight=save#augmenting-the-dataset)? It took me sometime to find this parallel processing api.","Thanks for the heads up !\r\n\r\nI just added a paragraph about multiprocessing:\r\nhttps:\/\/huggingface.co\/docs\/datasets\/master\/processing.html#multiprocessing"],"created_at":1595480441000,"updated_at":1615541652000,"closed_at":1599490084000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"It would be nice to be able to speed up `dataset.map` or `dataset.filter`. Perhaps this is as easy as sharding the dataset sending each shard to a process\/thread\/dask pool and using the new `nlp.concatenate_dataset()` function to join them all together?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/426\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/425","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/425\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/425\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/425\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/425","id":664029848,"node_id":"MDU6SXNzdWU2NjQwMjk4NDg=","number":425,"title":"Correct data structure for PAN-X task in XTREME dataset?","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for noticing ! This looks more reasonable indeed.\r\nFeel free to open a PR","Hi @lhoestq \r\nI made the proposed changes to the `xtreme.py` script. I noticed that I also need to change the schema in the `dataset_infos.json` file. More specifically the `\"features\"` part of the PAN-X.LANG dataset:\r\n\r\n```json\r\n\"features\":{\r\n \"word\":{\r\n \"dtype\":\"string\",\r\n \"id\":null,\r\n \"_type\":\"Value\"\r\n },\r\n \"ner_tag\":{\r\n \"dtype\":\"string\",\r\n \"id\":null,\r\n \"_type\":\"Value\"\r\n },\r\n \"lang\":{\r\n \"dtype\":\"string\",\r\n \"id\":null,\r\n \"_type\":\"Value\"\r\n }\r\n}\r\n```\r\nTo fit the code above the fields `\"word\"`, `\"ner_tag\"`, and `\"lang\"` would become `\"words\"`, `ner_tags\"` and `\"langs\"`. 
In addition the `dtype` should be changed from `\"string\"` to `\"list\"`.\r\n\r\n I made this changes but when trying to test this locally with `dataset = load_dataset(\"xtreme\", \"PAN-X.en\", data_dir='.\/data')` I face the issue that the `dataset_info.json` file is always overwritten by a downloaded version with the old settings, which then throws an error because the schema does not match. This makes it hard to test the changes locally. Do you have any suggestions on how to deal with that?\r\n","Hi !\r\n\r\nYou have to point to your local script.\r\nFirst clone the repo and then:\r\n\r\n```python\r\ndataset = load_dataset(\".\/datasets\/xtreme\", \"PAN-X.en\")\r\n```\r\nThe \"xtreme\" directory contains \"xtreme.py\".\r\n\r\nYou also have to change the features definition in the `_info` method. You could use:\r\n\r\n```python\r\nfeatures = nlp.Features({\r\n \"words\": [nlp.Value(\"string\")],\r\n \"ner_tags\": [nlp.Value(\"string\")],\r\n \"langs\": [nlp.Value(\"string\")],\r\n})\r\n```\r\n\r\nHope this helps !\r\nLet me know if you have other questions.","Thanks, I am making progress. I got a new error `NonMatchingSplitsSizesError ` (see traceback below), which I suspect is due to the fact that number of rows in the dataset changed (one row per word --> one row per sentence) as well as the number of bytes due to the slightly updated data structure. \r\n\r\n```python\r\nNonMatchingSplitsSizesError: [{'expected': SplitInfo(name='validation', num_bytes=1756492, num_examples=80536, dataset_name='xtreme'), 'recorded': SplitInfo(name='validation', num_bytes=1837109, num_examples=10000, dataset_name='xtreme')}, {'expected': SplitInfo(name='test', num_bytes=1752572, num_examples=80326, dataset_name='xtreme'), 'recorded': SplitInfo(name='test', num_bytes=1833214, num_examples=10000, dataset_name='xtreme')}, {'expected': SplitInfo(name='train', num_bytes=3496832, num_examples=160394, dataset_name='xtreme'), 'recorded': SplitInfo(name='train', num_bytes=3658428, num_examples=20000, dataset_name='xtreme')}]\r\n```\r\nI can fix the error by replacing the values in the `datasets_infos.json` file, which I tested for English. However, to update this for all 40 datasets manually is slightly painful. Is there a better way to update the expected values for all datasets?","You can update the json file by calling\r\n```\r\nnlp-cli test .\/datasets\/xtreme --save_infos --all_configs\r\n```","One more thing about features. I mentioned\r\n\r\n```python\r\nfeatures = nlp.Features({\r\n \"words\": [nlp.Value(\"string\")],\r\n \"ner_tags\": [nlp.Value(\"string\")],\r\n \"langs\": [nlp.Value(\"string\")],\r\n})\r\n```\r\n\r\nbut it's actually not consistent with the way we write datasets. Something like this is simpler to read and more consistent with the way we define datasets:\r\n\r\n```python\r\nfeatures = nlp.Features({\r\n \"words\": nlp.Sequence(nlp.Value(\"string\")),\r\n \"ner_tags\": nlp.Sequence(nlp.Value(\"string\")),\r\n \"langs\": nlp.Sequence(nlp.Value(\"string\")),\r\n})\r\n```\r\n\r\nSorry about that","Closing this since PR #437 fixed the problem and has been merged to `master`. 
"],"created_at":1595449760000,"updated_at":1596375034000,"closed_at":1596375034000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"Hi \ud83e\udd17 team!\r\n\r\n## Description of the problem\r\nThanks to the fix from #416 I am now able to load the NER task in the XTREME dataset as follows:\r\n\r\n```python\r\nfrom nlp import load_dataset\r\n# AmazonPhotos.zip is located in data\/\r\ndataset = load_dataset(\"xtreme\", \"PAN-X.en\", data_dir='.\/data')\r\ndataset_train = dataset['train']\r\n```\r\n\r\nHowever, I am not sure that `load_dataset()` is returning the correct data structure for NER. \r\n\r\nCurrently, every row in `dataset_train` is of the form\r\n```python\r\n{'word': str, 'ner_tag': str, 'lang': str}\r\n```\r\nbut I think we actually want something like\r\n```python\r\n{'words': List[str], 'ner_tags': List[str], 'langs': List[str]}\r\n```\r\nso that each row corresponds to a _sequence_ of words associated with each example. With the current data structure I do not think it is possible to transform `dataset_train` into a form suitable for training because we do not know the boundaries between examples.\r\n\r\nIndeed, [this line](https:\/\/github.com\/google-research\/xtreme\/blob\/522434d1aece34131d997a97ce7e9242a51a688a\/third_party\/utils_tag.py#L58) in the XTREME repo, processes the texts as lists of sentences, tags, and languages.\r\n\r\n## Proposed solution\r\nReplace\r\n```python\r\nwith open(filepath) as f:\r\n data = csv.reader(f, delimiter=\"\\t\", quoting=csv.QUOTE_NONE)\r\n for id_, row in enumerate(data):\r\n if row:\r\n lang, word = row[0].split(\":\")[0], row[0].split(\":\")[1]\r\n tag = row[1]\r\n yield id_, {\"word\": word, \"ner_tag\": tag, \"lang\": lang}\r\n```\r\nfrom [these lines](https:\/\/github.com\/huggingface\/nlp\/blob\/ce7d3a1d630b78fe27188d1706f3ea980e8eec43\/datasets\/xtreme\/xtreme.py#L881-L887) of the `_generate_examples()` function with something like\r\n\r\n```python\r\nguid_index = 1\r\nwith open(filepath, encoding=\"utf-8\") as f:\r\n words = []\r\n ner_tags = []\r\n langs = []\r\n for line in f:\r\n if line.startswith(\"-DOCSTART-\") or line == \"\" or line == \"\\n\":\r\n if words:\r\n yield guid_index, {\"words\": words, \"ner_tags\": ner_tags, \"langs\": langs}\r\n guid_index += 1\r\n words = []\r\n ner_tags = []\r\n else:\r\n # pan-x data is tab separated\r\n splits = line.split(\"\\t\")\r\n # strip out en: prefix\r\n langs.append(splits[0][:2])\r\n words.append(splits[0][3:])\r\n if len(splits) > 1:\r\n labels.append(splits[-1].replace(\"\\n\", \"\"))\r\n else:\r\n # examples have no label in test set\r\n labels.append(\"O\")\r\n```\r\nIf you agree, me or @lvwerra would be happy to implement this and create a PR.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/425\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/424","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/424\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/424\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/424\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/424","id":663858552,"node_id":"MDExOlB1bGxSZXF1ZXN0NDU1MTk4MTY0","number":424,"title":"Web of 
science","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1595432311000,"updated_at":1595514478000,"closed_at":1595514476000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/424","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/424","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/424.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/424.patch"},"body":"this PR adds the WebofScience dataset\r\n#353 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/424\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/423","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/423\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/423\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/423\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/423","id":663079359,"node_id":"MDExOlB1bGxSZXF1ZXN0NDU0NTU4OTA0","number":423,"title":"Change features vs schema logic","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I had to make `SplitDict` serializable to be able to copy 
`DatasetInfo` objects properly.\r\nSerialization was also asked in #389 ","One thing I forgot to say here, is that we also want to use the features arguments of `load_dataset` (which goes in the builder\u2019s config) to override the default features of a dataset script."],"created_at":1595343167000,"updated_at":1595668114000,"closed_at":1595499317000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/423","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/423","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/423.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/423.patch"},"body":"## New logic for `nlp.Features` in datasets\r\n\r\nPreviously, it was confusing to have `features` and pyarrow's `schema` in `nlp.Dataset`.\r\nHowever `features` is supposed to be the front-facing object to define the different fields of a dataset, while `schema` is only used to write arrow files.\r\n\r\nChanges:\r\n- Remove `schema` field in `nlp.Dataset`\r\n- Make `features` the source of truth to read\/write examples\r\n- `features` can no longer be `None` in `nlp.Dataset`\r\n- Update `features` after each dataset transform such as `nlp.Dataset.map`\r\n\r\nTodo: change the tests to take these changes into account","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/423\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/422","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/422\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/422\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/422\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/422","id":663028497,"node_id":"MDExOlB1bGxSZXF1ZXN0NDU0NTE3MDU2","number":422,"title":"- Corrected encoding for 
IMDB.","user":{"login":"ghazi-f","id":25091538,"node_id":"MDQ6VXNlcjI1MDkxNTM4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25091538?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ghazi-f","html_url":"https:\/\/github.com\/ghazi-f","followers_url":"https:\/\/api.github.com\/users\/ghazi-f\/followers","following_url":"https:\/\/api.github.com\/users\/ghazi-f\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ghazi-f\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ghazi-f\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ghazi-f\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ghazi-f\/orgs","repos_url":"https:\/\/api.github.com\/users\/ghazi-f\/repos","events_url":"https:\/\/api.github.com\/users\/ghazi-f\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ghazi-f\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1595339219000,"updated_at":1595433773000,"closed_at":1595433773000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/422","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/422","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/422.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/422.patch"},"body":"The preparation phase (after the download phase) crashed on windows because of charmap encoding not being able to decode certain characters. This change suggested in Issue #347 fixes it for the IMDB dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/422\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/421","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/421\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/421\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/421\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/421","id":662213864,"node_id":"MDExOlB1bGxSZXF1ZXN0NDUzNzkzMzQ1","number":421,"title":"Style 
change","user":{"login":"lordtt13","id":35500534,"node_id":"MDQ6VXNlcjM1NTAwNTM0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35500534?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lordtt13","html_url":"https:\/\/github.com\/lordtt13","followers_url":"https:\/\/api.github.com\/users\/lordtt13\/followers","following_url":"https:\/\/api.github.com\/users\/lordtt13\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lordtt13\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lordtt13\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lordtt13\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lordtt13\/orgs","repos_url":"https:\/\/api.github.com\/users\/lordtt13\/repos","events_url":"https:\/\/api.github.com\/users\/lordtt13\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lordtt13\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["What about the other PR #419 ?","Oh this is the PR where I ran make quality and make style and some previous files from master were changed","Oh right ! Let me fix the style myself if you don't mind"],"created_at":1595275709000,"updated_at":1595434120000,"closed_at":1595434119000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/421","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/421","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/421.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/421.patch"},"body":"make quality and make style ran on scripts","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/421\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/420","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/420\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/420\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/420\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/420","id":662029782,"node_id":"MDExOlB1bGxSZXF1ZXN0NDUzNjI5OTk2","number":420,"title":"Better handle nested 
features","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1595263453000,"updated_at":1595319649000,"closed_at":1595318992000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/420","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/420","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/420.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/420.patch"},"body":"Changes:\r\n- added arrow schema to features conversion (it's going to be useful to fix #342 )\r\n- make flatten handle deep features (useful for tfrecords conversion in #339 )\r\n- add tests for flatten and features conversions\r\n- the reader now returns the kwargs to instantiate a Dataset (fix circular dependencies)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/420\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/419","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/419\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/419\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/419\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/419","id":661974747,"node_id":"MDExOlB1bGxSZXF1ZXN0NDUzNTgxNzQz","number":419,"title":"EmoContext dataset 
add","user":{"login":"lordtt13","id":35500534,"node_id":"MDQ6VXNlcjM1NTAwNTM0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35500534?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lordtt13","html_url":"https:\/\/github.com\/lordtt13","followers_url":"https:\/\/api.github.com\/users\/lordtt13\/followers","following_url":"https:\/\/api.github.com\/users\/lordtt13\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lordtt13\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lordtt13\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lordtt13\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lordtt13\/orgs","repos_url":"https:\/\/api.github.com\/users\/lordtt13\/repos","events_url":"https:\/\/api.github.com\/users\/lordtt13\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lordtt13\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1595260125000,"updated_at":1595578921000,"closed_at":1595578920000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/419","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/419","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/419.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/419.patch"},"body":"EmoContext Dataset add\r\n\r\nSigned-off-by: lordtt13 <thakurtanmay72@yahoo.com>","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/419\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/418","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/418\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/418\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/418\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/418","id":661914873,"node_id":"MDU6SXNzdWU2NjE5MTQ4NzM=","number":418,"title":"Addition of google drive links to dl_manager","user":{"login":"lordtt13","id":35500534,"node_id":"MDQ6VXNlcjM1NTAwNTM0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35500534?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lordtt13","html_url":"https:\/\/github.com\/lordtt13","followers_url":"https:\/\/api.github.com\/users\/lordtt13\/followers","following_url":"https:\/\/api.github.com\/users\/lordtt13\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lordtt13\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lordtt13\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lordtt13\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lordtt13\/orgs","repos_url":"https:\/\/api.github.com\/users\/lordtt13\/repos","events_url":"https:\/\/api.github.com\/users\/lordtt13\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lordtt13\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I think the problem is the way you wrote your urls. 
Try the following structure to see `https:\/\/drive.google.com\/uc?export=download&id=your_file_id` . \r\n\r\n@lhoestq ","Oh sorry, I think `_get_drive_url` is doing that. \r\n\r\nHave you tried to use `dl_manager.download_and_extract(_get_drive_url(_TRAIN_URL)`? it should work with google drive links.\r\n","Yes it worked, thank you!"],"created_at":1595256722000,"updated_at":1595259572000,"closed_at":1595259572000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hello there, I followed the template to create a download script of my own, which works fine for me, although I had to shun the dl_manager because it was downloading nothing from the drive links and instead use gdown.\r\n\r\nThis is the script for me:\r\n\r\n```python\r\nclass EmoConfig(nlp.BuilderConfig):\r\n \"\"\"BuilderConfig for SQUAD.\"\"\"\r\n\r\n def __init__(self, **kwargs):\r\n \"\"\"BuilderConfig for EmoContext.\r\n Args:\r\n **kwargs: keyword arguments forwarded to super.\r\n \"\"\"\r\n super(EmoConfig, self).__init__(**kwargs)\r\n\r\n_TEST_URL = \"https:\/\/drive.google.com\/file\/d\/1Hn5ytHSSoGOC4sjm3wYy0Dh0oY_oXBbb\/view?usp=sharing\"\r\n_TRAIN_URL = \"https:\/\/drive.google.com\/file\/d\/12Uz59TYg_NtxOy7SXraYeXPMRT7oaO7X\/view?usp=sharing\"\r\n\r\nclass EmoDataset(nlp.GeneratorBasedBuilder):\r\n \"\"\" SemEval-2019 Task 3: EmoContext Contextual Emotion Detection in Text. Version 1.0.0 \"\"\"\r\n\r\n VERSION = nlp.Version(\"1.0.0\")\r\n force = False\r\n\r\n def _info(self):\r\n return nlp.DatasetInfo(\r\n description=_DESCRIPTION,\r\n features=nlp.Features(\r\n {\r\n \"text\": nlp.Value(\"string\"),\r\n \"label\": nlp.features.ClassLabel(names=[\"others\", \"happy\", \"sad\", \"angry\"]),\r\n }\r\n ),\r\n supervised_keys=None,\r\n homepage=\"https:\/\/www.aclweb.org\/anthology\/S19-2005\/\",\r\n citation=_CITATION,\r\n )\r\n \r\n def _get_drive_url(self, url):\r\n base_url = 'https:\/\/drive.google.com\/uc?id='\r\n split_url = url.split('\/')\r\n return base_url + split_url[5]\r\n \r\n def _split_generators(self, dl_manager):\r\n \"\"\"Returns SplitGenerators.\"\"\"\r\n if(not os.path.exists(\"emo-train.json\") or self.force):\r\n gdown.download(self._get_drive_url(_TRAIN_URL), \"emo-train.json\", quiet = True)\r\n if(not os.path.exists(\"emo-test.json\") or self.force):\r\n gdown.download(self._get_drive_url(_TEST_URL), \"emo-test.json\", quiet = True)\r\n return [\r\n nlp.SplitGenerator(\r\n name=nlp.Split.TRAIN,\r\n gen_kwargs={\r\n \"filepath\": \"emo-train.json\",\r\n \"split\": \"train\",\r\n },\r\n ),\r\n nlp.SplitGenerator(\r\n name=nlp.Split.TEST,\r\n gen_kwargs={\"filepath\": \"emo-test.json\", \"split\": \"test\"},\r\n ),\r\n ]\r\n\r\n def _generate_examples(self, filepath, split):\r\n \"\"\" Yields examples. 
\"\"\"\r\n with open(filepath, 'rb') as f:\r\n data = json.load(f)\r\n for id_, text, label in zip(data[\"text\"].keys(), data[\"text\"].values(), data[\"Label\"].values()):\r\n yield id_, {\r\n \"text\": text,\r\n \"label\": label,\r\n }\r\n```\r\n\r\nCan someone help me in adding gdrive links to be used with default dl_manager or adding gdown as another dl_manager, because I'd like to add this dataset to nlp's official database.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/418\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/417","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/417\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/417\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/417\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/417","id":661804054,"node_id":"MDExOlB1bGxSZXF1ZXN0NDUzNDMyODE5","number":417,"title":"Fix docstrins multiple metrics instances","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1595250539000,"updated_at":1595411460000,"closed_at":1595411459000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/417","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/417","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/417.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/417.patch"},"body":"We change the docstrings of `nlp.Metric.compute`, `nlp.Metric.add` and `nlp.Metric.add_batch` depending on which metric is instantiated. 
However we had issues when instantiating multiple metrics (docstrings were duplicated).\r\n\r\nThis should fix #304 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/417\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/416","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/416\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/416\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/416\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/416","id":661635393,"node_id":"MDExOlB1bGxSZXF1ZXN0NDUzMjg1NTM4","number":416,"title":"Fix xtreme panx directory","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["great, I think I did not download the data the way you do, but yours is more reasonable."],"created_at":1595239757000,"updated_at":1595319346000,"closed_at":1595319344000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/416","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/416","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/416.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/416.patch"},"body":"Fix #412 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/416\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/415","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/415\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/415\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/415\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/415","id":660687076,"node_id":"MDU6SXNzdWU2NjA2ODcwNzY=","number":415,"title":"Something is wrong with WMT 19 kk-en 
dataset","user":{"login":"ChenghaoMou","id":32014649,"node_id":"MDQ6VXNlcjMyMDE0NjQ5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32014649?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ChenghaoMou","html_url":"https:\/\/github.com\/ChenghaoMou","followers_url":"https:\/\/api.github.com\/users\/ChenghaoMou\/followers","following_url":"https:\/\/api.github.com\/users\/ChenghaoMou\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ChenghaoMou\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ChenghaoMou\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ChenghaoMou\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ChenghaoMou\/orgs","repos_url":"https:\/\/api.github.com\/users\/ChenghaoMou\/repos","events_url":"https:\/\/api.github.com\/users\/ChenghaoMou\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ChenghaoMou\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1595146731000,"updated_at":1595238866000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"The translation in the `train` set does not look right:\r\n\r\n```\r\n>>>import nlp\r\n>>>from nlp import load_dataset\r\n>>>dataset = load_dataset('wmt19', 'kk-en')\r\n>>>dataset[\"train\"][\"translation\"][0]\r\n{'kk': 'Trumpian Uncertainty', 'en': '\u0422\u0440\u0430\u043c\u043f\u0442\u044b\u049b \u0431\u0435\u043b\u0433\u0456\u0441\u0456\u0437\u0434\u0456\u043a'}\r\n>>>dataset[\"validation\"][\"translation\"][0]\r\n{'kk': '\u0410\u049b\u0448\u0430-\u043d\u0435\u0441\u0438\u0435 \u0441\u0430\u044f\u0441\u0430\u0442\u044b\u043d\u044b\u04a3 \u0441\u0446\u0435\u043d\u0430\u0440\u0438\u0439\u0456\u043d \u049b\u0430\u0439\u0442\u0430 \u0436\u0430\u0437\u0441\u0430\u049b', 'en': 'Rewriting the Monetary-Policy Script'}\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/415\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/414","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/414\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/414\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/414\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/414","id":660654013,"node_id":"MDU6SXNzdWU2NjA2NTQwMTM=","number":414,"title":"from_dict 
delete?","user":{"login":"hackerxiaobai","id":22817243,"node_id":"MDQ6VXNlcjIyODE3MjQz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/22817243?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hackerxiaobai","html_url":"https:\/\/github.com\/hackerxiaobai","followers_url":"https:\/\/api.github.com\/users\/hackerxiaobai\/followers","following_url":"https:\/\/api.github.com\/users\/hackerxiaobai\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hackerxiaobai\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hackerxiaobai\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hackerxiaobai\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hackerxiaobai\/orgs","repos_url":"https:\/\/api.github.com\/users\/hackerxiaobai\/repos","events_url":"https:\/\/api.github.com\/users\/hackerxiaobai\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hackerxiaobai\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["`from_dict` was added in #350 that was unfortunately not included in the 0.3.0 release. It's going to be included in the next release that will be out pretty soon though.\r\nRight now if you want to use `from_dict` you have to install the package from the master branch\r\n```\r\npip install git+https:\/\/github.com\/huggingface\/nlp.git\r\n```","> `from_dict` was added in #350 that was unfortunately not included in the 0.3.0 release. It's going to be included in the next release that will be out pretty soon though.\r\n> Right now if you want to use `from_dict` you have to install the package from the master branch\r\n> \r\n> ```\r\n> pip install git+https:\/\/github.com\/huggingface\/nlp.git\r\n> ```\r\nOK, thank you.\r\n"],"created_at":1595142516000,"updated_at":1595298077000,"closed_at":1595298077000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"AttributeError: type object 'Dataset' has no attribute 'from_dict'","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/414\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/413","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/413\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/413\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/413\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/413","id":660063655,"node_id":"MDU6SXNzdWU2NjAwNjM2NTU=","number":413,"title":"Is there a way to download only NQ 
dev?","user":{"login":"tholor","id":1563902,"node_id":"MDQ6VXNlcjE1NjM5MDI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1563902?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tholor","html_url":"https:\/\/github.com\/tholor","followers_url":"https:\/\/api.github.com\/users\/tholor\/followers","following_url":"https:\/\/api.github.com\/users\/tholor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tholor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tholor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tholor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tholor\/orgs","repos_url":"https:\/\/api.github.com\/users\/tholor\/repos","events_url":"https:\/\/api.github.com\/users\/tholor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tholor\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Unfortunately it's not possible to download only the dev set of NQ.\r\n\r\nI think we could add a way to download only the test set by adding a custom configuration to the processing script though.","Ok, got it. I think this could be a valuable feature - especially for large datasets like NQ, but potentially also others. \r\nFor us, it will in this case make the difference of using the library or keeping the old downloads of the raw dev datasets. \r\nHowever, I don't know if that fits into your plans with the library and can also understand if you don't want to support this.","I don't think we could force this behavior generally since the dataset script authors are free to organize the file download as they want (sometimes the mapping between split and files can be very much nontrivial) but we can add an additional configuration for Natural Question indeed as @lhoestq indicate."],"created_at":1595068103000,"updated_at":1596027980000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Maybe I missed that in the docs, but is there a way to only download the dev set of natural questions (~1 GB)? \r\nAs we want to benchmark QA models on different datasets, I would like to avoid downloading the 41GB of training data. \r\n\r\nI tried\r\n```\r\ndataset = nlp.load_dataset('natural_questions', split=\"validation\", beam_runner=\"DirectRunner\")\r\n```\r\nBut this still triggered a big download of presumably the whole dataset. 
Is there any way of doing this or are splits \/ slicing options only available after downloading?\r\n\r\nThanks!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/413\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/412","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/412\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/412\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/412\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/412","id":660047139,"node_id":"MDU6SXNzdWU2NjAwNDcxMzk=","number":412,"title":"Unable to load XTREME dataset from disk","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lewtun, you have to provide the full path to the downloaded file for example `\/home\/lewtum\/..`","I was able to repro. Opening a PR to fix that.\r\nThanks for reporting this issue !","Thanks for the rapid fix @lhoestq!"],"created_at":1595066100000,"updated_at":1595319344000,"closed_at":1595319344000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"Hi \ud83e\udd17 team!\r\n\r\n## Description of the problem\r\nFollowing the [docs](https:\/\/huggingface.co\/nlp\/loading_datasets.html?highlight=xtreme#manually-downloading-files) I'm trying to load the `PAN-X.fr` dataset from the [XTREME](https:\/\/github.com\/google-research\/xtreme) benchmark.\r\n\r\nI have manually downloaded the `AmazonPhotos.zip` file from [here](https:\/\/www.amazon.com\/clouddrive\/share\/d3KGCRCIYwhKJF0H3eWA26hjg2ZCRhjpEQtDL70FSBN?_encoding=UTF8&%2AVersion%2A=1&%2Aentries%2A=0&mgh=1) and am running into a `FileNotFoundError` when I point to the location of the dataset.\r\n\r\nAs far as I can tell, the problem is that `AmazonPhotos.zip` decompresses to `panx_dataset` and `load_dataset()` is not looking in the correct path:\r\n\r\n```\r\n# path where load_dataset is looking for fr.tar.gz\r\n\/root\/.cache\/huggingface\/datasets\/9b8c4f1578e45cb2539332c79738beb3b54afbcd842b079cabfd79e3ed6704f6\/\r\n# path where it actually exists\r\n\/root\/.cache\/huggingface\/datasets\/9b8c4f1578e45cb2539332c79738beb3b54afbcd842b079cabfd79e3ed6704f6\/panx_dataset\/\r\n```\r\n\r\n## Steps to reproduce the problem\r\n\r\n1. 
Manually download the XTREME benchmark from [here](https:\/\/www.amazon.com\/clouddrive\/share\/d3KGCRCIYwhKJF0H3eWA26hjg2ZCRhjpEQtDL70FSBN?_encoding=UTF8&%2AVersion%2A=1&%2Aentries%2A=0&mgh=1)\r\n\r\n2. Run the following code snippet\r\n```python\r\nfrom nlp import load_dataset\r\n# AmazonPhotos.zip is in the root of the folder\r\ndataset = load_dataset(\"xtreme\", \"PAN-X.fr\", data_dir='.\/')\r\n```\r\n\r\n3. Here is the stack trace\r\n```\r\n---------------------------------------------------------------------------\r\nFileNotFoundError Traceback (most recent call last)\r\n<ipython-input-4-26786bb5fa93> in <module>\r\n----> 1 dataset = load_dataset(\"xtreme\", \"PAN-X.fr\", data_dir='.\/')\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 522 download_mode=download_mode,\r\n 523 ignore_verifications=ignore_verifications,\r\n--> 524 save_infos=save_infos,\r\n 525 )\r\n 526 \r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 430 verify_infos = not save_infos and not ignore_verifications\r\n 431 self._download_and_prepare(\r\n--> 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 433 )\r\n 434 # Sync info\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 464 split_dict = SplitDict(dataset_name=self.name)\r\n 465 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 466 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 467 # Checksums verification\r\n 468 if verify_infos:\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/datasets\/xtreme\/b8c2ed3583a7a7ac60b503576dfed3271ac86757628897e945bd329c43b8a746\/xtreme.py in _split_generators(self, dl_manager)\r\n 725 panx_dl_dir = dl_manager.extract(panx_path)\r\n 726 lang = self.config.name.split(\".\")[1]\r\n--> 727 lang_folder = dl_manager.extract(os.path.join(panx_dl_dir, lang + \".tar.gz\"))\r\n 728 return [\r\n 729 nlp.SplitGenerator(\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/utils\/download_manager.py in extract(self, path_or_paths)\r\n 196 \"\"\"\r\n 197 return map_nested(\r\n--> 198 lambda path: cached_path(path, extract_compressed_file=True, force_extract=False), path_or_paths,\r\n 199 )\r\n 200 \r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/utils\/py_utils.py in map_nested(function, data_struct, dict_only, map_tuple)\r\n 170 return tuple(mapped)\r\n 171 # Singleton\r\n--> 172 return function(data_struct)\r\n 173 \r\n 174 \r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/utils\/download_manager.py in <lambda>(path)\r\n 196 \"\"\"\r\n 197 return map_nested(\r\n--> 198 lambda path: cached_path(path, extract_compressed_file=True, force_extract=False), path_or_paths,\r\n 199 )\r\n 200 \r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/utils\/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)\r\n 203 elif urlparse(url_or_filename).scheme == \"\":\r\n 204 # File, but it doesn't exist.\r\n--> 205 raise FileNotFoundError(\"Local file {} doesn't exist\".format(url_or_filename))\r\n 206 else:\r\n 207 # Something 
unknown\r\n\r\nFileNotFoundError: Local file \/root\/.cache\/huggingface\/datasets\/9b8c4f1578e45cb2539332c79738beb3b54afbcd842b079cabfd79e3ed6704f6\/fr.tar.gz doesn't exist\r\n```\r\n\r\n## OS and hardware\r\n```\r\n- `nlp` version: 0.3.0\r\n- Platform: Linux-4.15.0-72-generic-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.6.9\r\n- PyTorch version (GPU?): 1.4.0 (True)\r\n- Tensorflow version (GPU?): 2.1.0 (True)\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/412\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/411","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/411\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/411\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/411\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/411","id":659393398,"node_id":"MDExOlB1bGxSZXF1ZXN0NDUxMjQxOTQy","number":411,"title":"Sbf","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1595002785000,"updated_at":1595322826000,"closed_at":1595322825000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/411","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/411","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/411.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/411.patch"},"body":"This PR adds the Social Bias Frames Dataset (ACL 2020) .\r\ndataset homepage: https:\/\/homes.cs.washington.edu\/~msap\/social-bias-frames\/","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/411\/timeline","performed_via_github_app":null,"is_pull_request":true} 
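A minimal workaround sketch for the path mismatch reported in issue #412 above, assuming the archive simply needs to sit where the loader searches for it; the cache path is the one from the traceback and will differ per machine, and this is illustrative rather than the fix that was eventually merged.

```python
import shutil
from pathlib import Path
from nlp import load_dataset

# Cache path taken from the traceback in #412; the hash is machine-specific.
cache_dir = Path(
    "/root/.cache/huggingface/datasets/"
    "9b8c4f1578e45cb2539332c79738beb3b54afbcd842b079cabfd79e3ed6704f6"
)
src = cache_dir / "panx_dataset" / "fr.tar.gz"  # where AmazonPhotos.zip actually extracts to
dst = cache_dir / "fr.tar.gz"                   # where load_dataset() looks for the archive

if src.exists() and not dst.exists():
    shutil.copy(src, dst)

# AmazonPhotos.zip sits in the current directory, as in the original report.
dataset = load_dataset("xtreme", "PAN-X.fr", data_dir="./")
```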
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/410","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/410\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/410\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/410\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/410","id":659242871,"node_id":"MDExOlB1bGxSZXF1ZXN0NDUxMTEzMTI3","number":410,"title":"20newsgroup","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1594991277000,"updated_at":1595228729000,"closed_at":1595228728000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/410","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/410","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/410.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/410.patch"},"body":"Add 20Newsgroup dataset.\r\n#353 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/410\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/409","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/409\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/409\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/409\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/409","id":659128611,"node_id":"MDU6SXNzdWU2NTkxMjg2MTE=","number":409,"title":"train_test_split error: 'dict' object has no attribute 
'deepcopy'","user":{"login":"morganmcg1","id":20516801,"node_id":"MDQ6VXNlcjIwNTE2ODAx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20516801?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/morganmcg1","html_url":"https:\/\/github.com\/morganmcg1","followers_url":"https:\/\/api.github.com\/users\/morganmcg1\/followers","following_url":"https:\/\/api.github.com\/users\/morganmcg1\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/morganmcg1\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/morganmcg1\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/morganmcg1\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/morganmcg1\/orgs","repos_url":"https:\/\/api.github.com\/users\/morganmcg1\/repos","events_url":"https:\/\/api.github.com\/users\/morganmcg1\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/morganmcg1\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["It was fixed in 2ddd18d139d3047c9c3abe96e1e7d05bb360132c.\r\nCould you pull the latest changes from master @morganmcg1 ?","Thanks @lhoestq, works fine now!"],"created_at":1594982188000,"updated_at":1595342092000,"closed_at":1595342092000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"`train_test_split` is giving me an error when I try and call it:\r\n\r\n`'dict' object has no attribute 'deepcopy'`\r\n\r\n## To reproduce\r\n\r\n```\r\ndataset = load_dataset('glue', 'mrpc', 
split='train')\r\ndataset = dataset.train_test_split(test_size=0.2)\r\n```\r\n\r\n## Full Stacktrace\r\n```\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n<ipython-input-12-feb740dbec9a> in <module>\r\n 1 dataset = load_dataset('glue', 'mrpc', split='train')\r\n----> 2 dataset = dataset.train_test_split(test_size=0.2)\r\n\r\n~\/anaconda3\/envs\/fastai2_me\/lib\/python3.7\/site-packages\/nlp\/arrow_dataset.py in train_test_split(self, test_size, train_size, shuffle, seed, generator, keep_in_memory, load_from_cache_file, train_cache_file_name, test_cache_file_name, writer_batch_size)\r\n 1032 \"writer_batch_size\": writer_batch_size,\r\n 1033 }\r\n-> 1034 train_kwargs = cache_kwargs.deepcopy()\r\n 1035 train_kwargs[\"split\"] = \"train\"\r\n 1036 test_kwargs = cache_kwargs.deepcopy()\r\n\r\nAttributeError: 'dict' object has no attribute 'deepcopy'\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/409\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/408","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/408\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/408\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/408\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/408","id":659064144,"node_id":"MDExOlB1bGxSZXF1ZXN0NDUwOTU1MTE0","number":408,"title":"Add tests datasets gcp","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1594977807000,"updated_at":1594978017000,"closed_at":1594978016000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/408","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/408","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/408.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/408.patch"},"body":"Some datasets are available on our google cloud storage in arrow format, so that the users don't need to process the data.\r\nThese tests make sure that they're always available. 
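As a side note on the `train_test_split` traceback in issue #409 above: plain Python dicts have no `.deepcopy()` method, so `cache_kwargs.deepcopy()` raises the reported `AttributeError`. A sketch of the kind of change that avoids it (illustrative values only; not necessarily the exact patch merged in 2ddd18d):

```python
import copy

cache_kwargs = {
    "keep_in_memory": False,   # illustrative values only
    "writer_batch_size": 1000,
}

# copy.deepcopy works on dicts; dict(cache_kwargs) would also do for a flat mapping.
train_kwargs = copy.deepcopy(cache_kwargs)
train_kwargs["split"] = "train"
test_kwargs = copy.deepcopy(cache_kwargs)
test_kwargs["split"] = "test"
```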
It also makes sure that their scripts are in sync between S3 and the repo.\r\nThis should avoid future issues like #407 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/408\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/407","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/407\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/407\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/407\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/407","id":658672736,"node_id":"MDU6SXNzdWU2NTg2NzI3MzY=","number":407,"title":"MissingBeamOptions for Wikipedia 20200501.en","user":{"login":"mitchellgordon95","id":7490438,"node_id":"MDQ6VXNlcjc0OTA0Mzg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7490438?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mitchellgordon95","html_url":"https:\/\/github.com\/mitchellgordon95","followers_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/followers","following_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/orgs","repos_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/repos","events_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","sub
scriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Fixed. Could you try again @mitchellgordon95 ?\r\nIt was due a file not being updated on S3.\r\n\r\nWe need to make sure all the datasets scripts get updated properly @julien-c ","Works for me! Thanks.","I found the same issue with almost any language other than English. (For English, it works). Will someone need to update the file on S3 again?","This is because only some languages are already preprocessed (en, de, fr, it) and stored on our google storage.\r\nWe plan to have a systematic way to preprocess more wikipedia languages in the future.\r\n\r\nFor the other languages you have to process them on your side using apache beam. That's why the lib asks for a Beam runner."],"created_at":1594943283000,"updated_at":1610451676000,"closed_at":1594995868000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"There may or may not be a regression for the pre-processed Wikipedia dataset. This was working fine 10 commits ago (without having Apache Beam available):\r\n\r\n```\r\nnlp.load_dataset('wikipedia', \"20200501.en\", split='train')\r\n```\r\n\r\nAnd now, having pulled master, I get:\r\n\r\n```\r\nDownloading and preparing dataset wikipedia\/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, total: 34.06 GiB) to \/home\/hltcoe\/mgordon\/.cache\/huggingface\/datasets\/wikipedia\/20200501.en\/1.0.0\/76b0b2747b679bb0ee7a1621e50e5a6378477add0c662668a324a5bc07d516dd...\r\nTraceback (most recent call last):\r\n File \"scripts\/download.py\", line 11, in <module>\r\n fire.Fire(download_pretrain)\r\n File \"\/home\/hltcoe\/mgordon\/.conda\/envs\/huggingface\/lib\/python3.6\/site-packages\/fire\/core.py\", line 138, in Fire\r\n component_trace = _Fire(component, args, parsed_flag_args, context, name)\r\n File \"\/home\/hltcoe\/mgordon\/.conda\/envs\/huggingface\/lib\/python3.6\/site-packages\/fire\/core.py\", line 468, in _Fire\r\n target=component.__name__)\r\n File \"\/home\/hltcoe\/mgordon\/.conda\/envs\/huggingface\/lib\/python3.6\/site-packages\/fire\/core.py\", line 672, in _CallAndUpdateTrace\r\n component = fn(*varargs, **kwargs)\r\n File \"scripts\/download.py\", line 6, in download_pretrain\r\n nlp.load_dataset('wikipedia', \"20200501.en\", split='train')\r\n File \"\/exp\/mgordon\/nlp\/src\/nlp\/load.py\", line 534, in load_dataset\r\n save_infos=save_infos,\r\n File \"\/exp\/mgordon\/nlp\/src\/nlp\/builder.py\", line 460, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/exp\/mgordon\/nlp\/src\/nlp\/builder.py\", line 870, in _download_and_prepare\r\n \"\\n\\t`{}`\".format(usage_example)\r\nnlp.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, S\r\npark, etc. 
More information about Apache Beam runners at https:\/\/beam.apache.org\/documentation\/runners\/capability-matrix\/\r\nIf you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory).\r\nExample of usage:\r\n `load_dataset('wikipedia', '20200501.en', beam_runner='DirectRunner')`\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/407\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/406","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/406\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/406\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/406\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/406","id":658581764,"node_id":"MDU6SXNzdWU2NTg1ODE3NjQ=","number":406,"title":"Faster Shuffling?","user":{"login":"mitchellgordon95","id":7490438,"node_id":"MDQ6VXNlcjc0OTA0Mzg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7490438?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mitchellgordon95","html_url":"https:\/\/github.com\/mitchellgordon95","followers_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/followers","following_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/orgs","repos_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/repos","events_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I think the slowness here probably come from the fact that we are copying from and to python.\r\n\r\n@lhoestq for all the `select`-based methods I think we should stay in Arrow format and update the writer so that it can accept Arrow tables or batches as well. What do you think?","> @lhoestq for all the `select`-based methods I think we should stay in Arrow format and update the writer so that it can accept Arrow tables or batches as well. 
What do you think?\r\n\r\nI just tried with `writer.write_table` with tables of 1000 elements and it's slower that the solution in #405 \r\n\r\nOn my side (select 10 000 examples):\r\n- Original implementation: 12s\r\n- Batched solution: 100ms\r\n- solution using arrow tables: 350ms\r\n\r\nI'll try with arrays and record batches to see if we can make it work.","I tried using `.take` from pyarrow recordbatches but it doesn't improve the speed that much:\r\n```python\r\nimport nlp\r\nimport numpy as np\r\n\r\ndset = nlp.Dataset.from_file(\"dummy_test_select.arrow\") # dummy dataset with 100000 examples like {\"a\": \"h\"*512}\r\nindices = np.random.randint(0, 100_000, 1000_000)\r\n```\r\n\r\n```python\r\n%%time\r\nbatch_size = 10_000\r\nwriter = ArrowWriter(schema=dset.schema, path=\"dummy_path\",\r\n writer_batch_size=1000, disable_nullable=False)\r\nfor i in tqdm(range(0, len(indices), batch_size)):\r\n table = pa.concat_tables(dset._data.slice(int(i), 1) for i in indices[i : min(len(indices), i + batch_size)])\r\n batch = table.to_pydict()\r\n writer.write_batch(batch)\r\nwriter.finalize()\r\n# 9.12s\r\n```\r\n\r\n\r\n```python\r\n%%time\r\nbatch_size = 10_000\r\nwriter = ArrowWriter(schema=dset.schema, path=\"dummy_path\", \r\n writer_batch_size=1000, disable_nullable=False)\r\nfor i in tqdm(range(0, len(indices), batch_size)):\r\n batch_indices = indices[i : min(len(indices), i + batch_size)]\r\n # First, extract only the indices that we need with a mask\r\n mask = [False] * len(dset)\r\n for k in batch_indices:\r\n mask[k] = True\r\n t_batch = dset._data.filter(pa.array(mask))\r\n # Second, build the list of indices for the filtered table, and taking care of duplicates\r\n rev_positions = {}\r\n duplicates = 0\r\n for i, j in enumerate(sorted(batch_indices)):\r\n if j in rev_positions:\r\n duplicates += 1\r\n else:\r\n rev_positions[j] = i - duplicates\r\n rev_map = [rev_positions[j] for j in batch_indices]\r\n # Third, use `.take` from the combined recordbatch\r\n t_combined = t_batch.combine_chunks() # load in memory\r\n recordbatch = t_combined.to_batches()[0]\r\n table = pa.Table.from_arrays(\r\n [recordbatch[c].take(pa.array(rev_map)) for c in range(len(dset._data.column_names))],\r\n schema=writer.schema\r\n )\r\n writer.write_table(table)\r\nwriter.finalize()\r\n# 3.2s\r\n```\r\n","Shuffling is now significantly faster thanks to #513 \r\nFeel free to play with it now :)\r\n\r\nClosing this one, but feel free to re-open if you have other questions"],"created_at":1594934513000,"updated_at":1599489926000,"closed_at":1599489925000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Consider shuffling bookcorpus:\r\n\r\n```\r\ndataset = nlp.load_dataset('bookcorpus', split='train')\r\ndataset.shuffle()\r\n```\r\nAccording to tqdm, this will take around 2.5 hours on my machine to complete (even with the faster version of select from #405). I've also tried with `keep_in_memory=True` and `writer_batch_size=1000`.\r\n\r\nBut I can also just write the lines to a text file:\r\n\r\n```\r\nbatch_size = 100000\r\nwith open('tmp.txt', 'w+') as out_f:\r\n for i in tqdm(range(0, len(dataset), batch_size)):\r\n batch = dataset[i:i+batch_size]['text']\r\n print(\"\\n\".join(batch), file=out_f)\r\n```\r\n\r\nWhich completes in a couple minutes, followed by `shuf tmp.txt > tmp2.txt` which completes in under a minute. And finally,\r\n\r\n```\r\ndataset = nlp.load_dataset('text', data_files='tmp2.txt')\r\n```\r\n\r\nWhich completes in under 10 minutes. 
I read up on Apache Arrow this morning, and it seems like the columnar data format is not especially well-suited to shuffling rows, since moving items around requires a lot of book-keeping. \r\n\r\nIs shuffle inherently slow, or am I just using it wrong? And if it is slow, would it make sense to try converting the data to a row-based format on disk and then shuffling? (Instead of calling select with a random permutation, as is currently done.)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/406\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/405","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/405\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/405\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/405\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/405","id":658580192,"node_id":"MDExOlB1bGxSZXF1ZXN0NDUwNTI1MTc3","number":405,"title":"Make select() faster by batching reads","user":{"login":"mitchellgordon95","id":7490438,"node_id":"MDQ6VXNlcjc0OTA0Mzg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7490438?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mitchellgordon95","html_url":"https:\/\/github.com\/mitchellgordon95","followers_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/followers","following_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/orgs","repos_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/repos","events_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1594934385000,"updated_at":1595005544000,"closed_at":1595004686000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/405","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/405","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/405.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/405.patch"},"body":"Here's a benchmark:\r\n\r\n```\r\ndataset = nlp.load_dataset('bookcorpus', split='train')\r\n\r\nstart = time.time()\r\ndataset.select(np.arange(1000), reader_batch_size=1, load_from_cache_file=False)\r\nend = time.time()\r\nprint(f'{end - start}')\r\n\r\nstart = time.time()\r\ndataset.select(np.arange(1000), reader_batch_size=1000, load_from_cache_file=False)\r\nend = time.time()\r\nprint(f'{end - start}')\r\n```\r\n\r\nWithout batching, select takes around 1.27 seconds. With batching, it takes around 0.01 seconds. The slowness was upsetting me because dataset.shuffle() was supposed to take ~27 hours for bookcorpus. 
Now with the fix it takes ~2.5 hours (which still is pretty slow, but I'll open a separate issue for that).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/405\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/404","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/404\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/404\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/404\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/404","id":658400987,"node_id":"MDExOlB1bGxSZXF1ZXN0NDUwMzY4Mjg4","number":404,"title":"Add seed in metrics","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1594920425000,"updated_at":1595239955000,"closed_at":1595239954000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/404","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/404","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/404.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/404.patch"},"body":"With #361 we noticed that some metrics were not deterministic.\r\nIn this PR I allow the user to specify numpy's seed when instantiating a metric with `load_metric`.\r\nThe seed is set only when `compute` is called, and reset afterwards.\r\n\r\nMoreover when calling `compute` with the same metric instance (i.e. same experiment_id), the metric will always return the same results given the same inputs. 
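A short usage sketch of the seeding behaviour this PR (#404) describes; the call signature is assumed from the description, and the metric name, seed, and inputs are placeholders:

```python
import nlp

# Assumed usage: the seed is passed when instantiating the metric with load_metric,
# is applied only while compute() runs, and is reset afterwards.
metric = nlp.load_metric("my_metric", seed=42)  # metric name and seed are placeholders

preds, refs = [0, 1, 1], [0, 1, 0]  # toy inputs; the real shape depends on the metric
first = metric.compute(preds, refs)
second = metric.compute(preds, refs)
assert first == second  # same instance + same inputs -> same results
```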
This is the case even if the seed is was not specified by the user, as the previous seed is going to be reused.\r\n\r\nHowever, instantiating twice a metric (two different experiments) without specifying a seed can create different results.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/404\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/403","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/403\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/403\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/403\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/403","id":658325756,"node_id":"MDExOlB1bGxSZXF1ZXN0NDUwMzAzNjI2","number":403,"title":"return python objects instead of arrays by default","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1594914712000,"updated_at":1594985821000,"closed_at":1594985820000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/403","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/403","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/403.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/403.patch"},"body":"We were using to_pandas() to convert from arrow types, however it returns numpy arrays instead of python lists.\r\nI fixed it by using to_pydict\/to_pylist instead.\r\n\r\nFix #387 \r\nIt was mentioned in https:\/\/github.com\/huggingface\/transformers\/issues\/5729\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/403\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/402","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/402\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/402\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/402\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/402","id":658001288,"node_id":"MDExOlB1bGxSZXF1ZXN0NDUwMDI2NTE0","number":402,"title":"Search qa","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1594890010000,"updated_at":1594909620000,"closed_at":1594909619000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/402","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/402","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/402.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/402.patch"},"body":"add SearchQA dataset\r\n\r\n#336 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/402\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/401","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/401\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/401\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/401\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/401","id":657996252,"node_id":"MDExOlB1bGxSZXF1ZXN0NDUwMDIyNTc0","number":401,"title":"add 
web_questions","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["What does the `nlp-cli dummy_data` command returns ?","`test.json` -> `test` \r\nand \r\n`train.json` -> `train`\r\n\r\nas shown by the `nlp-cli dummy_data` command ;-)","LGTM for merge @lhoestq - I let you merge if you want to."],"created_at":1594889699000,"updated_at":1596694580000,"closed_at":1596694579000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/401","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/401","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/401.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/401.patch"},"body":"add Web Question dataset\r\n#336 \r\n\r\nMaybe @patrickvonplaten you can help with the dummy_data structure? 
it still broken","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/401\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/400","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/400\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/400\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/400\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/400","id":657975600,"node_id":"MDExOlB1bGxSZXF1ZXN0NDUwMDA1MDU5","number":400,"title":"Web questions","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1594888109000,"updated_at":1594889451000,"closed_at":1594888974000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/400","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/400","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/400.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/400.patch"},"body":"add the WebQuestion dataset\r\n#336 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/400\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/399","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/399\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/399\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/399\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/399","id":657841433,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ5ODkxNTEy","number":399,"title":"Spelling 
mistake","user":{"login":"BlancRay","id":9410067,"node_id":"MDQ6VXNlcjk0MTAwNjc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9410067?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/BlancRay","html_url":"https:\/\/github.com\/BlancRay","followers_url":"https:\/\/api.github.com\/users\/BlancRay\/followers","following_url":"https:\/\/api.github.com\/users\/BlancRay\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/BlancRay\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/BlancRay\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/BlancRay\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/BlancRay\/orgs","repos_url":"https:\/\/api.github.com\/users\/BlancRay\/repos","events_url":"https:\/\/api.github.com\/users\/BlancRay\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/BlancRay\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks!"],"created_at":1594874278000,"updated_at":1594882188000,"closed_at":1594882177000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/399","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/399","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/399.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/399.patch"},"body":"In \"Formatting the dataset\" part, \"The two toehr modifications...\" should be \"The two other modifications...\" ,the word \"other\" wrong spelled as \"toehr\".","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/399\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/398","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/398\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/398\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/398\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/398","id":657511962,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ5NjE1OTk1","number":398,"title":"Add inline 
links","user":{"login":"Bharat123rox","id":13381361,"node_id":"MDQ6VXNlcjEzMzgxMzYx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13381361?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Bharat123rox","html_url":"https:\/\/github.com\/Bharat123rox","followers_url":"https:\/\/api.github.com\/users\/Bharat123rox\/followers","following_url":"https:\/\/api.github.com\/users\/Bharat123rox\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Bharat123rox\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Bharat123rox\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Bharat123rox\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Bharat123rox\/orgs","repos_url":"https:\/\/api.github.com\/users\/Bharat123rox\/repos","events_url":"https:\/\/api.github.com\/users\/Bharat123rox\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Bharat123rox\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Do you mind adding a link to the much more extended pages on adding and sharing a dataset in the new documentation?","Sure, I will do that too"],"created_at":1594832644000,"updated_at":1595412862000,"closed_at":1595412862000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/398","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/398","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/398.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/398.patch"},"body":"Add inline links to `Contributing.md`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/398\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/397","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/397\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/397\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/397\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/397","id":657510856,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ5NjE1MDA4","number":397,"title":"Add contiguous 
sharding","user":{"login":"jarednielsen","id":4564897,"node_id":"MDQ6VXNlcjQ1NjQ4OTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4564897?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jarednielsen","html_url":"https:\/\/github.com\/jarednielsen","followers_url":"https:\/\/api.github.com\/users\/jarednielsen\/followers","following_url":"https:\/\/api.github.com\/users\/jarednielsen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jarednielsen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jarednielsen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jarednielsen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jarednielsen\/orgs","repos_url":"https:\/\/api.github.com\/users\/jarednielsen\/repos","events_url":"https:\/\/api.github.com\/users\/jarednielsen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jarednielsen\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1594832578000,"updated_at":1595005171000,"closed_at":1595005171000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/397","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/397","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/397.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/397.patch"},"body":"This makes dset.shard() play nice with nlp.concatenate_datasets(). When I originally wrote the shard() method, I was thinking about a distributed training scenario, but https:\/\/github.com\/huggingface\/nlp\/pull\/389 also uses it for splitting the dataset for distributed preprocessing.\r\n\r\nUsage:\r\n```\r\nnlp.concatenate_datasets([dset.shard(n, i, contiguous=True) for i in range(n)])\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/397\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/396","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/396\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/396\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/396\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/396","id":657477952,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ5NTg3MDQ4","number":396,"title":"Fix memory issue when doing 
select","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1594829704000,"updated_at":1594886852000,"closed_at":1594886851000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/396","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/396","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/396.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/396.patch"},"body":"We were passing the `nlp.Dataset` object to get the hash for the new dataset's file name.\r\nFix #395 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/396\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/395","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/395\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/395\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/395\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/395","id":657454983,"node_id":"MDU6SXNzdWU2NTc0NTQ5ODM=","number":395,"title":"Memory issue when doing 
select","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1594827818000,"updated_at":1594886851000,"closed_at":1594886851000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"As noticed in #389, the following code loads the entire wikipedia in memory.\r\n\r\n```python\r\nimport nlp\r\nw = nlp.load_dataset(\"wikipedia\", \"20200501.en\", split=\"train\")\r\nw.select([0])\r\n```\r\n\r\nThis is caused by [this line](https:\/\/github.com\/huggingface\/nlp\/blob\/master\/src\/nlp\/arrow_dataset.py#L626) for some reason, that tries to serialize the function with all the 
wikipedia data with it.\r\n\r\nIt's not the case with `.map` or `.filter`.\r\nHowever functions that are based on `.select` like `.shuffle`, `.shard`, `.train_test_split`, `.sort` are affected.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/395\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/394","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/394\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/394\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/394\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/394","id":657425548,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ5NTQzNTE0","number":394,"title":"Remove remaining nested dict","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1594825552000,"updated_at":1594885192000,"closed_at":1594885191000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/394","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/394","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/394.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/394.patch"},"body":"This PR deletes the remaining unnecessary nested dict \r\n#378 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/394\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/393","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/393\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/393\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/393\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/393","id":657330911,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ5NDY1MTAz","number":393,"title":"Fix extracted files directory for the 
DownloadManager","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1594817995000,"updated_at":1595005336000,"closed_at":1595005334000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/393","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/393","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/393.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/393.patch"},"body":"The cache dir was often cluttered by extracted files because of the download manager.\r\n\r\nFor downloaded files, we are using the `downloads` directory to make things easier to navigate, but extracted files were still placed at the root of the cache directory. 
To fix that I changed the directory for extracted files to cache_dir\/downloads\/extracted.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/393\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/392","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/392\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/392\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/392\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/392","id":657313738,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ5NDUwOTkx","number":392,"title":"Style change detection","user":{"login":"ghomasHudson","id":13795113,"node_id":"MDQ6VXNlcjEzNzk1MTEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13795113?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ghomasHudson","html_url":"https:\/\/github.com\/ghomasHudson","followers_url":"https:\/\/api.github.com\/users\/ghomasHudson\/followers","following_url":"https:\/\/api.github.com\/users\/ghomasHudson\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ghomasHudson\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ghomasHudson\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ghomasHudson\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ghomasHudson\/orgs","repos_url":"https:\/\/api.github.com\/users\/ghomasHudson\/repos","events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1594816334000,"updated_at":1595337516000,"closed_at":1595006003000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/392","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/392","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/392.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/392.patch"},"body":"Another [PAN task](https:\/\/pan.webis.de\/clef20\/pan20-web\/style-change-detection.html). This time about identifying when the style\/author changes in documents.\r\n\r\n- There's the possibility of adding the [PAN19](https:\/\/zenodo.org\/record\/3577602) and PAN18 style change detection tasks too (these are datasets whose labels are a subset of PAN20's). These would probably make more sense as separate datasets (like wmt is now)\r\n- I've converted the integer 0,1 values to a boolean\r\n- Using manually downloaded data again. 
This might be changed at some point following the discussion in https:\/\/github.com\/huggingface\/nlp\/pull\/349.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/392\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/391","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/391\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/391\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/391\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/391","id":656991432,"node_id":"MDU6SXNzdWU2NTY5OTE0MzI=","number":391,"title":"\ud83c\udf1f [Metric Request] WOOD score","user":{"login":"astariul","id":43774355,"node_id":"MDQ6VXNlcjQzNzc0MzU1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/43774355?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/astariul","html_url":"https:\/\/github.com\/astariul","followers_url":"https:\/\/api.github.com\/users\/astariul\/followers","following_url":"https:\/\/api.github.com\/users\/astariul\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/astariul\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/astariul\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/astariul\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/astariul\/orgs","repos_url":"https:\/\/api.github.com\/users\/astariul\/repos","events_url":"https:\/\/api.github.com\/users\/astariul\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/astariul\/received_events","type":"User","site_admin":false},"labels":[{"id":2459308248,"node_id":"MDU6TGFiZWwyNDU5MzA4MjQ4","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/metric%20request","name":"metric request","color":"d4c5f9","default":false,"description":"Requesting to add a new metric"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1594775797000,"updated_at":1603813408000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"WOOD score paper : https:\/\/arxiv.org\/pdf\/2007.06898.pdf\r\n\r\nAbstract :\r\n\r\n>Models that surpass human performance on several popular benchmarks display significant degradation in performance on exposure to Out of Distribution (OOD) data. Recent research has shown that models overfit to spurious biases and \u2018hack\u2019 datasets, in lieu of learning generalizable features like humans. 
In order to stop the inflation in model performance \u2013 and thus overestimation in AI systems\u2019 capabilities \u2013 we propose a simple and novel evaluation metric, WOOD Score, that encourages generalization during evaluation.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/391\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/390","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/390\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/390\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/390\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/390","id":656956384,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ5MTYxMzY3","number":390,"title":"Concatenate datasets","user":{"login":"jarednielsen","id":4564897,"node_id":"MDQ6VXNlcjQ1NjQ4OTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4564897?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jarednielsen","html_url":"https:\/\/github.com\/jarednielsen","followers_url":"https:\/\/api.github.com\/users\/jarednielsen\/followers","following_url":"https:\/\/api.github.com\/users\/jarednielsen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jarednielsen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jarednielsen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jarednielsen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jarednielsen\/orgs","repos_url":"https:\/\/api.github.com\/users\/jarednielsen\/repos","events_url":"https:\/\/api.github.com\/users\/jarednielsen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jarednielsen\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Looks cool :)\r\n\r\nI feel like \r\n```python\r\nconcatenated_dataset = dataset1.concatenate(dataset2)\r\n```\r\ncould be more natural. What do you think ?\r\n\r\nAlso could you also concatenate the `nlp.Dataset._data_files` ?\r\n```python\r\nreturn cls(table, info=info, split=split, data_files=self._data_files + other_dataset._data_files)\r\n```","I feel like \"WikiBooks\" would be a multi task dataset that could fit in the #217 discussion.\r\nNot sure concatenate should be the solution for a multi task dataset.","Thanks for the suggestion! `dset1.concatenate(dset2)` does feel more natural. Although this seems to be a different \"class\" of transformation function than map() or filter(), acting on two datasets rather than on one. I would prefer the function signature treat both datasets symmetrically.\r\n\r\nPython lists have `list1 + list2` or `list1.extend(list2)`.\r\nNumPy has `np.concatenate((arr1, arr2))`.\r\nPandas has `pd.join((df1, df2))`.\r\nPyTorch has `ConcatDataset((dset1, dset2))`.\r\n\r\nGiven the symmetrical treatment and clear communication that this creates a new object, rather than a simple chaining on the first, my preference is now for `nlp.concatenate((dset1, dset2))`. This would place the function in the same API class as `nlp.load_dataset`. Does that work?","The multi-task discussion is interesting, thanks for pointing me to that! 
I'll be focusing on T5 in a few weeks, so I'm sure I'll have many opinions then :). For now, I think a simple concatenate feature is important and orthogonal to that discussion. For example, a user may want to create a custom dataset that joins Wikipedia with their own custom text.","> Given the symmetrical treatment and clear communication that this creates a new object, rather than a simple chaining on the first, my preference is now for `nlp.concatenate((dset1, dset2))`. This would place the function in the same API class as `nlp.load_dataset`. Does that work?\r\n\r\nYep I like this idea. Maybe `nlp.concatenate_datasets()` ?\r\n\r\n> For now, I think a simple concatenate feature is important and orthogonal to that discussion. For example, a user may want to create a custom dataset that joins Wikipedia with their own custom text.\r\n\r\nI agree :)","Great, just updated!"],"created_at":1594769077000,"updated_at":1595411398000,"closed_at":1595411398000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/390","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/390","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/390.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/390.patch"},"body":"I'm constructing the \"WikiBooks\" dataset, which is a concatenation of Wikipedia & BookCorpus. So I implemented the `Dataset.from_concat()` method, which concatenates two datasets with the same schema.\r\n\r\nThis would also be useful if someone wants to pretrain on a large generic dataset + their own custom dataset. Not in love with the method name, so would love to hear suggestions.\r\n\r\nUsage:\r\n```python\r\nfrom nlp import Dataset, load_dataset\r\n\r\ndata1, data2 = {\"id\": [0, 1, 2]}, {\"id\": [3, 4, 5]}\r\ndset1, dset2 = Dataset.from_dict(data1), Dataset.from_dict(data2)\r\ndset_concat = Dataset.from_concat([dset1, dset2])\r\nprint(dset_concat)\r\n# Dataset(schema: {'id': 'int64'}, num_rows: 6)\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/390\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/389","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/389\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/389\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/389\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/389","id":656921768,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ5MTMyOTU5","number":389,"title":"Fix pickling of 
SplitDict","user":{"login":"mitchellgordon95","id":7490438,"node_id":"MDQ6VXNlcjc0OTA0Mzg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7490438?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mitchellgordon95","html_url":"https:\/\/github.com\/mitchellgordon95","followers_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/followers","following_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/orgs","repos_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/repos","events_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mitchellgordon95\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["By the way, the reason this is an issue for me is because I want to be able to \"save\" changes made to a dataset by writing something to disk. In this case, I would like to pre-process my dataset once, and then train multiple models on the dataset later without having to re-process the data. \r\n\r\nIs pickling\/unpickling the Dataset object the \"sanctioned\" way of doing this? Or is there a better way that I'm missing?","I've had success with saving datasets to disk via:\r\n\r\n```python\r\ncache_file = \"\/my\/dset.cache\"\r\ndset = dset.map(whatever, cache_file_name=cache_file)\r\n# then, later\r\ndset = nlp.Dataset.from_file(cache_file)\r\n```\r\n\r\nThis restores the dataset with all the attributes I need.","Thanks @jarednielsen, that makes sense. I'm a little wary of messing with the cache files, since I still don't really understand what's going on under the hood with Apache Arrow. \r\n\r\nRelated question: I'd like to do parallel pre-processing of the dataset. I know how to break the dataset up via sharding, but is there any way to combine the shards back together again once the processing is done? Right now I'm probably just going to iterate over each shard, write the contexts to a txt file, and then cat the txt files, but it feels like there ought to be a nicer way to concatenate datasets.","Haha, opened a PR for that functionality about an hour ago: https:\/\/github.com\/huggingface\/nlp\/pull\/390. Glad we're on the same page :)","Datasets are not supposed to be pickled as pickle tries to put all the dataset in memory if I'm not wrong (and write all the data on disk).\r\nThe concatenate method however is a very cool feature, looking forward to having it merged :)","Ah, yes, you are correct. The pickle file contains the whole dataset, not just the cache names, which is not quite what I expected.\r\n\r\nI tried adding a warning when pickling a Dataset, to prevent others like me from trying it. Interestingly, however, the warning is raised whenever any function on the dataset is called (select, shard, etc.). 
\r\n\r\n```\r\nimport nlp\r\nwiki = nlp.load_dataset('wikipedia', split='train')\r\nwiki = wiki.shard(16, 0) # Triggers pickling of dataset\r\n```\r\n\r\nI believe this is because [this line](https:\/\/github.com\/huggingface\/nlp\/blob\/master\/src\/nlp\/arrow_dataset.py#L626), which gets the function signature, is actually pickling the whole dataset (and thereby serializing all the data to text). I checked by printing that string, and sure enough it was full of Wikipedia articles.\r\n\r\nI don't think the whole pickling thing is worth the effort, so I'll close the PR. But I did want to mention this serialization behavior in case it's not intended.","Thanks for reporting. Indeed this line shouldn't serialize the data but only the function itself.\r\n","Keeping this open because I would like to keep brainstorming a bit on this.\r\n\r\nOne note on this is that we should have a clean serialization workflow, probably one that could serialize to a few formats (arrow, parquet and tfrecords come to mind).","This PR could be useful. My specific use case is `multiprocessing.Pool` for parallel preprocessing (because of the Python tokenization bottleneck at https:\/\/github.com\/huggingface\/transformers\/issues\/5729). I shard a large dataset, run map on each shard within a multiprocessing pool, and then concatenate them back together. This is only possible if a dataset can be pickled, otherwise the logic is much more complex. There's no reason to make it un-picklable, even if it's not the recommended usage.\r\n\r\n```python\r\nimport nlp\r\nimport multiprocessing\r\n\r\ndef func(ex):\r\n return {\"text\": \"Prefix: \" + ex[\"text\"]}\r\n\r\ndef map_helper(dset):\r\n return dset.map(func)\r\n\r\nn_shards = 16\r\ndset = nlp.load_dataset(\"wikitext-2-raw-v1\", split=\"train\")\r\nwith multiprocessing.Pool(processes=n_shards) as pool:\r\n shards = pool.map(map_helper, [dset.shard(n_shards, i, contiguous=True) for i in range(n_shards)])\r\ndset = nlp.concatenate_datasets(shards)\r\n```\r\n","Yes I agree.\r\n#423 just got merged and should allow serialization of `SplitDict`. Could you try it and see if it'ok on your side now ?","Closing this, assuming it was fixed in #423."],"created_at":1594763619000,"updated_at":1596551890000,"closed_at":1596551890000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/389","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/389","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/389.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/389.patch"},"body":"It would be nice to pickle and unpickle Datasets, as done in [this tutorial](https:\/\/github.com\/patil-suraj\/exploring-T5\/blob\/master\/T5_on_TPU.ipynb). Example:\r\n\r\n```\r\nwiki = nlp.load_dataset('wikipedia', split='train')\r\ndef sentencize(examples):\r\n ...\r\n\r\nwiki = wiki.map(sentencize, batched=True)\r\ntorch.save(wiki, 'sentencized_wiki_dataset.pt')\r\n```\r\n\r\nHowever, upon unpickling the dataset via torch.load(...), this error is raised:\r\n\r\n```\r\nValueError(\"Cannot add elem. Use .add() instead.\")\r\n```\r\nOn line [492 of splits.py](https:\/\/github.com\/huggingface\/nlp\/blob\/master\/src\/nlp\/splits.py#L492). This is because SplitDict subclasses dict, and pickle treats [dicts specially](https:\/\/github.com\/huggingface\/nlp\/blob\/master\/src\/nlp\/splits.py#L492). 
Pickle expects access to `dict.__setitem__`, but this is disallowed by the class.\r\n\r\nThe workaround is to provide an explicit interface for pickle to call when pickling and unpickling, thereby avoiding the use of `__setitem__`.\r\n\r\nTesting:\r\n- Manually pickled and unpickled a modified wikipedia dataset.\r\n- Ran `make style`\r\n\r\nI would be happy to run any other tests, but I couldn't find any in the contributing guidelines.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/389\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/388","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/388\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/388\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/388\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/388","id":656707497,"node_id":"MDU6SXNzdWU2NTY3MDc0OTc=","number":388,"title":"\ud83d\udc1b [Dataset] Cannot download wmt14, wmt15 and wmt17","user":{"login":"SamuelCahyawijaya","id":2826602,"node_id":"MDQ6VXNlcjI4MjY2MDI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2826602?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/SamuelCahyawijaya","html_url":"https:\/\/github.com\/SamuelCahyawijaya","followers_url":"https:\/\/api.github.com\/users\/SamuelCahyawijaya\/followers","following_url":"https:\/\/api.github.com\/users\/SamuelCahyawijaya\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/SamuelCahyawijaya\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/SamuelCahyawijaya\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/SamuelCahyawijaya\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/SamuelCahyawijaya\/orgs","repos_url":"https:\/\/api.github.com\/users\/SamuelCahyawijaya\/repos","events_url":"https:\/\/api.github.com\/users\/SamuelCahyawijaya\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/SamuelCahyawijaya\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the 
library"}],"state":"open","locked":false,"assignee":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"assignees":[{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["similar slow download speed here for nlp.load_dataset('wmt14', 'fr-en')\r\n`\r\nDownloading: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 658M\/658M [1:00:42<00:00, 181kB\/s]\r\nDownloading: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 918M\/918M [1:39:38<00:00, 154kB\/s]\r\nDownloading: 2%|\u2589 | 40.9M\/2.37G [04:48<5:03:06, 128kB\/s]\r\n`\r\nCould we just download a specific subdataset in 'wmt14', such as 'newstest14'? 
","> The code runs but the download speed is extremely slow, the same behaviour is not observed on wmt16 and wmt18\r\n\r\nThe original source for the files may provide slow download speeds.\r\nWe can probably host these files ourselves.\r\n\r\n> When trying to download wmt17 zh-en, I got the following error:\r\n> ConnectionError: Couldn't reach https:\/\/storage.googleapis.com\/tfdataset-data\/downloadataset\/uncorpus\/UNv1.0.en-zh.tar.gz\r\n\r\nLooks like the file`UNv1.0.en-zh.tar.gz` is missing, or the url changed. We need to fix that\r\n\r\n> Could we just download a specific subdataset in 'wmt14', such as 'newstest14'?\r\n\r\nRight now I don't think it's possible. Maybe @patrickvonplaten knows more about it\r\n","Yeah, the download speed is sadly always extremely slow :-\/. \r\nI will try to check out the `wmt17 zh-en` bug :-) ","Maybe this can be used - https:\/\/stuncorpusprod.blob.core.windows.net\/corpusfiles\/UNv1.0.en-zh.tar.gz.00 "],"created_at":1594741001000,"updated_at":1596639392000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"1. I try downloading `wmt14`, `wmt15`, `wmt17`, `wmt19` with the following code:\r\n```\r\nnlp.load_dataset('wmt14','de-en')\r\nnlp.load_dataset('wmt15','de-en')\r\nnlp.load_dataset('wmt17','de-en')\r\nnlp.load_dataset('wmt19','de-en')\r\n```\r\nThe code runs but the download speed is **extremely slow**, the same behaviour is not observed on `wmt16` and `wmt18`\r\n\r\n2. When trying to download `wmt17 zh-en`, I got the following error:\r\n> ConnectionError: Couldn't reach https:\/\/storage.googleapis.com\/tfdataset-data\/downloadataset\/uncorpus\/UNv1.0.en-zh.tar.gz","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/388\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/387","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/387\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/387\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/387\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/387","id":656361357,"node_id":"MDU6SXNzdWU2NTYzNjEzNTc=","number":387,"title":"Conversion through to_pandas output numpy arrays for lists instead of python 
objects","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["To convert from arrow type we have three options: to_numpy, to_pandas and to_pydict\/to_pylist.\r\n\r\n- to_numpy and to_pandas return numpy arrays instead of lists but are very fast.\r\n- to_pydict\/to_pylist can be 100x slower and become the bottleneck for reading data, but at least they return lists.\r\n\r\nMaybe we can have to_pydict\/to_pylist as the default and use to_numpy or to_pandas when the format (set by `set_format`) is 'numpy' or 'pandas'"],"created_at":1594707841000,"updated_at":1594985820000,"closed_at":1594985820000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"In a related question, the conversion through to_pandas output numpy arrays for the lists instead of python objects.\r\n\r\nHere is an example:\r\n```python\r\n>>> dataset._data.slice(key, 1).to_pandas().to_dict(\"list\")\r\n{'sentence1': ['Amrozi accused his brother , whom he called \" the witness \" , of deliberately distorting his evidence .'], 'sentence2': ['Referring to him as only \" the witness \" , Amrozi accused his brother of deliberately distorting his evidence .'], 'label': [1], 'idx': [0], 'input_ids': [array([ 101, 7277, 2180, 5303, 4806, 1117, 1711, 117, 2292,\r\n 1119, 1270, 107, 1103, 7737, 107, 117, 1104, 9938,\r\n 4267, 12223, 21811, 1117, 2554, 119, 102])], 'token_type_ids': [array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\r\n 0, 0, 0])], 'attention_mask': [array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1])]}\r\n>>> type(dataset._data.slice(key, 1).to_pandas().to_dict(\"list\")['input_ids'][0])\r\n<class 'numpy.ndarray'>\r\n>>> dataset._data.slice(key, 1).to_pydict()\r\n{'sentence1': ['Amrozi accused his brother , whom he called \" the witness \" , of deliberately distorting his evidence .'], 'sentence2': ['Referring to him as only \" the witness \" , Amrozi accused his brother of deliberately distorting his evidence .'], 'label': [1], 'idx': [0], 'input_ids': [[101, 7277, 2180, 5303, 4806, 1117, 1711, 117, 2292, 1119, 1270, 107, 1103, 7737, 107, 117, 1104, 9938, 4267, 12223, 21811, 1117, 2554, 119, 102]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 
1]]}\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/387\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/386","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/386\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/386\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/386\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/386","id":655839067,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ4MjQ1NDI4","number":386,"title":"Update dataset loading and features - Add TREC dataset","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I just copied the files that are on google storage to follow the new `_relative_data_dir ` format. It should be good to merge now :)\r\n\r\nWell actually it seems there are some merge conflicts to fix first"],"created_at":1594645818000,"updated_at":1594887478000,"closed_at":1594887478000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/386","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/386","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/386.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/386.patch"},"body":"This PR:\r\n- add a template for a new dataset script\r\n- update the caching structure so that the path to the cached data files is also a function of the dataset loading script hash. This way when you update a loading script the data will be automatically updated instead of falling back to the previous version (which is usually a outdated). 
This makes it in particular easier to iterate when writing a new dataset loading script.\r\n- fix a bug in the `ClassLabel` feature and make it more flexible so that its methods `str2int` and `int2str` can also accept list, numpy arrays and PyTorch\/TensorFlow tensors.\r\n- add the TREC-6 dataset","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/386\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/385","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/385\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/385\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/385\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/385","id":655663997,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ4MTAzMjY5","number":385,"title":"Remove unnecessary nested dict","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["We can probably scan the dataset scripts with a regexpr to try to identify this pattern cc @patrickvonplaten maybe","@mariamabarham This script should work. I tested it for a couple of datasets. 
There might be exceptions where the script breaks - did not test everything.\r\n\r\n```python\r\n#!\/usr\/bin\/env python3\r\n\r\nfrom nlp import prepare_module, DownloadConfig, import_main_class, hf_api\r\nimport tempfile\r\n\r\n\r\ndef scan_for_nested_unnecessary_dict(dataset_name):\r\n\r\n def load_builder_class(dataset_name):\r\n module_path = prepare_module(dataset_name, download_config=DownloadConfig(force_download=True))\r\n return import_main_class(module_path)\r\n\r\n def load_configs(dataset_name):\r\n builder_cls = load_builder_class(dataset_name)\r\n if len(builder_cls.BUILDER_CONFIGS) == 0:\r\n return [None]\r\n return builder_cls.BUILDER_CONFIGS\r\n\r\n def scan_features_for_nested_dict(features):\r\n is_sequence = False\r\n if hasattr(features, \"_type\"):\r\n if features._type != 'Sequence':\r\n return False\r\n else:\r\n is_sequence = True\r\n features = features.feature\r\n\r\n if isinstance(features, list):\r\n for value in features:\r\n if scan_features_for_nested_dict(value):\r\n return True\r\n return False\r\n\r\n elif isinstance(features, dict):\r\n for key, value in features.items():\r\n if is_sequence and len(features.keys()) == 1 and hasattr(features[key], \"_type\") and features[key]._type != \"Sequence\":\r\n return True\r\n if scan_features_for_nested_dict(value):\r\n return True\r\n return False\r\n elif hasattr(features, \"_type\"):\r\n return False\r\n else:\r\n raise ValueError(f\"{features} should be either a list, a dict or a feature\")\r\n\r\n configs = load_configs(dataset_name)\r\n\r\n for config in configs:\r\n with tempfile.TemporaryDirectory() as processed_temp_dir:\r\n # create config and dataset\r\n dataset_builder_cls = load_builder_class(dataset_name)\r\n name = config.name if config is not None else None\r\n dataset_builder = dataset_builder_cls(name=name, cache_dir=processed_temp_dir)\r\n\r\n is_nested_dict_in_dataset = scan_features_for_nested_dict(dataset_builder._info().features)\r\n if is_nested_dict_in_dataset:\r\n print(f\"{dataset_name} with {name} needs refactoring\")\r\n\r\n\r\nif __name__ == \"__main__\":\r\n scan_for_nested_unnecessary_dict(\"race\") # prints True\r\n scan_for_nested_unnecessary_dict(\"mlqa\") # prints True\r\n scan_for_nested_unnecessary_dict(\"squad\") # prints Nothing\r\n\r\n # ran the following lines for 1min and seems to work -> didn't check for all datasets though\r\n# api = hf_api.HfApi()\r\n# all_datasets = [x.id for x in api.dataset_list(with_community_datasets=False)]\r\n# for dataset in all_datasets:\r\n# scan_for_nested_unnecessary_dict(dataset)\r\n```","> @mariamabarham This script should work. I tested it for a couple of datasets. 
There might be exceptions where the script breaks - did not test everything.\r\n> \r\n> ```python\r\n> #!\/usr\/bin\/env python3\r\n> \r\n> from nlp import prepare_module, DownloadConfig, import_main_class, hf_api\r\n> import tempfile\r\n> \r\n> \r\n> def scan_for_nested_unnecessary_dict(dataset_name):\r\n> \r\n> def load_builder_class(dataset_name):\r\n> module_path = prepare_module(dataset_name, download_config=DownloadConfig(force_download=True))\r\n> return import_main_class(module_path)\r\n> \r\n> def load_configs(dataset_name):\r\n> builder_cls = load_builder_class(dataset_name)\r\n> if len(builder_cls.BUILDER_CONFIGS) == 0:\r\n> return [None]\r\n> return builder_cls.BUILDER_CONFIGS\r\n> \r\n> def scan_features_for_nested_dict(features):\r\n> is_sequence = False\r\n> if hasattr(features, \"_type\"):\r\n> if features._type != 'Sequence':\r\n> return False\r\n> else:\r\n> is_sequence = True\r\n> features = features.feature\r\n> \r\n> if isinstance(features, list):\r\n> for value in features:\r\n> if scan_features_for_nested_dict(value):\r\n> return True\r\n> return False\r\n> \r\n> elif isinstance(features, dict):\r\n> for key, value in features.items():\r\n> if is_sequence and len(features.keys()) == 1 and hasattr(features[key], \"_type\") and features[key]._type != \"Sequence\":\r\n> return True\r\n> if scan_features_for_nested_dict(value):\r\n> return True\r\n> return False\r\n> else:\r\n> raise ValueError(f\"{features} should be either a list of a dict\")\r\n> \r\n> configs = load_configs(dataset_name)\r\n> \r\n> for config in configs:\r\n> with tempfile.TemporaryDirectory() as processed_temp_dir:\r\n> # create config and dataset\r\n> dataset_builder_cls = load_builder_class(dataset_name)\r\n> name = config.name if config is not None else None\r\n> dataset_builder = dataset_builder_cls(name=name, cache_dir=processed_temp_dir)\r\n> \r\n> is_nested_dict_in_dataset = scan_features_for_nested_dict(dataset_builder._info().features)\r\n> if is_nested_dict_in_dataset:\r\n> print(f\"{dataset_name} with {name} needs refactoring\")\r\n> \r\n> \r\n> if __name__ == \"__main__\":\r\n> scan_for_nested_unnecessary_dict(\"race\") # prints True\r\n> scan_for_nested_unnecessary_dict(\"mlqa\") # prints True\r\n> scan_for_nested_unnecessary_dict(\"squad\") # prints Nothing\r\n> \r\n> # ran the following lines for 1min and seems to work -> didn't check for all datasets though\r\n> # api = hf_api.HfApi()\r\n> # all_datasets = [x.id for x in api.dataset_list(with_community_datasets=False)]\r\n> # for dataset in all_datasets:\r\n> # scan_for_nested_unnecessary_dict(dataset)\r\n> ```\r\n\r\nGreat, I will try it","I'm not sure the work on this PR was finished @lhoestq cc @mariamabarham @patrickvonplaten ","Sorry for that, apparently there are other datasets that could have unnecessary nested dicts.\r\nWe can have another PR to scan and fix the other datasets.\r\n"],"created_at":1594629983000,"updated_at":1594812458000,"closed_at":1594807433000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/385","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/385","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/385.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/385.patch"},"body":"This PR is removing unnecessary nested dictionary used in some datasets. 
For now the following datasets are updated:\r\n\r\n- MLQA\r\n\r\n- RACE\r\n\r\nWill be adding more if necessary.\r\n\r\n#378 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/385\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/383","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/383\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/383\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/383\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/383","id":655291201,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ3ODI0OTky","number":383,"title":"Adding the Linguistic Code-switching Evaluation (LinCE) benchmark","user":{"login":"gaguilar","id":5833357,"node_id":"MDQ6VXNlcjU4MzMzNTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5833357?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gaguilar","html_url":"https:\/\/github.com\/gaguilar","followers_url":"https:\/\/api.github.com\/users\/gaguilar\/followers","following_url":"https:\/\/api.github.com\/users\/gaguilar\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gaguilar\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gaguilar\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gaguilar\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gaguilar\/orgs","repos_url":"https:\/\/api.github.com\/users\/gaguilar\/repos","events_url":"https:\/\/api.github.com\/users\/gaguilar\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gaguilar\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I am checking the details of the CI log for the failed test, but I don't see how the error relates to the code I added; the error is coming from a config builder different than the `LinceConfig`, and it crashes when `self.config.data_files` because is self.config is None. 
I would appreciate if someone could help me find out where I could have messed things up :)\r\n\r\nAlso, the real and dummy data tests passed before committing and pushing my changes.\r\n\r\nThanks a lot in advance!\r\n\r\n```\r\n=================================== FAILURES ===================================\r\n____________________ AWSDatasetTest.test_load_dataset_text _____________________\r\n\r\nself = <tests.test_dataset_common.AWSDatasetTest testMethod=test_load_dataset_text>\r\ndataset_name = 'text'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs)\r\n\r\ntests\/test_dataset_common.py:243: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests\/test_dataset_common.py:137: in check_load_dataset\r\n try_from_hf_gcs=False,\r\n..\/.local\/lib\/python3.6\/site-packages\/nlp\/builder.py:432: in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n..\/.local\/lib\/python3.6\/site-packages\/nlp\/builder.py:466: in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = <nlp.datasets.text.bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b.text.Text object at 0x7efa744ffb70>\r\ndl_manager = <nlp.utils.mock_download_manager.MockDownloadManager object at 0x7efb304c52b0>\r\n\r\n def _split_generators(self, dl_manager):\r\n \"\"\" The `datafiles` kwarg in load_dataset() can be a str, List[str], Dict[str,str], or Dict[str,List[str]].\r\n \r\n If str or List[str], then the dataset returns only the 'train' split.\r\n If dict, then keys should be from the `nlp.Split` enum.\r\n \"\"\"\r\n if isinstance(self.config.data_files, (str, list, tuple)):\r\n # Handle case with only one split\r\n files = self.config.data_files\r\n if isinstance(files, str):\r\n files = [files]\r\n return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={\"files\": files})]\r\n else:\r\n # Handle case with several splits and a dict mapping\r\n splits = []\r\n for split_name in [nlp.Split.TRAIN, nlp.Split.VALIDATION, nlp.Split.TEST]:\r\n> if split_name in self.config.data_files:\r\nE TypeError: argument of type 'NoneType' is not iterable\r\n\r\n..\/.local\/lib\/python3.6\/site-packages\/nlp\/datasets\/text\/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b\/text.py:24: TypeError\r\n=============================== warnings summary ===============================\r\n... \r\n=========================== short test summary info ============================\r\nFAILED tests\/test_dataset_common.py::AWSDatasetTest::test_load_dataset_text\r\n====== 1 failed, 963 passed, 532 skipped, 5 warnings in 166.33s (0:02:46) ======\r\n\r\nExited with code exit status 1\r\n```","@lhoestq Hi Quentin, I was wondering if you could give some feedback on this error from the `run_dataset_script_tests` script. It seems that's coming from a different config builder than the one I added, so I am not sure why this error would occur. Thanks in advance!","Awesome! Thank you for all your comments! 
\ud83d\udc4c I will update the PR in a bit with all the required changes \ud83d\ude42 \r\n\r\nLet me just provide a bit of context for my changes:\r\n\r\nI was referring to the GLUE, XTREME and WNUT_17 dataset scripts to build mine (not sure if the new documentation was available last week). This is where I took the naming convention for the citation and description variables. Also, these scripts didn't have the `BUILDER_CONFIG_CLASS = LinceConfig` line so I commented this out thinking I didn't need that; I tried this line in my attempts to make the real and dummy data tests pass but it was not helping. \r\n\r\nThe problem I was facing was that the tests were passing a default `BuilderConfig` (i.e., `self.config.name` property was set to `'default'` and my custom properties were not available). This means, for example, that within the `def _info(...)` method, I was not able to access the specific fields of my `LinceConfig` class (which is why I have now a global variable `_LINCE_CITATIONS`, to detach the individual citations from the corresponding LinceConfig objects, as well as I am constructing manually the feature infos). This default `BuilderConfig` is why I added the `if not isinstance(self.config, LinceConfig): return []` statement. Otherwise, accessing custom properties like `self.config.colnames` was failing the test because such properties did not exist in the default config (i.e., it was not a `LinceConfig`).\r\n\r\nI will update the PR and see if these problems happen in the CI tests.\r\n\r\nThanks again for the follow-up! @lhoestq ","Ok I see !\r\n\r\nTo give you more details: the line `BUILDER_CONFIG_CLASS = LinceConfig` tells the tests how to instantiate a config for this dataset. Therefore if you have this line you should have all the fields of your config available.\r\n\r\nTo fix the errors you get you'll have to, first, have the `BUILDER_CONFIG_CLASS = LinceConfig` line, and second, add default values for the parameters of your config (or the tests functions will be unable to instantiate it by calling `LinceConfig()`.\r\n\r\nAn example of dataset with a custom config with additional filed like this one is [biomrc](https:\/\/github.com\/huggingface\/nlp\/blob\/master\/datasets\/biomrc\/biomrc.py).\r\nFeel free to give a look at it if you want.","Thanks for the reference!\r\n\r\nI just updated the PR with the suggested changes. It seems the CI failed on the same test you said we could ignore, so I guess it's okay :) \r\n\r\nPlease let me know if there is something else I may need to change."],"created_at":1594506920000,"updated_at":1594916386000,"closed_at":1594916386000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/383","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/383","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/383.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/383.patch"},"body":"Hi,\r\n\r\nFirst of all, this library is really cool! Thanks for putting all of this together! \r\n\r\nThis PR contains the [Linguistic Code-switching Evaluation (LinCE) benchmark](https:\/\/ritual.uh.edu\/lince). As described in the official website (FAQ):\r\n\r\n> 1. Why do we need LinCE?\r\n>LinCE brings 10 code-switching datasets together for 4 tasks and 4 language pairs with 5 leaderboards in a single evaluation platform. 
We examined each dataset and fixed major issues on the partitions (or even define official partitions) with a comprehensive stratification method (see our paper for more details).\r\n>Besides, we believe that online benchmarks like LinCE bring steady research progress and allow to compare state-of-the-art models at the pace of the progress in NLP. We expect to benefit greatly the code-switching community with this benchmark.\r\n\r\n\r\nThe data comes from social media and here's the summary table of tasks per language pair:\r\n\r\n| Language Pairs | LID | POS | NER | SA |\r\n|----------------------------------------|-----|-----|-----|----|\r\n| Spanish-English | \u2705 | \u2705 | \u2705 | \u2705 |\r\n| Hindi-English | \u2705 | \u2705 | \u2705 | |\r\n| Modern Standard Arabic-Egyptian Arabic | \u2705 | | \u2705 | |\r\n| Nepali-English | \u2705 | | | |\r\n\r\nThe tasks are as follows:\r\n* LID: token-level language identification\r\n* POS: part-of-speech tagging\r\n* NER: named entity recognition\r\n* SA: sentiment analysis\r\n\r\nWith the exception of MSA-EA, the rest of the datasets contain token-level LID labels.\r\n\r\n## Usage\r\n\r\nFor Spanish-English LID, we can load the data as follows:\r\n```\r\nimport nlp\r\n\r\ndata = nlp.load_dataset('.\/datasets\/lince\/lince.py', 'lid_spaeng')\r\n\r\nfor split in data:\r\n print(data[split])\r\n```\r\n\r\nHere's the output:\r\n```\r\nDataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 21030)\r\nDataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 3332)\r\nDataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 8289)\r\n```\r\n\r\nHere's the list of shortcut names for every dataset available in LinCE:\r\n* `lid_spaeng`\r\n* `lid_hineng`\r\n* `lid_nepeng`\r\n* `lid_msaea`\r\n* `pos_spaeng`\r\n* `pos_hineng`\r\n* `ner_spaeng`\r\n* `ner_hineng`\r\n* `ner_msaea`\r\n* `sa_spaeng`\r\n\r\n\r\nAll the numbers match with Table 3 in the LinCE [paper](https:\/\/www.aclweb.org\/anthology\/2020.lrec-1.223.pdf). 
Also, note that the MSA-EA datasets use the Persian script while the other datasets use the Roman script.\r\n\r\n\r\n## Features\r\n\r\nHere is how the features look in the case of language identification (LID) tasks:\r\n\r\n| LID Feature | Type | Description |\r\n|----------------------|---------------|-------------------------------------------|\r\n| `idx` | `int` | Dataset index of current sentence |\r\n| `tokens` | `list<str>` | List of tokens (string) of a sentence |\r\n| `lid` | `list<str>` | List of LID labels (string) of a sentence |\r\n\r\nFor part-of-speech (POS) tagging:\r\n\r\n| POS Feature | Type | Description |\r\n|----------------------|---------------|-------------------------------------------|\r\n| `idx` | `int` | Dataset index of current sentence |\r\n| `tokens` | `list<str>` | List of tokens (string) of a sentence |\r\n| `lid` | `list<str>` | List of LID labels (string) of a sentence |\r\n| `pos` | `list<str>` | List of POS tags (string) of a sentence |\r\n\r\nFor named entity recognition (NER):\r\n\r\n| NER Feature | Type | Description |\r\n|----------------------|---------------|-------------------------------------------|\r\n| `idx` | `int` | Dataset index of current sentence |\r\n| `tokens` | `list<str>` | List of tokens (string) of a sentence |\r\n| `lid` | `list<str>` | List of LID labels (string) of a sentence |\r\n| `ner` | `list<str>` | List of NER labels (string) of a sentence |\r\n\r\n**NOTE**: the MSA-EA NER dataset does not contain the `lid` feature.\r\n\r\nFor sentiment analysis (SA):\r\n\r\n| SA Feature | Type | Description |\r\n|---------------------|-------------|-------------------------------------------|\r\n| `idx` | `int` | Dataset index of current sentence |\r\n| `tokens` | `list<str>` | List of tokens (string) of a sentence |\r\n| `lid` | `list<str>` | List of LID labels (string) of a sentence |\r\n| `sa` | `str` | Sentiment label (string) of a sentence |\r\n\r\n\r\n\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/383\/timeline","performed_via_github_app":null,"is_pull_request":true} 
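For reference, here is a minimal sketch of the config pattern discussed in the LinCE thread above, assuming the `nlp` library API of that period. It is not the actual LinCE script: extra fields such as `colnames`, `data_url` and `citation` are illustrative assumptions. The idea is a custom `BuilderConfig` subclass whose additional parameters all have default values, plus `BUILDER_CONFIG_CLASS` set on the builder so the test utilities instantiate a `LinceConfig` rather than a plain `BuilderConfig`.

```python
import nlp


class LinceConfig(nlp.BuilderConfig):
    """Hypothetical config sketch: every extra parameter has a default value
    so the test utilities can instantiate it with a bare LinceConfig() call."""

    def __init__(self, colnames=("idx", "tokens", "lid"), data_url="", citation="", **kwargs):
        # name, version and description are handled by the base BuilderConfig
        super(LinceConfig, self).__init__(**kwargs)
        self.colnames = colnames  # illustrative extra fields, not the real script's
        self.data_url = data_url
        self.citation = citation


class Lince(nlp.GeneratorBasedBuilder):
    # Tells the library (and the tests) which class to use when building a
    # config, so self.config is a LinceConfig with its custom fields in _info().
    BUILDER_CONFIG_CLASS = LinceConfig

    BUILDER_CONFIGS = [
        LinceConfig(name="lid_spaeng", version=nlp.Version("1.0.0"),
                    description="Spanish-English language identification"),
    ]
```

With defaults in place, a bare `LinceConfig()` call works, so the real and dummy data tests no longer fall back to a default `BuilderConfig` inside `_info()`.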
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/382","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/382\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/382\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/382\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/382","id":655290482,"node_id":"MDU6SXNzdWU2NTUyOTA0ODI=","number":382,"title":"1080","user":{"login":"saq194","id":60942503,"node_id":"MDQ6VXNlcjYwOTQyNTAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/60942503?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/saq194","html_url":"https:\/\/github.com\/saq194","followers_url":"https:\/\/api.github.com\/users\/saq194\/followers","following_url":"https:\/\/api.github.com\/users\/saq194\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/saq194\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/saq194\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/saq194\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/saq194\/orgs","repos_url":"https:\/\/api.github.com\/users\/saq194\/repos","events_url":"https:\/\/api.github.com\/users\/saq194\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/saq194\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1594506547000,"updated_at":1594507778000,"closed_at":1594507778000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/382\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/381","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/381\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/381\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/381\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/381","id":655277119,"node_id":"MDU6SXNzdWU2NTUyNzcxMTk=","number":381,"title":"NLp","user":{"login":"Spartanthor","id":68147610,"node_id":"MDQ6VXNlcjY4MTQ3NjEw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/68147610?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Spartanthor","html_url":"https:\/\/github.com\/Spartanthor","followers_url":"https:\/\/api.github.com\/users\/Spartanthor\/followers","following_url":"https:\/\/api.github.com\/users\/Spartanthor\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Spartanthor\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Spartanthor\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Spartanthor\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Spartanthor\/orgs","repos_url":"https:\/\/api.github.com\/users\/Spartanthor\/repos","events_url":"https:\/\/api.github.com\/users\/Spartanthor\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Spartanthor\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1594500614000,"updated_at":1594500639000,"closed_at":1594500639000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/381\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/378","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/378\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/378\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/378\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/378","id":655226316,"node_id":"MDU6SXNzdWU2NTUyMjYzMTY=","number":378,"title":"[dataset] Structure of MLQA seems unecessary 
nested","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Same for the RACE dataset: https:\/\/github.com\/huggingface\/nlp\/blob\/master\/datasets\/race\/race.py\r\n\r\nShould we scan all the datasets to remove this pattern of un-necessary nesting?","You're right, I think we don't need to use the nested dictionary. \r\n"],"created_at":1594480568000,"updated_at":1594829840000,"closed_at":1594829840000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"The features of the MLQA dataset comprise several nested dictionaries with a single element inside (for `questions` and `ids`): https:\/\/github.com\/huggingface\/nlp\/blob\/master\/datasets\/mlqa\/mlqa.py#L90-L97\r\n\r\nShould we keep this @mariamabarham @patrickvonplaten? 
Was this added for compatibility with tfds?\r\n\r\n```python\r\n features=nlp.Features(\r\n {\r\n \"context\": nlp.Value(\"string\"),\r\n \"questions\": nlp.features.Sequence({\"question\": nlp.Value(\"string\")}),\r\n \"answers\": nlp.features.Sequence(\r\n {\"text\": nlp.Value(\"string\"), \"answer_start\": nlp.Value(\"int32\"),}\r\n ),\r\n \"ids\": nlp.features.Sequence({\"idx\": nlp.Value(\"string\")})\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/378\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/377","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/377\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/377\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/377\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/377","id":655215790,"node_id":"MDU6SXNzdWU2NTUyMTU3OTA=","number":377,"title":"Iyy!!!","user":{"login":"ajinomoh","id":68154535,"node_id":"MDQ6VXNlcjY4MTU0NTM1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/68154535?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ajinomoh","html_url":"https:\/\/github.com\/ajinomoh","followers_url":"https:\/\/api.github.com\/users\/ajinomoh\/followers","following_url":"https:\/\/api.github.com\/users\/ajinomoh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ajinomoh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ajinomoh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ajinomoh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ajinomoh\/orgs","repos_url":"https:\/\/api.github.com\/users\/ajinomoh\/repos","events_url":"https:\/\/api.github.com\/users\/ajinomoh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ajinomoh\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1594476667000,"updated_at":1594477851000,"closed_at":1594477851000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/377\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/376","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/376\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/376\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/376\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/376","id":655047826,"node_id":"MDU6SXNzdWU2NTUwNDc4MjY=","number":376,"title":"to_pandas conversion doesn't always 
work","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["**Edit**: other topic previously in this message moved to a new issue: https:\/\/github.com\/huggingface\/nlp\/issues\/387","Could you try to update pyarrow to >=0.17.0 ? It should fix the `to_pandas` bug\r\n\r\nAlso I'm not sure that structures like list<struct> are fully supported in the lib (none of the datasets use that).\r\nIt can cause issues when using dataset transforms like `filter` for example"],"created_at":1594416811000,"updated_at":1595239845000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"For some complex nested types, the conversion from Arrow to python dict through pandas doesn't seem to be possible.\r\n\r\nHere is an example using the official SQUAD v2 JSON file.\r\n\r\nThis example was found while investigating #373.\r\n\r\n```python\r\n>>> squad = load_dataset('json', data_files={nlp.Split.TRAIN: [\".\/train-v2.0.json\"]}, download_mode=nlp.GenerateMode.FORCE_REDOWNLOAD, version=\"1.0.0\", field='data')\r\n>>> squad['train']\r\nDataset(schema: {'title': 'string', 'paragraphs': 'list<item: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>>'}, num_rows: 442)\r\n>>> squad['train'][0]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/Users\/thomwolf\/Documents\/GitHub\/datasets\/src\/nlp\/arrow_dataset.py\", line 589, in __getitem__\r\n format_kwargs=self._format_kwargs,\r\n File \"\/Users\/thomwolf\/Documents\/GitHub\/datasets\/src\/nlp\/arrow_dataset.py\", line 529, in _getitem\r\n outputs = self._unnest(self._data.slice(key, 1).to_pandas().to_dict(\"list\"))\r\n File \"pyarrow\/array.pxi\", line 559, in pyarrow.lib._PandasConvertible.to_pandas\r\n File \"pyarrow\/table.pxi\", line 1367, in pyarrow.lib.Table._to_pandas\r\n File \"\/Users\/thomwolf\/miniconda2\/envs\/datasets\/lib\/python3.7\/site-packages\/pyarrow\/pandas_compat.py\", line 766, in table_to_blockmanager\r\n blocks = _table_to_blocks(options, table, categories, ext_columns_dtypes)\r\n File \"\/Users\/thomwolf\/miniconda2\/envs\/datasets\/lib\/python3.7\/site-packages\/pyarrow\/pandas_compat.py\", line 1101, in _table_to_blocks\r\n list(extension_columns.keys()))\r\n File \"pyarrow\/table.pxi\", line 881, in pyarrow.lib.table_to_blocks\r\n File 
\"pyarrow\/error.pxi\", line 105, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowNotImplementedError: Not implemented type for Arrow list to pandas: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>\r\n```\r\n\r\ncc @lhoestq would we have a way to detect this from the schema maybe?\r\n\r\nHere is the schema for this pretty complex JSON:\r\n```python\r\n>>> squad['train'].schema\r\ntitle: string\r\nparagraphs: list<item: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>>\r\n child 0, item: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>\r\n child 0, qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>\r\n child 0, item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>\r\n child 0, question: string\r\n child 1, id: string\r\n child 2, answers: list<item: struct<text: string, answer_start: int64>>\r\n child 0, item: struct<text: string, answer_start: int64>\r\n child 0, text: string\r\n child 1, answer_start: int64\r\n child 3, is_impossible: bool\r\n child 4, plausible_answers: list<item: struct<text: string, answer_start: int64>>\r\n child 0, item: struct<text: string, answer_start: int64>\r\n child 0, text: string\r\n child 1, answer_start: int64\r\n child 1, context: string\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/376\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/375","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/375\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/375\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/375\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/375","id":655023307,"node_id":"MDU6SXNzdWU2NTUwMjMzMDc=","number":375,"title":"TypeError when computing 
bertscore","user":{"login":"willywsm1013","id":13269577,"node_id":"MDQ6VXNlcjEzMjY5NTc3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13269577?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/willywsm1013","html_url":"https:\/\/github.com\/willywsm1013","followers_url":"https:\/\/api.github.com\/users\/willywsm1013\/followers","following_url":"https:\/\/api.github.com\/users\/willywsm1013\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/willywsm1013\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/willywsm1013\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/willywsm1013\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/willywsm1013\/orgs","repos_url":"https:\/\/api.github.com\/users\/willywsm1013\/repos","events_url":"https:\/\/api.github.com\/users\/willywsm1013\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/willywsm1013\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I am not able to reproduce this issue on my side.\r\nCould you give us more details about the inputs you used ?\r\n\r\nI do get another error though:\r\n```\r\n~\/.virtualenvs\/hf-datasets\/lib\/python3.7\/site-packages\/bert_score\/utils.py in bert_cos_score_idf(model, refs, hyps, tokenizer, idf_dict, verbose, batch_size, device, all_layers)\r\n 371 return sorted(list(set(l)), key=lambda x: len(x.split(\" \")))\r\n 372 \r\n--> 373 sentences = dedup_and_sort(refs + hyps)\r\n 374 embs = []\r\n 375 iter_range = range(0, len(sentences), batch_size)\r\n\r\nValueError: operands could not be broadcast together with shapes (0,) (2,)\r\n```\r\nThat's because it gets numpy arrays as input and not lists. 
See #387 ","The other issue was fixed by #403 \r\n\r\nDo you still get this issue @willywsm1013 ?\r\n"],"created_at":1594413464000,"updated_at":1599490212000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi, \r\n\r\nI installed nlp 0.3.0 via pip, and my python version is 3.7.\r\nWhen I tried to compute bertscore with the code:\r\n```\r\nimport nlp \r\nbertscore = nlp.load_metric('bertscore') \r\n# load hyps and refs \r\n...\r\nprint (bertscore.compute(hyps, refs, lang='en'))\r\n```\r\n\r\nI got the following error.\r\n```\r\nTraceback (most recent call last):\r\n File \"bert_score_evaluate.py\", line 16, in <module>\r\n print (bertscore.compute(hyps, refs, lang='en'))\r\n File \"\/home\/willywsm\/anaconda3\/envs\/torcher\/lib\/python3.7\/site-packages\/nlp\/metric.py\", line 200, in compute\r\n output = self._compute(predictions=predictions, references=references, **metrics_kwargs)\r\n File \"\/home\/willywsm\/anaconda3\/envs\/torcher\/lib\/python3.7\/site-packages\/nlp\/metrics\/bertscore\/fb176889831bf0ce995ed197edc94b2e9a83f647a869bb8c9477dbb2d04d0f08\/bertscore.py\", line 105, in _compute\r\n hashcode = bert_score.utils.get_hash(model_type, num_layers, idf, rescale_with_baseline)\r\nTypeError: get_hash() takes 3 positional arguments but 4 were given\r\n```\r\n\r\nIt seems like there is something wrong with get_hash() function?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/375\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/374","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/374\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/374\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/374\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/374","id":654895066,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ3NTMxMzUy","number":374,"title":"Add dataset post processing for faiss indexes","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I changed the `wiki_dpr` script to ignore the last 24 examples for now. 
Hopefully we'll have the full version soon.\r\nThe datasets_infos.json and the data on GCS are updated.\r\n\r\nAnd I also added a check to make sure we don't have post processing resources in sub-directories.","I added a dummy config that can be loaded with:\r\n```python\r\nwiki = load_dataset(\"wiki_dpr\", \"dummy_psgs_w100_no_embeddings\", with_index=True, split=\"train\")\r\n```\r\nIt's only 6MB of arrow files and 30MB of index"],"created_at":1594398359000,"updated_at":1594647843000,"closed_at":1594647841000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/374","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/374","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/374.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/374.patch"},"body":"# Post processing of datasets for faiss indexes\r\n\r\nNow that we can have datasets with embeddings (see `wiki_dpr` for example), we can allow users to load the dataset + get the Faiss index that comes with it to do nearest neighbors queries.\r\n\r\n## Implementation proposition\r\n\r\n- Faiss indexes have to be added to the `nlp.Dataset` object, and therefore it's in a different scope than what the `_split_generators` and `_generate_examples` methods of `nlp.DatasetBuilder` are doing. Therefore I added a new method for post processing of the `nlp.Dataset` object called `_post_process` (name could change)\r\n- The role of `_post_process` is to apply dataset transforms (filter\/map etc.) or indexing functions (add_faiss_index) to modify\/enrich the `nlp.Dataset` object. It is not part of the `download_and_prepare` process (which is focused on arrow file creation), so the post processing is run inside the `as_dataset` method.\r\n- `_post_process` can generate new files (cached files from dataset transforms or serialized faiss indexes) and their names are defined by `_post_processing_resources`\r\n- as we know what the post processing resources are, we can download them automatically from google storage if they're available, instead of computing them (as we do for arrow files)\r\n\r\nI'd be happy to discuss these choices!\r\n\r\n## The `wiki_dpr` index\r\n\r\nIt takes 1h20 and ~7GB of memory to compute. 
The final index is 1.42GB and takes ~1.5GB of memory.\r\nThis is pretty cool given that a naive flat index would take 170GB of memory to store the 21M vectors of dim 768.\r\nI couldn't use directly the Faiss `index_factory` as I needed to set the metric to inner product.\r\n\r\n## Example of usage\r\n\r\n```python\r\nimport nlp\r\ndset = nlp.load_dataset(\r\n \"wiki_dpr\",\r\n \"psgs_w100_with_nq_embeddings\",\r\n split=\"train\",\r\n with_index=True\r\n)\r\nprint(len(dset), dset.list_indexes()) # (21015300, ['embeddings'])\r\n```\r\n\r\n(it also works with the dataset configuration without the embeddings because I added the index file in google storage for this one too)\r\n\r\n## Demo\r\n\r\nYou can also check a demo on google colab that shows how to use it with the DPRQuestionEncoder from transformers:\r\nhttps:\/\/colab.research.google.com\/drive\/1FakNU8W5EPMcWff7iP1H6REg3XSS0YLp?usp=sharing\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/374\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/373","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/373\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/373\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/373\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/373","id":654845133,"node_id":"MDU6SXNzdWU2NTQ4NDUxMzM=","number":373,"title":"Segmentation fault when loading local JSON dataset as of #372","user":{"login":"vegarab","id":24683907,"node_id":"MDQ6VXNlcjI0NjgzOTA3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24683907?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vegarab","html_url":"https:\/\/github.com\/vegarab","followers_url":"https:\/\/api.github.com\/users\/vegarab\/followers","following_url":"https:\/\/api.github.com\/users\/vegarab\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vegarab\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vegarab\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vegarab\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vegarab\/orgs","repos_url":"https:\/\/api.github.com\/users\/vegarab\/repos","events_url":"https:\/\/api.github.com\/users\/vegarab\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vegarab\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I've seen this sort of thing before -- it might help to delete the directory -- I've also noticed that there is an error with the json Dataloader for any data I've tried to load. 
I've replaced it with this, which skips over the data feature population step:\r\n\r\n\r\n```python\r\nimport os\r\n\r\nimport pyarrow.json as paj\r\n\r\nimport nlp as hf_nlp\r\n\r\nfrom nlp import DatasetInfo, BuilderConfig, SplitGenerator, Split, utils\r\nfrom nlp.arrow_writer import ArrowWriter\r\n\r\n\r\nclass JSONDatasetBuilder(hf_nlp.ArrowBasedBuilder):\r\n BUILDER_CONFIG_CLASS = BuilderConfig\r\n\r\n def _info(self):\r\n return DatasetInfo()\r\n\r\n def _split_generators(self, dl_manager):\r\n \"\"\" We handle string, list and dicts in datafiles\r\n \"\"\"\r\n if isinstance(self.config.data_files, (str, list, tuple)):\r\n files = self.config.data_files\r\n if isinstance(files, str):\r\n files = [files]\r\n return [SplitGenerator(name=Split.TRAIN, gen_kwargs={\"files\": files})]\r\n splits = []\r\n for split_name in [Split.TRAIN, Split.VALIDATION, Split.TEST]:\r\n if split_name in self.config.data_files:\r\n files = self.config.data_files[split_name]\r\n if isinstance(files, str):\r\n files = [files]\r\n splits.append(SplitGenerator(name=split_name, gen_kwargs={\"files\": files}))\r\n return splits\r\n\r\n def _prepare_split(self, split_generator):\r\n fname = \"{}-{}.arrow\".format(self.name, split_generator.name)\r\n fpath = os.path.join(self._cache_dir, fname)\r\n\r\n writer = ArrowWriter(path=fpath)\r\n\r\n generator = self._generate_tables(**split_generator.gen_kwargs)\r\n for key, table in utils.tqdm(generator, unit=\" tables\", leave=False):\r\n writer.write_table(table)\r\n num_examples, num_bytes = writer.finalize()\r\n\r\n split_generator.split_info.num_examples = num_examples\r\n split_generator.split_info.num_bytes = num_bytes\r\n\r\n def _generate_tables(self, files):\r\n for i, file in enumerate(files):\r\n pa_table = paj.read_json(\r\n file\r\n )\r\n yield i, pa_table\r\n\r\n```","Yes, deleting the directory solves the error whenever I try to rerun.\r\n\r\nBy replacing the json-loader, you mean the cached file in my `site-packages` directory? e.g. `\/home\/XXX\/.cache\/lib\/python3.7\/site-packages\/nlp\/datasets\/json\/(...)\/json.py` \r\n\r\nWhen I was testing this out before the #372 PR was merged I had issues installing it properly locally. Since the `json.py` script was downloaded instead of actually using the one provided in the local install. Manually updating that file seemed to solve it, but it didn't seem like a proper solution. Especially when having to run this on a remote compute cluster with no access to that directory.","I see, diving in the JSON file for SQuAD it's a pretty complex structure.\r\n\r\nThe best solution for you, if you have a dataset really similar to SQuAD would be to copy and modify the SQuAD data processing script. We will probably add soon an option to be able to specify file path to use instead of the automatic URL encoded in the script but in the meantime you can:\r\n- copy the [squad script](https:\/\/github.com\/huggingface\/nlp\/blob\/master\/datasets\/squad\/squad.py) in a new script for your dataset\r\n- in the new script replace [these `urls_to_download `](https:\/\/github.com\/huggingface\/nlp\/blob\/master\/datasets\/squad\/squad.py#L99-L102) by `urls_to_download=self.config.data_files`\r\n- load the dataset with `dataset = load_dataset('path\/to\/your\/new\/script', data_files={nlp.Split.TRAIN: \".\/datasets\/train-v2.0.json\"})`\r\n\r\nThis way you can reuse all the processing logic of the SQuAD loading script.","This seems like a more sensible solution! Thanks, @thomwolf. 
It's been a little daunting to understand what these scripts actually do, due to the level of abstraction and central documentation.\r\n\r\nAm I correct in assuming that the `_generate_examples()` function is the actual procedure for how the data is loaded from file? Meaning that essentially with a file containing another format, that is the only function that requires re-implementation? I'm working with a lot of datasets that, due to licensing and privacy, cannot be published. As this library is so neatly integrated with the transformers library and gives easy access to public sets such as SQUAD and increased performance, it is very neat to be able to load my private sets as well. As of now, I have just been working on scripts for translating all my data into the SQUAD-format before using the json script, but I see that it might not be necessary after all. ","Yes `_generate_examples()` is the main entry point. If you change the shape of the returned dictionary you also need to update the `features` in the `_info`.\r\n\r\nI'm currently writing the doc so it should be easier soon to use the library and know how to add your datasets.\r\n","Could you try to update pyarrow to >=0.17.0 @vegarab ?\r\nI don't have any segmentation fault with my version of pyarrow (0.17.1)\r\n\r\nI tested with\r\n```python\r\nimport nlp\r\ns = nlp.load_dataset(\"json\", data_files=\"train-v2.0.json\", field=\"data\", split=\"train\")\r\ns[0]\r\n# {'title': 'Normans', 'paragraphs': [{'qas': [{'question': 'In what country is Normandy located?', 'id':...\r\n```","Also if you want to have your own dataset script, we now have a new documentation !\r\nSee here:\r\nhttps:\/\/huggingface.co\/nlp\/add_dataset.html","@lhoestq \r\nFor some reason, I am not able to reproduce the segmentation fault, on pyarrow==0.16.0. Using the exact same environment and file.\r\n\r\nAnyhow, I discovered that pyarrow>=0.17.0 is required to read in a JSON file where the pandas structs contain lists. 
Otherwise, pyarrow complains when attempting to cast the struct:\r\n```py\r\nimport nlp\r\n>>> s = nlp.load_dataset(\"json\", data_files=\"datasets\/train-v2.0.json\", field=\"data\", split=\"train\")\r\nUsing custom data configuration default\r\n>>> s[0]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/home\/vegarab\/.conda\/envs\/torch\/lib\/python3.7\/site-packages\/nlp\/arrow_dataset.py\", line 558, in __getitem__\r\n format_kwargs=self._format_kwargs,\r\n File \"\/home\/vegarab\/.conda\/envs\/torch\/lib\/python3.7\/site-packages\/nlp\/arrow_dataset.py\", line 498, in _getitem\r\n outputs = self._unnest(self._data.slice(key, 1).to_pandas().to_dict(\"list\"))\r\n File \"pyarrow\/array.pxi\", line 559, in pyarrow.lib._PandasConvertible.to_pandas\r\n File \"pyarrow\/table.pxi\", line 1367, in pyarrow.lib.Table._to_pandas\r\n File \"\/home\/vegarab\/.conda\/envs\/torch\/lib\/python3.7\/site-packages\/pyarrow\/pandas_compat.py\", line 766, in table_to_blockmanager\r\n blocks = _table_to_blocks(options, table, categories, ext_columns_dtypes)\r\n File \"\/home\/vegarab\/.conda\/envs\/torch\/lib\/python3.7\/site-packages\/pyarrow\/pandas_compat.py\", line 1101, in _table_to_blocks\r\n list(extension_columns.keys()))\r\n File \"pyarrow\/table.pxi\", line 881, in pyarrow.lib.table_to_blocks\r\n File \"pyarrow\/error.pxi\", line 105, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowNotImplementedError: Not implemented type for Arrow list to pandas: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>\r\n>>> s\r\nDataset(schema: {'title': 'string', 'paragraphs': 'list<item: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>>'}, num_rows: 35)\r\n```\r\n\r\nUpgrading to >=0.17.0 provides the same dataset structure, but accessing the records is possible without the same exception. \r\n\r\n","Very happy to see some extended documentation! ","#376 seems to be reporting the same issue as mentioned above. ","This issue helped me a lot, thanks.\r\nHope this issue will be fixed soon."],"created_at":1594393465000,"updated_at":1608017240000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"The last issue was closed (#369) once the #372 update was merged. However, I'm still not able to load a SQuAD formatted JSON file. Instead of the previously recorded pyarrow error, I now get a segmentation fault. \r\n\r\n```\r\ndataset = nlp.load_dataset('json', data_files={nlp.Split.TRAIN: [\".\/datasets\/train-v2.0.json\"]}, field='data')\r\n```\r\ncauses\r\n```\r\nUsing custom data configuration default\r\nDownloading and preparing dataset json\/default (download: Unknown size, generated: Unknown size, total: Unknown size) to \/home\/XXX\/.cache\/huggingface\/datasets\/json\/default\/0.0.0...\r\n0 tables [00:00, ? 
tables\/s]Segmentation fault (core dumped)\r\n```\r\nwhere `.\/datasets\/train-v2.0.json` is downloaded directly from https:\/\/rajpurkar.github.io\/SQuAD-explorer\/.\r\nThis is consistent with other SQuAD-formatted JSON files.\r\n\r\nWhen attempting to load the dataset again, I get the following:\r\n```\r\nUsing custom data configuration default\r\nTraceback (most recent call last):\r\n File \"dataloader.py\", line 6, in <module>\r\n 'json', data_files={nlp.Split.TRAIN: [\".\/datasets\/train-v2.0.json\"]}, field='data')\r\n File \"\/home\/XXX\/.conda\/envs\/torch\/lib\/python3.7\/site-packages\/nlp\/load.py\", line 524, in load_dataset\r\n save_infos=save_infos,\r\n File \"\/home\/XXX\/.conda\/envs\/torch\/lib\/python3.7\/site-packages\/nlp\/builder.py\", line 382, in download_and_prepare\r\n with incomplete_dir(self._cache_dir) as tmp_data_dir:\r\n File \"\/home\/XXX\/.conda\/envs\/torch\/lib\/python3.7\/contextlib.py\", line 112, in __enter__\r\n return next(self.gen)\r\n File \"\/home\/XXX\/.conda\/envs\/torch\/lib\/python3.7\/site-packages\/nlp\/builder.py\", line 368, in incomplete_dir\r\n os.makedirs(tmp_dir)\r\n File \"\/home\/XXX\/.conda\/envs\/torch\/lib\/python3.7\/os.py\", line 223, in makedirs\r\n mkdir(name, mode)\r\nFileExistsError: [Errno 17] File exists: '\/home\/XXX\/.cache\/huggingface\/datasets\/json\/default\/0.0.0.incomplete'\r\n```\r\n\r\n(Not sure if you wanted this in the previous issue #369 or not as it was closed.)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/373\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/372","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/372\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/372\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/372\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/372","id":654774420,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ3NDMzNTA4","number":372,"title":"Make the json script more 
flexible","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1594386915000,"updated_at":1594392727000,"closed_at":1594392726000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/372","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/372","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/372.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/372.patch"},"body":"Fix https:\/\/github.com\/huggingface\/nlp\/issues\/359\r\nFix https:\/\/github.com\/huggingface\/nlp\/issues\/369\r\n\r\nJSON script now can accept JSON files containing a single dict with the records as a list in one attribute to the dict (previously it only accepted JSON files containing records as rows of dicts in the file).\r\n\r\nIn this case, you should indicate using `field=XXX` the name of the field in the JSON structure which contains the records you want to load. The records can be a dict of lists or a list of dicts.\r\n\r\nE.g. 
to load the SQuAD dataset JSON (without using the `squad` specific dataset loading script), in which the data rows are in the `data` field of the JSON dict, you can do:\r\n```python\r\nfrom nlp import load_dataset\r\ndataset = load_dataset('json', data_files='\/PATH\/TO\/JSON', field='data')\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/372\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/371","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/371\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/371\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/371\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/371","id":654668242,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ3MzQ4NDgw","number":371,"title":"Fix cached file path for metrics with different config names","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for the fast fix!"],"created_at":1594375344000,"updated_at":1594388722000,"closed_at":1594388720000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/371","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/371","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/371.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/371.patch"},"body":"The config name was not taken into account to build the cached file path.\r\nIt should fix #368 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/371\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/370","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/370\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/370\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/370\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/370","id":654304193,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ3MDU3NTIw","number":370,"title":"Allow indexing Dataset via 
np.ndarray","user":{"login":"jarednielsen","id":4564897,"node_id":"MDQ6VXNlcjQ1NjQ4OTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4564897?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jarednielsen","html_url":"https:\/\/github.com\/jarednielsen","followers_url":"https:\/\/api.github.com\/users\/jarednielsen\/followers","following_url":"https:\/\/api.github.com\/users\/jarednielsen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jarednielsen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jarednielsen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jarednielsen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jarednielsen\/orgs","repos_url":"https:\/\/api.github.com\/users\/jarednielsen\/repos","events_url":"https:\/\/api.github.com\/users\/jarednielsen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jarednielsen\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Looks like a flaky CI, failed download from S3."],"created_at":1594323795000,"updated_at":1594389944000,"closed_at":1594389943000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/370","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/370","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/370.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/370.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/370\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/369","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/369\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/369\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/369\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/369","id":654186890,"node_id":"MDU6SXNzdWU2NTQxODY4OTA=","number":369,"title":"can't load local dataset: pyarrow.lib.ArrowInvalid: straddling object straddles two block 
boundaries","user":{"login":"vegarab","id":24683907,"node_id":"MDQ6VXNlcjI0NjgzOTA3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/24683907?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/vegarab","html_url":"https:\/\/github.com\/vegarab","followers_url":"https:\/\/api.github.com\/users\/vegarab\/followers","following_url":"https:\/\/api.github.com\/users\/vegarab\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/vegarab\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/vegarab\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/vegarab\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/vegarab\/orgs","repos_url":"https:\/\/api.github.com\/users\/vegarab\/repos","events_url":"https:\/\/api.github.com\/users\/vegarab\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/vegarab\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I am able to reproduce this with the official SQuAD `train-v2.0.json` file downloaded directly from https:\/\/rajpurkar.github.io\/SQuAD-explorer\/","I am facing this issue in transformers library 3.0.2 while reading a csv using datasets.\r\nIs this fixed in latest version? \r\nI updated the latest version 4.0.1 but still getting this error. What could cause this error?"],"created_at":1594311413000,"updated_at":1608073642000,"closed_at":1594392726000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Trying to load a local SQuAD-formatted dataset (from a JSON file, about 60MB):\r\n```\r\ndataset = nlp.load_dataset(path='json', data_files={nlp.Split.TRAIN: [\".\/path\/to\/file.json\"]})\r\n```\r\ncauses\r\n```\r\nTraceback (most recent call last):\r\n File \"dataloader.py\", line 9, in <module>\r\n [\".\/path\/to\/file.json\"]})\r\n File \"\/home\/XXX\/.conda\/envs\/torch\/lib\/python3.7\/site-packages\/nlp\/load.py\", line 524, in load_dataset\r\n save_infos=save_infos,\r\n File \"\/home\/XXX\/.conda\/envs\/torch\/lib\/python3.7\/site-packages\/nlp\/builder.py\", line 432, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/home\/XXX\/.conda\/envs\/torch\/lib\/python3.7\/site-packages\/nlp\/builder.py\", line 483, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/home\/XXX\/.conda\/envs\/torch\/lib\/python3.7\/site-packages\/nlp\/builder.py\", line 719, in _prepare_split\r\n for key, table in utils.tqdm(generator, unit=\" tables\", leave=False):\r\n File \"\/home\/XXX\/.conda\/envs\/torch\/lib\/python3.7\/site-packages\/tqdm\/std.py\", line 1129, in __iter__\r\n for obj in iterable:\r\n File \"\/home\/XXX\/.conda\/envs\/torch\/lib\/python3.7\/site-packages\/nlp\/datasets\/json\/88c1bc5c68489f7eda549ed05a5a738527c613b3e7a4ee3524d9d233353a949b\/json.py\", line 53, in _generate_tables\r\n file, read_options=self.config.pa_read_options, parse_options=self.config.pa_parse_options,\r\n File \"pyarrow\/_json.pyx\", line 191, in pyarrow._json.read_json\r\n File \"pyarrow\/error.pxi\", line 85, in 
pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)\r\n```\r\n\r\nI haven't been able to find any reports of this specific pyarrow error here or elsewhere. ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/369\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/368","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/368\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/368\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/368\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/368","id":654087251,"node_id":"MDU6SXNzdWU2NTQwODcyNTE=","number":368,"title":"load_metric can't acquire lock anymore","user":{"login":"ydshieh","id":2521628,"node_id":"MDQ6VXNlcjI1MjE2Mjg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2521628?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ydshieh","html_url":"https:\/\/github.com\/ydshieh","followers_url":"https:\/\/api.github.com\/users\/ydshieh\/followers","following_url":"https:\/\/api.github.com\/users\/ydshieh\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ydshieh\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ydshieh\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ydshieh\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ydshieh\/orgs","repos_url":"https:\/\/api.github.com\/users\/ydshieh\/repos","events_url":"https:\/\/api.github.com\/users\/ydshieh\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ydshieh\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I found that, in the same process (or the same interactive session), if I do\r\n\r\nimport nlp\r\n\r\nm1 = nlp.load_metric('glue', 'mrpc')\r\nm2 = nlp.load_metric('glue', 'sst2')\r\n\r\nI will get the same error `ValueError: Cannot acquire lock, caching file might be used by another process, you should setup a unique 'experiment_id'`."],"created_at":1594303449000,"updated_at":1594388720000,"closed_at":1594388720000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I can't load metric (glue) anymore after an error in a previous run. I even removed the whole cache folder `\/home\/XXX\/.cache\/huggingface\/`, and the issue persisted. 
What are the steps to fix this?\r\n\r\n Traceback (most recent call last):\r\n File \"\/home\/XXX\/miniconda3\/envs\/ML-DL-py-3.7\/lib\/python3.7\/site-packages\/nlp\/metric.py\", line 101, in __init__\r\n self.filelock.acquire(timeout=1)\r\n File \"\/home\/XXX\/miniconda3\/envs\/ML-DL-py-3.7\/lib\/python3.7\/site-packages\/filelock.py\", line 278, in acquire\r\n raise Timeout(self._lock_file)\r\n filelock.Timeout: The file lock '\/home\/XXX\/.cache\/huggingface\/metrics\/glue\/1.0.0\/1-glue-0.arrow.lock' could not be acquired.\r\n\r\n During handling of the above exception, another exception occurred:\r\n\r\n Traceback (most recent call last):\r\n File \"examples_huggingface_nlp.py\", line 268, in <module>\r\n main()\r\n File \"examples_huggingface_nlp.py\", line 242, in main\r\n dataset, metric = get_dataset_metric(glue_task)\r\n File \"examples_huggingface_nlp.py\", line 77, in get_dataset_metric\r\n metric = nlp.load_metric('glue', glue_config, experiment_id=1)\r\n File \"\/home\/XXX\/miniconda3\/envs\/ML-DL-py-3.7\/lib\/python3.7\/site-packages\/nlp\/load.py\", line 440, in load_metric\r\n **metric_init_kwargs,\r\n File \"\/home\/XXX\/miniconda3\/envs\/ML-DL-py-3.7\/lib\/python3.7\/site-packages\/nlp\/metric.py\", line 104, in __init__\r\n \"Cannot acquire lock, caching file might be used by another process, \"\r\n ValueError: Cannot acquire lock, caching file might be used by another process, you should setup a unique 'experiment_id' for this run.\r\n I0709 15:54:41.008838 139854118430464 filelock.py:318] Lock 139852058030936 released on \/home\/XXX\/.cache\/huggingface\/metrics\/glue\/1.0.0\/1-glue-0.arrow.lock\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/368\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/367","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/367\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/367\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/367\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/367","id":654012984,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ2ODIxNTAz","number":367,"title":"Update Xtreme to add PAWS-X 
es","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1594296877000,"updated_at":1594298231000,"closed_at":1594298230000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/367","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/367","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/367.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/367.patch"},"body":"This PR adds the `PAWS-X.es` in the Xtreme dataset #362 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/367\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/366","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/366\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/366\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/366\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/366","id":653954896,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ2NzcyODE2","number":366,"title":"Add quora dataset","user":{"login":"ghomasHudson","id":13795113,"node_id":"MDQ6VXNlcjEzNzk1MTEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13795113?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ghomasHudson","html_url":"https:\/\/github.com\/ghomasHudson","followers_url":"https:\/\/api.github.com\/users\/ghomasHudson\/followers","following_url":"https:\/\/api.github.com\/users\/ghomasHudson\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ghomasHudson\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ghomasHudson\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ghomasHudson\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ghomasHudson\/orgs","repos_url":"https:\/\/api.github.com\/users\/ghomasHudson\/repos","events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Tests 
seem to be failing because of pandas","Kaggle needs authentification to download datasets. We don't have a way to handle that in the lib for now"],"created_at":1594290862000,"updated_at":1594661721000,"closed_at":1594661721000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/366","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/366","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/366.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/366.patch"},"body":"Added the [Quora question pairs dataset](https:\/\/www.quora.com\/q\/quoradata\/First-Quora-Dataset-Release-Question-Pairs).\r\n\r\nImplementation Notes:\r\n- I used the original version provided on the quora website. There's also a [Kaggle competition](https:\/\/www.kaggle.com\/c\/quora-question-pairs) which has a nice train\/test split but I can't find an easy way to download it.\r\n- I've made the questions into a list:\r\n ```python\r\n {\r\n \"questions\": [\r\n {\"id\":0, \"text\": \"Is this an example question?\"},\r\n {\"id\":1, \"text\": \"Is this a sample question?\"},\r\n ],\r\n ...\r\n }\r\n ```\r\n rather than:\r\n ```python\r\n {\r\n \"question1\": \"Is this an example question?\",\r\n \"question2\": \"Is this a sample question?\"\r\n \"qid0\": 0\r\n \"qid1\": 1\r\n ...\r\n }\r\n ```\r\n Not sure if this was the right call.\r\n- Can't find a good citation for this dataset","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/366\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/365","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/365\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/365\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/365\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/365","id":653845964,"node_id":"MDU6SXNzdWU2NTM4NDU5NjQ=","number":365,"title":"How to augment data ?","user":{"login":"astariul","id":43774355,"node_id":"MDQ6VXNlcjQzNzc0MzU1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/43774355?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/astariul","html_url":"https:\/\/github.com\/astariul","followers_url":"https:\/\/api.github.com\/users\/astariul\/followers","following_url":"https:\/\/api.github.com\/users\/astariul\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/astariul\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/astariul\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/astariul\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/astariul\/orgs","repos_url":"https:\/\/api.github.com\/users\/astariul\/repos","events_url":"https:\/\/api.github.com\/users\/astariul\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/astariul\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Using batched map is probably the easiest way at the moment.\r\nWhat kind of augmentation would you like to do ?","Some samples in the dataset are too long, I want to divide them in several samples.","Using batched 
map is the way to go then.\r\nWe'll make it clearer in the docs that map could be used for augmentation.\r\n\r\nLet me know if you think there should be another way to do it. Or feel free to close the issue otherwise.","It just feels awkward to use map to augment data. Also it means it's not possible to augment data in a non-batched way.\r\n\r\nBut to be honest I have no idea of a good API...","Or for non-batched samples, how about returning a tuple ?\r\n\r\n```python\r\ndef aug(sample):\r\n # Simply copy the existing data to have x2 amount of data\r\n return sample, sample\r\n\r\ndataset = dataset.map(aug)\r\n```\r\n\r\nIt feels really natural and easy, but :\r\n\r\n* it means the behavior with batched data is different\r\n* I don't know how doable it is backend-wise\r\n\r\n@lhoestq ","As we're working with arrow's columnar format we prefer to play with batches that are dictionaries instead of tuples.\r\nIf we have tuple it implies to re-format the data each time we want to write to arrow, which can lower the speed of map for example.\r\n\r\nIt's also a matter of coherence, as we don't want users to be confused whether they have to return dictionaries for some functions and tuples for others when they're doing batches."],"created_at":1594281157000,"updated_at":1594372327000,"closed_at":1594369335000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Is there any clean way to augment data ?\r\n\r\nFor now my work-around is to use batched map, like this :\r\n\r\n```python\r\ndef aug(samples):\r\n # Simply copy the existing data to have x2 amount of data\r\n for k, v in samples.items():\r\n samples[k].extend(v)\r\n return samples\r\n\r\ndataset = dataset.map(aug, batched=True)\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/365\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/364","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/364\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/364\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/364\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/364","id":653821597,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ2NjY0NzM5","number":364,"title":"add MS MARCO 
dataset","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The dummy data for v2.1 is missing as far as I can see. I think running the dummy data command should work correctly here. ","Also, it might be that the structure of the dummy data is wrong - looking at `generate_examples` the structure does not look too easy.","The fact that the dummy data for v2.1 is missing shouldn't make the test fails I think. But as you mention the dummy data structure of v1.1 is wrong. I tried to rename files but it does not solve the issue.","Is MS mARCO added to nlp library?I am not able to view it?","> Is MS mARCO added to nlp library?I am not able to view it?\r\n\r\nHi @parthplc ,the PR is not merged yet. The dummy data structure is still failing. Maybe @patrickvonplaten can help with it.","Dataset is fixed and should be ready for use. @mariamabarham @lhoestq feel free to merge whenever!","> Dataset is fixed and should be ready for use. @mariamabarham @lhoestq feel free to merge whenever!\r\n\r\nthanks"],"created_at":1594278679000,"updated_at":1596694549000,"closed_at":1596694548000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/364","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/364","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/364.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/364.patch"},"body":"This PR adds the MS MARCO dataset as requested in this issue #336. MS mARCO has multiple task including:\r\n\r\n- Passage and Document Retrieval\r\n\r\n- Keyphrase Extraction\r\n\r\n- QA and NLG\r\n\r\nThis PR only adds the 2 versions of the QA and NLG task dataset which was realeased with the original paper here https:\/\/arxiv.org\/pdf\/1611.09268.pdf \r\n\r\nTests are failing because of the dummy data. I tried to fix it without success. Can you please have a look at it? 
@patrickvonplaten , @lhoestq ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/364\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/363","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/363\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/363\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/363\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/363","id":653821172,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ2NjY0NDIy","number":363,"title":"Adding support for generic multi dimensional tensors and auxillary image data for multimodal datasets","user":{"login":"eltoto1219","id":14030663,"node_id":"MDQ6VXNlcjE0MDMwNjYz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14030663?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/eltoto1219","html_url":"https:\/\/github.com\/eltoto1219","followers_url":"https:\/\/api.github.com\/users\/eltoto1219\/followers","following_url":"https:\/\/api.github.com\/users\/eltoto1219\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/eltoto1219\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/eltoto1219\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/eltoto1219\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/eltoto1219\/orgs","repos_url":"https:\/\/api.github.com\/users\/eltoto1219\/repos","events_url":"https:\/\/api.github.com\/users\/eltoto1219\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/eltoto1219\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thank you! I just marked this as a draft PR. It probably would be better to create specific Array2D and Array3D classes as needed instead of a generic MultiArray for now, it should simplify the code a lot too so, I'll update it as such. Also i was meaning to reply earlier, but I wanted to thank you for the testing script you sent me earlier since it ended up being tremendously helpful. ","Okay, I just converted the MultiArray class to Array2D, and got rid of all those \"globals()\"! \r\n\r\nThe main issues I had were that when including a \"pa.ExtensionType\" as a column, the ordinary methods to batch the data would not work and it would throw me some mysterious error, so I first cleaned up my code to order the row to match the schema (because when including extension types the row is disordered ) and then made each row a pa.Table and then concatenated all the tables. Also each n-dimensional vector class we implement will be size invariant which is some good news. ","Okay awesome! I just added your suggestions and changed up my recursive functions. 
\r\n\r\nHere is the traceback for the when I use the original code in the write_on_file method:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 33, in <module>\r\n File \"\/home\/eltoto\/nlp\/src\/nlp\/arrow_writer.py\", line 214, in finalize\r\n self.write_on_file()\r\n File \"\/home\/eltoto\/nlp\/src\/nlp\/arrow_writer.py\", line 134, in write_on_file\r\n pa_array = pa.array(self.current_rows, type=self._type)\r\n File \"pyarrow\/array.pxi\", line 269, in pyarrow.lib.array\r\n File \"pyarrow\/array.pxi\", line 38, in pyarrow.lib._sequence_to_array\r\n File \"pyarrow\/error.pxi\", line 106, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>\r\n\r\nshell returned 1\r\n```\r\n\r\nI think when trying to cast an extension array within a list of dictionaries, some method gets called that bugs out Arrow and somehow doesn't get called when adding a single row to a a table and then appending multiple tables together. I tinkered with this for a while but could not find any workaround. \r\n\r\nIn the case that this new method causes bad compression\/worse performance, we can explicitly set the batch size in the pa.Table.to_batches(***batch_size***) method, which will return a list of batches. Perhaps, we can check that the batch size is not too large converting the table to batches after X many rows are appended to it by following the batch_size check below.","> I think when trying to cast an extension array within a list of dictionaries, some method gets called that bugs out Arrow and somehow doesn't get called when adding a single row to a a table and then appending multiple tables together. I tinkered with this for a while but could not find any workaround.\r\n\r\nIndeed that's weird.\r\n\r\n> In the case that this new method causes bad compression\/worse performance, we can explicitly set the batch size in the pa.Table.to_batches(batch_size) method, which will return a list of batches. Perhaps, we can check that the batch size is not too large converting the table to batches after X many rows are appended to it by following the batch_size check below.\r\n\r\nThe argument of `pa.Table.to_batches` is not `batch_size` but `max_chunksize`, which means that right now it would have no effects (each chunk is of length 1).\r\n\r\nWe can fix that just by doing `entries.combine_chunks().to_batches(batch_size)`. In that case it would write by chunk of 1000 which is what we want. I don't think it will slow down the writing by much, but we may have to do a benchmark just to make sure. If speed is ok we could even replace the original code to always write chunks this way.\r\n\r\nDo you still have errors that need to be fixed ?","@lhoestq Nope all should be good! \r\n\r\nWould you like me to add the entries.combine_chunks().to_batch_size() code + benchmark?","> @lhoestq Nope all should be good!\r\n\r\nAwesome :)\r\n\r\nI think it would be good to start to add some tests then.\r\nYou already have `test_multi_array.py` which is a good start, maybe you can place it in \/tests and make it a `unittest.TestCase` ?\r\n\r\n> Would you like me to add the entries.combine_chunks().to_batch_size() code + benchmark?\r\n\r\nThat would be interesting. We don't want reading\/writing to be the bottleneck of dataset processing for example in terms of speed. 
Maybe we could test the write + read speed of different datasets:\r\n- write speed + read speed a dataset with `nlp.Array2D` features\r\n- write speed + read speed a dataset with `nlp.Sequence(nlp.Sequence(nlp.Value(\"float32\")))` features\r\n- write speed + read speed a dataset with `nlp.Sequence(nlp.Value(\"float32\"))` features (same data but flatten)\r\nIt will be interesting to see the influence of `.combine_chunks()` on the `Array2D` test too.\r\n\r\nWhat do you think ?","Well actually it looks like we're still having the `print(dataset[0])` error no ?","I just tested your code to try to understand better.\r\n\r\n\r\n- First thing you must know is that we've switched from `dataset._data.to_pandas` to `dataset._data.to_pydict` by default when we call `dataset[0]` in #423 . Right now it raises an error but it can be fixed by adding this method to `ExtensionArray2D`:\r\n\r\n```python\r\n def to_pylist(self):\r\n return self.to_numpy().tolist()\r\n```\r\n\r\n- Second, I noticed that `ExtensionArray2D.to_numpy()` always return a (5, 5) shape in your example. I thought `ExtensionArray` was for possibly multiple examples and so I was expecting a shape like (1, 5, 5) for example. Did I miss something ?\r\nTherefore when I apply the fix I mentioned (adding to_pylist), it returns one example per row in each image (in your example of 2 images of shape 5x5, I get `len(dataset._data.to_pydict()[\"image\"]) == 10 # True`)\r\n\r\n[EDIT] I changed the reshape step in `ExtensionArray2D.to_numpy()` by\r\n```python\r\nnumpy_arr = numpy_arr.reshape(len(self), *ExtensionArray2D._construct_shape(self.storage))\r\n```\r\nand it did the job: `len(dataset._data.to_pydict()[\"image\"]) == 2 # True`\r\n\r\n- Finally, I was able to make `to_pandas` work though, by implementing custom array dtype in pandas with arrow conversion (I got inspiration from [here](https:\/\/gist.github.com\/Eastsun\/a59fb0438f65e8643cd61d8c98ec4c08) and [here](https:\/\/pandas.pydata.org\/pandas-docs\/version\/1.0.0\/development\/extending.html#compatibility-with-apache-arrow))\r\n\r\nMaybe you could add me in your repo so I can open a PR to add these changes to your branch ?","`combine_chunks` doesn't seem to work btw:\r\n`ArrowNotImplementedError: concatenation of extension<arrow.py_extension_type>`","> > @lhoestq Nope all should be good!\r\n> \r\n> Awesome :)\r\n> \r\n> I think it would be good to start to add some tests then.\r\n> You already have `test_multi_array.py` which is a good start, maybe you can place it in \/tests and make it a `unittest.TestCase` ?\r\n> \r\n> > Would you like me to add the entries.combine_chunks().to_batch_size() code + benchmark?\r\n> \r\n> That would be interesting. We don't want reading\/writing to be the bottleneck of dataset processing for example in terms of speed. Maybe we could test the write + read speed of different datasets:\r\n> \r\n> * write speed + read speed a dataset with `nlp.Array2D` features\r\n> * write speed + read speed a dataset with `nlp.Sequence(nlp.Sequence(nlp.Value(\"float32\")))` features\r\n> * write speed + read speed a dataset with `nlp.Sequence(nlp.Value(\"float32\"))` features (same data but flatten)\r\n> It will be interesting to see the influence of `.combine_chunks()` on the `Array2D` test too.\r\n> \r\n> What do you think ?\r\n\r\nYa! that should be no problem at all, Ill use the timeit module and get back to you with the results sometime over the weekend.","Thank you for all your help getting the pandas and row indexing for the dataset to work! 
For `print(dataset[0])`, I considered the workaround of doing `print(dataset[\"col_name\"][0])` a temporary solution, but ya, I was never able to figure out how to previously get it to work. I'll add you to my repo right now, let me know if you do not see the invite. Also lastly, it is strange how the to_batches method is not working, so I can check that out while I add some speed tests + add the multi dim test under the unit tests this weekend. ","I created the PR :)\r\nI also tested `to_batches` and it works on my side","Sorry for the bit of delay! I just added the tests, the PR into my fork, and some speed tests. It should be fairly easy to add more tests if we need. Do you think there is anything else to checkout?","Cool thanks for adding the tests :) \r\n\r\nNext step is merge master into this branch.\r\nNot sure I understand what you did in your last commit, but it looks like you discarded all the changes from master ^^'\r\n\r\nWe've done some changes in the features logic on master, so let me know if you need help merging it.\r\n\r\nAs soon as we've merged from master, we'll have to make sure that we have extensive tests and we'll be good to do !\r\nAbout the lxmert dataset, we can probably keep it for another PR as soon as we have working 2d features. What do you think ?","We might want to merge this after tomorrow's release though to avoid potential side effects @lhoestq ","Yep I'm sure we can have it not for tomorrow's release but for the next one ;)","haha, when I tried to rebase, I ran into some conflicts. In that last commit, I restored the features.py from the previous commit on the branch in my fork because upon updating to master, the pandasdtypemanger and pandas extension types disappeared. If you actually could help me with merging in what is needed, that would actually help a lot. \r\n\r\nOther than that, ya let me go ahead and move the dataloader code out of this PR. Perhaps we could discuss in the slack channelk soon about what to do with that because we can either just support the pretraining corpus for lxmert or try to implement the full COCO and visual genome datasets (+VQA +GQA) which im sure people would be pretty happy about. \r\n\r\nAlso we can talk more tests soon too when you are free. 
\r\n\r\nGoodluck on the release tomorrow guys!","Not sure why github thinks there are conflicts here, as I just rebased from the current master branch.\r\nMerging into master locally works on my side without conflicts\r\n```\r\ngit checkout master\r\ngit reset --hard origin\/master\r\ngit merge --no-ff eltoto1219\/support_multi_dim_tensors_for_images\r\nMerge made by the 'recursive' strategy.\r\n datasets\/lxmert_pretraining_beta\/lxmert_pretraining_beta.py | 89 +++++++++++++++++++++++++++++++++++++\r\n datasets\/lxmert_pretraining_beta\/test_multi_array.py | 45 +++++++++++++++++++\r\n datasets\/lxmert_pretraining_beta\/to_arrow_data.py | 371 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n src\/nlp\/arrow_dataset.py | 24 +++++-----\r\n src\/nlp\/arrow_writer.py | 22 ++++++++--\r\n src\/nlp\/features.py | 229 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---\r\n tests\/test_array_2d.py | 210 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n 7 files changed, 969 insertions(+), 21 deletions(-)\r\n create mode 100644 datasets\/lxmert_pretraining_beta\/lxmert_pretraining_beta.py\r\n create mode 100644 datasets\/lxmert_pretraining_beta\/test_multi_array.py\r\n create mode 100644 datasets\/lxmert_pretraining_beta\/to_arrow_data.py\r\n create mode 100644 tests\/test_array_2d.py\r\n```","I put everything inside one commit from the master branch but the merge conflicts on github'side were still there for some reason.\r\nClosing and re-opening the PR fixed the conflict check on github's side.","Almost done ! It still needs a pass on the docs\/comments and maybe a few more tests.\r\n\r\nI had to do several changes for type inference in the ArrowWriter to make it support custom types.","Ok this is now ready for review ! Thanks for your awesome work in this @eltoto1219 \r\n\r\nSummary of the changes:\r\n- added new feature type `Array2D`, that can be instantiated like `Array2D(\"float32\")` for example\r\n- added pyarrow extension type `Array2DExtensionType` and array `Array2DExtensionArray` that take care of converting from and to arrow. `Array2DExtensionType`'s storage is a list of list of any pyarrow array.\r\n- added pandas extension type `PandasArrayExtensionType` and array `PandasArrayExtensionArray` for conversion from and to arrow\/python objects\r\n- refactor of the `ArrowWriter` write and write_batch functions to support extension types while preserving the type inference behavior.\r\n- added a utility object `TypedSequence` that is helpful to combine extension arrays and type inference inside the writer's methods.\r\n- added speed test for sequences writing (printed as warnings in pytest)\r\n- breaking: set disable_nullable to False by default as pyarrow's type inference returns nullable fields\r\n\r\nAnd there are plenty of new tests, mainly in `test_array2d.py` and `test_arrow_writer.py`.\r\n\r\nNote that there are some collisions in `arrow_dataset.py` with #513 so let's be careful when we'll merge this one.\r\n\r\nI know this is a big PR so feel free to ask questions","I'll add Array3D, 4D.. tomorrow but it should take only a few lines. The rest won't change","I took your comments into account and I added Array[3-5]D.\r\nI changed the storage type to fixed lengths lists. I had to update the `to_numpy` function because of that. 
Indeed slicing a FixedLengthListArray returns a view a of the original array, while in the previous case slicing a ListArray copies the storage.\r\n"],"created_at":1594278630000,"updated_at":1598263175000,"closed_at":1598263175000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/363","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/363","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/363.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/363.patch"},"body":"nlp\/features.py:\r\n\r\nThe main factory class is MultiArray, every single time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions\/shape. I provide examples on working with this in datasets\/lxmert_pretraining_beta\/test_multi_array.py\r\n\r\nsrc\/nlp\/arrow_writer.py\r\n\r\nI had to add a method for writing batches that include extension array types because despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other \"array-like\" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem in this is that when writing multiple batches, the order of the schema and data to be written get mixed up (where the pyarrow datatype in the schema only refers to as ExtensionAray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look!\r\n\r\ndatasets\/lxmert_pretraining_beta\/lxmert_pretraining_beta.py & datasets\/lxmert_pretraining_beta\/to_arrow_data.py:\r\n\r\nI have begun adding the data from the original LXMERT paper (https:\/\/arxiv.org\/abs\/1908.07490) hosted here: (https:\/\/github.com\/airsplay\/lxmert). The reason I am not pulling from the source of truth for each individual dataset is because it seems that there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ). 
\r\nFor now, this is just being used to test and run edge-cases for the MultiArray feature, so ive labeled it as \"beta_pretraining\"!\r\n\r\n(still working on the pretraining, just wanted to push out the new functionality sooner than later)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/363\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/362","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/362\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/362\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/362\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/362","id":653766245,"node_id":"MDU6SXNzdWU2NTM3NjYyNDU=","number":362,"title":"[dateset subset missing] xtreme paws-x","user":{"login":"jerryIsHere","id":50871412,"node_id":"MDQ6VXNlcjUwODcxNDEy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/50871412?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jerryIsHere","html_url":"https:\/\/github.com\/jerryIsHere","followers_url":"https:\/\/api.github.com\/users\/jerryIsHere\/followers","following_url":"https:\/\/api.github.com\/users\/jerryIsHere\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jerryIsHere\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jerryIsHere\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jerryIsHere\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jerryIsHere\/orgs","repos_url":"https:\/\/api.github.com\/users\/jerryIsHere\/repos","events_url":"https:\/\/api.github.com\/users\/jerryIsHere\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jerryIsHere\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["You're right, thanks for pointing it out. 
We will update it "],"created_at":1594271094000,"updated_at":1594298322000,"closed_at":1594298322000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I tried nlp.load_dataset('xtreme', 'PAWS-X.es') but get the value error\r\nIt turns out that the subset for Spanish is missing\r\nhttps:\/\/github.com\/google-research-datasets\/paws\/tree\/master\/pawsx","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/362\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/361","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/361\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/361\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/361\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/361","id":653757376,"node_id":"MDU6SXNzdWU2NTM3NTczNzY=","number":361,"title":"\ud83d\udc1b [Metrics] ROUGE is non-deterministic","user":{"login":"astariul","id":43774355,"node_id":"MDQ6VXNlcjQzNzc0MzU1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/43774355?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/astariul","html_url":"https:\/\/github.com\/astariul","followers_url":"https:\/\/api.github.com\/users\/astariul\/followers","following_url":"https:\/\/api.github.com\/users\/astariul\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/astariul\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/astariul\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/astariul\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/astariul\/orgs","repos_url":"https:\/\/api.github.com\/users\/astariul\/repos","events_url":"https:\/\/api.github.com\/users\/astariul\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/astariul\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi, can you give a full self-contained example to reproduce this behavior?","> Hi, can you give a full self-contained example to reproduce this behavior?\r\n\r\nThere is a notebook in the post ;)","> If I run the ROUGE metric 2 times, with same predictions \/ references, the scores are slightly different.\r\n> \r\n> Refer to [this Colab notebook](https:\/\/colab.research.google.com\/drive\/1wRssNXgb9ldcp4ulwj-hMJn0ywhDOiDy?usp=sharing) for reproducing the problem.\r\n> \r\n> Example of F-score for ROUGE-1, ROUGE-2, ROUGE-L in 2 differents run :\r\n> \r\n> > ['0.3350', '0.1470', '0.2329']\r\n> > ['0.3358', '0.1451', '0.2332']\r\n> \r\n> Why ROUGE is not deterministic ?\r\n\r\nThis is because of rouge's `BootstrapAggregator` that uses sampling to get confidence intervals (low, mid, high).\r\nYou can get deterministic scores per sentence pair by using\r\n```python\r\nscore = rouge.compute(rouge_types=[\"rouge1\", \"rouge2\", \"rougeL\"], use_agregator=False)\r\n```\r\nOr you can set numpy's random seed if you still want to use the aggregator.","Maybe we can set all the random seeds of numpy\/torch etc. 
while running `metric.compute` ?","We should probably indeed!","Now if you re-run the notebook, the two printed results are the same @colanim\r\n```\r\n['0.3356', '0.1466', '0.2318']\r\n['0.3356', '0.1466', '0.2318']\r\n```\r\nHowever across sessions, the results may change (as numpy's random seed can be different). You can prevent that by setting your seed:\r\n```python\r\nrouge = nlp.load_metric('rouge', seed=42)\r\n```"],"created_at":1594269577000,"updated_at":1595288917000,"closed_at":1595288917000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"If I run the ROUGE metric 2 times, with same predictions \/ references, the scores are slightly different.\r\n\r\nRefer to [this Colab notebook](https:\/\/colab.research.google.com\/drive\/1wRssNXgb9ldcp4ulwj-hMJn0ywhDOiDy?usp=sharing) for reproducing the problem.\r\n\r\nExample of F-score for ROUGE-1, ROUGE-2, ROUGE-L in 2 differents run :\r\n\r\n> ['0.3350', '0.1470', '0.2329']\r\n['0.3358', '0.1451', '0.2332']\r\n\r\n---\r\n\r\nWhy ROUGE is not deterministic ?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/361\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/360","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/360\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/360\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/360\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/360","id":653687176,"node_id":"MDU6SXNzdWU2NTM2ODcxNzY=","number":360,"title":"[Feature request] Add dataset.ragged_map() function for many-to-many transformations","user":{"login":"jarednielsen","id":4564897,"node_id":"MDQ6VXNlcjQ1NjQ4OTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4564897?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jarednielsen","html_url":"https:\/\/github.com\/jarednielsen","followers_url":"https:\/\/api.github.com\/users\/jarednielsen\/followers","following_url":"https:\/\/api.github.com\/users\/jarednielsen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jarednielsen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jarednielsen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jarednielsen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jarednielsen\/orgs","repos_url":"https:\/\/api.github.com\/users\/jarednielsen\/repos","events_url":"https:\/\/api.github.com\/users\/jarednielsen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jarednielsen\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Actually `map(batched=True)` can already change the size of the dataset.\r\nIt can accept examples of length `N` and returns a batch of length `M` (can be null or greater than `N`).\r\n\r\nI'll make that explicit in the doc that I'm currently writing.","You're two steps ahead of me :) In my testing, it also works if `M` < `N`.\r\n\r\nA batched map of different length seems to work if you directly overwrite all of the original keys, but fails if any of the original keys are preserved.\r\n\r\nFor example,\r\n```python\r\n# Create a dummy 
dataset\r\ndset = load_dataset(\"wikitext\", \"wikitext-2-raw-v1\")[\"test\"]\r\ndset = dset.map(lambda ex: {\"length\": len(ex[\"text\"]), \"foo\": 1})\r\n\r\n# Do an allreduce on each batch, overwriting both keys\r\ndset.map(lambda batch: {\"length\": [sum(batch[\"length\"])], \"foo\": [1]})\r\n# Dataset(schema: {'length': 'int64', 'foo': 'int64'}, num_rows: 5)\r\n\r\n# Now attempt an allreduce without touching the `foo` key\r\ndset.map(lambda batch: {\"length\": [sum(batch[\"length\"])]})\r\n# This fails with the error message below\r\n```\r\n\r\n```bash\r\n File \"\/path\/to\/nlp\/src\/nlp\/arrow_dataset.py\", line 728, in map\r\n arrow_schema = pa.Table.from_pydict(test_output).schema\r\n File \"pyarrow\/io.pxi\", line 1532, in pyarrow.lib.Codec.detect\r\n File \"pyarrow\/table.pxi\", line 1503, in pyarrow.lib.Table.from_arrays\r\n File \"pyarrow\/public-api.pxi\", line 390, in pyarrow.lib.pyarrow_wrap_table\r\n File \"pyarrow\/error.pxi\", line 85, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Column 1 named foo expected length 1 but got length 2\r\n```\r\n\r\nAdding the `remove_columns=[\"length\", \"foo\"]` argument to `map()` solves the issue. Leaving the above error for future visitors. Perfect, thank you!"],"created_at":1594256683000,"updated_at":1594323111000,"closed_at":1594323111000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"`dataset.map()` enables one-to-one transformations. Input one example and output one example. This is helpful for tokenizing and cleaning individual lines.\r\n`dataset.filter()` enables one-to-(one-or-none) transformations. Input one example and output either zero\/one example. This is helpful for removing portions from the dataset.\r\nHowever, some dataset transformations are many-to-many. Consider constructing BERT training examples from a dataset of sentences, where you map `[\"a\", \"b\", \"c\"] -> [\"a[SEP]b\", \"a[SEP]c\", \"b[SEP]c\", \"c[SEP]b\", ...]`\r\n\r\nI propose a more general `ragged_map()` method that takes in a batch of examples of length `N` and return a batch of examples `M`. This is different from the `map(batched=True)` method, which takes examples of length `N` and returns a batch of length `N`, processing individual examples in parallel. I don't have a clear vision of how this would be implemented efficiently and lazily, but would love to hear the community's feedback on this.\r\n\r\nMy specific use case is creating an end-to-end ELECTRA data pipeline. I would like to take the raw WikiText data and generate training examples from this using the `ragged_map()` method, then export to TFRecords and train quickly. This would be a reproducible pipeline with no bash scripts. 
Currently I'm relying on scripts like https:\/\/github.com\/google-research\/electra\/blob\/master\/build_pretraining_dataset.py, which are less general.\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/360\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/359","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/359\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/359\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/359\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/359","id":653656279,"node_id":"MDU6SXNzdWU2NTM2NTYyNzk=","number":359,"title":"ArrowBasedBuilder _prepare_split parse_schema breaks on nested structures","user":{"login":"timothyjlaurent","id":2000204,"node_id":"MDQ6VXNlcjIwMDAyMDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2000204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/timothyjlaurent","html_url":"https:\/\/github.com\/timothyjlaurent","followers_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/followers","following_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/orgs","repos_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/repos","events_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi, it depends on what it is in your `dataset_builder.py` file. Can you share it?\r\n\r\nIf you are just loading `json` files, you can also directly use the `json` script (which will find the schema\/features from your JSON structure):\r\n\r\n```python\r\nfrom nlp import load_dataset\r\nds = load_dataset(\"json\", data_files=rel_datafiles)\r\n```","The behavior I'm seeing is from the `json` script. 
\r\nI hacked this together to overcome the error with the `JSON` dataloader\r\n\r\n```\r\nclass DatasetBuilder(hf_nlp.ArrowBasedBuilder):\r\n BUILDER_CONFIG_CLASS = BuilderConfig\r\n\r\n def _info(self):\r\n return DatasetInfo()\r\n\r\n def _split_generators(self, dl_manager):\r\n \"\"\" We handle string, list and dicts in datafiles\r\n \"\"\"\r\n if isinstance(self.config.data_files, (str, list, tuple)):\r\n files = self.config.data_files\r\n if isinstance(files, str):\r\n files = [files]\r\n return [SplitGenerator(name=Split.TRAIN, gen_kwargs={\"files\": files})]\r\n splits = []\r\n for split_name in [Split.TRAIN, Split.VALIDATION, Split.TEST]:\r\n if split_name in self.config.data_files:\r\n files = self.config.data_files[split_name]\r\n if isinstance(files, str):\r\n files = [files]\r\n splits.append(SplitGenerator(name=split_name, gen_kwargs={\"files\": files}))\r\n return splits\r\n\r\n def _prepare_split(self, split_generator):\r\n fname = \"{}-{}.arrow\".format(self.name, split_generator.name)\r\n fpath = os.path.join(self._cache_dir, fname)\r\n\r\n writer = ArrowWriter(path=fpath)\r\n\r\n generator = self._generate_tables(**split_generator.gen_kwargs)\r\n for key, table in utils.tqdm(generator, unit=\" tables\", leave=False):\r\n writer.write_table(table)\r\n num_examples, num_bytes = writer.finalize()\r\n\r\n split_generator.split_info.num_examples = num_examples\r\n split_generator.split_info.num_bytes = num_bytes\r\n # this is where the error is coming from\r\n # def parse_schema(schema, schema_dict):\r\n # for field in schema:\r\n # if pa.types.is_struct(field.type):\r\n # schema_dict[field.name] = {}\r\n # parse_schema(field.type, schema_dict[field.name])\r\n # elif pa.types.is_list(field.type) and pa.types.is_struct(field.type.value_type):\r\n # schema_dict[field.name] = {}\r\n # parse_schema(field.type.value_type, schema_dict[field.name])\r\n # else:\r\n # schema_dict[field.name] = Value(str(field.type))\r\n # \r\n # parse_schema(writer.schema, features)\r\n # self.info.features = Features(features)\r\n\r\n def _generate_tables(self, files):\r\n for i, file in enumerate(files):\r\n pa_table = paj.read_json(\r\n file\r\n )\r\n yield i, pa_table\r\n```\r\n\r\nSo I basically just don't populate the `self.info.features` though this doesn't seem to cause any problems in my downstream applications. \r\n\r\nThe other workaround I was doing was to just use pyarrow.json to build a table and then to create the Dataset with its constructor or from_table methods. `load_dataset` has nice split logic, so I'd prefer to use that.\r\n\r\n","Also noticed that if you for example in a loader script\r\n\r\n```\r\nfrom nlp import ArrowBasedBuilder\r\n\r\nclass MyBuilder(ArrowBasedBuilder):\r\n...\r\n\r\n```\r\nand use that in the subclass, it will be on the module's __dict__ and will be selected before the `MyBuilder` subclass, and it will raise `NotImplementedError` on its `_generate_examples` method... 
In the code it check for abstract classes but Builder and ArrowBasedBuilder aren't abstract classes, they're regular classes with `@abstract_methods`.","Indeed this is part of a more general limitation which is the fact that we should generate and update the `features` from the auto-inferred Arrow schema when they are not provided (also happen when a user change the schema using `map()`, the features should be auto-generated and guessed as much as possible to keep the `features` synced with the underlying Arrow table schema).\r\n\r\nWe will try to solve this soon."],"created_at":1594250645000,"updated_at":1594392726000,"closed_at":1594392726000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I tried using the Json dataloader to load some JSON lines files. but get an exception in the parse_schema function.\r\n\r\n```\r\n---------------------------------------------------------------------------\r\n\r\nValueError Traceback (most recent call last)\r\n\r\n<ipython-input-23-9aecfbee53bd> in <module>\r\n 55 from nlp import load_dataset\r\n 56 \r\n---> 57 ds = load_dataset(\"..\/text2struct\/model\/dataset_builder.py\", data_files=rel_datafiles)\r\n 58 \r\n 59 \r\n\r\n~\/.virtualenvs\/inv-text2struct\/lib\/python3.6\/site-packages\/nlp\/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 522 download_mode=download_mode,\r\n 523 ignore_verifications=ignore_verifications,\r\n--> 524 save_infos=save_infos,\r\n 525 )\r\n 526 \r\n\r\n~\/.virtualenvs\/inv-text2struct\/lib\/python3.6\/site-packages\/nlp\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 430 verify_infos = not save_infos and not ignore_verifications\r\n 431 self._download_and_prepare(\r\n--> 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 433 )\r\n 434 # Sync info\r\n\r\n~\/.virtualenvs\/inv-text2struct\/lib\/python3.6\/site-packages\/nlp\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 481 try:\r\n 482 # Prepare split will record examples associated to the split\r\n--> 483 self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 484 except OSError:\r\n 485 raise OSError(\"Cannot find data file. 
\" + (self.manual_download_instructions or \"\"))\r\n\r\n~\/.virtualenvs\/inv-text2struct\/lib\/python3.6\/site-packages\/nlp\/builder.py in _prepare_split(self, split_generator)\r\n 736 schema_dict[field.name] = Value(str(field.type))\r\n 737 \r\n--> 738 parse_schema(writer.schema, features)\r\n 739 self.info.features = Features(features)\r\n 740 \r\n\r\n~\/.virtualenvs\/inv-text2struct\/lib\/python3.6\/site-packages\/nlp\/builder.py in parse_schema(schema, schema_dict)\r\n 734 parse_schema(field.type.value_type, schema_dict[field.name])\r\n 735 else:\r\n--> 736 schema_dict[field.name] = Value(str(field.type))\r\n 737 \r\n 738 parse_schema(writer.schema, features)\r\n\r\n<string> in __init__(self, dtype, id, _type)\r\n\r\n~\/.virtualenvs\/inv-text2struct\/lib\/python3.6\/site-packages\/nlp\/features.py in __post_init__(self)\r\n 55 \r\n 56 def __post_init__(self):\r\n---> 57 self.pa_type = string_to_arrow(self.dtype)\r\n 58 \r\n 59 def __call__(self):\r\n\r\n~\/.virtualenvs\/inv-text2struct\/lib\/python3.6\/site-packages\/nlp\/features.py in string_to_arrow(type_str)\r\n 32 if str(type_str + \"_\") not in pa.__dict__:\r\n 33 raise ValueError(\r\n---> 34 f\"Neither {type_str} nor {type_str + '_'} seems to be a pyarrow data type. \"\r\n 35 f\"Please make sure to use a correct data type, see: \"\r\n 36 f\"https:\/\/arrow.apache.org\/docs\/python\/api\/datatypes.html#factory-functions\"\r\n\r\nValueError: Neither list<item: string> nor list<item: string>_ seems to be a pyarrow data type. Please make sure to use a correct data type, see: https:\/\/arrow.apache.org\/docs\/python\/api\/datatypes.html#factory-functions\r\n```\r\n\r\nIf I create the dataset imperatively, using a pyarrow table, the dataset is created correctly. If I override the `_prepare_split` method to avoid calling the validate schema, the dataset can load as well. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/359\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/358","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/358\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/358\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/358\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/358","id":653645121,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ2NTI0NjQ5","number":358,"title":"Starting to add some real doc","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Ok this is starting to be really big so it's probably good to merge this first version of the doc and continue in another PR :)\r\n\r\nThis first version of the doc can be explored here: https:\/\/2219-250213286-gh.circle-artifacts.com\/0\/docs\/_build\/html\/index.html"],"created_at":1594248783000,"updated_at":1594720697000,"closed_at":1594720695000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/358","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/358","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/358.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/358.patch"},"body":"Adding a lot of documentation for:\r\n- load a dataset\r\n- explore the dataset object\r\n- process data with the dataset\r\n- add a new dataset script\r\n- share a dataset script\r\n- full package reference\r\n\r\nThis version of the doc can be explored here: https:\/\/2219-250213286-gh.circle-artifacts.com\/0\/docs\/_build\/html\/index.html\r\n\r\nAlso:\r\n- fix a bug in `train_test_split`\r\n- update the `csv` script\r\n- add a verbose argument to the dataset processing methods\r\n\r\nStill missing:\r\n- doc for the metrics\r\n- how to directly upload a community provided dataset with the CLI\r\n- clean up more docstrings\r\n- add the `features` argument to `load_dataset` (should be another PR)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/358\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/357","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/357\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/357\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/357\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/357","id":653642292,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ2NTIyMzU2","number":357,"title":"Add hashes to cnn_dailymail","user":{"login":"jbragg","id":2238344,"node_id":"MDQ6VXNlcjIyMzgzNDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2238344?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jbragg","html_url":"https:\/\/github.com\/jbragg","followers_url":"https:\/\/api.github.com\/users\/jbragg\/followers","following_url":"https:\/\/api.github.com\/users\/jbragg\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jbragg\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jbragg\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jbragg\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jbragg\/orgs","repos_url":"https:\/\/api.github.com\/users\/jbragg\/repos","events_url":"https:\/\/api.github.com\/users\/jbragg\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jbragg\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Looks you to me :)\r\n\r\nCould you also update the json file that goes with the dataset script by doing \r\n```\r\nnlp-cli test .\/datasets\/cnn_dailymail --save_infos --all_configs\r\n```\r\nIt will update the features metadata and the size of the dataset with your changes.","@lhoestq I ran that command.\r\n\r\nThanks for the helpful repository!"],"created_at":1594248321000,"updated_at":1594649798000,"closed_at":1594649798000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/357","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/357","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/357.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/357.patch"},"body":"The URL hashes are helpful for comparing results from other sources.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/357\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/356","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/356\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/356\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/356\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/356","id":653537388,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ2NDM3MDQ5","number":356,"title":"Add text 
dataset","user":{"login":"jarednielsen","id":4564897,"node_id":"MDQ6VXNlcjQ1NjQ4OTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4564897?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jarednielsen","html_url":"https:\/\/github.com\/jarednielsen","followers_url":"https:\/\/api.github.com\/users\/jarednielsen\/followers","following_url":"https:\/\/api.github.com\/users\/jarednielsen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jarednielsen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jarednielsen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jarednielsen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jarednielsen\/orgs","repos_url":"https:\/\/api.github.com\/users\/jarednielsen\/repos","events_url":"https:\/\/api.github.com\/users\/jarednielsen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jarednielsen\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1594236113000,"updated_at":1594390743000,"closed_at":1594390743000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/356","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/356","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/356.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/356.patch"},"body":"Usage:\r\n\r\n```python\r\nfrom nlp import load_dataset\r\ndset = load_dataset(\"text\", data_files=\"\/path\/to\/file.txt\")[\"train\"]\r\n```\r\n\r\n\r\nI created a dummy_data.zip which contains three files: `train.txt`, `test.txt`, `dev.txt`. Each of these contains two lines. 
It passes\r\n\r\n```bash\r\nRUN_SLOW=1 pytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_text\r\n```\r\n\r\nbut I would like a second set of eyes to ensure I did it right.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/356\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/355","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/355\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/355\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/355\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/355","id":653451013,"node_id":"MDU6SXNzdWU2NTM0NTEwMTM=","number":355,"title":"can't load SNLI dataset","user":{"login":"jxmorris12","id":13238952,"node_id":"MDQ6VXNlcjEzMjM4OTUy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13238952?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jxmorris12","html_url":"https:\/\/github.com\/jxmorris12","followers_url":"https:\/\/api.github.com\/users\/jxmorris12\/followers","following_url":"https:\/\/api.github.com\/users\/jxmorris12\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jxmorris12\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jxmorris12\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jxmorris12\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jxmorris12\/orgs","repos_url":"https:\/\/api.github.com\/users\/jxmorris12\/repos","events_url":"https:\/\/api.github.com\/users\/jxmorris12\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jxmorris12\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I just added the processed files of `snli` on our google storage, so that when you do `load_dataset` it can download the processed files from there :)\r\n\r\nWe are thinking about having available those processed files for more datasets in the future, because sometimes files aren't available (like for `snli`), or the download speed is too slow, or sometimes the files take time to be processed.","Closing this one. Feel free to re-open if you have other questions :)","Thank you!"],"created_at":1594227254000,"updated_at":1595049357000,"closed_at":1594799941000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"`nlp` seems to load `snli` from some URL based on nlp.stanford.edu. 
This subdomain is frequently down -- including right now, when I'd like to load `snli` in a Colab notebook, but can't.\r\n\r\nIs there a plan to move these datasets to huggingface servers for a more stable solution?\r\n\r\nBtw, here's the stack trace:\r\n\r\n```\r\nFile \"\/content\/nlp\/src\/nlp\/builder.py\", line 432, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/content\/nlp\/src\/nlp\/builder.py\", line 466, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"\/content\/nlp\/src\/nlp\/datasets\/snli\/e417f6f2e16254938d977a17ed32f3998f5b23e4fcab0f6eb1d28784f23ea60d\/snli.py\", line 76, in _split_generators\r\n dl_dir = dl_manager.download_and_extract(_DATA_URL)\r\n File \"\/content\/nlp\/src\/nlp\/utils\/download_manager.py\", line 217, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"\/content\/nlp\/src\/nlp\/utils\/download_manager.py\", line 156, in download\r\n lambda url: cached_path(url, download_config=self._download_config,), url_or_urls,\r\n File \"\/content\/nlp\/src\/nlp\/utils\/py_utils.py\", line 190, in map_nested\r\n return function(data_struct)\r\n File \"\/content\/nlp\/src\/nlp\/utils\/download_manager.py\", line 156, in <lambda>\r\n lambda url: cached_path(url, download_config=self._download_config,), url_or_urls,\r\n File \"\/content\/nlp\/src\/nlp\/utils\/file_utils.py\", line 198, in cached_path\r\n local_files_only=download_config.local_files_only,\r\n File \"\/content\/nlp\/src\/nlp\/utils\/file_utils.py\", line 356, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https:\/\/nlp.stanford.edu\/projects\/snli\/snli_1.0.zip\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/355\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/354","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/354\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/354\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/354\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/354","id":653357617,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ2MjkyMTc4","number":354,"title":"More faiss 
control","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> Ok, so we're getting rid of the `FaissGpuOptions`?\r\n\r\nWe support `device=...` because it's simple, but faiss GPU options can be used in so many ways (you can set different gpu options for the different parts of your index for example) that it's probably better to let the user create and configure its index and then use `custom_index=...`"],"created_at":1594219520000,"updated_at":1594288494000,"closed_at":1594288491000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/354","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/354","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/354.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/354.patch"},"body":"Allow users to specify a faiss index they created themselves, as sometimes indexes can be composite for examples","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/354\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/353","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/353\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/353\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/353\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/353","id":653250611,"node_id":"MDU6SXNzdWU2NTMyNTA2MTE=","number":353,"title":"[Dataset requests] New datasets for Text 
Classification","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892884,"node_id":"MDU6TGFiZWwxOTM1ODkyODg0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/help%20wanted","name":"help wanted","color":"008672","default":true,"description":"Extra attention is needed"},{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Pinging @mariamabarham as well","- `nlp` has MR! It's called `rotten_tomatoes`\r\n- SST is part of GLUE, or is that just SST-2?\r\n- `nlp` also has `ag_news`, a popular news classification dataset\r\n\r\nI'd also like to see:\r\n- the Yahoo Answers topic classification dataset\r\n- the Kaggle Fake News classification dataset","Thanks @jxmorris12 for pointing this out. \r\n\r\nIn glue we only have SST-2 maybe we can add separately SST-1.\r\n","This is the homepage for the Amazon dataset: https:\/\/www.kaggle.com\/datafiniti\/consumer-reviews-of-amazon-products\r\n\r\nIs there an easy way to download kaggle datasets programmatically? If so, I can add this one!","Hi @jxmorris12 for now I think our `dl_manager` does not download from Kaggle.\r\n@thomwolf , @lhoestq","Pretty sure the quora dataset is the same one I implemented here: https:\/\/github.com\/huggingface\/nlp\/pull\/366","Great list. Any idea if Amazon Reviews has been added?\r\n\r\n- ~40 GB of text (sadly no emoji)\r\n- popular MLM pre-training dataset before bigger datasets like WebText https:\/\/arxiv.org\/abs\/1808.01371\r\n- turns out that binarizing the 1-5 star rating leads to great Pos\/Neg\/Neutral dataset, T5 paper claims to get very high accuracy (98%!) on this with small amount of finetuning https:\/\/arxiv.org\/abs\/2004.14546\r\n\r\nApologies if it's been included (great to see where) and if not, it's one of the better medium\/large NLP dataset for semi-supervised learning, albeit a bit out of date. \r\n\r\nThanks!! \r\n\r\ncc @sshleifer ","On the Amazon Reviews dataset, the original UCSD website has noted these are now updated to include product reviews through 2018 -- actually quite recent compared to many other datasets. 
Almost certainly the largest NLP dataset out there with labels!\r\nhttps:\/\/jmcauley.ucsd.edu\/data\/amazon\/ \r\n\r\nAny chance someone has time to onboard this dataset in a HF way?\r\n\r\ncc @sshleifer "],"created_at":1594210678000,"updated_at":1603165283000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"We are missing a few datasets for Text Classification which is an important field.\r\n\r\nNamely, it would be really nice to add:\r\n- TREC-6 dataset (see here for instance: https:\/\/pytorchnlp.readthedocs.io\/en\/latest\/source\/torchnlp.datasets.html#torchnlp.datasets.trec_dataset) **[done]**\r\n- Yelp-5\r\n- Movie review (Movie Review (MR) dataset [156]) **[done (same as rotten_tomatoes)]**\r\n- SST (Stanford Sentiment Treebank) **[include in glue]**\r\n- Multi-Perspective Question Answering (MPQA) dataset **[require authentication (indeed manual download)]**\r\n- Amazon. This is a popular corpus of product reviews collected from the Amazon website [159]. It contains labels for both binary classification and multi-class (5-class) classification\r\n- 20 Newsgroups. The 20 Newsgroups dataset **[done]**\r\n- Sogou News dataset **[done]**\r\n- Reuters news. The Reuters-21578 dataset [165] **[done]**\r\n- DBpedia. The DBpedia dataset [170]\r\n- Ohsumed. The Ohsumed collection [171] is a subset of the MEDLINE database\r\n- EUR-Lex. The EUR-Lex dataset\r\n- WOS. The Web Of Science (WOS) dataset **[done]**\r\n- PubMed. PubMed [173]\r\n- TREC-QA. TREC-QA\r\n- Quora. The Quora dataset [180]\r\n\r\nAll these datasets are cited in https:\/\/arxiv.org\/abs\/2004.03705","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/353\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/352","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/352\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/352\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/352\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/352","id":653128883,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ2MTA1Mjky","number":352,"title":"\ud83d\udc1b[BugFix]fix seqeval","user":{"login":"AlongWY","id":20281571,"node_id":"MDQ6VXNlcjIwMjgxNTcx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20281571?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/AlongWY","html_url":"https:\/\/github.com\/AlongWY","followers_url":"https:\/\/api.github.com\/users\/AlongWY\/followers","following_url":"https:\/\/api.github.com\/users\/AlongWY\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/AlongWY\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/AlongWY\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/AlongWY\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/AlongWY\/orgs","repos_url":"https:\/\/api.github.com\/users\/AlongWY\/repos","events_url":"https:\/\/api.github.com\/users\/AlongWY\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/AlongWY\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I think this is good but can you detail a bit the behavior before and 
after your fix?","examples:\r\n\r\ninput: `['B', 'I', 'I', 'O', 'B', 'I']`\r\nbefore: `[('B', 0, 0), ('I', 1, 2), ('B', 4, 4), ('I', 5, 5)]`\r\nafter: `[('_', 0, 2), ('_', 4, 5)]`\r\n\r\ninput: `['B-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'O', 'B-ARGM-TIME', 'I-ARGM-TIME']`\r\nbefore: `[('LOC', 0, 2), ('TIME', 4, 5)]`\r\nafter: `[('ARGM-LOC', 0, 2), ('ARGM-TIME', 4, 5)]`\r\n\r\nThis is my test code:\r\n\r\n```python\r\nfrom metrics.seqeval.seqeval import end_of_chunk, start_of_chunk\r\n\r\n\r\ndef before_get_entities(seq, suffix=False):\r\n \"\"\"Gets entities from sequence.\r\n Args:\r\n seq (list): sequence of labels.\r\n Returns:\r\n list: list of (chunk_type, chunk_start, chunk_end).\r\n \"\"\"\r\n if any(isinstance(s, list) for s in seq):\r\n seq = [item for sublist in seq for item in sublist + ['O']]\r\n\r\n prev_tag = 'O'\r\n prev_type = ''\r\n begin_offset = 0\r\n chunks = []\r\n for i, chunk in enumerate(seq + ['O']):\r\n if suffix:\r\n tag = chunk[-1]\r\n type_ = chunk.split('-')[0]\r\n else:\r\n tag = chunk[0]\r\n type_ = chunk.split('-')[-1]\r\n\r\n if end_of_chunk(prev_tag, tag, prev_type, type_):\r\n chunks.append((prev_type, begin_offset, i - 1))\r\n if start_of_chunk(prev_tag, tag, prev_type, type_):\r\n begin_offset = i\r\n prev_tag = tag\r\n prev_type = type_\r\n\r\n return chunks\r\n\r\n\r\ndef after_get_entities(seq, suffix=False):\r\n \"\"\"Gets entities from sequence.\r\n Args:\r\n seq (list): sequence of labels.\r\n Returns:\r\n list: list of (chunk_type, chunk_start, chunk_end).\r\n \"\"\"\r\n if any(isinstance(s, list) for s in seq):\r\n seq = [item for sublist in seq for item in sublist + ['O']]\r\n\r\n prev_tag = 'O'\r\n prev_type = ''\r\n begin_offset = 0\r\n chunks = []\r\n for i, chunk in enumerate(seq + ['O']):\r\n if suffix:\r\n tag = chunk[-1]\r\n type_ = chunk[:-1].rsplit('-', maxsplit=1)[0] or '_'\r\n else:\r\n tag = chunk[0]\r\n type_ = chunk[1:].split('-', maxsplit=1)[-1] or '_'\r\n\r\n if end_of_chunk(prev_tag, tag, prev_type, type_):\r\n chunks.append((prev_type, begin_offset, i - 1))\r\n if start_of_chunk(prev_tag, tag, prev_type, type_):\r\n begin_offset = i\r\n prev_tag = tag\r\n prev_type = type_\r\n\r\n return chunks\r\n\r\n\r\ndef main():\r\n examples_1 = ['B', 'I', 'I', 'O', 'B', 'I']\r\n print(before_get_entities(examples_1))\r\n print(after_get_entities(examples_1))\r\n examples_2 = ['B-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'O', 'B-ARGM-TIME', 'I-ARGM-TIME']\r\n print(before_get_entities(examples_2))\r\n print(after_get_entities(examples_2))\r\n\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```","And we can get more examples not correct, such as:\r\n\r\ninput: `['B', 'I', 'I-I']`\r\nbefore: `[('B', 0, 0), ('I', 1, 2)]`\r\nafter: `[('_', 0, 1), ('I', 2, 2)]`\r\n\r\ninput: `['B-ARGM-TIME', 'I-ARGM-TIME', 'I-TIME']`\r\nbefore: `[('TIME', 0, 2)]`\r\nafter: `[('ARGM-TIME', 0, 1), ('TIME', 2, 2)]`","I think i didn't break any thing. Maybe the checks should be restart?","Could you please rebase from master @AlongWY ? This should fix the CI stuff","ok, i will do it","Indeed the official repo is quite stale. 
Let's merge it here, thanks @AlongWY "],"created_at":1594199532000,"updated_at":1594888006000,"closed_at":1594888006000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/352","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/352","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/352.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/352.patch"},"body":"Fix seqeval process labels such as 'B', 'B-ARGM-LOC'","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/352\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/351","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/351\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/351\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/351\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/351","id":652424048,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ1NDk0NTE4","number":351,"title":"add pandas dataset","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1594136287000,"updated_at":1594217716000,"closed_at":1594217715000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/351","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/351","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/351.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/351.patch"},"body":"Create a dataset from serialized pandas dataframes.\r\nUsage:\r\n```python\r\nfrom nlp import load_dataset\r\ndset = load_dataset(\"pandas\", data_files=\"df.pkl\")[\"train\"]\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/351\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/350","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/350\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/350\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/350\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/350","id":652398691,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ1NDczODYz","number":350,"title":"add from_pandas and from_dict","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1594134233000,"updated_at":1594217673000,"closed_at":1594217672000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/350","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/350","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/350.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/350.patch"},"body":"I added two new methods to the `Dataset` class:\r\n- `from_pandas()` to create a dataset from a pandas dataframe\r\n- `from_dict()` to create a dataset from a dictionary (keys = columns)\r\n\r\nIt uses the `pa.Table.from_pandas` and `pa.Table.from_pydict` funcitons to do so.\r\nIt is also possible to specify the features types via `features=...` if there are ambiguities (null\/nan values), otherwise the arrow schema is infered from the data automatically by pyarrow.\r\n\r\nOne question that I have right now:\r\n+ Should we also add a `save()` method that would write the dataset on the disk ? Right now if we create a `Dataset` using those two new methods, the data are kept in RAM. 
Then to reload it we can call the `from_file()` method.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/350\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/349","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/349\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/349\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/349\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/349","id":652231571,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ1MzQwMTQ1","number":349,"title":"Hyperpartisan news detection","user":{"login":"ghomasHudson","id":13795113,"node_id":"MDQ6VXNlcjEzNzk1MTEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13795113?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ghomasHudson","html_url":"https:\/\/github.com\/ghomasHudson","followers_url":"https:\/\/api.github.com\/users\/ghomasHudson\/followers","following_url":"https:\/\/api.github.com\/users\/ghomasHudson\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ghomasHudson\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ghomasHudson\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ghomasHudson\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ghomasHudson\/orgs","repos_url":"https:\/\/api.github.com\/users\/ghomasHudson\/repos","events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thank you so much for working on this! This is awesome!\r\n\r\nHow much would it help you if we would remove the manual request?\r\n\r\nWe are naturally interested in getting some broad idea of how many people and who are using our dataset. But if you consider hosting the dataset yourself, I would rather remove this small barrier on our side (so that we then still get the download count from your library).","This is an interesting aspect indeed!\r\nDo you want to send me an email (see my homepage) and I'll invite you on our slack channel to talk about that?\r\n@ghomasHudson wanna reach out to me as well? I tried to find your email to invite you without success."],"created_at":1594119997000,"updated_at":1594154847000,"closed_at":1594133831000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/349","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/349","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/349.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/349.patch"},"body":"Adding the hyperpartisan news detection dataset from PAN. This contains news article text, labelled with whether they're hyper-partisan and why kinds of biases they display.\r\n\r\nImplementation notes:\r\n- As with many PAN tasks, the data is hosted on [Zenodo](https:\/\/zenodo.org\/record\/1489920) and must be requested before use. 
I've used the manual download stuff for this, although the dataset is provided under a Creative Commons Attribution 4.0 International License, so we could host a version if we wanted to?\r\n- The 'bias' attribute doesn't exist for the 'byarticle' configuration. I've added an empty string to the class labels to deal with this. Is there a more standard value for empty data?\r\n- Should we always subclass `nlp.BuilderConfig`?\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/349\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/348","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/348\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/348\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/348\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/348","id":652158308,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ1MjgwNjk3","number":348,"title":"Add OSCAR dataset","user":{"login":"pjox","id":635220,"node_id":"MDQ6VXNlcjYzNTIyMA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/635220?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/pjox","html_url":"https:\/\/github.com\/pjox","followers_url":"https:\/\/api.github.com\/users\/pjox\/followers","following_url":"https:\/\/api.github.com\/users\/pjox\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/pjox\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/pjox\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/pjox\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/pjox\/orgs","repos_url":"https:\/\/api.github.com\/users\/pjox\/repos","events_url":"https:\/\/api.github.com\/users\/pjox\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/pjox\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@pjox I think the tests don't pass because you haven't provided any dummy data (`dummy_data.zip`).\r\n\r\n ","> @pjox I think the tests don't pass because you haven't provided any dummy data (`dummy_data.zip`).\r\n\r\nBut can I do the dummy data without running `python nlp-cli test datasets\/<your-dataset-folder> --save_infos --all_configs` first? \ud83e\udd14 ","You make a good point! Do you know how big is it uncompressed?","Between 7T and 9T I think.","Hi ! I've been busy but I plan to compute the missing metadata soon !\r\nLooking forward to be able to load a memory mapped version of OSCAR :) ","> Hi ! I've been busy but I plan to compute the missing metadata soon !\r\n> Looking forward to be able to load a memory mapped version of OSCAR :)\r\n\r\nAmazing! Thanks! \ud83d\ude04 ","Hi there, are there any plans to complete this issue soon? I'm planning to use this dataset on a project. Let me know if there's anything I can do to help to finish this \ud83e\udd17 ","Yes it will be added soon :) \r\nRecently the OSCAR data files were moved to another host. We just need to update the script and compute the dataset_infos.json (it will probably take a few days).","@lhoestq I've seen in oscar.py that it isn't a dataset script with manual download way. Is that correct? 
\r\nSome time ago, @pjox had some troubles with his servers providing that dataset 'cause it's really huge. Providing it on an automatic download way seems to be a little bit dangerous for me \ud83d\ude04 ","Now thanks to @pjox 's help OSCAR is hosted on HF's S3, which is probably more robust that the previous servers :)\r\n\r\nAlso small update on my side:\r\nI launched the computation of the dataset_infos.json file, it will take a few days.","Now it seems to be a good plan for me \ud83e\udd17 ","But is there a plan to provide the OSCAR's unshuffled version too?","The one we have on S3 is currently the unshuffled version","I've thought that you won't provide the unshuffled version 'cause this comment on oscar.py:\r\n\r\n`# TODO(oscar): Implement unshuffled OSCAR`\r\n\r\n","That TODO is normal, I haven't touched the python script in months (I haven't had the time, sorry), but I guess @lhoestq fixed the paths if he's already working on the metadata. In any case from now on, only the unshuffled versions of OSCAR will be distributed through the hf\/datasets library as in any case it is the version most people use to train language models.\r\n\r\nIf for any reason, you need the shuffled version it will always be available on the [OSCAR website](https:\/\/oscar-corpus.com).\r\n\r\nAlso future versions of OSCAR will be unshuffled only.","Should we close this PR now that the other one was merged?","Sure.\r\nClosing since #1694 is merged","@lhoestq just a little detail, is the Oscar version that HF offers the same one that was available on INRIA? By that I mean, have you done any further filtering or removing of data inside it? Thanks a lot! ","Hello @jchwenger, this is exactly the same (unshuffled) version that's available at Inria. Sadly no further filtering is provided, but after the latest OSCAR audit (https:\/\/arxiv.org\/abs\/2103.12028) we're already working on future versions of OSCAR that will be \"filtered\" and that will be available on the OSCAR website and hopefully here as well.","@pjox brilliant, in my case I was hoping it would be unfiltered, good news!"],"created_at":1594113727000,"updated_at":1620079628000,"closed_at":1612865959000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/348","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/348","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/348.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/348.patch"},"body":"I don't know if tests pass, when I run them it tries to download the whole corpus which is around 3.5TB compressed and I don't have that kind of space. 
I'll really need some help with it \ud83d\ude05 \r\n\r\nThanks!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/348\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/347","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/347\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/347\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/347\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/347","id":652106567,"node_id":"MDU6SXNzdWU2NTIxMDY1Njc=","number":347,"title":"'cp950' codec error from load_dataset('xtreme', 'tydiqa')","user":{"login":"jerryIsHere","id":50871412,"node_id":"MDQ6VXNlcjUwODcxNDEy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/50871412?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jerryIsHere","html_url":"https:\/\/github.com\/jerryIsHere","followers_url":"https:\/\/api.github.com\/users\/jerryIsHere\/followers","following_url":"https:\/\/api.github.com\/users\/jerryIsHere\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jerryIsHere\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jerryIsHere\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jerryIsHere\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jerryIsHere\/orgs","repos_url":"https:\/\/api.github.com\/users\/jerryIsHere\/repos","events_url":"https:\/\/api.github.com\/users\/jerryIsHere\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jerryIsHere\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This is probably a Windows issue, we need to specify the encoding when `load_dataset()` reads the original CSV file.\r\nTry to find the `open()` statement called by `load_dataset()` and add an `encoding='utf-8'` parameter.\r\nSee issues #242 and #307 ","It should be in `xtreme.py:L755`:\r\n```python\r\n if self.config.name == \"tydiqa\" or self.config.name.startswith(\"MLQA\") or self.config.name == \"SQuAD\":\r\n with open(filepath) as f:\r\n data = json.load(f)\r\n```\r\n\r\nCould you try to add the encoding parameter:\r\n```python\r\nopen(filepath, encoding='utf-8')\r\n```","Hello @jerryIsHere :) Did it work ?\r\nIf so we may change the dataset script to force the utf-8 encoding","@lhoestq sorry for being that late, I found 4 copy of xtreme.py. 
I did the changes as what has been told to all of them.\r\nThe problem is not solved","Could you provide a better error message so that we can make sure it comes from the opening of the `tydiqa`'s json files ?\r\n","@lhoestq \r\nThe error message is same as before:\r\nException has occurred: UnicodeDecodeError\r\n'cp950' codec can't decode byte 0xe2 in position 111: illegal multibyte sequence\r\n File \"D:\\python\\test\\test.py\", line 3, in <module>\r\n dataset = load_dataset('xtreme', 'tydiqa')\r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/50871412\/87748794-7c216880-c829-11ea-94f0-7caeacb4d865.png)\r\n\r\nI said that I found 4 copy of xtreme.py and add the \u300c, encoding='utf-8'\u300d parameter to the open() function\r\nthese python script was found under this directory\r\nC:\\Users\\USER\\AppData\\Local\\Programs\\Python\\Python37\\Lib\\site-packages\\nlp\\datasets\\xtreme\r\n","Hi there !\r\nI encountered the same issue with the IMDB dataset on windows. It threw an error about charmap not being able to decode a symbol during the first time I tried to download it. I checked on a remote linux machine I have, and it can't be reproduced.\r\nI added ```encoding='UTF-8'``` to both lines that have ```open``` in ```imdb.py``` (108 and 114) and it worked for me.\r\nThank you !","> Hi there !\r\n> I encountered the same issue with the IMDB dataset on windows. It threw an error about charmap not being able to decode a symbol during the first time I tried to download it. I checked on a remote linux machine I have, and it can't be reproduced.\r\n> I added `encoding='UTF-8'` to both lines that have `open` in `imdb.py` (108 and 114) and it worked for me.\r\n> Thank you !\r\n\r\nHello !\r\nGlad you managed to fix this issue on your side.\r\nDo you mind opening a PR for IMDB ?","> This is probably a Windows issue, we need to specify the encoding when `load_dataset()` reads the original CSV file.\r\n> Try to find the `open()` statement called by `load_dataset()` and add an `encoding='utf-8'` parameter.\r\n> See issues #242 and #307\r\n\r\nSorry for not responding for about a month.\r\nI have just found that it is necessary to change \/ add the environment variable as what was told in #242.\r\nEverything works after I add the new environment variable and restart my PC.\r\n\r\nI think the encoding issue for windows isn't limited to the open() function call specific to few dataset, but actually in the entire library, depends on the machine \/ os you use.","Since #481 we shouldn't have other issues with encodings as they need to be set to \"utf-8\" be default.\r\n\r\nClosing this one, but feel free to re-open if you gave other questions"],"created_at":1594109663000,"updated_at":1599490305000,"closed_at":1599490305000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"![image](https:\/\/user-images.githubusercontent.com\/50871412\/86744744-67481680-c06c-11ea-8612-b77eba92a392.png)\r\n\r\nI guess the error is related to python source encoding issue that my PC is trying to decode the source code with wrong encoding-decoding tools, perhaps :\r\nhttps:\/\/www.python.org\/dev\/peps\/pep-0263\/\r\n\r\nI guess the error was triggered by the code \" module = importlib.import_module(module_path)\" at line 57 in the source code: nlp\/src\/nlp\/load.py \/ (https:\/\/github.com\/huggingface\/nlp\/blob\/911d5596f9b500e39af8642fe3d1b891758999c7\/src\/nlp\/load.py#L51)\r\n\r\nAny ideas?\r\n\r\np.s. 
tried the same code on colab, that runs perfectly\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/347\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/346","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/346\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/346\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/346\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/346","id":652044151,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ1MTg4MTUz","number":346,"title":"Add emotion dataset","user":{"login":"lewtun","id":26859204,"node_id":"MDQ6VXNlcjI2ODU5MjA0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26859204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lewtun","html_url":"https:\/\/github.com\/lewtun","followers_url":"https:\/\/api.github.com\/users\/lewtun\/followers","following_url":"https:\/\/api.github.com\/users\/lewtun\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lewtun\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lewtun\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lewtun\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lewtun\/orgs","repos_url":"https:\/\/api.github.com\/users\/lewtun\/repos","events_url":"https:\/\/api.github.com\/users\/lewtun\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lewtun\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I've tried it and am getting the same error as you.\r\n\r\nYou could use the text files rather than the pickle:\r\n```\r\nhttps:\/\/www.dropbox.com\/s\/ikkqxfdbdec3fuj\/test.txt\r\nhttps:\/\/www.dropbox.com\/s\/1pzkadrvffbqw6o\/train.txt\r\nhttps:\/\/www.dropbox.com\/s\/2mzialpsgf9k5l3\/val.txt\r\n```\r\n\r\nThen you would get all 3 splits rather than just the train split.","Thanks a lot @ghomasHudson - silly me for not spotting that! \r\n\r\nI'll keep the PR open for now since I'm quite close to wrapping it up.","Hi @ghomasHudson your suggestion worked like a charm - the PR is now ready for review \ud83d\ude0e ","Hello, I probably have a silly question but the labels of the emotion dataset are in the form of numbers and not string, so I can not use the function classification_report because it mixes numbers and string (prediction). How can I access the label in the form of a string and not a number?\r\nThank you in advance.","Hi @juliette-sch! Yes, I believe that having the labels as integers is now the default for many classification datasets. 
You can access the string label via the `ClassLabel.int2str` function ([docs](https:\/\/huggingface.co\/docs\/datasets\/package_reference\/main_classes.html?highlight=int2str#datasets.ClassLabel.int2str)), so you could add a new column to the dataset as follows:\r\n\r\n```python\r\nfrom datasets import load_dataset \r\n\r\nemotions = load_dataset(\"emotion\")\r\n\r\ndef label_int2str(row):\r\n return {\"label_name\": emotions[\"train\"].features[\"label\"].int2str(row[\"label\"])}\r\n\r\n# adds a new column called `label_name`\r\nemotions = emotions.map(label_int2str)\r\n```","Great, thank you very much @lewtun !"],"created_at":1594103741000,"updated_at":1619162023000,"closed_at":1594651178000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/346","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/346","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/346.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/346.patch"},"body":"Hello \ud83e\udd17 team!\r\n\r\nI am trying to add an emotion classification dataset ([link](https:\/\/github.com\/dair-ai\/emotion_dataset)) to `nlp` but I am a bit stuck about what I should do when the URL for the dataset is not a ZIP file, but just a pickled `pandas.DataFrame` (see [here](https:\/\/www.dropbox.com\/s\/607ptdakxuh5i4s\/merged_training.pkl)).\r\n\r\nWith the current implementation, running\r\n\r\n```bash\r\npython nlp-cli test datasets\/emotion --save_infos --all_configs\r\n```\r\n\r\nthrows a `_pickle.UnpicklingError: invalid load key, '<'.` error (full stack trace below). The strange thing is that the path to the file does not carry the `.pkl` extension and instead appears to be some md5 hash (see the `FILE PATH` print statement in the stack trace).\r\n\r\nNote: I have checked that the `merged_training.pkl` file is not corrupted when I download it with `wget`. \r\n\r\nAny pointers on what I'm doing wrong would be greatly appreciated!\r\n\r\n**Stack trace**\r\n\r\n```\r\nINFO:nlp.load:Checking datasets\/emotion\/emotion.py for additional imports.\r\nINFO:filelock:Lock 140330435928512 acquired on datasets\/emotion\/emotion.py.lock\r\nINFO:nlp.load:Found main folder for dataset datasets\/emotion\/emotion.py at \/Users\/lewtun\/git\/nlp\/src\/nlp\/datasets\/emotion\r\nINFO:nlp.load:Creating specific version folder for dataset datasets\/emotion\/emotion.py at \/Users\/lewtun\/git\/nlp\/src\/nlp\/datasets\/emotion\/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b\r\nINFO:nlp.load:Copying script file from datasets\/emotion\/emotion.py to \/Users\/lewtun\/git\/nlp\/src\/nlp\/datasets\/emotion\/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b\/emotion.py\r\nINFO:nlp.load:Couldn't find dataset infos file at datasets\/emotion\/dataset_infos.json\r\nINFO:nlp.load:Creating metadata file for dataset datasets\/emotion\/emotion.py at \/Users\/lewtun\/git\/nlp\/src\/nlp\/datasets\/emotion\/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b\/emotion.json\r\nINFO:filelock:Lock 140330435928512 released on datasets\/emotion\/emotion.py.lock\r\nINFO:nlp.builder:Generating dataset emotion (\/Users\/lewtun\/.cache\/huggingface\/datasets\/emotion\/emotion\/1.0.0)\r\nINFO:nlp.builder:Dataset not on Hf google storage. 
Downloading and preparing it from source\r\nDownloading and preparing dataset emotion\/emotion (download: Unknown size, generated: Unknown size, total: Unknown size) to \/Users\/lewtun\/.cache\/huggingface\/datasets\/emotion\/emotion\/1.0.0...\r\nINFO:nlp.builder:Generating split train\r\n0 examples [00:00, ? examples\/s]FILE PATH \/Users\/lewtun\/.cache\/huggingface\/datasets\/3615dcb52b7ba052ef63e1571894c4b67e8e12a6ab1ef2f756ec3c380bf48490\r\nTraceback (most recent call last):\r\n File \"nlp-cli\", line 37, in <module>\r\n service.run()\r\n File \"\/Users\/lewtun\/git\/nlp\/src\/nlp\/commands\/test.py\", line 83, in run\r\n builder.download_and_prepare(\r\n File \"\/Users\/lewtun\/git\/nlp\/src\/nlp\/builder.py\", line 431, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/Users\/lewtun\/git\/nlp\/src\/nlp\/builder.py\", line 483, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"\/Users\/lewtun\/git\/nlp\/src\/nlp\/builder.py\", line 664, in _prepare_split\r\n for key, record in utils.tqdm(generator, unit=\" examples\", total=split_info.num_examples, leave=False):\r\n File \"\/Users\/lewtun\/miniconda3\/envs\/nlp\/lib\/python3.8\/site-packages\/tqdm\/std.py\", line 1129, in __iter__\r\n for obj in iterable:\r\n File \"\/Users\/lewtun\/git\/nlp\/src\/nlp\/datasets\/emotion\/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b\/emotion.py\", line 87, in _generate_examples\r\n data = pickle.load(f)\r\n_pickle.UnpicklingError: invalid load key, '<'.\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/346\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/345","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/345\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/345\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/345\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/345","id":651761201,"node_id":"MDU6SXNzdWU2NTE3NjEyMDE=","number":345,"title":"Supporting documents in ELI5","user":{"login":"saverymax","id":29262273,"node_id":"MDQ6VXNlcjI5MjYyMjcz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/29262273?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/saverymax","html_url":"https:\/\/github.com\/saverymax","followers_url":"https:\/\/api.github.com\/users\/saverymax\/followers","following_url":"https:\/\/api.github.com\/users\/saverymax\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/saverymax\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/saverymax\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/saverymax\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/saverymax\/orgs","repos_url":"https:\/\/api.github.com\/users\/saverymax\/repos","events_url":"https:\/\/api.github.com\/users\/saverymax\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/saverymax\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @saverymax ! For licensing reasons, the original team was unable to release pre-processed CommonCrawl documents. 
Instead, they provided a script to re-create them from a CommonCrawl dump, but it unfortunately requires access to a medium-large size cluster:\r\nhttps:\/\/github.com\/facebookresearch\/ELI5#downloading-support-documents-from-the-commoncrawl\r\n\r\nIn order to make the task accessible to people who may not have access to this kind of infrastructure, we suggest to use Wikipedia as a knowledge source rather than the full CommonCrawl. The following blog post shows how you can create Wikipedia support documents and get a performance that is on par with a system that uses CommonCrawl pages.\r\nhttps:\/\/yjernite.github.io\/lfqa.html#task_description\r\n\r\nHope that helps, using ElasticSearch to index Wiki40b and create the documents should take about 4 hours. Let us know if you have any trouble with the blog post though!","Hi, thanks for the quick response. The blog post is quite an interesting working example, thanks for sharing it.\r\nTwo follow-up points\/questions about my original question:\r\n\r\n1. Yes, I read that the facebook team could not share the CommonCrawl b\/c of licensing reasons. They state \"No, we are not allowed to host processed Reddit or CommonCrawl data,\" which indicates they could also not share the Reddit data for licensing reasons. But it seems that HuggingFace is able to share the Reddit data, so why not a subset of CommonCrawl?\r\n\r\n2. Thanks for the suggestion about ElasticSearch and Wiki40b. This is good to know about performance. I definitely could do the indexing and querying myself. What I like about the ELI5 dataset though, at least what is suggested by the paper, is that to create the dataset they had already selected the top 100 web sources and made a single support document from those. Though it doesn't appear to be too sophisticated an approach, having a single support document pre-computed (without having to run the facebook code or a replacement with another dataset) is super useful for my work, especially since I'm not working on developing the latest and greatest retrieval model. Of course, I don't expect HF NLP datasets to be perfectly tailored to my use-case. I know there is overhead to any project, I'm just illustrating a use-case of ELI5 which is not possible with the data provided as-is. If it's for licensing reasons, that is perfectly acceptable a reason, and I appreciate your response."],"created_at":1594062853000,"updated_at":1603813125000,"closed_at":1603813125000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I was attempting to use the ELI5 dataset, when I realized that huggingface does not provide the supporting documents (the source documents from the common crawl). Without the supporting documents, this makes the dataset about as useful for my project as a block of cheese, or some other more apt metaphor. According to facebook, the entire document collection is quite large. However, it would still be helpful to at least include a subset of the supporting documents i.e., having some data is better than having a block of cheese, in my case at least.\r\n\r\nIf you choose not to include them, it would be helpful to have documentation mentioning this specifically. 
It is especially confusing because the hf nlp ELI5 dataset has the key `'document'` but there are no documents to be found :(","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/345\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/344","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/344\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/344\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/344\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/344","id":651495246,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ0NzQwMTIw","number":344,"title":"Search qa","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Could you rebase from master just to make sure we won't break anything for `fever` pls @mariamabarham ?"],"created_at":1594038196000,"updated_at":1594889896000,"closed_at":1594889896000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/344","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/344","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/344.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/344.patch"},"body":"This PR adds the Search QA dataset used in **SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine**. 
The dataset has the following config names:\r\n\r\n- raw_jeopardy: raw data\r\n\r\n- train_test_val: which is the split version\r\n\r\n#336 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/344\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/343","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/343\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/343\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/343\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/343","id":651419630,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ0Njc4NDEw","number":343,"title":"Fix nested tensorflow format","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1594030425000,"updated_at":1594041112000,"closed_at":1594041111000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/343","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/343","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/343.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/343.patch"},"body":"In #339 and #337 we are thinking about adding a way to export datasets to tfrecords.\r\n\r\nHowever I noticed that it was not possible to do `dset.set_format(\"tensorflow\")` on datasets with nested features like `squad`. 
I fixed that using a nested map operations to convert features to `tf.ragged.constant`.\r\n\r\nI also added tests on the `set_format` function.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/343\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/342","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/342\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/342\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/342\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/342","id":651333194,"node_id":"MDU6SXNzdWU2NTEzMzMxOTQ=","number":342,"title":"Features should be updated when `map()` changes schema","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["`dataset.column_names` are being updated but `dataset.features` aren't indeed..."],"created_at":1594022603000,"updated_at":1595499316000,"closed_at":1595499316000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"`dataset.map()` can change the schema and column names.\r\n\r\nWe should update the features in this case (with what is possible to infer).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/342\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/341","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/341\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/341\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/341\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/341","id":650611969,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ0MDcwMjEx","number":341,"title":"add fever 
dataset","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1593784387000,"updated_at":1594040628000,"closed_at":1594040627000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/341","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/341","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/341.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/341.patch"},"body":"This PR add the FEVER dataset https:\/\/fever.ai\/ used in with the paper: FEVER: a large-scale dataset for Fact Extraction and VERification (https:\/\/arxiv.org\/pdf\/1803.05355.pdf).\r\n#336 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/341\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/340","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/340\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/340\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/340\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/340","id":650533920,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQ0MDA2Nzcy","number":340,"title":"Update 
cfq.py","user":{"login":"brainshawn","id":4437290,"node_id":"MDQ6VXNlcjQ0MzcyOTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4437290?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/brainshawn","html_url":"https:\/\/github.com\/brainshawn","followers_url":"https:\/\/api.github.com\/users\/brainshawn\/followers","following_url":"https:\/\/api.github.com\/users\/brainshawn\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/brainshawn\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/brainshawn\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/brainshawn\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/brainshawn\/orgs","repos_url":"https:\/\/api.github.com\/users\/brainshawn\/repos","events_url":"https:\/\/api.github.com\/users\/brainshawn\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/brainshawn\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks @brainshawn for this update"],"created_at":1593775399000,"updated_at":1593779630000,"closed_at":1593779630000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/340","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/340","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/340.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/340.patch"},"body":"Make the dataset name consistent with in the paper: Compositional Freebase Question => Compositional Freebase Questions.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/340\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/339","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/339\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/339\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/339\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/339","id":650156468,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQzNzAyNTcw","number":339,"title":"Add dataset.export() to 
TFRecords","user":{"login":"jarednielsen","id":4564897,"node_id":"MDQ6VXNlcjQ1NjQ4OTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4564897?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jarednielsen","html_url":"https:\/\/github.com\/jarednielsen","followers_url":"https:\/\/api.github.com\/users\/jarednielsen\/followers","following_url":"https:\/\/api.github.com\/users\/jarednielsen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jarednielsen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jarednielsen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jarednielsen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jarednielsen\/orgs","repos_url":"https:\/\/api.github.com\/users\/jarednielsen\/repos","events_url":"https:\/\/api.github.com\/users\/jarednielsen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jarednielsen\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Really cool @jarednielsen !\r\nDo you think we can make it work with dataset with nested features like `squad` ?\r\n\r\nI just did a PR to fix `.set_format` for datasets with nested features, but as soon as it's merged we could try to make the conversion work on a dataset like `squad`.","For datasets with nested features we have two aspects to take into account:\r\n1) There can be nested dict of features. What is done in tensorflow_datasets to make things work is to flatten the dictionaries to end up with one single dictionary. A dict like `{\"column1\": {\"subfeature\": ...}}` is converted to `{\"column1\/subfeature\":...}`\r\n2) There can be ragged tensors, i.e. lists of objects with non-fixed shapes. For example in squad there are often multiple possible answers per question. What is done in tensorflow_datasets to make things work is to concatenate everything and add ragged attributes (cf serialization code [here](https:\/\/github.com\/tensorflow\/datasets\/blob\/master\/tensorflow_datasets\/core\/example_serializer.py))","Note that we have `flatten` method in `ArrowDataset`","I added support for nested dictionaries. A few more design decisions popped up:\r\n\r\n_Should we serialize from NumPy arrays or from tf.Tensors?_\r\n- The [tfds example serializer](url) works from NumPy arrays.\r\n- Calling `dset.set_format(\"tensorflow\")` makes `__getitem__` return a tf.Tensor. So serializing from NumPy arrays would mean calling `dset.export()` before setting the format, which is confusing.\r\n- NumPy arrays can be serialized as their underlying datatype (int, float), while tf.Tensors must be converted to strings before serialization. This adds another step when serializing and deserializing, and removes the static-typing advantages of the TFRecord format.\r\n\r\nI think we should export directly from the underlying NumPy arrays into TFRecords, rather than using an intermediate step of tf.Tensor.\r\n\r\n_Should we serialize lists of dictionaries?_\r\n- The test_format_nested() test creates a list of dictionaries: https:\/\/github.com\/huggingface\/nlp\/blob\/911d5596f9b500e39af8642fe3d1b891758999c7\/tests\/test_arrow_dataset.py#L278-L288\r\n- This is difficult to serialize effectively, and I'm not aware of any dataset that has this format. SQuAD has a dictionary of lists, such as the `answers` key. Is this necessary?","Thanks @thomwolf, used dset.flatten() to simplify. 
That handles the case of nested dictionaries, and then lists can be read into a tf.io.RaggedFeature in the case of something like squad answers.","@jarednielsen I just checked and indeed we don't have lists of dicts, we can just focus on the squad format as a reference then :) I'll change the test to remove this format that's not supposed to happen","Actually I realised that `flatten` also handles nested things like pyarrow's list<struct> so it's fine :D \r\nThis is so cool !\r\n\r\nCould you also add a test with a squad-like dataset ? As soon as we have that I think we'll be good to merge @jarednielsen :)\r\nGood job !","Great, done! I think this could be a great canonical way to generate a dataset.","I tried to match the format of Dataset.sort() and Dataset.shuffle() with the docstring. What difference are you referring to specifically?","Oh my bad they're fine actually (I was thinking of the backticks that we don't use in the docstrings of the transformers repo for argument names)","One final thing: now that we have a brand new documentation, could you just add `export` to the list of documented methods in [docs\/source\/package_reference\/main_classes.rst](https:\/\/github.com\/huggingface\/nlp\/blob\/master\/docs\/source\/package_reference\/main_classes.rst) (so that it will appear in the docs [here](https:\/\/huggingface.co\/nlp\/package_reference\/main_classes.html)) ?\r\n","Done","Cool thanks :)","Since #403 (it just got merged), we return python objects and not numpy arrays anymore (unless format=\"numpy\" is specified).\r\nDo you think it can break the export method ? Could you try to rebase from master to run the CI to make sure it's fine ?","Good catch. I fixed it up so it works with the new format. By the way, when dset.format == \"numpy\", it now returns single items (like `0`) as a 0-dimensional NumPy array. Not sure if that is desired.","I played a little bit with the code and it works quite well :)\r\n\r\nI found two cases for which it doesn't work though:\r\n- if the features dict depth is > 2 (ex: wikisql), because `flatten` only flattens the first level of nesting (it can be fixed by calling `flatten` several times in a row, see [here](https:\/\/issues.apache.org\/jira\/browse\/ARROW-4090))\r\n- Or if there are 2d features (ex: wikisql, `table.rows` is a sequence of sequences of strings), because tf.train.Features only support 1-d lists. That's why tensorflow-datasets flattens these 2-d features to 1-d and adds ragged features that are the shapes of the arrays, so that they can be reconstructed.\r\n\r\nI think we can ignore the 2d stuff right now (some work is being done in #363 ), but I'd like to see the `flatten` issue fixed soon\r\n","That seems like a bug in `pyarrow`, or at least in `flatten()`. Looks like it should be a separate PR.","I made `.flatten` work on our side (it calls pyarrow's flatten several times until it's really flat).\r\n\r\nThe only datasets that won't work are those with lists of lists of features, which is a rare case. 
Hopefully we can make this work with the multi-dimensional arrays changes we're also doing.\r\n\r\nI think we can merge now :) cc @thomwolf "],"created_at":1593717987000,"updated_at":1595409372000,"closed_at":1595409372000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/339","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/339","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/339.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/339.patch"},"body":"Fixes https:\/\/github.com\/huggingface\/nlp\/issues\/337\r\n\r\nSome design decisions:\r\n\r\n- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting.\r\n- Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https:\/\/github.com\/huggingface\/nlp\/issues\/315 and https:\/\/github.com\/huggingface\/nlp\/issues\/193.\r\n- Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know.\r\n- There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know.\r\n\r\nAlso, I noticed that \r\n```python\r\ndataset = dataset.select(indices)\r\ndataset.set_format(\"tensorflow\")\r\n# dataset._format_type is \"tensorflow\"\r\n```\r\ngives a different output than\r\n```python\r\ndataset.set_format(\"tensorflow\")\r\ndataset = dataset.select(indices)\r\n# dataset._format_type is None\r\n```\r\nThe latter loses the format of its parent dataset. 
Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/339\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/338","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/338\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/338\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/338\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/338","id":650057253,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQzNjIxMTEx","number":338,"title":"Run `make style`","user":{"login":"jarednielsen","id":4564897,"node_id":"MDQ6VXNlcjQ1NjQ4OTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4564897?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jarednielsen","html_url":"https:\/\/github.com\/jarednielsen","followers_url":"https:\/\/api.github.com\/users\/jarednielsen\/followers","following_url":"https:\/\/api.github.com\/users\/jarednielsen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jarednielsen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jarednielsen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jarednielsen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jarednielsen\/orgs","repos_url":"https:\/\/api.github.com\/users\/jarednielsen\/repos","events_url":"https:\/\/api.github.com\/users\/jarednielsen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jarednielsen\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1593706787000,"updated_at":1593712990000,"closed_at":1593712990000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/338","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/338","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/338.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/338.patch"},"body":"These files get changed when I run `make style` on an unrelated PR. 
Upstreaming these changes so development on a different branch can be easier.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/338\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/337","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/337\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/337\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/337\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/337","id":650035887,"node_id":"MDU6SXNzdWU2NTAwMzU4ODc=","number":337,"title":"[Feature request] Export Arrow dataset to TFRecords","user":{"login":"jarednielsen","id":4564897,"node_id":"MDQ6VXNlcjQ1NjQ4OTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4564897?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jarednielsen","html_url":"https:\/\/github.com\/jarednielsen","followers_url":"https:\/\/api.github.com\/users\/jarednielsen\/followers","following_url":"https:\/\/api.github.com\/users\/jarednielsen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jarednielsen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jarednielsen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jarednielsen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jarednielsen\/orgs","repos_url":"https:\/\/api.github.com\/users\/jarednielsen\/repos","events_url":"https:\/\/api.github.com\/users\/jarednielsen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jarednielsen\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1593704832000,"updated_at":1595409372000,"closed_at":1595409372000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"The TFRecord generation process is error-prone and requires complex separate Python scripts to download and preprocess the data. I propose to combine the user-friendly features of `nlp` with the speed and efficiency of TFRecords. Sample API:\r\n\r\n```python\r\n# use these existing methods\r\nds = load_dataset(\"wikitext\", \"wikitext-2-raw-v1\", split=\"train\")\r\nds = ds.map(lambda ex: tokenizer(ex))\r\nds.set_format(\"tensorflow\", columns=[\"input_ids\", \"token_type_ids\", \"attention_mask\"])\r\n# then add this method\r\nds.export(folder=\"\/my\/tfrecords\", prefix=\"myrecord\", num_shards=8, format=\"tfrecord\")\r\n```\r\nwhich would create files like so:\r\n```bash\r\n\/my\/tfrecords\/myrecord_1.tfrecord\r\n\/my\/tfrecords\/myrecord_2.tfrecord\r\n...\r\n```\r\n\r\nI would be happy to contribute this method. We could use a similar approach for PyTorch. 
Thoughts?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/337\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/336","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/336\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/336\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/336\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/336","id":649914203,"node_id":"MDU6SXNzdWU2NDk5MTQyMDM=","number":336,"title":"[Dataset requests] New datasets for Open Question Answering","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892884,"node_id":"MDU6TGFiZWwxOTM1ODkyODg0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/help%20wanted","name":"help wanted","color":"008672","default":true,"description":"Extra attention is needed"},{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new 
dataset"}],"state":"closed","locked":false,"assignee":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"assignees":[{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1593694983000,"updated_at":1594890262000,"closed_at":1594890262000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"We are still a few datasets missing for Open-Question Answering which is currently a field in strong development.\r\n\r\nNamely, it would be really nice to add:\r\n- WebQuestions (Berant et al., 2013) [done]\r\n- CuratedTrec (Baudis et al. 2015) [not open-source]\r\n- MS-MARCO (NGuyen et al. 2016) [done]\r\n- SearchQA (Dunn et al. 2017) [done]\r\n- FEVER (Thorne et al. 
2018) - [ done]\r\n\r\n \r\n\r\nAll these datasets are cited in http:\/\/arxiv.org\/abs\/2005.11401","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/336\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/335","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/335\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/335\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/335\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/335","id":649765179,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQzMzgwMjI1","number":335,"title":"BioMRC Dataset presented in BioNLP 2020 ACL Workshop","user":{"login":"PetrosStav","id":15162021,"node_id":"MDQ6VXNlcjE1MTYyMDIx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15162021?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/PetrosStav","html_url":"https:\/\/github.com\/PetrosStav","followers_url":"https:\/\/api.github.com\/users\/PetrosStav\/followers","following_url":"https:\/\/api.github.com\/users\/PetrosStav\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/PetrosStav\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/PetrosStav\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/PetrosStav\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/PetrosStav\/orgs","repos_url":"https:\/\/api.github.com\/users\/PetrosStav\/repos","events_url":"https:\/\/api.github.com\/users\/PetrosStav\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/PetrosStav\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I fixed the issues that you pointed out, re-run all the test and pushed the fixed code :-)","```\r\n=================================== FAILURES ===================================\r\n___________________ AWSDatasetTest.test_load_dataset_pandas ____________________\r\n\r\nself = <tests.test_dataset_common.AWSDatasetTest testMethod=test_load_dataset_pandas>\r\ndataset_name = 'pandas'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs)\r\n\r\ntests\/test_dataset_common.py:231: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests\/test_dataset_common.py:125: in check_load_dataset\r\n dl_manager=mock_dl_manager, download_mode=GenerateMode.FORCE_REDOWNLOAD, ignore_verifications=True\r\n..\/.local\/lib\/python3.6\/site-packages\/nlp\/builder.py:432: in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n..\/.local\/lib\/python3.6\/site-packages\/nlp\/builder.py:466: in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = <nlp.datasets.pandas.91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926.pandas.Pandas object at 0x7f3b84f655c0>\r\ndl_manager = <nlp.utils.mock_download_manager.MockDownloadManager object at 0x7f3b84f3d320>\r\n\r\n def 
_split_generators(self, dl_manager):\r\n \"\"\" We handle string, list and dicts in datafiles\r\n \"\"\"\r\n if isinstance(self.config.data_files, (str, list, tuple)):\r\n files = self.config.data_files\r\n if isinstance(files, str):\r\n files = [files]\r\n return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={\"files\": files})]\r\n splits = []\r\n for split_name in [nlp.Split.TRAIN, nlp.Split.VALIDATION, nlp.Split.TEST]:\r\n> if split_name in self.config.data_files:\r\nE TypeError: argument of type 'NoneType' is not iterable\r\n\r\n..\/.local\/lib\/python3.6\/site-packages\/nlp\/datasets\/pandas\/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926\/pandas.py:23: TypeError\r\n------------------------------ Captured log call -------------------------------\r\nINFO filelock:filelock.py:274 Lock 139893169180856 acquired on \/home\/circleci\/.cache\/huggingface\/datasets\/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock\r\nINFO nlp.utils.file_utils:file_utils.py:386 https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/pandas\/pandas.py not found in cache or force_download set to True, downloading to \/home\/circleci\/.cache\/huggingface\/datasets\/tmpwmbk8e8d\r\nINFO nlp.utils.file_utils:file_utils.py:391 storing https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/pandas\/pandas.py in cache at \/home\/circleci\/.cache\/huggingface\/datasets\/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py\r\nINFO nlp.utils.file_utils:file_utils.py:394 creating metadata file for \/home\/circleci\/.cache\/huggingface\/datasets\/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py\r\nINFO filelock:filelock.py:318 Lock 139893169180856 released on \/home\/circleci\/.cache\/huggingface\/datasets\/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock\r\nINFO nlp.load:load.py:157 Checking \/home\/circleci\/.cache\/huggingface\/datasets\/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py for additional imports.\r\nINFO filelock:filelock.py:274 Lock 139893610536912 acquired on \/home\/circleci\/.cache\/huggingface\/datasets\/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock\r\nINFO nlp.load:load.py:320 Found main folder for dataset https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/pandas\/pandas.py at \/home\/circleci\/.local\/lib\/python3.6\/site-packages\/nlp\/datasets\/pandas\r\nINFO nlp.load:load.py:333 Found specific version folder for dataset https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/pandas\/pandas.py at \/home\/circleci\/.local\/lib\/python3.6\/site-packages\/nlp\/datasets\/pandas\/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926\r\nINFO nlp.load:load.py:346 Found script file from https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/pandas\/pandas.py to \/home\/circleci\/.local\/lib\/python3.6\/site-packages\/nlp\/datasets\/pandas\/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926\/pandas.py\r\nINFO nlp.load:load.py:354 Couldn't find dataset infos file at 
https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/pandas\/dataset_infos.json\r\nINFO nlp.load:load.py:371 Found metadata file for dataset https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/pandas\/pandas.py at \/home\/circleci\/.local\/lib\/python3.6\/site-packages\/nlp\/datasets\/pandas\/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926\/pandas.json\r\nINFO filelock:filelock.py:318 Lock 139893610536912 released on \/home\/circleci\/.cache\/huggingface\/datasets\/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock\r\nINFO filelock:filelock.py:274 Lock 139893610533608 acquired on \/home\/circleci\/.cache\/huggingface\/datasets\/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock\r\nINFO nlp.utils.file_utils:file_utils.py:386 https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/pandas\/pandas.py not found in cache or force_download set to True, downloading to \/home\/circleci\/.cache\/huggingface\/datasets\/tmp00hpyxrs\r\nINFO nlp.utils.file_utils:file_utils.py:391 storing https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/pandas\/pandas.py in cache at \/home\/circleci\/.cache\/huggingface\/datasets\/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py\r\nINFO nlp.utils.file_utils:file_utils.py:394 creating metadata file for \/home\/circleci\/.cache\/huggingface\/datasets\/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py\r\nINFO filelock:filelock.py:318 Lock 139893610533608 released on \/home\/circleci\/.cache\/huggingface\/datasets\/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock\r\nINFO nlp.load:load.py:157 Checking \/home\/circleci\/.cache\/huggingface\/datasets\/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py for additional imports.\r\nINFO filelock:filelock.py:274 Lock 139893610371224 acquired on \/home\/circleci\/.cache\/huggingface\/datasets\/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock\r\nINFO nlp.load:load.py:320 Found main folder for dataset https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/pandas\/pandas.py at \/home\/circleci\/.local\/lib\/python3.6\/site-packages\/nlp\/datasets\/pandas\r\nINFO nlp.load:load.py:333 Found specific version folder for dataset https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/pandas\/pandas.py at \/home\/circleci\/.local\/lib\/python3.6\/site-packages\/nlp\/datasets\/pandas\/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926\r\nINFO nlp.load:load.py:346 Found script file from https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/pandas\/pandas.py to \/home\/circleci\/.local\/lib\/python3.6\/site-packages\/nlp\/datasets\/pandas\/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926\/pandas.py\r\nINFO nlp.load:load.py:354 Couldn't find dataset infos file at https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/pandas\/dataset_infos.json\r\nINFO nlp.load:load.py:371 Found metadata file for 
dataset https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/pandas\/pandas.py at \/home\/circleci\/.local\/lib\/python3.6\/site-packages\/nlp\/datasets\/pandas\/91271af5018cf7184c27d5cd64802a1b234b3cf0e37dbca0d60f03179b13e926\/pandas.json\r\nINFO filelock:filelock.py:318 Lock 139893610371224 released on \/home\/circleci\/.cache\/huggingface\/datasets\/e5827d40e7a41d66bc5a2eded8dbc90694265f47d9e7cb0273ff6ff11ba426d9.aa556094028e27447a02bb38655ff97b3f4e06db1ac04c1bcdcf5b283b0f75b6.py.lock\r\nWARNING nlp.builder:builder.py:215 Using custom data configuration default\r\nINFO nlp.builder:builder.py:349 Generating dataset pandas (\/tmp\/tmp296h8eeg\/pandas\/default\/0.0.0)\r\nINFO nlp.builder:builder.py:397 Dataset not on Hf google storage. Downloading and preparing it from source\r\n____________________ AWSDatasetTest.test_load_dataset_text _____________________\r\n\r\nself = <tests.test_dataset_common.AWSDatasetTest testMethod=test_load_dataset_text>\r\ndataset_name = 'text'\r\n\r\n def test_load_dataset(self, dataset_name):\r\n configs = self.dataset_tester.load_all_configs(dataset_name)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs)\r\n\r\ntests\/test_dataset_common.py:231: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests\/test_dataset_common.py:125: in check_load_dataset\r\n dl_manager=mock_dl_manager, download_mode=GenerateMode.FORCE_REDOWNLOAD, ignore_verifications=True\r\n..\/.local\/lib\/python3.6\/site-packages\/nlp\/builder.py:432: in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n..\/.local\/lib\/python3.6\/site-packages\/nlp\/builder.py:466: in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = <nlp.datasets.text.bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b.text.Text object at 0x7f3b6a111550>\r\ndl_manager = <nlp.utils.mock_download_manager.MockDownloadManager object at 0x7f3b85582908>\r\n\r\n def _split_generators(self, dl_manager):\r\n \"\"\" The `datafiles` kwarg in load_dataset() can be a str, List[str], Dict[str,str], or Dict[str,List[str]].\r\n \r\n If str or List[str], then the dataset returns only the 'train' split.\r\n If dict, then keys should be from the `nlp.Split` enum.\r\n \"\"\"\r\n if isinstance(self.config.data_files, (str, list, tuple)):\r\n # Handle case with only one split\r\n files = self.config.data_files\r\n if isinstance(files, str):\r\n files = [files]\r\n return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={\"files\": files})]\r\n else:\r\n # Handle case with several splits and a dict mapping\r\n splits = []\r\n for split_name in [nlp.Split.TRAIN, nlp.Split.VALIDATION, nlp.Split.TEST]:\r\n> if split_name in self.config.data_files:\r\nE TypeError: argument of type 'NoneType' is not iterable\r\n\r\n..\/.local\/lib\/python3.6\/site-packages\/nlp\/datasets\/text\/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b\/text.py:24: TypeError\r\n------------------------------ Captured log call -------------------------------\r\nINFO filelock:filelock.py:274 Lock 139893159303656 acquired on \/home\/circleci\/.cache\/huggingface\/datasets\/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock\r\nINFO nlp.utils.file_utils:file_utils.py:386 
https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/text\/text.py not found in cache or force_download set to True, downloading to \/home\/circleci\/.cache\/huggingface\/datasets\/tmpk63omy4v\r\nINFO nlp.utils.file_utils:file_utils.py:391 storing https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/text\/text.py in cache at \/home\/circleci\/.cache\/huggingface\/datasets\/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py\r\nINFO nlp.utils.file_utils:file_utils.py:394 creating metadata file for \/home\/circleci\/.cache\/huggingface\/datasets\/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py\r\nINFO filelock:filelock.py:318 Lock 139893159303656 released on \/home\/circleci\/.cache\/huggingface\/datasets\/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock\r\nINFO nlp.load:load.py:157 Checking \/home\/circleci\/.cache\/huggingface\/datasets\/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py for additional imports.\r\nINFO filelock:filelock.py:274 Lock 139893159171352 acquired on \/home\/circleci\/.cache\/huggingface\/datasets\/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock\r\nINFO nlp.load:load.py:320 Found main folder for dataset https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/text\/text.py at \/home\/circleci\/.local\/lib\/python3.6\/site-packages\/nlp\/datasets\/text\r\nINFO nlp.load:load.py:333 Found specific version folder for dataset https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/text\/text.py at \/home\/circleci\/.local\/lib\/python3.6\/site-packages\/nlp\/datasets\/text\/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b\r\nINFO nlp.load:load.py:346 Found script file from https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/text\/text.py to \/home\/circleci\/.local\/lib\/python3.6\/site-packages\/nlp\/datasets\/text\/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b\/text.py\r\nINFO nlp.load:load.py:354 Couldn't find dataset infos file at https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/text\/dataset_infos.json\r\nINFO nlp.load:load.py:371 Found metadata file for dataset https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/text\/text.py at \/home\/circleci\/.local\/lib\/python3.6\/site-packages\/nlp\/datasets\/text\/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b\/text.json\r\nINFO filelock:filelock.py:318 Lock 139893159171352 released on \/home\/circleci\/.cache\/huggingface\/datasets\/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock\r\nINFO filelock:filelock.py:274 Lock 139893618479176 acquired on \/home\/circleci\/.cache\/huggingface\/datasets\/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock\r\nINFO nlp.utils.file_utils:file_utils.py:386 https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/text\/text.py not found in cache or force_download set to True, downloading to 
\/home\/circleci\/.cache\/huggingface\/datasets\/tmpkeykru_f\r\nINFO nlp.utils.file_utils:file_utils.py:391 storing https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/text\/text.py in cache at \/home\/circleci\/.cache\/huggingface\/datasets\/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py\r\nINFO nlp.utils.file_utils:file_utils.py:394 creating metadata file for \/home\/circleci\/.cache\/huggingface\/datasets\/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py\r\nINFO filelock:filelock.py:318 Lock 139893618479176 released on \/home\/circleci\/.cache\/huggingface\/datasets\/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock\r\nINFO nlp.load:load.py:157 Checking \/home\/circleci\/.cache\/huggingface\/datasets\/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py for additional imports.\r\nINFO filelock:filelock.py:274 Lock 139893618423848 acquired on \/home\/circleci\/.cache\/huggingface\/datasets\/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock\r\nINFO nlp.load:load.py:320 Found main folder for dataset https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/text\/text.py at \/home\/circleci\/.local\/lib\/python3.6\/site-packages\/nlp\/datasets\/text\r\nINFO nlp.load:load.py:333 Found specific version folder for dataset https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/text\/text.py at \/home\/circleci\/.local\/lib\/python3.6\/site-packages\/nlp\/datasets\/text\/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b\r\nINFO nlp.load:load.py:346 Found script file from https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/text\/text.py to \/home\/circleci\/.local\/lib\/python3.6\/site-packages\/nlp\/datasets\/text\/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b\/text.py\r\nINFO nlp.load:load.py:354 Couldn't find dataset infos file at https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/text\/dataset_infos.json\r\nINFO nlp.load:load.py:371 Found metadata file for dataset https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/text\/text.py at \/home\/circleci\/.local\/lib\/python3.6\/site-packages\/nlp\/datasets\/text\/bf5568367c6707640e5601a44ed0af98f40a8483db81a7db99b85fab6606fc8b\/text.json\r\nINFO filelock:filelock.py:318 Lock 139893618423848 released on \/home\/circleci\/.cache\/huggingface\/datasets\/3e34209a2741375a1db1ff03bf1abba1a9bd0e6016912d3ead0114b9d1ca2685.88f858fae8ed77fdff99fe23b726fce01f73388251e0a09a226e6f82cd4ffe6c.py.lock\r\nWARNING nlp.builder:builder.py:215 Using custom data configuration default\r\nINFO nlp.builder:builder.py:349 Generating dataset text (\/tmp\/tmpbu67mvue\/text\/default\/0.0.0)\r\nINFO nlp.builder:builder.py:397 Dataset not on Hf google storage. 
Downloading and preparing it from source\r\n=============================== warnings summary ===============================\r\n\/home\/circleci\/.local\/lib\/python3.6\/site-packages\/tensorflow\/python\/pywrap_tensorflow_internal.py:15\r\n \/home\/circleci\/.local\/lib\/python3.6\/site-packages\/tensorflow\/python\/pywrap_tensorflow_internal.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses\r\n import imp\r\n\r\ntests\/test_dataset_common.py::LocalDatasetTest::test_builder_class_tydiqa\r\n \/home\/circleci\/.local\/lib\/python3.6\/site-packages\/nlp\/datasets\/tydiqa\/42d88245bde7c0db6c0d48c822dcaa26c7299e0b40cace7e8d6a9e3628135125\/tydiqa.py:85: DeprecationWarning: invalid escape sequence \\G\r\n \"\"\"\r\n\r\ntests\/test_dataset_common.py::AWSDatasetTest::test_builder_class_mwsc\r\n \/home\/circleci\/.local\/lib\/python3.6\/site-packages\/nlp\/datasets\/mwsc\/53c0daac11b6794ff62b52a3a46c4f9da1bef68fd664a2f97b8918917aead715\/mwsc.py:70: DeprecationWarning: invalid escape sequence \\[\r\n pattern = \"\\[.*\\]\"\r\n\r\ntests\/test_dataset_common.py::AWSDatasetTest::test_builder_class_squadshifts\r\n \/home\/circleci\/.local\/lib\/python3.6\/site-packages\/nlp\/datasets\/squadshifts\/15536d7296a785325b99f6d84dfdceafa427419dd6caad110eabb5e5b4156cc2\/squadshifts.py:47: DeprecationWarning: invalid escape sequence \\ \r\n \"\"\"\r\n\r\n-- Docs: https:\/\/docs.pytest.org\/en\/latest\/warnings.html\r\n=========================== short test summary info ============================\r\nFAILED tests\/test_dataset_common.py::AWSDatasetTest::test_load_dataset_pandas\r\nFAILED tests\/test_dataset_common.py::AWSDatasetTest::test_load_dataset_text\r\n===== 2 failed, 934 passed, 516 skipped, 4 warnings in 1562.46s (0:26:02) ======\r\n\r\nExited with code exit status 1\r\nCircleCI received exit code 1\r\n```\r\nI get this failed test on CircleCI , but all the tests that I run locally where successful. The error also seems not to have any, obvious at least, connection with my code.\r\n\r\nAny suggestions? Thanks! 
:-) "],"created_at":1593680621000,"updated_at":1594800127000,"closed_at":1594800127000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/335","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/335","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/335.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/335.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/335\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/334","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/334\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/334\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/334\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/334","id":649661791,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQzMjk1NjQ0","number":334,"title":"Add dataset.shard() method","user":{"login":"jarednielsen","id":4564897,"node_id":"MDQ6VXNlcjQ1NjQ4OTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4564897?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jarednielsen","html_url":"https:\/\/github.com\/jarednielsen","followers_url":"https:\/\/api.github.com\/users\/jarednielsen\/followers","following_url":"https:\/\/api.github.com\/users\/jarednielsen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jarednielsen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jarednielsen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jarednielsen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jarednielsen\/orgs","repos_url":"https:\/\/api.github.com\/users\/jarednielsen\/repos","events_url":"https:\/\/api.github.com\/users\/jarednielsen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jarednielsen\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Great, done!"],"created_at":1593669919000,"updated_at":1594038936000,"closed_at":1594038936000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/334","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/334","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/334.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/334.patch"},"body":"Fixes https:\/\/github.com\/huggingface\/nlp\/issues\/312","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/334\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/333","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/333\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/333\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/333\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/333","id":649236516,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQyOTE1NDQ0","number":333,"title":"fix variable name typo","user":{"login":"stas00","id":10676103,"node_id":"MDQ6VXNlcjEwNjc2MTAz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10676103?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stas00","html_url":"https:\/\/github.com\/stas00","followers_url":"https:\/\/api.github.com\/users\/stas00\/followers","following_url":"https:\/\/api.github.com\/users\/stas00\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stas00\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stas00\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stas00\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stas00\/orgs","repos_url":"https:\/\/api.github.com\/users\/stas00\/repos","events_url":"https:\/\/api.github.com\/users\/stas00\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stas00\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Good catch :)\r\nI think there is another occurence that needs to be fixed in the second gist (line 4924 of the notebook file):\r\n```python\r\nbleu = nlp.load_metric(...)\r\n```","Was fixed in e16f79b5f7fc12a6a30c777722be46897a272e6f\r\nClosing it."],"created_at":1593630830000,"updated_at":1595605411000,"closed_at":1595579536000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/333","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/333","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/333.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/333.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/333\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/332","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/332\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/332\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/332\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/332","id":649140135,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQyODMwMzMz","number":332,"title":"Add 
wiki_dpr","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The two configurations don't have the same sizes, I may change that so that they both have 21015300 examples for convenience, even though it's supposed to have 21015324 examples in total.\r\n\r\nOne configuration only has 21015300 examples because it seems that the embeddings of the last 24 examples are missing.","It's ok to merge now imo. I'll make another PR if we find a way to have the missing embeddings"],"created_at":1593623520000,"updated_at":1594038077000,"closed_at":1594038076000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/332","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/332","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/332.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/332.patch"},"body":"Presented in the [Dense Passage Retrieval paper](https:\/\/arxiv.org\/pdf\/2004.04906.pdf), this dataset consists in 21M passages from the english wikipedia along with their 768-dim embeddings computed using DPR's context encoder.\r\n\r\nNote on the implementation:\r\n- There are two configs: with and without the embeddings (73GB vs 14GB)\r\n- I used a non-fixed-size sequence of floats to describe the feature format of the embeddings. 
I wanted to use fixed-size sequences but I had issues with reading the arrow file afterwards (for example `dataset[0]` was crashing)\r\n- I added the case for lists of urls as input of the download_manager","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/332\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/331","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/331\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/331\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/331\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/331","id":648533199,"node_id":"MDU6SXNzdWU2NDg1MzMxOTk=","number":331,"title":"Loading CNN\/Daily Mail dataset produces `nlp.utils.info_utils.NonMatchingSplitsSizesError`","user":{"login":"jxmorris12","id":13238952,"node_id":"MDQ6VXNlcjEzMjM4OTUy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13238952?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jxmorris12","html_url":"https:\/\/github.com\/jxmorris12","followers_url":"https:\/\/api.github.com\/users\/jxmorris12\/followers","following_url":"https:\/\/api.github.com\/users\/jxmorris12\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jxmorris12\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jxmorris12\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jxmorris12\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jxmorris12\/orgs","repos_url":"https:\/\/api.github.com\/users\/jxmorris12\/repos","events_url":"https:\/\/api.github.com\/users\/jxmorris12\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jxmorris12\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I couldn't reproduce on my side.\r\nIt looks like you were not able to generate all the examples, and you have the problem for each split train-test-validation.\r\nCould you try to enable logging, try again and send the logs ?\r\n```python\r\nimport logging\r\nlogging.basicConfig(level=logging.INFO)\r\n```","here's the log\r\n```\r\n>>> import nlp\r\nimport logging\r\nlogging.basicConfig(level=logging.INFO)\r\nnlp.load_dataset('cnn_dailymail', '3.0.0')\r\n>>> import logging\r\n>>> logging.basicConfig(level=logging.INFO)\r\n>>> nlp.load_dataset('cnn_dailymail', '3.0.0')\r\nINFO:nlp.load:Checking \/u\/jm8wx\/.cache\/huggingface\/datasets\/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py for additional imports.\r\nINFO:filelock:Lock 140443095301136 acquired on \/u\/jm8wx\/.cache\/huggingface\/datasets\/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py.lock\r\nINFO:nlp.load:Found main folder for dataset 
https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/cnn_dailymail\/cnn_dailymail.py at \/p\/qdata\/jm8wx\/datasets\/nlp\/src\/nlp\/datasets\/cnn_dailymail\r\nINFO:nlp.load:Found specific version folder for dataset https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/cnn_dailymail\/cnn_dailymail.py at \/p\/qdata\/jm8wx\/datasets\/nlp\/src\/nlp\/datasets\/cnn_dailymail\/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad\r\nINFO:nlp.load:Found script file from https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/cnn_dailymail\/cnn_dailymail.py to \/p\/qdata\/jm8wx\/datasets\/nlp\/src\/nlp\/datasets\/cnn_dailymail\/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad\/cnn_dailymail.py\r\nINFO:nlp.load:Updating dataset infos file from https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/cnn_dailymail\/dataset_infos.json to \/p\/qdata\/jm8wx\/datasets\/nlp\/src\/nlp\/datasets\/cnn_dailymail\/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad\/dataset_infos.json\r\nINFO:nlp.load:Found metadata file for dataset https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/cnn_dailymail\/cnn_dailymail.py at \/p\/qdata\/jm8wx\/datasets\/nlp\/src\/nlp\/datasets\/cnn_dailymail\/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad\/cnn_dailymail.json\r\nINFO:filelock:Lock 140443095301136 released on \/u\/jm8wx\/.cache\/huggingface\/datasets\/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py.lock\r\nINFO:nlp.info:Loading Dataset Infos from \/p\/qdata\/jm8wx\/datasets\/nlp\/src\/nlp\/datasets\/cnn_dailymail\/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad\r\nINFO:nlp.builder:Generating dataset cnn_dailymail (\/u\/jm8wx\/.cache\/huggingface\/datasets\/cnn_dailymail\/3.0.0\/3.0.0)\r\nINFO:nlp.builder:Dataset not on Hf google storage. 
Downloading and preparing it from source\r\nDownloading and preparing dataset cnn_dailymail\/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to \/u\/jm8wx\/.cache\/huggingface\/datasets\/cnn_dailymail\/3.0.0\/3.0.0...\r\nINFO:nlp.utils.info_utils:All the checksums matched successfully.\r\nINFO:nlp.builder:Generating split train\r\nINFO:nlp.arrow_writer:Done writing 285161 examples in 1240618482 bytes \/u\/jm8wx\/.cache\/huggingface\/datasets\/cnn_dailymail\/3.0.0\/3.0.0.incomplete\/cnn_dailymail-train.arrow.\r\nINFO:nlp.builder:Generating split validation\r\nINFO:nlp.arrow_writer:Done writing 13255 examples in 56637485 bytes \/u\/jm8wx\/.cache\/huggingface\/datasets\/cnn_dailymail\/3.0.0\/3.0.0.incomplete\/cnn_dailymail-validation.arrow.\r\nINFO:nlp.builder:Generating split test\r\nINFO:nlp.arrow_writer:Done writing 11379 examples in 48931393 bytes \/u\/jm8wx\/.cache\/huggingface\/datasets\/cnn_dailymail\/3.0.0\/3.0.0.incomplete\/cnn_dailymail-test.arrow.\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/p\/qdata\/jm8wx\/datasets\/nlp\/src\/nlp\/load.py\", line 520, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/p\/qdata\/jm8wx\/datasets\/nlp\/src\/nlp\/builder.py\", line 431, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/p\/qdata\/jm8wx\/datasets\/nlp\/src\/nlp\/builder.py\", line 488, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"\/p\/qdata\/jm8wx\/datasets\/nlp\/src\/nlp\/utils\/info_utils.py\", line 70, in verify_splits\r\n raise NonMatchingSplitsSizesError(str(bad_splits))\r\nnlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=49424491, num_examples=11490, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='test', num_bytes=48931393, num_examples=11379, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='train', num_bytes=1249178681, num_examples=287113, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='train', num_bytes=1240618482, num_examples=285161, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='validation', num_bytes=57149241, num_examples=13368, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='validation', num_bytes=56637485, num_examples=13255, dataset_name='cnn_dailymail')}]\r\n```","> here's the log\r\n> \r\n> ```\r\n> >>> import nlp\r\n> import logging\r\n> logging.basicConfig(level=logging.INFO)\r\n> nlp.load_dataset('cnn_dailymail', '3.0.0')\r\n> >>> import logging\r\n> >>> logging.basicConfig(level=logging.INFO)\r\n> >>> nlp.load_dataset('cnn_dailymail', '3.0.0')\r\n> INFO:nlp.load:Checking \/u\/jm8wx\/.cache\/huggingface\/datasets\/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py for additional imports.\r\n> INFO:filelock:Lock 140443095301136 acquired on \/u\/jm8wx\/.cache\/huggingface\/datasets\/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py.lock\r\n> INFO:nlp.load:Found main folder for dataset https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/cnn_dailymail\/cnn_dailymail.py at \/p\/qdata\/jm8wx\/datasets\/nlp\/src\/nlp\/datasets\/cnn_dailymail\r\n> INFO:nlp.load:Found specific version folder for dataset https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/cnn_dailymail\/cnn_dailymail.py at 
\/p\/qdata\/jm8wx\/datasets\/nlp\/src\/nlp\/datasets\/cnn_dailymail\/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad\r\n> INFO:nlp.load:Found script file from https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/cnn_dailymail\/cnn_dailymail.py to \/p\/qdata\/jm8wx\/datasets\/nlp\/src\/nlp\/datasets\/cnn_dailymail\/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad\/cnn_dailymail.py\r\n> INFO:nlp.load:Updating dataset infos file from https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/cnn_dailymail\/dataset_infos.json to \/p\/qdata\/jm8wx\/datasets\/nlp\/src\/nlp\/datasets\/cnn_dailymail\/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad\/dataset_infos.json\r\n> INFO:nlp.load:Found metadata file for dataset https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/cnn_dailymail\/cnn_dailymail.py at \/p\/qdata\/jm8wx\/datasets\/nlp\/src\/nlp\/datasets\/cnn_dailymail\/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad\/cnn_dailymail.json\r\n> INFO:filelock:Lock 140443095301136 released on \/u\/jm8wx\/.cache\/huggingface\/datasets\/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py.lock\r\n> INFO:nlp.info:Loading Dataset Infos from \/p\/qdata\/jm8wx\/datasets\/nlp\/src\/nlp\/datasets\/cnn_dailymail\/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad\r\n> INFO:nlp.builder:Generating dataset cnn_dailymail (\/u\/jm8wx\/.cache\/huggingface\/datasets\/cnn_dailymail\/3.0.0\/3.0.0)\r\n> INFO:nlp.builder:Dataset not on Hf google storage. Downloading and preparing it from source\r\n> Downloading and preparing dataset cnn_dailymail\/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to \/u\/jm8wx\/.cache\/huggingface\/datasets\/cnn_dailymail\/3.0.0\/3.0.0...\r\n> INFO:nlp.utils.info_utils:All the checksums matched successfully.\r\n> INFO:nlp.builder:Generating split train\r\n> INFO:nlp.arrow_writer:Done writing 285161 examples in 1240618482 bytes \/u\/jm8wx\/.cache\/huggingface\/datasets\/cnn_dailymail\/3.0.0\/3.0.0.incomplete\/cnn_dailymail-train.arrow.\r\n> INFO:nlp.builder:Generating split validation\r\n> INFO:nlp.arrow_writer:Done writing 13255 examples in 56637485 bytes \/u\/jm8wx\/.cache\/huggingface\/datasets\/cnn_dailymail\/3.0.0\/3.0.0.incomplete\/cnn_dailymail-validation.arrow.\r\n> INFO:nlp.builder:Generating split test\r\n> INFO:nlp.arrow_writer:Done writing 11379 examples in 48931393 bytes \/u\/jm8wx\/.cache\/huggingface\/datasets\/cnn_dailymail\/3.0.0\/3.0.0.incomplete\/cnn_dailymail-test.arrow.\r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 1, in <module>\r\n> File \"\/p\/qdata\/jm8wx\/datasets\/nlp\/src\/nlp\/load.py\", line 520, in load_dataset\r\n> builder_instance.download_and_prepare(\r\n> File \"\/p\/qdata\/jm8wx\/datasets\/nlp\/src\/nlp\/builder.py\", line 431, in download_and_prepare\r\n> self._download_and_prepare(\r\n> File \"\/p\/qdata\/jm8wx\/datasets\/nlp\/src\/nlp\/builder.py\", line 488, in _download_and_prepare\r\n> verify_splits(self.info.splits, split_dict)\r\n> File \"\/p\/qdata\/jm8wx\/datasets\/nlp\/src\/nlp\/utils\/info_utils.py\", line 70, in verify_splits\r\n> raise NonMatchingSplitsSizesError(str(bad_splits))\r\n> nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=49424491, num_examples=11490, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='test', num_bytes=48931393, 
num_examples=11379, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='train', num_bytes=1249178681, num_examples=287113, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='train', num_bytes=1240618482, num_examples=285161, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='validation', num_bytes=57149241, num_examples=13368, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='validation', num_bytes=56637485, num_examples=13255, dataset_name='cnn_dailymail')}]\r\n> ```\r\n\r\nWith `nlp == 0.3.0` version, I'm not able to reproduce this error on my side.\r\nWhich version are you using for reproducing your bug?\r\n\r\n```\r\n>> nlp.load_dataset('cnn_dailymail', '3.0.0')\r\n\r\n8.90k\/8.90k [00:18<00:00, 486B\/s]\r\n\r\nDownloading: 100%\r\n9.37k\/9.37k [00:00<00:00, 234kB\/s]\r\n\r\nDownloading and preparing dataset cnn_dailymail\/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to \/root\/.cache\/huggingface\/datasets\/cnn_dailymail\/3.0.0\/3.0.0...\r\nDownloading:\r\n159M\/? [00:09<00:00, 16.7MB\/s]\r\n\r\nDownloading:\r\n376M\/? [00:06<00:00, 62.6MB\/s]\r\n\r\nDownloading:\r\n2.11M\/? [00:06<00:00, 333kB\/s]\r\n\r\nDownloading:\r\n46.4M\/? [00:02<00:00, 18.4MB\/s]\r\n\r\nDownloading:\r\n2.43M\/? [00:00<00:00, 2.62MB\/s]\r\n\r\nDataset cnn_dailymail downloaded and prepared to \/root\/.cache\/huggingface\/datasets\/cnn_dailymail\/3.0.0\/3.0.0. Subsequent calls will reuse this data.\r\n{'test': Dataset(schema: {'article': 'string', 'highlights': 'string'}, num_rows: 11490),\r\n 'train': Dataset(schema: {'article': 'string', 'highlights': 'string'}, num_rows: 287113),\r\n 'validation': Dataset(schema: {'article': 'string', 'highlights': 'string'}, num_rows: 13368)}\r\n\r\n>> ...\r\n\r\n```","In general if some examples are missing after processing (hence causing the `NonMatchingSplitsSizesError `), it is often due to either\r\n1) corrupted cached files\r\n2) decoding errors\r\n\r\nI just checked the dataset script for code that could lead to decoding errors but I couldn't find any. Before we try to dive more into the processing of the dataset, could you try to clear your cache ? Just to make sure that it isn't 1)","Yes thanks for the support! 
I cleared out my cache folder and everything works fine now"],"created_at":1593555693000,"updated_at":1594299820000,"closed_at":1594299820000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"```\r\n>>> import nlp\r\n>>> nlp.load_dataset('cnn_dailymail', '3.0.0')\r\nDownloading and preparing dataset cnn_dailymail\/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to \/u\/jm8wx\/.cache\/huggingface\/datasets\/cnn_dailymail\/3.0.0\/3.0.0...\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/p\/qdata\/jm8wx\/datasets\/nlp\/src\/nlp\/load.py\", line 520, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"\/p\/qdata\/jm8wx\/datasets\/nlp\/src\/nlp\/builder.py\", line 431, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"\/p\/qdata\/jm8wx\/datasets\/nlp\/src\/nlp\/builder.py\", line 488, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"\/p\/qdata\/jm8wx\/datasets\/nlp\/src\/nlp\/utils\/info_utils.py\", line 70, in verify_splits\r\n raise NonMatchingSplitsSizesError(str(bad_splits))\r\nnlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=49424491, num_examples=11490, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='test', num_bytes=48931393, num_examples=11379, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='train', num_bytes=1249178681, num_examples=287113, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='train', num_bytes=1240618482, num_examples=285161, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='validation', num_bytes=57149241, num_examples=13368, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='validation', num_bytes=56637485, num_examples=13255, dataset_name='cnn_dailymail')}]\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/331\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/330","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/330\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/330\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/330\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/330","id":648525720,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQyMzIxMjEw","number":330,"title":"Doc 
red","user":{"login":"ghomasHudson","id":13795113,"node_id":"MDQ6VXNlcjEzNzk1MTEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13795113?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ghomasHudson","html_url":"https:\/\/github.com\/ghomasHudson","followers_url":"https:\/\/api.github.com\/users\/ghomasHudson\/followers","following_url":"https:\/\/api.github.com\/users\/ghomasHudson\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ghomasHudson\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ghomasHudson\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ghomasHudson\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ghomasHudson\/orgs","repos_url":"https:\/\/api.github.com\/users\/ghomasHudson\/repos","events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1593554731000,"updated_at":1594037439000,"closed_at":1593952049000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/330","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/330","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/330.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/330.patch"},"body":"Adding [DocRED](https:\/\/github.com\/thunlp\/DocRED) - a relation extraction dataset which tests document-level RE. A few implementation notes:\r\n\r\n- There are 2 separate versions of the training set - *annotated* and *distant*. 
Instead of `nlp.Split.Train` I've used the splits `\"train_annotated\"` and `\"train_distant\"` to reflect this.\r\n- As well as the relation id, the full relation name is mapped from `rel_info.json`\r\n- I renamed the 'h', 'r', 't' keys to 'head', 'relation' and 'tail' to make them more readable.\r\n- Used the fix from #319 to allow nested sequences of dicts.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/330\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/329","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/329\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/329\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/329\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/329","id":648446979,"node_id":"MDU6SXNzdWU2NDg0NDY5Nzk=","number":329,"title":"[Bug] FileLock dependency incompatible with filesystem","user":{"login":"jarednielsen","id":4564897,"node_id":"MDQ6VXNlcjQ1NjQ4OTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4564897?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jarednielsen","html_url":"https:\/\/github.com\/jarednielsen","followers_url":"https:\/\/api.github.com\/users\/jarednielsen\/followers","following_url":"https:\/\/api.github.com\/users\/jarednielsen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jarednielsen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jarednielsen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jarednielsen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jarednielsen\/orgs","repos_url":"https:\/\/api.github.com\/users\/jarednielsen\/repos","events_url":"https:\/\/api.github.com\/users\/jarednielsen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jarednielsen\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi, can you give details on your environment\/os\/packages versions\/etc?","Environment is Ubuntu 18.04, Python 3.7.5, nlp==0.3.0, filelock=3.0.12.\r\n\r\nThe external volume is Amazon FSx for Lustre, and it by default creates files with limited permissions. My working theory is that FileLock creates a lockfile that isn't writable, and thus there's no way to acquire it by removing the .lock file. But Python is able to create new files and write to them outside of the FileLock package.\r\n\r\nWhen I attempt to use FileLock within a Docker container by writing to `\/root\/.cache\/hello.txt`, it succeeds. So there's some permissions issue. 
But it's not a Docker configuration issue; I've replicated it without Docker.\r\n```bash\r\necho \"hello world\" >> hello.txt\r\nls -l\r\n\r\n-rw-rw-r-- 1 ubuntu ubuntu 10 Jun 30 19:52 hello.txt\r\n```","Looks like the `flock` syscall does not work on Lustre filesystems by default: https:\/\/github.com\/benediktschmitt\/py-filelock\/issues\/67.\r\n\r\nI added the `-o flock` option when mounting the filesystem, as [described here](https:\/\/docs.aws.amazon.com\/fsx\/latest\/LustreGuide\/getting-started-step2.html), which fixed the issue.","Awesome, thanks a lot for sharing your fix!"],"created_at":1593546331000,"updated_at":1593586558000,"closed_at":1593552786000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I'm downloading a dataset successfully with\r\n`load_dataset(\"wikitext\", \"wikitext-2-raw-v1\")`\r\n\r\nBut when I attempt to cache it on an external volume, it hangs indefinitely:\r\n`load_dataset(\"wikitext\", \"wikitext-2-raw-v1\", cache_dir=\"\/fsx\") # \/fsx is an external volume mount`\r\n\r\nThe filesystem when hanging looks like this:\r\n```bash\r\n\/fsx\r\n----downloads\r\n ----94be...73.lock\r\n----wikitext\r\n ----wikitext-2-raw\r\n ----wikitext-2-raw-1.0.0.incomplete\r\n```\r\n\r\nIt appears that on this filesystem, the FileLock object is forever stuck in its \"acquire\" stage. I have verified that the issue lies specifically with the `filelock` dependency:\r\n```python\r\nopen(\"\/fsx\/hello.txt\").write(\"hello\") # succeeds\r\n\r\nfrom filelock import FileLock\r\nwith FileLock(\"\/fsx\/hello.lock\"):\r\n open(\"\/fsx\/hello.txt\").write(\"hello\") # hangs indefinitely\r\n```\r\n\r\nHas anyone else run into this issue? I'd raise it directly on the FileLock repo, but that project appears abandoned with the last update over a year ago. 
Or if there's a solution that would remove the FileLock dependency from the project, I would appreciate that.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/329\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/328","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/328\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/328\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/328\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/328","id":648326841,"node_id":"MDU6SXNzdWU2NDgzMjY4NDE=","number":328,"title":"Fork dataset","user":{"login":"timothyjlaurent","id":2000204,"node_id":"MDQ6VXNlcjIwMDAyMDQ=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2000204?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/timothyjlaurent","html_url":"https:\/\/github.com\/timothyjlaurent","followers_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/followers","following_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/orgs","repos_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/repos","events_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/timothyjlaurent\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["To be able to generate the Arrow dataset you need to either use our csv or json utilities `load_dataset(\"json\", data_files=my_json_files)` OR write your own custom dataset script (you can find some inspiration from the [squad](https:\/\/github.com\/huggingface\/nlp\/blob\/master\/datasets\/squad\/squad.py) script for example). 
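For illustration, the json route can be as small as this (the file name and split layout below are just placeholders, not your actual files):\r\n```python\r\nimport nlp\r\n\r\n# placeholder JSON Lines file produced by your own preprocessing step\r\ndatasets = nlp.load_dataset(\"json\", data_files={\"train\": \"parsed_annotations_train.jsonl\"})\r\ndataset1 = datasets[\"train\"]  # the Arrow-backed \"Dataset1\"\r\nprint(len(dataset1), dataset1[0])\r\n```\r\n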
Custom dataset scripts can be called locally with `nlp.load_dataset(path_to_my_script_directory)`.\r\n\r\nThis should help you get what you call \"Dataset1\".\r\n\r\nThen using some dataset transforms like `.map` for example you can get to \"DatasetNER\" and \"DatasetREL\".\r\n","Thanks for the helpful advice, @lhoestq -- I wasn't quite able to get the json recipe working - \r\n\r\n```\r\n~\/.virtualenvs\/inv-text2struct\/lib\/python3.6\/site-packages\/pyarrow\/ipc.py in __init__(self, source)\r\n 60 \r\n 61 def __init__(self, source):\r\n---> 62 self._open(source)\r\n 63 \r\n 64 \r\n~\/.virtualenvs\/inv-text2struct\/lib\/python3.6\/site-packages\/pyarrow\/ipc.pxi in pyarrow.lib._RecordBatchStreamReader._open()\r\n~\/.virtualenvs\/inv-text2struct\/lib\/python3.6\/site-packages\/pyarrow\/error.pxi in pyarrow.lib.pyarrow_internal_check_status()\r\n~\/.virtualenvs\/inv-text2struct\/lib\/python3.6\/site-packages\/pyarrow\/error.pxi in pyarrow.lib.check_status()\r\nArrowInvalid: Tried reading schema message, was null or length 0\r\n```\r\n\r\nBut I'm going to give the generator_dataset_builder a try.\r\n\r\n1 more quick question -- can .map be used to output different length mappings -- could I skip one, or yield 2, can you map_batch ","You can use `.map(my_func, batched=True)` and return less examples, or more examples if you want","Thanks this answers my question. I think the issue I was having using the json loader were due to using gzipped jsonl files.\r\n\r\nThe error I get now is :\r\n\r\n```\r\n\r\nUsing custom data configuration test\r\n---------------------------------------------------------------------------\r\n\r\nValueError Traceback (most recent call last)\r\n\r\n<ipython-input-38-29082a31e5b2> in <module>\r\n 5 print(ner_datafiles)\r\n 6 \r\n----> 7 ds = nlp.load_dataset(\"json\", \"test\", data_files=ner_datafiles[0])\r\n 8 \r\n\r\n~\/.virtualenvs\/inv-text2struct\/lib\/python3.6\/site-packages\/nlp\/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 522 download_mode=download_mode,\r\n 523 ignore_verifications=ignore_verifications,\r\n--> 524 save_infos=save_infos,\r\n 525 )\r\n 526 \r\n\r\n~\/.virtualenvs\/inv-text2struct\/lib\/python3.6\/site-packages\/nlp\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 430 verify_infos = not save_infos and not ignore_verifications\r\n 431 self._download_and_prepare(\r\n--> 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 433 )\r\n 434 # Sync info\r\n\r\n~\/.virtualenvs\/inv-text2struct\/lib\/python3.6\/site-packages\/nlp\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 481 try:\r\n 482 # Prepare split will record examples associated to the split\r\n--> 483 self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 484 except OSError:\r\n 485 raise OSError(\"Cannot find data file. 
\" + (self.manual_download_instructions or \"\"))\r\n\r\n~\/.virtualenvs\/inv-text2struct\/lib\/python3.6\/site-packages\/nlp\/builder.py in _prepare_split(self, split_generator)\r\n 736 schema_dict[field.name] = Value(str(field.type))\r\n 737 \r\n--> 738 parse_schema(writer.schema, features)\r\n 739 self.info.features = Features(features)\r\n 740 \r\n\r\n~\/.virtualenvs\/inv-text2struct\/lib\/python3.6\/site-packages\/nlp\/builder.py in parse_schema(schema, schema_dict)\r\n 734 parse_schema(field.type.value_type, schema_dict[field.name])\r\n 735 else:\r\n--> 736 schema_dict[field.name] = Value(str(field.type))\r\n 737 \r\n 738 parse_schema(writer.schema, features)\r\n\r\n<string> in __init__(self, dtype, id, _type)\r\n\r\n~\/.virtualenvs\/inv-text2struct\/lib\/python3.6\/site-packages\/nlp\/features.py in __post_init__(self)\r\n 55 \r\n 56 def __post_init__(self):\r\n---> 57 self.pa_type = string_to_arrow(self.dtype)\r\n 58 \r\n 59 def __call__(self):\r\n\r\n~\/.virtualenvs\/inv-text2struct\/lib\/python3.6\/site-packages\/nlp\/features.py in string_to_arrow(type_str)\r\n 32 if str(type_str + \"_\") not in pa.__dict__:\r\n 33 raise ValueError(\r\n---> 34 f\"Neither {type_str} nor {type_str + '_'} seems to be a pyarrow data type. \"\r\n 35 f\"Please make sure to use a correct data type, see: \"\r\n 36 f\"https:\/\/arrow.apache.org\/docs\/python\/api\/datatypes.html#factory-functions\"\r\n\r\nValueError: Neither list<item: int64> nor list<item: int64>_ seems to be a pyarrow data type. Please make sure to use a correct data type, see: https:\/\/arrow.apache.org\/docs\/python\/api\/datatypes.html#factory-functions.\r\n```\r\n\r\nIf I just create a pa- table manually like is done in the jsonloader -- it seems to work fine. Ths JSON I'm trying to load isn't overly complex - 1 integer field, the rest text fields with a nested list of objects with text fields .","I'll close this -- It's still unclear how to go about troubleshooting the json example as I mentioned above. If I decide it's worth the trouble, I'll create another issue, or wait for a better support for using nlp for making custom data-loaders."],"created_at":1593535373000,"updated_at":1594071839000,"closed_at":1594071839000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"We have a multi-task learning model training I'm trying to convert to using the Arrow-based nlp dataset. \r\n\r\nWe're currently training a custom TensorFlow model but the nlp paradigm should be a bridge for us to be able to use the wealth of pre-trained models in Transformers.\r\n\r\nOur preprocessing flow parses raw text and json with Entity and Relations annotations and creates 2 datasets for training a NER and Relations prediction heads.\r\n\r\nIs there some good way to \"fork\" dataset-\r\n\r\nEG\r\n\r\n1. text + json -> Dataset1\r\n1. Dataset1 -> DatasetNER\r\n1. Dataset1 -> DatasetREL\r\n\r\nor \r\n\r\n1. text + json -> Dataset1\r\n1. Dataset1 -> DatasetNER\r\n1. 
Dataset1 + DatasetNER -> DatasetREL\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/328\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/327","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/327\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/327\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/327\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/327","id":648312858,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQyMTQyOTQw","number":327,"title":"set seed for suffling tests","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1593534094000,"updated_at":1593678845000,"closed_at":1593678844000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/327","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/327","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/327.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/327.patch"},"body":"Some tests were randomly failing because of a missing seed in a test for `train_test_split(shuffle=True)`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/327\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/326","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/326\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/326\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/326\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/326","id":648126103,"node_id":"MDU6SXNzdWU2NDgxMjYxMDM=","number":326,"title":"Large dataset in 
Squad2-format","user":{"login":"flozi00","id":47894090,"node_id":"MDQ6VXNlcjQ3ODk0MDkw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47894090?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/flozi00","html_url":"https:\/\/github.com\/flozi00","followers_url":"https:\/\/api.github.com\/users\/flozi00\/followers","following_url":"https:\/\/api.github.com\/users\/flozi00\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/flozi00\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/flozi00\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/flozi00\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/flozi00\/orgs","repos_url":"https:\/\/api.github.com\/users\/flozi00\/repos","events_url":"https:\/\/api.github.com\/users\/flozi00\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/flozi00\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I'm pretty sure you can get some inspiration from the squad_v2 script. It looks like the dataset is quite big so it will take some time for the users to generate it, but it should be reasonable.\r\n\r\nAlso you are saying that you are still making the dataset grow in size right ?\r\nIt's probably good practice to let the users do their training\/evaluations with the exact same version of the dataset.\r\nWe allow for each dataset to specify a version (ex: 1.0.0) and increment this number every time there are new samples in the dataset for example. Does it look like a good solution for you ? Or would you rather have one final version with the full dataset ?","It would also be good if there is any possibility for versioning, I think this way is much better than the dynamic way.\nIf you mean that part to put the tiles into one is the generation it would take up to 15-20 minutes on home computer hardware.\nAre there any compression or optimization algorithms while generating the dataset ?\nOtherwise the hardware limit is around 32 GB ram at the moment.\nIf everything works well we will add some more gigabytes of data in future what would make it pretty memory costly.","15-20 minutes is fine !\r\nAlso there's no RAM limitations as we save to disk every 1000 elements while generating the dataset by default.\r\nAfter generation, the dataset is ready to use with (again) no RAM limitations as we do memory-mapping.","Wow, that sounds pretty cool.\nActually I have the problem of running out of memory while tokenization on our local machine.\nThat wouldn't happen again, would it ?","You can do the tokenization step using `my_tokenized_dataset = my_dataset.map(my_tokenize_function)` that writes the tokenized texts on disk as well. And then `my_tokenized_dataset` will be a memory-mapped dataset too, so you should be fine :)","Does it have an affect to the trainings speed ?","In your training loop, loading the tokenized texts is going to be fast and pretty much negligible compared to a forward pass. You shouldn't expect any slow down.","Closing this one. 
Feel free to re-open if you have other questions"],"created_at":1593519539000,"updated_at":1594285310000,"closed_at":1594285310000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"At the moment we are building an large question answering dataset and think about sharing it with the huggingface community.\r\nCaused the computing power we splitted it into multiple tiles, but they are all in the same format.\r\nRight now the most important facts about are this:\r\n- Contexts: 1.047.671\r\n- questions: 1.677.732\r\n- Answers: 6.742.406\r\n- unanswerable: 377.398\r\n\r\nIt is already cleaned\r\n\r\n<pre><code>\r\ntrain_data = [\r\n {\r\n 'context': \"this is the context\",\r\n 'qas': [\r\n {\r\n 'id': \"00002\",\r\n 'is_impossible': False,\r\n 'question': \"whats is this\",\r\n 'answers': [\r\n {\r\n 'text': \"answer\",\r\n 'answer_start': 0\r\n }\r\n ]\r\n },\r\n {\r\n 'id': \"00003\",\r\n 'is_impossible': False,\r\n 'question': \"question2\",\r\n 'answers': [\r\n {\r\n 'text': \"answer2\",\r\n 'answer_start': 1\r\n }\r\n ]\r\n }\r\n ]\r\n }\r\n]\r\n<\/code><\/pre>\r\n\r\nCause it is growing every day we are thinking about an structure like this:\r\nWe host an Json file, containing all the download links and the script can load it dynamically.\r\nAt the moment it is around ~20GB\r\n\r\nAny advice how to handle this, or an ready to use template ?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/326\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/325","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/325\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/325\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/325\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/325","id":647601592,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQxNTk3NTgw","number":325,"title":"Add SQuADShifts dataset","user":{"login":"millerjohnp","id":8953195,"node_id":"MDQ6VXNlcjg5NTMxOTU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8953195?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/millerjohnp","html_url":"https:\/\/github.com\/millerjohnp","followers_url":"https:\/\/api.github.com\/users\/millerjohnp\/followers","following_url":"https:\/\/api.github.com\/users\/millerjohnp\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/millerjohnp\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/millerjohnp\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/millerjohnp\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/millerjohnp\/orgs","repos_url":"https:\/\/api.github.com\/users\/millerjohnp\/repos","events_url":"https:\/\/api.github.com\/users\/millerjohnp\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/millerjohnp\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Very cool to have this dataset, thank you for adding it 
:)"],"created_at":1593457876000,"updated_at":1593536851000,"closed_at":1593536851000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/325","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/325","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/325.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/325.patch"},"body":"This PR adds the four new variants of the SQuAD dataset used in [The Effect of Natural Distribution Shift on Question Answering Models](https:\/\/arxiv.org\/abs\/2004.14444) to facilitate evaluating model robustness to distribution shift.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/325\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/324","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/324\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/324\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/324\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/324","id":647525725,"node_id":"MDU6SXNzdWU2NDc1MjU3MjU=","number":324,"title":"Error when calculating glue score","user":{"login":"D-i-l-r-u-k-s-h-i","id":47185867,"node_id":"MDQ6VXNlcjQ3MTg1ODY3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47185867?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/D-i-l-r-u-k-s-h-i","html_url":"https:\/\/github.com\/D-i-l-r-u-k-s-h-i","followers_url":"https:\/\/api.github.com\/users\/D-i-l-r-u-k-s-h-i\/followers","following_url":"https:\/\/api.github.com\/users\/D-i-l-r-u-k-s-h-i\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/D-i-l-r-u-k-s-h-i\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/D-i-l-r-u-k-s-h-i\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/D-i-l-r-u-k-s-h-i\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/D-i-l-r-u-k-s-h-i\/orgs","repos_url":"https:\/\/api.github.com\/users\/D-i-l-r-u-k-s-h-i\/repos","events_url":"https:\/\/api.github.com\/users\/D-i-l-r-u-k-s-h-i\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/D-i-l-r-u-k-s-h-i\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The glue metric for cola is a metric for classification. 
It expects label ids as integers as inputs.","I want to evaluate a sentence pair whether they are semantically equivalent, so I used MRPC and it gives the same error, does that mean we have to encode the sentences and parse as input?\r\n\r\nusing BertTokenizer;\r\n```\r\nencoded_reference=tokenizer.encode(reference, add_special_tokens=False)\r\nencoded_prediction=tokenizer.encode(prediction, add_special_tokens=False)\r\n```\r\n\r\n`glue_score = glue_metric.compute(encoded_prediction, encoded_reference)`\r\n```\r\n\r\nValueError Traceback (most recent call last)\r\n<ipython-input-9-4c3a3ce7b583> in <module>()\r\n----> 1 glue_score = glue_metric.compute(encoded_prediction, encoded_reference)\r\n\r\n6 frames\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/metric.py in compute(self, predictions, references, timeout, **metrics_kwargs)\r\n 198 predictions = self.data[\"predictions\"]\r\n 199 references = self.data[\"references\"]\r\n--> 200 output = self._compute(predictions=predictions, references=references, **metrics_kwargs)\r\n 201 return output\r\n 202 \r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/metrics\/glue\/27b1bc63e520833054bd0d7a8d0bc7f6aab84cc9eed1b576e98c806f9466d302\/glue.py in _compute(self, predictions, references)\r\n 101 return pearson_and_spearman(predictions, references)\r\n 102 elif self.config_name in [\"mrpc\", \"qqp\"]:\r\n--> 103 return acc_and_f1(predictions, references)\r\n 104 elif self.config_name in [\"sst2\", \"mnli\", \"mnli_mismatched\", \"mnli_matched\", \"qnli\", \"rte\", \"wnli\", \"hans\"]:\r\n 105 return {\"accuracy\": simple_accuracy(predictions, references)}\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/metrics\/glue\/27b1bc63e520833054bd0d7a8d0bc7f6aab84cc9eed1b576e98c806f9466d302\/glue.py in acc_and_f1(preds, labels)\r\n 60 def acc_and_f1(preds, labels):\r\n 61 acc = simple_accuracy(preds, labels)\r\n---> 62 f1 = f1_score(y_true=labels, y_pred=preds)\r\n 63 return {\r\n 64 \"accuracy\": acc,\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/sklearn\/metrics\/_classification.py in f1_score(y_true, y_pred, labels, pos_label, average, sample_weight, zero_division)\r\n 1097 pos_label=pos_label, average=average,\r\n 1098 sample_weight=sample_weight,\r\n-> 1099 zero_division=zero_division)\r\n 1100 \r\n 1101 \r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/sklearn\/metrics\/_classification.py in fbeta_score(y_true, y_pred, beta, labels, pos_label, average, sample_weight, zero_division)\r\n 1224 warn_for=('f-score',),\r\n 1225 sample_weight=sample_weight,\r\n-> 1226 zero_division=zero_division)\r\n 1227 return f\r\n 1228 \r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/sklearn\/metrics\/_classification.py in precision_recall_fscore_support(y_true, y_pred, beta, labels, pos_label, average, warn_for, sample_weight, zero_division)\r\n 1482 raise ValueError(\"beta should be >=0 in the F-beta score\")\r\n 1483 labels = _check_set_wise_labels(y_true, y_pred, average, labels,\r\n-> 1484 pos_label)\r\n 1485 \r\n 1486 # Calculate tp_sum, pred_sum, true_sum ###\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/sklearn\/metrics\/_classification.py in _check_set_wise_labels(y_true, y_pred, average, labels, pos_label)\r\n 1314 raise ValueError(\"Target is %s but average='binary'. 
Please \"\r\n 1315 \"choose another average setting, one of %r.\"\r\n-> 1316 % (y_type, average_options))\r\n 1317 elif pos_label not in (None, 1):\r\n 1318 warnings.warn(\"Note that pos_label (set to %r) is ignored when \"\r\n\r\nValueError: Target is multiclass but average='binary'. Please choose another average setting, one of [None, 'micro', 'macro', 'weighted'].\r\n\r\n```","MRPC is also a binary classification task, so its metric is a binary classification metric.\r\n\r\nTo evaluate if pairs of sentences are semantically equivalent, maybe you could take a look at models that compute if one sentence entails the other or not (typically the kinds of model that could work well on the MRPC task).","Closing this one. Feel free to re-open if you have other questions :)"],"created_at":1593449628000,"updated_at":1594286014000,"closed_at":1594286014000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I was trying glue score along with other metrics here. But glue gives me this error;\r\n\r\n```\r\nimport nlp\r\nglue_metric = nlp.load_metric('glue',name=\"cola\")\r\n\r\nglue_score = glue_metric.compute(predictions, references)\r\n```\r\n\r\n```\r\n---------------------------------------------------------------------------\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-8-b9210a524504> in <module>()\r\n----> 1 glue_score = glue_metric.compute(predictions, references)\r\n\r\n6 frames\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/metric.py in compute(self, predictions, references, timeout, **metrics_kwargs)\r\n 191 \"\"\"\r\n 192 if predictions is not None:\r\n--> 193 self.add_batch(predictions=predictions, references=references)\r\n 194 self.finalize(timeout=timeout)\r\n 195 \r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/metric.py in add_batch(self, predictions, references, **kwargs)\r\n 207 if self.writer is None:\r\n 208 self._init_writer()\r\n--> 209 self.writer.write_batch(batch)\r\n 210 \r\n 211 def add(self, prediction=None, reference=None, **kwargs):\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)\r\n 155 if self.pa_writer is None:\r\n 156 self._build_writer(pa_table=pa.Table.from_pydict(batch_examples))\r\n--> 157 pa_table: pa.Table = pa.Table.from_pydict(batch_examples, schema=self._schema)\r\n 158 if writer_batch_size is None:\r\n 159 writer_batch_size = self.writer_batch_size\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/pyarrow\/types.pxi in __iter__()\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/pyarrow\/array.pxi in pyarrow.lib.asarray()\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/pyarrow\/array.pxi in pyarrow.lib.array()\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/pyarrow\/array.pxi in pyarrow.lib._sequence_to_array()\r\n\r\nTypeError: an integer is required (got type str)\r\n```\r\nI'm not sure whether I'm doing this wrong or whether it's an issue. I would like to know a workaround. 
Thank you.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/324\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/323","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/323\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/323\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/323\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/323","id":647521308,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQxNTMxOTY3","number":323,"title":"Add package path to sys when downloading package as github archive","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Sorry for the long diff, everything after the imports comes from `black` for code quality :\/ "," I think it's fine and I can't think of another way to make the import work anyways.\r\n\r\nMaybe we can have the `sys.path` behavior inside `prepare_module` instead ? Currently it seems to come out of nowhere in the code ^^'\r\nWe could check if external imports have a `__init__.py` and if it is the case then we can add to directory to the `PYTHONPATH`"],"created_at":1593449161000,"updated_at":1596117623000,"closed_at":1596117623000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/323","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/323","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/323.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/323.patch"},"body":"This fixes the `coval.py` metric so that imports within the downloaded module work correctly. We can use a similar trick to add the BLEURT metric (@ankparikh)\r\n\r\n@thomwolf not sure how you feel about adding to the `PYTHONPATH` from the script. 
This is the only way I could make it work with my understanding of `importlib` but there might be a more elegant method.\r\n\r\nThis PR fixes https:\/\/github.com\/huggingface\/nlp\/issues\/305","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/323\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/322","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/322\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/322\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/322\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/322","id":647483850,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQxNTAyMjc2","number":322,"title":"output nested dict in get_nearest_examples","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1593445667000,"updated_at":1593678813000,"closed_at":1593678812000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/322","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/322","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/322.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/322.patch"},"body":"As we are using a columnar format like arrow as the backend for datasets, we expect to have a dictionary of columns when we slice a dataset like in this example:\r\n```python\r\nmy_examples = dataset[0:10]\r\nprint(type(my_examples))\r\n# >>> dict\r\nprint(my_examples[\"my_column\"][0]\r\n# >>> this is the first element of the column 'my_column'\r\n```\r\n\r\nTherefore I wanted to keep this logic when calling `get_nearest_examples` that returns the top 10 nearest examples:\r\n```python\r\ndataset.add_faiss_index(column=\"embeddings\")\r\nscores, examples = dataset.get_nearest_examples(\"embeddings\", query=my_numpy_embedding)\r\nprint(type(examples))\r\n# >>> dict\r\n```\r\n\r\nPreviously it was returning a list[dict]. 
It was the only place that was using this output format.\r\n\r\nTo make it work I had to implement `__getitem__(key)` where `key` is a list.\r\nThis is different from `.select` because `.select` is a dataset transform (it returns a new dataset object) while `__getitem__` is an extraction method (it returns python dictionaries).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/322\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/321","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/321\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/321\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/321\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/321","id":647271526,"node_id":"MDU6SXNzdWU2NDcyNzE1MjY=","number":321,"title":"ERROR:root:mwparserfromhell","user":{"login":"Shiro-LK","id":26505641,"node_id":"MDQ6VXNlcjI2NTA1NjQx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/26505641?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Shiro-LK","html_url":"https:\/\/github.com\/Shiro-LK","followers_url":"https:\/\/api.github.com\/users\/Shiro-LK\/followers","following_url":"https:\/\/api.github.com\/users\/Shiro-LK\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Shiro-LK\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Shiro-LK\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Shiro-LK\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Shiro-LK\/orgs","repos_url":"https:\/\/api.github.com\/users\/Shiro-LK\/repos","events_url":"https:\/\/api.github.com\/users\/Shiro-LK\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Shiro-LK\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["It looks like it comes from `mwparserfromhell`.\r\n\r\nWould it be possible to get the bad `section` that causes this issue ? The `section` string is from `datasets\/wikipedia.py:L548` ? 
You could just add a `try` statement and print the section if the line `section_text.append(section.strip_code().strip())` crashes.\r\n\r\nIt will help us know if we have to fix it on our side or if it is a `mwparserfromhell` issue.","Hi, \r\n\r\nThank you for you answer.\r\nI have try to print the bad section using `try` and `except`, but it is a bit weird as the error seems to appear 3 times for instance, but the two first error does not print anything (as if the function did not go in the `except` part).\r\nFor the third one, I got that (I haven't display the entire text) :\r\n\r\n> error : ==== Parque nacional Cajas ====\r\n> {{AP|Parque nacional Cajas}}\r\n> [[Archivo:Ecuador cajas national park.jpg|thumb|left|300px|Laguna del Cajas]]\r\n> El parque nacional Cajas est\u00e1 situado en los [[Cordillera de los Andes|Andes]], al sur del [[Ecuador]], en la provincia de [[Provincia de Azuay|Azuay]], a 33\r\n> [[km]] al noroccidente de la ciudad de [[Cuenca (Ecuador)|Cuenca]]. Los accesos m\u00e1s comunes al parque inician todos en Cuenca: Desde all\u00ed, la v\u00eda Cuenca-Mol\r\n> leturo atraviesa en Control de [[Surocucho]] en poco m\u00e1s de 30 minutos de viaje; m\u00e1s adelante, esta misma carretera pasa a orillas de la laguna La Toreadora donde est\u00e1n el Centro Administrativo y de Informaci\u00f3n del parque. Siguiendo de largo hacia [[Molleturo]], por esta v\u00eda se conoce el sector norte del Cajas y se serpentea entre varias lagunas mayores y menores.\r\n> Para acceder al parque desde la costa, la v\u00eda Molleturo-Cuenca es tambi\u00e9n la mejor opci\u00f3n.\r\n\r\nHow can I display the link instead of the text ? I suppose it will help you more ","The error appears several times as Apache Beam retries to process examples up to 4 times irc.\r\n\r\nI just tried to run this text into `mwparserfromhell` but it worked without the issue.\r\n\r\nI used this code (from the `wikipedia.py` script):\r\n```python\r\nimport mwparserfromhell as parser\r\nimport re\r\nimport six\r\n\r\nraw_content = r\"\"\"==== Parque nacional Cajas ====\r\n{{AP|Parque nacional Cajas}}\r\n[[Archivo:Ecuador cajas national park.jpg|thumb|left|300px|Laguna del Cajas]]\r\nEl parque nacional Cajas est\u00e1 situado en los [[Cordillera de los Andes|Andes]], al sur del [[Ecuador]], en la provincia de [[Provincia de Azuay|Azuay]], a 33\r\n[[km]] al noroccidente de la ciudad de [[Cuenca (Ecuador)|Cuenca]]. Los accesos m\u00e1s comunes al parque inician todos en Cuenca: Desde all\u00ed, la v\u00eda Cuenca-Mol\r\nleturo atraviesa en Control de [[Surocucho]] en poco m\u00e1s de 30 minutos de viaje; m\u00e1s adelante, esta misma carretera pasa a orillas de la laguna La Toreadora donde est\u00e1n el Centro Administrativo y de Informaci\u00f3n del parque. 
Siguiendo de largo hacia [[Molleturo]], por esta v\u00eda se conoce el sector norte del Cajas y se serpentea entre varias lagunas mayores y menores.\r\n\"\"\"\r\n\r\nwikicode = parser.parse(raw_content)\r\n\r\n# Filters for references, tables, and file\/image links.\r\nre_rm_wikilink = re.compile(\"^(?:File|Image|Media):\", flags=re.IGNORECASE | re.UNICODE)\r\n\r\ndef rm_wikilink(obj):\r\n return bool(re_rm_wikilink.match(six.text_type(obj.title)))\r\n\r\ndef rm_tag(obj):\r\n return six.text_type(obj.tag) in {\"ref\", \"table\"}\r\n\r\ndef rm_template(obj):\r\n return obj.name.lower() in {\"reflist\", \"notelist\", \"notelist-ua\", \"notelist-lr\", \"notelist-ur\", \"notelist-lg\"}\r\n\r\ndef try_remove_obj(obj, section):\r\n try:\r\n section.remove(obj)\r\n except ValueError:\r\n # For unknown reasons, objects are sometimes not found.\r\n pass\r\n\r\nsection_text = []\r\nfor section in wikicode.get_sections(flat=True, include_lead=True, include_headings=True):\r\n for obj in section.ifilter_wikilinks(matches=rm_wikilink, recursive=True):\r\n try_remove_obj(obj, section)\r\n for obj in section.ifilter_templates(matches=rm_template, recursive=True):\r\n try_remove_obj(obj, section)\r\n for obj in section.ifilter_tags(matches=rm_tag, recursive=True):\r\n try_remove_obj(obj, section)\r\n\r\n section_text.append(section.strip_code().strip())\r\n```","Not sure why we're having this issue. Maybe could you get also the file that's causing that ?","thanks for your answer.\r\nHow can I know which file is causing the issue ? \r\nI am trying to load the spanish wikipedia data. ","Because of the way Apache Beam works we indeed don't have access to the file name at this point in the code.\r\nWe'll have to use some tricks I think :p \r\n\r\nYou can append `filepath` to `title` in `wikipedia.py:L512` for example. [[EDIT: it's L494 my bad]]\r\nThen just do `try:...except:` on the call of `_parse_and_clean_wikicode` L500 I guess.\r\n\r\nThanks for diving into this ! I tried it myself but I run out of memory on my laptop\r\nAs soon as we have the name of the file it should be easier to find what's wrong.","Thanks for your help.\r\n\r\nI tried to print the \"title\" of the document inside the` except (mwparserfromhell.parser.ParserError) as e`,the title displayed was : \"Campeonato Mundial de futsal de la AMF 2015\". (Wikipedia ES) Is it what you were looking for ?","Thanks a lot @Shiro-LK !\r\n\r\nI was able to reproduce the issue. 
It comes from [this table on wikipedia](https:\/\/es.wikipedia.org\/wiki\/Campeonato_Mundial_de_futsal_de_la_AMF_2015#Clasificados) that can't be parsed.\r\n\r\nThe file in which the problem occurs comes from the wikipedia dumps, and it can be downloaded [here](https:\/\/dumps.wikimedia.org\/eswiki\/20200501\/eswiki-20200501-pages-articles-multistream6.xml-p6424816p7924815.bz2)\r\n\r\nParsing the file this way raises the parsing issue:\r\n\r\n```python\r\nimport mwparserfromhell as parser\r\nfrom tqdm.auto import tqdm\r\nimport bz2\r\nimport six\r\nimport logging\r\nimport codecs\r\nimport xml.etree.cElementTree as etree\r\n\r\nfilepath = \"path\/to\/eswiki-20200501-pages-articles-multistream6.xml-p6424816p7924815.bz2\"\r\n\r\ndef _extract_content(filepath):\r\n \"\"\"Extracts article content from a single WikiMedia XML file.\"\"\"\r\n logging.info(\"generating examples from = %s\", filepath)\r\n with open(filepath, \"rb\") as f:\r\n f = bz2.BZ2File(filename=f)\r\n if six.PY3:\r\n # Workaround due to:\r\n # https:\/\/github.com\/tensorflow\/tensorflow\/issues\/33563\r\n utf_f = codecs.getreader(\"utf-8\")(f)\r\n else:\r\n utf_f = f\r\n # To clear root, to free-up more memory than just `elem.clear()`.\r\n context = etree.iterparse(utf_f, events=(\"end\",))\r\n context = iter(context)\r\n unused_event, root = next(context)\r\n for unused_event, elem in tqdm(context, total=949087):\r\n if not elem.tag.endswith(\"page\"):\r\n continue\r\n namespace = elem.tag[:-4]\r\n title = elem.find(\".\/{0}title\".format(namespace)).text\r\n ns = elem.find(\".\/{0}ns\".format(namespace)).text\r\n id_ = elem.find(\".\/{0}id\".format(namespace)).text\r\n # Filter pages that are not in the \"main\" namespace.\r\n if ns != \"0\":\r\n root.clear()\r\n continue\r\n raw_content = elem.find(\".\/{0}revision\/{0}text\".format(namespace)).text\r\n root.clear()\r\n\r\n if \"Campeonato Mundial de futsal de la AMF 2015\" in title:\r\n yield (id_, title, raw_content)\r\n\r\nfor id_, title, raw_content in _extract_content(filepath):\r\n wikicode = parser.parse(raw_content)\r\n```\r\n\r\nThe copied the raw content that can't be parsed [here](https:\/\/pastebin.com\/raw\/ZbmevLyH).\r\n\r\nThe minimal code to reproduce is:\r\n```python\r\nimport mwparserfromhell as parser\r\nimport requests\r\n\r\nraw_content = requests.get(\"https:\/\/pastebin.com\/raw\/ZbmevLyH\").content.decode(\"utf-8\")\r\nwikicode = parser.parse(raw_content)\r\n\r\n```\r\n\r\nI will create an issue on mwparserfromhell's repo to see if we can fix that\r\n","This going to be fixed in the next `mwparserfromhell` release :)"],"created_at":1593429043000,"updated_at":1595521714000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\n\r\nI am trying to download some wikipedia data but I got this error for spanish \"es\" (but there are maybe some others languages which have the same error I haven't tried all of them ).\r\n\r\n`ERROR:root:mwparserfromhell ParseError: This is a bug and should be reported. 
Info: C tokenizer exited with non-empty token stack.`\r\n\r\nThe code I have use was : \r\n`dataset = load_dataset('wikipedia', '20200501.es', beam_runner='DirectRunner')`\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/321\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/320","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/320\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/320\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/320\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/320","id":647188167,"node_id":"MDU6SXNzdWU2NDcxODgxNjc=","number":320,"title":"Blog Authorship Corpus, Non Matching Splits Sizes Error, nlp viewer","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[{"id":2107841032,"node_id":"MDU6TGFiZWwyMTA3ODQxMDMy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/nlp-viewer","name":"nlp-viewer","color":"94203D","default":false,"description":""}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I wonder if this means downloading failed? 
That corpus has a really slow server.","This dataset seems to have a decoding problem that results in inconsistencies in the number of generated examples.\r\nSee #215.\r\nThat's why we end up with a `NonMatchingSplitsSizesError `."],"created_at":1593416195000,"updated_at":1593441882000,"closed_at":1593441882000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Selecting `blog_authorship_corpus` in the nlp viewer throws the following error: \r\n\r\n```\r\nNonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='train', num_bytes=614706451, num_examples=535568, dataset_name='blog_authorship_corpus')}, {'expected': SplitInfo(name='validation', num_bytes=37500394, num_examples=31277, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='validation', num_bytes=32553710, num_examples=28521, dataset_name='blog_authorship_corpus')}]\r\nTraceback:\r\nFile \"\/home\/sasha\/streamlit\/lib\/streamlit\/ScriptRunner.py\", line 322, in _run_script\r\n exec(code, module.__dict__)\r\nFile \"\/home\/sasha\/nlp-viewer\/run.py\", line 172, in <module>\r\n dts, fail = get(str(option.id), str(conf_option.name) if conf_option else None)\r\nFile \"\/home\/sasha\/streamlit\/lib\/streamlit\/caching.py\", line 591, in wrapped_func\r\n return get_or_create_cached_value()\r\nFile \"\/home\/sasha\/streamlit\/lib\/streamlit\/caching.py\", line 575, in get_or_create_cached_value\r\n return_value = func(*args, **kwargs)\r\nFile \"\/home\/sasha\/nlp-viewer\/run.py\", line 132, in get\r\n builder_instance.download_and_prepare()\r\nFile \"\/home\/sasha\/.local\/share\/virtualenvs\/lib-ogGKnCK_\/lib\/python3.7\/site-packages\/nlp\/builder.py\", line 432, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\nFile \"\/home\/sasha\/.local\/share\/virtualenvs\/lib-ogGKnCK_\/lib\/python3.7\/site-packages\/nlp\/builder.py\", line 488, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\nFile \"\/home\/sasha\/.local\/share\/virtualenvs\/lib-ogGKnCK_\/lib\/python3.7\/site-packages\/nlp\/utils\/info_utils.py\", line 70, in verify_splits\r\n raise NonMatchingSplitsSizesError(str(bad_splits))\r\n```\r\n@srush @lhoestq ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/320\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/319","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/319\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/319\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/319\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/319","id":646792487,"node_id":"MDU6SXNzdWU2NDY3OTI0ODc=","number":319,"title":"Nested sequences with 
dicts","user":{"login":"ghomasHudson","id":13795113,"node_id":"MDQ6VXNlcjEzNzk1MTEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13795113?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ghomasHudson","html_url":"https:\/\/github.com\/ghomasHudson","followers_url":"https:\/\/api.github.com\/users\/ghomasHudson\/followers","following_url":"https:\/\/api.github.com\/users\/ghomasHudson\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ghomasHudson\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ghomasHudson\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ghomasHudson\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ghomasHudson\/orgs","repos_url":"https:\/\/api.github.com\/users\/ghomasHudson\/repos","events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Oh yes, this is a backward compatibility feature with tensorflow_dataset in which a `Sequence` or `dict` is converted in a `dict` of `lists`, unfortunately it is not very intuitive, see here: https:\/\/github.com\/huggingface\/nlp\/blob\/master\/src\/nlp\/features.py#L409\r\n\r\nTo avoid this behavior, you can just define the list in the feature with a simple list or a tuple (which is also simpler to write).\r\nIn your case, the features could be as follow:\r\n``` python\r\n...\r\nfeatures=nlp.Features({\r\n \"title\": nlp.Value(\"string\"),\r\n \"vertexSet\": [[{\r\n \"name\": nlp.Value(\"string\"),\r\n \"sent_id\": nlp.Value(\"int32\"),\r\n \"pos\": nlp.features.Sequence(nlp.Value(\"int32\")),\r\n \"type\": nlp.Value(\"string\"),\r\n }]],\r\n ...\r\n }),\r\n...\r\n```"],"created_at":1593301517000,"updated_at":1593771720000,"closed_at":1593771720000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Am pretty much finished [adding a dataset](https:\/\/github.com\/ghomasHudson\/nlp\/blob\/DocRED\/datasets\/docred\/docred.py) for [DocRED](https:\/\/github.com\/thunlp\/DocRED), but am getting an error when trying to add a nested `nlp.features.sequence(nlp.features.sequence({key:value,...}))`. \r\n\r\nThe original data is in this format:\r\n```python\r\n{\r\n 'title': \"Title of wiki page\",\r\n 'vertexSet': [\r\n [\r\n { 'name': \"mention_name\", \r\n 'sent_id': \"mention in which sentence\", \r\n 'pos': [\"postion of mention in a sentence\"], \r\n 'type': \"NER_type\"},\r\n {another mention}\r\n ], \r\n [another entity]\r\n ]\r\n ...\r\n}\r\n```\r\nSo to represent this I've attempted to write:\r\n```\r\n...\r\nfeatures=nlp.Features({\r\n \"title\": nlp.Value(\"string\"),\r\n \"vertexSet\": nlp.features.Sequence(nlp.features.Sequence({\r\n \"name\": nlp.Value(\"string\"),\r\n \"sent_id\": nlp.Value(\"int32\"),\r\n \"pos\": nlp.features.Sequence(nlp.Value(\"int32\")),\r\n \"type\": nlp.Value(\"string\"),\r\n })),\r\n ...\r\n }),\r\n...\r\n```\r\nThis is giving me the error:\r\n```\r\npyarrow.lib.ArrowTypeError: Could not convert [{'pos': [[0,2], [2,4], [3,5]], \"type\": [\"ORG\", \"ORG\", \"ORG\"], \"name\": [\"Lark Force\", \"Lark Force\", \"Lark Force\", \"sent_id\": [0, 3, 4]}..... 
with type list: was not a dict, tuple, or recognized null value for conversion to struct type\r\n```\r\nDo we expect the pyarrow stuff to break when doing this deeper nesting? I've checked that it still works when you do `nlp.features.Sequence(nlp.features.Sequence(nlp.Value(\"string\"))` or `nlp.features.Sequence({key:value,...})` just not nested sequences with a dict.\r\n\r\nIf it's not possible, I can always convert it to a shallower structure. I'd rather not change the DocRED authors' structure if I don't have to though.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/319\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/318","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/318\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/318\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/318\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/318","id":646682840,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQwOTExOTYy","number":318,"title":"Multitask","user":{"login":"ghomasHudson","id":13795113,"node_id":"MDQ6VXNlcjEzNzk1MTEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13795113?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ghomasHudson","html_url":"https:\/\/github.com\/ghomasHudson","followers_url":"https:\/\/api.github.com\/users\/ghomasHudson\/followers","following_url":"https:\/\/api.github.com\/users\/ghomasHudson\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ghomasHudson\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ghomasHudson\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ghomasHudson\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ghomasHudson\/orgs","repos_url":"https:\/\/api.github.com\/users\/ghomasHudson\/repos","events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/received_events","type":"User","site_admin":false},"labels":[],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["It's definitely going in the right direction ! Thanks for giving it a try\r\n\r\nI really like the API.\r\nIMO it's fine right now if we don't have all the dataset transforms (map, filter, etc.) as it can be done before building the multitask dataset, but it will be important to have them in the end.\r\nAll the formatting methods could easily be added though.\r\n\r\nI think there are some parts that will require some work with apache arrow like slicing. I can find a way to do it using pyarrow tables concatenation (I did something similar when implementing `__getitem__` with an input that is a list of indices [here](https:\/\/github.com\/huggingface\/nlp\/pull\/322\/files#diff-73270df8d7f08c62a27e40806e1a5fb0R463-R469)). 
It is very fast and it allows to have the same output format as a normal Dataset.\r\n\r\nAlso maybe we should check that not only the columns but also the schemas match ?\r\nAnd maybe add the `seed` of the shuffling step as an argument ?\r\n\r\n","Maybe we should remove the methods that are not implemented for now, WDYT @thomwolf ?","That's an interesting first draft, thanks a lot for that and the user facing API is really nice.\r\n\r\nI think we should dive more into this and the questions of #217 before merging the first version though.\r\n\r\nIn particular, the typical way to do multi-tasking is usually to sample a task and then sample a batch within the selected task. I think we should probably stay be closer to this traditional approach, or at least make it very easy to do, rather than go to close to the T5 approach which is very specific to this paper.\r\n\r\nIn this regard, it seems important to find some way to address the remarks of @zphang. I'm still wondering if we should not adopt more of a sampling approach rather than an iteration approach.","@thomwolf Thanks! I mainly wanted to get something working quickly for my own MTL research. I agree with a lot of the points you made so I'll convert this pull request back to a draft.\r\n\r\nFor your specific point about 'batch-level' multitask mixing, it would be a pretty trivial change to add a `batch_size` parameter and ensure every `batch_size` examples are from the same task. This would certainly work, but would add a notion of 'batches' to a Dataset, which does feel like a 'Sampler-level' concept and not a Dataset one. There's also the possibility of wanting some specific task-level sampling functionality (e.g. applying `SortishSampler` to each task) which would only work with this kind of 2 step sampling approach. My first proposal in the transformers repo was actually a Sampler https:\/\/github.com\/huggingface\/transformers\/issues\/4340. I wonder whether functionality at the sampler-level has a place in the vision for the `nlp` repo?\r\n\r\nI imagine following a sampling approach you'd have to abandon maintaining the same user-facing API as a standard dataset (A shame because replacing a single dataset seamlessly with a multitask one is a really nice user-experience).\r\n\r\nRandom half-Idea: You could have a class which accepts a list of any iterables (either a Dataset or a DataLoader which already is doing the batching). Not sure what interface you'd present though. hmmm. \r\n\r\nThere's definitely more discussion to have. \r\n"],"created_at":1593264449000,"updated_at":1596475449000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/318","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/318","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/318.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/318.patch"},"body":"Following our discussion in #217, I've implemented a first working version of `MultiDataset`.\r\n\r\nThere's a function `build_multitask()` which takes either individual `nlp.Dataset`s or `dicts` of splits and constructs `MultiDataset`(s). I've added a notebook with example usage.\r\n\r\nI've implemented many of the `nlp.Dataset` methods (cache_files, columns, nbytes, num_columns, num_rows, column_names, schema, shape). Some of the other methods are complicated as they change the number of examples. 
These raise `NotImplementedError`s at the moment.\r\n\r\nThis will need some tests which I haven't written yet.\r\n\r\nThere's definitely room for improvements but I think the general approach is sound. ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/318\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/317","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/317\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/317\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/317\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/317","id":646555384,"node_id":"MDU6SXNzdWU2NDY1NTUzODQ=","number":317,"title":"Adding a dataset with multiple subtasks","user":{"login":"erickrf","id":294483,"node_id":"MDQ6VXNlcjI5NDQ4Mw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/294483?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/erickrf","html_url":"https:\/\/github.com\/erickrf","followers_url":"https:\/\/api.github.com\/users\/erickrf\/followers","following_url":"https:\/\/api.github.com\/users\/erickrf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/erickrf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/erickrf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/erickrf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/erickrf\/orgs","repos_url":"https:\/\/api.github.com\/users\/erickrf\/repos","events_url":"https:\/\/api.github.com\/users\/erickrf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/erickrf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["For one dataset you can have different configurations that each have their own `nlp.Features`.\r\nWe imagine having one configuration per subtask for example.\r\nThey are loaded with `nlp.load_dataset(\"my_dataset\", \"my_config\")`.\r\n\r\nFor example the `glue` dataset has many configurations. It is a bit different from your case though because each configuration is a dataset by itself (sst2, mnli).\r\nAnother example is `wikipedia` that has one configuration per language."],"created_at":1593213259000,"updated_at":1603813012000,"closed_at":1603813012000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I intent to add the datasets of the MT Quality Estimation shared tasks to `nlp`. However, they have different subtasks -- such as word-level, sentence-level and document-level quality estimation, each of which having different language pairs, and some of the data reused in different subtasks.\r\n\r\nFor example, in [QE 2019,](http:\/\/www.statmt.org\/wmt19\/qe-task.html) we had the same English-Russian and English-German data for word-level and sentence-level QE. \r\n\r\nI suppose these datasets could have both their word and sentence-level labels inside `nlp.Features`; but what about other subtasks? 
Should they be considered a different dataset altogether?\r\n\r\nI read the discussion on #217 but the case of QE seems a lot simpler.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/317\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/316","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/316\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/316\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/316\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/316","id":646366450,"node_id":"MDExOlB1bGxSZXF1ZXN0NDQwNjY5NzY5","number":316,"title":"add AG News dataset","user":{"login":"jxmorris12","id":13238952,"node_id":"MDQ6VXNlcjEzMjM4OTUy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13238952?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jxmorris12","html_url":"https:\/\/github.com\/jxmorris12","followers_url":"https:\/\/api.github.com\/users\/jxmorris12\/followers","following_url":"https:\/\/api.github.com\/users\/jxmorris12\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jxmorris12\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jxmorris12\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jxmorris12\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jxmorris12\/orgs","repos_url":"https:\/\/api.github.com\/users\/jxmorris12\/repos","events_url":"https:\/\/api.github.com\/users\/jxmorris12\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jxmorris12\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks @jxmorris12 for adding this adding. 
\r\nCan you please add a small description of the PR?"],"created_at":1593187918000,"updated_at":1593511088000,"closed_at":1593505915000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/316","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/316","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/316.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/316.patch"},"body":"adds support for the AG-News topic classification dataset","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/316\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/315","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/315\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/315\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/315\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/315","id":645888943,"node_id":"MDU6SXNzdWU2NDU4ODg5NDM=","number":315,"title":"[Question] Best way to batch a large dataset?","user":{"login":"jarednielsen","id":4564897,"node_id":"MDQ6VXNlcjQ1NjQ4OTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4564897?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jarednielsen","html_url":"https:\/\/github.com\/jarednielsen","followers_url":"https:\/\/api.github.com\/users\/jarednielsen\/followers","following_url":"https:\/\/api.github.com\/users\/jarednielsen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jarednielsen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jarednielsen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jarednielsen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jarednielsen\/orgs","repos_url":"https:\/\/api.github.com\/users\/jarednielsen\/repos","events_url":"https:\/\/api.github.com\/users\/jarednielsen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jarednielsen\/received_events","type":"User","site_admin":false},"labels":[{"id":2067400324,"node_id":"MDU6TGFiZWwyMDY3NDAwMzI0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/generic%20discussion","name":"generic discussion","color":"c5def5","default":false,"description":"Generic discussion on the library"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Update: I think I've found a solution.\r\n\r\n```python\r\noutput_types = {\"input_ids\": tf.int64, \"token_type_ids\": tf.int64, \"attention_mask\": tf.int64}\r\ndef train_dataset_gen():\r\n for i in range(len(train_dataset)):\r\n yield train_dataset[i]\r\ntf_dataset = tf.data.Dataset.from_generator(train_dataset_gen, output_types=output_types)\r\n```\r\n\r\nloads WikiText-2 in 20 ms, and WikiText-103 in 20 ms. It appears to be lazily loading via indexing train_dataset.","Yes this is the current best solution. We should probably show it in the tutorial notebook.\r\n\r\nNote that this solution unfortunately doesn't allow to train on TPUs (yet). See #193 ","This approach still seems quite slow. When using TFRecords with a similar training loop, I get ~3.0-3.5 it\/s on multi-node, multi-GPU training. 
I notice a pretty severe performance regression when scaling, with observed performance numbers. Since the allreduce step takes less than 100ms\/it and I've achieved 80% scaling efficiency up to 64 GPUs, it must be the data pipeline.\r\n\r\n| Nodes | GPUs | Iterations\/Second |\r\n| --- | --- | --- |\r\n| 1 | 2 | 2.01 |\r\n| 1 | 8 | 0.81 |\r\n| 2 | 16 | 0.37 |\r\n\r\nHere are performance metrics over 10k steps. The iteration speed appears to follow some sort of caching pattern. I would love to use `nlp` in my project, but a slowdown from 3.0 it\/s to 0.3 it\/s is too great to stomach.\r\n\r\n<img width=\"1361\" alt=\"Screen Shot 2020-07-02 at 8 29 22 AM\" src=\"https:\/\/user-images.githubusercontent.com\/4564897\/86378156-2f8d3900-bc3e-11ea-918b-c395c3df5377.png\">\r\n","An interesting alternative to investigate here would be to use the tf.io library which has some support for Arrow to TF conversion: https:\/\/www.tensorflow.org\/io\/api_docs\/python\/tfio\/arrow\/ArrowDataset\r\n\r\nThere are quite a few types supported, including lists so if the unsupported columns are dropped then we could maybe have a zero-copy mapping from Arrow to TensorFlow, including tokenized inputs and 1D tensors like the ones we mostly use in NLP: https:\/\/github.com\/tensorflow\/io\/blob\/322b3170c43ecac5c6af9e39dbd18fd747913e5a\/tensorflow_io\/arrow\/python\/ops\/arrow_dataset_ops.py#L44-L72\r\n\r\nHere is an introduction on Arrow to TF using tf.io: https:\/\/medium.com\/tensorflow\/tensorflow-with-apache-arrow-datasets-cdbcfe80a59f","Interesting. There's no support for strings, but it does enable int and floats so that would work for tokenized inputs. \r\n\r\nArrowStreamDataset requires loading from a \"record batch iterator\", which can be instantiated from in-memory arrays as described here: https:\/\/arrow.apache.org\/docs\/python\/ipc.html. \r\n\r\nBut the nlp.Dataset stores its data as a `pyarrow.lib.Table`, and the underlying features are `pyarrow.lib.ChunkedArray`. I can't find any documentation about lazily creating a record batch iterator from a ChunkedArray or a Table. Have you had any success?\r\n\r\nI can't find [any uses](https:\/\/grep.app\/search?q=ArrowDataset&filter[lang][0]=Python) of tfio.arrow.ArrowDataset on GitHub.","You can use `to_batches` maybe?\r\nhttps:\/\/arrow.apache.org\/docs\/python\/generated\/pyarrow.Table.html#pyarrow.Table.to_batches","Also note that since #322 it is now possible to do\r\n```python\r\nids = [1, 10, 42, 100]\r\nbatch = dataset[ids]\r\n```\r\nFrom my experience it is quite fast but it can take lots of memory for large batches (haven't played that much with it).\r\nLet me know if you think there could be a better way to implement it. (current code is [here](https:\/\/github.com\/huggingface\/nlp\/blob\/78628649962671b4aaa31a6b24e7275533416845\/src\/nlp\/arrow_dataset.py#L463))","Thanks @lhoestq! That format is much better to work with.\r\n\r\nI put together a benchmarking script. This doesn't measure the CPU-to-GPU efficiency, nor how it scales with multi-GPU multi-node training where many processes are making the same demands on the same dataset. 
But it does show some interesting results:\r\n\r\n```python\r\nimport nlp\r\nimport numpy as np\r\nimport tensorflow as tf\r\nimport time\r\n\r\ndset = nlp.load_dataset(\"wikitext\", \"wikitext-2-raw-v1\", split=\"train\")\r\ndset = dset.filter(lambda ex: len(ex[\"text\"]) > 0)\r\nbsz = 1024\r\nn_batches = 100\r\n\r\ndef single_item_gen():\r\n for i in range(len(dset)):\r\n yield dset[i]\r\n\r\ndef sequential_batch_gen():\r\n for i in range(0, len(dset), bsz):\r\n yield dset[i:i+bsz]\r\n\r\ndef random_batch_gen():\r\n for i in range(len(dset)):\r\n indices = list(np.random.randint(len(dset), size=(bsz,)))\r\n yield dset[indices]\r\n\r\noutput_types = {\"text\": tf.string}\r\nsingle_item = tf.data.Dataset.from_generator(single_item_gen, output_types=output_types).batch(bsz)\r\ninterleaved = tf.data.Dataset.range(10).interleave(\r\n lambda idx: tf.data.Dataset.from_generator(single_item_gen, output_types=output_types),\r\n cycle_length=10,\r\n)\r\nsequential_batch = tf.data.Dataset.from_generator(sequential_batch_gen, output_types=output_types)\r\nrandom_batch = tf.data.Dataset.from_generator(random_batch_gen, output_types=output_types)\r\n\r\ndef iterate(tf_dset):\r\n start = time.perf_counter()\r\n for i, batch in enumerate(tf_dset.take(n_batches)):\r\n pass\r\n elapsed = time.perf_counter() - start\r\n print(f\"{tf_dset} took {elapsed:.3f} secs\")\r\n\r\niterate(single_item)\r\niterate(interleaved)\r\niterate(sequential_batch)\r\niterate(random_batch)\r\n```\r\n\r\nResults:\r\n```\r\n<BatchDataset shapes: {text: <unknown>}, types: {text: tf.string}> took 23.005 secs\r\n<InterleaveDataset shapes: {text: <unknown>}, types: {text: tf.string}> took 0.135 secs\r\n<FlatMapDataset shapes: {text: <unknown>}, types: {text: tf.string}> took 0.074 secs\r\n<FlatMapDataset shapes: {text: <unknown>}, types: {text: tf.string}> took 0.550 secs\r\n```\r\n\r\n- Batching a generator which fetches a single item is terrible.\r\n- Interleaving performs well on a single process, but doesn't scale well to multi-GPU training. I believe the bottleneck here is in Arrow dataset locking or something similar. The numbers from the table above are with interleaving.\r\n- The sequential access dominates the random access (7x faster). Is there any way to bring random access times closer to sequential access? Maybe re-indexing the dataset after shuffling each pass over the data.","Hey @jarednielsen \r\n\r\nThanks for this very interesting analysis!! IMHO to read text data one should use `tf.data.TextLineDataset`. It would be interesting to compare what you have done with simply load with a `TextLineDataset` and see if there is a difference.\r\n\r\nA good example can be found here https:\/\/www.tensorflow.org\/tutorials\/load_data\/text","Thanks! I'm not actually loading in raw text data, that was just the synthetic data I created for this benchmark. A more realistic use case would be a dataset of tokenized examples, which would be a dict of lists of integers. TensorFlow's TextLineDataset greedily loads the dataset into the graph itself, which can lead to out-of-memory errors - one of the main reason I'm so drawn to the `nlp` library is its zero-copy no-RAM approach to dataset loading and mapping. \r\n\r\nIt's quite helpful for running a preprocessing pipeline - a sample ELECTRA pipeline I've built is here: https:\/\/github.com\/jarednielsen\/deep-learning-models\/blob\/nlp\/models\/nlp\/common\/preprocess.py.","Sorry, I think I badly expressed myself, my bad. 
What I suggested is to compare with the usual loading textual data in pure TF with `TextLineDataset` with `nlp`. I know it is not recommended with very large datasets to use it, but I was curious to see how it behaves compared to a processing with `nlp` on smaller datasets.\r\n\r\nBTW your script looks very interesting, thanks for sharing!!"],"created_at":1593124220000,"updated_at":1603813097000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I'm training on large datasets such as Wikipedia and BookCorpus. Following the instructions in [the tutorial notebook](https:\/\/colab.research.google.com\/github\/huggingface\/nlp\/blob\/master\/notebooks\/Overview.ipynb), I see the following recommended for TensorFlow:\r\n\r\n```python\r\ntrain_tf_dataset = train_tf_dataset.filter(remove_none_values, load_from_cache_file=False)\r\ncolumns = ['input_ids', 'token_type_ids', 'attention_mask', 'start_positions', 'end_positions']\r\ntrain_tf_dataset.set_format(type='tensorflow', columns=columns)\r\nfeatures = {x: train_tf_dataset[x].to_tensor(default_value=0, shape=[None, tokenizer.max_len]) for x in columns[:3]} \r\nlabels = {\"output_1\": train_tf_dataset[\"start_positions\"].to_tensor(default_value=0, shape=[None, 1])}\r\nlabels[\"output_2\"] = train_tf_dataset[\"end_positions\"].to_tensor(default_value=0, shape=[None, 1])\r\n### Question about this last line ###\r\ntfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8)\r\n```\r\n\r\nThis code works for something like WikiText-2. However, scaling up to WikiText-103, the last line takes 5-10 minutes to run. I assume it is because tf.data.Dataset.from_tensor_slices() is pulling everything into memory, not lazily loading. This approach won't scale up to datasets 25x larger such as Wikipedia.\r\n\r\nSo I tried manual batching using `dataset.select()`:\r\n\r\n```python\r\nidxs = np.random.randint(len(dataset), size=bsz)\r\nbatch = dataset.select(idxs).map(lambda example: {\"input_ids\": tokenizer(example[\"text\"])})\r\ntf_batch = tf.constant(batch[\"ids\"], dtype=tf.int64)\r\n```\r\n\r\nThis appears to create a new Apache Arrow dataset with every batch I grab, and then tries to cache it. The runtime of `dataset.select([0, 1])` appears to be much worse than `dataset[:2]`. 
So using `select()` doesn't seem to be performant enough for a training loop.\r\n\r\nIs there a performant scalable way to lazily load batches of nlp Datasets?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/315\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/314","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/314\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/314\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/314\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/314","id":645461174,"node_id":"MDExOlB1bGxSZXF1ZXN0NDM5OTM4MTMw","number":314,"title":"Fixed singlular very minor spelling error","user":{"login":"BatJedi","id":40696362,"node_id":"MDQ6VXNlcjQwNjk2MzYy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/40696362?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/BatJedi","html_url":"https:\/\/github.com\/BatJedi","followers_url":"https:\/\/api.github.com\/users\/BatJedi\/followers","following_url":"https:\/\/api.github.com\/users\/BatJedi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/BatJedi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/BatJedi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/BatJedi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/BatJedi\/orgs","repos_url":"https:\/\/api.github.com\/users\/BatJedi\/repos","events_url":"https:\/\/api.github.com\/users\/BatJedi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/BatJedi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thank you BatJeti! The storm-joker, aka the typo, finally got caught!"],"created_at":1593081959000,"updated_at":1593161201000,"closed_at":1593089039000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/314","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/314","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/314.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/314.patch"},"body":"An instance of \"independantly\" was changed to \"independently\". 
That's all.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/314\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/313","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/313\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/313\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/313\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/313","id":645390088,"node_id":"MDExOlB1bGxSZXF1ZXN0NDM5ODc4MDg5","number":313,"title":"Add MWSC","user":{"login":"ghomasHudson","id":13795113,"node_id":"MDQ6VXNlcjEzNzk1MTEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13795113?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ghomasHudson","html_url":"https:\/\/github.com\/ghomasHudson","followers_url":"https:\/\/api.github.com\/users\/ghomasHudson\/followers","following_url":"https:\/\/api.github.com\/users\/ghomasHudson\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ghomasHudson\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ghomasHudson\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ghomasHudson\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ghomasHudson\/orgs","repos_url":"https:\/\/api.github.com\/users\/ghomasHudson\/repos","events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"assignees":[{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url
":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Looks good to me"],"created_at":1593076922000,"updated_at":1593505691000,"closed_at":1593505691000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/313","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/313","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/313.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/313.patch"},"body":"Adding the [Modified Winograd Schema Challenge](https:\/\/github.com\/salesforce\/decaNLP\/blob\/master\/local_data\/schema.txt) dataset which formed part of the [decaNLP](http:\/\/decanlp.com\/) benchmark. 
Not sure how much use people would find for it it outside of the benchmark, but it is general purpose.\r\n\r\nCode is heavily borrowed from the [decaNLP repo](https:\/\/github.com\/salesforce\/decaNLP\/blob\/1e9605f246b9e05199b28bde2a2093bc49feeeaa\/text\/torchtext\/datasets\/generic.py#L773-L877).\r\n\r\nThere's a few (possibly overly opinionated) design choices I made:\r\n\r\n- I used the train\/test\/dev split [buried in the decaNLP code](https:\/\/github.com\/salesforce\/decaNLP\/blob\/1e9605f246b9e05199b28bde2a2093bc49feeeaa\/text\/torchtext\/datasets\/generic.py#L852-L855)\r\n- I split out each example into the 2 alternatives. Originally the data uses the format:\r\n ```\r\n The city councilmen refused the demonstrators a permit because they [feared\/advocated] violence. \r\n Who [feared\/advocated] violence? \r\n councilmen\/demonstrators\r\n ```\r\n I split into the 2 variants:\r\n ```\r\n The city councilmen refused the demonstrators a permit because they feared violence. \r\n Who feared violence? \r\n councilmen\/demonstrators\r\n \r\n The city councilmen refused the demonstrators a permit because they advocated violence. \r\n Who advocated violence? \r\n councilmen\/demonstrators\r\n ```\r\n I can't see any use for having the options combined into a single example (splitting them is [the way decaNLP processes](https:\/\/github.com\/salesforce\/decaNLP\/blob\/1e9605f246b9e05199b28bde2a2093bc49feeeaa\/text\/torchtext\/datasets\/generic.py#L846-L850)) them. You can't train on both versions with them combined, and splitting the examples later would be a pain to do. I think [winogrande.py](https:\/\/github.com\/huggingface\/nlp\/blob\/master\/datasets\/winogrande\/winogrande.py) presents the data in this way?\r\n\r\n- I've not used the decaNLP framing (appending the options to the question e.g. `Who feared violence? 
\r\n -- councilmen or demonstrators?`) but left it more generic by adding the options as a new key: `\"options\":[\"councilmen\",\"demonstrators\"]` This should be an easy thing to change using `map` if needed by a specific application.\r\n\r\nDataset is working as-is but if anyone has any thoughts\/preferences on the design decisions here I'm definitely open to different choices.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/313\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/312","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/312\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/312\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/312\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/312","id":645025561,"node_id":"MDU6SXNzdWU2NDUwMjU1NjE=","number":312,"title":"[Feature request] Add `shard()` method to dataset","user":{"login":"jarednielsen","id":4564897,"node_id":"MDQ6VXNlcjQ1NjQ4OTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4564897?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jarednielsen","html_url":"https:\/\/github.com\/jarednielsen","followers_url":"https:\/\/api.github.com\/users\/jarednielsen\/followers","following_url":"https:\/\/api.github.com\/users\/jarednielsen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jarednielsen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jarednielsen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jarednielsen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jarednielsen\/orgs","repos_url":"https:\/\/api.github.com\/users\/jarednielsen\/repos","events_url":"https:\/\/api.github.com\/users\/jarednielsen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jarednielsen\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi Jared,\r\nInteresting, thanks for raising this question. You can also do that after loading with `dataset.select()` or `dataset.filter()` which let you keep only a specific subset of rows in a dataset.\r\nWhat is your use-case for sharding?","Thanks for the pointer to those functions! It's still a little more verbose since you have to manually calculate which ids each rank would keep, but definitely works.\r\n\r\nMy use case is multi-node, multi-GPU training and avoiding global batches of duplicate elements. I'm using horovod. You can shuffle indices, or set random seeds, but explicitly sharding the dataset up front is the safest and clearest way I've found to do so."],"created_at":1593038913000,"updated_at":1594038936000,"closed_at":1594038936000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Currently, to shard a dataset into 10 pieces on different ranks, you can run\r\n\r\n```python\r\nrank = 3 # for example\r\nsize = 10\r\ndataset = nlp.load_dataset('wikitext', 'wikitext-2-raw-v1', split=f\"train[{rank*10}%:{(rank+1)*10}%]\")\r\n```\r\n\r\nHowever, this breaks down if you have a number of ranks that doesn't divide cleanly into 100, such as 64 ranks. 
Is there interest in adding a method shard() that looks like this?\r\n\r\n```python\r\nrank = 3\r\nsize = 64\r\ndataset = nlp.load_dataset(\"wikitext\", \"wikitext-2-raw-v1\", split=\"train\").shard(rank=rank, size=size)\r\n```\r\n\r\nTensorFlow has a similar API: https:\/\/www.tensorflow.org\/api_docs\/python\/tf\/data\/Dataset#shard. I'd be happy to contribute this code.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/312\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/311","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/311\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/311\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/311\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/311","id":645013131,"node_id":"MDExOlB1bGxSZXF1ZXN0NDM5NTQ3OTg0","number":311,"title":"Add qa_zre","user":{"login":"ghomasHudson","id":13795113,"node_id":"MDQ6VXNlcjEzNzk1MTEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13795113?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ghomasHudson","html_url":"https:\/\/github.com\/ghomasHudson","followers_url":"https:\/\/api.github.com\/users\/ghomasHudson\/followers","following_url":"https:\/\/api.github.com\/users\/ghomasHudson\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ghomasHudson\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ghomasHudson\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ghomasHudson\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ghomasHudson\/orgs","repos_url":"https:\/\/api.github.com\/users\/ghomasHudson\/repos","events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1593037042000,"updated_at":1593448658000,"closed_at":1593448658000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/311","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/311","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/311.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/311.patch"},"body":"Adding the QA-ZRE dataset from [\"Zero-Shot Relation Extraction via Reading Comprehension\"](http:\/\/nlp.cs.washington.edu\/zeroshot\/).\r\n\r\nA common processing step seems to be replacing the `XXX` placeholder with the `subject`. 
I've left this out as it's something you could easily do with `map`.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/311\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/310","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/310\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/310\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/310\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/310","id":644806720,"node_id":"MDExOlB1bGxSZXF1ZXN0NDM5MzY1MDg5","number":310,"title":"add wikisql","user":{"login":"ghomasHudson","id":13795113,"node_id":"MDQ6VXNlcjEzNzk1MTEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13795113?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ghomasHudson","html_url":"https:\/\/github.com\/ghomasHudson","followers_url":"https:\/\/api.github.com\/users\/ghomasHudson\/followers","following_url":"https:\/\/api.github.com\/users\/ghomasHudson\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ghomasHudson\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ghomasHudson\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ghomasHudson\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ghomasHudson\/orgs","repos_url":"https:\/\/api.github.com\/users\/ghomasHudson\/repos","events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["That's great work @ghomasHudson !"],"created_at":1593021635000,"updated_at":1593088345000,"closed_at":1593088345000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/310","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/310","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/310.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/310.patch"},"body":"Adding the [WikiSQL](https:\/\/github.com\/salesforce\/WikiSQL) dataset.\r\n\r\nInteresting things to note:\r\n- Have copied the function (`_convert_to_human_readable`) which converts the SQL query to a human-readable (string) format as this is what most people will want when actually using this dataset for NLP applications.\r\n- `conds` was originally a tuple but is converted to a dictionary to support differing types.\r\n\r\nWould be nice to add the logical_form metrics too at some point.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/310\/timeline","performed_via_github_app":null,"is_pull_request":true} 
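The qa_zre entry above leaves the `XXX` placeholder substitution to the user, noting it is easy to do with `map`. Below is a minimal sketch of that post-processing step, assuming the loaded examples expose `question` and `subject` fields (hypothetical names for illustration, not confirmed by the PR):

```python
# Sketch only: substitute the XXX placeholder with the subject after loading.
# The "question" and "subject" field names are assumptions for illustration.
import nlp

dataset = nlp.load_dataset("qa_zre", split="train")

def fill_placeholder(example):
    example["question"] = example["question"].replace("XXX", example["subject"])
    return example

dataset = dataset.map(fill_placeholder)
```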
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/309","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/309\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/309\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/309\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/309","id":644783822,"node_id":"MDExOlB1bGxSZXF1ZXN0NDM5MzQ1NzYz","number":309,"title":"Add narrative qa","user":{"login":"Varal7","id":8019486,"node_id":"MDQ6VXNlcjgwMTk0ODY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8019486?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Varal7","html_url":"https:\/\/github.com\/Varal7","followers_url":"https:\/\/api.github.com\/users\/Varal7\/followers","following_url":"https:\/\/api.github.com\/users\/Varal7\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Varal7\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Varal7\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Varal7\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Varal7\/orgs","repos_url":"https:\/\/api.github.com\/users\/Varal7\/repos","events_url":"https:\/\/api.github.com\/users\/Varal7\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Varal7\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Does it make sense to download the full stories? I remember attempting to implement this dataset a while ago and ended up with something like:\r\n```python\r\n def _split_generators(self, dl_manager):\r\n \"\"\"Returns SplitGenerators.\"\"\"\r\n\r\n dl_dir = dl_manager.download_and_extract(_DOWNLOAD_URL)\r\n data_dir = os.path.join(dl_dir, \"narrativeqa-master\")\r\n\r\n urls = {\"test\":{}, \"train\": {},\"valid\":{}}\r\n with open(os.path.join(data_dir,\"documents.csv\")) as f_in:\r\n csv_reader = csv.reader(f_in)\r\n next(csv_reader) # discard header row\r\n for i,row in enumerate(csv_reader):\r\n if i > 1572:\r\n break\r\n if row != []:\r\n urls[row[1]][row[0]] = row[3]\r\n\r\n url_files = {}\r\n for key in urls.keys():\r\n url_files[key] = dl_manager.download_and_extract(urls[key])\r\n\r\n return [\r\n nlp.SplitGenerator(\r\n name=nlp.Split.TRAIN,\r\n gen_kwargs={\r\n \"data_dir\":data_dir,\r\n \"split\":\"train\",\r\n \"doc_id_to_path\":url_files[\"train\"]\r\n }\r\n ),\r\n ....\r\n```\r\nIt does end up cluttering your huggingface cache dir though.","Also since there doesn't seem to be any meaning in the order of answer_1 and answer_2, it might make sense to combine them (see [squad.py](https:\/\/github.com\/huggingface\/nlp\/blob\/8b0ffc85e4e52ae1f18d31be99b6c70b82c991ca\/datasets\/squad\/squad.py#L86-L88)):\r\n```python\r\n\"answers\": nlp.features.Sequence({\r\n \"text\": nlp.Value(\"string\"),\r\n \"tokenized\": nlp.features.Sequence(nlp.Value(\"string\"))\r\n})\r\n```\r\n(the tokenized features should also probably be lists of strings not just strings - see [natural_questions.py](https:\/\/github.com\/huggingface\/nlp\/blob\/4cd34287300a1135ce7b22f6dd209ca305c71b3a\/datasets\/natural_questions\/natural_questions.py#L83))\r\n\r\nAgain, this is a personal preference thing, but it might be useful to combine the document-related 
features:\r\n```python\r\n{\r\n \"document\": {\r\n \"id\": nlp.Value(\"string\"),\r\n \"kind\": nlp.Value(\"string\"),\r\n \"url\": nlp.Value(\"string\"),\r\n \"file_size\": nlp.Value(\"int32\"),\r\n \"word_count\": nlp.Value(\"int32\"),\r\n \"start\": nlp.Value(\"string\"),\r\n \"end\": nlp.Value(\"string\"),\r\n \"wiki_url\": nlp.Value(\"string\"),\r\n \"wiki_title\": nlp.Value(\"string\"),\r\n \"summary\": nlp.features.Sequence({\r\n \"text\": nlp.Value(\"string\"),\r\n \"tokens\": nlp.features.Sequence(nlp.Value(\"string\"))\r\n }),\r\n \"text\": nlp.Value(\"string\"),\r\n },\r\n \"question\": nlp.features.Sequence({\r\n \"text\": nlp.Value(\"string\"),\r\n \"tokens\": nlp.features.Sequence(nlp.Value(\"string\"))\r\n }),\r\n \"answers\": nlp.features.Sequence({\r\n \"text\": nlp.Value(\"string\"),\r\n \"tokens\": nlp.features.Sequence(nlp.Value(\"string\"))\r\n })\r\n}\r\n```","Did you manage to fix the dummy data @Varal7 ?","@lhoestq do you think it's acceptable for the `dl_manager` to go grab all the individual stories from project gutenburg? I've got a working version of that but it does clutter up your huggingface cache somewhat.\r\n\r\nThe real value (and original purpose) of this dataset is doing question answering on the full text.","> @lhoestq do you think it's acceptable for the `dl_manager` to go grab all the individual stories from project gutenburg? I've got a working version of that but it does clutter up your huggingface cache somewhat.\r\n> \r\n> The real value (and original purpose) of this dataset is doing question answering on the full text.\r\n\r\nWhat's the problem exactly with the cache ?","Nothing, just that because each story is a separate download it gets a bit messy as all 1573 files are under `~\/.cache\/hugginface\/datasets` rather than organized under a subdir.\r\n\r\nProbably doesn't matter to the end user though.","Yea I agree it's a mess. I just created #393 to make things easier.","I got the PR merged to have a cleaner the cache directory (everything is downloaded inside the 'downloads' sub-directory).\r\nFeel free to download all the stories then @ghomasHudson @Varal7 x)\r\nIf you have the possibility of downloading a compressed file with most of the stories at once it would be better though.","Looks good @lhoestq . The problem I'm having at the moment is that stories from project Gutenberg occasionally fail. All books are out of copyright so we should be able to host them. 
\r\n\r\nHere's a zip file of the full text if we have anywhere to put them: https:\/\/drive.google.com\/file\/d\/17jOR7NqvzDwSlPXrlHaYV-PGI8JG-KY5\/view?usp=sharing\r\n","I put the zip file here @ghomasHudson \r\nhttps:\/\/storage.googleapis.com\/huggingface-nlp\/datasets\/narrative_qa\/narrativeqa_full_text.zip\r\n\r\nSorry for the delay","Closing in favor of #499"],"created_at":1593019578000,"updated_at":1599123730000,"closed_at":1599123729000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/309","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/309","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/309.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/309.patch"},"body":"Test cases for dummy data don't pass\r\n\r\nOnly contains data for summaries (not whole story)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/309\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/308","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/308\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/308\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/308\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/308","id":644195251,"node_id":"MDExOlB1bGxSZXF1ZXN0NDM4ODYyMzYy","number":308,"title":"Specify utf-8 encoding for MRPC files","user":{"login":"patpizio","id":15801338,"node_id":"MDQ6VXNlcjE1ODAxMzM4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15801338?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patpizio","html_url":"https:\/\/github.com\/patpizio","followers_url":"https:\/\/api.github.com\/users\/patpizio\/followers","following_url":"https:\/\/api.github.com\/users\/patpizio\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patpizio\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patpizio\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patpizio\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patpizio\/orgs","repos_url":"https:\/\/api.github.com\/users\/patpizio\/repos","events_url":"https:\/\/api.github.com\/users\/patpizio\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patpizio\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1592952276000,"updated_at":1593089541000,"closed_at":1593087370000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/308","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/308","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/308.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/308.patch"},"body":"Fixes #307, again probably a Windows-related issue.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/308\/timeline","performed_via_github_app":null,"is_pull_request":true} 
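The fix referenced in #308 (and spelled out in #307 below) comes down to opening the MRPC TSV files with an explicit `utf-8` encoding instead of the Windows locale default. A minimal sketch of that pattern, using a hypothetical helper name rather than the actual `glue.py` code:

```python
# Sketch only: force utf-8 when reading the MRPC TSV files so Windows does not
# fall back to cp1252 and raise UnicodeDecodeError on non-ASCII bytes.
import csv

def read_mrpc_tsv(path):  # hypothetical helper, not the actual glue.py function
    with open(path, encoding="utf-8") as f:
        reader = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
        for row in reader:
            yield row
```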
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/307","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/307\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/307\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/307\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/307","id":644187262,"node_id":"MDU6SXNzdWU2NDQxODcyNjI=","number":307,"title":"Specify encoding for MRPC","user":{"login":"patpizio","id":15801338,"node_id":"MDQ6VXNlcjE1ODAxMzM4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15801338?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patpizio","html_url":"https:\/\/github.com\/patpizio","followers_url":"https:\/\/api.github.com\/users\/patpizio\/followers","following_url":"https:\/\/api.github.com\/users\/patpizio\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patpizio\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patpizio\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patpizio\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patpizio\/orgs","repos_url":"https:\/\/api.github.com\/users\/patpizio\/repos","events_url":"https:\/\/api.github.com\/users\/patpizio\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patpizio\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1592951089000,"updated_at":1593087369000,"closed_at":1593087369000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Same as #242, but with MRPC: on Windows, I get a `UnicodeDecodeError` when I try to download the dataset:\r\n```python\r\ndataset = nlp.load_dataset('glue', 'mrpc')\r\n```\r\n\r\n```python\r\nDownloading and preparing dataset glue\/mrpc (download: Unknown size, generated: Unknown size, total: Unknown size) to C:\\Users\\Python\\.cache\\huggingface\\datasets\\glue\\mrpc\\1.0.0...\r\n---------------------------------------------------------------------------\r\nUnicodeDecodeError Traceback (most recent call last)\r\n~\\Miniconda3\\envs\\nlp\\lib\\site-packages\\nlp\\builder.py in incomplete_dir(dirname)\r\n 369 try:\r\n--> 370 yield tmp_dir\r\n 371 if os.path.isdir(dirname):\r\n\r\n~\\Miniconda3\\envs\\nlp\\lib\\site-packages\\nlp\\builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 430 verify_infos = not save_infos and not ignore_verifications\r\n--> 431 self._download_and_prepare(\r\n 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n\r\n~\\Miniconda3\\envs\\nlp\\lib\\site-packages\\nlp\\builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 482 # Prepare split will record examples associated to the split\r\n--> 483 self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 484 except OSError:\r\n\r\n~\\Miniconda3\\envs\\nlp\\lib\\site-packages\\nlp\\builder.py in _prepare_split(self, split_generator)\r\n 663 generator = self._generate_examples(**split_generator.gen_kwargs)\r\n--> 664 for key, record in utils.tqdm(generator, unit=\" examples\", 
total=split_info.num_examples, leave=False):\r\n 665 example = self.info.features.encode_example(record)\r\n\r\n~\\Miniconda3\\envs\\nlp\\lib\\site-packages\\tqdm\\notebook.py in __iter__(self, *args, **kwargs)\r\n 217 try:\r\n--> 218 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs):\r\n 219 # return super(tqdm...) will not catch exception\r\n\r\n~\\Miniconda3\\envs\\nlp\\lib\\site-packages\\tqdm\\std.py in __iter__(self)\r\n 1128 try:\r\n-> 1129 for obj in iterable:\r\n 1130 yield obj\r\n\r\n~\\Miniconda3\\envs\\nlp\\lib\\site-packages\\nlp\\datasets\\glue\\7fc58099eb3983a04c8dac8500b70d27e6eceae63ffb40d7900c977897bb58c6\\glue.py in _generate_examples(self, data_file, split, mrpc_files)\r\n 514 examples = self._generate_example_mrpc_files(mrpc_files=mrpc_files, split=split)\r\n--> 515 for example in examples:\r\n 516 yield example[\"idx\"], example\r\n\r\n~\\Miniconda3\\envs\\nlp\\lib\\site-packages\\nlp\\datasets\\glue\\7fc58099eb3983a04c8dac8500b70d27e6eceae63ffb40d7900c977897bb58c6\\glue.py in _generate_example_mrpc_files(self, mrpc_files, split)\r\n 576 reader = csv.DictReader(f, delimiter=\"\\t\", quoting=csv.QUOTE_NONE)\r\n--> 577 for n, row in enumerate(reader):\r\n 578 is_row_in_dev = [row[\"#1 ID\"], row[\"#2 ID\"]] in dev_ids\r\n\r\n~\\Miniconda3\\envs\\nlp\\lib\\csv.py in __next__(self)\r\n 110 self.fieldnames\r\n--> 111 row = next(self.reader)\r\n 112 self.line_num = self.reader.line_num\r\n\r\n~\\Miniconda3\\envs\\nlp\\lib\\encodings\\cp1252.py in decode(self, input, final)\r\n 22 def decode(self, input, final=False):\r\n---> 23 return codecs.charmap_decode(input,self.errors,decoding_table)[0]\r\n 24 \r\n\r\nUnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 1180: character maps to <undefined>\r\n```\r\nThe fix is the same: specify `utf-8` encoding when opening the file. The previous fix didn't work as MRPC's download process is different from the others in GLUE. 
\r\nI am going to propose a new PR :)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/307\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/306","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/306\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/306\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/306\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/306","id":644176078,"node_id":"MDExOlB1bGxSZXF1ZXN0NDM4ODQ2MTI3","number":306,"title":"add pg19 dataset","user":{"login":"lucidrains","id":108653,"node_id":"MDQ6VXNlcjEwODY1Mw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/108653?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lucidrains","html_url":"https:\/\/github.com\/lucidrains","followers_url":"https:\/\/api.github.com\/users\/lucidrains\/followers","following_url":"https:\/\/api.github.com\/users\/lucidrains\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lucidrains\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lucidrains\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lucidrains\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lucidrains\/orgs","repos_url":"https:\/\/api.github.com\/users\/lucidrains\/repos","events_url":"https:\/\/api.github.com\/users\/lucidrains\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lucidrains\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lucidrains - Thanks a lot for making the PR - PG19 is a super important dataset! Thanks for making it. Many people are asking for PG-19, so it would be great to have that in the library as soon as possible @thomwolf .","@mariamabarham yup! around 11GB!","I'm looking forward to our first deep learning written novel already lol. It's definitely happening","Good to merge IMO.","Oh I just noticed but as we changed the urls to download the files, we have to update `dataset_infos.json`.\r\nCould you re-rurn `nlp-cli test .\/datasets\/pg19 --save_infos` ?","@lhoestq on it!","should be good!","@lhoestq - I think it's good to merge no?","`dataset_infos.json` is still not up to date with the new urls (we can see that there are urls like `gs:\/\/deepmind-gutenberg\/train\/*` instead of `https:\/\/storage.googleapis.com\/deepmind-gutenberg\/train\/*` in the json file)\r\n\r\nCan you check that you re-ran the command to update the json file, and that you pushed the changes @lucidrains ?","@lhoestq ohhh, I made the change in this commit https:\/\/github.com\/lucidrains\/nlp\/commit\/f3e23d823ad9942031be80b7c4e4212c592cd90c , that's interesting that the pull request didn't pick it up. maybe it's because I did it on another machine, let me check and get back to you!","@lhoestq wrong branch \ud83d\ude05 thanks for catching! 
","Awesome thanks \ud83c\udf89"],"created_at":1592949832000,"updated_at":1594022159000,"closed_at":1594022159000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/306","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/306","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/306.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/306.patch"},"body":"https:\/\/github.com\/huggingface\/nlp\/issues\/274\r\n\r\nAdd functioning PG19 dataset with dummy data\r\n\r\n`cos_e.py` was just auto-linted by `make style`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/306\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/305","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/305\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/305\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/305\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/305","id":644148149,"node_id":"MDU6SXNzdWU2NDQxNDgxNDk=","number":305,"title":"Importing downloaded package repository fails","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[{"id":2067393914,"node_id":"MDU6TGFiZWwyMDY3MzkzOTE0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/metric%20bug","name":"metric bug","color":"25b21e","default":false,"description":"A bug in a metric script"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1592946545000,"updated_at":1596127463000,"closed_at":1596127463000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"The `get_imports` function in `src\/nlp\/load.py` has a feature to download a package as a zip archive of the github repository and import functions from the unpacked directory. This is used for example in the `metrics\/coval.py` file, and would be useful to add BLEURT (@ankparikh).\r\n\r\nCurrently however, the code seems to have trouble with imports within the package. 
For example:\r\n```\r\nimport nlp\r\ncoval = nlp.load_metric('coval')\r\n```\r\nyields:\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/home\/yacine\/Code\/nlp\/src\/nlp\/load.py\", line 432, in load_metric\r\n metric_cls = import_main_class(module_path, dataset=False)\r\n File \"\/home\/yacine\/Code\/nlp\/src\/nlp\/load.py\", line 57, in import_main_class\r\n module = importlib.import_module(module_path)\r\n File \"\/home\/yacine\/anaconda3\/lib\/python3.7\/importlib\/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1006, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 983, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 967, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 677, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 728, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"\/home\/yacine\/Code\/nlp\/src\/nlp\/metrics\/coval\/a78807df33ac45edbb71799caf2b3b47e55df4fd690267808fe963a5e8b30952\/coval.py\", line 21, in <module>\r\n from .coval_backend.conll import reader # From: https:\/\/github.com\/ns-moosavi\/coval\r\n File \"\/home\/yacine\/Code\/nlp\/src\/nlp\/metrics\/coval\/a78807df33ac45edbb71799caf2b3b47e55df4fd690267808fe963a5e8b30952\/coval_backend\/conll\/reader.py\", line 2, in <module>\r\n from conll import mention\r\nModuleNotFoundError: No module named 'conll'\r\n```\r\n\r\nNot sure what the fix would be there.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/305\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/304","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/304\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/304\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/304\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/304","id":644091970,"node_id":"MDU6SXNzdWU2NDQwOTE5NzA=","number":304,"title":"Problem while printing doc string when instantiating multiple 
metrics.","user":{"login":"codehunk628","id":51091425,"node_id":"MDQ6VXNlcjUxMDkxNDI1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/51091425?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/codehunk628","html_url":"https:\/\/github.com\/codehunk628","followers_url":"https:\/\/api.github.com\/users\/codehunk628\/followers","following_url":"https:\/\/api.github.com\/users\/codehunk628\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/codehunk628\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/codehunk628\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/codehunk628\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/codehunk628\/orgs","repos_url":"https:\/\/api.github.com\/users\/codehunk628\/repos","events_url":"https:\/\/api.github.com\/users\/codehunk628\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/codehunk628\/received_events","type":"User","site_admin":false},"labels":[{"id":2067393914,"node_id":"MDU6TGFiZWwyMDY3MzkzOTE0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/metric%20bug","name":"metric bug","color":"25b21e","default":false,"description":"A bug in a metric script"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1592940725000,"updated_at":1595411458000,"closed_at":1595411458000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"When I load more than one metric and try to print doc string of a particular metric,. It shows the doc strings of all imported metric one after the other which looks quite confusing and clumsy.\r\nAttached [Colab](https:\/\/colab.research.google.com\/drive\/13H0ZgyQ2se0mqJ2yyew0bNEgJuHaJ8H3?usp=sharing) Notebook for problem clarification..","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/304\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/303","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/303\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/303\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/303\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/303","id":643912464,"node_id":"MDExOlB1bGxSZXF1ZXN0NDM4NjI3Nzcw","number":303,"title":"allow to move files across file 
systems","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1592924168000,"updated_at":1592924924000,"closed_at":1592924923000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/303","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/303","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/303.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/303.patch"},"body":"Users are allowed to use the `cache_dir` that they want.\r\nTherefore it can happen that we try to move files across filesystems.\r\nWe were using `os.rename` that doesn't allow that, so I changed some of them to `shutil.move`.\r\n\r\nThis should fix #301","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/303\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/302","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/302\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/302\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/302\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/302","id":643910418,"node_id":"MDU6SXNzdWU2NDM5MTA0MTg=","number":302,"title":"Question - Sign Language 
Datasets","user":{"login":"AmitMY","id":5757359,"node_id":"MDQ6VXNlcjU3NTczNTk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5757359?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/AmitMY","html_url":"https:\/\/github.com\/AmitMY","followers_url":"https:\/\/api.github.com\/users\/AmitMY\/followers","following_url":"https:\/\/api.github.com\/users\/AmitMY\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/AmitMY\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/AmitMY\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/AmitMY\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/AmitMY\/orgs","repos_url":"https:\/\/api.github.com\/users\/AmitMY\/repos","events_url":"https:\/\/api.github.com\/users\/AmitMY\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/AmitMY\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"},{"id":2067400324,"node_id":"MDU6TGFiZWwyMDY3NDAwMzI0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/generic%20discussion","name":"generic discussion","color":"c5def5","default":false,"description":"Generic discussion on the library"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Even more complicating - \r\n\r\nAs I see it, datasets can have \"addons\".\r\nFor example, the WebNLG dataset is a dataset for data-to-text. However, a work of mine and other works enriched this dataset with text plans \/ underlying text structures. In that case, I see a need to load the dataset \"WebNLG\" with \"plans\" addon.\r\n\r\nSame for sign language - if there is a dataset of videos, one addon can be to run OpenPose, another to run ARKit4 pose estimation, and another to run PoseNet, or even just a video embedding addon. (which are expensive to run individually for everyone who wants to use these data)\r\n\r\nThis is something I dabbled with my own implementation to a [research datasets library](https:\/\/github.com\/AmitMY\/meta-scholar\/) and I love to get the discussion going on these topics.","This is a really cool idea !\r\nThe example for data objects you gave for the RWTH-PHOENIX-Weather 2014 T dataset can totally fit inside the library.\r\n\r\nFor your point about formats like `ilex`, `eaf`, or `srt`, it is possible to use any library in your dataset script.\r\nHowever most user probably won't need these libraries, as most datasets don't need them, and therefore it's unlikely that we will have them in the minimum requirements to use `nlp` (we want to keep it as light-weight as possible). If a user wants to load your dataset and doesn't have the libraries you need, an error is raised asking the user to install them.\r\n\r\nMore generally, we plan to have something like a `requirements.txt` per dataset. This could also be a place for addons as you said. 
What do you think ?","Thanks, Quentin, I think a `requirements.txt` per dataset will be a good thing.\r\nI will work on adding this dataset next week, and once we sort all of the kinks, I'll add more."],"created_at":1592924020000,"updated_at":1606303533000,"closed_at":1606303533000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"An emerging field in NLP is SLP - sign language processing.\r\n\r\nI was wondering about adding datasets here, specifically because it's shaping up to be large and easily usable.\r\nThe metrics for sign language to text translation are the same.\r\n\r\nSo, what do you think about (me, or others) adding datasets here?\r\n\r\n\r\nAn example dataset would be [RWTH-PHOENIX-Weather 2014 T](https:\/\/www-i6.informatik.rwth-aachen.de\/~koller\/RWTH-PHOENIX-2014-T\/)\r\nFor every item in the dataset, the data object includes:\r\n1. video_path - path to mp4 file\r\n2. pose_path - a path to `.pose` file with human pose landmarks\r\n3. openpose_path - a path to a `.json` file with human pose landmarks\r\n4. gloss - string\r\n5. text - string\r\n6. video_metadata - height, width, frames, framerate\r\n\r\n\r\n------\r\n\r\nTo make it a tad more complicated - what if sign language libraries add requirements to `nlp`? for example, sign language is commonly annotated using `ilex`, `eaf`, or `srt` files, which are all loadable as text, but there is no reason for the dataset to parse that file by itself, if libraries exist to do so.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/302\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/301","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/301\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/301\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/301\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/301","id":643763525,"node_id":"MDU6SXNzdWU2NDM3NjM1MjU=","number":301,"title":"Setting cache_dir gives error on wikipedia download","user":{"login":"hallvagi","id":33862536,"node_id":"MDQ6VXNlcjMzODYyNTM2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33862536?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/hallvagi","html_url":"https:\/\/github.com\/hallvagi","followers_url":"https:\/\/api.github.com\/users\/hallvagi\/followers","following_url":"https:\/\/api.github.com\/users\/hallvagi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/hallvagi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/hallvagi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/hallvagi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/hallvagi\/orgs","repos_url":"https:\/\/api.github.com\/users\/hallvagi\/repos","events_url":"https:\/\/api.github.com\/users\/hallvagi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/hallvagi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Whoops didn't mean to close this one.\r\nI did some changes, could you try to run it from the master branch ?","Now it works, 
thanks!"],"created_at":1592911904000,"updated_at":1592982307000,"closed_at":1592982307000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"First of all thank you for a super handy library! I'd like to download large files to a specific drive so I set `cache_dir=my_path`. This works fine with e.g. imdb and squad. But on wikipedia I get an error:\r\n```\r\nnlp.load_dataset('wikipedia', '20200501.de', split = 'train', cache_dir=my_path)\r\n```\r\n```\r\nOSError Traceback (most recent call last)\r\n<ipython-input-2-23551344d7bc> in <module>\r\n 1 import nlp\r\n----> 2 nlp.load_dataset('wikipedia', '20200501.de', split = 'train', cache_dir=path)\r\n\r\n~\/anaconda3\/envs\/fastai2\/lib\/python3.7\/site-packages\/nlp\/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 522 download_mode=download_mode,\r\n 523 ignore_verifications=ignore_verifications,\r\n--> 524 save_infos=save_infos,\r\n 525 )\r\n 526 \r\n\r\n~\/anaconda3\/envs\/fastai2\/lib\/python3.7\/site-packages\/nlp\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 385 with utils.temporary_assignment(self, \"_cache_dir\", tmp_data_dir):\r\n 386 reader = ArrowReader(self._cache_dir, self.info)\r\n--> 387 reader.download_from_hf_gcs(self._cache_dir, self._relative_data_dir(with_version=True))\r\n 388 downloaded_info = DatasetInfo.from_directory(self._cache_dir)\r\n 389 self.info.update(downloaded_info)\r\n\r\n~\/anaconda3\/envs\/fastai2\/lib\/python3.7\/site-packages\/nlp\/arrow_reader.py in download_from_hf_gcs(self, cache_dir, relative_data_dir)\r\n 231 remote_dataset_info = os.path.join(remote_cache_dir, \"dataset_info.json\")\r\n 232 downloaded_dataset_info = cached_path(remote_dataset_info)\r\n--> 233 os.rename(downloaded_dataset_info, os.path.join(cache_dir, \"dataset_info.json\"))\r\n 234 if self._info is not None:\r\n 235 self._info.update(self._info.from_directory(cache_dir))\r\n\r\nOSError: [Errno 18] Invalid cross-device link: '\/home\/local\/NTU\/nn\/.cache\/huggingface\/datasets\/025fa4fd4f04aaafc9e939260fbc8f0bb190ce14c61310c8ae1ddd1dcb31f88c.9637f367b6711a79ca478be55fe6989b8aea4941b7ef7adc67b89ff403020947' -> '\/data\/nn\/nlp\/wikipedia\/20200501.de\/1.0.0.incomplete\/dataset_info.json'\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/301\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/300","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/300\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/300\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/300\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/300","id":643688304,"node_id":"MDExOlB1bGxSZXF1ZXN0NDM4NDQ4Mjk1","number":300,"title":"Fix bertscore 
references","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1592905139000,"updated_at":1592923658000,"closed_at":1592923657000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/300","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/300","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/300.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/300.patch"},"body":"I added some type checking for metrics. There was an issue where a metric could interpret a string a a list. A `ValueError` is raised if a string is given instead of a list.\r\n\r\nMoreover I added support for both strings and lists of strings for `references` in `bertscore`, as it is the case in the original code.\r\n\r\nBoth ways work:\r\n```\r\nimport nlp\r\n\r\nscorer = nlp.load_metric(\"bertscore\")\r\nwith open(\"pred.txt\") as p, open(\"ref.txt\") as g:\r\n for lp, lg in zip(p, g):\r\n scorer.add(lp, [lg])\r\nscore = scorer.compute(lang=\"en\")\r\n```\r\n\r\n```\r\nimport nlp\r\n\r\nscorer = nlp.load_metric(\"bertscore\")\r\nwith open(\"pred.txt\") as p, open(\"ref.txt\") as g:\r\n for lp, lg in zip(p, g):\r\n scorer.add(lp, lg)\r\nscore = scorer.compute(lang=\"en\")\r\n```\r\n\r\nThis should fix #295 and #238 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/300\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/299","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/299\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/299\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/299\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/299","id":643611557,"node_id":"MDExOlB1bGxSZXF1ZXN0NDM4Mzg0NDgw","number":299,"title":"remove some print in snli 
file","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I guess you can just rebase from master to fix the CI"],"created_at":1592898366000,"updated_at":1592899846000,"closed_at":1592899844000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/299","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/299","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/299.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/299.patch"},"body":"This PR removes unwanted `print` statements in some files such as `snli.py`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/299\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/298","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/298\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/298\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/298\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/298","id":643603804,"node_id":"MDExOlB1bGxSZXF1ZXN0NDM4Mzc4MDM4","number":298,"title":"Add searchable 
datasets","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Looks very cool! Only looked at it superficially though","Alright I think I've checked all your comments, thanks :)\r\n\r\nMoreover I just added a way to serialize faiss indexes.\r\nThis is important because for big datasets the index construction can take some time.\r\n\r\nExamples:\r\n\r\n```python\r\nds = nlp.load_dataset('crime_and_punish', split='train')\r\nds_with_embeddings = ds.map(lambda example: {'embeddings': embed(example['line']}))\r\nds_with_embeddings.add_faiss_index(column='embeddings')\r\n# query\r\nscores, retrieved_examples = ds_with_embeddings.get_nearest_examples('embeddings', embed('my new query'), k=10)\r\n# save index\r\nds_with_embeddings.get_index('embeddings').save('my_index.faiss')\r\n```\r\n\r\n```python\r\nds = nlp.load_dataset('crime_and_punish', split='train')\r\n# load index\r\nfaiss_index = nlp.search.FaissIndex.load('my_index.faiss')\r\nds.add_faiss_index('embeddings', faiss_index=faiss_index)\r\n# query\r\nscores, retrieved_examples = ds.get_nearest_examples('embeddings', embed('my new query'), k=10)\r\n```\r\n\r\nLet me know what you think","Nice!\r\n\r\nHere are a few comments:\r\n\r\nI think it would be good to separate (1) the name of the column we use for indexing and (2) the name of the index itself, at least in our head. As I understand it, once the index is created, the column we used to create it is irrelevant so the column name will only be relevant in the `add_faiss_index` and we should be able to supply a different index name, e.g. `my_faiss_index`. When we reload an index, we don't really care about the column that was used to create it, right? so it's maybe better to have an `index_name` (which default to the column name for a simple user experience but it can also be something else and this should be clear in our head when we define the API).\r\n\r\nI'm wondering if we should not have a triple of methods for each retrieval engine: `add_xxx_index`, `save_xxx_index` and `load_xxx_index` when `xxx` can be `faiss` or `elasticsearch`. I'm not a fan of exposing `nlp.search.FaissIndex` unless you think there is a strong reason to have the user learn this abstraction.\r\n\r\nLast but not least, I think we should already think about hosting index on our S3. 
I would maybe go for something like this: host the index serialized with the cached dataset on user-provided namespaces:\r\n```python\r\nwiki_indexed = load_dataset('thom\/wiki_indexed_with_dpr_faiss')\r\n```","I agree, I just changed to using `index_name` and having add\/save\/load methods","To summarize:\r\n\r\n\r\n```python\r\nds = nlp.load_dataset('crime_and_punish', split='train')\r\nds_with_embeddings = ds.map(lambda example: {'embeddings': embed(example['line']}))\r\nds_with_embeddings.add_faiss_index(column='embeddings')\r\n# query\r\nscores, retrieved_examples = ds_with_embeddings.get_nearest_examples('embeddings', embed('my new query'), k=10)\r\n# save index\r\nds_with_embeddings.save_faiss_index('embeddings', 'my_index.faiss')\r\n```\r\n\r\n```python\r\nds = nlp.load_dataset('crime_and_punish', split='train')\r\n# load index\r\nds.load_faiss_index('embeddings', 'my_index.faiss')\r\n# query\r\nscores, retrieved_examples = ds.get_nearest_examples('embeddings', embed('my new query'), k=10)\r\n```","Good to me. I understand that for now there is no check that the index matches the dataset on loading.\r\nMaybe just add a basic test on the number of examples?","Ok I think this one is ready now","Looks like the CI is having troubles to pass because of `tests\/test_dataset_common.py::AWSDatasetTest::test_builder_configs_{<insert_rando_dataset_name_here>}`, `requests.exceptions.ConnectionError` :\/"],"created_at":1592897583000,"updated_at":1593157844000,"closed_at":1593157843000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/298","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/298","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/298.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/298.patch"},"body":"# Better support for Numpy format + Add Indexed Datasets\r\n\r\nI was working on adding Indexed Datasets but in the meantime I had to also add more support for Numpy arrays in the lib.\r\n\r\n## Better support for Numpy format\r\n\r\nNew features:\r\n- New fast method to convert Numpy arrays from Arrow structure (up to x100 speed up) using Pandas.\r\n- Allow to output Numpy arrays in batched `.map`, which was the only missing part to fully support Numpy arrays.\r\n\r\nPandas offers fast zero-copy Numpy arrays conversion from Arrow structures.\r\nUsing it we can speed up the reading of memory-mapped Numpy array stored in Arrow format.\r\n\r\nWith these changes you can easily compute embeddings of texts using `.map()`. For example:\r\n```python\r\ndef embed(text):\r\n tokenized_example = tokenizer.encode(text, return_tensors=\"pt\")\r\n embeddings = bert_encoder(tokenized_examples).numpy()\r\n return embeddings\r\ndset_with_embeddings = dset.map(lambda example: {\"embeddings\": embed(example[\"text])})\r\n```\r\nAnd then reading the embeddings from the arrow format is be very fast.\r\n\r\nPS1: Note that right now only 1d arrays are supported.\r\nPS2: It seems possible to do without pandas but it will require more _trickery_.\r\nPS3: I did a simple benchmark with google colab that you can view here:\r\nhttps:\/\/colab.research.google.com\/drive\/1QlLTR6LRwYOKGJ-hTHmHyolE3wJzvfFg?usp=sharing\r\n\r\n## Add Indexed Datasets\r\n\r\nFor many retrieval tasks it is convenient to index a dataset to be able to run fast queries.\r\nFor example for models like DPR, REALM, RAG etc. 
that are models for Open Domain QA, the retrieval step is very important.\r\n\r\nTherefore I added two ways to add an index to a column of a dataset:\r\n1) You can index it using a Dense Index like Faiss. It is used to index vectors.\r\n Faiss is a library for efficient similarity search and clustering of dense vectors.\r\n It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM.\r\n2) You can index it using a Sparse Index like Elasticsearch. It is used to index text and run queries based on BM25 similarity.\r\n\r\nExample of usage:\r\n\r\n```python\r\nds = nlp.load_dataset('crime_and_punish', split='train')\r\nds_with_embeddings = ds.map(lambda example: {'embeddings': embed(example['line']})) # `embed` outputs a `np.array`\r\nds_with_embeddings.add_vector_index(column='embeddings')\r\nscores, retrieved_examples = ds_with_embeddings.get_nearest(column='embeddings', query=embed('my new query'), k=10)\r\n```\r\n\r\n```python\r\nds = nlp.load_dataset('crime_and_punish', split='train')\r\nes_client = elasticsearch.Elasticsearch()\r\nds.add_text_index(column='line', es_client=es_client, index_name=\"my_es_index\")\r\nscores, retrieved_examples = ds.get_nearest(column='line', query='my new query', k=10)\r\n```\r\n\r\nPS4: Faiss allows to specify many options for the [index](https:\/\/github.com\/facebookresearch\/faiss\/wiki\/The-index-factory) and for [GPU settings](https:\/\/github.com\/facebookresearch\/faiss\/wiki\/Faiss-on-the-GPU). I made sure that the user has full control over those settings.\r\n\r\n## Tests\r\n\r\nI added tests for Faiss, Elasticsearch and indexed datasets.\r\nI had to edit the CI config because all the test scripts were not being run by CircleCI.\r\n\r\n------------------\r\n\r\nI'd be really happy to have some feedbacks :)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/298\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/297","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/297\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/297\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/297\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/297","id":643444625,"node_id":"MDU6SXNzdWU2NDM0NDQ2MjU=","number":297,"title":"Error in Demo for Specific 
Datasets","user":{"login":"s-jse","id":60150701,"node_id":"MDQ6VXNlcjYwMTUwNzAx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/60150701?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/s-jse","html_url":"https:\/\/github.com\/s-jse","followers_url":"https:\/\/api.github.com\/users\/s-jse\/followers","following_url":"https:\/\/api.github.com\/users\/s-jse\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/s-jse\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/s-jse\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/s-jse\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/s-jse\/orgs","repos_url":"https:\/\/api.github.com\/users\/s-jse\/repos","events_url":"https:\/\/api.github.com\/users\/s-jse\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/s-jse\/received_events","type":"User","site_admin":false},"labels":[{"id":2107841032,"node_id":"MDU6TGFiZWwyMTA3ODQxMDMy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/nlp-viewer","name":"nlp-viewer","color":"94203D","default":false,"description":""}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for reporting these errors :)\r\n\r\nI can actually see two issues here.\r\n\r\nFirst, datasets like `natural_questions` require apache_beam to be processed. Right now the import is not at the right place so we have this error message. However, even the imports are fixed, the nlp viewer doesn't actually have the resources to process NQ right now so we'll have to wait until we have a version that we've already processed on our google storage (that's what we've done for wikipedia for example).\r\n\r\nSecond, datasets like `newsroom` require manual downloads as we're not allowed to redistribute the data ourselves (if I'm not wrong). An error message should be displayed saying that we're not allowed to show the dataset.\r\n\r\nI can fix the first issue with the imports but for the second one I think we'll have to see with @srush to show a message for datasets that require manual downloads (it can be checked whether a dataset requires manual downloads if `dataset_builder_instance.manual_download_instructions is not None`).\r\n\r\n","I added apache-beam to the viewer. We can think about how to add newsroom. ","We don't plan to host the source files of newsroom ourselves for now.\r\nYou can still get the dataset if you follow the download instructions given by `dataset = load_dataset('newsroom')` though.\r\nThe viewer also shows the instructions now.\r\n\r\nClosing this one. 
If you have other questions, feel free to re-open :)"],"created_at":1592872722000,"updated_at":1595007786000,"closed_at":1595007786000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Selecting `natural_questions` or `newsroom` dataset in the online demo results in an error similar to the following.\r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/60150701\/85347842-ac861900-b4ae-11ea-98c4-a53a00934783.png)\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/297\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/296","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/296\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/296\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/296\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/296","id":643423717,"node_id":"MDU6SXNzdWU2NDM0MjM3MTc=","number":296,"title":"snli -1 labels","user":{"login":"jxmorris12","id":13238952,"node_id":"MDQ6VXNlcjEzMjM4OTUy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13238952?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jxmorris12","html_url":"https:\/\/github.com\/jxmorris12","followers_url":"https:\/\/api.github.com\/users\/jxmorris12\/followers","following_url":"https:\/\/api.github.com\/users\/jxmorris12\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jxmorris12\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jxmorris12\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jxmorris12\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jxmorris12\/orgs","repos_url":"https:\/\/api.github.com\/users\/jxmorris12\/repos","events_url":"https:\/\/api.github.com\/users\/jxmorris12\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jxmorris12\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@jxmorris12 , we use `-1` to label examples for which `gold label` is missing (`gold label = -` in the original dataset). ","Thanks @mariamabarham! so the original dataset is missing some labels? That is weird. Is standard practice just to discard those examples training\/eval?","Yes the original dataset is missing some labels maybe @sleepinyourhat , @gangeli can correct me if I'm wrong \r\nFor my personal opinion at least if you want your model to learn to predict no answer (-1) you can leave it their but otherwise you can discard them. ","thanks @mariamabarham :)"],"created_at":1592868810000,"updated_at":1592923319000,"closed_at":1592923318000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I'm trying to train a model on the SNLI dataset. 
Why does it have so many -1 labels?\r\n```\r\nimport nlp\r\nfrom collections import Counter\r\ndata = nlp.load_dataset('snli')['train']\r\nprint(Counter(data['label']))\r\nCounter({0: 183416, 2: 183187, 1: 182764, -1: 785})\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/296\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/295","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/295\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/295\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/295\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/295","id":643245412,"node_id":"MDU6SXNzdWU2NDMyNDU0MTI=","number":295,"title":"Improve input warning for evaluation metrics","user":{"login":"Tiiiger","id":19514537,"node_id":"MDQ6VXNlcjE5NTE0NTM3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/19514537?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Tiiiger","html_url":"https:\/\/github.com\/Tiiiger","followers_url":"https:\/\/api.github.com\/users\/Tiiiger\/followers","following_url":"https:\/\/api.github.com\/users\/Tiiiger\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Tiiiger\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Tiiiger\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Tiiiger\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Tiiiger\/orgs","repos_url":"https:\/\/api.github.com\/users\/Tiiiger\/repos","events_url":"https:\/\/api.github.com\/users\/Tiiiger\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Tiiiger\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1592846937000,"updated_at":1592923657000,"closed_at":1592923657000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi, \r\n\r\nI am the author of `bert_score`. Recently, we received [ an issue ](https:\/\/github.com\/Tiiiger\/bert_score\/issues\/62) reporting a problem in using `bert_score` from the `nlp` package (also see #238 in this repo). After looking into this, I realized that the problem arises from the format `nlp.Metric` takes input. \r\n\r\nHere is a minimal example:\r\n```python\r\nimport nlp\r\n\r\nscorer = nlp.load_metric(\"bertscore\")\r\nwith open(\"pred.txt\") as p, open(\"ref.txt\") as g:\r\n for lp, lg in zip(p, g):\r\n scorer.add(lp, lg)\r\nscore = scorer.compute(lang=\"en\")\r\n```\r\n\r\nThe problem in the above code is that `scorer.add()` expects a list of strings as input for the references. As a result, the `scorer` here would take a list of characters in `lg` to be the references. The correct implementation would be calling\r\n```python\r\nscorer.add(lp, [lg])\r\n```\r\n\r\nI just want to raise this issue to you to prevent future user errors of a similar kind. 
I assume some simple type checking can prevent this from happening?\r\n\r\nThanks!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/295\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/294","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/294\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/294\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/294\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/294","id":643181179,"node_id":"MDU6SXNzdWU2NDMxODExNzk=","number":294,"title":"Cannot load arxiv dataset on MacOS?","user":{"login":"JohnGiorgi","id":8917831,"node_id":"MDQ6VXNlcjg5MTc4MzE=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8917831?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JohnGiorgi","html_url":"https:\/\/github.com\/JohnGiorgi","followers_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/followers","following_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/orgs","repos_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/repos","events_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JohnGiorgi\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I couldn't replicate this issue on my macbook :\/\r\nCould you try to play with different encodings in `with open(path, encoding=...) as f` in scientific_papers.py:L108 ?","I was able to track down the file causing the problem by adding the following to `scientific_papers.py` (starting at line 116):\r\n\r\n```python\r\n from json import JSONDecodeError\r\n try:\r\n d = json.loads(line)\r\n summary = \"\\n\".join(d[\"abstract_text\"])\r\n except JSONDecodeError:\r\n print(path, line)\r\n```\r\n\r\n\r\n\r\nFor me it was at: `\/Users\/johngiorgi\/.cache\/huggingface\/datasets\/f87fd498c5003cbe253a2af422caa1e58f87a4fd74cb3e67350c635c8903b259\/arxiv-dataset\/train.txt` with `\"article_id\": \"1407.3051\"`.\r\n\r\nNot really 100% sure at the moment, but it looks like this specific substring from `\"article_text\"` may be causing the problem?\r\n\r\n```\r\n\"after the missing - mass scale adjustment , the validity of the corrections was tested in the @xmath85 productions at 1.69 gev\/@xmath1 . in fig . [\", \"fig : calibrations ] ( a ) , we show the missing - mass spectrum in the @xmath86 region in the @xmath87 reaction at 1.69 gev\/@xmath1 . a fitting result with a lorentzian function for the @xmath86 ( dashed line ) and the three - body phas\r\n```\r\n\r\nperhaps because it appears to be truncated. 
I (think) I can recreate the problem by doing the following:\r\n\r\n```python\r\nimport json\r\n\r\n# A minimal example of the json file that causes the error\r\ninvalid_json = '{\"article_id\": \"1407.3051\", \"article_text\": [\"the missing - mass resolution was obtained to be 2.8 @xmath3 0.1 mev\/@xmath4 ( fwhm ) , which corresponds to the missing - mass resolution of 3.2 @xmath3 0.2 mev\/@xmath4 ( fwhm ) at the @xmath6 cusp region in the @xmath0 reaction .\", \"this resolution is at least by a factor of 2 better than the previous measurement with the same reaction ( 3.2@xmath595.5 mev\/@xmath4 in @xmath84 ) @xcite .\", \"after the missing - mass scale adjustment , the validity of the corrections was tested in the @xmath85 productions at 1.69 gev\/@xmath1 . in fig . [\", \"fig : calibrations ] ( a ) , we show the missing - mass spectrum in the @xmath86 region in the @xmath87 reaction at 1.69 gev\/@xmath1 . a fitting result with a lorentzian function for the @xmath86 ( dashed line ) and the three - body phas' \r\n# The line of code from `scientific_papers.py` which appears to cause the error\r\njson.loads(invalid_json)\r\n```\r\n\r\nThis is as far as I get before I am stumped.","I just checked inside `train.txt` and this line isn't truncated for me (line 163577).\r\nCould you try to clear your cache and re-download the dataset ?","Ah the turn-it-off-turn-it-on again solution! That did it, thanks a lot :) "],"created_at":1592840815000,"updated_at":1593530710000,"closed_at":1593530710000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I am having trouble loading the `\"arxiv\"` config from the `\"scientific_papers\"` dataset on MacOS. When I try loading the dataset with:\r\n\r\n```python\r\narxiv = nlp.load_dataset(\"scientific_papers\", \"arxiv\")\r\n```\r\n\r\nI get the following stack trace:\r\n\r\n```bash\r\nJSONDecodeError Traceback (most recent call last)\r\n<ipython-input-2-8e00c55d5a59> in <module>\r\n----> 1 arxiv = nlp.load_dataset(\"scientific_papers\", \"arxiv\")\r\n\r\n~\/miniconda3\/envs\/t2t\/lib\/python3.7\/site-packages\/nlp\/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 522 download_mode=download_mode,\r\n 523 ignore_verifications=ignore_verifications,\r\n--> 524 save_infos=save_infos,\r\n 525 )\r\n 526 \r\n\r\n~\/miniconda3\/envs\/t2t\/lib\/python3.7\/site-packages\/nlp\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 430 verify_infos = not save_infos and not ignore_verifications\r\n 431 self._download_and_prepare(\r\n--> 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 433 )\r\n 434 # Sync info\r\n\r\n~\/miniconda3\/envs\/t2t\/lib\/python3.7\/site-packages\/nlp\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 481 try:\r\n 482 # Prepare split will record examples associated to the split\r\n--> 483 self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 484 except OSError:\r\n 485 raise OSError(\"Cannot find data file. 
\" + (self.manual_download_instructions or \"\"))\r\n\r\n~\/miniconda3\/envs\/t2t\/lib\/python3.7\/site-packages\/nlp\/builder.py in _prepare_split(self, split_generator)\r\n 662 \r\n 663 generator = self._generate_examples(**split_generator.gen_kwargs)\r\n--> 664 for key, record in utils.tqdm(generator, unit=\" examples\", total=split_info.num_examples, leave=False):\r\n 665 example = self.info.features.encode_example(record)\r\n 666 writer.write(example)\r\n\r\n~\/miniconda3\/envs\/t2t\/lib\/python3.7\/site-packages\/tqdm\/std.py in __iter__(self)\r\n 1106 fp_write=getattr(self.fp, 'write', sys.stderr.write))\r\n 1107 \r\n-> 1108 for obj in iterable:\r\n 1109 yield obj\r\n 1110 # Update and possibly print the progressbar.\r\n\r\n~\/miniconda3\/envs\/t2t\/lib\/python3.7\/site-packages\/nlp\/datasets\/scientific_papers\/107a416c0e1958cb846f5934b5aae292f7884a5b27e86af3f3ef1a093e058bbc\/scientific_papers.py in _generate_examples(self, path)\r\n 114 # \"section_names\": list[str], list of section names.\r\n 115 # \"sections\": list[list[str]], list of sections (list of paragraphs)\r\n--> 116 d = json.loads(line)\r\n 117 summary = \"\\n\".join(d[\"abstract_text\"])\r\n 118 # In original paper, <S> and <\/S> are not used in vocab during training\r\n\r\n~\/miniconda3\/envs\/t2t\/lib\/python3.7\/json\/__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)\r\n 346 parse_int is None and parse_float is None and\r\n 347 parse_constant is None and object_pairs_hook is None and not kw):\r\n--> 348 return _default_decoder.decode(s)\r\n 349 if cls is None:\r\n 350 cls = JSONDecoder\r\n\r\n~\/miniconda3\/envs\/t2t\/lib\/python3.7\/json\/decoder.py in decode(self, s, _w)\r\n 335 \r\n 336 \"\"\"\r\n--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())\r\n 338 end = _w(s, end).end()\r\n 339 if end != len(s):\r\n\r\n~\/miniconda3\/envs\/t2t\/lib\/python3.7\/json\/decoder.py in raw_decode(self, s, idx)\r\n 351 \"\"\"\r\n 352 try:\r\n--> 353 obj, end = self.scan_once(s, idx)\r\n 354 except StopIteration as err:\r\n 355 raise JSONDecodeError(\"Expecting value\", s, err.value) from None\r\n\r\nJSONDecodeError: Unterminated string starting at: line 1 column 46983 (char 46982)\r\n\r\n163502 examples [02:10, 2710.68 examples\/s] \r\n```\r\n\r\nI am not sure how to trace back to the specific JSON file that has the \"Unterminated string\". Also, I do not get this error on colab so I suspect it may be MacOS specific. 
Copy pasting the relevant lines from `transformers-cli env` below:\r\n\r\n- Platform: Darwin-19.5.0-x86_64-i386-64bit\r\n- Python version: 3.7.5\r\n- PyTorch version (GPU?): 1.5.0 (False)\r\n- Tensorflow version (GPU?): 2.2.0 (False)\r\n\r\nAny ideas?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/294\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/293","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/293\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/293\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/293\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/293","id":642942182,"node_id":"MDExOlB1bGxSZXF1ZXN0NDM3ODM1ODI4","number":293,"title":"Don't test community datasets","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1592820933000,"updated_at":1592824020000,"closed_at":1592824019000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/293","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/293","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/293.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/293.patch"},"body":"This PR disables testing for community datasets on aws.\r\n\r\nIt should fix the CI that is currently failing.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/293\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/292","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/292\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/292\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/292\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/292","id":642897797,"node_id":"MDExOlB1bGxSZXF1ZXN0NDM3Nzk4NTM2","number":292,"title":"Update metadata for x_stance 
dataset","user":{"login":"jvamvas","id":5830820,"node_id":"MDQ6VXNlcjU4MzA4MjA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5830820?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jvamvas","html_url":"https:\/\/github.com\/jvamvas","followers_url":"https:\/\/api.github.com\/users\/jvamvas\/followers","following_url":"https:\/\/api.github.com\/users\/jvamvas\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jvamvas\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jvamvas\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jvamvas\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jvamvas\/orgs","repos_url":"https:\/\/api.github.com\/users\/jvamvas\/repos","events_url":"https:\/\/api.github.com\/users\/jvamvas\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jvamvas\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Great! Thanks @jvamvas for these updates.\r\n","I have fixed a warning. The remaining test failure is due to an unrelated dataset.","We just fixed the other dataset on master. Could you rebase from master and push to rerun the CI ?"],"created_at":1592817206000,"updated_at":1592899644000,"closed_at":1592899644000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/292","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/292","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/292.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/292.patch"},"body":"Thank you for featuring the x_stance dataset in your library. 
This PR updates some metadata:\r\n- Citation: Replace preprint with proceedings\r\n- URL: Use a URL with long-term availability\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/292\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/291","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/291\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/291\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/291\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/291","id":642688450,"node_id":"MDExOlB1bGxSZXF1ZXN0NDM3NjM1NjMy","number":291,"title":"break statement not required","user":{"login":"mayurnewase","id":12967587,"node_id":"MDQ6VXNlcjEyOTY3NTg3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/12967587?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mayurnewase","html_url":"https:\/\/github.com\/mayurnewase","followers_url":"https:\/\/api.github.com\/users\/mayurnewase\/followers","following_url":"https:\/\/api.github.com\/users\/mayurnewase\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mayurnewase\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mayurnewase\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mayurnewase\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mayurnewase\/orgs","repos_url":"https:\/\/api.github.com\/users\/mayurnewase\/repos","events_url":"https:\/\/api.github.com\/users\/mayurnewase\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mayurnewase\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I guess,test failing due to connection error?","We just fixed the other dataset on master. 
Could you rebase from master and push to rerun the CI ?","If I'm not wrong this function returns None if no main class was found.\r\nI think it makes things less clear not to have a return at the end of the function.\r\nI guess we can have one return in the for loop instead of the break statement, AND one return at the end to explicitly return None.\r\nWhat do you think ?"],"created_at":1592790055000,"updated_at":1592935078000,"closed_at":1592905022000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/291","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/291","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/291.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/291.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/291\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/290","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/290\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/290\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/290\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/290","id":641978286,"node_id":"MDU6SXNzdWU2NDE5NzgyODY=","number":290,"title":"ConnectionError - Eli5 dataset download","user":{"login":"JovanNj","id":8490096,"node_id":"MDQ6VXNlcjg0OTAwOTY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8490096?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/JovanNj","html_url":"https:\/\/github.com\/JovanNj","followers_url":"https:\/\/api.github.com\/users\/JovanNj\/followers","following_url":"https:\/\/api.github.com\/users\/JovanNj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/JovanNj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/JovanNj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/JovanNj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/JovanNj\/orgs","repos_url":"https:\/\/api.github.com\/users\/JovanNj\/repos","events_url":"https:\/\/api.github.com\/users\/JovanNj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/JovanNj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["It should ne fixed now, thanks for reporting this one :)\r\nIt was an issue on our google storage.\r\n\r\nLet me now if you're still facing this issue.","It works now, thanks for prompt help!"],"created_at":1592574033000,"updated_at":1592659344000,"closed_at":1592659344000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi, I have a problem with downloading Eli5 dataset. 
When typing `nlp.load_dataset('eli5')`, I get ConnectionError: Couldn't reach https:\/\/storage.googleapis.com\/huggingface-nlp\/cache\/datasets\/eli5\/LFQA_reddit\/1.0.0\/explain_like_im_five-train_eli5.arrow\r\n\r\nI would appreciate if you could help me with this issue.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/290\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/289","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/289\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/289\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/289\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/289","id":641934194,"node_id":"MDExOlB1bGxSZXF1ZXN0NDM3MDc0MTM3","number":289,"title":"update xsum","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Looks cool!\r\n@mariamabarham can you add a detailed description here what exactly is changed and how the user can load xsum now?","And a rebase should solve the conflicts","This is a super useful PR :-) @sshleifer - maybe you can take a look at the updated version of xsum if you can use it for your use case. Now, one should be able to just load it with:\r\n\r\n```python \r\nnlp.load_datasets(\"xsum\", ....) 
# no manual dir required anymore\r\n```\r\n"],"created_at":1592569712000,"updated_at":1592832446000,"closed_at":1592810407000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/289","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/289","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/289.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/289.patch"},"body":"This PR makes the following update to the xsum dataset:\r\n\r\n- Manual download is not required anymore\r\n\r\n- dataset can be loaded as follow: `nlp.load_dataset('xsum')`\r\n\r\n\r\n**Important** \r\nInstead of using on outdated url to download the data: \"https:\/\/raw.githubusercontent.com\/EdinburghNLP\/XSum\/master\/XSum-Dataset\/XSum-TRAINING-DEV-TEST-SPLIT-90-5-5.json\" \r\n\r\na more up-to-date url stored here: https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/summarization\/xsum.tar.gz is used\r\n, so that the user does not need to manually download the data anymore. \r\nThere might be slight breaking changes here for xsum. ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/289\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/288","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/288\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/288\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/288\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/288","id":641888610,"node_id":"MDU6SXNzdWU2NDE4ODg2MTA=","number":288,"title":"Error at the first example in README: AttributeError: module 'dill' has no attribute '_dill'","user":{"login":"wutong8023","id":14964542,"node_id":"MDQ6VXNlcjE0OTY0NTQy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/14964542?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/wutong8023","html_url":"https:\/\/github.com\/wutong8023","followers_url":"https:\/\/api.github.com\/users\/wutong8023\/followers","following_url":"https:\/\/api.github.com\/users\/wutong8023\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/wutong8023\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/wutong8023\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/wutong8023\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/wutong8023\/orgs","repos_url":"https:\/\/api.github.com\/users\/wutong8023\/repos","events_url":"https:\/\/api.github.com\/users\/wutong8023\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/wutong8023\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["It looks like the bug comes from `dill`. Which version of `dill` are you using ?","Thank you. It is version 0.2.6, which version is better?","0.2.6 is three years old now, maybe try a more recent one, e.g. the current 0.3.2 if you can?","Thanks guys! 
I upgraded dill and it works.","Awesome"],"created_at":1592564482000,"updated_at":1592730311000,"closed_at":1592730311000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"\/Users\/parasol_tree\/anaconda3\/lib\/python3.6\/site-packages\/tensorflow\/python\/framework\/dtypes.py:469: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) \/ '(1,)type'.\r\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\r\n\/Users\/parasol_tree\/anaconda3\/lib\/python3.6\/site-packages\/tensorflow\/python\/framework\/dtypes.py:470: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) \/ '(1,)type'.\r\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\r\n\/Users\/parasol_tree\/anaconda3\/lib\/python3.6\/site-packages\/tensorflow\/python\/framework\/dtypes.py:471: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) \/ '(1,)type'.\r\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\r\n\/Users\/parasol_tree\/anaconda3\/lib\/python3.6\/site-packages\/tensorflow\/python\/framework\/dtypes.py:472: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) \/ '(1,)type'.\r\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\r\n\/Users\/parasol_tree\/anaconda3\/lib\/python3.6\/site-packages\/tensorflow\/python\/framework\/dtypes.py:473: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) \/ '(1,)type'.\r\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\r\n\/Users\/parasol_tree\/anaconda3\/lib\/python3.6\/site-packages\/tensorflow\/python\/framework\/dtypes.py:476: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) \/ '(1,)type'.\r\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\r\n\/Users\/parasol_tree\/anaconda3\/lib\/python3.6\/importlib\/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6\r\n return f(*args, **kwds)\r\n\/Users\/parasol_tree\/anaconda3\/lib\/python3.6\/site-packages\/h5py\/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. 
In future, it will be treated as `np.float64 == np.dtype(float).type`.\r\n from ._conv import register_converters as _register_converters\r\nTraceback (most recent call last):\r\n File \"\/Users\/parasol_tree\/Resource\/019 - Github\/AcademicEnglishToolkit \/test.py\", line 7, in <module>\r\n import nlp\r\n File \"\/Users\/parasol_tree\/anaconda3\/lib\/python3.6\/site-packages\/nlp\/__init__.py\", line 27, in <module>\r\n from .arrow_dataset import Dataset\r\n File \"\/Users\/parasol_tree\/anaconda3\/lib\/python3.6\/site-packages\/nlp\/arrow_dataset.py\", line 31, in <module>\r\n from nlp.utils.py_utils import dumps\r\n File \"\/Users\/parasol_tree\/anaconda3\/lib\/python3.6\/site-packages\/nlp\/utils\/__init__.py\", line 20, in <module>\r\n from .download_manager import DownloadManager, GenerateMode\r\n File \"\/Users\/parasol_tree\/anaconda3\/lib\/python3.6\/site-packages\/nlp\/utils\/download_manager.py\", line 25, in <module>\r\n from .py_utils import flatten_nested, map_nested, size_str\r\n File \"\/Users\/parasol_tree\/anaconda3\/lib\/python3.6\/site-packages\/nlp\/utils\/py_utils.py\", line 244, in <module>\r\n class Pickler(dill.Pickler):\r\n File \"\/Users\/parasol_tree\/anaconda3\/lib\/python3.6\/site-packages\/nlp\/utils\/py_utils.py\", line 247, in Pickler\r\n dispatch = dill._dill.MetaCatchingDict(dill.Pickler.dispatch.copy())\r\nAttributeError: module 'dill' has no attribute '_dill'","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/288\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/287","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/287\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/287\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/287\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/287","id":641800227,"node_id":"MDExOlB1bGxSZXF1ZXN0NDM2OTY0NTg0","number":287,"title":"fix squad_v2 
metric","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1592555086000,"updated_at":1592555623000,"closed_at":1592555621000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/287","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/287","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/287.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/287.patch"},"body":"Fix #280 \r\nThe imports were wrong","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/287\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/286","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/286\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/286\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/286\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/286","id":641585758,"node_id":"MDExOlB1bGxSZXF1ZXN0NDM2NzkzMjI4","number":286,"title":"Add ANLI dataset.","user":{"login":"easonnie","id":11016329,"node_id":"MDQ6VXNlcjExMDE2MzI5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11016329?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/easonnie","html_url":"https:\/\/github.com\/easonnie","followers_url":"https:\/\/api.github.com\/users\/easonnie\/followers","following_url":"https:\/\/api.github.com\/users\/easonnie\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/easonnie\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/easonnie\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/easonnie\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/easonnie\/orgs","repos_url":"https:\/\/api.github.com\/users\/easonnie\/repos","events_url":"https:\/\/api.github.com\/users\/easonnie\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/easonnie\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Awesome!! 
Thanks @easonnie.\r\nLet's wait for additional reviews maybe from @lhoestq @patrickvonplaten @jplu"],"created_at":1592519250000,"updated_at":1592828607000,"closed_at":1592828607000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/286","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/286","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/286.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/286.patch"},"body":"I completed all the steps in https:\/\/github.com\/huggingface\/nlp\/blob\/master\/CONTRIBUTING.md#how-to-add-a-dataset and push the code for ANLI. Please let me know if there are any errors.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/286\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/285","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/285\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/285\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/285\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/285","id":641360702,"node_id":"MDExOlB1bGxSZXF1ZXN0NDM2NjAyMjk4","number":285,"title":"Consistent formatting of citations","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Circle CI shuold be green :-) "],"created_at":1592497523000,"updated_at":1592813365000,"closed_at":1592813364000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/285","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/285","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/285.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/285.patch"},"body":"#283 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/285\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/284","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/284\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/284\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/284\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/284","id":641337217,"node_id":"MDExOlB1bGxSZXF1ZXN0NDM2NTgxODQ2","number":284,"title":"Fix manual download instructions","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Verified that this works, thanks!","But I get\r\n```python\r\nConnectionError: Couldn't reach https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/.\/datasets\/wmt16\/wmt16.py\r\n```\r\nWhen I try from jupyter on brutasse or my mac. (the jupyter server is run from transformers).\r\n\r\n\r\nBoth machines can run\r\n```bash\r\naws s3 ls s3:\/\/datasets.huggingface.co\/nlp\/datasets\/wmt16\/\r\n```\r\nbut it seems one must be in the nlp directory to run the command?\r\n\r\n(I ran `pip install -e . ` on this branch in both situations.)\r\n\r\n\r\n","`https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/.\/datasets\/wmt16\/wmt16.py` looks very weird.\r\n\r\n(Also, S3 is not a file-system, it's a flat key-value store)","Good to merge I think @lhoestq ","> But I get\r\n> \r\n> ```python\r\n> ConnectionError: Couldn't reach https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/datasets\/.\/datasets\/wmt16\/wmt16.py\r\n> ```\r\n> \r\n> When I try from jupyter on brutasse or my mac. (the jupyter server is run from transformers).\r\n> \r\n> Both machines can run\r\n> \r\n> ```shell\r\n> aws s3 ls s3:\/\/datasets.huggingface.co\/nlp\/datasets\/wmt16\/\r\n> ```\r\n> \r\n> but it seems one must be in the nlp directory to run the command?\r\n> \r\n> (I ran `pip install -e . 
` on this branch in both situations.)\r\n\r\nAs soon as it is on master, the dataset script wmt16.py will be synced on S3 and you'll be able to do `load_dataset(\"wmt16\")`"],"created_at":1592495997000,"updated_at":1592555061000,"closed_at":1592555059000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/284","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/284","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/284.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/284.patch"},"body":"This PR replaces the static `DatasetBulider` variable `MANUAL_DOWNLOAD_INSTRUCTIONS` by a property function `manual_download_instructions()`. \r\n\r\nSome datasets like XTREME and all WMT need the manual data dir only for a small fraction of the possible configs.\r\n\r\nAfter some brainstorming with @mariamabarham and @lhoestq, we came to the conclusion that having a property function `manual_download_instructions()` gives us more flexibility to decide on a per config basis in the dataset builder if manual download instructions are needed.\r\n\r\nAlso this PR should unblock solves a bug with `wmt16 - ro-en` \r\n@sshleifer from this branch you should be able to succesfully run\r\n\r\n```python \r\nimport nlp \r\nds = nlp.load_dataset('.\/datasets\/wmt16', 'ro-en')\r\n```\r\n\r\nand once this PR is merged S3 should be synched so that \r\n\r\n```python\r\nimport nlp\r\nds = nlp.load_dataset(\"wmt16\", \"ro-en\")\r\n```\r\n\r\nworks as well.\r\n\r\n**Important**: Since `MANUAL_DOWNLOAD_INSTRUCTIONS` was not really exposed to the user, this PR should not be a problem regarding backward compatibility.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/284\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/283","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/283\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/283\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/283\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/283","id":641270439,"node_id":"MDU6SXNzdWU2NDEyNzA0Mzk=","number":283,"title":"Consistent formatting of 
citations","user":{"login":"srush","id":35882,"node_id":"MDQ6VXNlcjM1ODgy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35882?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/srush","html_url":"https:\/\/github.com\/srush","followers_url":"https:\/\/api.github.com\/users\/srush\/followers","following_url":"https:\/\/api.github.com\/users\/srush\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/srush\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/srush\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/srush\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/srush\/orgs","repos_url":"https:\/\/api.github.com\/users\/srush\/repos","events_url":"https:\/\/api.github.com\/users\/srush\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/srush\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"assignees":[{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1592491725000,"updated_at":1592847046000,"closed_at":1592847046000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"The citations are all of a different format, some have \"```\" and have text inside, others are proper bibtex. \r\n\r\nCan we make it so that they all are proper citations, i.e. 
parse by the bibtex spec:\r\n\r\nhttps:\/\/bibtexparser.readthedocs.io\/en\/master\/","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/283\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/282","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/282\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/282\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/282\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/282","id":641217759,"node_id":"MDExOlB1bGxSZXF1ZXN0NDM2NDgxNzMy","number":282,"title":"Update dataset_info from gcs","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1592487675000,"updated_at":1592497492000,"closed_at":1592497491000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/282","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/282","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/282.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/282.patch"},"body":"Some datasets are hosted on gcs (wikipedia for example). In this PR I make sure that, when a user loads such datasets, the file_instructions are built using the dataset_info.json from gcs and not from the info extracted from the local `dataset_infos.json` (the one that contain the info for each config). 
Indeed local files may end up outdated.\r\n\r\nFurthermore, to avoid outdated dataset_infos.json, I now make sure that each time you run `load_dataset` it also tries to update the file locally.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/282\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/281","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/281\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/281\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/281\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/281","id":641067856,"node_id":"MDU6SXNzdWU2NDEwNjc4NTY=","number":281,"title":"Private\/sensitive data","user":{"login":"MFreidank","id":6368040,"node_id":"MDQ6VXNlcjYzNjgwNDA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6368040?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/MFreidank","html_url":"https:\/\/github.com\/MFreidank","followers_url":"https:\/\/api.github.com\/users\/MFreidank\/followers","following_url":"https:\/\/api.github.com\/users\/MFreidank\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/MFreidank\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/MFreidank\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/MFreidank\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/MFreidank\/orgs","repos_url":"https:\/\/api.github.com\/users\/MFreidank\/repos","events_url":"https:\/\/api.github.com\/users\/MFreidank\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/MFreidank\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @MFreidank, you should already be able to load a dataset from local sources, indeed. (ping @lhoestq and @jplu)\r\n\r\nWe're also thinking about the ability to host private datasets on a hosted bucket with permission management, but that's further down the road.","Hi @MFreidank, it is possible to load a dataset from your local storage, but only CSV\/TSV and JSON are supported. To load a dataset in JSON format:\r\n\r\n```\r\nnlp.load_dataset(path=\"json\", data_files={nlp.Split.TRAIN: [\"path\/to\/train.json\"], nlp.Split.TEST: [\"path\/to\/test.json\"]})\r\n```\r\n\r\nFor CSV\/TSV datasets, you have to replace `json` by `csv`.","Hi @julien-c @jplu,\r\nThanks for sharing this solution with me, it helps, this is what I was looking for. \r\nIf not already there and only missed by me, this could be a great addition in the docs.\r\n\r\nClosing my issue as resolved, thanks again."],"created_at":1592473647000,"updated_at":1592658912000,"closed_at":1592658912000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi all,\r\nThanks for this fantastic library, it makes it very easy to do prototyping for NLP projects interchangeably between TF\/Pytorch. \r\n\r\nUnfortunately, there is data that cannot easily be shared publicly as it may contain sensitive information. \r\nIs there support\/a plan to support such data with NLP, e.g. 
by reading it from local sources?\r\n\r\nUse case flow could look like this: use NLP to prototype an approach on similar, public data and apply the resulting prototype on sensitive\/private data without the need to rethink data processing pipelines. \r\n\r\nMany thanks for your responses ahead of time and kind regards,\r\nMFreidank","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/281\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/280","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/280\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/280\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/280\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/280","id":640677615,"node_id":"MDU6SXNzdWU2NDA2Nzc2MTU=","number":280,"title":"Error with SquadV2 Metrics","user":{"login":"avinregmi","id":32203792,"node_id":"MDQ6VXNlcjMyMjAzNzky","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/32203792?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/avinregmi","html_url":"https:\/\/github.com\/avinregmi","followers_url":"https:\/\/api.github.com\/users\/avinregmi\/followers","following_url":"https:\/\/api.github.com\/users\/avinregmi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/avinregmi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/avinregmi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/avinregmi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/avinregmi\/orgs","repos_url":"https:\/\/api.github.com\/users\/avinregmi\/repos","events_url":"https:\/\/api.github.com\/users\/avinregmi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/avinregmi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1592421054000,"updated_at":1592555621000,"closed_at":1592555621000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I can't seem to import squad v2 metrics. 
\r\n\r\n**squad_metric = nlp.load_metric('squad_v2')**\r\n\r\n**This throws me an error.:**\r\n\r\n\r\n```\r\nImportError Traceback (most recent call last)\r\n<ipython-input-8-170b6a170555> in <module>\r\n----> 1 squad_metric = nlp.load_metric('squad_v2')\r\n\r\n~\/env\/lib64\/python3.6\/site-packages\/nlp\/load.py in load_metric(path, name, process_id, num_process, data_dir, experiment_id, in_memory, download_config, **metric_init_kwargs)\r\n 426 \"\"\"\r\n 427 module_path = prepare_module(path, download_config=download_config, dataset=False)\r\n--> 428 metric_cls = import_main_class(module_path, dataset=False)\r\n 429 metric = metric_cls(\r\n 430 name=name,\r\n\r\n~\/env\/lib64\/python3.6\/site-packages\/nlp\/load.py in import_main_class(module_path, dataset)\r\n 55 \"\"\"\r\n 56 importlib.invalidate_caches()\r\n---> 57 module = importlib.import_module(module_path)\r\n 58 \r\n 59 if dataset:\r\n\r\n\/usr\/lib64\/python3.6\/importlib\/__init__.py in import_module(name, package)\r\n 124 break\r\n 125 level += 1\r\n--> 126 return _bootstrap._gcd_import(name[level:], package, level)\r\n 127 \r\n 128 \r\n\r\n\/usr\/lib64\/python3.6\/importlib\/_bootstrap.py in _gcd_import(name, package, level)\r\n\r\n\/usr\/lib64\/python3.6\/importlib\/_bootstrap.py in _find_and_load(name, import_)\r\n\r\n\/usr\/lib64\/python3.6\/importlib\/_bootstrap.py in _find_and_load_unlocked(name, import_)\r\n\r\n\/usr\/lib64\/python3.6\/importlib\/_bootstrap.py in _load_unlocked(spec)\r\n\r\n\/usr\/lib64\/python3.6\/importlib\/_bootstrap_external.py in exec_module(self, module)\r\n\r\n\/usr\/lib64\/python3.6\/importlib\/_bootstrap.py in _call_with_frames_removed(f, *args, **kwds)\r\n\r\n~\/env\/lib64\/python3.6\/site-packages\/nlp\/metrics\/squad_v2\/a15e787c76889174874386d3def75321f0284c11730d2a57e28fe1352c9b5c7a\/squad_v2.py in <module>\r\n 16 \r\n 17 import nlp\r\n---> 18 from .evaluate import evaluate\r\n 19 \r\n 20 _CITATION = \"\"\"\\\r\n\r\nImportError: cannot import name 'evaluate'\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/280\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/279","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/279\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/279\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/279\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/279","id":640611692,"node_id":"MDU6SXNzdWU2NDA2MTE2OTI=","number":279,"title":"Dataset Preprocessing Cache with .map() function not working as 
expected","user":{"login":"sarahwie","id":8027676,"node_id":"MDQ6VXNlcjgwMjc2NzY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8027676?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sarahwie","html_url":"https:\/\/github.com\/sarahwie","followers_url":"https:\/\/api.github.com\/users\/sarahwie\/followers","following_url":"https:\/\/api.github.com\/users\/sarahwie\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sarahwie\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sarahwie\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sarahwie\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sarahwie\/orgs","repos_url":"https:\/\/api.github.com\/users\/sarahwie\/repos","events_url":"https:\/\/api.github.com\/users\/sarahwie\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sarahwie\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["When you're processing a dataset with `.map`, it checks whether it has already done this computation using a hash based on the function and the input (using some fancy serialization with `dill`). If you found that it doesn't work as expected in some cases, let us know !\r\n\r\nGiven that, you can still force to re-process using `.map(my_func, load_from_cache_file=False)` if you want to.\r\n\r\nI am curious about the problem you have with splits. It makes me think about #160 that was an issue of version 0.1.0. What version of `nlp` are you running ? Could you give me more details ?","Thanks, that's helpful! I was running 0.1.0, but since upgraded to 0.2.1. I can't reproduce the issue anymore as I've cleared the cache & everything now seems to be running fine since the upgrade. I've added some checks to my code, so if I do encounter it again I will reopen this issue.","Just checking in, the cache sometimes still does not work when I make changes in my processing function in version `1.2.1`. The changes made to my data processing function only propagate to the dataset when I use `load_from_cache_file=False` or clear the cache. Is this a system-specific issue?","Hi @sarahwie \r\nThe data are reloaded from the cache if the hash of the function you provide is the same as a computation you've done before. The hash is computed by recursively looking at the python objects of the function you provide.\r\n\r\nIf you think there's an issue, can you share the function you used or a google colab please ?","I can't reproduce it, so I'll close for now."],"created_at":1592414241000,"updated_at":1625607808000,"closed_at":1618789429000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I've been having issues with reproducibility when loading and processing datasets with the `.map` function. I was only able to resolve them by clearing all of the cache files on my system. \r\n\r\nIs there a way to disable using the cache when processing a dataset? As I make minor processing changes on the same dataset, I want to be able to be certain the data is being re-processed rather than loaded from a cached file. \r\n\r\nCould you also help me understand a bit more about how the caching functionality is used for pre-processing? E.g. how is it determined when to load from a cache vs. reprocess. 
\r\nI was particularly having an issue where the correct dataset splits were loaded, but as soon as I applied the `.map()` function to each split independently, they somehow all exited this process having been converted to the test set.\r\nThanks!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/279\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/278","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/278\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/278\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/278\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/278","id":640518917,"node_id":"MDU6SXNzdWU2NDA1MTg5MTc=","number":278,"title":"MemoryError when loading German Wikipedia","user":{"login":"gregburman","id":4698028,"node_id":"MDQ6VXNlcjQ2OTgwMjg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4698028?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/gregburman","html_url":"https:\/\/github.com\/gregburman","followers_url":"https:\/\/api.github.com\/users\/gregburman\/followers","following_url":"https:\/\/api.github.com\/users\/gregburman\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/gregburman\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/gregburman\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/gregburman\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/gregburman\/orgs","repos_url":"https:\/\/api.github.com\/users\/gregburman\/repos","events_url":"https:\/\/api.github.com\/users\/gregburman\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/gregburman\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi !\r\n\r\nAs you noticed, \"big\" datasets like Wikipedia require apache beam to be processed.\r\nHowever users usually don't have an apache beam runtime available (spark, dataflow, etc.) so our goal for this library is to also make available processed versions of these datasets, so that users can just download and use them right away.\r\n\r\nThis is the case for english and french wikipedia right now: we've processed them ourselves and now they are available from our google storage. However we've not processed the german one (yet).","Hi @lhoestq \r\n\r\nThank you for your quick reply. I thought this might be the case, that the processing was done for some languages and not for others. Is there any set timeline for when other languages (German, Italian) will be processed?\r\n\r\nGiven enough memory, is it possible to process the data ourselves by specifying the `beam_runner`?","Adding them is definitely in our short term objectives. I'll be working on this early next week :)\r\n\r\nAlthough if you have an apache beam runtime feel free to specify the beam runner. 
You can find more info [here](https:\/\/github.com\/huggingface\/nlp\/blob\/master\/docs\/beam_dataset.md) on how to make it work on Dataflow but you can adapt it for Spark or any other beam runtime (by changing the `runner`).\r\n\r\nHowever if you don't have a beam runtime and even if you have enough memory, I discourage you to use the `DirectRunner` on the german or italian wikipedia. According to Apache Beam documentation it was made for testing purposes and therefore it is memory-inefficient.","German is [almost] done @gregburman","I added the German and the Italian Wikipedia to our google cloud storage:\r\nFirst update the `nlp` package to 0.3.0:\r\n```bash\r\npip install nlp --upgrade\r\n```\r\nand then\r\n```python\r\nfrom nlp import load_dataset\r\nwiki_de = load_dataset(\"wikipedia\", \"20200501.de\")\r\nwiki_it = load_dataset(\"wikipedia\", \"20200501.it\")\r\n```\r\nThe datasets are downloaded and directly ready to use (no processing).","Hi @lhoestq \r\n\r\nWow, thanks so much, that's **really** incredible! I was considering looking at creating my own Beam Dataset, as per the doc you linked, but instead opted to process the data myself using `wikiextractor`. However, now that this is available, I'll definitely switch across and use it.\r\n\r\nThanks so much for the incredible work, this really helps out our team considerably!\r\n\r\nHave a great (and well-deserved ;) weekend ahead!\r\n\r\nP.S. I'm not sure if I should close the issue here - if so I'm happy to do so.","Thanks for your message, glad I could help :)\r\nClosing this one."],"created_at":1592406381000,"updated_at":1592571182000,"closed_at":1592571182000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi, first off let me say thank you for all the awesome work you're doing at Hugging Face across all your projects (NLP, Transformers, Tokenizers) - they're all amazing contributions to us working with NLP models :)\r\n\r\nI'm trying to download the German Wikipedia dataset as follows:\r\n\r\n```\r\nwiki = nlp.load_dataset(\"wikipedia\", \"20200501.de\", split=\"train\")\r\n```\r\n\r\nHowever, when I do so, I get the following error:\r\n\r\n```\r\nDownloading and preparing dataset wikipedia\/20200501.de (download: Unknown size, generated: Unknown size, total: Unknown size) to \/home\/ubuntu\/.cache\/huggingface\/datasets\/wikipedia\/20200501.de\/1.0.0...\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/home\/ubuntu\/anaconda3\/envs\/albert\/lib\/python3.7\/site-packages\/nlp\/load.py\", line 520, in load_dataset\r\n save_infos=save_infos,\r\n File \"\/home\/ubuntu\/anaconda3\/envs\/albert\/lib\/python3.7\/site-packages\/nlp\/builder.py\", line 433, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/home\/ubuntu\/anaconda3\/envs\/albert\/lib\/python3.7\/site-packages\/nlp\/builder.py\", line 824, in _download_and_prepare\r\n \"\\n\\t`{}`\".format(usage_example)\r\nnlp.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. 
More information about Apache Beam runners at https:\/\/beam.apache.org\/documentation\/runners\/capability-matrix\/\r\nIf you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory). \r\nExample of usage: \r\n\t`load_dataset('wikipedia', '20200501.de', beam_runner='DirectRunner')`\r\n```\r\n\r\nSo, following on from the example usage at the bottom, I tried specifying `beam_runner='DirectRunner`, however when I do this after about 20 min after the data has all downloaded, I get a `MemoryError` as warned.\r\n\r\nThis isn't an issue for the English or French Wikipedia datasets (I've tried both), as neither seem to require that `beam_runner` be specified. Can you please clarify why this is an issue for the German dataset?\r\n\r\nMy nlp version is 0.2.1.\r\n\r\nThank you!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/278\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/277","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/277\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/277\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/277\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/277","id":640163053,"node_id":"MDU6SXNzdWU2NDAxNjMwNTM=","number":277,"title":"Empty samples in glue\/qqp","user":{"login":"richarddwang","id":17963619,"node_id":"MDQ6VXNlcjE3OTYzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17963619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/richarddwang","html_url":"https:\/\/github.com\/richarddwang","followers_url":"https:\/\/api.github.com\/users\/richarddwang\/followers","following_url":"https:\/\/api.github.com\/users\/richarddwang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/richarddwang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/richarddwang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/richarddwang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/richarddwang\/orgs","repos_url":"https:\/\/api.github.com\/users\/richarddwang\/repos","events_url":"https:\/\/api.github.com\/users\/richarddwang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/richarddwang\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["We are only wrapping the original dataset.\r\n\r\nMaybe try to ask on the GLUE mailing list or reach out to the original authors?","Tanks for the suggestion, I'll try to ask GLUE benchmark.\r\nI'll first close the issue, post the following up here afterwards, and reopen the issue if needed. 
"],"created_at":1592373292000,"updated_at":1592698905000,"closed_at":1592698905000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"```\r\nqqp = nlp.load_dataset('glue', 'qqp')\r\nprint(qqp['train'][310121])\r\nprint(qqp['train'][362225])\r\n```\r\n```\r\n{'question1': 'How can I create an Android app?', 'question2': '', 'label': 0, 'idx': 310137}\r\n{'question1': 'How can I develop android app?', 'question2': '', 'label': 0, 'idx': 362246}\r\n```\r\nNotice that question 2 is empty string. \r\nBTW, I have checked and these two are the only naughty ones in all splits of qqp.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/277\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/276","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/276\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/276\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/276\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/276","id":639490858,"node_id":"MDExOlB1bGxSZXF1ZXN0NDM1MDY5Nzg5","number":276,"title":"Fix metric compute (original_instructions missing)","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Awesome! This is working now:\r\n\r\n```python\r\nimport nlp \r\nseqeval = nlp.load_metric(\"seqeval\") \r\ny_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']] \r\ny_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']] \r\n\r\nresults = seqeval.compute(y_true, y_pred)\r\n```\r\n\r\nI heavily need this fix for an upcoming `nlp` integration PR for Transformers (token classification example) \ud83d\ude05","Haha nice ! 
We'll ship this fix with the next release that will probably come out on thursday :)"],"created_at":1592297521000,"updated_at":1592466105000,"closed_at":1592466104000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/276","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/276","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/276.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/276.patch"},"body":"When loading arrow data we added in cc8d250 a way to specify the instructions that were used to store them with the loaded dataset.\r\nHowever metrics load data the same way but don't need instructions (we use one single file).\r\n\r\nIn this PR I just make `original_instructions` optional when reading files to load a `Dataset` object.\r\n\r\nThis should fix #269 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/276\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/275","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/275\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/275\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/275\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/275","id":639439052,"node_id":"MDU6SXNzdWU2Mzk0MzkwNTI=","number":275,"title":"NonMatchingChecksumError when loading pubmed dataset","user":{"login":"DavideStenner","id":48441753,"node_id":"MDQ6VXNlcjQ4NDQxNzUz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/48441753?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/DavideStenner","html_url":"https:\/\/github.com\/DavideStenner","followers_url":"https:\/\/api.github.com\/users\/DavideStenner\/followers","following_url":"https:\/\/api.github.com\/users\/DavideStenner\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/DavideStenner\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/DavideStenner\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/DavideStenner\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/DavideStenner\/orgs","repos_url":"https:\/\/api.github.com\/users\/DavideStenner\/repos","events_url":"https:\/\/api.github.com\/users\/DavideStenner\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/DavideStenner\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["For some reason the files are not available for unauthenticated users right now (like the download service of this package). 
Instead of downloading the right files, it downloads the html of the error.\r\nAccording to the error it should be back again in 24h.\r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/42851186\/84751599-096c6580-afbd-11ea-97f3-ee4aef791711.png)\r\n"],"created_at":1592292711000,"updated_at":1592552227000,"closed_at":1592552227000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I get this error when i run `nlp.load_dataset('scientific_papers', 'pubmed', split = 'train[:50%]')`.\r\nThe error is:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nNonMatchingChecksumError Traceback (most recent call last)\r\n<ipython-input-2-7742dea167d0> in <module>()\r\n----> 1 df = nlp.load_dataset('scientific_papers', 'pubmed', split = 'train[:50%]')\r\n 2 df = pd.DataFrame(df)\r\n 3 gc.collect()\r\n\r\n3 frames\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 518 download_mode=download_mode,\r\n 519 ignore_verifications=ignore_verifications,\r\n--> 520 save_infos=save_infos,\r\n 521 )\r\n 522 \r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 431 verify_infos = not save_infos and not ignore_verifications\r\n 432 self._download_and_prepare(\r\n--> 433 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 434 )\r\n 435 # Sync info\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 468 # Checksums verification\r\n 469 if verify_infos:\r\n--> 470 verify_checksums(self.info.download_checksums, dl_manager.get_recorded_sizes_checksums())\r\n 471 for split_generator in split_generators:\r\n 472 if str(split_generator.split_info.name).lower() == \"all\":\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/utils\/info_utils.py in verify_checksums(expected_checksums, recorded_checksums)\r\n 34 bad_urls = [url for url in expected_checksums if expected_checksums[url] != recorded_checksums[url]]\r\n 35 if len(bad_urls) > 0:\r\n---> 36 raise NonMatchingChecksumError(str(bad_urls))\r\n 37 logger.info(\"All the checksums matched successfully.\")\r\n 38 \r\n\r\nNonMatchingChecksumError: ['https:\/\/drive.google.com\/uc?id=1b3rmCSIoh6VhD4HKWjI4HOW-cSwcwbeC&export=download', 'https:\/\/drive.google.com\/uc?id=1lvsqvsFi3W-pE1SqNZI0s8NR9rC1tsja&export=download']\r\n```\r\nI'm currently working on google colab.\r\n\r\nThat is quite strange because yesterday it was fine.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/275\/timeline","performed_via_github_app":null,"is_pull_request":false} 
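The NonMatchingChecksumError in issue 275 above came from Google Drive temporarily serving an error page instead of the data files, so the downloaded content no longer matched the recorded checksums. The traceback shows that verification only runs when `verify_infos` is true (`verify_infos = not save_infos and not ignore_verifications`), so passing `ignore_verifications=True` to `load_dataset` is one way to skip the check once the files are reachable again. A minimal sketch, not part of the original thread, assuming the `nlp` 0.2.x API visible in that traceback:

```python
import nlp

# Sketch only: skip checksum/size verification for a dataset whose recorded
# checksums are temporarily out of sync with the host. This does not repair a
# bad download; it only disables the verify_checksums() step seen above.
pubmed = nlp.load_dataset(
    "scientific_papers",
    "pubmed",
    split="train[:50%]",
    ignore_verifications=True,
)
```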
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/274","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/274\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/274\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/274\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/274","id":639156625,"node_id":"MDU6SXNzdWU2MzkxNTY2MjU=","number":274,"title":"PG-19","user":{"login":"lucidrains","id":108653,"node_id":"MDQ6VXNlcjEwODY1Mw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/108653?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lucidrains","html_url":"https:\/\/github.com\/lucidrains","followers_url":"https:\/\/api.github.com\/users\/lucidrains\/followers","following_url":"https:\/\/api.github.com\/users\/lucidrains\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lucidrains\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lucidrains\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lucidrains\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lucidrains\/orgs","repos_url":"https:\/\/api.github.com\/users\/lucidrains\/repos","events_url":"https:\/\/api.github.com\/users\/lucidrains\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lucidrains\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Sounds good! Do you want to give it a try?","Ok, I'll see if I can figure it out tomorrow!","Got around to this today, and so far so good, I'm able to download and load pg19 locally. However, I think there may be an issue with the dummy data, and testing in general.\r\n\r\nThe problem lies in the fact that each book from pg19 actually resides as its own text file in a google cloud folder that denotes the split, where the book id is the name of the text file. https:\/\/console.cloud.google.com\/storage\/browser\/deepmind-gutenberg\/train\/ I don't believe there's anywhere else (even in the supplied metadata), where the mapping of id -> split can be found.\r\n\r\nTherefore I end up making a network call `tf.io.gfile.listdir` to get all the files within each of the split directories. https:\/\/github.com\/lucidrains\/nlp\/commit\/adbacbd85decc80db2347d0882e7dab4faa6fd03#diff-cece8f166a85dd927caf574ba303d39bR78\r\n\r\nDoes this network call need to be eventually stubbed out for testing?","Ohh nevermind, I think I can use `download_custom` here with `listdir` as the custom function. Ok, I'll keep trying to make the dummy data work!"],"created_at":1592254946000,"updated_at":1594049702000,"closed_at":1594049702000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi, and thanks for all your open-sourced work, as always!\r\n\r\nI was wondering if you would be open to adding PG-19 to your collection of datasets. 
https:\/\/github.com\/deepmind\/pg19 It is often used for benchmarking long-range language modeling.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/274\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/273","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/273\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/273\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/273\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/273","id":638968054,"node_id":"MDExOlB1bGxSZXF1ZXN0NDM0NjM0MzU4","number":273,"title":"update cos_e to add cos_e v1.0","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1592237002000,"updated_at":1592295954000,"closed_at":1592295952000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/273","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/273","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/273.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/273.patch"},"body":"This PR updates the cos_e dataset to add v1.0 as requested here #163 \r\n@nazneenrajani","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/273\/timeline","performed_via_github_app":null,"is_pull_request":true} 
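The PG-19 thread (issue 274 above) notes that the per-split book listing only exists as file names in a Google Cloud Storage folder, and that `dl_manager.download_custom` with a `listdir`-style callable can replace a direct network call in the dataset script. A rough sketch of that idea, with a hypothetical split URL, assuming `download_custom(url, custom_download)` passes the source URL and a destination path to the callable as described in the thread:

```python
import tensorflow as tf

# Hypothetical constant for illustration; the thread points at the
# deepmind-gutenberg bucket's train/ folder.
_SPLIT_URL = "gs://deepmind-gutenberg/train/"

def _listdir_to_file(url, dst_path):
    # Write the names of the book files under `url` to `dst_path`, one per line,
    # so the listing goes through the download manager (and its caching / dummy data).
    with open(dst_path, "w", encoding="utf-8") as f:
        f.write("\n".join(tf.io.gfile.listdir(url)))

# Inside DatasetBuilder._split_generators(self, dl_manager), one might then call:
#     listing_path = dl_manager.download_custom(_SPLIT_URL, _listdir_to_file)
# and read `listing_path` to build the per-split list of book files.
```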
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/272","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/272\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/272\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/272\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/272","id":638307313,"node_id":"MDExOlB1bGxSZXF1ZXN0NDM0MTExOTQ3","number":272,"title":"asd","user":{"login":"sn696","id":66900970,"node_id":"MDQ6VXNlcjY2OTAwOTcw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/66900970?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sn696","html_url":"https:\/\/github.com\/sn696","followers_url":"https:\/\/api.github.com\/users\/sn696\/followers","following_url":"https:\/\/api.github.com\/users\/sn696\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sn696\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sn696\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sn696\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sn696\/orgs","repos_url":"https:\/\/api.github.com\/users\/sn696\/repos","events_url":"https:\/\/api.github.com\/users\/sn696\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sn696\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1592122838000,"updated_at":1592126201000,"closed_at":1592126201000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/272","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/272","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/272.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/272.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/272\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/271","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/271\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/271\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/271\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/271","id":638135754,"node_id":"MDExOlB1bGxSZXF1ZXN0NDMzOTg3NDkw","number":271,"title":"Fix allocin\u00e9 dataset 
configuration","user":{"login":"TheophileBlard","id":37028092,"node_id":"MDQ6VXNlcjM3MDI4MDky","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/37028092?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/TheophileBlard","html_url":"https:\/\/github.com\/TheophileBlard","followers_url":"https:\/\/api.github.com\/users\/TheophileBlard\/followers","following_url":"https:\/\/api.github.com\/users\/TheophileBlard\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/TheophileBlard\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/TheophileBlard\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/TheophileBlard\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/TheophileBlard\/orgs","repos_url":"https:\/\/api.github.com\/users\/TheophileBlard\/repos","events_url":"https:\/\/api.github.com\/users\/TheophileBlard\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/TheophileBlard\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Actually when there is only one configuration, then you don't need to specify the configuration in `load_dataset`. You can run:\r\n```python\r\ndataset = load_dataset('allocine')\r\n```\r\nand it works.\r\n\r\nMaybe we should take that into account in the nlp viewer @srush ?","@lhoestq Just to understand the exact semantics. Are you suggesting that if there is exactly 1 configuration I should not show the configuration menu and just treat it as if there were 0 configurations? ","The configuration menu is fine imo.\r\nIt was more about the code snippet presented in the viewer.\r\nFor example for Allocin\u00e9 it currently shows this snippet to load the dataset:\r\n```python\r\n!pip install nlp\r\nfrom nlp import load_dataset\r\ndataset = load_dataset('allocine', 'allocine')\r\n```\r\nHowever for datasets with one or zero configurations, the second argument in `load_dataset` is optional. For Allocin\u00e9, that has one configuration, we can expect to show instead:\r\n```python\r\n!pip install nlp\r\nfrom nlp import load_dataset\r\ndataset = load_dataset('allocine')\r\n```","> Actually when there is only one configuration, then you don't need to specify the configuration in `load_dataset`. You can run:\r\n> \r\n> ```python\r\n> dataset = load_dataset('allocine')\r\n> ```\r\n> \r\n> and it works.\r\n> \r\n> Maybe we should take that into account in the nlp viewer @srush ?\r\n\r\nOh ok, I didn't expect it would work! \r\n\r\nAnyway, I think it's intrinsically better to simply remove the optional parameter. \r\nThe dummy data folder architecture seems also more logical this way.\r\n","Fixed in the viewer. Checked that allocine works.","Awesome thanks :)\r\n\r\nClosing this."],"created_at":1592043130000,"updated_at":1592466081000,"closed_at":1592466080000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/271","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/271","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/271.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/271.patch"},"body":"This is a patch for #244. 
According to the [live nlp viewer](url), the Allocin\u00e9 dataset must be loaded with :\r\n```python\r\ndataset = load_dataset('allocine', 'allocine')\r\n```\r\nThis is redundant, as there is only one \"dataset configuration\", and should only be:\r\n```python\r\ndataset = load_dataset('allocine')\r\n```\r\n\r\nThis is my mistake, because the code for [`allocine.py`](https:\/\/github.com\/huggingface\/nlp\/blob\/master\/datasets\/allocine\/allocine.py) was inspired by [`imdb.py`](https:\/\/github.com\/huggingface\/nlp\/blob\/master\/datasets\/imdb\/imdb.py), which also force the user to specify the \"dataset configuration\" (even if there is only one).\r\n\r\nI believe this PR should solve this issue, making the Allocin\u00e9 dataset more convenient to use.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/271\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/270","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/270\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/270\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/270\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/270","id":638121617,"node_id":"MDU6SXNzdWU2MzgxMjE2MTc=","number":270,"title":"c4 dataset is not viewable in nlpviewer demo","user":{"login":"rajarsheem","id":6441313,"node_id":"MDQ6VXNlcjY0NDEzMTM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6441313?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/rajarsheem","html_url":"https:\/\/github.com\/rajarsheem","followers_url":"https:\/\/api.github.com\/users\/rajarsheem\/followers","following_url":"https:\/\/api.github.com\/users\/rajarsheem\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/rajarsheem\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/rajarsheem\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/rajarsheem\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/rajarsheem\/orgs","repos_url":"https:\/\/api.github.com\/users\/rajarsheem\/repos","events_url":"https:\/\/api.github.com\/users\/rajarsheem\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/rajarsheem\/received_events","type":"User","site_admin":false},"labels":[{"id":2107841032,"node_id":"MDU6TGFiZWwyMTA3ODQxMDMy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/nlp-viewer","name":"nlp-viewer","color":"94203D","default":false,"description":""}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["C4 is too large to be shown in the viewer"],"created_at":1592036776000,"updated_at":1603812929000,"closed_at":1603812913000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I get the following error when I try to view the c4 dataset in [nlpviewer](https:\/\/huggingface.co\/nlp\/viewer\/)\r\n\r\n```python\r\nModuleNotFoundError: No module named 'langdetect'\r\nTraceback:\r\nFile \"\/home\/sasha\/.local\/lib\/python3.7\/site-packages\/streamlit\/ScriptRunner.py\", line 322, in _run_script\r\n exec(code, module.__dict__)\r\nFile \"\/home\/sasha\/nlp_viewer\/run.py\", line 54, in <module>\r\n configs = get_confs(option.id)\r\nFile 
\"\/home\/sasha\/.local\/lib\/python3.7\/site-packages\/streamlit\/caching.py\", line 591, in wrapped_func\r\n return get_or_create_cached_value()\r\nFile \"\/home\/sasha\/.local\/lib\/python3.7\/site-packages\/streamlit\/caching.py\", line 575, in get_or_create_cached_value\r\n return_value = func(*args, **kwargs)\r\nFile \"\/home\/sasha\/nlp_viewer\/run.py\", line 48, in get_confs\r\n builder_cls = nlp.load.import_main_class(module_path, dataset=True)\r\nFile \"\/home\/sasha\/.local\/lib\/python3.7\/site-packages\/nlp\/load.py\", line 57, in import_main_class\r\n module = importlib.import_module(module_path)\r\nFile \"\/usr\/lib\/python3.7\/importlib\/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\nFile \"<frozen importlib._bootstrap>\", line 1006, in _gcd_import\r\nFile \"<frozen importlib._bootstrap>\", line 983, in _find_and_load\r\nFile \"<frozen importlib._bootstrap>\", line 967, in _find_and_load_unlocked\r\nFile \"<frozen importlib._bootstrap>\", line 677, in _load_unlocked\r\nFile \"<frozen importlib._bootstrap_external>\", line 728, in exec_module\r\nFile \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\nFile \"\/home\/sasha\/.local\/lib\/python3.7\/site-packages\/nlp\/datasets\/c4\/88bb1b1435edad3fb772325710c4a43327cbf4a23b9030094556e6f01e14ec19\/c4.py\", line 29, in <module>\r\n from .c4_utils import (\r\nFile \"\/home\/sasha\/.local\/lib\/python3.7\/site-packages\/nlp\/datasets\/c4\/88bb1b1435edad3fb772325710c4a43327cbf4a23b9030094556e6f01e14ec19\/c4_utils.py\", line 29, in <module>\r\n import langdetect\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/270\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/269","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/269\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/269\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/269\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/269","id":638106774,"node_id":"MDU6SXNzdWU2MzgxMDY3NzQ=","number":269,"title":"Error in metric.compute: missing `original_instructions` argument","user":{"login":"zphang","id":1668462,"node_id":"MDQ6VXNlcjE2Njg0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1668462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/zphang","html_url":"https:\/\/github.com\/zphang","followers_url":"https:\/\/api.github.com\/users\/zphang\/followers","following_url":"https:\/\/api.github.com\/users\/zphang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/zphang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/zphang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/zphang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/zphang\/orgs","repos_url":"https:\/\/api.github.com\/users\/zphang\/repos","events_url":"https:\/\/api.github.com\/users\/zphang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/zphang\/received_events","type":"User","site_admin":false},"labels":[{"id":2067393914,"node_id":"MDU6TGFiZWwyMDY3MzkzOTE0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/metric%20bug","name":"metric 
bug","color":"25b21e","default":false,"description":"A bug in a metric script"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1592029614000,"updated_at":1592466104000,"closed_at":1592466104000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I'm running into an error using metrics for computation in the latest master as well as version 0.2.1. Here is a minimal example:\r\n\r\n```python\r\nimport nlp\r\nrte_metric = nlp.load_metric('glue', name=\"rte\")\r\nrte_metric.compute(\r\n [0, 0, 1, 1],\r\n [0, 1, 0, 1],\r\n)\r\n```\r\n\r\n```\r\n 181 # Read the predictions and references\r\n 182 reader = ArrowReader(path=self.data_dir, info=None)\r\n--> 183 self.data = reader.read_files(node_files)\r\n 184 \r\n 185 # Release all of our locks\r\n\r\nTypeError: read_files() missing 1 required positional argument: 'original_instructions'\r\n```\r\n\r\nI believe this might have been introduced with cc8d2508b75f7ba0e5438d0686ee02dcec43c7f4, which added the `original_instructions` argument. 
Elsewhere, an empty-string default is provided--perhaps that could be done here too?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/269\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/268","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/268\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/268\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/268\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/268","id":637848056,"node_id":"MDExOlB1bGxSZXF1ZXN0NDMzNzU5NzQ1","number":268,"title":"add Rotten Tomatoes Movie Review sentences sentiment dataset","user":{"login":"jxmorris12","id":13238952,"node_id":"MDQ6VXNlcjEzMjM4OTUy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13238952?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jxmorris12","html_url":"https:\/\/github.com\/jxmorris12","followers_url":"https:\/\/api.github.com\/users\/jxmorris12\/followers","following_url":"https:\/\/api.github.com\/users\/jxmorris12\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jxmorris12\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jxmorris12\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jxmorris12\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jxmorris12\/orgs","repos_url":"https:\/\/api.github.com\/users\/jxmorris12\/repos","events_url":"https:\/\/api.github.com\/users\/jxmorris12\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jxmorris12\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@jplu @thomwolf @patrickvonplaten @lhoestq -- How do I request reviewers? 
Thanks."],"created_at":1591977239000,"updated_at":1592466384000,"closed_at":1592466383000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/268","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/268","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/268.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/268.patch"},"body":"Sentence-level movie reviews v1.0 from here: http:\/\/www.cs.cornell.edu\/people\/pabo\/movie-review-data\/","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/268\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/267","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/267\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/267\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/267\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/267","id":637415545,"node_id":"MDU6SXNzdWU2Mzc0MTU1NDU=","number":267,"title":"How can I load\/find WMT en-romanian?","user":{"login":"sshleifer","id":6045025,"node_id":"MDQ6VXNlcjYwNDUwMjU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6045025?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sshleifer","html_url":"https:\/\/github.com\/sshleifer","followers_url":"https:\/\/api.github.com\/users\/sshleifer\/followers","following_url":"https:\/\/api.github.com\/users\/sshleifer\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sshleifer\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sshleifer\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sshleifer\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sshleifer\/orgs","repos_url":"https:\/\/api.github.com\/users\/sshleifer\/repos","events_url":"https:\/\/api.github.com\/users\/sshleifer\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sshleifer\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"assignees":[{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"h
ttps:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["I will take a look :-) "],"created_at":1591924177000,"updated_at":1592555059000,"closed_at":1592555059000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"I believe it is from `wmt16`\r\n\r\nWhen I run\r\n\r\n```python\r\nwmt = nlp.load_dataset('wmt16')\r\n```\r\nI get:\r\n```python\r\nAssertionError: The dataset wmt16 with config cs-en requires manual data. \r\n Please follow the manual download instructions: Some of the wmt configs here, require a manual download.\r\n Please look into wmt.py to see the exact path (and file name) that has to\r\n be downloaded.\r\n . \r\n Manual data can be loaded with `nlp.load(wmt16, data_dir='<path\/to\/manual\/data>')\r\n```\r\nThere is no wmt.py, as the error message suggests, and wmt16.py doesn't have manual download instructions.\r\n\r\nAny idea how to do this?\r\n\r\nThanks in advance!\r\n\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/267\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/266","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/266\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/266\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/266\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/266","id":637156392,"node_id":"MDExOlB1bGxSZXF1ZXN0NDMzMTk1NDgw","number":266,"title":"Add sort, shuffle, test_train_split and select 
methods","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Nice !\r\n\r\nAlso it looks like we can have a train_test_split method for free:\r\n```python\r\ntrain_indices, test_indices = train_test_split(range(len(dataset)))\r\ntrain = dataset.sort(indices=train_indices)\r\ntest = dataset.sort(indices=test_indices)\r\n```\r\n\r\nand a shuffling method for free:\r\n```python\r\nshuffled_indices = shuffle(range(len(dataset)))\r\nshuffled_dataset = dataset.sort(indices=shuffled_indices)\r\n```\r\n\r\nMaybe we can have a specific API for train_test_split and shuffle. They are two features asked quite often (see #147, #166)","Ok, I think this one is ready to merge.\r\n\r\n@patrickvonplaten @jplu @mariamabarham @joeddav @n1t0 @julien-c you may want to give it a look, it adds a bunch of methods to reorder\/split\/select rows in a dataset:\r\n- `dataset.select(indices)`: Create a new dataset with rows selected following the list\/array of indices (which can have a different size than the dataset and contain duplicated indices, the only constrain is that all the integers in the list must be smaller than the dataset size, otherwise we're indexing outside the dataset...)\r\n- `dataset.sort(column_name)`: sort a dataset according to a column (has to be a column with a numpy compatible type)\r\n- `dataset.shuffle(seed)`: shuffle a dataset rows\r\n- `dataset.train_test_split(test_size, train_size)`: Return a dictionary with two random train and test subsets (`train` and `test` ``Dataset`` splits)\r\n\r\nAll these methods are **not** in-place which means they return new ``Dataset``, which is the default behavior in the library.","> Might be a solution to put 0.25 and 0.75 as default values for respectively `test_size` and `train_size`. 
WDYT?\r\n\r\nAccording to sklearn documentation, it is indeed set to 0.25 and 0.75 if both `test_size` and `train_size` are None.\r\nLet me add it.","I think we're good to go now :) @joeddav @thomwolf @jplu "],"created_at":1591892540000,"updated_at":1592497405000,"closed_at":1592497404000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/266","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/266","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/266.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/266.patch"},"body":"Add a bunch of methods to reorder\/split\/select rows in a dataset:\r\n- `dataset.select(indices)`: Create a new dataset with rows selected following the list\/array of indices (which can have a different size than the dataset and contain duplicated indices, the only constrain is that all the integers in the list must be smaller than the dataset size, otherwise we're indexing outside the dataset...)\r\n- `dataset.sort(column_name)`: sort a dataset according to a column (has to be a column with a numpy compatible type)\r\n- `dataset.shuffle(seed)`: shuffle a dataset rows\r\n- `dataset.train_test_split(test_size, train_size)`: Return a dictionary with two random train and test subsets (`train` and `test` ``Dataset`` splits)\r\n\r\nAll these methods are **not** in-place which means they return new ``Dataset``.\r\nThis is the default behavior in the library.\r\n\r\nFix #147 #166 #259 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/266\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/265","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/265\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/265\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/265\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/265","id":637139220,"node_id":"MDExOlB1bGxSZXF1ZXN0NDMzMTgxNDMz","number":265,"title":"Add pyarrow warning 
colab","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1591891071000,"updated_at":1596392076000,"closed_at":1591949656000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/265","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/265","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/265.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/265.patch"},"body":"When a user installs `nlp` on google colab, then google colab doesn't update pyarrow, and the runtime needs to be restarted to use the updated version of pyarrow.\r\n\r\nThis is an issue because `nlp` requires the updated version to work correctly.\r\n\r\nIn this PR I added en error that is shown to the user in google colab if the user tries to `import nlp` without having restarted the runtime. 
The error tells the user to restart the runtime.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/265\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/264","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/264\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/264\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/264\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/264","id":637106170,"node_id":"MDExOlB1bGxSZXF1ZXN0NDMzMTU0ODQ4","number":264,"title":"Fix small issues creating dataset","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1591888816000,"updated_at":1591949757000,"closed_at":1591949756000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/264","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/264","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/264.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/264.patch"},"body":"Fix many small issues mentioned in #249:\r\n- don't force to install apache beam for commands\r\n- fix None cache dir when using `dl_manager.download_custom`\r\n- added new extras in `setup.py` named `dev` that contains tests and quality dependencies\r\n- mock dataset sizes when running tests with dummy data\r\n- add a note about the naming convention of datasets (camel case - snake case) in CONTRIBUTING.md\r\n\r\nThis should help users create their datasets.\r\nNext step is the `add_dataset.md` docs :)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/264\/timeline","performed_via_github_app":null,"is_pull_request":true} 
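The PR #266 record above describes four new row-manipulation methods (`select`, `sort`, `shuffle`, `train_test_split`) only in prose. The following is a minimal sketch of how they might be combined, based solely on the descriptions quoted in that record and not on the PR diff itself; the `nlp` package and the public `imdb` dataset are assumed to be available.

```python
# Minimal sketch (not part of PR #266): exercising the row-manipulation API
# described above, assuming the `nlp` package and the public `imdb` dataset.
import nlp

dataset = nlp.load_dataset('imdb', split='test')

# None of these methods is in-place; each call returns a new Dataset.
subset = dataset.select([0, 1, 1, 5])      # indices may repeat but must be < len(dataset)
by_label = dataset.sort('label')           # sort by a column with a numpy-compatible type
shuffled = dataset.shuffle(seed=42)        # deterministic reordering of the rows
splits = dataset.train_test_split(test_size=0.25, train_size=0.75)
train_ds, test_ds = splits['train'], splits['test']
```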
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/263","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/263\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/263\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/263\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/263","id":637028015,"node_id":"MDU6SXNzdWU2MzcwMjgwMTU=","number":263,"title":"[Feature request] Support for external modality for language datasets","user":{"login":"aleSuglia","id":1479733,"node_id":"MDQ6VXNlcjE0Nzk3MzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1479733?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/aleSuglia","html_url":"https:\/\/github.com\/aleSuglia","followers_url":"https:\/\/api.github.com\/users\/aleSuglia\/followers","following_url":"https:\/\/api.github.com\/users\/aleSuglia\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/aleSuglia\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/aleSuglia\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/aleSuglia\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/aleSuglia\/orgs","repos_url":"https:\/\/api.github.com\/users\/aleSuglia\/repos","events_url":"https:\/\/api.github.com\/users\/aleSuglia\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/aleSuglia\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"},{"id":2067400324,"node_id":"MDU6TGFiZWwyMDY3NDAwMzI0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/generic%20discussion","name":"generic discussion","color":"c5def5","default":false,"description":"Generic discussion on the library"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks a lot, @aleSuglia for the very detailed and introductive feature request.\r\nIt seems like we could build something pretty useful here indeed.\r\n\r\nOne of the questions here is that Arrow doesn't have built-in support for generic \"tensors\" in records but there might be ways to do that in a clean way. We'll probably try to tackle this during the summer.","I was looking into Facebook MMF and apparently they decided to use LMDB to store additional features associated with every example: https:\/\/github.com\/facebookresearch\/mmf\/blob\/master\/mmf\/datasets\/databases\/features_database.py\r\n\r\n","I saw the Mozilla common_voice dataset in model hub, which has mp3 audio recordings as part it. It's use predominantly maybe in ASR and TTS, but dataset is a Language + Voice Dataset similar to @aleSuglia's point about Language + Vision. \r\n\r\nhttps:\/\/huggingface.co\/datasets\/common_voice"],"created_at":1591882938000,"updated_at":1617247964000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"# Background\r\n\r\nIn recent years many researchers have advocated that learning meanings from text-based only datasets is just like asking a human to \"learn to speak by listening to the radio\" [[E. Bender and A. 
Koller,2020](https:\/\/openreview.net\/forum?id=GKTvAcb12b), [Y. Bisk et. al, 2020](https:\/\/arxiv.org\/abs\/2004.10151)]. Therefore, the importance of multi-modal datasets for the NLP community is of paramount importance for next-generation models. For this reason, I raised a [concern](https:\/\/github.com\/huggingface\/nlp\/pull\/236#issuecomment-639832029) related to the best way to integrate external features in NLP datasets (e.g., visual features associated with an image, audio features associated with a recording, etc.). This would be of great importance for a more systematic way of representing data for ML models that are learning from multi-modal data. \r\n\r\n# Language + Vision\r\n\r\n## Use case\r\nTypically, people working on Language+Vision tasks, have a reference dataset (either in JSON or JSONL format) and for each example, they have an identifier that specifies the reference image. For a practical example, you can refer to the [GQA](https:\/\/cs.stanford.edu\/people\/dorarad\/gqa\/download.html#seconddown) dataset.\r\n\r\nCurrently, images are represented by either pooling-based features (average pooling of ResNet or VGGNet features, see [DeVries et.al, 2017](https:\/\/arxiv.org\/abs\/1611.08481), [Shekhar et.al, 2019](https:\/\/www.aclweb.org\/anthology\/N19-1265.pdf)) where you have a single vector for every image. Another option is to use a set of feature maps for every image extracted from a specific layer of a CNN (see [Xu et.al, 2015](https:\/\/arxiv.org\/abs\/1502.03044)). A more recent option, especially with large-scale multi-modal transformers [Li et. al, 2019](https:\/\/arxiv.org\/abs\/1908.03557), is to use FastRCNN features. \r\n\r\nFor all these types of features, people use one of the following formats:\r\n1. [HD5F](https:\/\/pypi.org\/project\/h5py\/)\r\n2. [NumPy](https:\/\/numpy.org\/doc\/stable\/reference\/generated\/numpy.savez.html)\r\n3. [LMDB](https:\/\/lmdb.readthedocs.io\/en\/release\/)\r\n\r\n## Implementation considerations\r\n\r\nI was thinking about possible ways of implementing this feature. As mentioned above, depending on the model, different visual features can be used. This step usually relies on another model (say ResNet-101) that is used to generate the visual features for each image used in the dataset. Typically, this step is done in a separate script that completes the feature generation procedure. The usual processing steps for these datasets are the following:\r\n\r\n1. Download dataset\r\n2. Download images associated with the dataset\r\n3. Write a script that generates the visual features for every image and store them in a specific file\r\n4. Create a DataLoader that maps the visual features to the corresponding language example\r\n\r\nIn my personal projects, I've decided to ignore HD5F because it doesn't have out-of-the-box support for multi-processing (see this PyTorch [issue](https:\/\/github.com\/pytorch\/pytorch\/issues\/11929)). I've been successfully using a NumPy compressed file for each image so that I can store any sort of information in it.\r\n\r\nFor ease of use of all these Language+Vision datasets, it would be really handy to have a way to associate the visual features with the text and store them in an efficient way. That's why I immediately thought about the HuggingFace NLP backend based on Apache Arrow. The assumption here is that the external modality will be mapped to a N-dimensional tensor so easily represented by a NumPy array. 
\r\n\r\nLooking forward to hearing your thoughts about it!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/263\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/262","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/262\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/262\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/262\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/262","id":636702849,"node_id":"MDExOlB1bGxSZXF1ZXN0NDMyODI3Mzcz","number":262,"title":"Add new dataset ANLI Round 1","user":{"login":"easonnie","id":11016329,"node_id":"MDQ6VXNlcjExMDE2MzI5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11016329?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/easonnie","html_url":"https:\/\/github.com\/easonnie","followers_url":"https:\/\/api.github.com\/users\/easonnie\/followers","following_url":"https:\/\/api.github.com\/users\/easonnie\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/easonnie\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/easonnie\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/easonnie\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/easonnie\/orgs","repos_url":"https:\/\/api.github.com\/users\/easonnie\/repos","events_url":"https:\/\/api.github.com\/users\/easonnie\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/easonnie\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hello ! Thanks for adding this one :)\r\n\r\nThis looks great, you just have to do the last steps to make the CI pass.\r\nI can see that two things are missing:\r\n1. the dummy data that is used to test that the script is working as expected\r\n2. the json file with all the infos about the dataset\r\n\r\nYou can see the steps to help you create the dummy data and generate the dataset_infos.json file right [here](https:\/\/github.com\/huggingface\/nlp\/blob\/master\/CONTRIBUTING.md#how-to-add-a-dataset)"],"created_at":1591848897000,"updated_at":1591999383000,"closed_at":1591999383000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/262","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/262","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/262.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/262.patch"},"body":"Adding new dataset [ANLI](https:\/\/github.com\/facebookresearch\/anli\/).\r\n\r\nI'm not familiar with how to add new dataset. Let me know if there is any issue. I only include round 1 data here. There will be round 2, round 3 and more in the future with potentially different format. 
I think it will be better to separate them.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/262\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/261","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/261\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/261\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/261\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/261","id":636372380,"node_id":"MDU6SXNzdWU2MzYzNzIzODA=","number":261,"title":"Downloading dataset error with pyarrow.lib.RecordBatch","user":{"login":"cuent","id":5248968,"node_id":"MDQ6VXNlcjUyNDg5Njg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/5248968?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cuent","html_url":"https:\/\/github.com\/cuent","followers_url":"https:\/\/api.github.com\/users\/cuent\/followers","following_url":"https:\/\/api.github.com\/users\/cuent\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cuent\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cuent\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cuent\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cuent\/orgs","repos_url":"https:\/\/api.github.com\/users\/cuent\/repos","events_url":"https:\/\/api.github.com\/users\/cuent\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cuent\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["When you install `nlp` for the first time on a Colab runtime, it updates the `pyarrow` library that was already on colab. This update shows this message on colab:\r\n```\r\nWARNING: The following packages were previously imported in this runtime:\r\n [pyarrow]\r\nYou must restart the runtime in order to use newly installed versions.\r\n```\r\nYou just have to restart the runtime and it should be fine.\r\nIf you don't restart, then it breaks like in your message.","Yeah, that worked! 
Thanks :) "],"created_at":1591805059000,"updated_at":1591886112000,"closed_at":1591886112000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I am trying to download `sentiment140` and I have the following error\r\n\r\n```\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 518 download_mode=download_mode,\r\n 519 ignore_verifications=ignore_verifications,\r\n--> 520 save_infos=save_infos,\r\n 521 )\r\n 522 \r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 418 verify_infos = not save_infos and not ignore_verifications\r\n 419 self._download_and_prepare(\r\n--> 420 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 421 )\r\n 422 # Sync info\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 472 try:\r\n 473 # Prepare split will record examples associated to the split\r\n--> 474 self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 475 except OSError:\r\n 476 raise OSError(\"Cannot find data file. \" + (self.MANUAL_DOWNLOAD_INSTRUCTIONS or \"\"))\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/builder.py in _prepare_split(self, split_generator)\r\n 652 for key, record in utils.tqdm(generator, unit=\" examples\", total=split_info.num_examples, leave=False):\r\n 653 example = self.info.features.encode_example(record)\r\n--> 654 writer.write(example)\r\n 655 num_examples, num_bytes = writer.finalize()\r\n 656 \r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/arrow_writer.py in write(self, example, writer_batch_size)\r\n 143 self._build_writer(pa_table=pa.Table.from_pydict(example))\r\n 144 if writer_batch_size is not None and len(self.current_rows) >= writer_batch_size:\r\n--> 145 self.write_on_file()\r\n 146 \r\n 147 def write_batch(\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/arrow_writer.py in write_on_file(self)\r\n 127 else:\r\n 128 # All good\r\n--> 129 self._write_array_on_file(pa_array)\r\n 130 self.current_rows = []\r\n 131 \r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/arrow_writer.py in _write_array_on_file(self, pa_array)\r\n 96 def _write_array_on_file(self, pa_array):\r\n 97 \"\"\"Write a PyArrow Array\"\"\"\r\n---> 98 pa_batch = pa.RecordBatch.from_struct_array(pa_array)\r\n 99 self._num_bytes += pa_array.nbytes\r\n 100 self.pa_writer.write_batch(pa_batch)\r\n\r\nAttributeError: type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'\r\n```\r\n\r\nI installed the last version and ran the following command:\r\n\r\n```python\r\nimport nlp\r\nsentiment140 = nlp.load_dataset('sentiment140', cache_dir='\/content')\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/261\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/260","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/260\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/260\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/260\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/260","id":636261118,"node_id":"MDExOlB1bGxSZXF1ZXN0NDMyNDY3NDM5","number":260,"title":"Consistency fixes","user":{"login":"julien-c","id":326577,"node_id":"MDQ6VXNlcjMyNjU3Nw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/326577?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/julien-c","html_url":"https:\/\/github.com\/julien-c","followers_url":"https:\/\/api.github.com\/users\/julien-c\/followers","following_url":"https:\/\/api.github.com\/users\/julien-c\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/julien-c\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/julien-c\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/julien-c\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/julien-c\/orgs","repos_url":"https:\/\/api.github.com\/users\/julien-c\/repos","events_url":"https:\/\/api.github.com\/users\/julien-c\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/julien-c\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1591796682000,"updated_at":1591871677000,"closed_at":1591871676000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/260","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/260","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/260.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/260.patch"},"body":"A few bugs I've found while hacking","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/260\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/259","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/259\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/259\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/259\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/259","id":636239529,"node_id":"MDU6SXNzdWU2MzYyMzk1Mjk=","number":259,"title":"documentation missing how to split a 
dataset","user":{"login":"fotisj","id":2873355,"node_id":"MDQ6VXNlcjI4NzMzNTU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/2873355?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/fotisj","html_url":"https:\/\/github.com\/fotisj","followers_url":"https:\/\/api.github.com\/users\/fotisj\/followers","following_url":"https:\/\/api.github.com\/users\/fotisj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/fotisj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/fotisj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/fotisj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/fotisj\/orgs","repos_url":"https:\/\/api.github.com\/users\/fotisj\/repos","events_url":"https:\/\/api.github.com\/users\/fotisj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/fotisj\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["this seems to work for my specific problem:\r\n\r\n`self.train_ds, self.test_ds, self.val_ds = map(_prepare_ds, ('train', 'test[:25%]+test[50%:75%]', 'test[75%:]'))`","Currently you can indeed split a dataset using `ds_test = nlp.load_dataset('imdb, split='test[:5000]')` (works also with percentages).\r\n\r\nHowever right now we don't have a way to shuffle a dataset but we are thinking about it in the discussion in #166. Feel free to share your thoughts about it.\r\n\r\nOne trick that you can do until we have a better solution is to shuffle and split the indices of your dataset:\r\n```python\r\nimport nlp\r\nfrom sklearn.model_selection import train_test_split\r\n\r\nimdb = nlp.load_dataset('imbd', split='test')\r\ntest_indices, val_indices = train_test_split(range(len(imdb)))\r\n```\r\n\r\nand then to iterate each split:\r\n```python\r\nfor i in test_indices:\r\n example = imdb[i]\r\n ...\r\n```\r\n","I added a small guide [here](https:\/\/github.com\/huggingface\/nlp\/tree\/master\/docs\/splits.md) that explains how to split a dataset. It is very similar to the tensorflow datasets guide, as we kept the same logic.","Thanks a lot, the new explanation is very helpful!\r\n\r\nAbout using train_test_split from sklearn: I stumbled across the [same error message as this user ](https:\/\/github.com\/huggingface\/nlp\/issues\/147 )and thought it can't be used at the moment in this context. Will check it out again.\r\n\r\nOne of the problems is how to shuffle very large datasets, which don't fit into the memory. Well, one strategy could be shuffling data in sections. But in a case where the data is sorted by the labels you have to swap larger sections first. \r\n","We added a way to shuffle datasets (shuffle the indices and then reorder to make a new dataset).\r\nYou can do `shuffled_dset = dataset.shuffle(seed=my_seed)`. It shuffles the whole dataset.\r\nThere is also `dataset.train_test_split()` which if very handy (with the same signature as sklearn).\r\n\r\nClosing this issue as we added the docs for splits and tools to split datasets. Thanks again for your feedback !"],"created_at":1591795093000,"updated_at":1592518824000,"closed_at":1592518824000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I am trying to understand how to split a dataset ( as arrow_dataset). 
\r\nI know I can do something like this to access a split which is already in the original dataset: \r\n\r\n`ds_test = nlp.load_dataset('imdb', split='test') `\r\n\r\nBut how can I split ds_test into a test and a validation set (without reading the data into memory and keeping the arrow_dataset as container)?\r\nI guess it has something to do with the module split :-) but there is no real documentation in the code but only a reference to a longer description: \r\n\r\n> See the [guide on splits](https:\/\/github.com\/huggingface\/nlp\/tree\/master\/docs\/splits.md) for more information.\r\n\r\nBut the guide seems to be missing.\r\n\r\nTo clarify: I know that this has been modelled after the dataset of tensorflow and that some of the documentation there can be used [like this one](https:\/\/www.tensorflow.org\/datasets\/splits). But to come back to the example above: I cannot simply split the testset doing this: \r\n`ds_test = nlp.load_dataset('imdb', split='test[:5000]') `\r\n`ds_val = nlp.load_dataset('imdb', split='test[5000:]')`\r\n\r\nbecause the imdb test data is sorted by class (probably not a good idea anyway)\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/259\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/258","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/258\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/258\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/258\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/258","id":635859525,"node_id":"MDU6SXNzdWU2MzU4NTk1MjU=","number":258,"title":"Why is dataset after tokenization far larger than the original one ?","user":{"login":"richarddwang","id":17963619,"node_id":"MDQ6VXNlcjE3OTYzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17963619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/richarddwang","html_url":"https:\/\/github.com\/richarddwang","followers_url":"https:\/\/api.github.com\/users\/richarddwang\/followers","following_url":"https:\/\/api.github.com\/users\/richarddwang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/richarddwang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/richarddwang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/richarddwang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/richarddwang\/orgs","repos_url":"https:\/\/api.github.com\/users\/richarddwang\/repos","events_url":"https:\/\/api.github.com\/users\/richarddwang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/richarddwang\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi ! This is because `.map` added the new column `input_ids` to the dataset, and so all the other columns were kept. Therefore the dataset size increased a lot.\r\n If you want to only keep the `input_ids` column, you can drop the other ones by specifying `remove_columns=[\"title\", \"text\"]` in the arguments of `.map`","Hi ! 
Thanks for your reply.\r\n\r\nBut since size of `input_ids` < size of `text`, I am wondering why\r\nsize of `input_ids` + `text` > 2x the size of `text` \ud83e\udd14","Hard to tell... This is probably related to the way apache arrow compresses lists of integers, that may be different from the compression of strings.","Thanks for your point. \ud83d\ude00, It might be answer.\r\nSince this is hard to know, I'll close this issue.\r\nBut if somebody knows more details, please comment below ~ \ud83d\ude01"],"created_at":1591752427000,"updated_at":1591793194000,"closed_at":1591793194000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I tokenize wiki dataset by `map` and cache the results.\r\n```\r\ndef tokenize_tfm(example):\r\n example['input_ids'] = hf_fast_tokenizer.convert_tokens_to_ids(hf_fast_tokenizer.tokenize(example['text']))\r\n return example\r\nwiki = nlp.load_dataset('wikipedia', '20200501.en', cache_dir=cache_dir)['train']\r\nwiki.map(tokenize_tfm, cache_file_name=cache_dir\/\"wikipedia\/20200501.en\/1.0.0\/tokenized_wiki.arrow\")\r\n```\r\nand when I see their size\r\n```\r\nls -l --block-size=M\r\n17460M wikipedia-train.arrow\r\n47511M tokenized_wiki.arrow\r\n```\r\nThe tokenized one is over 2x size of original one.\r\nIs there something I did wrong ?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/258\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/257","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/257\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/257\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/257\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/257","id":635620979,"node_id":"MDU6SXNzdWU2MzU2MjA5Nzk=","number":257,"title":"Tokenizer pickling issue fix not landed in `nlp` yet?","user":{"login":"sarahwie","id":8027676,"node_id":"MDQ6VXNlcjgwMjc2NzY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8027676?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sarahwie","html_url":"https:\/\/github.com\/sarahwie","followers_url":"https:\/\/api.github.com\/users\/sarahwie\/followers","following_url":"https:\/\/api.github.com\/users\/sarahwie\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sarahwie\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sarahwie\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sarahwie\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sarahwie\/orgs","repos_url":"https:\/\/api.github.com\/users\/sarahwie\/repos","events_url":"https:\/\/api.github.com\/users\/sarahwie\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sarahwie\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Yes, the new release of tokenizers solves this and should be out soon.\r\nIn the meantime, you can install it with `pip install tokenizers==0.8.0-dev2`","If others run into this issue, a quick fix is to use python 3.6 instead of 3.7+. 
Serialization differences between the 3rd party `dataclasses` package for 3.6 and the built in `dataclasses` in 3.7+ cause the issue.\r\n\r\nProbably a dumb fix, but it works for me."],"created_at":1591722754000,"updated_at":1591825532000,"closed_at":1591723613000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Unless I recreate an arrow_dataset from my loaded nlp dataset myself (which I think does not use the cache by default), I get the following error when applying the map function:\r\n\r\n```\r\ndataset = nlp.load_dataset('cos_e')\r\ntokenizer = GPT2TokenizerFast.from_pretrained('gpt2', cache_dir=cache_dir)\r\n\r\nfor split in dataset.keys():\r\n dataset[split].map(lambda x: some_function(x, tokenizer))\r\n```\r\n```\r\n06\/09\/2020 10:09:19 - INFO - nlp.builder - Constructing Dataset for split train[:10], from \/home\/sarahw\/.cache\/huggingface\/datasets\/cos_e\/default\/0.0.1\r\nTraceback (most recent call last):\r\n File \"generation\/input_to_label_and_rationale.py\", line 390, in <module>\r\n main()\r\n File \"generation\/input_to_label_and_rationale.py\", line 263, in main\r\n dataset[split] = dataset[split].map(lambda x: input_to_explanation_plus_label(x, tokenizer, max_length, datasource=data_args.task_name, wt5=(model_class=='t5'), expl_only=model_args.rationale_only), batched=False)\r\n File \"\/home\/sarahw\/miniconda3\/envs\/project_huggingface\/lib\/python3.8\/site-packages\/nlp\/arrow_dataset.py\", line 522, in map\r\n cache_file_name = self._get_cache_file_path(function, cache_kwargs)\r\n File \"\/home\/sarahw\/miniconda3\/envs\/project_huggingface\/lib\/python3.8\/site-packages\/nlp\/arrow_dataset.py\", line 381, in _get_cache_file_path\r\n function_bytes = dumps(function)\r\n File \"\/home\/sarahw\/miniconda3\/envs\/project_huggingface\/lib\/python3.8\/site-packages\/nlp\/utils\/py_utils.py\", line 257, in dumps\r\n dump(obj, file)\r\n File \"\/home\/sarahw\/miniconda3\/envs\/project_huggingface\/lib\/python3.8\/site-packages\/nlp\/utils\/py_utils.py\", line 250, in dump\r\n Pickler(file).dump(obj)\r\n File \"\/home\/sarahw\/miniconda3\/envs\/project_huggingface\/lib\/python3.8\/site-packages\/dill\/_dill.py\", line 445, in dump\r\n StockPickler.dump(self, obj)\r\n File \"\/home\/sarahw\/miniconda3\/envs\/project_huggingface\/lib\/python3.8\/pickle.py\", line 485, in dump\r\n self.save(obj)\r\n File \"\/home\/sarahw\/miniconda3\/envs\/project_huggingface\/lib\/python3.8\/pickle.py\", line 558, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/home\/sarahw\/miniconda3\/envs\/project_huggingface\/lib\/python3.8\/site-packages\/dill\/_dill.py\", line 1410, in save_function\r\n pickler.save_reduce(_create_function, (obj.__code__,\r\n File \"\/home\/sarahw\/miniconda3\/envs\/project_huggingface\/lib\/python3.8\/pickle.py\", line 690, in save_reduce\r\n save(args)\r\n File \"\/home\/sarahw\/miniconda3\/envs\/project_huggingface\/lib\/python3.8\/pickle.py\", line 558, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/home\/sarahw\/miniconda3\/envs\/project_huggingface\/lib\/python3.8\/pickle.py\", line 899, in save_tuple\r\n save(element)\r\n File \"\/home\/sarahw\/miniconda3\/envs\/project_huggingface\/lib\/python3.8\/pickle.py\", line 558, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/home\/sarahw\/miniconda3\/envs\/project_huggingface\/lib\/python3.8\/pickle.py\", line 899, in save_tuple\r\n save(element)\r\n File 
\"\/home\/sarahw\/miniconda3\/envs\/project_huggingface\/lib\/python3.8\/pickle.py\", line 558, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/home\/sarahw\/miniconda3\/envs\/project_huggingface\/lib\/python3.8\/site-packages\/dill\/_dill.py\", line 1147, in save_cell\r\n pickler.save_reduce(_create_cell, (f,), obj=obj)\r\n File \"\/home\/sarahw\/miniconda3\/envs\/project_huggingface\/lib\/python3.8\/pickle.py\", line 690, in save_reduce\r\n save(args)\r\n File \"\/home\/sarahw\/miniconda3\/envs\/project_huggingface\/lib\/python3.8\/pickle.py\", line 558, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/home\/sarahw\/miniconda3\/envs\/project_huggingface\/lib\/python3.8\/pickle.py\", line 884, in save_tuple\r\n save(element)\r\n File \"\/home\/sarahw\/miniconda3\/envs\/project_huggingface\/lib\/python3.8\/pickle.py\", line 601, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"\/home\/sarahw\/miniconda3\/envs\/project_huggingface\/lib\/python3.8\/pickle.py\", line 715, in save_reduce\r\n save(state)\r\n File \"\/home\/sarahw\/miniconda3\/envs\/project_huggingface\/lib\/python3.8\/pickle.py\", line 558, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/home\/sarahw\/miniconda3\/envs\/project_huggingface\/lib\/python3.8\/site-packages\/dill\/_dill.py\", line 912, in save_module_dict\r\n StockPickler.save_dict(pickler, obj)\r\n File \"\/home\/sarahw\/miniconda3\/envs\/project_huggingface\/lib\/python3.8\/pickle.py\", line 969, in save_dict\r\n self._batch_setitems(obj.items())\r\n File \"\/home\/sarahw\/miniconda3\/envs\/project_huggingface\/lib\/python3.8\/pickle.py\", line 995, in _batch_setitems\r\n save(v)\r\n File \"\/home\/sarahw\/miniconda3\/envs\/project_huggingface\/lib\/python3.8\/pickle.py\", line 601, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"\/home\/sarahw\/miniconda3\/envs\/project_huggingface\/lib\/python3.8\/pickle.py\", line 715, in save_reduce\r\n save(state)\r\n File \"\/home\/sarahw\/miniconda3\/envs\/project_huggingface\/lib\/python3.8\/pickle.py\", line 558, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"\/home\/sarahw\/miniconda3\/envs\/project_huggingface\/lib\/python3.8\/site-packages\/dill\/_dill.py\", line 912, in save_module_dict\r\n StockPickler.save_dict(pickler, obj)\r\n File \"\/home\/sarahw\/miniconda3\/envs\/project_huggingface\/lib\/python3.8\/pickle.py\", line 969, in save_dict\r\n self._batch_setitems(obj.items())\r\n File \"\/home\/sarahw\/miniconda3\/envs\/project_huggingface\/lib\/python3.8\/pickle.py\", line 995, in _batch_setitems\r\n save(v)\r\n File \"\/home\/sarahw\/miniconda3\/envs\/project_huggingface\/lib\/python3.8\/pickle.py\", line 576, in save\r\n rv = reduce(self.proto)\r\nTypeError: cannot pickle 'Tokenizer' object\r\n```\r\nFix seems to be in the tokenizers [`0.8.0.dev1 pre-release`](https:\/\/github.com\/huggingface\/tokenizers\/issues\/87), which I can't install with any package managers. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/257\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/256","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/256\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/256\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/256\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/256","id":635596295,"node_id":"MDU6SXNzdWU2MzU1OTYyOTU=","number":256,"title":"[Feature request] Add a feature to dataset","user":{"login":"sarahwie","id":8027676,"node_id":"MDQ6VXNlcjgwMjc2NzY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8027676?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sarahwie","html_url":"https:\/\/github.com\/sarahwie","followers_url":"https:\/\/api.github.com\/users\/sarahwie\/followers","following_url":"https:\/\/api.github.com\/users\/sarahwie\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sarahwie\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sarahwie\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sarahwie\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sarahwie\/orgs","repos_url":"https:\/\/api.github.com\/users\/sarahwie\/repos","events_url":"https:\/\/api.github.com\/users\/sarahwie\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sarahwie\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Do you have an example of what you would like to do? (you can just add a field in the output of the unction you give to map and this will add this field in the output table)","Given another source of data loaded in, I want to pre-add it to the dataset so that it aligns with the indices of the arrow dataset prior to performing map.\r\n\r\nE.g. \r\n```\r\nnew_info = list of length dataset['train']\r\n\r\ndataset['train'] = dataset['train'].map(lambda x: some_function(x, new_info[index of x]))\r\n\r\ndef some_function(x, new_info_x):\r\n # adds new_info[index of x] as a field to x\r\n x['new_info'] = new_info_x\r\n return x\r\n```\r\nI was thinking to instead create a new field in the arrow dataset so that instance x contains all the necessary information when map function is applied (since I don't have index information to pass to map function).","This is what I have so far: \r\n\r\n```\r\nimport pyarrow as pa\r\nfrom nlp.arrow_dataset import Dataset\r\n\r\naug_dataset = dataset['train'][:]\r\naug_dataset['new_info'] = new_info\r\n\r\n#reformat as arrow-table\r\nschema = dataset['train'].schema\r\n\r\n# this line doesn't work:\r\nschema.append(pa.field('new_info', pa.int32()))\r\n\r\ntable = pa.Table.from_pydict(\r\n aug_dataset,\r\n schema=schema\r\n)\r\ndataset['train'] = Dataset(table) \r\n```","Maybe you can use `with_indices`?\r\n\r\n```python\r\nnew_info = list of length dataset['train']\r\n\r\ndef some_function(indice, x):\r\n # adds new_info[index of x] as a field to x\r\n x['new_info'] = new_info_x[indice]\r\n return x\r\n\r\ndataset['train'] = dataset['train'].map(some_function, with_indices=True)\r\n```","Oh great. That should work. 
I missed that in the documentation- thanks :) "],"created_at":1591720692000,"updated_at":1591721502000,"closed_at":1591721502000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Is there a straightforward way to add a field to the arrow_dataset, prior to performing map?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/256\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/255","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/255\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/255\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/255\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/255","id":635300822,"node_id":"MDExOlB1bGxSZXF1ZXN0NDMxNjg3MDM0","number":255,"title":"Add dataset\/piaf","user":{"login":"RachelKer","id":36986299,"node_id":"MDQ6VXNlcjM2OTg2Mjk5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/36986299?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/RachelKer","html_url":"https:\/\/github.com\/RachelKer","followers_url":"https:\/\/api.github.com\/users\/RachelKer\/followers","following_url":"https:\/\/api.github.com\/users\/RachelKer\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/RachelKer\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/RachelKer\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/RachelKer\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/RachelKer\/orgs","repos_url":"https:\/\/api.github.com\/users\/RachelKer\/repos","events_url":"https:\/\/api.github.com\/users\/RachelKer\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/RachelKer\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Very nice !"],"created_at":1591697761000,"updated_at":1591950687000,"closed_at":1591950687000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/255","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/255","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/255.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/255.patch"},"body":"Small SQuAD-like French QA dataset [PIAF](https:\/\/www.aclweb.org\/anthology\/2020.lrec-1.673.pdf)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/255\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/254","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/254\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/254\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/254\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/254","id":635057568,"node_id":"MDU6SXNzdWU2MzUwNTc1Njg=","number":254,"title":"[Feature request] Be able to remove a specific sample of the 
dataset","user":{"login":"astariul","id":43774355,"node_id":"MDQ6VXNlcjQzNzc0MzU1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/43774355?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/astariul","html_url":"https:\/\/github.com\/astariul","followers_url":"https:\/\/api.github.com\/users\/astariul\/followers","following_url":"https:\/\/api.github.com\/users\/astariul\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/astariul\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/astariul\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/astariul\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/astariul\/orgs","repos_url":"https:\/\/api.github.com\/users\/astariul\/repos","events_url":"https:\/\/api.github.com\/users\/astariul\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/astariul\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Oh yes you can now do that with the `dataset.filter()` method that was added in #214 "],"created_at":1591669333000,"updated_at":1591692098000,"closed_at":1591692098000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"As mentioned in #117, it's currently not possible to remove a sample of the dataset.\r\n\r\nBut it is a important use case : After applying some preprocessing, some samples might be empty for example. We should be able to remove these samples from the dataset, or at least mark them as `removed` so when iterating the dataset, we don't iterate these samples.\r\n\r\nI think it should be a feature. What do you think ?\r\n\r\n---\r\n\r\nAny work-around in the meantime ?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/254\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/253","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/253\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/253\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/253\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/253","id":634791939,"node_id":"MDExOlB1bGxSZXF1ZXN0NDMxMjgwOTYz","number":253,"title":"add flue 
dataset","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The dummy data file was wrong. I only fixed it for the book config. Even though the tests are all green here, this should also be fixed for all other configs. Could you take a look there @mariamabarham ? ","Hi @mariamabarham \r\n\r\nFLUE can indeed become a very interesting benchmark for french NLP !\r\nUnfortunately, it seems that we've both been working on adding it to the repo...\r\nI was going to open a pull request before I came across yours.\r\nI didn't want to open a duplicate, that's why I'm commenting here (I hope it's not rude).\r\n\r\nWhen I look at your code there is one issue that jump out at me: for both `vsd` and `nsd`, the labels are missing. I believe this is more a data issue, as they were not kept in the cleaned dataframes of #223. I think the *word sense disambiguation* task was a bit misunderstood. \r\n\r\nMaybe you should directly use the data provided by FLUE for these ?","Hi @TheophileBlard thanks for pointing this out. I will give a look at it or maybe if you already done it you can update this PR. Also I haven't added yet the parsing datasets, I submited a request to get access to them. If you already have them, you can also add them.","Hi,\r\n\r\nAs @TheophileBlard pointed out, the labels for the vsd and nsd stains are missing.\r\n\r\nFor the wsd, it is my mistake, I added the files containing the labels on the drive.\r\nThere is still the join to do between the files that I didn't have time to do. It can be done after importing the two files, however if you wish to have a single dataframe already containing all the information, I could do it but only when I have free time because I have a lot of work at the moment at INSERM with the covid.\r\n\r\nFor the nsd, I've downloaded the files at https:\/\/zenodo.org\/record\/3549806, and if you do the same you'll see that they don't contain any labels.\r\nIn the files, you can see that some words have a WN code. I don't know what it corresponds to. On the FLUE github, they say to use the disambiguate tool (https:\/\/github.com\/getalp\/disambiguate) but I don't understand what he's doing.\r\n\r\n@mariamabarham for the parsing datasets, I have them in my possession. What it does that I haven't shared them is that they are licensed and you have to make a request to their creators. They give them away very easily for research purposes. For another use, you have to ask a commercial licence. 
All this means that if the data is freely available on your librairy, their licence and their application form are no longer of interest, which is why I did not add them.\r\nAfterwards, maybe the authors will change their policies and decide to make the data freely available through your librairy","@mariamabarham @lbourdois, Yea I don't think we can had the parsing datasets without asking the authors permission first. I also hope they'll change their policy.\r\n\r\nRegarding `vsd` and `nsd`, if I understand well the task, the labels are \"word senses\" and the goal is to find the correct word sense for each ambiguous word. For `vsd` there is one ambiguous verb per sentence, and the labels we manually annotated with \"wiktionary senses\". For `nsd`, there are multiple ambiguous word per sentence, and the labels are WordNet Princeton Identifiers (hence the WN tag). This dataset was translated in french & automatically aligned.\r\n\r\nImo, for these 2 datasets, each example should be made of:\r\n- a list of string tokens (the words of the sentence)\r\n- a list of string labels (the word senses or 'O' when the word is not ambiguous.\r\n\r\nIn fact, for `vsd` it could be even simpler, with a single string label (as there is only one ambiguous verb), + some \"idx\" feature to indicate the location of the ambiguous verb.\r\n\r\nUnfortunately, I cannot update your PR as I'm not a maintainer of the project. Maybe we could work together on a fork ? Here's [mine](https:\/\/github.com\/TheophileBlard\/nlp\/commits\/flue-benchmark).\r\n","Hi\r\n\r\nAny news about this PR ?\r\nBecause thinking back FLUE basically offers only two new datasets : those for the Word Sense Disambiguation task (vsd and nsd).\r\n\r\nWouldn't it be more clever to make separate PRs to add the datasets of the other tasks which are multi-lingual (and therefore can be used for other languages) ?\r\n\r\nXNLI being already present on your library, there would only be PAWS-X (datasets and bibtex available here : https:\/\/github.com\/google-research-datasets\/paws\/tree\/master\/pawsx) and the Webis-CLS-10 dataset (dataset : https:\/\/zenodo.org\/record\/3251672#.XvCXN-d8taQ and bibtex : https:\/\/zenodo.org\/record\/3251672\/export\/hx#.XvCXZ-d8taQ) to do.\r\n\r\nAnd next for the FLUE benchmark, all you would have to do would be to use your own library by making an nlp.load_dataset() (for example nlp.load_dataset('xnli') which is already present in your library) for each of the datasets of the benchmark tasks and to keep only the 'fr' data.\r\n\r\n\r\n\r\nAlso @mariamabarham , did you get any feedback for the parsing task dataset request?\r\nIn case of refusal from the authors, there are other datasets in French to perform this task and in this case, I would open a new topic\r\n","Hi @lbourdois ,\r\nPAWS-X is also present in the lib, it's part of `xtreme` dataset, so it can be loaded by `nlp.load_dataset('xtreme', 'PAWS-X.fr')` for the french version.\r\nI think the parsing and the Word Sense Disambiguation task datasets are the only missing in the lib now. \r\nI did not get a feedback yet for the parsing dataset.\r\n","By the way, @TheophileBlard I commented some days ago in your fork. It would be great if you can maybe open a new PR with your code or if you have a better way to make it available to others for review.","> By the way, @TheophileBlard I commented some days ago in your fork. 
It would be great if you can maybe open a new PR with your code or if you have a better way to make it available to others for review.\r\n\r\nYea sorry, missed that! I think @lbourdois has a point, it helps no one to have the same dataset in multiple places. I will try to find some time to adapt the code of my fork and open PRs for `Webis-CLS-10` and `nsd`\/`vsd`. Maybe we should group `nsd`\/`vsd` together ?","Shall we close this PR then ? @mariamabarham @TheophileBlard @lbourdois "],"created_at":1591636269000,"updated_at":1594885859000,"closed_at":1594885859000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/253","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/253","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/253.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/253.patch"},"body":"This PR add the Flue dataset as requested in this issue #223 . @lbourdois made a detailed description in that issue.\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/253\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/252","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/252\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/252\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/252\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/252","id":634563239,"node_id":"MDU6SXNzdWU2MzQ1NjMyMzk=","number":252,"title":"NonMatchingSplitsSizesError error when reading the IMDB dataset","user":{"login":"antmarakis","id":17463361,"node_id":"MDQ6VXNlcjE3NDYzMzYx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17463361?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/antmarakis","html_url":"https:\/\/github.com\/antmarakis","followers_url":"https:\/\/api.github.com\/users\/antmarakis\/followers","following_url":"https:\/\/api.github.com\/users\/antmarakis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/antmarakis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/antmarakis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/antmarakis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/antmarakis\/orgs","repos_url":"https:\/\/api.github.com\/users\/antmarakis\/repos","events_url":"https:\/\/api.github.com\/users\/antmarakis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/antmarakis\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I just tried on my side and I didn't encounter your problem.\r\nApparently the script doesn't generate all the examples on your side.\r\n\r\nCan you provide the version of `nlp` you're using ?\r\nCan you try to clear your cache and re-run the code ?","I updated it, that was it, thanks!","Hello, I am facing the same problem... how do you clear the huggingface cache?","Hi ! 
The cache is at ~\/.cache\/huggingface\r\nYou can just delete this folder if needed :)"],"created_at":1591619184000,"updated_at":1630077658000,"closed_at":1591624886000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi!\r\n\r\nI am trying to load the `imdb` dataset with this line:\r\n\r\n`dataset = nlp.load_dataset('imdb', data_dir='\/A\/PATH', cache_dir='\/A\/PATH')`\r\n\r\nbut I am getting the following error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/mounts\/Users\/cisintern\/antmarakis\/anaconda3\/lib\/python3.7\/site-packages\/nlp\/load.py\", line 517, in load_dataset\r\n save_infos=save_infos,\r\n File \"\/mounts\/Users\/cisintern\/antmarakis\/anaconda3\/lib\/python3.7\/site-packages\/nlp\/builder.py\", line 363, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/mounts\/Users\/cisintern\/antmarakis\/anaconda3\/lib\/python3.7\/site-packages\/nlp\/builder.py\", line 421, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"\/mounts\/Users\/cisintern\/antmarakis\/anaconda3\/lib\/python3.7\/site-packages\/nlp\/utils\/info_utils.py\", line 70, in verify_splits\r\n raise NonMatchingSplitsSizesError(str(bad_splits))\r\nnlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=33442202, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=5929447, num_examples=4537, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}]\r\n```\r\n\r\nAm I overlooking something? 
Thanks!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/252\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/251","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/251\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/251\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/251\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/251","id":634544977,"node_id":"MDExOlB1bGxSZXF1ZXN0NDMxMDgwMDkw","number":251,"title":"Better access to all dataset information","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1591617410000,"updated_at":1591949580000,"closed_at":1591949578000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/251","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/251","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/251.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/251.patch"},"body":"Moves all the dataset info down one level from `dataset.info.XXX` to `dataset.XXX`\r\nThis way it's easier to access `dataset.feature['label']` for instance\r\n\r\nAlso, add the original split instructions used to create the dataset in `dataset.split`\r\nEx:\r\n```\r\nfrom nlp import load_dataset\r\nstsb = load_dataset('glue', name='stsb', split='train')\r\nstsb.split\r\n>>> NamedSplit('train')\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/251\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/250","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/250\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/250\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/250\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/250","id":634416751,"node_id":"MDExOlB1bGxSZXF1ZXN0NDMwOTcyMzg4","number":250,"title":"Remove checksum download in 
c4","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Commenting again in case [previous thread](https:\/\/github.com\/huggingface\/nlp\/pull\/233) was inactive.\r\n\r\n@lhoestq I am facing `IsADirectoryError` while downloading with this command.\r\nCan you pls look into it & help me.\r\nI'm using version 0.4.0 of `nlp`.\r\n\r\n```\r\ndataset = load_dataset(\"c4\", 'en', data_dir='.', beam_runner='DirectRunner')\r\n```\r\n\r\nHere's the complete stack trace.\r\n\r\n```\r\nDownloading and preparing dataset c4\/en (download: Unknown size, generated: Unknown size, post-processed: Unknown sizetotal: Unknown size) to \/home\/devops\/.cache\/huggingface\/datasets\/c4\/en\/2.3.0\/096df5a27756d51957c959a2499453e60a08154971fceb017bbb29f54b11bef7...\r\n\r\n---------------------------------------------------------------------------\r\nIsADirectoryError Traceback (most recent call last)\r\n<ipython-input-11-f622e6705e03> in <module>\r\n----> 1 dataset = load_dataset(\"c4\", 'en', data_dir='.', beam_runner='DirectRunner')\r\n\r\n\/data\/anaconda\/envs\/hf\/lib\/python3.6\/site-packages\/nlp\/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 547 # Download and prepare data\r\n 548 builder_instance.download_and_prepare(\r\n--> 549 download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,\r\n 550 )\r\n 551 \r\n\r\n\/data\/anaconda\/envs\/hf\/lib\/python3.6\/site-packages\/nlp\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 461 if not downloaded_from_gcs:\r\n 462 self._download_and_prepare(\r\n--> 463 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 464 )\r\n 465 # Sync info\r\n\r\n\/data\/anaconda\/envs\/hf\/lib\/python3.6\/site-packages\/nlp\/builder.py in _download_and_prepare(self, dl_manager, verify_infos)\r\n 964 pipeline = beam_utils.BeamPipeline(runner=beam_runner, options=beam_options,)\r\n 965 super(BeamBasedBuilder, self)._download_and_prepare(\r\n--> 966 dl_manager, verify_infos=False, pipeline=pipeline,\r\n 967 ) # TODO handle verify_infos in beam datasets\r\n 968 # Run pipeline\r\n\r\n\/data\/anaconda\/envs\/hf\/lib\/python3.6\/site-packages\/nlp\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 516 split_dict = 
SplitDict(dataset_name=self.name)\r\n 517 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 518 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 519 # Checksums verification\r\n 520 if verify_infos:\r\n\r\n\/data\/anaconda\/envs\/hf\/lib\/python3.6\/site-packages\/nlp\/datasets\/c4\/096df5a27756d51957c959a2499453e60a08154971fceb017bbb29f54b11bef7\/c4.py in _split_generators(self, dl_manager, pipeline)\r\n 187 if self.config.realnewslike:\r\n 188 files_to_download[\"realnews_domains\"] = _REALNEWS_DOMAINS_URL\r\n--> 189 file_paths = dl_manager.download_and_extract(files_to_download)\r\n 190 \r\n 191 if self.config.webtextlike:\r\n\r\n\/data\/anaconda\/envs\/hf\/lib\/python3.6\/site-packages\/nlp\/utils\/download_manager.py in download_and_extract(self, url_or_urls)\r\n 218 extracted_path(s): `str`, extracted paths of given URL(s).\r\n 219 \"\"\"\r\n--> 220 return self.extract(self.download(url_or_urls))\r\n 221 \r\n 222 def get_recorded_sizes_checksums(self):\r\n\r\n\/data\/anaconda\/envs\/hf\/lib\/python3.6\/site-packages\/nlp\/utils\/download_manager.py in download(self, url_or_urls)\r\n 156 lambda url: cached_path(url, download_config=self._download_config,), url_or_urls,\r\n 157 )\r\n--> 158 self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths)\r\n 159 return downloaded_path_or_paths\r\n 160 \r\n\r\n\/data\/anaconda\/envs\/hf\/lib\/python3.6\/site-packages\/nlp\/utils\/download_manager.py in _record_sizes_checksums(self, url_or_urls, downloaded_path_or_paths)\r\n 106 flattened_downloaded_path_or_paths = flatten_nested(downloaded_path_or_paths)\r\n 107 for url, path in zip(flattened_urls_or_urls, flattened_downloaded_path_or_paths):\r\n--> 108 self._recorded_sizes_checksums[url] = get_size_checksum_dict(path)\r\n 109 \r\n 110 def download_custom(self, url_or_urls, custom_download):\r\n\r\n\/data\/anaconda\/envs\/hf\/lib\/python3.6\/site-packages\/nlp\/utils\/info_utils.py in get_size_checksum_dict(path)\r\n 77 \"\"\"Compute the file size and the sha256 checksum of a file\"\"\"\r\n 78 m = sha256()\r\n---> 79 with open(path, \"rb\") as f:\r\n 80 for chunk in iter(lambda: f.read(1 << 20), b\"\"):\r\n 81 m.update(chunk)\r\n\r\nIsADirectoryError: [Errno 21] Is a directory: '\/'\r\n\r\n```\r\n\r\nCan anyone please try to see what I am doing wrong or is this a bug?"],"created_at":1591607580000,"updated_at":1598339096000,"closed_at":1591607819000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/250","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/250","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/250.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/250.patch"},"body":"There was a line from the original tfds script that was still there and causing issues when loading the c4 script. 
This one should fix #233 and allow anyone to load the c4 script to generate the dataset","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/250\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/249","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/249\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/249\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/249\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/249","id":633393443,"node_id":"MDU6SXNzdWU2MzMzOTM0NDM=","number":249,"title":"[Dataset created] some critical small issues when I was creating a dataset","user":{"login":"richarddwang","id":17963619,"node_id":"MDQ6VXNlcjE3OTYzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17963619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/richarddwang","html_url":"https:\/\/github.com\/richarddwang","followers_url":"https:\/\/api.github.com\/users\/richarddwang\/followers","following_url":"https:\/\/api.github.com\/users\/richarddwang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/richarddwang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/richarddwang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/richarddwang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/richarddwang\/orgs","repos_url":"https:\/\/api.github.com\/users\/richarddwang\/repos","events_url":"https:\/\/api.github.com\/users\/richarddwang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/richarddwang\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\
/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for noticing all these :) They should be easy to fix indeed","Alright I think I fixed all the problems you mentioned. Thanks again, that will be useful for many people.\r\nThere is still more work needed for point 7. but we plan to have some nice docs soon."],"created_at":1591534734000,"updated_at":1591950531000,"closed_at":1591950531000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi, I successfully created a dataset and has made a pr #248.\r\nBut I have encountered several problems when I was creating it, and those should be easy to fix.\r\n\r\n1. Not found dataset_info.json\r\nshould be fixed by #241 , eager to wait it be merged.\r\n\r\n2. Forced to install `apach_beam`\r\nIf we should install it, then it might be better to include it in the pakcage dependency or specified in `CONTRIBUTING.md`\r\n```\r\nTraceback (most recent call last):\r\n File \"nlp-cli\", line 10, in <module>\r\n from nlp.commands.run_beam import RunBeamCommand\r\n File \"\/home\/yisiang\/nlp\/src\/nlp\/commands\/run_beam.py\", line 6, in <module>\r\n import apache_beam as beam\r\nModuleNotFoundError: No module named 'apache_beam'\r\n```\r\n\r\n3. `cached_dir` is `None`\r\n```\r\nFile \"\/home\/yisiang\/nlp\/src\/nlp\/datasets\/bookscorpus\/aea0bd5142d26df645a8fce23d6110bb95ecb81772bb2a1f29012e329191962c\/bookscorpus.py\", line 88, in _split_generators\r\n downloaded_path_or_paths = dl_manager.download_custom(_GDRIVE_FILE_ID, download_file_from_google_drive)\r\n File \"\/home\/yisiang\/nlp\/src\/nlp\/utils\/download_manager.py\", line 128, in download_custom\r\n downloaded_path_or_paths = map_nested(url_to_downloaded_path, url_or_urls)\r\n File \"\/home\/yisiang\/nlp\/src\/nlp\/utils\/py_utils.py\", line 172, in map_nested\r\n return function(data_struct)\r\n File \"\/home\/yisiang\/nlp\/src\/nlp\/utils\/download_manager.py\", line 126, in url_to_downloaded_path\r\n return os.path.join(self._download_config.cache_dir, hash_url_to_filename(url))\r\n File \"\/home\/yisiang\/miniconda3\/envs\/nlppr\/lib\/python3.7\/posixpath.py\", line 80, in join\r\n a = os.fspath(a)\r\n```\r\nThis is because this line\r\nhttps:\/\/github.com\/huggingface\/nlp\/blob\/2e0a8639a79b1abc848cff5c669094d40bba0f63\/src\/nlp\/commands\/test.py#L30-L32\r\nAnd I add `--cache_dir=\"....\"` to `python nlp-cli test datasets\/<your-dataset-folder> --save_infos --all_configs` in the doc, finally I could pass this error.\r\nBut it seems to ignore my arg and use `\/home\/yisiang\/.cache\/huggingface\/datasets\/bookscorpus\/plain_text\/1.0.0` as cahe_dir\r\n\r\n4. There is no `pytest`\r\nSo maybe in the doc we should specify a step to install pytest\r\n\r\n5. Not enough capacity in my `\/tmp`\r\nWhen run test for dummy data, I don't know why it ask me for 5.6g to download something, \r\n```\r\ndef download_and_prepare\r\n...\r\nif not utils.has_sufficient_disk_space(self.info.size_in_bytes or 0, directory=self._cache_dir_root):\r\n raise IOError(\r\n \"Not enough disk space. 
Needed: {} (download: {}, generated: {})\".format(\r\n utils.size_str(self.info.size_in_bytes or 0),\r\n utils.size_str(self.info.download_size or 0),\r\n> utils.size_str(self.info.dataset_size or 0),\r\n )\r\n )\r\nE OSError: Not enough disk space. Needed: 5.62 GiB (download: 1.10 GiB, generated: 4.52 GiB)\r\n```\r\nI add a `processed_temp_dir=\"some\/dir\"; raw_temp_dir=\"another\/dir\"` to 71, and the test passed\r\nhttps:\/\/github.com\/huggingface\/nlp\/blob\/a67a6c422dece904b65d18af65f0e024e839dbe8\/tests\/test_dataset_common.py#L70-L72\r\n\r\nI suggest we can create tmp dir under the `\/home\/user\/tmp` but not `\/tmp`, because take our lab server for example, everyone use `\/tmp` thus it has not much capacity. Or at least we can improve error message, so the user know is what directory has no space and how many has it lefted. Or we could do both.\r\n\r\n6. name of datasets\r\nI was surprised by the dataset name `books_corpus`, and didn't know it is from `class BooksCorpus(nlp.GeneratorBasedBuilder)` . I change it to `Bookscorpus` afterwards. I think this point shold be also on the doc.\r\n\r\n7. More thorough doc to how to create `dataset.py`\r\nI believe there will be.\r\n\r\n**Feel free to close this issue** if you think these are solved.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/249\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/248","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/248\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/248\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/248\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/248","id":633390427,"node_id":"MDExOlB1bGxSZXF1ZXN0NDMwMDQ0MzU0","number":248,"title":"add Toronto BooksCorpus","user":{"login":"richarddwang","id":17963619,"node_id":"MDQ6VXNlcjE3OTYzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17963619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/richarddwang","html_url":"https:\/\/github.com\/richarddwang","followers_url":"https:\/\/api.github.com\/users\/richarddwang\/followers","following_url":"https:\/\/api.github.com\/users\/richarddwang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/richarddwang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/richarddwang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/richarddwang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/richarddwang\/orgs","repos_url":"https:\/\/api.github.com\/users\/richarddwang\/repos","events_url":"https:\/\/api.github.com\/users\/richarddwang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/richarddwang\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for adding this one !\r\n\r\nAbout the three points you mentioned:\r\n1. I think the `toronto_books_corpus` branch can be removed @mariamabarham ? \r\n2. You can use the download manager to download from google drive. 
For you case you can just do something like \r\n```python\r\nURL = \"https:\/\/drive.google.com\/uc?export=download&id=16KCjV9z_FHm8LgZw05RSuk4EsAWPOP_z\"\r\n...\r\narch_path = dl_manager.download_and_extract(URL)\r\n```\r\nAlso this is is an unofficial host of the dataset, we should probably host it ourselves if we can.\r\n3. Not sure about the copyright here, but I maybe @thomwolf has better insights about it. ","Yes it can be removed","I just downloaded the file and put it on gs. The public url is\r\nhttps:\/\/storage.googleapis.com\/huggingface-nlp\/datasets\/toronto_books_corpus\/bookcorpus.tar.bz2\r\n\r\nCould you try to change the url to this one and heck that everything is ok ?","In `books.py`\r\n```\r\nURL = \"https:\/\/storage.googleapis.com\/huggingface-nlp\/datasets\/toronto_books_corpus\/bookcorpus.tar.bz2\"\r\n```\r\n```\r\nPython 3.7.6 (default, Jan 8 2020, 19:59:22) \r\n[GCC 7.3.0] :: Anaconda, Inc. on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> from nlp import load_dataset\r\n>>> book = load_dataset(\"nlp\/datasets\/bookscorpus\/books.py\", cache_dir='~\/tmp')\r\nDownloading and preparing dataset bookscorpus\/plain_text (download: 1.10 GiB, generated: 4.52 GiB, total: 5.62 GiB) to \/home\/yisiang\/tmp\/bookscorpus\/plain_text\/1.0.0...\r\nDownloading: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1.18G\/1.18G [00:39<00:00, 30.0MB\/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/home\/yisiang\/nlp\/src\/nlp\/load.py\", line 520, in load_dataset\r\n save_infos=save_infos,\r\n File \"\/home\/yisiang\/nlp\/src\/nlp\/builder.py\", line 420, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/home\/yisiang\/nlp\/src\/nlp\/builder.py\", line 460, in _download_and_prepare\r\n verify_checksums(self.info.download_checksums, dl_manager.get_recorded_sizes_checksums())\r\n File \"\/home\/yisiang\/nlp\/src\/nlp\/utils\/info_utils.py\", line 31, in verify_checksums\r\n raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))\r\nnlp.utils.info_utils.ExpectedMoreDownloadedFiles: {'16KCjV9z_FHm8LgZw05RSuk4EsAWPOP_z'}\r\n>>>\r\n```\r\n\r\nBTW, I notice the path `huggingface-nlp\/datasets\/toronto_books_corpus`, does it mean I have to change folder name \"bookscorpus\" to \"toronto_books_corpus\"","> In `books.py`\r\n> \r\n> ```\r\n> URL = \"https:\/\/storage.googleapis.com\/huggingface-nlp\/datasets\/toronto_books_corpus\/bookcorpus.tar.bz2\"\r\n> ```\r\n> \r\n> ```\r\n> Python 3.7.6 (default, Jan 8 2020, 19:59:22) \r\n> [GCC 7.3.0] :: Anaconda, Inc. 
on linux\r\n> Type \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n> >>> from nlp import load_dataset\r\n> >>> book = load_dataset(\"nlp\/datasets\/bookscorpus\/books.py\", cache_dir='~\/tmp')\r\n> Downloading and preparing dataset bookscorpus\/plain_text (download: 1.10 GiB, generated: 4.52 GiB, total: 5.62 GiB) to \/home\/yisiang\/tmp\/bookscorpus\/plain_text\/1.0.0...\r\n> Downloading: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1.18G\/1.18G [00:39<00:00, 30.0MB\/s]\r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 1, in <module>\r\n> File \"\/home\/yisiang\/nlp\/src\/nlp\/load.py\", line 520, in load_dataset\r\n> save_infos=save_infos,\r\n> File \"\/home\/yisiang\/nlp\/src\/nlp\/builder.py\", line 420, in download_and_prepare\r\n> dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n> File \"\/home\/yisiang\/nlp\/src\/nlp\/builder.py\", line 460, in _download_and_prepare\r\n> verify_checksums(self.info.download_checksums, dl_manager.get_recorded_sizes_checksums())\r\n> File \"\/home\/yisiang\/nlp\/src\/nlp\/utils\/info_utils.py\", line 31, in verify_checksums\r\n> raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))\r\n> nlp.utils.info_utils.ExpectedMoreDownloadedFiles: {'16KCjV9z_FHm8LgZw05RSuk4EsAWPOP_z'}\r\n> >>>\r\n> ```\r\n> \r\n> BTW, I notice the path `huggingface-nlp\/datasets\/toronto_books_corpus`, does it mean I have to change folder name \"bookscorpus\" to \"toronto_books_corpus\"\r\n\r\nLet me change the url to match \"bookscorpus\", so that you don't have to change anything. Good catch.\r\n\r\nAbout the error you're getting: you just have to remove the `dataset_infos.json` and regenerate it","The new url is https:\/\/storage.googleapis.com\/huggingface-nlp\/datasets\/bookscorpus\/bookcorpus.tar.bz2","Hi, I found I made a mistake. I found the ELECTRA paper refer it as \"BooksCorpus\", but actually it is caleld \"BookCorpus\", according to the original paper. Sorry, I should have checked the original paper .\r\n\r\nCan you do me a favor and change the url path to ` https:\/\/storage.googleapis.com\/huggingface-nlp\/datasets\/bookcorpus\/bookcorpus.tar.bz2` ?","Yep I'm doing it right now. Could you please rename all the references to `bookscorpus` and `BooksCorpus` to `book_corpus` and `BookCorpus` (with the right casing) ?","Thank you @lhoestq ,\r\nJust to confirm it fits your naming convention\r\n* make the file path `book_corpus\/book_corpus.py` ?\r\n* make `class Bookscorpus(nlp.GeneratorBasedBuilder)` -> `BookCorpus` (which make cache folder name `book_corpus` and user use `load_dataset('book_corpus')`) ?\r\n(Cuz I found \"HellaSwag\" dataset is named \"nlp\/datasets\/hellaswag\" and `class Hellaswag` )","Oh yea you're right about the Hellaswag example. We should keep the \"_\" symbol to replace spaces. 
As there are no space in BookCorpus, what we should do here is use:\r\n- class name: 'Bookcorpus'\r\n- script name: `bookcorpus\/bookcorpus.py`\r\n- use url https:\/\/storage.googleapis.com\/huggingface-nlp\/datasets\/bookcorpus\/bookcorpus.tar.bz2\r\nAnd therefore the dataset name will be `bookcorpus`\r\n\r\nDon't forget to regenerate the `dataset_infos.json` and we'll be good :D ","Awesome thanks :)"],"created_at":1591534496000,"updated_at":1591951503000,"closed_at":1591951502000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/248","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/248","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/248.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/248.patch"},"body":"1. I knew there is a branch `toronto_books_corpus`\r\n - After I downloaded it, I found it is all non-english, and only have one row. \r\n- It seems that it cites the wrong paper\r\n- according to papar using it, it is called `BooksCorpus` but not `TornotoBooksCorpus`\r\n\r\n2. It use a text mirror in google drive\r\n- `bookscorpus.py` include a function `download_file_from_google_drive` , maybe you will want to put it elsewhere.\r\n- text mirror is found in this [comment on the issue](https:\/\/github.com\/soskek\/bookcorpus\/issues\/24#issuecomment-556024973), and it said to have the same statistics as the one in the paper.\r\n- You may want to download it and put it on your gs in case of it disappears someday.\r\n\r\n3. Copyright ?\r\nThe paper has said\r\n\r\n> **The BookCorpus Dataset.** In order to train our sentence similarity model we collected a corpus of 11,038 books ***from the web***. These are __**free books written by yet unpublished authors**__. We only included books that had more than 20K words in order to filter out perhaps noisier shorter stories. The dataset has books in 16 different genres, e.g., Romance (2,865 books), Fantasy (1,479), Science fiction (786), Teen (430), etc. Table 2 highlights the summary statistics of our book corpus.\r\n\r\nand we have changed the form (not books), so I don't think it should have that problems. Or we can state that use it at your own risk or only for academic use. 
I know @thomwolf should know these things more.\r\n\r\nThis should solved #131 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/248\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/247","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/247\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/247\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/247\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/247","id":632380078,"node_id":"MDExOlB1bGxSZXF1ZXN0NDI5MTMwMzQ2","number":247,"title":"Make all dataset downloads deterministic by applying `sorted` to glob and os.listdir","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["That's great!\r\n\r\nI think it would be nice to test \"deterministic-ness\" of datasets in CI if we can do it (should be left for future PR of course)\r\n\r\nHere is a possibility (maybe there are other ways to do it): given that we may soon have efficient and large-scale hashing (cf our discussion on versioning\/tracability), we could incorporate a hash of the final Arrow Dataset to the `dataset.json` file and have a test on it as well as CI on a diversity of platform to test the hash (Win\/Mac\/Linux + various python\/env).\r\nWhat do you think @lhoestq @patrickvonplaten?","> That's great!\r\n> \r\n> I think it would be nice to test \"deterministic-ness\" of datasets in CI if we can do it (should be left for future PR of course)\r\n> \r\n> Here is a possibility (maybe there are other ways to do it): given that we may soon have efficient and large-scale hashing (cf our discussion on versioning\/tracability), we could incorporate a hash of the final Arrow Dataset to the `dataset.json` file and have a test on it as well as CI on a diversity of platform to test the hash (Win\/Mac\/Linux + various python\/env).\r\n> What do you think @lhoestq @patrickvonplaten?\r\n\r\nI think that's a great idea! 
The test should be a `RUN_SLOW` test, since it takes a considerable amount of time to download the dataset and generate the examples, but I think we should add some kind of hash test for each dataset.","Really nice!!"],"created_at":1591441330000,"updated_at":1591607896000,"closed_at":1591607894000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/247","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/247","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/247.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/247.patch"},"body":"This PR makes all datasets loading deterministic by applying `sorted()` to all `glob.glob` and `os.listdir` statements.\r\n\r\nAre there other \"non-deterministic\" functions apart from `glob.glob()` and `os.listdir()` that you can think of @thomwolf @lhoestq @mariamabarham @jplu ?\r\n\r\n**Important** \r\nIt does break backward compatibility for these datasets because\r\n1. When loading the complete dataset the order in which the examples are saved is different now\r\n2. When loading only part of a split, the examples themselves might be different.\r\n\r\n@patrickvonplaten - the nlp \/ longformer notebook has to be updated since the examples might now be different","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/247\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/246","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/246\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/246\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/246\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/246","id":632380054,"node_id":"MDU6SXNzdWU2MzIzODAwNTQ=","number":246,"title":"What is the best way to cache a dataset? 
","user":{"login":"Mistobaan","id":112599,"node_id":"MDQ6VXNlcjExMjU5OQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/112599?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Mistobaan","html_url":"https:\/\/github.com\/Mistobaan","followers_url":"https:\/\/api.github.com\/users\/Mistobaan\/followers","following_url":"https:\/\/api.github.com\/users\/Mistobaan\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Mistobaan\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Mistobaan\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Mistobaan\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Mistobaan\/orgs","repos_url":"https:\/\/api.github.com\/users\/Mistobaan\/repos","events_url":"https:\/\/api.github.com\/users\/Mistobaan\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Mistobaan\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Everything is already cached by default in \ud83e\udd17nlp (in particular dataset\nloading and all the \u201cmap()\u201d operations) so I don\u2019t think you need to do any\nspecific caching in streamlit.\n\nTell us if you feel like it\u2019s not the case.\n\nOn Sat, 6 Jun 2020 at 13:02, Fabrizio Milo <notifications@github.com> wrote:\n\n> For example if I want to use streamlit with a nlp dataset:\n>\n> @st.cache\n> def load_data():\n> return nlp.load_dataset('squad')\n>\n> This code raises the error \"uncachable object\"\n>\n> Right now I just fixed with a constant for my specific case:\n>\n> @st.cache(hash_funcs={pyarrow.lib.Buffer: lambda b: 0})\n>\n> But I was curious to know what is the best way in general\n>\n> \u2014\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub\n> <https:\/\/github.com\/huggingface\/nlp\/issues\/246>, or unsubscribe\n> <https:\/\/github.com\/notifications\/unsubscribe-auth\/ABYDIHKAKO7CWGX2QY55UXLRVIO3ZANCNFSM4NV333RQ>\n> .\n>\n","Closing this one. 
Feel free to re-open if you have other questions !"],"created_at":1591441327000,"updated_at":1594286107000,"closed_at":1594286107000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"For example if I want to use streamlit with a nlp dataset:\r\n\r\n```\r\n@st.cache\r\ndef load_data():\r\n return nlp.load_dataset('squad')\r\n```\r\nThis code raises the error \"uncachable object\"\r\n\r\nRight now I just fixed with a constant for my specific case:\r\n```\r\n @st.cache(hash_funcs={pyarrow.lib.Buffer: lambda b: 0})\r\n```\r\nBut I was curious to know what is the best way in general\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/246\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/245","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/245\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/245\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/245\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/245","id":631985108,"node_id":"MDU6SXNzdWU2MzE5ODUxMDg=","number":245,"title":"SST-2 test labels are all -1","user":{"login":"jxmorris12","id":13238952,"node_id":"MDQ6VXNlcjEzMjM4OTUy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13238952?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jxmorris12","html_url":"https:\/\/github.com\/jxmorris12","followers_url":"https:\/\/api.github.com\/users\/jxmorris12\/followers","following_url":"https:\/\/api.github.com\/users\/jxmorris12\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jxmorris12\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jxmorris12\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jxmorris12\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jxmorris12\/orgs","repos_url":"https:\/\/api.github.com\/users\/jxmorris12\/repos","events_url":"https:\/\/api.github.com\/users\/jxmorris12\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jxmorris12\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["this also happened to me with `nlp.load_dataset('glue', 'mnli')`","Yes, this is because the test sets for glue are hidden so the labels are\nnot publicly available. You can read the glue paper for more details.\n\nOn Sat, 6 Jun 2020 at 18:16, Jack Morris <notifications@github.com> wrote:\n\n> this also happened to me with nlp.load_datasets('glue', 'mnli')\n>\n> \u2014\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub\n> <https:\/\/github.com\/huggingface\/nlp\/issues\/245#issuecomment-640083980>,\n> or unsubscribe\n> <https:\/\/github.com\/notifications\/unsubscribe-auth\/ABYDIHMVQD2EDX2HTZUXG5DRVJTWRANCNFSM4NVG3AKQ>\n> .\n>\n","Thanks @thomwolf!","@thomwolf shouldn't this be visible in the .info and\/or in the .features?","It should be in the datasets card (the README.md and on the hub) in my opinion. 
What do you think @yjernite?","I checked both before I got to looking at issues, so that would be fine as well.\r\n\r\nSome additional thoughts on this: Is there a specific reason why the \"test\" split even has a \"label\" column if it isn't tagged. Shouldn't there just not be any. Seems more transparent","I'm a little confused with the data size.\r\n`sst2` dataset is referenced to `Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank` and the link of the dataset in the paper is https:\/\/nlp.stanford.edu\/sentiment\/index.html which is often shown in GLUE\/SST2 reference.\r\nFrom the original data, the standard train\/dev\/test splits split is 6920\/872\/1821 for binary classification. \r\nWhy in GLUE\/SST2 the train\/dev\/test split is 67,349\/872\/1,821 ? \r\n\r\n","> I'm a little confused with the data size.\r\n> `sst2` dataset is referenced to `Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank` and the link of the dataset in the paper is https:\/\/nlp.stanford.edu\/sentiment\/index.html which is often shown in GLUE\/SST2 reference.\r\n> From the original data, the standard train\/dev\/test splits split is 6920\/872\/1821 for binary classification.\r\n> Why in GLUE\/SST2 the train\/dev\/test split is 67,349\/872\/1,821 ?\r\n\r\nHave you figured out this problem? AFAIK, the original sst-2 dataset is totally different from the GLUE\/sst-2. Do you think so?","@yc1999 Sorry, I didn't solve this conflict. In the end, I just use a local data file provided by the previous work I followed(for consistent comparison), not use `datasets` package.\r\n\r\nRelated information: https:\/\/github.com\/thunlp\/OpenAttack\/issues\/146#issuecomment-766323571"],"created_at":1591393302000,"updated_at":1627976565000,"closed_at":1591462601000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I'm trying to test a model on the SST-2 task, but all the labels I see in the test set are -1.\r\n```\r\n>>> import nlp\r\n>>> glue = nlp.load_dataset('glue', 'sst2')\r\n>>> glue\r\n{'train': Dataset(schema: {'sentence': 'string', 'label': 'int64', 'idx': 'int32'}, num_rows: 67349), 'validation': Dataset(schema: {'sentence': 'string', 'label': 'int64', 'idx': 'int32'}, num_rows: 872), 'test': Dataset(schema: {'sentence': 'string', 'label': 'int64', 'idx': 'int32'}, num_rows: 1821)}\r\n>>> list(l['label'] for l in glue['test'])\r\n[-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1]\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/245\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/244","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/244\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/244\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/244\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/244","id":631869155,"node_id":"MDExOlB1bGxSZXF1ZXN0NDI4NjgxMTcx","number":244,"title":"Add Allocin\u00e9 
Dataset","user":{"login":"TheophileBlard","id":37028092,"node_id":"MDQ6VXNlcjM3MDI4MDky","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/37028092?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/TheophileBlard","html_url":"https:\/\/github.com\/TheophileBlard","followers_url":"https:\/\/api.github.com\/users\/TheophileBlard\/followers","following_url":"https:\/\/api.github.com\/users\/TheophileBlard\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/TheophileBlard\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/TheophileBlard\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/TheophileBlard\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/TheophileBlard\/orgs","repos_url":"https:\/\/api.github.com\/users\/TheophileBlard\/repos","events_url":"https:\/\/api.github.com\/users\/TheophileBlard\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/TheophileBlard\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["great work @TheophileBlard ","LGTM, thanks a lot for adding dummy data tests :-) Was it difficult to create the correct dummy data folder? ","It was pretty easy actually. Documentation is on point !"],"created_at":1591384766000,"updated_at":1591861646000,"closed_at":1591861646000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/244","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/244","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/244.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/244.patch"},"body":"This is a french binary sentiment classification dataset, which was used to train this model: https:\/\/huggingface.co\/tblard\/tf-allocine.\r\n\r\nBasically, it's a french \"IMDB\" dataset, with more reviews.\r\n\r\nMore info on [this repo](https:\/\/github.com\/TheophileBlard\/french-sentiment-analysis-with-bert). 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/244\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/243","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/243\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/243\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/243\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/243","id":631735848,"node_id":"MDExOlB1bGxSZXF1ZXN0NDI4NTY2MTEy","number":243,"title":"Specify utf-8 encoding for GLUE","user":{"login":"patpizio","id":15801338,"node_id":"MDQ6VXNlcjE1ODAxMzM4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15801338?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patpizio","html_url":"https:\/\/github.com\/patpizio","followers_url":"https:\/\/api.github.com\/users\/patpizio\/followers","following_url":"https:\/\/api.github.com\/users\/patpizio\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patpizio\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patpizio\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patpizio\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patpizio\/orgs","repos_url":"https:\/\/api.github.com\/users\/patpizio\/repos","events_url":"https:\/\/api.github.com\/users\/patpizio\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patpizio\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for fixing the encoding :)"],"created_at":1591374780000,"updated_at":1592428566000,"closed_at":1591605721000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/243","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/243","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/243.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/243.patch"},"body":"#242 \r\nThis makes the GLUE-MNLI dataset readable on my machine, not sure if it's a Windows-only bug.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/243\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/242","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/242\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/242\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/242\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/242","id":631733683,"node_id":"MDU6SXNzdWU2MzE3MzM2ODM=","number":242,"title":"UnicodeDecodeError when downloading 
GLUE-MNLI","user":{"login":"patpizio","id":15801338,"node_id":"MDQ6VXNlcjE1ODAxMzM4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15801338?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patpizio","html_url":"https:\/\/github.com\/patpizio","followers_url":"https:\/\/api.github.com\/users\/patpizio\/followers","following_url":"https:\/\/api.github.com\/users\/patpizio\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patpizio\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patpizio\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patpizio\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patpizio\/orgs","repos_url":"https:\/\/api.github.com\/users\/patpizio\/repos","events_url":"https:\/\/api.github.com\/users\/patpizio\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patpizio\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["It should be good now, thanks for noticing and fixing it ! I would say that it was because you are on windows but not 100% sure","On Windows Python supports Unicode almost everywhere, but one of the notable exceptions is open() where it uses the locale encoding schema. So platform independent python scripts would always set the encoding='utf-8' in calls to open explicitly. \r\nIn the meantime: since Python 3.7 Windows users can set the default encoding for everything including open() to Unicode by setting this environment variable: set PYTHONUTF8=1 (details can be found in [PEP 540](https:\/\/www.python.org\/dev\/peps\/pep-0540\/))\r\n\r\nFor me this fixed the problem described by the OP."],"created_at":1591374601000,"updated_at":1591718807000,"closed_at":1591605903000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"When I run\r\n```python\r\ndataset = nlp.load_dataset('glue', 'mnli')\r\n```\r\nI get an encoding error (could it be because I'm using Windows?) :\r\n```python\r\n# Lots of error log lines later...\r\n~\\Miniconda3\\envs\\nlp\\lib\\site-packages\\tqdm\\std.py in __iter__(self)\r\n 1128 try:\r\n-> 1129 for obj in iterable:\r\n 1130 yield obj\r\n\r\n~\\Miniconda3\\envs\\nlp\\lib\\site-packages\\nlp\\datasets\\glue\\5256cc2368cf84497abef1f1a5f66648522d5854b225162148cb8fc78a5a91cc\\glue.py in _generate_examples(self, data_file, split, mrpc_files)\r\n 529 \r\n--> 530 for n, row in enumerate(reader):\r\n 531 if is_cola_non_test:\r\n\r\n~\\Miniconda3\\envs\\nlp\\lib\\csv.py in __next__(self)\r\n 110 self.fieldnames\r\n--> 111 row = next(self.reader)\r\n 112 self.line_num = self.reader.line_num\r\n\r\n~\\Miniconda3\\envs\\nlp\\lib\\encodings\\cp1252.py in decode(self, input, final)\r\n 22 def decode(self, input, final=False):\r\n---> 23 return codecs.charmap_decode(input,self.errors,decoding_table)[0]\r\n 24 \r\n\r\nUnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 6744: character maps to <undefined>\r\n```\r\nAnyway this can be solved by specifying to decode in UTF when reading the csv file. 
I am proposing a PR if that's okay.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/242\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/241","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/241\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/241\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/241\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/241","id":631703079,"node_id":"MDExOlB1bGxSZXF1ZXN0NDI4NTQwMDM0","number":241,"title":"Fix empty cache dir","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Looks great! Will this change force all cached datasets to be redownloaded? But even if it does, it should not be a big problem, I think","> Looks great! Will this change force all cached datasets to be redownloaded? But even if it does, it should not be a big problem, I think\r\n\r\nNo, it shouldn't force a redownload."],"created_at":1591371922000,"updated_at":1591605333000,"closed_at":1591605331000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/241","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/241","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/241.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/241.patch"},"body":"If the cache dir of a dataset is empty, the dataset fails to load and throws a FileNotFoundError. We could end up with an empty cache dir because there was a line in the code that created the cache dir without using a temp dir. Using a temp dir is useful as it gets renamed to the real cache dir only if the full process is successful.\r\n\r\nSo I removed this bad line, and I also reordered things a bit to make sure that we always use a temp dir. 
I also added warning if we still end up with empty cache dirs in the future.\r\n\r\nThis should fix #239\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/241\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/240","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/240\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/240\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/240\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/240","id":631434677,"node_id":"MDU6SXNzdWU2MzE0MzQ2Nzc=","number":240,"title":"Deterministic dataset loading","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Yes good point !","I think using `sorted(glob.glob())` would actually solve this problem. Can you think of other reasons why dataset loading might not be deterministic? @mariamabarham @yjernite @lhoestq @thomwolf . \r\n\r\nI can do a sweep through the dataset scripts and fix the glob.glob() if you guys are ok with it","I'm pretty sure it would solve the problem too.\r\n\r\nThe only other dataset that is not deterministic right now is `blog_authorship_corpus` (see #215) but this is a problem related to string encodings.","I think we should do the same also for `os.list_dir`"],"created_at":1591347806000,"updated_at":1591607894000,"closed_at":1591607894000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"When calling:\r\n```python \r\nimport nlp\r\ndataset = nlp.load_dataset(\"trivia_qa\", split=\"validation[:1%]\")\r\n```\r\n\r\nthe resulting dataset is not deterministic over different google colabs. 
\r\nAfter talking to @thomwolf, I suspect the reason to be the use of `glob.glob` in line:\r\n\r\nhttps:\/\/github.com\/huggingface\/nlp\/blob\/2e0a8639a79b1abc848cff5c669094d40bba0f63\/datasets\/trivia_qa\/trivia_qa.py#L180\r\n\r\nwhich seems to return an ordering of files that depends on the filesystem:\r\nhttps:\/\/stackoverflow.com\/questions\/6773584\/how-is-pythons-glob-glob-ordered\r\n\r\nI think we should go through all the dataset scripts and make sure to have deterministic behavior.\r\n\r\nA simple solution for `glob.glob()` would be to just replace it with `sorted(glob.glob())` to have everything sorted by name. \r\n\r\nWhat do you think @lhoestq?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/240\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/239","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/239\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/239\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/239\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/239","id":631340440,"node_id":"MDU6SXNzdWU2MzEzNDA0NDA=","number":239,"title":"[Creating new dataset] Not found dataset_info.json","user":{"login":"richarddwang","id":17963619,"node_id":"MDQ6VXNlcjE3OTYzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17963619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/richarddwang","html_url":"https:\/\/github.com\/richarddwang","followers_url":"https:\/\/api.github.com\/users\/richarddwang\/followers","following_url":"https:\/\/api.github.com\/users\/richarddwang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/richarddwang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/richarddwang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/richarddwang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/richarddwang\/orgs","repos_url":"https:\/\/api.github.com\/users\/richarddwang\/repos","events_url":"https:\/\/api.github.com\/users\/richarddwang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/richarddwang\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcj
QyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["I think you can just `rm` this directory and it should be good :)","@lhoestq - this seems to happen quite often (already the 2nd issue). Can we maybe delete this automatically?","Yes I have an idea of what's going on. I'm sure I can fix that","Hi, I rebase my local copy to `fix-empty-cache-dir`, and try to run again `python nlp-cli test datasets\/bookcorpus --save_infos --all_configs`.\r\n\r\nI got this, \r\n```\r\nTraceback (most recent call last):\r\n File \"nlp-cli\", line 10, in <module>\r\n from nlp.commands.run_beam import RunBeamCommand\r\n File \"\/home\/yisiang\/nlp\/src\/nlp\/commands\/run_beam.py\", line 6, in <module>\r\n import apache_beam as beam\r\nModuleNotFoundError: No module named 'apache_beam'\r\n```\r\nAnd after I installed it. I got this\r\n```\r\nFile \"\/home\/yisiang\/nlp\/src\/nlp\/datasets\/bookcorpus\/aea0bd5142d26df645a8fce23d6110bb95ecb81772bb2a1f29012e329191962c\/bookcorpus.py\", line 88, in _split_generators\r\n downloaded_path_or_paths = dl_manager.download_custom(_GDRIVE_FILE_ID, download_file_from_google_drive)\r\n File \"\/home\/yisiang\/nlp\/src\/nlp\/utils\/download_manager.py\", line 128, in download_custom\r\n downloaded_path_or_paths = map_nested(url_to_downloaded_path, url_or_urls)\r\n File \"\/home\/yisiang\/nlp\/src\/nlp\/utils\/py_utils.py\", line 172, in map_nested\r\n return function(data_struct)\r\n File \"\/home\/yisiang\/nlp\/src\/nlp\/utils\/download_manager.py\", line 126, in url_to_downloaded_path\r\n return os.path.join(self._download_config.cache_dir, hash_url_to_filename(url))\r\n File \"\/home\/yisiang\/miniconda3\/envs\/nlppr\/lib\/python3.7\/posixpath.py\", line 80, in join\r\n a = os.fspath(a)\r\n```\r\nThe problem is when I print `self._download_config.cache_dir` using pdb, it is `None`.\r\n\r\nDid I miss something ? Or can you provide a workaround first so I can keep testing my script ?","I'll close this issue because I brings more reports in another issue #249 ."],"created_at":1591337704000,"updated_at":1591534864000,"closed_at":1591534864000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi, I am trying to create Toronto Book Corpus. 
#131 \r\n\r\nI ran\r\n`~\/nlp % python nlp-cli test datasets\/bookcorpus --save_infos --all_configs`\r\nbut this doesn't create `dataset_info.json` and try to use it\r\n```\r\nINFO:nlp.load:Checking datasets\/bookcorpus\/bookcorpus.py for additional imports.\r\nINFO:filelock:Lock 139795325778640 acquired on datasets\/bookcorpus\/bookcorpus.py.lock\r\nINFO:nlp.load:Found main folder for dataset datasets\/bookcorpus\/bookcorpus.py at \/home\/yisiang\/miniconda3\/envs\/ml\/lib\/python3.7\/site-packages\/nlp\/datasets\/bookcorpus\r\nINFO:nlp.load:Found specific version folder for dataset datasets\/bookcorpus\/bookcorpus.py at \/home\/yisiang\/miniconda3\/envs\/ml\/lib\/python3.7\/site-packages\/nlp\/datasets\/bookcorpus\/8e84759446cf68d0b0deb3417e60cc331f30a3bbe58843de18a0f48e87d1efd9\r\nINFO:nlp.load:Found script file from datasets\/bookcorpus\/bookcorpus.py to \/home\/yisiang\/miniconda3\/envs\/ml\/lib\/python3.7\/site-packages\/nlp\/datasets\/bookcorpus\/8e84759446cf68d0b0deb3417e60cc331f30a3bbe58843de18a0f48e87d1efd9\/bookcorpus.py\r\nINFO:nlp.load:Couldn't find dataset infos file at datasets\/bookcorpus\/dataset_infos.json\r\nINFO:nlp.load:Found metadata file for dataset datasets\/bookcorpus\/bookcorpus.py at \/home\/yisiang\/miniconda3\/envs\/ml\/lib\/python3.7\/site-packages\/nlp\/datasets\/bookcorpus\/8e84759446cf68d0b0deb3417e60cc331f30a3bbe58843de18a0f48e87d1efd9\/bookcorpus.json\r\nINFO:filelock:Lock 139795325778640 released on datasets\/bookcorpus\/bookcorpus.py.lock\r\nINFO:nlp.builder:Overwrite dataset info from restored data version.\r\nINFO:nlp.info:Loading Dataset info from \/home\/yisiang\/.cache\/huggingface\/datasets\/book_corpus\/plain_text\/1.0.0\r\nTraceback (most recent call last):\r\n File \"nlp-cli\", line 37, in <module>\r\n service.run()\r\n File \"\/home\/yisiang\/miniconda3\/envs\/ml\/lib\/python3.7\/site-packages\/nlp\/commands\/test.py\", line 78, in run\r\n builders.append(builder_cls(name=config.name, data_dir=self._data_dir))\r\n File \"\/home\/yisiang\/miniconda3\/envs\/ml\/lib\/python3.7\/site-packages\/nlp\/builder.py\", line 610, in __init__\r\n super(GeneratorBasedBuilder, self).__init__(*args, **kwargs)\r\n File \"\/home\/yisiang\/miniconda3\/envs\/ml\/lib\/python3.7\/site-packages\/nlp\/builder.py\", line 152, in __init__\r\n self.info = DatasetInfo.from_directory(self._cache_dir)\r\n File \"\/home\/yisiang\/miniconda3\/envs\/ml\/lib\/python3.7\/site-packages\/nlp\/info.py\", line 157, in from_directory\r\n with open(os.path.join(dataset_info_dir, DATASET_INFO_FILENAME), \"r\") as f:\r\nFileNotFoundError: [Errno 2] No such file or directory: '\/home\/yisiang\/.cache\/huggingface\/datasets\/book_corpus\/plain_text\/1.0.0\/dataset_info.json'\r\n```\r\nbtw, `ls \/home\/yisiang\/.cache\/huggingface\/datasets\/book_corpus\/plain_text\/1.0.0\/` show me nothing is in the directory.\r\n\r\nI have also pushed the script to my fork [bookcorpus.py](https:\/\/github.com\/richardyy1188\/nlp\/blob\/bookcorpusdev\/datasets\/bookcorpus\/bookcorpus.py).\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/239\/timeline","performed_via_github_app":null,"is_pull_request":false} 
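The workaround suggested in the comments above (delete the stale, empty cache directory and re-run `python nlp-cli test datasets/bookcorpus --save_infos --all_configs`) can also be scripted. Below is a minimal sketch, assuming the cache path shown in the traceback; `clear_empty_cache_dir` is a hypothetical helper for illustration, not part of the `nlp` library:

```python
# Minimal sketch (not part of the `nlp` API): remove an empty, stale cache
# directory so that `python nlp-cli test ... --save_infos --all_configs`
# can rebuild it from scratch, as suggested in the comments above.
# The path below is taken from the traceback and is only an example.
import os
import shutil

def clear_empty_cache_dir(cache_dir: str) -> None:
    """Delete cache_dir only if it exists and contains no files."""
    if os.path.isdir(cache_dir) and not os.listdir(cache_dir):
        shutil.rmtree(cache_dir)

clear_empty_cache_dir(
    os.path.expanduser("~/.cache/huggingface/datasets/book_corpus/plain_text/1.0.0")
)
```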
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/238","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/238\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/238\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/238\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/238","id":631260143,"node_id":"MDU6SXNzdWU2MzEyNjAxNDM=","number":238,"title":"[Metric] Bertscore : Warning : Empty candidate sentence; Setting recall to be 0.","user":{"login":"astariul","id":43774355,"node_id":"MDQ6VXNlcjQzNzc0MzU1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/43774355?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/astariul","html_url":"https:\/\/github.com\/astariul","followers_url":"https:\/\/api.github.com\/users\/astariul\/followers","following_url":"https:\/\/api.github.com\/users\/astariul\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/astariul\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/astariul\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/astariul\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/astariul\/orgs","repos_url":"https:\/\/api.github.com\/users\/astariul\/repos","events_url":"https:\/\/api.github.com\/users\/astariul\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/astariul\/received_events","type":"User","site_admin":false},"labels":[{"id":2067393914,"node_id":"MDU6TGFiZWwyMDY3MzkzOTE0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/metric%20bug","name":"metric bug","color":"25b21e","default":false,"description":"A bug in a metric script"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["This print statement comes from the official implementation of bert_score (see [here](https:\/\/github.com\/Tiiiger\/bert_score\/blob\/master\/bert_score\/utils.py#L343)). The warning shows up only if the attention mask outputs no candidate.\r\nRight now we want to only use official code for metrics to have fair evaluations, so I'm not sure we can do anything about it. 
Maybe you can try to create an issue on their [repo](https:\/\/github.com\/Tiiiger\/bert_score) ?"],"created_at":1591323287000,"updated_at":1593450619000,"closed_at":1593450619000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"When running BERT-Score, I'm meeting this warning :\r\n\r\n> Warning: Empty candidate sentence; Setting recall to be 0.\r\n\r\nCode :\r\n\r\n```\r\nimport nlp\r\nmetric = nlp.load_metric(\"bertscore\")\r\nscores = metric.compute([\"swag\", \"swags\"], [\"swags\", \"totally something different\"], lang=\"en\", device=0)\r\n```\r\n\r\n---\r\n\r\n**What am I doing wrong \/ How can I hide this warning ?**","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/238\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/237","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/237\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/237\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/237\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/237","id":631199940,"node_id":"MDU6SXNzdWU2MzExOTk5NDA=","number":237,"title":"Can't download MultiNLI","user":{"login":"patpizio","id":15801338,"node_id":"MDQ6VXNlcjE1ODAxMzM4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/15801338?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patpizio","html_url":"https:\/\/github.com\/patpizio","followers_url":"https:\/\/api.github.com\/users\/patpizio\/followers","following_url":"https:\/\/api.github.com\/users\/patpizio\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patpizio\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patpizio\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patpizio\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patpizio\/orgs","repos_url":"https:\/\/api.github.com\/users\/patpizio\/repos","events_url":"https:\/\/api.github.com\/users\/patpizio\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patpizio\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["You should use `load_dataset('glue', 'mnli')`","Thanks! I thought I had to use the same code displayed in the live viewer:\r\n```python\r\n!pip install nlp\r\nfrom nlp import load_dataset\r\ndataset = load_dataset('multi_nli', 'plain_text')\r\n```\r\nYour suggestion works, even if then I got a different issue (#242). 
","Glad it helps !\nThough I am not one of hf team, but maybe you should close this issue first."],"created_at":1591311921000,"updated_at":1591440694000,"closed_at":1591440694000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"When I try to download MultiNLI with \r\n```python\r\ndataset = load_dataset('multi_nli')\r\n```\r\n\r\nI get this long error:\r\n```python\r\n---------------------------------------------------------------------------\r\nOSError Traceback (most recent call last)\r\n<ipython-input-13-3b11f6be4cb9> in <module>\r\n 1 # Load a dataset and print the first examples in the training set\r\n 2 # nli_dataset = nlp.load_dataset('multi_nli')\r\n----> 3 dataset = load_dataset('multi_nli')\r\n 4 # nli_dataset = nlp.load_dataset('multi_nli', split='validation_matched[:10%]')\r\n 5 # print(nli_dataset['train'][0])\r\n\r\n~\\Miniconda3\\envs\\nlp\\lib\\site-packages\\nlp\\load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 514 \r\n 515 # Download and prepare data\r\n--> 516 builder_instance.download_and_prepare(\r\n 517 download_config=download_config,\r\n 518 download_mode=download_mode,\r\n\r\n~\\Miniconda3\\envs\\nlp\\lib\\site-packages\\nlp\\builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 417 with utils.temporary_assignment(self, \"_cache_dir\", tmp_data_dir):\r\n 418 verify_infos = not save_infos and not ignore_verifications\r\n--> 419 self._download_and_prepare(\r\n 420 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 421 )\r\n\r\n~\\Miniconda3\\envs\\nlp\\lib\\site-packages\\nlp\\builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 455 split_dict = SplitDict(dataset_name=self.name)\r\n 456 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 457 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 458 # Checksums verification\r\n 459 if verify_infos:\r\n\r\n~\\Miniconda3\\envs\\nlp\\lib\\site-packages\\nlp\\datasets\\multi_nli\\60774175381b9f3f1e6ae1028229e3cdb270d50379f45b9f2c01008f50f09e6b\\multi_nli.py in _split_generators(self, dl_manager)\r\n 99 def _split_generators(self, dl_manager):\r\n 100 \r\n--> 101 downloaded_dir = dl_manager.download_and_extract(\r\n 102 \"http:\/\/storage.googleapis.com\/tfds-data\/downloads\/multi_nli\/multinli_1.0.zip\"\r\n 103 )\r\n\r\n~\\Miniconda3\\envs\\nlp\\lib\\site-packages\\nlp\\utils\\download_manager.py in download_and_extract(self, url_or_urls)\r\n 214 extracted_path(s): `str`, extracted paths of given URL(s).\r\n 215 \"\"\"\r\n--> 216 return self.extract(self.download(url_or_urls))\r\n 217 \r\n 218 def get_recorded_sizes_checksums(self):\r\n\r\n~\\Miniconda3\\envs\\nlp\\lib\\site-packages\\nlp\\utils\\download_manager.py in extract(self, path_or_paths)\r\n 194 path_or_paths.\r\n 195 \"\"\"\r\n--> 196 return map_nested(\r\n 197 lambda path: cached_path(path, extract_compressed_file=True, force_extract=False), path_or_paths,\r\n 198 )\r\n\r\n~\\Miniconda3\\envs\\nlp\\lib\\site-packages\\nlp\\utils\\py_utils.py in map_nested(function, data_struct, dict_only, map_tuple)\r\n 168 return tuple(mapped)\r\n 169 # Singleton\r\n--> 170 return function(data_struct)\r\n 171 \r\n 172 
\r\n\r\n~\\Miniconda3\\envs\\nlp\\lib\\site-packages\\nlp\\utils\\download_manager.py in <lambda>(path)\r\n 195 \"\"\"\r\n 196 return map_nested(\r\n--> 197 lambda path: cached_path(path, extract_compressed_file=True, force_extract=False), path_or_paths,\r\n 198 )\r\n 199 \r\n\r\n~\\Miniconda3\\envs\\nlp\\lib\\site-packages\\nlp\\utils\\file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)\r\n 231 if is_zipfile(output_path):\r\n 232 with ZipFile(output_path, \"r\") as zip_file:\r\n--> 233 zip_file.extractall(output_path_extracted)\r\n 234 zip_file.close()\r\n 235 elif tarfile.is_tarfile(output_path):\r\n\r\n~\\Miniconda3\\envs\\nlp\\lib\\zipfile.py in extractall(self, path, members, pwd)\r\n 1644 \r\n 1645 for zipinfo in members:\r\n-> 1646 self._extract_member(zipinfo, path, pwd)\r\n 1647 \r\n 1648 @classmethod\r\n\r\n~\\Miniconda3\\envs\\nlp\\lib\\zipfile.py in _extract_member(self, member, targetpath, pwd)\r\n 1698 \r\n 1699 with self.open(member, pwd=pwd) as source, \\\r\n-> 1700 open(targetpath, \"wb\") as target:\r\n 1701 shutil.copyfileobj(source, target)\r\n 1702 \r\n\r\nOSError: [Errno 22] Invalid argument: 'C:\\\\Users\\\\Python\\\\.cache\\\\huggingface\\\\datasets\\\\3e12413b8ec69f22dfcfd54a79d1ba9e7aac2e18e334bbb6b81cca64fd16bffc\\\\multinli_1.0\\\\Icon\\r'\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/237\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/236","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/236\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/236\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/236\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/236","id":631099875,"node_id":"MDExOlB1bGxSZXF1ZXN0NDI4MDUwNzI4","number":236,"title":"CompGuessWhat?! dataset ","user":{"login":"aleSuglia","id":1479733,"node_id":"MDQ6VXNlcjE0Nzk3MzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1479733?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/aleSuglia","html_url":"https:\/\/github.com\/aleSuglia","followers_url":"https:\/\/api.github.com\/users\/aleSuglia\/followers","following_url":"https:\/\/api.github.com\/users\/aleSuglia\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/aleSuglia\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/aleSuglia\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/aleSuglia\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/aleSuglia\/orgs","repos_url":"https:\/\/api.github.com\/users\/aleSuglia\/repos","events_url":"https:\/\/api.github.com\/users\/aleSuglia\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/aleSuglia\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @aleSuglia, thanks for this great PR. Indeed you can have both datasets in one file. 
You need to add a config class which will allows you to specify the different subdataset names and then you will be able to load them as follow.\r\nnlp.load_dataset(\"compguesswhat\", \"compguesswhat-gameplay\") \r\nnlp.load_dataset(\"compguesswhat\", \"compguesswhat-zs-gameplay\").\r\n\r\nMaybe you can refer to this file https:\/\/github.com\/huggingface\/nlp\/blob\/master\/datasets\/discofuse\/discofuse.py","@mariamabarham Thanks for your suggestions. I've followed your advice and integrated the additional dataset using another `DatasetConfig` class. It looks like all tests passed. What do you think?","great @aleSuglia. I requested an additional review from @thomwolf @lhoestq and @patrickvonplaten @jplu . You can merge it after an approval from one of them","Looks great! Thanks for adding the dummy data :-) ","Not sure whether it's the most appropriate place but I'll ask another design question. For Vision+Language dataset, is very common to have visual features associated with each example. At the moment, for instance, I'm only integrating the image identifier so that people can later on lookup the image features during training. Do you recommend this approach or do you think it should be done in a different way?\r\n\r\nThank you for your answer!","Hi @aleSuglia your remark on the visual features is a good point.\r\n\r\nWe haven't started to dive deeply into how CV datasets are usually structured (cc @sgugger)\r\n\r\nDo you have a pointer to how visual features are currently loaded and accessed by people using GuessCompWhat? ","@thomwolf As far as I know, people using Language+Vision tasks they typically have their reference dataset (either in JSON or JSONL format) and for each example in it they have an identifier that specifies the reference image. Currently, images are represented by either pooling-based visual features (average pooling of ResNet or VGGNet features, see [DeVries et.al, 2017](https:\/\/arxiv.org\/abs\/1611.08481), [Shekhar et.al, 2019](https:\/\/www.aclweb.org\/anthology\/N19-1265.pdf)) where you have a single vector for every image. Another option is to use a set of feature maps for every image extracted from a specific layer of a CNN (see [Xu et.al, 2015](https:\/\/arxiv.org\/abs\/1502.03044)). A more common and recent option, especially with large-scale multi-modal transformers [Li et. al, 2019](https:\/\/arxiv.org\/abs\/1908.03557), is to use FastRCNN features. \r\n\r\nFor all these types of features, people use either HD5F or NumPy compressed representations. In my personal projects, I've ditched altogether HD5F because it doesn't have out-of-the-box support for multi-processing (unless you have an ad-hoc installation of it). I've been successfully using a NumPy compressed file for each image so that I can store any sort of information in it (see [numpy.savez](https:\/\/numpy.org\/doc\/stable\/reference\/generated\/numpy.savez.html)). However, I believe that Apache Arrow would be a really good fit for this type of features. \r\n\r\nLooking forward to hearing your thoughts about it!","Awesome work on this one thanks :)","@thomwolf I was thinking that I should create an issue regarding the visual features so that we can keep track of it for future work. I think it would be great to have it in NLP and I'll be happy to contribute. 
Let me know what you think :) "],"created_at":1591299950000,"updated_at":1591868622000,"closed_at":1591861521000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/236","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/236","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/236.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/236.patch"},"body":"Hello,\r\n\r\nThanks for the amazing library that you put together. I'm Alessandro Suglia, the first author of CompGuessWhat?!, a recently released dataset for grounded language learning accepted to ACL 2020 ([https:\/\/compguesswhat.github.io](https:\/\/compguesswhat.github.io)).\r\n\r\nThis pull-request adds the CompGuessWhat?! splits that have been extracted from the original dataset. This is only part of our evaluation framework because there is also an additional split of the dataset that has a completely different set of games. I didn't integrate it yet because I didn't know what would be the best practice in this case. Let me clarify the scenario.\r\n\r\nIn our paper, we have a main dataset (let's call it `compguesswhat-gameplay`) and a zero-shot dataset (let's call it `compguesswhat-zs-gameplay`). In the current code of the pull-request, I have only integrated `compguesswhat-gameplay`. I was thinking that it would be nice to have the `compguesswhat-zs-gameplay` in the same dataset class by simply specifying some particular option to the `nlp.load_dataset()` factory. For instance:\r\n\r\n```python\r\n\r\ncgw = nlp.load_dataset(\"compguesswhat\")\r\ncgw_zs = nlp.load_dataset(\"compguesswhat\", zero_shot=True)\r\n```\r\n\r\nThe other option would be to have a separate dataset class. Any preferences? 
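For reference, a minimal sketch of the configuration-based alternative suggested in the review comments above; the config names ("compguesswhat-gameplay" and "compguesswhat-zs-gameplay") are the reviewer's proposal and may differ in the final dataset script:

```python
# Illustrative only: loading the two CompGuessWhat?! subsets by config name,
# following the reviewer's suggestion above (names are assumptions).
import nlp

gameplay = nlp.load_dataset("compguesswhat", "compguesswhat-gameplay")
zs_gameplay = nlp.load_dataset("compguesswhat", "compguesswhat-zs-gameplay")
```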
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/236\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/235","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/235\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/235\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/235\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/235","id":630952297,"node_id":"MDExOlB1bGxSZXF1ZXN0NDI3OTM1MjQ0","number":235,"title":"Add experimental datasets","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I think it would be nicer to not create a new folder `datasets_experimental` , but just put your datasets also into the folder `datasets` for the following reasons:\r\n\r\n- From my point of view, the datasets are not very different from the other datasets (assuming that we soon have C4, and the beam datasets) so I don't see why we require a new dataset folder\r\n\r\n- I'm not a big fan of adding a boolean flag to the `load_dataset()` function that basically switches between folder names on S3. The user has to know whether a dataset script is experimental or not. User installing nlp with pip won't see that there are folders called `datasets` and `datasets_experimental`\r\n\r\n- If we do this just to bypass the test, I think a good solution could be: For all tests that are too complicated to be currently tested with the testing framework, we can add a class variable called `do_test = False` to the dataset builder class and a default `do_test = True` to the abstract dataset class and skip all tests that have that variable in the dataset test framework similar to what is done to beam datasets: https:\/\/github.com\/huggingface\/nlp\/blob\/2e0a8639a79b1abc848cff5c669094d40bba0f63\/tests\/test_dataset_common.py#L79 \r\nWe can also print a warning for all dataset tests having `do_test = False`. 
This variable would only concern testing and we would not have a problem removing it at a later stage IMO.\r\n\r\n- This way the datascripts are backward compatible and can be used with earlier versions of `nlp` (not that this matters too much atm) \r\n\r\nWhat is your opinion on this @lhoestq @thomwolf ?","Very cool to have add those datasets :)\r\nI understand that making the dummy data for this case is not fun. I'm sure we'll be able to add them soon. For now it's still interesting to have them in the library, even if we can't test all the code with dummy data.\r\n\r\nI like the idea of the `do_tests=False` class variable. \r\nHowever it would be cool to test at least that we can load the module and instantiate the builder (only ignore the dummy data test for now). In that case a better name could be `test_dummy_data=False` or something like that.\r\n\r\nIf we want to be picky we can also add a warning in `_download_and_prepare` to tell the user that datasets with `test_dummy_data=False` are still experimental.","Yeah I really like the idea of a partial test.\r\n\r\nMy main concern with the class variable is visibility, but having a warning would help with that. Maybe even get the user to agree > \"are you sure you want to go ahead?\"","> Very cool to have add those datasets :)\r\n> I understand that making the dummy data for this case is not fun. I'm sure we'll be able to add them soon. For now it's still interesting to have them in the library, even if we can't test all the code with dummy data.\r\n> \r\n> I like the idea of the `do_tests=False` class variable.\r\n> However it would be cool to test at least that we can load the module and instantiate the builder (only ignore the dummy data test for now). In that case a better name could be `test_dummy_data=False` or something like that.\r\n> \r\n> If we want to be picky we can also add a warning in `_download_and_prepare` to tell the user that datasets with `test_dummy_data=False` are still experimental.\r\n\r\n`test_dummy_data=False` sounds good to me!","There we go: added a `test_dummy_data` class variable that is `False` by default for the `BeamBasedBuilder` and `True` for everyone else (except the new `explainlikeimfive` and `wiki_snippets`)\r\n\r\nNote that `wiki_snippets` should become obsolete as soon as @lhoestq adds in the `IndexedDataset` class","Great! LGTM!"],"created_at":1591286096000,"updated_at":1591976335000,"closed_at":1591976335000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/235","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/235","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/235.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/235.patch"},"body":"## Adding an *experimental datasets* folder\r\n\r\nAfter using the \ud83e\udd17nlp library for some time, I find that while it makes it super easy to create new memory-mapped datasets with lots of cool utilities, a lot of what I want to do doesn't work well with the current `MockDownloader` based testing paradigm, making it hard to share my work with the community.\r\n\r\nMy suggestion would be to add a **datasets\\_experimental** folder so we can start making these new datasets public without having to completely re-think testing for every single one. We would allow contributors to submit dataset PRs in this folder, but require an explanation for why the current testing suite doesn't work for them. 
We can then aggregate the feedback and periodically see what's missing from the current tests.\r\n\r\nI have added a **datasets\\_experimental** folder to the repository and S3 bucket with two initial datasets: ELI5 (explainlikeimfive) and a Wikipedia Snippets dataset to support indexing (wiki\\_snippets)\r\n\r\n### ELI5\r\n#### Dataset description\r\nThis allows people to download the [ELI5: Long Form Question Answering](https:\/\/arxiv.org\/abs\/1907.09190) dataset, along with two variants based on the r\/askscience and r\/AskHistorians. Full Reddit dumps for each month are downloaded from [pushshift](https:\/\/files.pushshift.io\/reddit\/), filtered for submissions and comments from the desired subreddits, then deleted one at a time to save space. The resulting dataset is split into a training, validation, and test dataset for r\/explainlikeimfive, r\/askscience, and r\/AskHistorians respectively, where each item is a question along with all of its high scoring answers.\r\n\r\n#### Issues with the current testing\r\n1. the list of files to be downloaded is not pre-defined, but rather determined by parsing an index web page at run time. This is necessary as the name and compression type of the dump files changes from month to month as the pushshift website is maintained. Currently, the dummy folder requires the user to know which files will be downloaded.\r\n2. to save time, the script works on the compressed files using the corresponding python packages rather than first running `download\\_and\\_extract` then filtering the extracted files. \r\n\r\n### Wikipedia Snippets\r\n#### Dataset description\r\nThis script creates a *snippets* version of a source Wikipedia dataset: each article is split into passages of fixed length which can then be indexed using ElasticSearch or a dense indexer. The script currently handles all **wikipedia** and **wiki40b** source datasets, and allows the user to choose the passage length and how much overlap they want across passages. In addition to the passage text, each snippet also has the article title, list of titles of sections covered by the text, and information to map the passage back to the initial dataset at the paragraph and character level.\r\n\r\n#### Issues with the current testing\r\n1. The DatasetBuilder needs to call `nlp.load_dataset()`. 
Currently, testing is not recursive (the test doesn't know where to find the dummy data for the source dataset)\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/235\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/234","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/234\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/234\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/234\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/234","id":630534427,"node_id":"MDU6SXNzdWU2MzA1MzQ0Mjc=","number":234,"title":"Huggingface NLP, Uploading custom dataset","user":{"login":"Nouman97","id":42269506,"node_id":"MDQ6VXNlcjQyMjY5NTA2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42269506?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Nouman97","html_url":"https:\/\/github.com\/Nouman97","followers_url":"https:\/\/api.github.com\/users\/Nouman97\/followers","following_url":"https:\/\/api.github.com\/users\/Nouman97\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Nouman97\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Nouman97\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Nouman97\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Nouman97\/orgs","repos_url":"https:\/\/api.github.com\/users\/Nouman97\/repos","events_url":"https:\/\/api.github.com\/users\/Nouman97\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Nouman97\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["What do you mean 'custom' ? You may want to elaborate on it when ask a question.\r\n\r\nAnyway, there are two things you may interested\r\n`nlp.Dataset.from_file` and `load_dataset(..., cache_dir=)`","To load a dataset you need to have a script that defines the format of the examples, the splits and the way to generate examples. As your dataset has the same format of squad, you can just copy the squad script (see the [datasets](https:\/\/github.com\/huggingface\/nlp\/tree\/master\/datasets) forlder) and just replace the url to load the data to your local or remote path.\r\n\r\nThen what you can do is `load_dataset(<path\/to\/your\/script>)`","Also if you want to upload your script, you should be able to use the `nlp-cli`.\r\n\r\nUnfortunately the upload feature was not shipped in the latest version 0.2.0. so right now you can either clone the repo to use it or wait for the next release. 
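As a rough sketch of the "copy the squad script" route suggested above (assuming `nlp` ~0.2.x; the folder and file names are hypothetical):

```python
import nlp

# Hypothetical: ./my_squad_like/my_squad_like.py is a copy of datasets/squad/squad.py
# whose download URLs have been replaced with paths to your own SQuAD-format JSON files.
dataset = nlp.load_dataset("./my_squad_like")

# The script defines the splits, so the usual split dictionary comes back.
print(dataset["train"][0]["question"])
```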
We will add some docs to explain how to upload datasets.\r\n","Since the latest release 0.2.1 you can use \r\n```bash\r\nnlp-cli upload_dataset <path\/to\/dataset>\r\n```\r\nwhere `<path\/to\/dataset>` is a path to a folder containing your script (ex: `squad.py`).\r\nThis will upload the script under your namespace on our S3.\r\n\r\nOptionally the folder can also contain `dataset_infos.json` generated using\r\n```bash\r\nnlp-cli test <path\/to\/dataset> --all_configs --save_infos\r\n```\r\n\r\nThen you should be able to do\r\n```python\r\nnlp.load_dataset(\"my_namespace\/dataset_name\")\r\n```"],"created_at":1591250346000,"updated_at":1594028006000,"closed_at":1594028006000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hello,\r\n\r\nDoes anyone know how we can call our custom dataset using the nlp.load command? Let's say that I have a dataset based on the same format as that of squad-v1.1, how am I supposed to load it using huggingface nlp.\r\n\r\nThank you!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/234\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/233","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/233\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/233\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/233\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/233","id":630432132,"node_id":"MDU6SXNzdWU2MzA0MzIxMzI=","number":233,"title":"Fail to download c4 english corpus","user":{"login":"donggyukimc","id":16605764,"node_id":"MDQ6VXNlcjE2NjA1NzY0","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/16605764?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/donggyukimc","html_url":"https:\/\/github.com\/donggyukimc","followers_url":"https:\/\/api.github.com\/users\/donggyukimc\/followers","following_url":"https:\/\/api.github.com\/users\/donggyukimc\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/donggyukimc\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/donggyukimc\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/donggyukimc\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/donggyukimc\/orgs","repos_url":"https:\/\/api.github.com\/users\/donggyukimc\/repos","events_url":"https:\/\/api.github.com\/users\/donggyukimc\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/donggyukimc\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hello ! Thanks for noticing this bug, let me fix that.\r\n\r\nAlso for information, as specified in the changelog of the latest release, C4 currently needs to have a runtime for apache beam to work on. Apache beam is used to process this very big dataset and it can work on dataflow, spark, flink, apex, etc. 
You can find more info on beam datasets [here](https:\/\/github.com\/huggingface\/nlp\/blob\/master\/docs\/beam_dataset.md).\r\n\r\nOur goal in the future is to make available an already-processed version of C4 (as we do for wikipedia for example) so that users without apache beam runtimes can load it.","@lhoestq I am facing `IsADirectoryError` while downloading with this command.\r\nCan you pls look into it & help me.\r\nI'm using version 0.4.0 of `nlp`.\r\n\r\n```\r\ndataset = load_dataset(\"c4\", 'en', data_dir='.', beam_runner='DirectRunner')\r\n```\r\n\r\nHere's the complete stack trace.\r\n\r\n```\r\nDownloading and preparing dataset c4\/en (download: Unknown size, generated: Unknown size, post-processed: Unknown sizetotal: Unknown size) to \/home\/devops\/.cache\/huggingface\/datasets\/c4\/en\/2.3.0\/096df5a27756d51957c959a2499453e60a08154971fceb017bbb29f54b11bef7...\r\n\r\n---------------------------------------------------------------------------\r\nIsADirectoryError Traceback (most recent call last)\r\n<ipython-input-11-f622e6705e03> in <module>\r\n----> 1 dataset = load_dataset(\"c4\", 'en', data_dir='.', beam_runner='DirectRunner')\r\n\r\n\/data\/anaconda\/envs\/hf\/lib\/python3.6\/site-packages\/nlp\/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 547 # Download and prepare data\r\n 548 builder_instance.download_and_prepare(\r\n--> 549 download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,\r\n 550 )\r\n 551 \r\n\r\n\/data\/anaconda\/envs\/hf\/lib\/python3.6\/site-packages\/nlp\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 461 if not downloaded_from_gcs:\r\n 462 self._download_and_prepare(\r\n--> 463 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 464 )\r\n 465 # Sync info\r\n\r\n\/data\/anaconda\/envs\/hf\/lib\/python3.6\/site-packages\/nlp\/builder.py in _download_and_prepare(self, dl_manager, verify_infos)\r\n 964 pipeline = beam_utils.BeamPipeline(runner=beam_runner, options=beam_options,)\r\n 965 super(BeamBasedBuilder, self)._download_and_prepare(\r\n--> 966 dl_manager, verify_infos=False, pipeline=pipeline,\r\n 967 ) # TODO handle verify_infos in beam datasets\r\n 968 # Run pipeline\r\n\r\n\/data\/anaconda\/envs\/hf\/lib\/python3.6\/site-packages\/nlp\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 516 split_dict = SplitDict(dataset_name=self.name)\r\n 517 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 518 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 519 # Checksums verification\r\n 520 if verify_infos:\r\n\r\n\/data\/anaconda\/envs\/hf\/lib\/python3.6\/site-packages\/nlp\/datasets\/c4\/096df5a27756d51957c959a2499453e60a08154971fceb017bbb29f54b11bef7\/c4.py in _split_generators(self, dl_manager, pipeline)\r\n 187 if self.config.realnewslike:\r\n 188 files_to_download[\"realnews_domains\"] = _REALNEWS_DOMAINS_URL\r\n--> 189 file_paths = dl_manager.download_and_extract(files_to_download)\r\n 190 \r\n 191 if self.config.webtextlike:\r\n\r\n\/data\/anaconda\/envs\/hf\/lib\/python3.6\/site-packages\/nlp\/utils\/download_manager.py in download_and_extract(self, url_or_urls)\r\n 218 extracted_path(s): `str`, extracted 
paths of given URL(s).\r\n 219 \"\"\"\r\n--> 220 return self.extract(self.download(url_or_urls))\r\n 221 \r\n 222 def get_recorded_sizes_checksums(self):\r\n\r\n\/data\/anaconda\/envs\/hf\/lib\/python3.6\/site-packages\/nlp\/utils\/download_manager.py in download(self, url_or_urls)\r\n 156 lambda url: cached_path(url, download_config=self._download_config,), url_or_urls,\r\n 157 )\r\n--> 158 self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths)\r\n 159 return downloaded_path_or_paths\r\n 160 \r\n\r\n\/data\/anaconda\/envs\/hf\/lib\/python3.6\/site-packages\/nlp\/utils\/download_manager.py in _record_sizes_checksums(self, url_or_urls, downloaded_path_or_paths)\r\n 106 flattened_downloaded_path_or_paths = flatten_nested(downloaded_path_or_paths)\r\n 107 for url, path in zip(flattened_urls_or_urls, flattened_downloaded_path_or_paths):\r\n--> 108 self._recorded_sizes_checksums[url] = get_size_checksum_dict(path)\r\n 109 \r\n 110 def download_custom(self, url_or_urls, custom_download):\r\n\r\n\/data\/anaconda\/envs\/hf\/lib\/python3.6\/site-packages\/nlp\/utils\/info_utils.py in get_size_checksum_dict(path)\r\n 77 \"\"\"Compute the file size and the sha256 checksum of a file\"\"\"\r\n 78 m = sha256()\r\n---> 79 with open(path, \"rb\") as f:\r\n 80 for chunk in iter(lambda: f.read(1 << 20), b\"\"):\r\n 81 m.update(chunk)\r\n\r\nIsADirectoryError: [Errno 21] Is a directory: '\/'\r\n\r\n```\r\n\r\nCan anyone please try to see what I am doing wrong or is this a bug?","I have the same problem as @prashant-kikani","Looks like a bug in the dataset script, can you open an issue ?","I see the same issue as @prashant-kikani. I'm using `datasets` version 1.2.0 to download C4."],"created_at":1591232798000,"updated_at":1610090252000,"closed_at":1591607819000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"i run following code to download c4 English corpus.\r\n\r\n```\r\ndataset = nlp.load_dataset('c4', 'en', beam_runner='DirectRunner'\r\n, data_dir='\/mypath')\r\n```\r\n\r\nand i met failure as follows\r\n\r\n```\r\nDownloading and preparing dataset c4\/en (download: Unknown size, generated: Unknown size, total: Unknown size) to \/home\/adam\/.cache\/huggingface\/datasets\/c4\/en\/2.3.0...\r\nTraceback (most recent call last):\r\n File \"download_corpus.py\", line 38, in <module>\r\n , data_dir='\/home\/adam\/data\/corpus\/en\/c4')\r\n File \"\/home\/adam\/anaconda3\/envs\/adam\/lib\/python3.7\/site-packages\/nlp\/load.py\", line 520, in load_dataset\r\n save_infos=save_infos,\r\n File \"\/home\/adam\/anaconda3\/envs\/adam\/lib\/python3.7\/site-packages\/nlp\/builder.py\", line 420, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"\/home\/adam\/anaconda3\/envs\/adam\/lib\/python3.7\/site-packages\/nlp\/builder.py\", line 816, in _download_and_prepare\r\n dl_manager, verify_infos=False, pipeline=pipeline,\r\n File \"\/home\/adam\/anaconda3\/envs\/adam\/lib\/python3.7\/site-packages\/nlp\/builder.py\", line 457, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"\/home\/adam\/anaconda3\/envs\/adam\/lib\/python3.7\/site-packages\/nlp\/datasets\/c4\/f545de9f63300d8d02a6795e2eb34e140c47e62a803f572ac5599e170ee66ecc\/c4.py\", line 175, in _split_generators\r\n dl_manager.download_checksums(_CHECKSUMS_URL)\r\nAttributeError: 'DownloadManager' object has no attribute 'download_checksums\r\n\r\n```\r\ncan i get any 
advice?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/233\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/232","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/232\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/232\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/232\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/232","id":630029568,"node_id":"MDExOlB1bGxSZXF1ZXN0NDI3MjI5NDcy","number":232,"title":"Nlp cli fix endpoints","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["LGTM \ud83d\udc4d "],"created_at":1591193439000,"updated_at":1591606978000,"closed_at":1591606977000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/232","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/232","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/232.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/232.patch"},"body":"With this PR users will be able to upload their own datasets and metrics.\r\n\r\nAs mentioned in #181, I had to use the new endpoints and revert the use of dataclasses (just in case we have changes in the API in the future).\r\n\r\nWe now distinguish commands for datasets and commands for metrics:\r\n```bash\r\nnlp-cli upload_dataset <path\/to\/dataset>\r\nnlp-cli upload_metric <path\/to\/metric>\r\nnlp-cli s3_datasets {rm, ls}\r\nnlp-cli s3_metrics {rm, ls}\r\n```\r\n\r\nDoes it sound good to you @julien-c @thomwolf ?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/232\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/231","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/231\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/231\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/231\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/231","id":629988694,"node_id":"MDExOlB1bGxSZXF1ZXN0NDI3MTk3MTcz","number":231,"title":"Add .download to MockDownloadManager","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1591190400000,"updated_at":1591194356000,"closed_at":1591194355000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/231","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/231","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/231.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/231.patch"},"body":"One method from the DownloadManager was missing and some users couldn't run the tests because of that.\r\n@yjernite ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/231\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/230","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/230\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/230\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/230\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/230","id":629983684,"node_id":"MDExOlB1bGxSZXF1ZXN0NDI3MTkzMTQ0","number":230,"title":"Don't force to install apache beam for wikipedia 
dataset","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1591189987000,"updated_at":1591194849000,"closed_at":1591194847000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/230","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/230","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/230.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/230.patch"},"body":"As pointed out in #227, we shouldn't force users to install apache beam if the processed dataset can be downloaded. I moved the imports of some datasets to avoid this problem","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/230\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/229","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/229\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/229\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/229\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/229","id":629956490,"node_id":"MDExOlB1bGxSZXF1ZXN0NDI3MTcxMzc5","number":229,"title":"Rename dataset_infos.json to 
dataset_info.json","user":{"login":"aswin-giridhar","id":11817160,"node_id":"MDQ6VXNlcjExODE3MTYw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11817160?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/aswin-giridhar","html_url":"https:\/\/github.com\/aswin-giridhar","followers_url":"https:\/\/api.github.com\/users\/aswin-giridhar\/followers","following_url":"https:\/\/api.github.com\/users\/aswin-giridhar\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/aswin-giridhar\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/aswin-giridhar\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/aswin-giridhar\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/aswin-giridhar\/orgs","repos_url":"https:\/\/api.github.com\/users\/aswin-giridhar\/repos","events_url":"https:\/\/api.github.com\/users\/aswin-giridhar\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/aswin-giridhar\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["\r\nThis was actually the right name. `dataset_infos.json` is used to have the infos of all the dataset configurations.\r\n\r\nOn the other hand `dataset_info.json` (without 's') is a cache file with the info of one specific configuration.\r\n\r\nTo fix #228, we probably just have to clear and reload the nlp-viewer cache."],"created_at":1591187504000,"updated_at":1591188774000,"closed_at":1591188513000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/229","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/229","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/229.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/229.patch"},"body":"As the file required for the viewing in the live nlp viewer is named as dataset_info.json","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/229\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/228","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/228\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/228\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/228\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/228","id":629952402,"node_id":"MDU6SXNzdWU2Mjk5NTI0MDI=","number":228,"title":"Not able to access the XNLI 
dataset","user":{"login":"aswin-giridhar","id":11817160,"node_id":"MDQ6VXNlcjExODE3MTYw","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/11817160?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/aswin-giridhar","html_url":"https:\/\/github.com\/aswin-giridhar","followers_url":"https:\/\/api.github.com\/users\/aswin-giridhar\/followers","following_url":"https:\/\/api.github.com\/users\/aswin-giridhar\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/aswin-giridhar\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/aswin-giridhar\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/aswin-giridhar\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/aswin-giridhar\/orgs","repos_url":"https:\/\/api.github.com\/users\/aswin-giridhar\/repos","events_url":"https:\/\/api.github.com\/users\/aswin-giridhar\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/aswin-giridhar\/received_events","type":"User","site_admin":false},"labels":[{"id":2107841032,"node_id":"MDU6TGFiZWwyMTA3ODQxMDMy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/nlp-viewer","name":"nlp-viewer","color":"94203D","default":false,"description":""}],"state":"closed","locked":false,"assignee":{"login":"srush","id":35882,"node_id":"MDQ6VXNlcjM1ODgy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35882?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/srush","html_url":"https:\/\/github.com\/srush","followers_url":"https:\/\/api.github.com\/users\/srush\/followers","following_url":"https:\/\/api.github.com\/users\/srush\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/srush\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/srush\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/srush\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/srush\/orgs","repos_url":"https:\/\/api.github.com\/users\/srush\/repos","events_url":"https:\/\/api.github.com\/users\/srush\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/srush\/received_events","type":"User","site_admin":false},"assignees":[{"login":"srush","id":35882,"node_id":"MDQ6VXNlcjM1ODgy","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/35882?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/srush","html_url":"https:\/\/github.com\/srush","followers_url":"https:\/\/api.github.com\/users\/srush\/followers","following_url":"https:\/\/api.github.com\/users\/srush\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/srush\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/srush\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/srush\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/srush\/orgs","repos_url":"https:\/\/api.github.com\/users\/srush\/repos","events_url":"https:\/\/api.github.com\/users\/srush\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/srush\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Added pull request to change the name of the file from dataset_infos.json to dataset_info.json","Thanks for reporting this bug !\r\nAs it seems to be just a cache problem, I closed your PR.\r\nI think we might just need to clear and reload the `xnli` cache @srush ? 
","Update: The dataset_info.json error is gone, but we have a new one instead:\r\n```\r\nConnectionError: Couldn't reach https:\/\/www.nyu.edu\/projects\/bowman\/xnli\/XNLI-1.0.zip\r\n```\r\nI am not able to reproduce on my side unfortunately. Any idea @srush ?","xnli is now properly shown in the viewer.\r\nClosing this one."],"created_at":1591187114000,"updated_at":1595007862000,"closed_at":1595007862000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"When I try to access the XNLI dataset, I get the following error. The option of plain_text get selected automatically and then I get the following error.\r\n\r\n```\r\nFileNotFoundError: [Errno 2] No such file or directory: '\/home\/sasha\/.cache\/huggingface\/datasets\/xnli\/plain_text\/1.0.0\/dataset_info.json'\r\nTraceback:\r\nFile \"\/home\/sasha\/.local\/lib\/python3.7\/site-packages\/streamlit\/ScriptRunner.py\", line 322, in _run_script\r\n exec(code, module.__dict__)\r\nFile \"\/home\/sasha\/nlp_viewer\/run.py\", line 86, in <module>\r\n dts, fail = get(str(option.id), str(conf_option.name) if conf_option else None)\r\nFile \"\/home\/sasha\/.local\/lib\/python3.7\/site-packages\/streamlit\/caching.py\", line 591, in wrapped_func\r\n return get_or_create_cached_value()\r\nFile \"\/home\/sasha\/.local\/lib\/python3.7\/site-packages\/streamlit\/caching.py\", line 575, in get_or_create_cached_value\r\n return_value = func(*args, **kwargs)\r\nFile \"\/home\/sasha\/nlp_viewer\/run.py\", line 72, in get\r\n builder_instance = builder_cls(name=conf)\r\nFile \"\/home\/sasha\/.local\/lib\/python3.7\/site-packages\/nlp\/builder.py\", line 610, in __init__\r\n super(GeneratorBasedBuilder, self).__init__(*args, **kwargs)\r\nFile \"\/home\/sasha\/.local\/lib\/python3.7\/site-packages\/nlp\/builder.py\", line 152, in __init__\r\n self.info = DatasetInfo.from_directory(self._cache_dir)\r\nFile \"\/home\/sasha\/.local\/lib\/python3.7\/site-packages\/nlp\/info.py\", line 157, in from_directory\r\n with open(os.path.join(dataset_info_dir, DATASET_INFO_FILENAME), \"r\") as f:\r\n```\r\n\r\nIs it possible to see if the dataset_info.json is correctly placed?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/228\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/227","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/227\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/227\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/227\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/227","id":629845704,"node_id":"MDU6SXNzdWU2Mjk4NDU3MDQ=","number":227,"title":"Should we still have to force to install apache_beam to download wikipedia 
?","user":{"login":"richarddwang","id":17963619,"node_id":"MDQ6VXNlcjE3OTYzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17963619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/richarddwang","html_url":"https:\/\/github.com\/richarddwang","followers_url":"https:\/\/api.github.com\/users\/richarddwang\/followers","following_url":"https:\/\/api.github.com\/users\/richarddwang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/richarddwang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/richarddwang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/richarddwang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/richarddwang\/orgs","repos_url":"https:\/\/api.github.com\/users\/richarddwang\/repos","events_url":"https:\/\/api.github.com\/users\/richarddwang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/richarddwang\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Thanks for your message \ud83d\ude0a \r\nIndeed users shouldn't have to install those dependencies","Got it, feel free to close this issue when you think it\u2019s resolved.","It should be good now :)"],"created_at":1591176800000,"updated_at":1591197941000,"closed_at":1591197941000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi, first thanks to @lhoestq 's revolutionary work, I successfully downloaded processed wikipedia according to the doc. 
\ud83d\ude0d\ud83d\ude0d\ud83d\ude0d\r\n\r\nBut at the first try, it tell me to install `apache_beam` and `mwparserfromhell`, which I thought wouldn't be used according to #204 , it was kind of confusing me at that time.\r\n\r\nMaybe we should not force users to install these ? Or we just add them to`nlp`'s dependency ?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/227\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/226","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/226\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/226\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/226\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/226","id":628344520,"node_id":"MDExOlB1bGxSZXF1ZXN0NDI1OTA0MjEz","number":226,"title":"add BlendedSkillTalk dataset","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Awesome :D"],"created_at":1591008885000,"updated_at":1591195043000,"closed_at":1591195042000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/226","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/226","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/226.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/226.patch"},"body":"This PR add the BlendedSkillTalk dataset, which is used to fine tune the blenderbot.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/226\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/225","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/225\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/225\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/225\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/225","id":628083366,"node_id":"MDU6SXNzdWU2MjgwODMzNjY=","number":225,"title":"[ROUGE] Different scores with 
`files2rouge`","user":{"login":"astariul","id":43774355,"node_id":"MDQ6VXNlcjQzNzc0MzU1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/43774355?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/astariul","html_url":"https:\/\/github.com\/astariul","followers_url":"https:\/\/api.github.com\/users\/astariul\/followers","following_url":"https:\/\/api.github.com\/users\/astariul\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/astariul\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/astariul\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/astariul\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/astariul\/orgs","repos_url":"https:\/\/api.github.com\/users\/astariul\/repos","events_url":"https:\/\/api.github.com\/users\/astariul\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/astariul\/received_events","type":"User","site_admin":false},"labels":[{"id":2067400959,"node_id":"MDU6TGFiZWwyMDY3NDAwOTU5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/Metric%20discussion","name":"Metric discussion","color":"d722e8","default":false,"description":"Discussions on the metrics"}],"state":"closed","locked":false,"assignee":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"assignees":[{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["@Colanim unfortunately there are different implementations of the ROUGE metric floating around online which yield different results, and we had to chose one for the package :) We ended up including the one from the google-research repository, which does minimal post-processing 
before computing the P\/R\/F scores. If I recall correctly, files2rouge relies on the Perl, script, which among other things normalizes all numbers to a special token: in the case you presented, this should account for a good chunk of the difference.\r\n\r\nWe may end up adding in more versions of the metric, but probably not for a while (@lhoestq correct me if I'm wrong). However, feel free to take a stab at adding it in yourself and submitting a PR if you're interested!","Thank you for your kind answer.\r\n\r\nAs a side question : Isn't it better to have a package that normalize more ?\r\n\r\nI understand to idea of having a package that does minimal post-processing for transparency.\r\n\r\nBut it means that people using different architecture (with different tokenizers for example) will have difference in ROUGE scores even if their predictions are actually similar. \r\nThe goal of `nlp` is to have _one package to rule them all_, right ?\r\n\r\nI will look into it but I'm not sure I have the required skill for this ^^ ","You're right, there's a pretty interesting trade-off here between robustness and sensitivity :) The flip side of your argument is that we also still want the metric to be sensitive to model mistakes. How we think about number normalization for example has evolved a fair bit since the Perl script was written: at the time, ROUGE was used mostly to evaluate short-medium text summarization systems, where there were only a few numbers in the input and it was assumed that the most popular methods in use at the time would get those right. However, as your example showcases, that assumption does not hold any more, and we do want to be able to penalize a model that generates a wrong numerical value.\r\n\r\nAlso, we think that abstracting away tokenization differences is the role of the model\/tokenizer: if you use the \ud83e\udd17Tokenizers library for example, it will handle that for you ;)\r\n\r\nFinally, there is a lot of active research on developing model-powered metrics that are both more sensitive and more robust than ROUGE. Check out for example BERTscore, which is implemented in this library!"],"created_at":1590972636000,"updated_at":1591198038000,"closed_at":1591198038000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"It seems that the ROUGE score of `nlp` is lower than the one of `files2rouge`.\r\n\r\nHere is a self-contained notebook to reproduce both scores : https:\/\/colab.research.google.com\/drive\/14EyAXValB6UzKY9x4rs_T3pyL7alpw_F?usp=sharing\r\n\r\n---\r\n\r\n`nlp` : (Only mid F-scores)\r\n\r\n>rouge1 0.33508031962733364\r\nrouge2 0.14574333776191592\r\nrougeL 0.2321187823256159\r\n\r\n`files2rouge` :\r\n\r\n>Running ROUGE...\r\n===========================\r\n1 ROUGE-1 Average_R: 0.48873 (95%-conf.int. 0.41192 - 0.56339)\r\n1 ROUGE-1 Average_P: 0.29010 (95%-conf.int. 0.23605 - 0.34445)\r\n1 ROUGE-1 Average_F: 0.34761 (95%-conf.int. 0.29479 - 0.39871)\r\n===========================\r\n1 ROUGE-2 Average_R: 0.20280 (95%-conf.int. 0.14969 - 0.26244)\r\n1 ROUGE-2 Average_P: 0.12772 (95%-conf.int. 0.08603 - 0.17752)\r\n1 ROUGE-2 Average_F: 0.14798 (95%-conf.int. 0.10517 - 0.19240)\r\n===========================\r\n1 ROUGE-L Average_R: 0.32960 (95%-conf.int. 0.26501 - 0.39676)\r\n1 ROUGE-L Average_P: 0.19880 (95%-conf.int. 0.15257 - 0.25136)\r\n1 ROUGE-L Average_F: 0.23619 (95%-conf.int. 0.19073 - 0.28663)\r\n\r\n---\r\n\r\nWhen using longer predictions\/gold, the difference is bigger. 
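For reference, the `nlp` figures above would typically come from the library's ROUGE wrapper, roughly like this (a sketch assuming the google-research-based `rouge` metric; the predictions and references are illustrative):

```python
import nlp

rouge = nlp.load_metric("rouge")

predictions = ["the cat sat on the mat"]        # illustrative system summaries
references = ["a cat was sitting on the mat"]   # illustrative gold summaries

scores = rouge.compute(predictions=predictions, references=references)
# The wrapper reports low/mid/high aggregates; the mid F-measures are the numbers quoted above.
print(scores["rouge1"].mid.fmeasure, scores["rouge2"].mid.fmeasure, scores["rougeL"].mid.fmeasure)
```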
\r\n**How can I reproduce same score as `files2rouge` ?**\r\n\r\n@lhoestq \r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/225\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/224","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/224\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/224\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/224\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/224","id":627791693,"node_id":"MDU6SXNzdWU2Mjc3OTE2OTM=","number":224,"title":"[Feature Request\/Help] BLEURT model -> PyTorch","user":{"login":"adamwlev","id":6889910,"node_id":"MDQ6VXNlcjY4ODk5MTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6889910?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/adamwlev","html_url":"https:\/\/github.com\/adamwlev","followers_url":"https:\/\/api.github.com\/users\/adamwlev\/followers","following_url":"https:\/\/api.github.com\/users\/adamwlev\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/adamwlev\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/adamwlev\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/adamwlev\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/adamwlev\/orgs","repos_url":"https:\/\/api.github.com\/users\/adamwlev\/repos","events_url":"https:\/\/api.github.com\/users\/adamwlev\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/adamwlev\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or 
request"}],"state":"closed","locked":false,"assignee":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"assignees":[{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Is there any update on this? \r\n\r\nThanks!","Hitting this error when using bleurt with PyTorch ...\r\n\r\n```\r\nUnrecognizedFlagError: Unknown command line flag 'f'\r\n```\r\n... and I'm assuming because it was built for TF specifically. Is there a way to use this metric in PyTorch?","We currently provide a wrapper on the TensorFlow implementation: https:\/\/huggingface.co\/metrics\/bleurt\r\n\r\nWe have long term plans to better handle model-based metrics, but they probably won't be implemented right away\r\n\r\n@adamwlev it would still be cool to add the BLEURT checkpoints to the transformers repo if you're interested, but that would best be discussed there :) \r\n\r\nclosing for now","Hi there. We ran into the same problem this year (converting BLEURT to PyTorch) and thanks to @adamwlev found his colab notebook which didn't work but served as a good starting point. Finally, we **made it work** by doing just two simple conceptual fixes: \r\n\r\n1. Transposing 'kernel' layers instead of 'dense' ones when copying params from the original model;\r\n2. Taking pooler_output as a cls_state in forward function of the BleurtModel class.\r\n\r\nPlus few minor syntactical fixes for the outdated parts. 
The result is still not exactly the same, but is very close to the expected one (1.0483 vs 1.0474).\r\n\r\nFind the fixed version here (fixes are commented): https:\/\/colab.research.google.com\/drive\/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing \r\n"],"created_at":1590863440000,"updated_at":1630594937000,"closed_at":1609754012000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi, I am interested in porting google research's new BLEURT learned metric to PyTorch (because I wish to do something experimental with language generation and backpropping through BLEURT). I noticed that you guys don't have it yet so I am partly just asking if you plan to add it (@thomwolf said you want to do so on Twitter).\r\n\r\nI had a go of just like manually using the checkpoint that they publish which includes the weights. It seems like the architecture is exactly aligned with the out-of-the-box BertModel in transformers just with a single linear layer on top of the CLS embedding. I loaded all the weights to the PyTorch model but I am not able to get the same numbers as the BLEURT package's python api. Here is my colab notebook where I tried https:\/\/colab.research.google.com\/drive\/1Bfced531EvQP_CpFvxwxNl25Pj6ptylY?usp=sharing . If you have any pointers on what might be going wrong that would be much appreciated!\r\n\r\nThank you muchly!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/224\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/223","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/223\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/223\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/223\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/223","id":627683386,"node_id":"MDU6SXNzdWU2Mjc2ODMzODY=","number":223,"title":"[Feature request] Add FLUE dataset ","user":{"login":"lbourdois","id":58078086,"node_id":"MDQ6VXNlcjU4MDc4MDg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/58078086?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lbourdois","html_url":"https:\/\/github.com\/lbourdois","followers_url":"https:\/\/api.github.com\/users\/lbourdois\/followers","following_url":"https:\/\/api.github.com\/users\/lbourdois\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lbourdois\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lbourdois\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lbourdois\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lbourdois\/orgs","repos_url":"https:\/\/api.github.com\/users\/lbourdois\/repos","events_url":"https:\/\/api.github.com\/users\/lbourdois\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lbourdois\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @lbourdois, yes please share it with us","@mariamabarham 
\r\nI put all the datasets on this drive: https:\/\/1drv.ms\/u\/s!Ao2Rcpiny7RFinDypq7w-LbXcsx9?e=iVsEDh\r\n\r\n\r\nSome information : \r\n\u2022 For FLUE, the quote used is\r\n\r\n> @misc{le2019flaubert,\r\n> title={FlauBERT: Unsupervised Language Model Pre-training for French},\r\n> author={Hang Le and Lo\u00efc Vial and Jibril Frej and Vincent Segonne and Maximin Coavoux and Benjamin Lecouteux and Alexandre Allauzen and Beno\u00eet Crabb\u00e9 and Laurent Besacier and Didier Schwab},\r\n> year={2019},\r\n> eprint={1912.05372},\r\n> archivePrefix={arXiv},\r\n> primaryClass={cs.CL}\r\n> }\r\n\r\n\u2022 The Github repo of FLUE is avaible here : https:\/\/github.com\/getalp\/Flaubert\/tree\/master\/flue\r\n\r\n\r\n\r\nInformation related to the different tasks of FLUE : \r\n\r\n**1. Classification**\r\nThree dataframes are available: \r\n- Book\r\n- DVD\r\n- Music\r\nFor each of these dataframes is available a set of training and test data, and a third one containing unlabelled data.\r\n\r\nCitation : \r\n>@dataset{prettenhofer_peter_2010_3251672,\r\n author = {Prettenhofer, Peter and\r\n Stein, Benno},\r\n title = {{Webis Cross-Lingual Sentiment Dataset 2010 (Webis- \r\n CLS-10)}},\r\n month = jul,\r\n year = 2010,\r\n publisher = {Zenodo},\r\n doi = {10.5281\/zenodo.3251672},\r\n url = {https:\/\/doi.org\/10.5281\/zenodo.3251672}\r\n}\r\n\r\n\r\n**2. Paraphrasing** \r\nFrench part of the PAWS-X dataset (https:\/\/github.com\/google-research-datasets\/paws).\r\nThree dataframes are available: \r\n- train\r\n- dev\r\n- test \r\n\r\nCitation : \r\n> @InProceedings{pawsx2019emnlp,\r\n> title = {{PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification}},\r\n> author = {Yang, Yinfei and Zhang, Yuan and Tar, Chris and Baldridge, Jason},\r\n> booktitle = {Proc. of EMNLP},\r\n> year = {2019}\r\n> }\r\n\r\n\r\n\r\n**3. Natural Language Inference**\r\nFrench part of the XNLI dataset (https:\/\/github.com\/facebookresearch\/XNLI).\r\nThree dataframes are available: \r\n- train\r\n- dev\r\n- test \r\n\r\nFor the dev and test datasets, extra columns compared to the train dataset were available so I left them in the dataframe (I didn't know if these columns could be useful for other tasks or not). \r\nIn the context of the FLUE benchmark, only the columns gold_label, sentence1 and sentence2 are useful.\r\n\r\n\r\nCitation : \r\n\r\n> @InProceedings{conneau2018xnli,\r\n> author = \"Conneau, Alexis\r\n> and Rinott, Ruty\r\n> and Lample, Guillaume\r\n> and Williams, Adina\r\n> and Bowman, Samuel R.\r\n> and Schwenk, Holger\r\n> and Stoyanov, Veselin\",\r\n> title = \"XNLI: Evaluating Cross-lingual Sentence Representations\",\r\n> booktitle = \"Proceedings of the 2018 Conference on Empirical Methods\r\n> in Natural Language Processing\",\r\n> year = \"2018\",\r\n> publisher = \"Association for Computational Linguistics\",\r\n> location = \"Brussels, Belgium\",\r\n\r\n\r\n**4. Parsing**\r\nThe dataset used by the FLUE authors for this task is not freely available.\r\nUsers of your library will therefore not be able to access it.\r\nNevertheless, I think maybe it is useful to add a link to the site where to request this dataframe: http:\/\/ftb.linguist.univ-paris-diderot.fr\/telecharger.php?langue=en \r\n(personally it was sent to me less than 48 hours after I requested it).\r\n\r\n\r\n**5. 
Word Sense Disambiguation Tasks**\r\n5.1 Verb Sense Disambiguation\r\n\r\nTwo dataframes are available: train and test\r\nFor both dataframes, 4 columns are available: document, sentence, lemma and word.\r\nI created the document column thinking that there were several documents in the dataset but afterwards it turns out that there were not: several sentences but only one document. It's up to you to keep it or not when importing these two dataframes.\r\n\r\nThe sentence column is used to determine to which sentence the word in the word column belongs. It is in the form of a dictionary {'id': 'd000.s001', 'idx': '1'}. I thought for a while to keep only the idx because the id doesn't add any more information. Nevertheless for the test dataset, the dictionary has an extra value indicating the source of the sentence. I don't know if it's useful or not, that's why I left the dictionary just in case. The user is free to do what he wants with it.\r\n\r\nCitation : \r\n\r\n> Segonne, V., Candito, M., and Crabb\u00e9, B. (2019). Using Wiktionary as a resource for WSD: the case of French verbs. In Proceedings of the 13th International Conference on Computational Semantics - Long Papers, pages 259\u2013270\r\n\r\n5.2 Noun Sense Disambiguation\r\nThree dataframes are available: 2 train and 1 test\r\n\r\nI confess I didn't fully understand the procedure for this task.\r\n\r\nCitation : \r\n\r\n> @dataset{loic_vial_2019_3549806,\r\n> author = {Lo\u00efc Vial},\r\n> title = {{French Word Sense Disambiguation with Princeton \r\n> WordNet Identifiers}},\r\n> month = nov,\r\n> year = 2019,\r\n> publisher = {Zenodo},\r\n> version = {1.0},\r\n> doi = {10.5281\/zenodo.3549806},\r\n> url = {https:\/\/doi.org\/10.5281\/zenodo.3549806}\r\n> }\r\n\r\nFinally, additional information about FLUE is available in the FlauBERT publication : \r\nhttps:\/\/arxiv.org\/abs\/1912.05372 (p. 
4).\r\n\r\n\r\nHoping to have provided you with everything you need to add this benchmark :) \r\n","https:\/\/github.com\/huggingface\/datasets\/pull\/943"],"created_at":1590828735000,"updated_at":1607002773000,"closed_at":1607002773000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\n\r\nI think it would be interesting to add the FLUE dataset for francophones or anyone wishing to work on French.\r\n\r\nIn other requests, I read that you are already working on some datasets, and I was wondering if FLUE was planned.\r\n\r\nIf it is not the case, I can provide each of the cleaned FLUE datasets (in the form of a directly exploitable dataset rather than in the original xml formats which require additional processing, with the French part for cases where the dataset is based on a multilingual dataframe, etc.).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/223\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/222","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/222\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/222\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/222\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/222","id":627586690,"node_id":"MDU6SXNzdWU2Mjc1ODY2OTA=","number":222,"title":"Colab Notebook breaks when downloading the squad dataset","user":{"login":"carlos-aguayo","id":338917,"node_id":"MDQ6VXNlcjMzODkxNw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/338917?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/carlos-aguayo","html_url":"https:\/\/github.com\/carlos-aguayo","followers_url":"https:\/\/api.github.com\/users\/carlos-aguayo\/followers","following_url":"https:\/\/api.github.com\/users\/carlos-aguayo\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/carlos-aguayo\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/carlos-aguayo\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/carlos-aguayo\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/carlos-aguayo\/orgs","repos_url":"https:\/\/api.github.com\/users\/carlos-aguayo\/repos","events_url":"https:\/\/api.github.com\/users\/carlos-aguayo\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/carlos-aguayo\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The notebook forces version 0.1.0. If I use the latest, things work, I'll run the whole notebook and create a PR.\r\n\r\nBut in the meantime, this issue gets fixed by changing:\r\n`!pip install nlp==0.1.0`\r\nto\r\n`!pip install nlp`","It still breaks very near the end\r\n\r\n![image](https:\/\/user-images.githubusercontent.com\/338917\/83312264-aa96a600-a1df-11ea-987f-2f4a0474247e.png)\r\n","When you install `nlp` for the first time on a Colab runtime, it updates the `pyarrow` library that was already on colab. 
This update shows this message on colab:\r\n```\r\nWARNING: The following packages were previously imported in this runtime:\r\n [pyarrow]\r\nYou must restart the runtime in order to use newly installed versions.\r\n```\r\nYou just have to restart the runtime and it should be fine.\r\nIf you don't restart, then it breaks like in your first message ","Thanks for reporting the second one ! We'll update the notebook to fix this one :)","This trick from @thomwolf seems to be the most reliable solution to fix this colab notebook issue:\r\n\r\n```python\r\n# install nlp\r\n!pip install -qq nlp==0.2.0\r\n\r\n# Make sure that we have a recent version of pyarrow in the session before we continue - otherwise reboot Colab to activate it\r\nimport pyarrow\r\nif int(pyarrow.__version__.split('.')[1]) < 16:\r\n import os\r\n os.kill(os.getpid(), 9)\r\n```","The second part got fixed here: 2cbc656d6fc4b18ce57eac070baec05b31180d39\r\n\r\nThanks! I'm then closing this issue."],"created_at":1590792959000,"updated_at":1591230065000,"closed_at":1591230065000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"When I run the notebook in Colab\r\nhttps:\/\/colab.research.google.com\/github\/huggingface\/nlp\/blob\/master\/notebooks\/Overview.ipynb\r\nbreaks when running this cell:\r\n![image](https:\/\/user-images.githubusercontent.com\/338917\/83311709-ffd1b800-a1dd-11ea-8394-3a87df0d7f8b.png)\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/222\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/221","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/221\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/221\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/221\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/221","id":627300648,"node_id":"MDExOlB1bGxSZXF1ZXN0NDI1MTI5OTc0","number":221,"title":"Fix tests\/test_dataset_common.py","user":{"login":"tayciryahmed","id":13635495,"node_id":"MDQ6VXNlcjEzNjM1NDk1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13635495?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tayciryahmed","html_url":"https:\/\/github.com\/tayciryahmed","followers_url":"https:\/\/api.github.com\/users\/tayciryahmed\/followers","following_url":"https:\/\/api.github.com\/users\/tayciryahmed\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tayciryahmed\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tayciryahmed\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tayciryahmed\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tayciryahmed\/orgs","repos_url":"https:\/\/api.github.com\/users\/tayciryahmed\/repos","events_url":"https:\/\/api.github.com\/users\/tayciryahmed\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tayciryahmed\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks ! 
Good catch :)\r\n\r\nTo fix the CI you can do:\r\n1 - rebase from master\r\n2 - then run `make style` as specified in [CONTRIBUTING.md](https:\/\/github.com\/huggingface\/nlp\/blob\/master\/CONTRIBUTING.md) ?"],"created_at":1590761535000,"updated_at":1591014042000,"closed_at":1590764543000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/221","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/221","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/221.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/221.patch"},"body":"When I run the command `RUN_SLOW=1 pytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_arcd` while working on #220. I get the error ` unexpected keyword argument \"'download_and_prepare_kwargs'\"` at the level of `load_dataset`. Indeed, this [function](https:\/\/github.com\/huggingface\/nlp\/blob\/master\/src\/nlp\/load.py#L441) no longer has the argument `download_and_prepare_kwargs` but rather `download_config`. So here I change the tests accordingly. ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/221\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/220","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/220\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/220\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/220\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/220","id":627280683,"node_id":"MDExOlB1bGxSZXF1ZXN0NDI1MTEzMzEy","number":220,"title":"dataset_arcd","user":{"login":"tayciryahmed","id":13635495,"node_id":"MDQ6VXNlcjEzNjM1NDk1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13635495?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tayciryahmed","html_url":"https:\/\/github.com\/tayciryahmed","followers_url":"https:\/\/api.github.com\/users\/tayciryahmed\/followers","following_url":"https:\/\/api.github.com\/users\/tayciryahmed\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tayciryahmed\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tayciryahmed\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tayciryahmed\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tayciryahmed\/orgs","repos_url":"https:\/\/api.github.com\/users\/tayciryahmed\/repos","events_url":"https:\/\/api.github.com\/users\/tayciryahmed\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tayciryahmed\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["you can rebase from master to fix the CI error :)","Awesome 
!"],"created_at":1590760010000,"updated_at":1590764320000,"closed_at":1590764241000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/220","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/220","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/220.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/220.patch"},"body":"Added Arabic Reading Comprehension Dataset (ARCD): https:\/\/arxiv.org\/abs\/1906.05394","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/220\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/219","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/219\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/219\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/219\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/219","id":627235893,"node_id":"MDExOlB1bGxSZXF1ZXN0NDI1MDc2NjQx","number":219,"title":"force mwparserfromhell as third party","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1590755597000,"updated_at":1590759013000,"closed_at":1590759012000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/219","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/219","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/219.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/219.patch"},"body":"This should fix your env because you had `mwparserfromhell ` as a first party for `isort` @patrickvonplaten ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/219\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/218","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/218\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/218\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/218\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/218","id":627173407,"node_id":"MDExOlB1bGxSZXF1ZXN0NDI1MDI2NzEz","number":218,"title":"Add Natual Questions and C4 scripts","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1590748830000,"updated_at":1590755461000,"closed_at":1590755460000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/218","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/218","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/218.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/218.patch"},"body":"Scripts are ready !\r\nHowever they are not processed nor directly available from gcp yet.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/218\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/217","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/217\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/217\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/217\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/217","id":627128403,"node_id":"MDU6SXNzdWU2MjcxMjg0MDM=","number":217,"title":"Multi-task dataset 
mixing","user":{"login":"ghomasHudson","id":13795113,"node_id":"MDQ6VXNlcjEzNzk1MTEz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13795113?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ghomasHudson","html_url":"https:\/\/github.com\/ghomasHudson","followers_url":"https:\/\/api.github.com\/users\/ghomasHudson\/followers","following_url":"https:\/\/api.github.com\/users\/ghomasHudson\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ghomasHudson\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ghomasHudson\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ghomasHudson\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ghomasHudson\/orgs","repos_url":"https:\/\/api.github.com\/users\/ghomasHudson\/repos","events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ghomasHudson\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"},{"id":2067400324,"node_id":"MDU6TGFiZWwyMDY3NDAwMzI0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/generic%20discussion","name":"generic discussion","color":"c5def5","default":false,"description":"Generic discussion on the library"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I like this feature! I think the first question we should decide on is how to convert all datasets into the same format. In T5, the authors decided to format every dataset into a text-to-text format. If the dataset had \"multiple\" inputs like MNLI, the inputs were concatenated. So in MNLI the input:\r\n\r\n> - **Hypothesis**: The St. Louis Cardinals have always won.\r\n> \r\n> - **Premise**: yeah well losing is i mean i\u2019m i\u2019m originally from Saint Louis and Saint Louis Cardinals when they were there were uh a mostly a losing team but \r\n\r\nwas flattened to a single input:\r\n\r\n> mnli hypothesis: The St. Louis Cardinals have always won. premise:\r\n> yeah well losing is i mean i\u2019m i\u2019m originally from Saint Louis and Saint Louis Cardinals\r\n> when they were there were uh a mostly a losing team but.\r\n\r\nThis flattening is actually a very simple operation in `nlp` already. You would just need to do the following:\r\n\r\n```python \r\ndef flatten_inputs(example):\r\n return {\"input\": \"mnli hypothesis: \" + example['hypothesis'] + \" premise: \" + example['premise']}\r\n\r\nt5_ready_mnli_ds = mnli_ds.map(flatten_inputs, remove_columns=[<all columns except output>])\r\n```\r\n\r\nSo I guess converting the datasets into the same format can be left to the user for now. \r\nThen the question is how we can merge the datasets. I would probably be in favor of a simple \r\n\r\n```python \r\ndataset.add()\r\n```\r\n\r\nfunction that checks if the dataset is of the same format and if yes merges the two datasets. Finally, how should the sampling be implemented? **Examples-proportional mixing** corresponds to just merging the datasets and shuffling. For the other two sampling approaches we would need some higher-level features, maybe even a `dataset.sample()` function for merged datasets. 
\r\n\r\nWhat are your thoughts on this @thomwolf @lhoestq @ghomasHudson @enzoampil ?","I agree that we should leave the flattening of the dataset to the user for now. Especially because although the T5 framing seems obvious, there are slight variations on how the T5 authors do it in comparison to other approaches such as gpt-3 and decaNLP.\r\n\r\nIn terms of sampling, Examples-proportional mixing does seem the simplest to implement so would probably be a good starting point.\r\n\r\nTemperature-scaled mixing would probably most useful, offering flexibility as it can simulate the other 2 methods by setting the temperature parameter. There is a [relevant part of the T5 repo](https:\/\/github.com\/google-research\/text-to-text-transfer-transformer\/blob\/03c94165a7d52e4f7230e5944a0541d8c5710788\/t5\/data\/utils.py#L889-L1118) which should help with implementation.\r\n\r\nAccording to the T5 authors, equal-mixing performs worst. Among the other two methods, tuning the K value (the artificial dataset size limit) has a large impact.\r\n","I agree with going with temperature-scaled mixing for its flexibility!\r\n\r\nFor the function that combines the datasets, I also find `dataset.add()` okay while also considering that users may want it to be easy to combine a list of say 10 data sources in one go.\r\n\r\n`dataset.sample()` should also be good. By the looks of it, we're planning to have as main parameters: `temperature`, and `K`.\r\n\r\nOn converting the datasets to the same format, I agree that we can leave these to the users for now. But, I do imagine it'd be an awesome feature for the future to have this automatically handled, based on a chosen *approach* to formatting :smile: \r\n\r\nE.g. T5, GPT-3, decaNLP, original raw formatting, or a contributed way of formatting in text-to-text. 
","This is an interesting discussion indeed and it would be nice to make multi-task easier.\r\n\r\nProbably the best would be to have a new type of dataset especially designed for that in order to easily combine and sample from the multiple datasets.\r\n\r\nThis way we could probably handle the combination of datasets with differing schemas as well (unlike T5).","@thomwolf Are you suggesting making a wrapper class which can take existing datasets as arguments and do all the required sampling\/combining, to present the same interface as a normal dataset?\r\n\r\nThat doesn't seem too complicated to implement.\r\n","I guess we're looking at the end user writing something like:\r\n``` python\r\nds = nlp.load_dataset('multitask-t5',datasets=[\"squad\",\"cnn_dm\",...], k=1000, t=2.0)\r\n```\r\nUsing the t5 method of combining here (or this could be a function passed in as an arg) \r\n\r\nPassing kwargs to each 'sub-dataset' might become tricky.","From thinking upon @thomwolf 's suggestion, I've started experimenting:\r\n```python\r\nclass MultitaskDataset(DatasetBuilder):\r\n def __init__(self, *args, **kwargs):\r\n super(MultitaskDataset, self).__init__(*args, **kwargs)\r\n self._datasets = kwargs.get(\"datasets\")\r\n\r\n def _info(self):\r\n return nlp.DatasetInfo(\r\n description=_DESCRIPTION,\r\n features=nlp.Features({\r\n \"source\": nlp.Value(\"string\"),\r\n \"target\": nlp.Sequence(nlp.Value(\"string\"))\r\n })\r\n )\r\n\r\n def _get_common_splits(self):\r\n '''Finds the common splits present in all self._datasets'''\r\n min_set = None\r\n for dataset in self._datasets:\r\n if min_set != None:\r\n min_set.intersection(set(dataset.keys()))\r\n else:\r\n min_set = set(dataset.keys())\r\n return min_set\r\n\r\n....\r\n\r\n# Maybe this?:\r\nsquad = nlp.load_dataset(\"squad\")\r\ncnn_dm = nlp.load_dataset(\"cnn_dailymail\",\"3.0.0\")\r\nmultitask_dataset = nlp.load_dataset(\r\n 'multitask_dataset',\r\n datasets=[squad,cnn_dailymail], \r\n k=1000, \r\n t=2.0\r\n)\r\n\r\n```\r\n\r\nDoes anyone know what methods of `MultitaskDataset` I would need to implement? Maybe `as_dataset` and `download_and_prepare`? Most of these should be just calling the methods of the sub-datasets. \r\n\r\nI'm assuming DatasetBuilder is better than the more specific `GeneratorBasedBuilder`, `BeamBasedBuilder`, etc....\r\n\r\nOne of the other problems is that the dataset size is unknown till you construct it (as you can pick the sub-datasets). 
Am hoping not to need to make changes to `nlp.load_dataset` just for this class.\r\n\r\nI'd appreciate it if anyone more familiar with nlp's internal workings could tell me if I'm on the right track!","I think I would probably go for a `MultiDataset` wrapper around a list of `Dataset`.\r\n\r\nI'm not sure we need to give it `k` and `t` parameters at creation, it can maybe be something along the lines of:\r\n```python\r\nsquad = nlp.load_dataset(\"squad\")\r\ncnn_dm = nlp.load_dataset(\"cnn_dailymail\",\"3.0.0\")\r\n\r\nmultitask_dataset = nlp.MultiDataset(squad, cnn_dm)\r\n\r\nbatch = multitask_dataset.sample(10, temperature=2.0, k=1000)\r\n```\r\n\r\nThe first proof-of-concept for multi-task datasets could definitely require that the provided datasets have the same name\/type for columns (if needed you easily rename\/cast a column prior to instantiating the `MultiDataset`).\r\n\r\nIt's good to think about it for some time though and don't overfit too much on the T5 examples (in particular for the ways\/kwargs for sampling among datasets).","The problem with changing `k` and `t` per sampling is that you'd have to somehow remember which examples you'd already returned while re-weighting the remaining examples based on the new `k` and `t`values. It seems possible but complicated (I can't really see a reason why you'd want to change the weighting of datasets after you constructed the multidataset).\r\n\r\nWouldn't it be convenient if it implemented the dataset interface? Then if someone has code using a single nlp dataset, they can replace it with a multitask combination of more datasets without having to change other code. We would at least need to be able to pass it into a `DataLoader`.\r\n\r\n","A very janky (but working) implementation of `multitask_dataset.sample()` could be something like this:\r\n```python\r\nimport nlp\r\nimport torch\r\n\r\nclass MultiDataset():\r\n def __init__(self, *args, temperature=2.0, k=1000, maximum=None, scale=1):\r\n self.datasets = args\r\n self._dataloaders = {}\r\n for split in self._get_common_splits():\r\n split_datasets = [ds[split] for ds in self.datasets]\r\n mixing_rates = self._calc_mixing_rates(split_datasets,temperature, k, maximum, scale)\r\n weights = []\r\n for i in range(len(self.datasets)):\r\n weights += [mixing_rates[i]]*len(self.datasets[i][split])\r\n self._dataloaders[split] = torch.utils.data.DataLoader(torch.utils.data.ConcatDataset(split_datasets),\r\n sampler=torch.utils.data.sampler.WeightedRandomSampler(\r\n num_samples=len(weights),\r\n weights = weights,\r\n replacement=True),\r\n shuffle=False)\r\n\r\n def _get_common_splits(self):\r\n '''Finds the common splits present in all self.datasets'''\r\n min_set = None\r\n for dataset in self.datasets:\r\n if min_set != None:\r\n min_set.intersection(set(dataset.keys()))\r\n else:\r\n min_set = set(dataset.keys())\r\n return min_set\r\n\r\n\r\n def _calc_mixing_rates(self,datasets, temperature=2.0, k=1000, maximum=None, scale=1):\r\n '''Work out the weighting of each dataset based on t and k'''\r\n mixing_rates = []\r\n for dataset in datasets:\r\n rate = len(dataset)\r\n rate *= scale\r\n if maximum:\r\n rate = min(rate, maximum)\r\n if temperature != 1.0:\r\n rate = rate ** (1.0\/temperature)\r\n mixing_rates.append(rate)\r\n return mixing_rates\r\n\r\n def sample(self,n,split):\r\n batch = []\r\n for example in self._dataloaders[split]:\r\n batch.append(example)\r\n n -= 1\r\n if n == 0:\r\n return batch\r\n\r\n\r\ndef flatten(dataset,flatten_fn):\r\n for k in dataset.keys():\r\n 
if isinstance(dataset[k],nlp.Dataset):\r\n dataset[k] = dataset[k].map(flatten_fn,remove_columns=dataset[k].column_names)\r\n\r\n# Squad\r\ndef flatten_squad(example):\r\n return {\"source\": \"squad context: \" + example['context'] + \" question: \" + example['question'],\"target\":example[\"answers\"][\"text\"]}\r\nsquad = nlp.load_dataset(\"squad\")\r\nflatten(squad,flatten_squad)\r\n\r\n# CNN_DM\r\ndef flatten_cnn_dm(example):\r\n return {\"source\": \"cnn_dm: \" + example['article'],\"target\":[example[\"highlights\"]]}\r\ncnn_dm = nlp.load_dataset(\"cnn_dailymail\", \"3.0.0\")\r\nflatten(cnn_dm,flatten_cnn_dm)\r\n\r\nmultitask_dataset = MultiDataset(squad, cnn_dm)\r\nbatch = multitask_dataset.sample(100,\"train\")\r\n```\r\n\r\nThere's definitely a more sensible way than embedding `DataLoader`s inside. ","There is an interesting related investigation by @zphang here https:\/\/colab.research.google.com\/github\/zphang\/zphang.github.io\/blob\/master\/files\/notebooks\/Multi_task_Training_with_Transformers_NLP.ipynb","Good spot! Here are my thoughts:\r\n\r\n- Aside: Adding `MultitaskModel` to transformers might be a thing to raise - even though having task-specific heads has become unfashionable in recent times in favour of text-to-text type models.\r\n- Adding the task name as an extra field also seems useful for these kind of models which have task-specific heads\r\n- There is some validation of our approach that the user should be expected to `map` datasets into a common form.\r\n- The size-proportional sampling (also called \"Examples-proportional mixing\") used here doesn't perform too badly in the T5 paper (it's comparable to temperature-scaled mixing in many cases but less flexible. This is only reasonable with a `K` maximum size parameter to prevent very large datasets dominating). This might be good for a first prototype using:\r\n ```python\r\n def __iter__(self):\r\n \"\"\"\r\n For each batch, sample a task, and yield a batch from the respective\r\n task Dataloader.\r\n\r\n We use size-proportional sampling, but you could easily modify this\r\n to sample from some-other distribution.\r\n \"\"\"\r\n task_choice_list = []\r\n for i, task_name in enumerate(self.task_name_list):\r\n task_choice_list += [i] * self.num_batches_dict[task_name]\r\n task_choice_list = np.array(task_choice_list)\r\n np.random.shuffle(task_choice_list)\r\n\r\n dataloader_iter_dict = {\r\n task_name: iter(dataloader) \r\n for task_name, dataloader in self.dataloader_dict.items()\r\n }\r\n for task_choice in task_choice_list:\r\n task_name = self.task_name_list[task_choice]\r\n yield next(dataloader_iter_dict[task_name]) \r\n ```\r\n We'd just need to pull samples from the raw datasets and not from `DataLoader`s for each task. We can assume the user has done `dataset.shuffle()` if they want to.\r\n\r\n Other sampling methods can later be implemented by changing how the `task_choice_list` is generated. This should allow more flexibility and not tie us to specific methods for sampling among datasets.\r\n","Another thought: Multitasking over benchmarks (represented as Meta-datasets in nlp) is probably a common use case. Would be nice to pass an entire benchmark to our `MultiDataset` wrapper rather than having to pass individual components.","Here's a fully working implementation based on the `__iter__` function of @zphang.\r\n\r\n- I've generated the task choice list in the constructor as it allows us to index into the MultiDataset just like a normal dataset. 
I'm changing `task_choice_list` into a list of `(dataset_idx, example_idx)` so each entry references a unique dataset example. The shuffling has to be done before this as we don't want to shuffle within each task (we assume this is done by the user if this is what they intend).\r\n- I'm slightly concerned this list could become very large if many large datasets were used. Can't see a way round it at the moment though.\r\n- I've used `task.info.builder_name` as the dataset name. Not sure if this is correct.\r\n- I'd love to add some of the other `Dataset` methods (map, slicing by column, etc...). Would be great to implement the whole interface so a single dataset can be simply replaced by this.\r\n- This does everything on the individual example-level. If some application required batches all from a single task in turn we can't really do that.\r\n\r\n```python\r\nimport nlp\r\nimport numpy as np\r\n\r\nclass MultiDataset:\r\n def __init__(self,tasks):\r\n self.tasks = tasks\r\n\r\n # Create random order of tasks\r\n # Using size-proportional sampling\r\n task_choice_list = []\r\n for i, task in enumerate(self.tasks):\r\n task_choice_list += [i] * len(task)\r\n task_choice_list = np.array(task_choice_list)\r\n np.random.shuffle(task_choice_list)\r\n\r\n # Add index into each dataset\r\n # - We don't want to shuffle within each task\r\n counters = {}\r\n self.task_choice_list = []\r\n for i in range(len(task_choice_list)):\r\n idx = counters.get(task_choice_list[i],0)\r\n self.task_choice_list.append((task_choice_list[i],idx))\r\n counters[task_choice_list[i]] = idx + 1\r\n\r\n\r\n def __len__(self):\r\n return np.sum([len(t) for t in self.tasks])\r\n\r\n def __repr__(self):\r\n task_str = \", \".join([str(t) for t in self.tasks])\r\n return f\"MultiDataset(tasks: {task_str})\"\r\n\r\n def __getitem__(self,key):\r\n if isinstance(key, int):\r\n task_idx, example_idx = self.task_choice_list[key]\r\n task = self.tasks[task_idx]\r\n example = task[example_idx]\r\n example[\"task_name\"] = task.info.builder_name\r\n return example\r\n elif isinstance(key, slice):\r\n raise NotImplementedError()\r\n\r\n def __iter__(self):\r\n for i in range(len(self)):\r\n yield self[i]\r\n\r\n\r\ndef load_multitask(*datasets):\r\n '''Create multitask datasets per split'''\r\n\r\n def _get_common_splits(datasets):\r\n '''Finds the common splits present in all self.datasets'''\r\n min_set = None\r\n for dataset in datasets:\r\n if min_set != None:\r\n min_set.intersection(set(dataset.keys()))\r\n else:\r\n min_set = set(dataset.keys())\r\n return min_set\r\n\r\n common_splits = _get_common_splits(datasets)\r\n out = {}\r\n for split in common_splits:\r\n out[split] = MultiDataset([d[split] for d in datasets])\r\n return out\r\n\r\n\r\n##########################################\r\n# Dataset Flattening\r\n\r\ndef flatten(dataset,flatten_fn):\r\n for k in dataset.keys():\r\n if isinstance(dataset[k],nlp.Dataset):\r\n dataset[k] = dataset[k].map(flatten_fn,remove_columns=dataset[k].column_names)\r\n\r\n# Squad\r\ndef flatten_squad(example):\r\n return {\"source\": \"squad context: \" + example['context'] + \" question: \" + example['question'],\r\n \"target\":example[\"answers\"][\"text\"]}\r\nsquad = nlp.load_dataset(\"squad\")\r\nflatten(squad,flatten_squad)\r\n\r\n# CNN_DM\r\ndef flatten_cnn_dm(example):\r\n return {\"source\": \"cnn_dm: \" + example['article'],\"target\":[example[\"highlights\"]]}\r\ncnn_dm = nlp.load_dataset(\"cnn_dailymail\", 
\"3.0.0\")\r\nflatten(cnn_dm,flatten_cnn_dm)\r\n\r\n#############################################\r\n\r\nmtds = load_multitask(squad,cnn_dm)\r\n\r\nfor example in mtds[\"train\"]:\r\n print(example[\"task_name\"],example[\"target\"])\r\n```\r\nLet me know if you have any thoughts. I've started using this in some of my projects and it seems to work. If people are happy with the general approach for a first version, I can make a pull request.","Hey! Happy to jump into the discussion here. I'm still getting familiar with bits of this code, but the reasons I sampled over data loaders rather than datasets is 1) ensuring that each sampled batch corresponds to only 1 task (in case of different inputs formats\/downstream models) and 2) potentially having different batch sizes per task (e.g. some tasks have very long\/short inputs). How are you currently dealing with these in your PR?","The short answer is - I'm not! Everything is currently on a per-example basis. It would be fairly simple to add a `batch_size` argument which would ensure that every `batch_size` examples come from the same task. That should suit most use-cases (unless you wanted to ensure batches all came from the same task and apply something like `SortishSampler` on each task first)\r\n\r\nYour notebook was really inspiring by the way - thanks!","@zphang is having different batch sizes per task actually helpful? Would be interesting to know as it's not something I've come across as a technique used by any MTL papers.","mt-dnn's [batcher.py](https:\/\/github.com\/namisan\/mt-dnn\/blob\/master\/mt_dnn\/batcher.py) might be worth looking at.","> @zphang is having different batch sizes per task actually helpful? Would be interesting to know as it's not something I've come across as a technique used by any MTL papers.\r\n\r\nI think having different batch sizes per task is particularly helpful in some scenarios where each task has different amount of data. For example, the problem I'm currently facing is one task has tens of thousands of samples while one task has a couple hundreds. I think in this case different batch size could help. But if using the same batch size is a lot simpler to implement, I guess it makes sense to go with that.","I think that instead of proportional to size sampling you should specify weights or probabilities for drawing a batch from each dataset. We should also ensure that the smaller datasets are repeated so that the encoder layer doesn't overtrain on the largest dataset.","Are there any references for people doing different batch sizes per task in the literature? I've only seen constant batch sizes with differing numbers of batches for each task which seems sufficient to prevent the impact of large datasets (Read 3.5.3 of the [T5 paper](https:\/\/arxiv.org\/pdf\/1910.10683.pdf) for example).\r\n\r\n","Hi,\r\nregarding building T5 dataset , I think we can use datasets https:\/\/github.com\/huggingface\/datasets and then need something similar to tf.data.experimental.sample_from_datasets, do you know if similar functionality exist in pytorch? Which can sample multiple datasets with the given rates. thanks. 
"],"created_at":1590744146000,"updated_at":1603701993000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"It seems like many of the best performing models on the GLUE benchmark make some use of multitask learning (simultaneous training on multiple tasks).\r\n\r\nThe [T5 paper](https:\/\/arxiv.org\/pdf\/1910.10683.pdf) highlights multiple ways of mixing the tasks together during finetuning:\r\n- **Examples-proportional mixing** - sample from tasks proportionally to their dataset size\r\n- **Equal mixing** - sample uniformly from each task\r\n- **Temperature-scaled mixing** - The generalized approach used by multilingual BERT which uses a temperature T, where the mixing rate of each task is raised to the power 1\/T and renormalized. When T=1 this is equivalent to equal mixing, and becomes closer to equal mixing with increasing T.\r\n\r\nFollowing this discussion https:\/\/github.com\/huggingface\/transformers\/issues\/4340 in [transformers](https:\/\/github.com\/huggingface\/transformers), @enzoampil suggested that the `nlp` library might be a better place for this functionality.\r\n\r\nSome method for combining datasets could be implemented ,e.g.\r\n```\r\ndataset = nlp.load_multitask(['squad','imdb','cnn_dm'], temperature=2.0, ...)\r\n```\r\n\r\nWe would need a few additions:\r\n- Method of identifying the tasks - how can we support adding a string to each task as an identifier: e.g. 'summarisation: '?\r\n- Method of combining the metrics - a standard approach is to use the specific metric for each task and add them together for a combined score.\r\n\r\nIt would be great to support common use cases such as pretraining on the GLUE benchmark before fine-tuning on each GLUE task in turn. \r\n\r\nI'm willing to write bits\/most of this I just need some guidance on the interface and other library details so I can integrate it properly.\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/217\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/216","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/216\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/216\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/216\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/216","id":626896890,"node_id":"MDU6SXNzdWU2MjY4OTY4OTA=","number":216,"title":"\u2753 How to get ROUGE-2 with the ROUGE metric 
?","user":{"login":"astariul","id":43774355,"node_id":"MDQ6VXNlcjQzNzc0MzU1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/43774355?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/astariul","html_url":"https:\/\/github.com\/astariul","followers_url":"https:\/\/api.github.com\/users\/astariul\/followers","following_url":"https:\/\/api.github.com\/users\/astariul\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/astariul\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/astariul\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/astariul\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/astariul\/orgs","repos_url":"https:\/\/api.github.com\/users\/astariul\/repos","events_url":"https:\/\/api.github.com\/users\/astariul\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/astariul\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["ROUGE-1 and ROUGE-L shouldn't return the same thing. This is weird","For the rouge2 metric you can do\r\n\r\n```python\r\nrouge = nlp.load_metric('rouge')\r\nwith open(\"pred.txt\") as p, open(\"ref.txt\") as g:\r\n for lp, lg in zip(p, g):\r\n rouge.add(lp, lg)\r\nscore = rouge.compute(rouge_types=[\"rouge2\"])\r\n```\r\n\r\nNote that I just did a PR to have both `.add` and `.add_batch` for metrics, that's why now this is `rouge.add(lp, lg)` and not `rouge.add([lp], [lg])`","Well I just tested with the official script and both rouge1 and rougeL return exactly the same thing for the input you gave, so this is actually fine ^^\r\n\r\nI hope it helped :)"],"created_at":1590709652000,"updated_at":1590969875000,"closed_at":1590969875000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I'm trying to use ROUGE metric, but I don't know how to get the ROUGE-2 metric.\r\n\r\n---\r\n\r\nI compute scores with :\r\n\r\n```python\r\nimport nlp\r\n\r\nrouge = nlp.load_metric('rouge')\r\nwith open(\"pred.txt\") as p, open(\"ref.txt\") as g:\r\n for lp, lg in zip(p, g):\r\n rouge.add([lp], [lg])\r\nscore = rouge.compute()\r\n```\r\n\r\nthen : _(print only the F-score for readability)_\r\n\r\n```python\r\nfor k, s in score.items():\r\n print(k, s.mid.fmeasure)\r\n```\r\n\r\nIt gives :\r\n\r\n>rouge1 0.7915168355671788\r\nrougeL 0.7915168355671788\r\n\r\n---\r\n\r\n**How can I get the ROUGE-2 score ?**\r\n\r\nAlso, it's seems weird that ROUGE-1 and ROUGE-L scores are the same. 
Did I made a mistake ?\r\n\r\n@lhoestq ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/216\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/215","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/215\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/215\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/215\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/215","id":626867879,"node_id":"MDU6SXNzdWU2MjY4Njc4Nzk=","number":215,"title":"NonMatchingSplitsSizesError when loading blog_authorship_corpus","user":{"login":"cedricconol","id":52105365,"node_id":"MDQ6VXNlcjUyMTA1MzY1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/52105365?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/cedricconol","html_url":"https:\/\/github.com\/cedricconol","followers_url":"https:\/\/api.github.com\/users\/cedricconol\/followers","following_url":"https:\/\/api.github.com\/users\/cedricconol\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/cedricconol\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/cedricconol\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/cedricconol\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/cedricconol\/orgs","repos_url":"https:\/\/api.github.com\/users\/cedricconol\/repos","events_url":"https:\/\/api.github.com\/users\/cedricconol\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/cedricconol\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I just ran it on colab and got this\r\n```\r\n[{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812,\r\ndataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='train',\r\nnum_bytes=611607465, num_examples=533285, dataset_name='blog_authorship_corpus')},\r\n{'expected': SplitInfo(name='validation', num_bytes=37500394, num_examples=31277,\r\ndataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='validation',\r\nnum_bytes=35652716, num_examples=30804, dataset_name='blog_authorship_corpus')}]\r\n```\r\nwhich is different from the `dataset_infos.json` and also different from yours.\r\n\r\nIt looks like the script for generating examples is not consistent","The files provided by the authors are corrupted and the script seems to ignore the xml files that can't be decoded (it does `try:... except UnicodeDecodeError`). Maybe depending of the environment some files can be opened and some others don't but not sure why","Feel free to do `ignore_verifications=True` for now... The verifications only include a check on the checksums of the downloaded files, and a check on the number of examples in each splits.","I'm getting this same issue when loading the `imdb` corpus via `dataset = load_dataset(\"imdb\")`. 
When I try `ignore_verifications=True`, no examples are read into the `train` portion of the dataset. ","> I'm getting this same issue when loading the `imdb` corpus via `dataset = load_dataset(\"imdb\")`. When I try `ignore_verifications=True`, no examples are read into the `train` portion of the dataset.\r\n\r\nWhen the checksums don't match, it may mean that the file you downloaded is corrupted. In this case you can try to load the dataset again `load_dataset(\"imdb\", download_mode=\"force_redownload\")`\r\n\r\nAlso I just checked on my side and it worked fine:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"imdb\")\r\nprint(len(dataset[\"train\"]))\r\n# 25000\r\n```\r\n\r\nLet me know if redownloading fixes your issue @EmilyAlsentzer .\r\nIf not, feel free to open a separate issue.","It doesn't seem to fix the problem. I'll open a separate issue. Thanks. ","I wasn't aware of the \"force_redownload\" option and manually removed the '\/home\/me\/.cache\/huggingface\/datasets\/' dir, this worked for me (dataset 'cnn_dailymail')","Yes I think this might not be documented well enough. Let\u2019s add it to the doc @lhoestq @SBrandeis.\r\nAnd everything on how to control the cache behavior better (removing, overriding, changing the path, etc)"],"created_at":1590706519000,"updated_at":1609969023000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Getting this error when i run `nlp.load_dataset('blog_authorship_corpus')`. \r\n\r\n```\r\nraise NonMatchingSplitsSizesError(str(bad_splits))\r\nnlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', \r\nnum_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'), \r\n'recorded': SplitInfo(name='train', num_bytes=616473500, num_examples=536323, \r\ndataset_name='blog_authorship_corpus')}, {'expected': SplitInfo(name='validation', \r\nnum_bytes=37500394, num_examples=31277, dataset_name='blog_authorship_corpus'), \r\n'recorded': SplitInfo(name='validation', num_bytes=30786661, num_examples=27766, \r\ndataset_name='blog_authorship_corpus')}]\r\n```\r\n\r\nUpon checking it seems like there is a disparity between the information in `datasets\/blog_authorship_corpus\/dataset_infos.json` and what was downloaded. 
Although I can get away with this by passing `ignore_verifications=True` in `load_dataset`, I'm thinking doing so might give problems later on.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/215\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/214","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/214\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/214\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/214\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/214","id":626641549,"node_id":"MDExOlB1bGxSZXF1ZXN0NDI0NTk1NjIx","number":214,"title":"[arrow_dataset.py] add new filter function","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I agree that a `.filter` method would be VERY useful and appreciated. I'm not a big fan of using `flatten_nested` as it completely breaks down the structure of the example and it may create bugs. Right now I think it may not work for nested structures. Maybe there's a simpler way that we've not figured out yet.","Instead of flattening everything and rebuilding the example, maybe we can try to access the examples like this:\r\n```python\r\nfor i in range(num_examples):\r\n example = map_nested(lambda x: x[i], batch)\r\n # ... then test to keep it or not\r\n```","> Instead of flattening everything and rebuilding the example, maybe we can try to access the examples like this:\r\n> \r\n> ```python\r\n> for i in range(num_examples):\r\n> example = map_nested(lambda x: x[i], batch)\r\n> # ... then test to keep it or not\r\n> ```\r\n\r\nAwesome I'll check it out :-) ","> Instead of flattening everything and rebuilding the example, maybe we can try to access the examples like this:\r\n> \r\n> ```python\r\n> for i in range(num_examples):\r\n> example = map_nested(lambda x: x[i], batch)\r\n> # ... then test to keep it or not\r\n> ```\r\n\r\nAwesome this function is definitely much nicer!","Actually I just realized that `map_nested` might not work either as it applies the function at the very last list of the structure. 
However we can imagine that a single example has also a list in its structure:\r\n```python\r\none_example = {\r\n \"title\": \"blabla\",\r\n \"paragraphs\": [\r\n \"p1\", \"p2\", ...\r\n ]\r\n}\r\n```","We'll probably have to take into account the `dset._data.schema` to extract the examples from the batch.","> Actually I just realized that `map_nested` might not work either as it applies the function at the very last list of the structure. However we can imagine that a single example has also a list in its structure:\r\n> \r\n> ```python\r\n> one_example = {\r\n> \"title\": \"blabla\",\r\n> \"paragraphs\": [\r\n> \"p1\", \"p2\", ...\r\n> ]\r\n> }\r\n> ```\r\n\r\nThey both work. I'm using it on trivia_qa which is pretty nested. If you use the option `dict_only=True` I think it's fine.","> We'll probably have to take into account the `dset._data.schema` to extract the examples from the batch.\r\n\r\nWhy? ","Actually it's fine. I guess this is going to be yet another thing to be unit-tested just to make sure ^^","Yes, I will need to add tests and documentation! \r\n@thomwolf - would a function like this be ok? It abstracts `.map()` a bit which might be hard to understand. ","I tried on some datasets with nested structure and it works fine ! Great work :D \r\n","Awesome :-), I will add documentation and some simple unittests","Ok merging!"],"created_at":1590682900000,"updated_at":1590752609000,"closed_at":1590751940000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/214","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/214","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/214.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/214.patch"},"body":"The `.map()` function is super useful, but can IMO a bit tedious when filtering certain examples.\r\nI think, filtering out examples is also a very common operation people would like to perform on datasets.\r\n\r\nThis PR is a proposal to add a `.filter()` function in the same spirit than the `.map()` function.\r\n\r\nHere is a sample code you can play around with:\r\n\r\n```python\r\nds = nlp.load_dataset(\"squad\", split=\"validation[:10%]\")\r\n\r\n\r\ndef remove_under_idx_5(example, idx):\r\n return idx < 5\r\n\r\n\r\ndef only_keep_examples_with_is_in_context(example):\r\n return \"is\" in example[\"context\"]\r\n\r\n\r\nresult_keep_only_first_5 = ds.filter(remove_under_idx_5, with_indices=True, load_from_cache_file=False)\r\nresult_keep_examples_with_is_in_context = ds.filter(only_keep_examples_with_is_in_context, load_from_cache_file=False)\r\n\r\nprint(\"Original number of examples: {}\".format(len(ds)))\r\nprint(\"First five examples number of examples: {}\".format(len(result_keep_only_first_5)))\r\nprint(\"Is in context examples number of examples: {}\".format(len(result_keep_examples_with_is_in_context)))\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/214\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/213","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/213\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/213\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/213\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/213","id":626587995,"node_id":"MDExOlB1bGxSZXF1ZXN0NDI0NTUxODE3","number":213,"title":"better message if missing beam options","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1590678417000,"updated_at":1590745877000,"closed_at":1590745876000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/213","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/213","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/213.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/213.patch"},"body":"WDYT @yjernite ?\r\nFor example:\r\n```python\r\ndataset = nlp.load_dataset('wikipedia', '20200501.aa')\r\n```\r\nRaises:\r\n```\r\nMissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https:\/\/beam.apache.org\/documentation\/runners\/capability-matrix\/\r\nIf you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory). 
\r\nExample of usage: \r\n\t`load_dataset('wikipedia', '20200501.aa', beam_runner='DirectRunner')`\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/213\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/212","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/212\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/212\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/212\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/212","id":626580198,"node_id":"MDExOlB1bGxSZXF1ZXN0NDI0NTQ1NjAy","number":212,"title":"have 'add' and 'add_batch' for metrics","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1590677807000,"updated_at":1590748865000,"closed_at":1590748864000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/212","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/212","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/212.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/212.patch"},"body":"This should fix #116 \r\n\r\nPreviously the `.add` method of metrics expected a batch of examples.\r\nNow `.add` expects one prediction\/reference and `.add_batch` expects a batch.\r\nI think it is more coherent with the way the ArrowWriter works.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/212\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/211","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/211\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/211\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/211\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/211","id":626565994,"node_id":"MDU6SXNzdWU2MjY1NjU5OTQ=","number":211,"title":"[Arrow writer, Trivia_qa] Could not convert TagMe with type str: converting to null 
type","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"assignees":[{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/g
ithub.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Here the full error trace:\r\n\r\n```\r\nArrowInvalid Traceback (most recent call last)\r\n<ipython-input-1-7aaf3f011358> in <module>\r\n 1 import nlp\r\n 2 ds = nlp.load_dataset(\"trivia_qa\", \"rc\", split=\"validation[:1%]\") # this might take 2.3 min to download but it's cached afterwards...\r\n----> 3 ds.map(lambda x: x, load_from_cache_file=False)\r\n\r\n~\/python_bin\/nlp\/arrow_dataset.py in map(self, function, with_indices, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, arrow_schema, disable_nullable)\r\n 549\r\n 550 if update_data:\r\n--> 551 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file\r\n 552\r\n 553 # Create new Dataset from buffer or file\r\n\r\n~\/python_bin\/nlp\/arrow_writer.py in finalize(self, close_stream)\r\n 182 def finalize(self, close_stream=True):\r\n 183 if self.pa_writer is not None:\r\n--> 184 self.write_on_file()\r\n 185 self.pa_writer.close()\r\n 186 if close_stream:\r\n\r\n~\/python_bin\/nlp\/arrow_writer.py in write_on_file(self)\r\n 104 \"\"\"\r\n 105 if self.current_rows:\r\n--> 106 pa_array = pa.array(self.current_rows, type=self._type)\r\n 107 first_example = pa.array(self.current_rows[0:1], type=self._type)[0]\r\n 108 # Sanity check\r\n\r\n~\/hugging_face\/venv_3.7\/lib\/python3.7\/site-packages\/pyarrow\/array.pxi in pyarrow.lib.array()\r\n\r\n~\/hugging_face\/venv_3.7\/lib\/python3.7\/site-packages\/pyarrow\/array.pxi in pyarrow.lib._sequence_to_array()\r\n\r\n~\/hugging_face\/venv_3.7\/lib\/python3.7\/site-packages\/pyarrow\/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowInvalid: Could not convert TagMe with type str: converting to null 
type\r\n```","Actually thinking a bit more about it, it's probably a data sample that is not correct in `trivia_qa`. But I'm a bit surprised though that we managed to write it in .arrow format and now cannot write it anymore after an \"identity\" mapping.","I don't have this error :x","Interesting, maybe I have a very old cache of trivia_qa...thanks for checking","I'm running it right now on colab to double check","Actually, I know what the problem is...I'm quite sure it's a bug. Here we take some test inputs: https:\/\/github.com\/huggingface\/nlp\/blob\/0e0ef12c14d2175e0b0bd7d8aa814b09e2cd7e1f\/src\/nlp\/arrow_dataset.py#L472\r\n\r\nIt might be that in the test inputs, a `Sequence` type value is an emtpy list. So in my case I have `ds[0][\"entity_pages'][\"wiki_context\"] = []`. => this leads to an `arrow_schema` equal to `null` for `[\"entity_pages'][\"wiki_context\"]` => see line: https:\/\/github.com\/huggingface\/nlp\/blob\/0e0ef12c14d2175e0b0bd7d8aa814b09e2cd7e1f\/src\/nlp\/arrow_dataset.py#L501 instead of list of string which it should for other examples. \r\n\r\nGuess it's an edge case, but it can happen.","Good point, I think the schema should be infered at the writing stage where we have a `writer_batch_size` number of examples (typically 10k) so it's even less likely to run into this scenario."],"created_at":1590676694000,"updated_at":1595499316000,"closed_at":1595499316000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"Running the following code \r\n\r\n```\r\nimport nlp\r\nds = nlp.load_dataset(\"trivia_qa\", \"rc\", split=\"validation[:1%]\") # this might take 2.3 min to download but it's cached afterwards...\r\nds.map(lambda x: x, load_from_cache_file=False)\r\n```\r\n\r\ntriggers a `ArrowInvalid: Could not convert TagMe with type str: converting to null type` error.\r\n\r\nOn the other hand if we remove a certain column of `trivia_qa` which seems responsible for the bug, it works:\r\n\r\n```\r\nimport nlp\r\nds = nlp.load_dataset(\"trivia_qa\", \"rc\", split=\"validation[:1%]\") # this might take 2.3 min to download but it's cached afterwards...\r\nds.map(lambda x: x, remove_columns=[\"entity_pages\"], load_from_cache_file=False)\r\n```\r\n\r\n. Seems quite hard to debug what's going on here... 
@lhoestq @thomwolf - do you have a good first guess what the problem could be?\r\n\r\n**Note** BTW: I think this could be a good test to check that the datasets work correctly: Take a tiny portion of the dataset and check that it can be written correctly.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/211\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/210","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/210\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/210\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/210\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/210","id":626504243,"node_id":"MDExOlB1bGxSZXF1ZXN0NDI0NDgyNDgz","number":210,"title":"fix xnli metric kwargs description","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1590672104000,"updated_at":1590672131000,"closed_at":1590672130000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/210","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/210","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/210.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/210.patch"},"body":"The text was wrong as noticed in #202 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/210\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/209","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/209\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/209\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/209\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/209","id":626405849,"node_id":"MDExOlB1bGxSZXF1ZXN0NDI0NDAwOTc4","number":209,"title":"Add a Google Drive exception for small 
files","user":{"login":"airKlizz","id":25703835,"node_id":"MDQ6VXNlcjI1NzAzODM1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25703835?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/airKlizz","html_url":"https:\/\/github.com\/airKlizz","followers_url":"https:\/\/api.github.com\/users\/airKlizz\/followers","following_url":"https:\/\/api.github.com\/users\/airKlizz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/airKlizz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/airKlizz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/airKlizz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/airKlizz\/orgs","repos_url":"https:\/\/api.github.com\/users\/airKlizz\/repos","events_url":"https:\/\/api.github.com\/users\/airKlizz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/airKlizz\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Can you run the style formatting tools to pass the code quality test?\r\n\r\nYou can find all the details in CONTRIBUTING.md: https:\/\/github.com\/huggingface\/nlp\/blob\/master\/CONTRIBUTING.md#how-to-contribute-to-nlp","Nice ! ","``make style`` done! Thanks for the approvals."],"created_at":1590662417000,"updated_at":1590678904000,"closed_at":1590678904000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/209","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/209","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/209.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/209.patch"},"body":"I tried to use the ``nlp`` library to load personnal datasets. I mainly copy-paste the code for ``multi-news`` dataset because my files are stored on Google Drive. \r\n\r\nOne of my dataset is small (< 25Mo) so it can be verified by Drive without asking the authorization to the user. This makes the download starts directly. \r\n\r\nCurrently the ``nlp`` raises a error: ``ConnectionError: Couldn't reach https:\/\/drive.google.com\/uc?export=download&id=1DGnbUY9zwiThTdgUvVTSAvSVHoloCgun`` while the url is working. 
So I just add a new exception as you have already done for ``firebasestorage.googleapis.com`` : \r\n\r\n```\r\nelif (response.status_code == 400 and \"firebasestorage.googleapis.com\" in url) or (response.status_code == 405 and \"drive.google.com\" in url)\r\n```\r\n\r\nI make an example of the error that you can run on [![Open In Colab](https:\/\/colab.research.google.com\/assets\/colab-badge.svg)](https:\/\/colab.research.google.com\/drive\/1ae_JJ9uvUt-9GBh0uGZhjbF5aXkl-BPv?usp=sharing)\r\n\r\nI avoid the error by adding an exception but there is maybe a proper way to do it.\r\n\r\nMany thanks :hugs:\r\nBest,","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/209\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/208","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/208\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/208\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/208\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/208","id":626398519,"node_id":"MDExOlB1bGxSZXF1ZXN0NDI0Mzk0ODIx","number":208,"title":"[Dummy data] insert config name instead of config ","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1590661699000,"updated_at":1590670081000,"closed_at":1590670080000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/208","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/208","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/208.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/208.patch"},"body":"Thanks @yjernite for letting me know. in the dummy data command the config name shuold be passed to the dataset builder and not the config itself. 
\r\n\r\nAlso, @lhoestq fixed small import bug introduced by beam command I think.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/208\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/207","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/207\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/207\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/207\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/207","id":625932200,"node_id":"MDU6SXNzdWU2MjU5MzIyMDA=","number":207,"title":"Remove test set from NLP viewer","user":{"login":"chrisdonahue","id":748399,"node_id":"MDQ6VXNlcjc0ODM5OQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/748399?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/chrisdonahue","html_url":"https:\/\/github.com\/chrisdonahue","followers_url":"https:\/\/api.github.com\/users\/chrisdonahue\/followers","following_url":"https:\/\/api.github.com\/users\/chrisdonahue\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/chrisdonahue\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/chrisdonahue\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/chrisdonahue\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/chrisdonahue\/orgs","repos_url":"https:\/\/api.github.com\/users\/chrisdonahue\/repos","events_url":"https:\/\/api.github.com\/users\/chrisdonahue\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/chrisdonahue\/received_events","type":"User","site_admin":false},"labels":[{"id":2107841032,"node_id":"MDU6TGFiZWwyMTA3ODQxMDMy","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/nlp-viewer","name":"nlp-viewer","color":"94203D","default":false,"description":""}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["~is the viewer also open source?~\r\n[is a streamlit app!](https:\/\/docs.streamlit.io\/en\/latest\/getting_started.html)","Appears that [two thirds of those polled on Twitter](https:\/\/twitter.com\/srush_nlp\/status\/1265734497632477185) are in favor of _some_ mechanism for averting eyeballs from the test data."],"created_at":1590604327000,"updated_at":1591198147000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"While the new [NLP viewer](https:\/\/huggingface.co\/nlp\/viewer\/) is a great tool, I think it would be best to outright remove the option of looking at the test sets. At the very least, a warning should be displayed to users before showing the test set. 
Newcomers to the field might not be aware of best practices, and small things like this can help increase awareness.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/207\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/206","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/206\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/206\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/206\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/206","id":625842989,"node_id":"MDU6SXNzdWU2MjU4NDI5ODk=","number":206,"title":"[Question] Combine 2 datasets which have the same columns","user":{"login":"airKlizz","id":25703835,"node_id":"MDQ6VXNlcjI1NzAzODM1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25703835?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/airKlizz","html_url":"https:\/\/github.com\/airKlizz","followers_url":"https:\/\/api.github.com\/users\/airKlizz\/followers","following_url":"https:\/\/api.github.com\/users\/airKlizz\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/airKlizz\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/airKlizz\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/airKlizz\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/airKlizz\/orgs","repos_url":"https:\/\/api.github.com\/users\/airKlizz\/repos","events_url":"https:\/\/api.github.com\/users\/airKlizz\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/airKlizz\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["We are thinking about ways to combine datasets for T5 in #217, feel free to share your thoughts about this.","Ok great! I will look at it. Thanks"],"created_at":1590596752000,"updated_at":1591780274000,"closed_at":1591780274000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi,\r\n\r\nI am using ``nlp`` to load personal datasets. I created summarization datasets in multi-languages based on wikinews. I have one dataset for english and one for german (french is getting to be ready as well). I want to keep these datasets independent because they need different pre-processing (add different task-specific prefixes for T5 : *summarize:* for english and *zusammenfassen:* for german)\r\n\r\nMy issue is that I want to train T5 on the combined english and german datasets to see if it improves results. So I would like to combine 2 datasets (which have the same columns) to make one and train T5 on it. I was wondering if there is a proper way to do it? 
I assume that it can be done by combining all examples of each dataset but maybe you have a better solution.\r\n\r\nHoping this is clear enough,\r\n\r\nThanks a lot \ud83d\ude0a\r\nBest","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/206\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/205","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/205\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/205\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/205\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/205","id":625839335,"node_id":"MDExOlB1bGxSZXF1ZXN0NDIzOTY2ODE1","number":205,"title":"Better arrow dataset iter","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1590596421000,"updated_at":1590597598000,"closed_at":1590597596000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/205","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/205","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/205.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/205.patch"},"body":"I tried to play around with `tf.data.Dataset.from_generator` and I found out that the `__iter__` that we have for `nlp.arrow_dataset.Dataset` ignores the format that has been set (torch or tensorflow).\r\nWith these changes I should be able to come up with a `tf.data.Dataset` that uses lazy loading, as asked in #193.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/205\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/204","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/204\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/204\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/204\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/204","id":625655849,"node_id":"MDExOlB1bGxSZXF1ZXN0NDIzODE5MTQw","number":204,"title":"Add Dataflow support + 
Wikipedia + Wiki40b","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1590582769000,"updated_at":1590653435000,"closed_at":1590653434000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/204","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/204","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/204.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/204.patch"},"body":"# Add Dataflow support + Wikipedia + Wiki40b\r\n\r\n## Support datasets processing with Apache Beam\r\n\r\nSome datasets are too big to be processed on a single machine, for example: wikipedia, wiki40b, etc. Apache Beam allows to process datasets on many execution engines like Dataflow, Spark, Flink, etc.\r\n\r\nTo process such datasets with Beam, I added a command to run beam pipelines `nlp-cli run_beam path\/to\/dataset\/script`. Then I used it to process the english + french wikipedia, and the english of wiki40b.\r\nThe processed arrow files are on GCS and are the result of a Dataflow job.\r\n\r\nI added a markdown documentation file in `docs` that explains how to use it properly.\r\n\r\n## Load already processed datasets\r\n\r\nNow that we have those datasets already processed, I made it possible to load datasets that are already processed. You can do `load_dataset('wikipedia', '20200501.en')` and it will download the processed files from the Hugging Face GCS directly into the user's cache and be ready to use !\r\n\r\nThe Wikipedia dataset was already asked in #187 and this PR should soon allow to add Natural Questions as asked in #129 \r\n\r\n## Other changes in the code\r\n\r\nTo make things work, I had to do a few adjustments:\r\n- add a `ship_files_with_pipeline` method to the `DownloadManager`. This is because beam pipelines can be run in the cloud and therefore need to have access to your downloaded data. I used it in the wikipedia script:\r\n ```python\r\n if not pipeline.is_local():\r\n downloaded_files = dl_manager.ship_files_with_pipeline(downloaded_files, pipeline)\r\n ```\r\n- add parquet to arrow conversion. 
This is because the output of beam pipelines are parquet files so we need to convert them to arrow and have the arrow files on GCS\r\n- add a test script with a dummy beam dataset\r\n- minor adjustments to allow read\/write operations on remote files using `apache_beam.io.filesystems.FileSystems` if we want (it can be connected to gcp, s3, hdfs, etc...)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/204\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/203","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/203\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/203\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/203\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/203","id":625515488,"node_id":"MDExOlB1bGxSZXF1ZXN0NDIzNzEyMTQ3","number":203,"title":"Raise an error if no config name for datasets like glue","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1590570238000,"updated_at":1590597639000,"closed_at":1590597638000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/203","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/203","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/203.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/203.patch"},"body":"Some datasets like glue (see #130) and scientific_papers (see #197) have many configs.\r\nFor example for glue there are cola, sst2, mrpc etc.\r\n\r\nCurrently if a user does `load_dataset('glue')`, then Cola is loaded by default and it can be confusing. Instead, we should raise an error to let the user know that he has to pick one of the available configs (as proposed in #152). 
For example for glue, the message looks like:\r\n```\r\nValueError: Config name is missing.\r\nPlease pick one among the available configs: ['cola', 'sst2', 'mrpc', 'qqp', 'stsb', 'mnli', 'mnli_mismatched', 'mnli_matched', 'qnli', 'rte', 'wnli', 'ax']\r\nExample of usage:\r\n\t`load_dataset('glue', 'cola')`\r\n```\r\n\r\nThe error is raised if the config name is missing and if there are >=2 possible configs.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/203\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/202","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/202\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/202\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/202\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/202","id":625493983,"node_id":"MDU6SXNzdWU2MjU0OTM5ODM=","number":202,"title":"Mistaken `_KWARGS_DESCRIPTION` for XNLI metric","user":{"login":"phiyodr","id":33572125,"node_id":"MDQ6VXNlcjMzNTcyMTI1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/33572125?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/phiyodr","html_url":"https:\/\/github.com\/phiyodr","followers_url":"https:\/\/api.github.com\/users\/phiyodr\/followers","following_url":"https:\/\/api.github.com\/users\/phiyodr\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/phiyodr\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/phiyodr\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/phiyodr\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/phiyodr\/orgs","repos_url":"https:\/\/api.github.com\/users\/phiyodr\/repos","events_url":"https:\/\/api.github.com\/users\/phiyodr\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/phiyodr\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Indeed, good catch ! thanks\r\nFixing it right now"],"created_at":1590568482000,"updated_at":1590672156000,"closed_at":1590672156000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi!\r\n\r\nThe [`_KWARGS_DESCRIPTION`](https:\/\/github.com\/huggingface\/nlp\/blob\/7d0fa58641f3f462fb2861dcdd6ce7f0da3f6a56\/metrics\/xnli\/xnli.py#L45) for the XNLI metric uses `Args` and `Returns` text from [BLEU](https:\/\/github.com\/huggingface\/nlp\/blob\/7d0fa58641f3f462fb2861dcdd6ce7f0da3f6a56\/metrics\/bleu\/bleu.py#L58) metric:\r\n\r\n```\r\n_KWARGS_DESCRIPTION = \"\"\"\r\nComputes XNLI score which is just simple accuracy.\r\nArgs:\r\n predictions: list of translations to score.\r\n Each translation should be tokenized into a list of tokens.\r\n references: list of lists of references for each translation.\r\n Each reference should be tokenized into a list of tokens.\r\n max_order: Maximum n-gram order to use when computing BLEU score.\r\n smooth: Whether or not to apply Lin et al. 
2004 smoothing.\r\nReturns:\r\n 'bleu': bleu score,\r\n 'precisions': geometric mean of n-gram precisions,\r\n 'brevity_penalty': brevity penalty,\r\n 'length_ratio': ratio of lengths,\r\n 'translation_length': translation_length,\r\n 'reference_length': reference_length\r\n\"\"\"\r\n```\r\n\r\nBut it should be something like:\r\n\r\n```\r\n_KWARGS_DESCRIPTION = \"\"\"\r\nComputes XNLI score which is just simple accuracy.\r\nArgs:\r\n predictions: Predicted labels.\r\n references: Ground truth labels.\r\nReturns:\r\n 'accuracy': accuracy\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/202\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/201","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/201\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/201\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/201\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/201","id":625235430,"node_id":"MDExOlB1bGxSZXF1ZXN0NDIzNDkzNTMw","number":201,"title":"Fix typo in README","user":{"login":"LysandreJik","id":30755778,"node_id":"MDQ6VXNlcjMwNzU1Nzc4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/30755778?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/LysandreJik","html_url":"https:\/\/github.com\/LysandreJik","followers_url":"https:\/\/api.github.com\/users\/LysandreJik\/followers","following_url":"https:\/\/api.github.com\/users\/LysandreJik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/LysandreJik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/LysandreJik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/LysandreJik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/LysandreJik\/orgs","repos_url":"https:\/\/api.github.com\/users\/LysandreJik\/repos","events_url":"https:\/\/api.github.com\/users\/LysandreJik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/LysandreJik\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Amazing, @LysandreJik!","Really did my best!"],"created_at":1590531501000,"updated_at":1590536431000,"closed_at":1590534056000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/201","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/201","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/201.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/201.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/201\/timeline","performed_via_github_app":null,"is_pull_request":true} 
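A minimal sketch of how the corrected XNLI metric description above (simple accuracy over predicted vs. reference labels) would be exercised, assuming the `nlp.load_metric` API of the time; the label values are illustrative only:

```python
import nlp

# XNLI score is just accuracy: fraction of predictions matching the references.
xnli_metric = nlp.load_metric("xnli")
predictions = [0, 2, 1, 1]
references = [0, 2, 2, 1]
results = xnli_metric.compute(predictions=predictions, references=references)
print(results)  # e.g. {'accuracy': 0.75}
```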
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/200","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/200\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/200\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/200\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/200","id":625226638,"node_id":"MDExOlB1bGxSZXF1ZXN0NDIzNDg2NTM0","number":200,"title":"[ArrowWriter] Set schema at first write example","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Good point!\r\n\r\nI guess we could add this to `write_batch` as well (before using `self._schema` in the first line of this method)?"],"created_at":1590530388000,"updated_at":1590570474000,"closed_at":1590570473000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/200","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/200","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/200.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/200.patch"},"body":"Right now if the schema was not specified when instantiating `ArrowWriter`, then it could be set with the first `write_table` for example (it calls `self._build_writer()` to do so).\r\n\r\nI noticed that it was not done if the first example is added via `.write`, so I added it for coherence.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/200\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/199","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/199\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/199\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/199\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/199","id":625217440,"node_id":"MDExOlB1bGxSZXF1ZXN0NDIzNDc4ODIx","number":199,"title":"Fix GermEval 2014 dataset 
infos","user":{"login":"stefan-it","id":20651387,"node_id":"MDQ6VXNlcjIwNjUxMzg3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20651387?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stefan-it","html_url":"https:\/\/github.com\/stefan-it","followers_url":"https:\/\/api.github.com\/users\/stefan-it\/followers","following_url":"https:\/\/api.github.com\/users\/stefan-it\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stefan-it\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stefan-it\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stefan-it\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stefan-it\/orgs","repos_url":"https:\/\/api.github.com\/users\/stefan-it\/repos","events_url":"https:\/\/api.github.com\/users\/stefan-it\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stefan-it\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hopefully. this also fixes the dataset view on https:\/\/huggingface.co\/nlp\/viewer\/ :)","Oh good catch ! This should fix it indeed"],"created_at":1590529304000,"updated_at":1590529824000,"closed_at":1590529824000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/199","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/199","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/199.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/199.patch"},"body":"Hi,\r\n\r\nthis PR just removes the `dataset_info.json` file and adds a newly generated `dataset_infos.json` file.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/199\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/198","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/198\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/198\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/198\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/198","id":625200627,"node_id":"MDU6SXNzdWU2MjUyMDA2Mjc=","number":198,"title":"Index outside of table 
length","user":{"login":"casajarm","id":305717,"node_id":"MDQ6VXNlcjMwNTcxNw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/305717?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/casajarm","html_url":"https:\/\/github.com\/casajarm","followers_url":"https:\/\/api.github.com\/users\/casajarm\/followers","following_url":"https:\/\/api.github.com\/users\/casajarm\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/casajarm\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/casajarm\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/casajarm\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/casajarm\/orgs","repos_url":"https:\/\/api.github.com\/users\/casajarm\/repos","events_url":"https:\/\/api.github.com\/users\/casajarm\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/casajarm\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Sounds like something related to the nlp viewer @srush ","Fixed. "],"created_at":1590527380000,"updated_at":1590533029000,"closed_at":1590533029000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"The offset input box warns of numbers larger than a limit (like 2000) but then the errors start at a smaller value than that limit (like 1955).\r\n\r\n> ValueError: Index (2000) outside of table length (2000).\r\n> Traceback:\r\n> File \"\/home\/sasha\/.local\/lib\/python3.7\/site-packages\/streamlit\/ScriptRunner.py\", line 322, in _run_script\r\n> exec(code, module.__dict__)\r\n> File \"\/home\/sasha\/nlp_viewer\/run.py\", line 116, in <module>\r\n> v = d[item][k]\r\n> File \"\/home\/sasha\/.local\/lib\/python3.7\/site-packages\/nlp\/arrow_dataset.py\", line 338, in __getitem__\r\n> output_all_columns=self._output_all_columns,\r\n> File \"\/home\/sasha\/.local\/lib\/python3.7\/site-packages\/nlp\/arrow_dataset.py\", line 290, in _getitem\r\n> raise ValueError(f\"Index ({key}) outside of table length ({self._data.num_rows}).\")","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/198\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/197","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/197\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/197\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/197\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/197","id":624966904,"node_id":"MDU6SXNzdWU2MjQ5NjY5MDQ=","number":197,"title":"Scientific Papers only downloading 
Pubmed","user":{"login":"antmarakis","id":17463361,"node_id":"MDQ6VXNlcjE3NDYzMzYx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17463361?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/antmarakis","html_url":"https:\/\/github.com\/antmarakis","followers_url":"https:\/\/api.github.com\/users\/antmarakis\/followers","following_url":"https:\/\/api.github.com\/users\/antmarakis\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/antmarakis\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/antmarakis\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/antmarakis\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/antmarakis\/orgs","repos_url":"https:\/\/api.github.com\/users\/antmarakis\/repos","events_url":"https:\/\/api.github.com\/users\/antmarakis\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/antmarakis\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi so there are indeed two configurations in the datasets as you can see [here](https:\/\/github.com\/huggingface\/nlp\/blob\/master\/datasets\/scientific_papers\/scientific_papers.py#L81-L82).\r\n\r\nYou can load either one with:\r\n```python\r\ndataset = nlp.load_dataset('scientific_papers', 'pubmed')\r\ndataset = nlp.load_dataset('scientific_papers', 'arxiv')\r\n```\r\n\r\nThis issues is actually related to a similar user-experience issue with GLUE. When several configurations are available and the first configuration is loaded by default (see issue #152 and #130), it seems to be unexpected for users.\r\n\r\nI think we should maybe raise a (very explicit) error when there are several configurations available and the user doesn't specify one.\r\n\r\nWhat do you think @lhoestq @patrickvonplaten @mariamabarham ?","Yes, it looks like the right thing to do ","Now if you don't specify which part you want, it raises an error:\r\n```\r\nValueError: Config name is missing.\r\nPlease pick one among the available configs: ['pubmed', 'arxiv']\r\nExample of usage:\r\n\t`load_dataset('scientific_papers', 'pubmed')`\r\n```"],"created_at":1590506327000,"updated_at":1590653968000,"closed_at":1590653968000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi!\r\n\r\nI have been playing around with this module, and I am a bit confused about the `scientific_papers` dataset. I thought that it would download two separate datasets, arxiv and pubmed. 
But when I run the following:\r\n\r\n```\r\ndataset = nlp.load_dataset('scientific_papers', data_dir='.', cache_dir='.')\r\nDownloading: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5.05k\/5.05k [00:00<00:00, 2.66MB\/s]\r\nDownloading: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 4.90k\/4.90k [00:00<00:00, 2.42MB\/s]\r\nDownloading and preparing dataset scientific_papers\/pubmed (download: 4.20 GiB, generated: 2.33 GiB, total: 6.53 GiB) to .\/scientific_papers\/pubmed\/1.1.1...\r\nDownloading: 3.62GB [00:40, 90.5MB\/s]\r\nDownloading: 880MB [00:08, 101MB\/s]\r\nDataset scientific_papers downloaded and prepared to .\/scientific_papers\/pubmed\/1.1.1. Subsequent calls will reuse this data.\r\n```\r\n\r\nonly a pubmed folder is created. There doesn't seem to be something for arxiv. Are these two datasets merged? 
Or have I misunderstood something?\r\n\r\nThanks!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/197\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/196","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/196\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/196\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/196\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/196","id":624901266,"node_id":"MDExOlB1bGxSZXF1ZXN0NDIzMjIwMjIw","number":196,"title":"Check invalid config name","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I think that's not related to the config name but the filenames in the dummy data. Mostly it occurs with files downloaded from drive. In that case the dummy file name is extracted from the google drive link and it corresponds to what comes after `https:\/\/drive.google.com\/`\r\n\r\n","> I think that's not related to the config name but the filenames in the dummy data. Mostly it occurs with files downloaded from drive. In that case the dummy file name is extracted from the google drive link and it corresponds to what comes after `https:\/\/drive.google.com\/`\r\n\r\nThe filenames of the dummy data are now encoded (see #173). So this is not a problem anymore.\r\n\r\nThe problem here is different and comes from the directory names where we save the arrow files (basically `dataset_name\/config_name\/version`). In this case we could have invalid directory names because of the config name\r\n","Okay great then.","I like the method, but I'm wondering whether it should just be a test method instead of a `__post_init__` function. From a logical point of view the only reason this error would be thrown is because of an invalid config name introduced when creating the dataset script \/ adding a new dataset => so I think it might be better to write a simple test for this in `test_dataset_common.py`...what do you think @lhoestq ?","`test_dataset_common.py` only tests canonical datasets no ? What if users wants to create their own script ?","> `test_dataset_common.py` only tests canonical datasets no ? 
What if users wants to create their own script ?\r\n\r\nIt tests all dataset that can be loaded either locally or on AWS (which includes all non-canonical datasets as well)...by their own script you mean like a private dataset script that they don't want to be public? I guess even then they could locally run the test functions to check...","We could have a bunch of simple consistency tests that run before uploading with the CLI (without loading data if we don't want to force the user to have dummy data)?","Let's say someone want to create his own private script. As the script is not meant to be shared, it's not going to be placed in `\/datasets` right ? Maybe the script is going to be inside another project. If I'm not wrong in this case the `test_dataset_common.py` is not going to test his script.\r\n\r\nRaising an error in the post init is a sanity check that would tell the user immediately what's wrong.\r\nThe error is raised if he tried to load the script or if he uses `nlp-cli test`","> Let's say someone want to create his own private script. As the script is not meant to be shared, it's not going to be placed in `\/datasets` right ? Maybe the script is going to be inside another project. If I'm not wrong in this case the `test_dataset_common.py` is not going to test his script.\r\n> \r\n> Raising an error in the post init is a sanity check that would tell the user immediately what's wrong.\r\n> The error is raised if he tried to load the script or if he uses `nlp-cli test`\r\n\r\nOK, fair point! I'm good with this then :-) ","I'm fine with this as well (even though I understand what you meant @patrickvonplaten, we can still change it later if needed)","> We could have a bunch of simple consistency tests that run before uploading with the CLI (without loading data if we don't want to force the user to have dummy data)?\r\n\r\nYes! I guess that's a big question whether we should force the user to add dummy data. It's probably too tedious for the user...so when uploading to circle ci should we just check \r\n- 1) All configs can be instantiated (if there are any)\r\n- 2) The BuilderClass can be instantiated ... \r\n- 3) ... maybe some more\r\n\r\nand maybe suggest to the user to add dummy data using the dummy data command?","I really like that we have a test with dummy data for canonical datasets. This is insurance that they'll keep working in the long run. \r\n\r\nOn the other hand I understand that we will probably not force this practice for scripts uploaded on S3 by a user under his namespace (non-canonical), as it is tedious. As I understand right now the test is done for all the datasets on aws, even the non-canonical ? 
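For concreteness, here is a minimal sketch of the post-init sanity check discussed in this thread. It is illustrative only: the real `BuilderConfig` has more fields, and the exact character set and error wording are assumptions.

```python
import dataclasses

# Characters that are not valid in Windows directory names; the config name
# ends up in the cache path (dataset_name/config_name/version), so it must be
# a valid directory name on every platform.
_INVALID_WINDOWS_CHARS = '<>:"/\\|?*'

@dataclasses.dataclass
class BuilderConfig:
    """Minimal stand-in for a dataset builder config (illustrative only)."""
    name: str
    version: str = "1.0.0"

    def __post_init__(self):
        # Fail fast with an explicit error instead of creating a broken cache dir later.
        bad = sorted({c for c in self.name if c in _INVALID_WINDOWS_CHARS})
        if bad:
            raise ValueError(
                f"Bad characters {bad} in config name '{self.name}'. "
                "Config names are used as directory names and must be valid on Windows."
            )

BuilderConfig(name="mode=first,char_skip=25")  # ok: '=' and ',' are allowed
# BuilderConfig(name="arxiv?")                 # would raise ValueError
```

Because the check runs when the config object is created, it also covers private scripts that never go through the repository test suite.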
We should think about different tests for non-canonical datasets.\r\n\r\nI also like the idea of a simple consistency test !","Merging this one for now, we can think about the test for non-canonical datasets later"],"created_at":1590501171000,"updated_at":1590527096000,"closed_at":1590527095000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/196","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/196","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/196.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/196.patch"},"body":"As said in #194, we should raise an error if the config name has bad characters.\r\nBad characters are those that are not allowed for directory names on windows.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/196\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/195","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/195\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/195\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/195\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/195","id":624858686,"node_id":"MDExOlB1bGxSZXF1ZXN0NDIzMTg1NTAy","number":195,"title":"[Dummy data command] add new case to command","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq - tiny change in the dummy data command, should be good to merge."],"created_at":1590497447000,"updated_at":1590503908000,"closed_at":1590503907000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/195","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/195","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/195.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/195.patch"},"body":"Qanta: #194 introduces a case that was not noticed before. This change in code helps community users to have an easier time creating the dummy data. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/195\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/194","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/194\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/194\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/194\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/194","id":624854897,"node_id":"MDExOlB1bGxSZXF1ZXN0NDIzMTgyNDM5","number":194,"title":"Add Dataset: Qanta","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq - the config name is rather special here: *E.g.* `mode=first,char_skip=25`. It includes `=` and `,` - will that be a problem for windows folders, you think? \r\n\r\nApart from that good to merge for me.","It's ok to have `=` and `,`.\r\nWindows doesn't like things like `?`, `:`, `\/` etc.\r\n\r\nI'll add some lines to raise an error if the config name is invalid.","Thanks for fixing things up! 
I'm curious to take a look at the zip files now to know the format for future reference."],"created_at":1590497075000,"updated_at":1590512297000,"closed_at":1590498980000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/194","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/194","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/194.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/194.patch"},"body":"Fixes dummy data for #169 @EntilZha","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/194\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/193","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/193\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/193\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/193\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/193","id":624655558,"node_id":"MDU6SXNzdWU2MjQ2NTU1NTg=","number":193,"title":"[Tensorflow] Use something else than `from_tensor_slices()`","user":{"login":"astariul","id":43774355,"node_id":"MDQ6VXNlcjQzNzc0MzU1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/43774355?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/astariul","html_url":"https:\/\/github.com\/astariul","followers_url":"https:\/\/api.github.com\/users\/astariul\/followers","following_url":"https:\/\/api.github.com\/users\/astariul\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/astariul\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/astariul\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/astariul\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/astariul\/orgs","repos_url":"https:\/\/api.github.com\/users\/astariul\/repos","events_url":"https:\/\/api.github.com\/users\/astariul\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/astariul\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github
.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["I guess we can use `tf.data.Dataset.from_generator` instead. I'll give it a try.","Is `tf.data.Dataset.from_generator` working on TPU ?","`from_generator` is not working on TPU, I met the following error :\r\n\r\n```\r\nFile \"\/usr\/local\/lib\/python3.6\/contextlib.py\", line 88, in __exit__\r\n next(self.gen)\r\n File \"\/home\/usr\/.venv\/bart\/lib\/python3.6\/site-packages\/tensorflow_core\/python\/eager\/context.py\", line 1900, in execution_mode\r\n executor_new.wait()\r\n File \"\/home\/usr\/.venv\/bart\/lib\/python3.6\/site-packages\/tensorflow_core\/python\/eager\/executor.py\", line 67, in wait\r\n pywrap_tensorflow.TFE_ExecutorWaitForAllPendingNodes(self._handle)\r\ntensorflow.python.framework.errors_impl.NotFoundError: No registered 'PyFunc' OpKernel for 'CPU' devices compatible with node {{node PyFunc}}\r\n . Registered: <no registered kernels>\r\n\r\n [[PyFunc]]\r\n```\r\n\r\n---\r\n\r\n@lhoestq It seems you merged some changes that allow lazy-loading. **Can you give an example of how to use ?** Maybe the Colab notebook should be updated with this method as well.","Could you send me the code you used to run create the dataset using `.from_generator` ? What version of tensorflow are you using ?","I'm using TF2.2\r\n\r\nHere is my code :\r\n```\r\nimport nlp\r\nfrom transformers import BartTokenizer\r\n\r\ntokenizer = BartTokenizer.from_pretrained('bart-large')\r\n\r\ndef encode(sample):\r\n article_inputs = tokenizer.encode_plus(sample[\"article\"], max_length=tokenizer.model_max_length, pad_to_max_length=True)\r\n summary_inputs = tokenizer.encode_plus(sample[\"highlights\"], max_length=tokenizer.model_max_length, pad_to_max_length=True)\r\n\r\n article_inputs.update({\"lm_labels\": summary_inputs['input_ids']})\r\n return article_inputs\r\n\r\ncnn_dm = nlp.load_dataset('cnn_dailymail', '3.0.0', split='test')\r\ncnn_dm = cnn_dm.map(encode)\r\n\r\ndef gen():\r\n for sample in cnn_dm:\r\n s = {}\r\n s['input_ids'] = sample['input_ids']\r\n s['attention_mask'] = sample['attention_mask']\r\n s['lm_labels'] = sample['lm_labels']\r\n yield s\r\n\r\ndataset = tf.data.Dataset.from_generator(gen, output_types={k: tf.int32 for k in ['input_ids', 'attention_mask', 'lm_labels']}, output_shapes={k: tf.TensorShape([tokenizer.model_max_length]) for k in ['input_ids', 'attention_mask', 'lm_labels']}\r\n```","Apparently we'll have to wait for the next tensorflow release to use `.from_generator` and TPU. 
See https:\/\/github.com\/tensorflow\/tensorflow\/issues\/34346#issuecomment-598262489","Fixed by https:\/\/github.com\/huggingface\/datasets\/pull\/339"],"created_at":1590477554000,"updated_at":1603812491000,"closed_at":1603812491000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"In the example notebook, the TF Dataset is built using `from_tensor_slices()` :\r\n\r\n```python\r\ncolumns = ['input_ids', 'token_type_ids', 'attention_mask', 'start_positions', 'end_positions']\r\ntrain_tf_dataset.set_format(type='tensorflow', columns=columns)\r\nfeatures = {x: train_tf_dataset[x] for x in columns[:3]} \r\nlabels = {\"output_1\": train_tf_dataset[\"start_positions\"]}\r\nlabels[\"output_2\"] = train_tf_dataset[\"end_positions\"]\r\ntfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8)\r\n```\r\n\r\nBut according to [official tensorflow documentation](https:\/\/www.tensorflow.org\/guide\/data#consuming_numpy_arrays), this will load the entire dataset to memory.\r\n\r\n**This defeats one purpose of this library, which is lazy loading.**\r\n\r\nIs there any other way to load the `nlp` dataset into TF dataset lazily ?\r\n\r\n---\r\n\r\nFor example, is it possible to use [Arrow dataset](https:\/\/www.tensorflow.org\/io\/api_docs\/python\/tfio\/arrow\/ArrowDataset) ? If yes, is there any code example ?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/193\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/192","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/192\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/192\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/192\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/192","id":624397592,"node_id":"MDU6SXNzdWU2MjQzOTc1OTI=","number":192,"title":"[Question] Create Apache Arrow dataset from raw text file","user":{"login":"mrm8488","id":3653789,"node_id":"MDQ6VXNlcjM2NTM3ODk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3653789?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mrm8488","html_url":"https:\/\/github.com\/mrm8488","followers_url":"https:\/\/api.github.com\/users\/mrm8488\/followers","following_url":"https:\/\/api.github.com\/users\/mrm8488\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mrm8488\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mrm8488\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mrm8488\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mrm8488\/orgs","repos_url":"https:\/\/api.github.com\/users\/mrm8488\/repos","events_url":"https:\/\/api.github.com\/users\/mrm8488\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mrm8488\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["We store every dataset in the Arrow format. This is convenient as it supports nested types and memory mapping. If you are curious feel free to check the [pyarrow documentation](https:\/\/arrow.apache.org\/docs\/python\/)\r\n\r\nYou can use this library to load your covid papers by creating a dataset script. 
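For the simpler routes mentioned further down in this thread (local csv/txt files or in-memory Python data), here is a minimal sketch. It assumes the generic `csv` loader and `Dataset.from_dict` described in the linked docs; the file name and texts are placeholders.

```python
import nlp  # the library this repo ships (later renamed to `datasets`)

# Option 1: use a generic loading script to build the Arrow cache from local files.
papers = nlp.load_dataset("csv", data_files={"train": "cord19_papers.csv"})

# Option 2: build an Arrow-backed dataset directly from in-memory Python data.
corpus = nlp.Dataset.from_dict({"text": ["first paper ...", "second paper ..."]})

print(papers["train"].num_rows, corpus.num_rows)
```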
You can find inspiration from the ones we've already written in `\/datasets`. Here is a link to the steps to [add a dataset](https:\/\/github.com\/huggingface\/nlp\/blob\/master\/CONTRIBUTING.md#how-to-add-a-dataset)","Hello @mrm8488 and @lhoestq \r\n\r\nIs there a way to convert a dataset to Apache arrow format (locally\/personal use) & use it before sending it to hugging face?\r\n\r\nThanks :)","> Is there a way to convert a dataset to Apache arrow format (locally\/personal use) & use it before sending it to hugging face?\r\n\r\nSure, to get a dataset in arrow format you can either:\r\n- [load from local files (txt, json, csv)](https:\/\/huggingface.co\/nlp\/loading_datasets.html?highlight=csv#from-local-files)\r\n- OR [load from python data (dict, pandas)](https:\/\/huggingface.co\/nlp\/loading_datasets.html?highlight=csv#from-in-memory-data)\r\n- OR [create your own dataset script](https:\/\/huggingface.co\/nlp\/loading_datasets.html?highlight=csv#using-a-custom-dataset-loading-script)\r\n"],"created_at":1590424967000,"updated_at":1603812022000,"closed_at":1603812022000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi guys, I have gathered and preprocessed about 2GB of COVID papers from CORD dataset @ Kggle. I have seen you have a text dataset as \"Crime and punishment\" in Apache arrow format. Do you have any script to do it from a raw txt file (preprocessed as for BERT like) or any guide?\r\nIs the worth of send it to you and add it to the NLP library?\r\nThanks, Manu\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/192\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/191","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/191\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/191\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/191\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/191","id":624394936,"node_id":"MDExOlB1bGxSZXF1ZXN0NDIyODI3MDMy","number":191,"title":"[Squad es] add 
dataset_infos","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1590424552000,"updated_at":1590424799000,"closed_at":1590424798000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/191","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/191","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/191.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/191.patch"},"body":"@mariamabarham - was still about to upload this. Should have waited with my comment a bit more :D ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/191\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/190","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/190\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/190\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/190\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/190","id":624124600,"node_id":"MDExOlB1bGxSZXF1ZXN0NDIyNjA4NzAw","number":190,"title":"add squad Spanish v1 and 
v2","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Nice ! :) \r\nCan we group them into one dataset with two versions, instead of having two datasets ?","Yes sure, I can use the version as config name","@lhoestq can you check? I grouped them","Awesome :) feel free to merge after fixing the test in the CI","@mariamabarham - feel free to merge when you're ready. I only checked the dummy files. I did not run the SLOW tests. "],"created_at":1590394120000,"updated_at":1590424126000,"closed_at":1590424125000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/190","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/190","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/190.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/190.patch"},"body":"This PR add the Spanish Squad versions 1 and 2 datasets. 
\r\nFixes #164 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/190\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/189","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/189\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/189\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/189\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/189","id":624048881,"node_id":"MDU6SXNzdWU2MjQwNDg4ODE=","number":189,"title":"[Question] BERT-style multiple choice formatting","user":{"login":"sarahwie","id":8027676,"node_id":"MDQ6VXNlcjgwMjc2NzY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8027676?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sarahwie","html_url":"https:\/\/github.com\/sarahwie","followers_url":"https:\/\/api.github.com\/users\/sarahwie\/followers","following_url":"https:\/\/api.github.com\/users\/sarahwie\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sarahwie\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sarahwie\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sarahwie\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sarahwie\/orgs","repos_url":"https:\/\/api.github.com\/users\/sarahwie\/repos","events_url":"https:\/\/api.github.com\/users\/sarahwie\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sarahwie\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @sarahwie, can you details this a little more?\r\n\r\nI'm not sure I understand what you refer to and what you mean when you say \"Previously, this was done by passing a list of InputFeatures to the dataloader instead of a list of InputFeature\"","I think I've resolved it. For others' reference: to convert from using the [`MultipleChoiceDataset` class](https:\/\/github.com\/huggingface\/transformers\/blob\/a34a9896ac2a4a33ff9cd805c76eed914c8d8965\/examples\/multiple-choice\/utils_multiple_choice.py#L82)\/[`run_multiple_choice.py`](https:\/\/github.com\/huggingface\/transformers\/blob\/a34a9896ac2a4a33ff9cd805c76eed914c8d8965\/examples\/multiple-choice\/run_multiple_choice.py) script in Huggingface Transformers, I've done the following for hellaswag:\r\n\r\n1. converted the `convert_examples_to_features()` function to only take one input and return a dictionary rather than a list:\r\n```\r\ndef convert_examples_to_features(example, tokenizer, max_length):\r\n\r\n choices_inputs = defaultdict(list)\r\n for ending_idx, ending in enumerate(example['endings']['ending']):\r\n text_a = example['ctx']\r\n text_b = ending\r\n\r\n inputs = tokenizer.encode_plus(\r\n text_a,\r\n text_b,\r\n add_special_tokens=True,\r\n max_length=max_length,\r\n pad_to_max_length=True,\r\n return_overflowing_tokens=True,\r\n )\r\n if \"num_truncated_tokens\" in inputs and inputs[\"num_truncated_tokens\"] > 0:\r\n logger.info(\r\n \"Attention! you are cropping tokens (swag task is ok). 
\"\r\n \"If you are training ARC and RACE and you are poping question + options,\"\r\n \"you need to try to use a bigger max seq length!\"\r\n )\r\n\r\n for key in inputs:\r\n choices_inputs[key].append(inputs[key])\r\n \r\n choices_inputs['label'] = int(example['label'])\r\n\r\n return choices_inputs\r\n```\r\n2. apply this directly (instance-wise) to dataset, convert dataset to torch tensors. Dataset is then ready to be passed to `Trainer` instance.\r\n\r\n```\r\ndataset['train'] = dataset['train'].map(lambda x: convert_examples_to_features(x, tokenizer, max_length), batched=False)\r\ncolumns = ['input_ids', 'token_type_ids', 'attention_mask', 'label']\r\ndataset['train'].set_format(type='torch', columns=columns)\r\n```"],"created_at":1590383465000,"updated_at":1590431908000,"closed_at":1590431908000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hello, I am wondering what the equivalent formatting of a dataset should be to allow for multiple-choice answering prediction, BERT-style. Previously, this was done by passing a list of `InputFeatures` to the dataloader instead of a list of `InputFeature`, where `InputFeatures` contained lists of length equal to the number of answer choices in the MCQ instead of single items. I'm a bit confused on what the output of my feature conversion function should be when using `dataset.map()` to ensure similar behavior.\r\n\r\nThanks!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/189\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/188","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/188\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/188\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/188\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/188","id":623890430,"node_id":"MDU6SXNzdWU2MjM4OTA0MzA=","number":188,"title":"When will the remaining math_dataset modules be added as dataset objects","user":{"login":"tylerroost","id":31251196,"node_id":"MDQ6VXNlcjMxMjUxMTk2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/31251196?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tylerroost","html_url":"https:\/\/github.com\/tylerroost","followers_url":"https:\/\/api.github.com\/users\/tylerroost\/followers","following_url":"https:\/\/api.github.com\/users\/tylerroost\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tylerroost\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tylerroost\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tylerroost\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tylerroost\/orgs","repos_url":"https:\/\/api.github.com\/users\/tylerroost\/repos","events_url":"https:\/\/api.github.com\/users\/tylerroost\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tylerroost\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["On a similar note it would be nice to differentiate between train-easy, train-medium, and train-hard","Hi @tylerroost, we don't have a timeline for this at the moment.\r\nIf you want to give it a look we would be 
happy to review a PR on it.\r\nAlso, the library is one week old so everything is quite barebones, in particular the doc.\r\nYou should expect some bumps on the road.\r\n\r\nTo get you started, you can check the datasets scripts in the `.\/datasets` folder on the repo and find the one on math_datasets that will need to be modified. Then you should check the original repository on the math_dataset to see where the other files to download are located and what is the expected format for the various parts of the dataset.\r\n\r\nTo get a general overview on how datasets scripts are written and used, you can read the nice tutorial on how to add a new dataset for TensorFlow Dataset [here](https:\/\/www.tensorflow.org\/datasets\/add_dataset), our API is not exactly identical but it can give you a high-level overview.","Thanks I'll give it a look"],"created_at":1590335212000,"updated_at":1590346428000,"closed_at":1590346428000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Currently only the algebra_linear_1d is supported. Is there a timeline for making the other modules supported. If no timeline is established, how can I help?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/188\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/187","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/187\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/187\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/187\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/187","id":623627800,"node_id":"MDU6SXNzdWU2MjM2Mjc4MDA=","number":187,"title":"[Question] How to load wikipedia ? Beam runner ?","user":{"login":"richarddwang","id":17963619,"node_id":"MDQ6VXNlcjE3OTYzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17963619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/richarddwang","html_url":"https:\/\/github.com\/richarddwang","followers_url":"https:\/\/api.github.com\/users\/richarddwang\/followers","following_url":"https:\/\/api.github.com\/users\/richarddwang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/richarddwang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/richarddwang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/richarddwang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/richarddwang\/orgs","repos_url":"https:\/\/api.github.com\/users\/richarddwang\/repos","events_url":"https:\/\/api.github.com\/users\/richarddwang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/richarddwang\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I have seen that somebody is hard working on easierly loadable wikipedia. 
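In the meantime, a minimal sketch of supplying a Beam runner for the warning reported below. It assumes `nlp.DownloadConfig` accepts the `beam_runner` argument the warning names and that `load_dataset` forwards `download_config` (its signature in the traceback suggests so); the dump name is the small default from the report.

```python
import nlp

# For a small dump the local DirectRunner is usually enough;
# large dumps need a distributed runner (Dataflow, Spark, Flink, ...).
dl_config = nlp.DownloadConfig(beam_runner="DirectRunner")

# "20200501.aa" is the tiny default config from the report below;
# pick the language dump you actually need, e.g. "20200501.en".
wiki = nlp.load_dataset("wikipedia", "20200501.aa", download_config=dl_config)
```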
#129 \r\nMaybe I should wait a few days for that version ?","Yes we (well @lhoestq) are very actively working on this."],"created_at":1590229132000,"updated_at":1590365522000,"closed_at":1590365522000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"When `nlp.load_dataset('wikipedia')`, I got\r\n* `WARNING:nlp.builder:Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided. Please pass a nlp.DownloadConfig(beam_runner=...) object to the builder.download_and_prepare(download_config=...) method. Default values will be used.`\r\n* `AttributeError: 'NoneType' object has no attribute 'size'`\r\n\r\nCould somebody tell me what should I do ? \r\n\r\n# Env\r\nOn Colab,\r\n```\r\ngit clone https:\/\/github.com\/huggingface\/nlp\r\ncd nlp\r\npip install -q .\r\n```\r\n```\r\n%pip install -q apache_beam mwparserfromhell\r\n-> ERROR: pydrive 1.3.1 has requirement oauth2client>=4.0.0, but you'll have oauth2client 3.0.0 which is incompatible.\r\nERROR: google-api-python-client 1.7.12 has requirement httplib2<1dev,>=0.17.0, but you'll have httplib2 0.12.0 which is incompatible.\r\nERROR: chainer 6.5.0 has requirement typing-extensions<=3.6.6, but you'll have typing-extensions 3.7.4.2 which is incompatible.\r\n```\r\n```\r\npip install -q apache-beam[interactive]\r\nERROR: google-colab 1.0.0 has requirement ipython~=5.5.0, but you'll have ipython 5.10.0 which is incompatible.\r\n```\r\n\r\n# The whole message\r\n```\r\nWARNING:nlp.builder:Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided. Please pass a nlp.DownloadConfig(beam_runner=...) object to the builder.download_and_prepare(download_config=...) method. Default values will be used.\r\n\r\nDownloading and preparing dataset wikipedia\/20200501.aa (download: Unknown size, generated: Unknown size, total: Unknown size) to \/root\/.cache\/huggingface\/datasets\/wikipedia\/20200501.aa\/1.0.0...\r\n\r\n---------------------------------------------------------------------------\r\n\r\nAttributeError Traceback (most recent call last)\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/runners\/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner.process()\r\n\r\n44 frames\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/runners\/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.PerWindowInvoker.invoke_process()\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/runners\/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window()\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/io\/iobase.py in process(self, element, init_result)\r\n 1081 writer.write(e)\r\n-> 1082 return [window.TimestampedValue(writer.close(), timestamp.MAX_TIMESTAMP)]\r\n 1083 \r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/io\/filebasedsink.py in close(self)\r\n 422 def close(self):\r\n--> 423 self.sink.close(self.temp_handle)\r\n 424 return self.temp_shard_path\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/io\/parquetio.py in close(self, writer)\r\n 537 if len(self._buffer[0]) > 0:\r\n--> 538 self._flush_buffer()\r\n 539 if self._record_batches_byte_size > 0:\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/io\/parquetio.py in _flush_buffer(self)\r\n 569 for b in x.buffers():\r\n--> 570 size = size + b.size\r\n 571 
self._record_batches_byte_size = self._record_batches_byte_size + size\r\n\r\nAttributeError: 'NoneType' object has no attribute 'size'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nAttributeError Traceback (most recent call last)\r\n\r\n<ipython-input-9-340aabccefff> in <module>()\r\n----> 1 dset = nlp.load_dataset('wikipedia')\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 518 download_mode=download_mode,\r\n 519 ignore_verifications=ignore_verifications,\r\n--> 520 save_infos=save_infos,\r\n 521 )\r\n 522 \r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs)\r\n 370 verify_infos = not save_infos and not ignore_verifications\r\n 371 self._download_and_prepare(\r\n--> 372 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 373 )\r\n 374 # Sync info\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/builder.py in _download_and_prepare(self, dl_manager, verify_infos)\r\n 770 with beam.Pipeline(runner=beam_runner, options=beam_options,) as pipeline:\r\n 771 super(BeamBasedBuilder, self)._download_and_prepare(\r\n--> 772 dl_manager, pipeline=pipeline, verify_infos=False\r\n 773 ) # TODO{beam} verify infos\r\n 774 \r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/pipeline.py in __exit__(self, exc_type, exc_val, exc_tb)\r\n 501 def __exit__(self, exc_type, exc_val, exc_tb):\r\n 502 if not exc_type:\r\n--> 503 self.run().wait_until_finish()\r\n 504 \r\n 505 def visit(self, visitor):\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/pipeline.py in run(self, test_runner_api)\r\n 481 return Pipeline.from_runner_api(\r\n 482 self.to_runner_api(use_fake_coders=True), self.runner,\r\n--> 483 self._options).run(False)\r\n 484 \r\n 485 if self._options.view_as(TypeOptions).runtime_type_check:\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/pipeline.py in run(self, test_runner_api)\r\n 494 finally:\r\n 495 shutil.rmtree(tmpdir)\r\n--> 496 return self.runner.run_pipeline(self, self._options)\r\n 497 \r\n 498 def __enter__(self):\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/runners\/direct\/direct_runner.py in run_pipeline(self, pipeline, options)\r\n 128 runner = BundleBasedDirectRunner()\r\n 129 \r\n--> 130 return runner.run_pipeline(pipeline, options)\r\n 131 \r\n 132 \r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/runners\/portability\/fn_api_runner.py in run_pipeline(self, pipeline, options)\r\n 553 \r\n 554 self._latest_run_result = self.run_via_runner_api(\r\n--> 555 pipeline.to_runner_api(default_environment=self._default_environment))\r\n 556 return self._latest_run_result\r\n 557 \r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/runners\/portability\/fn_api_runner.py in run_via_runner_api(self, pipeline_proto)\r\n 563 # TODO(pabloem, BEAM-7514): Create a watermark manager (that has access to\r\n 564 # the teststream (if any), and all the stages).\r\n--> 565 return self.run_stages(stage_context, stages)\r\n 566 \r\n 567 @contextlib.contextmanager\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/runners\/portability\/fn_api_runner.py in run_stages(self, stage_context, stages)\r\n 
704 stage,\r\n 705 pcoll_buffers,\r\n--> 706 stage_context.safe_coders)\r\n 707 metrics_by_stage[stage.name] = stage_results.process_bundle.metrics\r\n 708 monitoring_infos_by_stage[stage.name] = (\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/runners\/portability\/fn_api_runner.py in _run_stage(self, worker_handler_factory, pipeline_components, stage, pcoll_buffers, safe_coders)\r\n 1071 cache_token_generator=cache_token_generator)\r\n 1072 \r\n-> 1073 result, splits = bundle_manager.process_bundle(data_input, data_output)\r\n 1074 \r\n 1075 def input_for(transform_id, input_id):\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/runners\/portability\/fn_api_runner.py in process_bundle(self, inputs, expected_outputs)\r\n 2332 \r\n 2333 with UnboundedThreadPoolExecutor() as executor:\r\n-> 2334 for result, split_result in executor.map(execute, part_inputs):\r\n 2335 \r\n 2336 split_result_list += split_result\r\n\r\n\/usr\/lib\/python3.6\/concurrent\/futures\/_base.py in result_iterator()\r\n 584 # Careful not to keep a reference to the popped future\r\n 585 if timeout is None:\r\n--> 586 yield fs.pop().result()\r\n 587 else:\r\n 588 yield fs.pop().result(end_time - time.monotonic())\r\n\r\n\/usr\/lib\/python3.6\/concurrent\/futures\/_base.py in result(self, timeout)\r\n 430 raise CancelledError()\r\n 431 elif self._state == FINISHED:\r\n--> 432 return self.__get_result()\r\n 433 else:\r\n 434 raise TimeoutError()\r\n\r\n\/usr\/lib\/python3.6\/concurrent\/futures\/_base.py in __get_result(self)\r\n 382 def __get_result(self):\r\n 383 if self._exception:\r\n--> 384 raise self._exception\r\n 385 else:\r\n 386 return self._result\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/utils\/thread_pool_executor.py in run(self)\r\n 42 # If the future wasn't cancelled, then attempt to execute it.\r\n 43 try:\r\n---> 44 self._future.set_result(self._fn(*self._fn_args, **self._fn_kwargs))\r\n 45 except BaseException as exc:\r\n 46 # Even though Python 2 futures library has #set_exection(),\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/runners\/portability\/fn_api_runner.py in execute(part_map)\r\n 2329 self._registered,\r\n 2330 cache_token_generator=self._cache_token_generator)\r\n-> 2331 return bundle_manager.process_bundle(part_map, expected_outputs)\r\n 2332 \r\n 2333 with UnboundedThreadPoolExecutor() as executor:\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/runners\/portability\/fn_api_runner.py in process_bundle(self, inputs, expected_outputs)\r\n 2243 process_bundle_descriptor_id=self._bundle_descriptor.id,\r\n 2244 cache_tokens=[next(self._cache_token_generator)]))\r\n-> 2245 result_future = self._worker_handler.control_conn.push(process_bundle_req)\r\n 2246 \r\n 2247 split_results = [] # type: List[beam_fn_api_pb2.ProcessBundleSplitResponse]\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/runners\/portability\/fn_api_runner.py in push(self, request)\r\n 1557 self._uid_counter += 1\r\n 1558 request.instruction_id = 'control_%s' % self._uid_counter\r\n-> 1559 response = self.worker.do_instruction(request)\r\n 1560 return ControlFuture(request.instruction_id, response)\r\n 1561 \r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/runners\/worker\/sdk_worker.py in do_instruction(self, request)\r\n 413 # E.g. 
if register is set, this will call self.register(request.register))\r\n 414 return getattr(self, request_type)(\r\n--> 415 getattr(request, request_type), request.instruction_id)\r\n 416 else:\r\n 417 raise NotImplementedError\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/runners\/worker\/sdk_worker.py in process_bundle(self, request, instruction_id)\r\n 448 with self.maybe_profile(instruction_id):\r\n 449 delayed_applications, requests_finalization = (\r\n--> 450 bundle_processor.process_bundle(instruction_id))\r\n 451 monitoring_infos = bundle_processor.monitoring_infos()\r\n 452 monitoring_infos.extend(self.state_cache_metrics_fn())\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/runners\/worker\/bundle_processor.py in process_bundle(self, instruction_id)\r\n 837 for data in data_channel.input_elements(instruction_id,\r\n 838 expected_transforms):\r\n--> 839 input_op_by_transform_id[data.transform_id].process_encoded(data.data)\r\n 840 \r\n 841 # Finish all operations.\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/runners\/worker\/bundle_processor.py in process_encoded(self, encoded_windowed_values)\r\n 214 decoded_value = self.windowed_coder_impl.decode_from_stream(\r\n 215 input_stream, True)\r\n--> 216 self.output(decoded_value)\r\n 217 \r\n 218 def try_split(self, fraction_of_remainder, total_buffer_size):\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/runners\/worker\/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.Operation.output()\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/runners\/worker\/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.Operation.output()\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/runners\/worker\/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.SingletonConsumerSet.receive()\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/runners\/worker\/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.DoOperation.process()\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/runners\/worker\/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.DoOperation.process()\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/runners\/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner.process()\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/runners\/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner._reraise_augmented()\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/future\/utils\/__init__.py in raise_with_traceback(exc, traceback)\r\n 417 if traceback == Ellipsis:\r\n 418 _, _, traceback = sys.exc_info()\r\n--> 419 raise exc.with_traceback(traceback)\r\n 420 \r\n 421 else:\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/runners\/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner.process()\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/runners\/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.PerWindowInvoker.invoke_process()\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/runners\/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window()\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/io\/iobase.py in 
process(self, element, init_result)\r\n 1080 for e in bundle[1]: # values\r\n 1081 writer.write(e)\r\n-> 1082 return [window.TimestampedValue(writer.close(), timestamp.MAX_TIMESTAMP)]\r\n 1083 \r\n 1084 \r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/io\/filebasedsink.py in close(self)\r\n 421 \r\n 422 def close(self):\r\n--> 423 self.sink.close(self.temp_handle)\r\n 424 return self.temp_shard_path\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/io\/parquetio.py in close(self, writer)\r\n 536 def close(self, writer):\r\n 537 if len(self._buffer[0]) > 0:\r\n--> 538 self._flush_buffer()\r\n 539 if self._record_batches_byte_size > 0:\r\n 540 self._write_batches(writer)\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/apache_beam\/io\/parquetio.py in _flush_buffer(self)\r\n 568 for x in arrays:\r\n 569 for b in x.buffers():\r\n--> 570 size = size + b.size\r\n 571 self._record_batches_byte_size = self._record_batches_byte_size + size\r\n\r\nAttributeError: 'NoneType' object has no attribute 'size' [while running 'train\/Save to parquet\/Write\/WriteImpl\/WriteBundles']\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/187\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/186","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/186\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/186\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/186\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/186","id":623595180,"node_id":"MDU6SXNzdWU2MjM1OTUxODA=","number":186,"title":"Weird-ish: Not creating unique caches for different phases","user":{"login":"zphang","id":1668462,"node_id":"MDQ6VXNlcjE2Njg0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1668462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/zphang","html_url":"https:\/\/github.com\/zphang","followers_url":"https:\/\/api.github.com\/users\/zphang\/followers","following_url":"https:\/\/api.github.com\/users\/zphang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/zphang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/zphang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/zphang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/zphang\/orgs","repos_url":"https:\/\/api.github.com\/users\/zphang\/repos","events_url":"https:\/\/api.github.com\/users\/zphang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/zphang\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Looks like a duplicate of #120.\r\nThis is already fixed on master. 
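Until the new release, a sketch of a possible workaround for the cache collision shown in the report below: skip the cache lookup or name the cache file per split. The `load_from_cache_file` / `cache_file_name` arguments of `.map()` are assumed from later versions of the library.

```python
import nlp

dataset = nlp.load_dataset("boolq")

def add_len(example):
    # toy processing function
    example["question_len"] = len(example["question"])
    return example

# Either disable the cache lookup, or give each split its own cache file.
train_output = dataset["train"].map(add_len, load_from_cache_file=False)
valid_output = dataset["validation"].map(
    add_len, cache_file_name="boolq_validation_processed.arrow"
)

print(len(train_output), len(valid_output))
```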
We'll do a new release on pypi soon","Good catch, it looks fixed.\r\n"],"created_at":1590216058000,"updated_at":1590265338000,"closed_at":1590265337000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Sample code:\r\n\r\n```python\r\nimport nlp\r\ndataset = nlp.load_dataset('boolq')\r\n\r\ndef func1(x):\r\n return x\r\n\r\ndef func2(x):\r\n return None\r\n\r\ntrain_output = dataset[\"train\"].map(func1)\r\nvalid_output = dataset[\"validation\"].map(func1)\r\nprint()\r\nprint(len(train_output), len(valid_output))\r\n# Output: 9427 9427\r\n```\r\n\r\nThe map method in both cases seem to be pointing to the same cache, so the latter call based on the validation data will return the processed train data cache.\r\n\r\nWhat's weird is that the following doesn't seem to be an issue:\r\n\r\n```python\r\ntrain_output = dataset[\"train\"].map(func2)\r\nvalid_output = dataset[\"validation\"].map(func2)\r\nprint()\r\nprint(len(train_output), len(valid_output))\r\n# 9427 3270\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/186\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/185","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/185\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/185\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/185\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/185","id":623172484,"node_id":"MDExOlB1bGxSZXF1ZXN0NDIxODkxNjY2","number":185,"title":"[Commands] In-detail instructions to create dummy data folder","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["awesome !"],"created_at":1590150385000,"updated_at":1590156395000,"closed_at":1590156394000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/185","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/185","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/185.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/185.patch"},"body":"### Dummy data command \r\n\r\nThis PR adds a new command `python nlp-cli dummy_data 
<path_to_dataset_folder>` that gives in-detail instructions on how to add the dummy data files. \r\n\r\nIt would be great if you can try it out by moving the current dummy_data folder of any dataset in `.\/datasets` with `mv datasets\/<dataset_script>\/dummy_data datasets\/<dataset_name>\/dummy_data_copy` and running the command `python nlp-cli dummy_data .\/datasets\/<dataset_name>` to see if you like the instructions. \r\n\r\n### CONTRIBUTING.md\r\nAlso the CONTRIBUTING.md is made cleaner including a new section on \"How to add a dataset\". \r\n\r\n### Current PRs \r\nIt would be nice if we can try out if this command helps current PRs, *e.g.* #169 to add a dataset. I comment on those PRs.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/185\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/184","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/184\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/184\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/184\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/184","id":623120929,"node_id":"MDExOlB1bGxSZXF1ZXN0NDIxODQ5MTQ3","number":184,"title":"Use IndexError instead of ValueError when index out of range","user":{"login":"richarddwang","id":17963619,"node_id":"MDQ6VXNlcjE3OTYzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17963619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/richarddwang","html_url":"https:\/\/github.com\/richarddwang","followers_url":"https:\/\/api.github.com\/users\/richarddwang\/followers","following_url":"https:\/\/api.github.com\/users\/richarddwang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/richarddwang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/richarddwang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/richarddwang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/richarddwang\/orgs","repos_url":"https:\/\/api.github.com\/users\/richarddwang\/repos","events_url":"https:\/\/api.github.com\/users\/richarddwang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/richarddwang\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1590144222000,"updated_at":1590654678000,"closed_at":1590654678000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/184","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/184","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/184.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/184.patch"},"body":"**`default __iter__ needs IndexError`**.\r\n\r\nWhen I want to create a wrapper of arrow dataset to adapt to fastai,\r\nI don't know how to initialize it, so I didn't use inheritance but use object composition.\r\nI wrote sth like this.\r\n```\r\nclas HF_dataset():\r\n def __init__(self, arrow_dataset):\r\n self.dset = arrow_dataset\r\n def __getitem__(self, i):\r\n return self.my_get_item(self.dset)\r\n```\r\nBut `for sample in my_dataset:` gave me 
`ValueError(f\"Index ({key}) outside of table length ({self._data.num_rows}).\")` . This is because default `__iter__` will stop when it catched `IndexError`.\r\n\r\nYou can also see my [work](https:\/\/github.com\/richardyy1188\/Pretrain-MLM-and-finetune-on-GLUE-with-fastai\/blob\/master\/GLUE_with_fastai.ipynb) that uses fastai2 to show\/load batches from huggingface\/nlp GLUE datasets\r\n\r\nSo I hope we can use `IndexError` instead to let other people who want to wrap it for any purpose won't be caught by this caveat.\r\n\r\nBTW, I super appreciate your work, both transformers and nlp save my life. \ud83d\udc96\ud83d\udc96\ud83d\udc96\ud83d\udc96\ud83d\udc96\ud83d\udc96\ud83d\udc96\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/184\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/183","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/183\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/183\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/183\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/183","id":623054270,"node_id":"MDU6SXNzdWU2MjMwNTQyNzA=","number":183,"title":"[Bug] labels of glue\/ax are all -1 ","user":{"login":"richarddwang","id":17963619,"node_id":"MDQ6VXNlcjE3OTYzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17963619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/richarddwang","html_url":"https:\/\/github.com\/richarddwang","followers_url":"https:\/\/api.github.com\/users\/richarddwang\/followers","following_url":"https:\/\/api.github.com\/users\/richarddwang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/richarddwang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/richarddwang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/richarddwang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/richarddwang\/orgs","repos_url":"https:\/\/api.github.com\/users\/richarddwang\/repos","events_url":"https:\/\/api.github.com\/users\/richarddwang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/richarddwang\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,
"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["This is the test set given by the Glue benchmark. The labels are not provided, and therefore set to -1.","Ah, yeah. Why it didn\u2019t occur to me. \ud83d\ude02\nThank you for your comment."],"created_at":1590137016000,"updated_at":1590185645000,"closed_at":1590185645000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"```\r\nax = nlp.load_dataset('glue', 'ax')\r\nfor i in range(30): print(ax['test'][i]['label'], end=', ')\r\n```\r\n```\r\n-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, \r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/183\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/182","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/182\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/182\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/182\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/182","id":622646770,"node_id":"MDExOlB1bGxSZXF1ZXN0NDIxNDcxMjg4","number":182,"title":"Update 
newsroom.py","user":{"login":"yoavartzi","id":3289873,"node_id":"MDQ6VXNlcjMyODk4NzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3289873?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yoavartzi","html_url":"https:\/\/github.com\/yoavartzi","followers_url":"https:\/\/api.github.com\/users\/yoavartzi\/followers","following_url":"https:\/\/api.github.com\/users\/yoavartzi\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yoavartzi\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yoavartzi\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yoavartzi\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yoavartzi\/orgs","repos_url":"https:\/\/api.github.com\/users\/yoavartzi\/repos","events_url":"https:\/\/api.github.com\/users\/yoavartzi\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yoavartzi\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"assignees":[{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":[],"created_at":1590080863000,"updated_at":1590165503000,"closed_at":1590165503000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/182","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/182","diff_url":"https:\/\/gith
ub.com\/huggingface\/datasets\/pull\/182.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/182.patch"},"body":"Updated the URL for Newsroom download so it's more robust to future changes.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/182\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/181","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/181\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/181\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/181\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/181","id":622634420,"node_id":"MDU6SXNzdWU2MjI2MzQ0MjA=","number":181,"title":"Cannot upload my own dataset","user":{"login":"korakot","id":3155646,"node_id":"MDQ6VXNlcjMxNTU2NDY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3155646?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/korakot","html_url":"https:\/\/github.com\/korakot","followers_url":"https:\/\/api.github.com\/users\/korakot\/followers","following_url":"https:\/\/api.github.com\/users\/korakot\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/korakot\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/korakot\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/korakot\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/korakot\/orgs","repos_url":"https:\/\/api.github.com\/users\/korakot\/repos","events_url":"https:\/\/api.github.com\/users\/korakot\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/korakot\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["It's my misunderstanding. I cannot just upload a csv. I need to write a dataset loading script too.","I now try with the sample `datasets\/csv` folder. \r\n\r\n nlp-cli upload csv\r\n\r\nThe error is still the same\r\n\r\n```\r\n2020-05-21 17:20:56.394659: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\r\nAbout to upload file \/content\/csv\/csv.py to S3 under filename csv\/csv.py and namespace korakot\r\nAbout to upload file \/content\/csv\/dummy\/0.0.0\/dummy_data.zip to S3 under filename csv\/dummy\/0.0.0\/dummy_data.zip and namespace korakot\r\nProceed? [Y\/n] y\r\nUploading... 
This might take a while if files are large\r\nTraceback (most recent call last):\r\n File \"\/usr\/local\/bin\/nlp-cli\", line 33, in <module>\r\n service.run()\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/commands\/user.py\", line 234, in run\r\n token=token, filename=filename, filepath=filepath, organization=self.args.organization\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/hf_api.py\", line 141, in presign_and_upload\r\n urls = self.presign(token, filename=filename, organization=organization)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/hf_api.py\", line 132, in presign\r\n return PresignedUrl(**d)\r\nTypeError: __init__() got an unexpected keyword argument 'cdn'\r\n```\r\n","We haven't tested the dataset upload feature yet cc @julien-c \r\nThis is on our short\/mid-term roadmap though","Even if I fix the `TypeError: __init__() got an unexpected keyword argument 'cdn'` error, it looks like it still uploads to `https:\/\/s3.amazonaws.com\/models.huggingface.co\/bert\/<namespace>\/<dataset_name>` instead of `https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/<namespace>\/<dataset_name>`","@lhoestq The endpoints in https:\/\/github.com\/huggingface\/nlp\/blob\/master\/src\/nlp\/hf_api.py should be (depending on the type of file):\r\n```\r\nPOST \/api\/datasets\/presign\r\nGET \/api\/datasets\/listObjs\r\nDELETE \/api\/datasets\/deleteObj\r\nPOST \/api\/metrics\/presign \r\nGET \/api\/metrics\/listObjs\r\nDELETE \/api\/metrics\/deleteObj\r\n```\r\n\r\nIn addition to this, @thomwolf cleaned up the objects with dataclasses but you should revert this and re-align to the hf_api that's in this branch of transformers: https:\/\/github.com\/huggingface\/transformers\/pull\/4632 (so that potential new JSON attributes in the API output don't break existing versions of any library)","New commands are\r\n```\r\nnlp-cli upload_dataset <path\/to\/dataset>\r\nnlp-cli upload_metric <path\/to\/metric>\r\nnlp-cli s3_datasets {rm, ls}\r\nnlp-cli s3_metrics {rm, ls}\r\n```\r\nClosing this issue."],"created_at":1590079552000,"updated_at":1592518482000,"closed_at":1592518482000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I look into `nlp-cli` and `user.py` to learn how to upload my own data.\r\n\r\nIt is supposed to work like this\r\n- Register to get username, password at huggingface.co\r\n- `nlp-cli login` and type username, passworld\r\n- I have a single file to upload at `.\/ttc\/ttc_freq_extra.csv`\r\n- `nlp-cli upload ttc\/ttc_freq_extra.csv`\r\n\r\nBut I got this error.\r\n\r\n```\r\n2020-05-21 16:33:52.722464: I tensorflow\/stream_executor\/platform\/default\/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\r\nAbout to upload file \/content\/ttc\/ttc_freq_extra.csv to S3 under filename ttc\/ttc_freq_extra.csv and namespace korakot\r\nProceed? [Y\/n] y\r\nUploading... 
This might take a while if files are large\r\nTraceback (most recent call last):\r\n File \"\/usr\/local\/bin\/nlp-cli\", line 33, in <module>\r\n service.run()\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/commands\/user.py\", line 234, in run\r\n token=token, filename=filename, filepath=filepath, organization=self.args.organization\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/hf_api.py\", line 141, in presign_and_upload\r\n urls = self.presign(token, filename=filename, organization=organization)\r\n File \"\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/hf_api.py\", line 132, in presign\r\n return PresignedUrl(**d)\r\nTypeError: __init__() got an unexpected keyword argument 'cdn'\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/181\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/180","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/180\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/180\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/180\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/180","id":622556861,"node_id":"MDExOlB1bGxSZXF1ZXN0NDIxMzk5Nzg2","number":180,"title":"Add hall of fame","user":{"login":"clmnt","id":821155,"node_id":"MDQ6VXNlcjgyMTE1NQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/821155?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/clmnt","html_url":"https:\/\/github.com\/clmnt","followers_url":"https:\/\/api.github.com\/users\/clmnt\/followers","following_url":"https:\/\/api.github.com\/users\/clmnt\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/clmnt\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/clmnt\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/clmnt\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/clmnt\/orgs","repos_url":"https:\/\/api.github.com\/users\/clmnt\/repos","events_url":"https:\/\/api.github.com\/users\/clmnt\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/clmnt\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1590072828000,"updated_at":1590165316000,"closed_at":1590165314000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/180","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/180","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/180.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/180.patch"},"body":"powered by https:\/\/github.com\/sourcerer-io\/hall-of-fame","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/180\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/179","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/179\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/179\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/179\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/179","id":622525410,"node_id":"MDU6SXNzdWU2MjI1MjU0MTA=","number":179,"title":"[Feature request] separate split name and split instructions","user":{"login":"yjernite","id":10469459,"node_id":"MDQ6VXNlcjEwNDY5NDU5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/10469459?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/yjernite","html_url":"https:\/\/github.com\/yjernite","followers_url":"https:\/\/api.github.com\/users\/yjernite\/followers","following_url":"https:\/\/api.github.com\/users\/yjernite\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/yjernite\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/yjernite\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/yjernite\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/yjernite\/orgs","repos_url":"https:\/\/api.github.com\/users\/yjernite\/repos","events_url":"https:\/\/api.github.com\/users\/yjernite\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/yjernite\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/receiv
ed_events","type":"User","site_admin":false}],"milestone":null,"comments":["If your dataset is a collection of sub-datasets, you should probably consider having one config per sub-dataset. For example for Glue, we have sst2, mnli etc.\r\nIf you want to have multiple train sets (for example one per stage). The easiest solution would be to name them `nlp.Split(\"train_stage1\")`, `nlp.Split(\"train_stage2\")`, etc. or something like that.","Thanks for the tip! I ended up setting up three different versions of the dataset with their own configs.\r\n\r\nfor the named splits, I was trying with `nlp.Split(\"train-stage1\")`, which fails. Changing to `nlp.Split(\"train_stage1\")` works :) I looked for examples of what works in the code comments, it may be worth adding some examples of valid\/invalid names in there?"],"created_at":1590070251000,"updated_at":1590154268000,"closed_at":1590154267000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"Currently, the name of an nlp.NamedSplit is parsed in arrow_reader.py and used as the instruction.\r\n\r\nThis makes it impossible to have several training sets, which can occur when:\r\n- A dataset corresponds to a collection of sub-datasets\r\n- A dataset was built in stages, adding new examples at each stage\r\n\r\nWould it be possible to have two separate fields in the Split class, a name \/instruction and a unique ID that is used as the key in the builder's split_dict ?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/179\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/178","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/178\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/178\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/178\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/178","id":621979849,"node_id":"MDExOlB1bGxSZXF1ZXN0NDIwOTMyMDI5","number":178,"title":"[Manual data] improve error message for manual data in 
general","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589998245000,"updated_at":1589998732000,"closed_at":1589998730000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/178","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/178","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/178.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/178.patch"},"body":"`nlp.load(\"xsum\")` now leads to the following error message:\r\n\r\n![Screenshot from 2020-05-20 20-05-28](https:\/\/user-images.githubusercontent.com\/23423619\/82481825-3587ea00-9ad6-11ea-9ca2-5794252c6ac7.png)\r\n\r\nI guess the manual download instructions for `xsum` can also be improved.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/178\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/177","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/177\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/177\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/177\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/177","id":621975368,"node_id":"MDExOlB1bGxSZXF1ZXN0NDIwOTI4MzE0","number":177,"title":"Xsum manual download 
instruction","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589997761000,"updated_at":1589998610000,"closed_at":1589998609000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/177","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/177","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/177.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/177.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/177\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/176","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/176\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/176\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/176\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/176","id":621934638,"node_id":"MDExOlB1bGxSZXF1ZXN0NDIwODkzNDky","number":176,"title":"[Tests] Refactor 
MockDownloadManager","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589994456000,"updated_at":1589998639000,"closed_at":1589998638000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/176","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/176","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/176.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/176.patch"},"body":"Clean mock download manager class. \r\nThe print function was not of much help I think. \r\nWe should think about adding a command that creates the dummy folder structure for the user.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/176\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/175","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/175\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/175\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/175\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/175","id":621929428,"node_id":"MDU6SXNzdWU2MjE5Mjk0Mjg=","number":175,"title":"[Manual data dir] Error message: nlp.load_dataset('xsum') -> 
TypeError","user":{"login":"sshleifer","id":6045025,"node_id":"MDQ6VXNlcjYwNDUwMjU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6045025?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sshleifer","html_url":"https:\/\/github.com\/sshleifer","followers_url":"https:\/\/api.github.com\/users\/sshleifer\/followers","following_url":"https:\/\/api.github.com\/users\/sshleifer\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sshleifer\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sshleifer\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sshleifer\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sshleifer\/orgs","repos_url":"https:\/\/api.github.com\/users\/sshleifer\/repos","events_url":"https:\/\/api.github.com\/users\/sshleifer\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sshleifer\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589994032000,"updated_at":1589998730000,"closed_at":1589998730000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"v 0.1.0 from pip\r\n\r\n```python\r\nimport nlp\r\nxsum = nlp.load_dataset('xsum')\r\n```\r\n\r\nIssue is `dl_manager.manual_dir`is `None`\r\n\r\n```python\r\n\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-42-8a32f066f3bd> in <module>\r\n----> 1 xsum = nlp.load_dataset('xsum')\r\n\r\n~\/miniconda3\/envs\/nb\/lib\/python3.7\/site-packages\/nlp\/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 515 download_mode=download_mode,\r\n 516 ignore_verifications=ignore_verifications,\r\n--> 517 save_infos=save_infos,\r\n 518 )\r\n 519 \r\n\r\n~\/miniconda3\/envs\/nb\/lib\/python3.7\/site-packages\/nlp\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs)\r\n 361 verify_infos = not save_infos and not ignore_verifications\r\n 362 self._download_and_prepare(\r\n--> 363 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 364 )\r\n 365 # Sync info\r\n\r\n~\/miniconda3\/envs\/nb\/lib\/python3.7\/site-packages\/nlp\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 397 split_dict = SplitDict(dataset_name=self.name)\r\n 398 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 399 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 400 # Checksums verification\r\n 401 if verify_infos:\r\n\r\n~\/miniconda3\/envs\/nb\/lib\/python3.7\/site-packages\/nlp\/datasets\/xsum\/5c5fca23aaaa469b7a1c6f095cf12f90d7ab99bcc0d86f689a74fd62634a1472\/xsum.py in _split_generators(self, dl_manager)\r\n 102 with open(dl_path, \"r\") as json_file:\r\n 103 split_ids = json.load(json_file)\r\n--> 104 downloaded_path = os.path.join(dl_manager.manual_dir, \"xsum-extracts-from-downloads\")\r\n 105 return [\r\n 106 nlp.SplitGenerator(\r\n\r\n~\/miniconda3\/envs\/nb\/lib\/python3.7\/posixpath.py in join(a, *p)\r\n 78 will be discarded. 
An empty last part will result in a path that\r\n 79 ends with a separator.\"\"\"\r\n---> 80 a = os.fspath(a)\r\n 81 sep = _get_sep(a)\r\n 82 path = a\r\n\r\nTypeError: expected str, bytes or os.PathLike object, not NoneType\r\n\r\n\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/175\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/174","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/174\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/174\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/174\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/174","id":621928403,"node_id":"MDU6SXNzdWU2MjE5Mjg0MDM=","number":174,"title":"nlp.load_dataset('xsum') -> TypeError","user":{"login":"sshleifer","id":6045025,"node_id":"MDQ6VXNlcjYwNDUwMjU=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6045025?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sshleifer","html_url":"https:\/\/github.com\/sshleifer","followers_url":"https:\/\/api.github.com\/users\/sshleifer\/followers","following_url":"https:\/\/api.github.com\/users\/sshleifer\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sshleifer\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sshleifer\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sshleifer\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sshleifer\/orgs","repos_url":"https:\/\/api.github.com\/users\/sshleifer\/repos","events_url":"https:\/\/api.github.com\/users\/sshleifer\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sshleifer\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589993949000,"updated_at":1589996626000,"closed_at":1589996626000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/174\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/173","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/173\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/173\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/173\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/173","id":621764932,"node_id":"MDExOlB1bGxSZXF1ZXN0NDIwNzUyNzQy","number":173,"title":"Rm extracted test 
dirs","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks for cleaning up the extracted dummy data folders! Instead of changing the file_utils we could also just put these folders under `.gitignore` (or maybe already done?).","Awesome! I guess you might have to add the changes for the MockDLManager now in a different file though because of my last PR - sorry!"],"created_at":1589981448000,"updated_at":1590165276000,"closed_at":1590165275000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/173","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/173","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/173.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/173.patch"},"body":"All the dummy data used for tests were duplicated. For each dataset, we had one zip file but also its extracted directory. I removed all these directories\r\n\r\nFurthermore instead of extracting next to the dummy_data.zip file, we extract in the temp `cached_dir` used for tests, so that all the extracted directories get removed after testing.\r\n\r\nFinally there was a bug in the `mock_download_manager` that would let it create directories with invalid names, as in #172. I fixed that by encoding url arguments. I had to rename the dummy data for `scientific_papers` and `cnn_dailymail` (the aws tests don't pass for those 2 in this PR, but they will once aws will be synced, as the local ones do)\r\n\r\nLet me know if it sounds good to you @patrickvonplaten . 
I'm still not entirely familiar with the mock downloader","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/173\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/172","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/172\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/172\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/172\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/172","id":621377386,"node_id":"MDU6SXNzdWU2MjEzNzczODY=","number":172,"title":"Clone not working on Windows environment","user":{"login":"codehunk628","id":51091425,"node_id":"MDQ6VXNlcjUxMDkxNDI1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/51091425?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/codehunk628","html_url":"https:\/\/github.com\/codehunk628","followers_url":"https:\/\/api.github.com\/users\/codehunk628\/followers","following_url":"https:\/\/api.github.com\/users\/codehunk628\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/codehunk628\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/codehunk628\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/codehunk628\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/codehunk628\/orgs","repos_url":"https:\/\/api.github.com\/users\/codehunk628\/repos","events_url":"https:\/\/api.github.com\/users\/codehunk628\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/codehunk628\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users
\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Should be fixed on master now :)","Thanks @lhoestq \ud83d\udc4d Now I can uninstall WSL and get back to work with windows.\ud83d\ude42"],"created_at":1589935514000,"updated_at":1590238153000,"closed_at":1590233272000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Cloning in a windows environment is not working because of use of special character '?' in folder name ..\r\nPlease consider changing the folder name ....\r\nReference to folder -\r\nnlp\/datasets\/cnn_dailymail\/dummy\/3.0.0\/3.0.0\/dummy_data-zip-extracted\/dummy_data\/uc?export=download&id=0BwmD_VLjROrfM1BxdkxVaTY2bWs\/dailymail\/stories\/\r\n\r\nerror log:\r\nfatal: cannot create directory at 'datasets\/cnn_dailymail\/dummy\/3.0.0\/3.0.0\/dummy_data-zip-extracted\/dummy_data\/uc?export=download&id=0BwmD_VLjROrfM1BxdkxVaTY2bWs': Invalid argument\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/172\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/171","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/171\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/171\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/171\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/171","id":621199128,"node_id":"MDExOlB1bGxSZXF1ZXN0NDIwMjk0ODM0","number":171,"title":"fix squad metric format","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["One thing for SQuAD is that I wanted to be able to use the SQuAD dataset directly in the metrics and I'm not sure it will be possible with this format.\r\n\r\n(maybe it's not really possible in general though)","This is kinda related to one thing I had in mind which is that we may want to be able to dump our model predictions in a `Dataset` as well so that we don't keep them in memory (and we can export them in a nice format later as well when we will have a serialization formats).\r\n\r\nMaybe this is overkill though, I haven't fully wraped my 
head around this.","I'm also perfectly fine with merging this PR in the current state and working on a larger scope later.","This is the format needed to run the official script directly. The format of the squad dataset is different from the input of the metric. \r\n\r\n> One thing for SQuAD is that I wanted to be able to use the SQuAD dataset directly in the metrics and I'm not sure it will be possible with this format.\r\n> \r\n> (maybe it's not really possible in general though)\r\n\r\nOk I see. I'll try to use the same format","Ok with this update I changed the format to fit the squad dataset format.\r\nNow you can do:\r\n```python\r\nsquad_dset = nlp.load_dataset(\"squad\")\r\nsquad_metric = nlp.load_metric(\"\/Users\/quentinlhoest\/Desktop\/hf\/nlp-bis\/metrics\/squad\")\r\npredictions = [\r\n {\"id\": v[\"id\"], \"prediction_text\": v[\"answers\"][\"text\"][0]} # take first possible answer\r\n for v in squad_dset[\"validation\"]\r\n]\r\nsquad_metric.compute(predictions, squad_dset[\"validation\"])\r\n```"],"created_at":1589913456000,"updated_at":1590154610000,"closed_at":1590154608000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/171","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/171","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/171.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/171.patch"},"body":"The format of the squad metric was wrong.\r\nThis should fix #143 \r\n\r\nI tested with\r\n```python3\r\npredictions = [\r\n {'id': '56be4db0acb8001400a502ec', 'prediction_text': 'Denver Broncos'}\r\n]\r\nreferences = [\r\n {'answers': [{'text': 'Denver Broncos'}], 'id': '56be4db0acb8001400a502ec'}\r\n]\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/171\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/170","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/170\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/170\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/170\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/170","id":621119747,"node_id":"MDExOlB1bGxSZXF1ZXN0NDIwMjMwMDIx","number":170,"title":"Rename anli 
dataset","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589905617000,"updated_at":1589977389000,"closed_at":1589977388000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/170","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/170","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/170.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/170.patch"},"body":"What we have now as the `anli` dataset is actually the \u03b1NLI dataset from the ART challenge dataset. This name is confusing because `anli` is also the name of adversarial NLI (see [https:\/\/github.com\/facebookresearch\/anli](https:\/\/github.com\/facebookresearch\/anli)).\r\n\r\nI renamed the current `anli` dataset by `art`.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/170\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/169","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/169\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/169\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/169\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/169","id":621099682,"node_id":"MDExOlB1bGxSZXF1ZXN0NDIwMjE1NDkw","number":169,"title":"Adding Qanta (Quizbowl) 
Dataset","user":{"login":"EntilZha","id":1382460,"node_id":"MDQ6VXNlcjEzODI0NjA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1382460?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/EntilZha","html_url":"https:\/\/github.com\/EntilZha","followers_url":"https:\/\/api.github.com\/users\/EntilZha\/followers","following_url":"https:\/\/api.github.com\/users\/EntilZha\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/EntilZha\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/EntilZha\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/EntilZha\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/EntilZha\/orgs","repos_url":"https:\/\/api.github.com\/users\/EntilZha\/repos","events_url":"https:\/\/api.github.com\/users\/EntilZha\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/EntilZha\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"assignees":[{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @EntilZha - sorry for waiting so long until taking action here. We created a new command and a new recipe of how to add dummy_data. Can you maybe rebase to `master` as explained in 7. 
of https:\/\/github.com\/huggingface\/nlp\/blob\/master\/CONTRIBUTING.md#how-to-contribute-to-nlp and check that your dummy data is correct following the instructions here: https:\/\/github.com\/huggingface\/nlp\/blob\/master\/CONTRIBUTING.md#how-to-add-a-dataset ? \r\n\r\nIf the tests described in 5. of https:\/\/github.com\/huggingface\/nlp\/blob\/master\/CONTRIBUTING.md#how-to-add-a-dataset pass we can merge the PR :-) ","I updated to the most recent master and followed the steps, but still having the similar error where it can't find the correct file since the path to the directory is given, rather than the individual files within them. This still something wrong about how I'm inputting the data or how the tests are reading it?","It's the dummy_data structure. You actually have to call the dummy data file name `dummy_data` (not .json anything). So there should not be a `dummy_data` folder but for each config only a `dummy_data` which contains your json dummy data. Can you maybe try once more - if it doesn't work I do it for you :-). ","Would that work if there are multiple files? In my case, I'm including something similar to squad 1.0\/2.0 where we have the main dataset + an additional challenge set in different files. Would I have the zip decompress to two files in that case?","This dataset was actually a special case. It helped us improve the dummy data instructions :-), see #195 .Close this PR and merge #194."],"created_at":1589904181000,"updated_at":1590497551000,"closed_at":1590497551000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/169","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/169","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/169.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/169.patch"},"body":"This PR adds the qanta question answering datasets from [Quizbowl: The Case for Incremental Question Answering](https:\/\/arxiv.org\/abs\/1904.04792) and [Trick Me If You Can: Human-in-the-loop Generation of Adversarial Question Answering Examples](https:\/\/www.aclweb.org\/anthology\/Q19-1029\/) (adversarial fold)\r\n\r\nThis partially continues a discussion around fixing dummy data from https:\/\/github.com\/huggingface\/nlp\/issues\/161\r\n\r\nI ran the following code to double check that it works and did some sanity checks on the output. 
The majority of the code itself is from our `allennlp` version of the dataset reader.\r\n\r\n```python\r\nimport nlp\r\n# Default is full question\r\ndata = nlp.load_dataset('.\/datasets\/qanta') \r\n# Four configs\r\n# Primarily useful for training\r\ndata = nlp.load_dataset('.\/datasets\/qanta', 'mode=sentences,char_skip=25') \r\n# Primarily used in evaluation\r\ndata = nlp.load_dataset('.\/datasets\/qanta', 'mode=first,char_skip=25') \r\ndata = nlp.load_dataset('.\/datasets\/qanta', 'mode=full,char_skip=25') \r\n# Primarily useful in evaluation and \"live\" play\r\ndata = nlp.load_dataset('.\/datasets\/qanta', 'mode=runs,char_skip=25') \r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/169\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/168","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/168\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/168\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/168\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/168","id":620959819,"node_id":"MDU6SXNzdWU2MjA5NTk4MTk=","number":168,"title":"Loading 'wikitext' dataset fails","user":{"login":"itay1itzhak","id":25987633,"node_id":"MDQ6VXNlcjI1OTg3NjMz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/25987633?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/itay1itzhak","html_url":"https:\/\/github.com\/itay1itzhak","followers_url":"https:\/\/api.github.com\/users\/itay1itzhak\/followers","following_url":"https:\/\/api.github.com\/users\/itay1itzhak\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/itay1itzhak\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/itay1itzhak\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/itay1itzhak\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/itay1itzhak\/orgs","repos_url":"https:\/\/api.github.com\/users\/itay1itzhak\/repos","events_url":"https:\/\/api.github.com\/users\/itay1itzhak\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/itay1itzhak\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi, make sure you have a recent version of pyarrow.\r\n\r\nAre you using it in Google Colab? In this case, this error is probably the same as #128","Thanks!\r\n\r\nYes I'm using Google Colab, it seems like a duplicate then.","Closing as it is a duplicate","Hi,\r\nThe squad bug seems to be fixed, but the loading of the 'wikitext' still suffers from this problem (on Colab with pyarrow=0.17.1).","When you install `nlp` for the first time on a Colab runtime, it updates the `pyarrow` library that was already on colab. 
This update shows this message on colab:\r\n```\r\nWARNING: The following packages were previously imported in this runtime:\r\n [pyarrow]\r\nYou must restart the runtime in order to use newly installed versions.\r\n```\r\nYou just have to restart the runtime and it should be fine.","That was it, thanks!"],"created_at":1589893469000,"updated_at":1590529612000,"closed_at":1590529612000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Loading the 'wikitext' dataset fails with Attribute error:\r\n\r\nCode to reproduce (From example notebook):\r\n\r\nimport nlp\r\nwikitext_dataset = nlp.load_dataset('wikitext')\r\n\r\n\r\nError:\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n<ipython-input-17-d5d9df94b13c> in <module>()\r\n 11 \r\n 12 # Load a dataset and print the first examples in the training set\r\n---> 13 wikitext_dataset = nlp.load_dataset('wikitext')\r\n 14 print(wikitext_dataset['train'][0])\r\n\r\n6 frames\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 518 download_mode=download_mode,\r\n 519 ignore_verifications=ignore_verifications,\r\n--> 520 save_infos=save_infos,\r\n 521 )\r\n 522 \r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs)\r\n 363 verify_infos = not save_infos and not ignore_verifications\r\n 364 self._download_and_prepare(\r\n--> 365 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 366 )\r\n 367 # Sync info\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 416 try:\r\n 417 # Prepare split will record examples associated to the split\r\n--> 418 self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 419 except OSError:\r\n 420 raise OSError(\"Cannot find data file. 
\" + (self.MANUAL_DOWNLOAD_INSTRUCTIONS or \"\"))\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/builder.py in _prepare_split(self, split_generator)\r\n 594 example = self.info.features.encode_example(record)\r\n 595 writer.write(example)\r\n--> 596 num_examples, num_bytes = writer.finalize()\r\n 597 \r\n 598 assert num_examples == num_examples, f\"Expected to write {split_info.num_examples} but wrote {num_examples}\"\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/arrow_writer.py in finalize(self, close_stream)\r\n 173 def finalize(self, close_stream=True):\r\n 174 if self.pa_writer is not None:\r\n--> 175 self.write_on_file()\r\n 176 self.pa_writer.close()\r\n 177 if close_stream:\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/arrow_writer.py in write_on_file(self)\r\n 124 else:\r\n 125 # All good\r\n--> 126 self._write_array_on_file(pa_array)\r\n 127 self.current_rows = []\r\n 128 \r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/arrow_writer.py in _write_array_on_file(self, pa_array)\r\n 93 def _write_array_on_file(self, pa_array):\r\n 94 \"\"\"Write a PyArrow Array\"\"\"\r\n---> 95 pa_batch = pa.RecordBatch.from_struct_array(pa_array)\r\n 96 self._num_bytes += pa_array.nbytes\r\n 97 self.pa_writer.write_batch(pa_batch)\r\n\r\nAttributeError: type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/168\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/167","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/167\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/167\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/167\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/167","id":620908786,"node_id":"MDExOlB1bGxSZXF1ZXN0NDIwMDY0NDMw","number":167,"title":"[Tests] refactor tests","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Nice 
!"],"created_at":1589888612000,"updated_at":1589905032000,"closed_at":1589905030000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/167","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/167","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/167.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/167.patch"},"body":"This PR separates AWS and Local tests to remove these ugly statements in the script:\r\n```python\r\n if \"\/\" not in dataset_name:\r\n logging.info(\"Skip {} because it is a canonical dataset\")\r\n return\r\n```\r\n\r\nTo run a `aws` test, one should now run the following command: \r\n\r\n```python \r\npytest -s tests\/test_dataset_common.py::AWSDatasetTest::test_builder_class_wmt14\r\n```\r\n\r\nThe same `local` test, can be run with:\r\n```python \r\npytest -s tests\/test_dataset_common.py::LocalDatasetTest::test_builder_class_wmt14\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/167\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/166","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/166\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/166\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/166\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/166","id":620850218,"node_id":"MDU6SXNzdWU2MjA4NTAyMTg=","number":166,"title":"Add a method to shuffle a dataset","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[{"id":2067400324,"node_id":"MDU6TGFiZWwyMDY3NDAwMzI0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/generic%20discussion","name":"generic discussion","color":"c5def5","default":false,"description":"Generic discussion on the library"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["+1 for the naming convention\r\n\r\nAbout the `shuffle` method, from my understanding it should be done in `Dataloader` (better separation between dataset processing - usage)","+1 for shuffle in `Dataloader`. \r\nSome `Dataloader` just store idxs of dataset and just shuffle those idxs, which might(?) 
be faster than do shuffle in dataset, especially when doing shuffle every epoch.\r\n\r\nAlso +1 for the naming convention.","As you might already know the issue of dataset shuffling came up in the nlp code [walkthrough](https:\/\/youtu.be\/G3pOvrKkFuk?t=3204) by Yannic Kilcher\r\n","We added the `.shuffle` method :)\r\n\r\nClosing this one."],"created_at":1589882926000,"updated_at":1592924853000,"closed_at":1592924852000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"Could maybe be a `dataset.shuffle(generator=None, seed=None)` signature method.\r\n\r\nAlso, we could maybe have a clear indication of which method modify in-place and which methods return\/cache a modified dataset. I kinda like torch conversion of having an underscore suffix for all the methods which modify a dataset in-place. What do you think?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/166\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/165","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/165\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/165\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/165\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/165","id":620758221,"node_id":"MDU6SXNzdWU2MjA3NTgyMjE=","number":165,"title":"ANLI","user":{"login":"douwekiela","id":6024930,"node_id":"MDQ6VXNlcjYwMjQ5MzA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6024930?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/douwekiela","html_url":"https:\/\/github.com\/douwekiela","followers_url":"https:\/\/api.github.com\/users\/douwekiela\/followers","following_url":"https:\/\/api.github.com\/users\/douwekiela\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/douwekiela\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/douwekiela\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/douwekiela\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/douwekiela\/orgs","repos_url":"https:\/\/api.github.com\/users\/douwekiela\/repos","events_url":"https:\/\/api.github.com\/users\/douwekiela\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/douwekiela\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589874657000,"updated_at":1589977387000,"closed_at":1589977387000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Can I recommend the following:\r\n\r\nFor ANLI, use https:\/\/github.com\/facebookresearch\/anli. As that paper says, \"Our dataset is not\r\nto be confused with abductive NLI (Bhagavatula et al., 2019), which calls itself \u03b1NLI, or ART.\". 
\r\n\r\nIndeed, the paper cited under what is currently called anli says in the abstract \"We introduce a challenge dataset, ART\".\r\n\r\nThe current naming will confuse people :)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/165\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/164","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/164\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/164\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/164\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/164","id":620540250,"node_id":"MDU6SXNzdWU2MjA1NDAyNTA=","number":164,"title":"Add Spanish POR and NER Datasets","user":{"login":"mrm8488","id":3653789,"node_id":"MDQ6VXNlcjM2NTM3ODk=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/3653789?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mrm8488","html_url":"https:\/\/github.com\/mrm8488","followers_url":"https:\/\/api.github.com\/users\/mrm8488\/followers","following_url":"https:\/\/api.github.com\/users\/mrm8488\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mrm8488\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mrm8488\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mrm8488\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mrm8488\/orgs","repos_url":"https:\/\/api.github.com\/users\/mrm8488\/repos","events_url":"https:\/\/api.github.com\/users\/mrm8488\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mrm8488\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hello @mrm8488, are these datasets official datasets published in an NLP\/CL\/ML venue?","What about this one: https:\/\/github.com\/ccasimiro88\/TranslateAlignRetrieve?"],"created_at":1589840301000,"updated_at":1590424125000,"closed_at":1590424125000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hi guys,\r\nIn order to cover multilingual support a little step could be adding standard Datasets used for Spanish NER and POS tasks.\r\nI can provide it in raw and preprocessed formats.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/164\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/163","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/163\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/163\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/163\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/163","id":620534307,"node_id":"MDU6SXNzdWU2MjA1MzQzMDc=","number":163,"title":"[Feature request] Add cos-e 
v1.0","user":{"login":"sarahwie","id":8027676,"node_id":"MDQ6VXNlcjgwMjc2NzY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8027676?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/sarahwie","html_url":"https:\/\/github.com\/sarahwie","followers_url":"https:\/\/api.github.com\/users\/sarahwie\/followers","following_url":"https:\/\/api.github.com\/users\/sarahwie\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/sarahwie\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/sarahwie\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/sarahwie\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/sarahwie\/orgs","repos_url":"https:\/\/api.github.com\/users\/sarahwie\/repos","events_url":"https:\/\/api.github.com\/users\/sarahwie\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/sarahwie\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Sounds good, @mariamabarham do you want to give a look?\r\nI think we should have two configurations so we can allow either version of the dataset to be loaded with the `1.0` version being the default maybe.\r\n\r\nCc some authors of the great cos-e: @nazneenrajani @bmccann","cos_e v1.0 is related to CQA v1.0 but only CQA v1.11 dataset is available on their website. Indeed their is lots of ids in cos_e v1, which are not in CQA v1.11 or the other way around.\r\n@sarahwie, @thomwolf, @nazneenrajani, @bmccann do you know where I can find CQA v1.0\r\n","@mariamabarham I'm also not sure where to find CQA 1.0. Perhaps it's not possible to include this version of the dataset. I'll close the issue if that's the case.","I do have a copy of the dataset. I can upload it to our repo.","Great @nazneenrajani. let me know once done.\r\nThanks","@mariamabarham @sarahwie I added them to the cos-e repo https:\/\/github.com\/salesforce\/cos-e\/tree\/master\/data\/v1.0","You can now do\r\n```python\r\nfrom nlp import load_dataset\r\ncos_e = load_dataset(\"cos_e\", \"v1.0\")\r\n```\r\nThanks @mariamabarham !","Thanks!","@mariamabarham Just wanted to note that default behavior `cos_e = load_dataset(\"cos_e\")` now loads `v1.0`. Not sure if this is intentional (but the flag specification does work as intended). ","> @mariamabarham Just wanted to note that default behavior `cos_e = load_dataset(\"cos_e\")` now loads `v1.0`. Not sure if this is intentional (but the flag specification does work as intended).\r\n\r\nIn the new version of `nlp`, if you try `cos_e = load_dataset(\"cos_e\")` it throws this error:\r\n```\r\nValueError: Config name is missing.\r\nPlease pick one among the available configs: ['v1.0', 'v1.11']\r\nExample of usage:\r\n\t`load_dataset('cos_e', 'v1.0')`\r\n```\r\nFor datasets with at least two configurations, we now force the user to pick one (no default)"],"created_at":1589839526000,"updated_at":1592349325000,"closed_at":1592333526000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I noticed the second release of cos-e (v1.11) is included in this repo. 
I wanted to request inclusion of v1.0, since this is the version on which results are reported on in [the paper](https:\/\/www.aclweb.org\/anthology\/P19-1487\/), and v1.11 has noted [annotation](https:\/\/github.com\/salesforce\/cos-e\/issues\/2) [issues](https:\/\/arxiv.org\/pdf\/2004.14546.pdf).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/163\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/162","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/162\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/162\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/162\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/162","id":620513554,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE5NzQ4Mzky","number":162,"title":"fix prev files hash in map","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Awesome! 
","Hi, yes, this seems to fix #160 -- I cloned the branch locally and verified","Perfect then :)"],"created_at":1589836851000,"updated_at":1589837781000,"closed_at":1589837780000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/162","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/162","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/162.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/162.patch"},"body":"Fix the `.map` issue in #160.\r\nThis makes sure it takes the previous files when computing the hash.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/162\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/161","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/161\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/161\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/161\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/161","id":620487535,"node_id":"MDU6SXNzdWU2MjA0ODc1MzU=","number":161,"title":"Discussion on version identifier & MockDataLoaderManager for test data","user":{"login":"EntilZha","id":1382460,"node_id":"MDQ6VXNlcjEzODI0NjA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1382460?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/EntilZha","html_url":"https:\/\/github.com\/EntilZha","followers_url":"https:\/\/api.github.com\/users\/EntilZha\/followers","following_url":"https:\/\/api.github.com\/users\/EntilZha\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/EntilZha\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/EntilZha\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/EntilZha\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/EntilZha\/orgs","repos_url":"https:\/\/api.github.com\/users\/EntilZha\/repos","events_url":"https:\/\/api.github.com\/users\/EntilZha\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/EntilZha\/received_events","type":"User","site_admin":false},"labels":[{"id":2067400324,"node_id":"MDU6TGFiZWwyMDY3NDAwMzI0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/generic%20discussion","name":"generic discussion","color":"c5def5","default":false,"description":"Generic discussion on the 
library"}],"state":"open","locked":false,"assignee":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"assignees":[{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["usually you can replace `download` in your dataset script with `download_and_prepare()` - could you share the code for your dataset here? :-) ","I have an initial version here: https:\/\/github.com\/EntilZha\/nlp\/tree\/master\/datasets\/qanta Thats pretty close to what I'll do as a PR, but still want to do some more sanity checks\/tests (just got tests passing).\r\n\r\nI figured out how to get all tests passing by adding a download command and some finagling with the data zip https:\/\/github.com\/EntilZha\/nlp\/blob\/master\/tests\/utils.py#L127\r\n\r\n","I'm quite positive that you can just replace the `dl_manager.download()` statements here: https:\/\/github.com\/EntilZha\/nlp\/blob\/4d46443b65f1f756921db8275594e6af008a1de7\/datasets\/qanta\/qanta.py#L194 with `dl_manager.download_and_extract()` even though you don't extract anything. I would prefer to avoid adding more functions to the MockDataLoadManager and keep it as simple as possible (It's already to complex now IMO). \r\n\r\nCould you check if you can replace the `download()` function? 
","I might be doing something wrong, but swapping those two gives this error:\r\n```\r\n> with open(path) as f:\r\nE IsADirectoryError: [Errno 21] Is a directory: 'datasets\/qanta\/dummy\/mode=first,char_skip=25\/2018.4.18\/dummy_data-zip-extracted\/dummy_data'\r\n\r\nsrc\/nlp\/datasets\/qanta\/3d965403133687b819905ead4b69af7bcee365865279b2f797c79f809b4490c3\/qanta.py:280: IsADirectoryError\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n```\r\n\r\nSo it seems like the directory name is getting passed. Is this not functioning as expected, or is there some caching happening maybe? I deleted the dummy files and re-ran the import script with no changes. I'm digging a bit in with a debugger, but no clear reason yet","From what I can tell here: https:\/\/github.com\/huggingface\/nlp\/blob\/master\/tests\/utils.py#L115\r\n\r\n1. `data_url` is the correct http link\r\n2. `path_to_dummy_data` is a directory, which is causing the issue\r\n\r\nThat path comes from `download_dummy_data`, which I think assumes that the data comes from the zip file, but isn't aware of individual files. So it seems like it data manager needs to be aware if the url its getting is for a file or a zip\/directory, and pass this information along. This might happen in `download_dummy_data`, but probably better to happen in `download_and_extract`? Maybe a simple check to see if `os.path.basename` returns the dummy data zip filename, if not then join paths with the basename of the url?","I think the dataset script works correctly. Just the dummy data structure seems to be wrong. I will soon add more commands that should make the create of the dummy data easier.\r\n\r\nI'd recommend that you won't concentrate too much on the dummy data.\r\nIf you manage to load the dataset correctly via:\r\n\r\n```python \r\n# use local path to qanta\r\nnlp.load_dataset(\".\/datasets\/qanta\")\r\n```\r\n\r\nthen feel free to open a PR and we will look into the dummy data problem together :-) \r\n\r\nAlso please make sure that the Version is in the format 1.0.0 (three numbers separated by two points) - not a date. ","The script loading seems to work fine so I'll work on getting a PR open after a few sanity checks on the data.\r\n\r\nOn version, we currently have it versioned with YYYY.MM.DD scheme so it would be nice to not change that, but will it cause issues?","> The script loading seems to work fine so I'll work on getting a PR open after a few sanity checks on the data.\r\n> \r\n> On version, we currently have it versioned with YYYY.MM.DD scheme so it would be nice to not change that, but will it cause issues?\r\n\r\nIt would cause issues for sure for the tests....not sure if it would also cause issues otherwise.\r\n\r\nI would prefer to keep the same version style as we have for other models. You could for example simply add version 1.0.0 and add a comment with the date you currently use for the versioning.\r\n\r\n What is your opinion regarding the version here @lhoestq @mariamabarham @thomwolf ? ","Maybe use the YYYY.MM.DD as the config name ? That's what we are doing for wikipedia","> Maybe use the YYYY.MM.DD as the config name ? 
That's what we are doing for wikipedia\r\n\r\nI'm not sure if this will work because the name should be unique and it seems that he has multiple config name in his data with the same version.\r\nAs @patrickvonplaten suggested, I think you can add a comment about the version in the data description.","Actually maybe our versioning format (inherited from tfds) is too strong for what we use it for?\r\nWe could allow any string maybe?\r\n\r\nI see it more and more like an identifier for the user that we will back with a serious hashing\/versioning system.- so we could let the user quite free on it.","I'm good with either putting it in description, adding it to the config, or loosening version formatting. I mostly don't have a full conceptual grasp of what each identifier ends up meaning in the datasets code so hard to evaluate the best approach.\r\n\r\nFor background, the multiple formats is a consequence of:\r\n\r\n1. Each example is one multi-sentence trivia question\r\n2. For training, its better to treat each sentence as an example\r\n3. For evaluation, should test on: (1) first sentence, (2) full question, and (3) partial questions (does the model get the question right having seen the first half)\r\n\r\nWe use the date format for version since: (1) we expect some degree of updates since new questions come in every year and (2) the timestamp itself matches the Wikipedia dump that it is dependent on (so similar to the Wikipedia dataset).\r\n\r\nperhaps this is better discussed in https:\/\/github.com\/huggingface\/nlp\/pull\/169 or update title?"],"created_at":1589833890000,"updated_at":1590343803000,"closed_at":null,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"Hi, I'm working on adding a dataset and ran into an error due to `download` not being defined on `MockDataLoaderManager`, but being defined in `nlp\/utils\/download_manager.py`. The readme step running this: `RUN_SLOW=1 pytest tests\/test_dataset_common.py::DatasetTest::test_load_real_dataset_localmydatasetname` triggers the error. 
If I can get something to work, I can include it in my data PR once I'm done.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/161\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/160","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/160\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/160\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/160\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/160","id":620448236,"node_id":"MDU6SXNzdWU2MjA0NDgyMzY=","number":160,"title":"caching in map causes same result to be returned for train, validation and test","user":{"login":"dpressel","id":247881,"node_id":"MDQ6VXNlcjI0Nzg4MQ==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/247881?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/dpressel","html_url":"https:\/\/github.com\/dpressel","followers_url":"https:\/\/api.github.com\/users\/dpressel\/followers","following_url":"https:\/\/api.github.com\/users\/dpressel\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/dpressel\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/dpressel\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/dpressel\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/dpressel\/orgs","repos_url":"https:\/\/api.github.com\/users\/dpressel\/repos","events_url":"https:\/\/api.github.com\/users\/dpressel\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/dpressel\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the 
library"}],"state":"closed","locked":false,"assignee":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"assignees":[{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Hi @dpressel, \r\n\r\nthanks for posting your issue! Can you maybe add a complete code snippet that we can copy paste to reproduce the error? For example, I'm not sure where the variable `train_set` comes from in your code and it seems like you are loading multiple datasets at once? 
","Hi, the full example was listed in the PR above, but here is the exact link:\r\n\r\nhttps:\/\/github.com\/dpressel\/mead-baseline\/blob\/3c1aa3ca062cb23f303ca98ac40b6652b37ee971\/api-examples\/layers-classify-hf-datasets.py\r\n\r\nThe problem is coming from\r\n```\r\n if cache_file_name is None:\r\n # we create a unique hash from the function, current dataset file and the mapping args\r\n cache_kwargs = {\r\n \"with_indices\": with_indices,\r\n \"batched\": batched,\r\n \"batch_size\": batch_size,\r\n \"remove_columns\": remove_columns,\r\n \"keep_in_memory\": keep_in_memory,\r\n \"load_from_cache_file\": load_from_cache_file,\r\n \"cache_file_name\": cache_file_name,\r\n \"writer_batch_size\": writer_batch_size,\r\n \"arrow_schema\": arrow_schema,\r\n \"disable_nullable\": disable_nullable,\r\n }\r\n cache_file_name = self._get_cache_file_path(function, cache_kwargs)\r\n```\r\nThe cached value is always the same, but I was able to change that by just renaming the function each time which seems to fix the issue.","Ok, I think @lhoestq has already found a solution :-) Maybe you can chime in @lhoestq ","This fixed my issue (I think)\r\n\r\nhttps:\/\/github.com\/dpressel\/mead-baseline\/commit\/48aa8ecde4b307bd3e7dde5fe71e43a1d4769ee1","> Ok, I think @lhoestq has already found a solution :-) Maybe you can chime in @lhoestq\r\n\r\nOh, awesome! I see the PR, Ill check it out","The PR should prevent the cache from losing track of the of the dataset type (based on the location of its data). Not sure about your second problem though (cache off).","Yes, with caching on, it seems to work without the function renaming hack, I mentioned this also in the PR. Thanks!"],"created_at":1589829723000,"updated_at":1589837780000,"closed_at":1589837780000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"hello,\r\n\r\nI am working on a program that uses the `nlp` library with the `SST2` dataset.\r\n\r\nThe rough outline of the program is:\r\n\r\n```\r\nimport nlp as nlp_datasets\r\n...\r\nparser.add_argument('--dataset', help='HuggingFace Datasets id', default=['glue', 'sst2'], nargs='+')\r\n...\r\ndataset = nlp_datasets.load_dataset(*args.dataset)\r\n...\r\n# Create feature vocabs\r\nvocabs = create_vocabs(dataset.values(), vectorizers)\r\n...\r\n# Create a function to vectorize based on vectorizers and vocabs:\r\n\r\nprint('TS', train_set.num_rows)\r\nprint('VS', valid_set.num_rows)\r\nprint('ES', test_set.num_rows)\r\n\r\n# factory method to create a `convert_to_features` function based on vocabs\r\nconvert_to_features = create_featurizer(vectorizers, vocabs)\r\ntrain_set = train_set.map(convert_to_features, batched=True)\r\ntrain_set.set_format(type='torch', columns=list(vectorizers.keys()) + ['y', 'lengths'])\r\ntrain_loader = torch.utils.data.DataLoader(train_set, batch_size=args.batchsz)\r\n\r\nvalid_set = valid_set.map(convert_to_features, batched=True)\r\nvalid_set.set_format(type='torch', columns=list(vectorizers.keys()) + ['y', 'lengths'])\r\nvalid_loader = torch.utils.data.DataLoader(valid_set, batch_size=args.batchsz)\r\n\r\ntest_set = test_set.map(convert_to_features, batched=True)\r\ntest_set.set_format(type='torch', columns=list(vectorizers.keys()) + ['y', 'lengths'])\r\ntest_loader = torch.utils.data.DataLoader(test_set, batch_size=args.batchsz)\r\n\r\nprint('TS', train_set.num_rows)\r\nprint('VS', valid_set.num_rows)\r\nprint('ES', test_set.num_rows)\r\n\r\n```\r\nIm not sure if Im using it incorrectly, but the results are not what I expect. 
Namely, the `.map()` seems to grab the datset from the cache and then loses track of what the specific dataset is, instead using my training data for all datasets:\r\n\r\n```\r\nTS 67349\r\nVS 872\r\nES 1821\r\nTS 67349\r\nVS 67349\r\nES 67349\r\n```\r\n\r\nThe behavior changes if I turn off the caching but then the results fail:\r\n\r\n```\r\ntrain_set = train_set.map(convert_to_features, batched=True, load_from_cache_file=False)\r\n...\r\nvalid_set = valid_set.map(convert_to_features, batched=True, load_from_cache_file=False)\r\n...\r\ntest_set = test_set.map(convert_to_features, batched=True, load_from_cache_file=False)\r\n```\r\n\r\nNow I get the right set of features back...\r\n```\r\nTS 67349\r\nVS 872\r\nES 1821\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 68\/68 [00:00<00:00, 92.78it\/s]\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1\/1 [00:00<00:00, 75.47it\/s]\r\n 0%| | 0\/2 [00:00<?, ?it\/s]TS 67349\r\nVS 872\r\nES 1821\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2\/2 [00:00<00:00, 77.19it\/s]\r\n```\r\nbut I think its losing track of the original training set:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"\/home\/dpressel\/dev\/work\/baseline\/api-examples\/layers-classify-hf-datasets.py\", line 148, in <module>\r\n for x in train_loader:\r\n File \"\/home\/dpressel\/anaconda3\/lib\/python3.7\/site-packages\/torch\/utils\/data\/dataloader.py\", line 345, in __next__\r\n data = self._next_data()\r\n File \"\/home\/dpressel\/anaconda3\/lib\/python3.7\/site-packages\/torch\/utils\/data\/dataloader.py\", line 385, in _next_data\r\n data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n File \"\/home\/dpressel\/anaconda3\/lib\/python3.7\/site-packages\/torch\/utils\/data\/_utils\/fetch.py\", line 44, in fetch\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"\/home\/dpressel\/anaconda3\/lib\/python3.7\/site-packages\/torch\/utils\/data\/_utils\/fetch.py\", line 44, in <listcomp>\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"\/home\/dpressel\/anaconda3\/lib\/python3.7\/site-packages\/nlp\/arrow_dataset.py\", line 338, in __getitem__\r\n output_all_columns=self._output_all_columns,\r\n File \"\/home\/dpressel\/anaconda3\/lib\/python3.7\/site-packages\/nlp\/arrow_dataset.py\", line 294, in _getitem\r\n outputs = self._unnest(self._data.slice(key, 1).to_pydict())\r\n File \"pyarrow\/table.pxi\", line 1211, in pyarrow.lib.Table.slice\r\n File \"pyarrow\/public-api.pxi\", line 390, in pyarrow.lib.pyarrow_wrap_table\r\n File \"pyarrow\/error.pxi\", line 85, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Column 3: In chunk 0: Invalid: Length spanned by list offsets (15859698) larger than values array (length 100000)\r\n\r\nProcess finished with exit code 1\r\n```\r\n\r\nThe full-example program (minus the print stmts) is here:\r\nhttps:\/\/github.com\/dpressel\/mead-baseline\/pull\/620\/files\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/160\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/159","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/159\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/159\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/159\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/159","id":620420700,"node_id":"MDU6SXNzdWU2MjA0MjA3MDA=","number":159,"title":"How can we add more datasets to nlp library?","user":{"login":"Tahsin-Mayeesha","id":17886829,"node_id":"MDQ6VXNlcjE3ODg2ODI5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17886829?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Tahsin-Mayeesha","html_url":"https:\/\/github.com\/Tahsin-Mayeesha","followers_url":"https:\/\/api.github.com\/users\/Tahsin-Mayeesha\/followers","following_url":"https:\/\/api.github.com\/users\/Tahsin-Mayeesha\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Tahsin-Mayeesha\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Tahsin-Mayeesha\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Tahsin-Mayeesha\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Tahsin-Mayeesha\/orgs","repos_url":"https:\/\/api.github.com\/users\/Tahsin-Mayeesha\/repos","events_url":"https:\/\/api.github.com\/users\/Tahsin-Mayeesha\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Tahsin-Mayeesha\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Found it. 
https:\/\/github.com\/huggingface\/nlp\/tree\/master\/datasets"],"created_at":1589826931000,"updated_at":1589827028000,"closed_at":1589827027000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/159\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/158","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/158\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/158\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/158\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/158","id":620396658,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE5NjUyNTQy","number":158,"title":"add Toronto Books Corpus","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589824485000,"updated_at":1591861755000,"closed_at":1589873696000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/158","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/158","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/158.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/158.patch"},"body":"This PR adds the Toronto Books Corpus.\r\n\r\nIt only considers TMX and plain text files (Moses) defined in the table **Statistics and TMX\/Moses Downloads** [here](http:\/\/opus.nlpl.eu\/Books.php )","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/158\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/157","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/157\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/157\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/157\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/157","id":620356542,"node_id":"MDU6SXNzdWU2MjAzNTY1NDI=","number":157,"title":"nlp.load_dataset() gives 
\"TypeError: list_() takes exactly one argument (2 given)\"","user":{"login":"saahiluppal","id":47444392,"node_id":"MDQ6VXNlcjQ3NDQ0Mzky","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/47444392?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/saahiluppal","html_url":"https:\/\/github.com\/saahiluppal","followers_url":"https:\/\/api.github.com\/users\/saahiluppal\/followers","following_url":"https:\/\/api.github.com\/users\/saahiluppal\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/saahiluppal\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/saahiluppal\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/saahiluppal\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/saahiluppal\/orgs","repos_url":"https:\/\/api.github.com\/users\/saahiluppal\/repos","events_url":"https:\/\/api.github.com\/users\/saahiluppal\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/saahiluppal\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"assignees":[{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["You can just run: \r\n`val = nlp.load_dataset('squad')` \r\n\r\nif you want to have just the validation script you can also do:\r\n\r\n`val = nlp.load_dataset('squad', split=\"validation\")`","If you want to load a local dataset, make sure you include 
a `.\/` before the folder name. ","This happens by just doing run all cells on colab ... I assumed the colab example is broken. ","Oh I see you might have a wrong version of pyarrow install on the colab -> could you try the following. Add these lines to the beginning of your notebook, restart the runtime and run it again:\r\n```\r\n!pip uninstall -y -qq pyarrow\r\n!pip uninstall -y -qq nlp\r\n!pip install -qq git+https:\/\/github.com\/huggingface\/nlp.git\r\n```","> Oh I see you might have a wrong version of pyarrow install on the colab -> could you try the following. Add these lines to the beginning of your notebook, restart the runtime and run it again:\r\n> \r\n> ```\r\n> !pip uninstall -y -qq pyarrow\r\n> !pip uninstall -y -qq nlp\r\n> !pip install -qq git+https:\/\/github.com\/huggingface\/nlp.git\r\n> ```\r\n\r\nTried, having the same error.","Can you post a link here of your colab? I'll make a copy of it and see what's wrong","This should be fixed in the current version of the notebook. You can try it again","Also see: https:\/\/github.com\/huggingface\/nlp\/issues\/222","I am getting this error when running this command\r\n```\r\nval = nlp.load_dataset('squad', split=\"validation\")\r\n```\r\n\r\nFileNotFoundError: [Errno 2] No such file or directory: '\/root\/.cache\/huggingface\/datasets\/squad\/plain_text\/1.0.0\/dataset_info.json'\r\n\r\nCan anybody help?","It seems like your download was corrupted :-\/ Can you run the following command: \r\n\r\n```\r\nrm -r \/root\/.cache\/huggingface\/datasets\r\n```\r\n\r\nto delete the cache completely and rerun the download? ","I tried the notebook again today and it worked without barfing. \ud83d\udc4c "],"created_at":1589820398000,"updated_at":1591344538000,"closed_at":1591344538000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I'm trying to load datasets from nlp but there seems to have error saying \r\n\"TypeError: list_() takes exactly one argument (2 given)\"\r\n\r\ngist can be found here\r\nhttps:\/\/gist.github.com\/saahiluppal\/c4b878f330b10b9ab9762bc0776c0a6a","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/157\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/156","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/156\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/156\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/156\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/156","id":620263687,"node_id":"MDU6SXNzdWU2MjAyNjM2ODc=","number":156,"title":"SyntaxError with WMT 
datasets","user":{"login":"tomhosking","id":9419158,"node_id":"MDQ6VXNlcjk0MTkxNTg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/9419158?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/tomhosking","html_url":"https:\/\/github.com\/tomhosking","followers_url":"https:\/\/api.github.com\/users\/tomhosking\/followers","following_url":"https:\/\/api.github.com\/users\/tomhosking\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/tomhosking\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/tomhosking\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/tomhosking\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/tomhosking\/orgs","repos_url":"https:\/\/api.github.com\/users\/tomhosking\/repos","events_url":"https:\/\/api.github.com\/users\/tomhosking\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/tomhosking\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"assignees":[{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Jeez - don't know what happened there :D Should be fixed now! 
\r\n\r\nThanks a lot for reporting this @tomhosking !","Hi @patrickvonplaten!\r\n\r\nI'm now getting the below error:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-28-3206959998b9> in <module>\r\n 1 import nlp\r\n 2 \r\n----> 3 dataset = nlp.load_dataset('wmt14')\r\n 4 print(dataset['train'][0])\r\n\r\n~\/.local\/lib\/python3.6\/site-packages\/nlp\/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 507 # Instantiate the dataset builder\r\n 508 builder_instance = builder_cls(\r\n--> 509 cache_dir=cache_dir, name=name, version=version, data_dir=data_dir, data_files=data_files, **config_kwargs,\r\n 510 )\r\n 511 \r\n\r\nTypeError: Can't instantiate abstract class Wmt with abstract methods _subsets\r\n```\r\n\r\n","To correct this error I think you need the master branch of `nlp`. Can you try to install `nlp` with. `WMT` was not included at the beta release of the library. \r\n\r\nCan you try:\r\n`pip install git+https:\/\/github.com\/huggingface\/nlp.git`\r\n\r\nand check again? ","That works, thanks :)\r\n\r\nThe WMT datasets are listed in by `list_datasets()` in the beta release on pypi - it would be good to only show datasets that are actually supported by that version?","Usually, the idea is that a dataset can be added without releasing a new version. The problem in the case of `WMT` was that some \"core\" code of the library had to be changed as well. \r\n\r\n@thomwolf @lhoestq @julien-c - How should we go about this. If we add a dataset that also requires \"core\" code changes, how do we handle the versioning? The moment a dataset is on AWS it will actually be listed with `list_datasets()` in all earlier versions...\r\n\r\nIs there a way to somehow insert the `pip version` to the HfApi() and get only the datasets that were available for this version (at the date of the release of the version) @julien-c ? ","We plan to have something like a `requirements.txt` per dataset to prevent user from loading dataset with old version of `nlp` or any other libraries. 
Right now the solution is just to keep `nlp` up to date when you want to load a dataset that leverages the latests features of `nlp`.\r\n\r\nFor datasets that are on AWS but that use features that are not released yet we should be able to filter those from the `list_dataset` as soon as we have the `requirements.txt` feature on (filter datasets that need a future version of `nlp`).\r\n\r\nShall we rename this issue to be more explicit about the problem ?\r\nSomething like `Specify the minimum version of the nlp library required for each dataset` ?","Closing this one.\r\nFeel free to re-open if you have other questions :)"],"created_at":1589812698000,"updated_at":1595522515000,"closed_at":1595522515000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"The following snippet produces a syntax error:\r\n\r\n```\r\nimport nlp\r\n\r\ndataset = nlp.load_dataset('wmt14')\r\nprint(dataset['train'][0])\r\n```\r\n\r\n```\r\nTraceback (most recent call last):\r\n\r\n File \"\/home\/tom\/.local\/lib\/python3.6\/site-packages\/IPython\/core\/interactiveshell.py\", line 3326, in run_code\r\n exec(code_obj, self.user_global_ns, self.user_ns)\r\n\r\n File \"<ipython-input-8-3206959998b9>\", line 3, in <module>\r\n dataset = nlp.load_dataset('wmt14')\r\n\r\n File \"\/home\/tom\/.local\/lib\/python3.6\/site-packages\/nlp\/load.py\", line 505, in load_dataset\r\n builder_cls = import_main_class(module_path, dataset=True)\r\n\r\n File \"\/home\/tom\/.local\/lib\/python3.6\/site-packages\/nlp\/load.py\", line 56, in import_main_class\r\n module = importlib.import_module(module_path)\r\n\r\n File \"\/usr\/lib\/python3.6\/importlib\/__init__.py\", line 126, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n\r\n File \"<frozen importlib._bootstrap>\", line 994, in _gcd_import\r\n\r\n File \"<frozen importlib._bootstrap>\", line 971, in _find_and_load\r\n\r\n File \"<frozen importlib._bootstrap>\", line 955, in _find_and_load_unlocked\r\n\r\n File \"<frozen importlib._bootstrap>\", line 665, in _load_unlocked\r\n\r\n File \"<frozen importlib._bootstrap_external>\", line 678, in exec_module\r\n\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n\r\n File \"\/home\/tom\/.local\/lib\/python3.6\/site-packages\/nlp\/datasets\/wmt14\/c258d646f4f5870b0245f783b7aa0af85c7117e06aacf1e0340bd81935094de2\/wmt14.py\", line 21, in <module>\r\n from .wmt_utils import Wmt, WmtConfig\r\n\r\n File \"\/home\/tom\/.local\/lib\/python3.6\/site-packages\/nlp\/datasets\/wmt14\/c258d646f4f5870b0245f783b7aa0af85c7117e06aacf1e0340bd81935094de2\/wmt_utils.py\", line 659\r\n <<<<<<< HEAD\r\n ^\r\nSyntaxError: invalid syntax\r\n```\r\n\r\nPython version:\r\n`3.6.9 (default, Apr 18 2020, 01:56:04) [GCC 8.4.0]`\r\nRunning on Ubuntu 18.04, via a Jupyter notebook","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/156\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/155","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/155\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/155\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/155\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/155","id":620067946,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE5Mzg1ODM0","number":155,"title":"Include more links in README, fix typos","user":{"login":"Bharat123rox","id":13381361,"node_id":"MDQ6VXNlcjEzMzgxMzYx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13381361?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Bharat123rox","html_url":"https:\/\/github.com\/Bharat123rox","followers_url":"https:\/\/api.github.com\/users\/Bharat123rox\/followers","following_url":"https:\/\/api.github.com\/users\/Bharat123rox\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Bharat123rox\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Bharat123rox\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Bharat123rox\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Bharat123rox\/orgs","repos_url":"https:\/\/api.github.com\/users\/Bharat123rox\/repos","events_url":"https:\/\/api.github.com\/users\/Bharat123rox\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Bharat123rox\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I fixed a conflict :) thanks !"],"created_at":1589795228000,"updated_at":1590654717000,"closed_at":1590654717000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/155","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/155","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/155.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/155.patch"},"body":"Include more links and fix typos in README","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/155\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/154","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/154\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/154\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/154\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/154","id":620059066,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE5Mzc4Mzgw","number":154,"title":"add Ubuntu Dialogs Corpus 
datasets","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589794488000,"updated_at":1589796748000,"closed_at":1589796747000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/154","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/154","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/154.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/154.patch"},"body":"This PR adds the Ubuntu Dialog Corpus datasets version 2.0. ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/154\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/153","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/153\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/153\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/153\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/153","id":619972246,"node_id":"MDU6SXNzdWU2MTk5NzIyNDY=","number":153,"title":"Meta-datasets (GLUE\/XTREME\/...) 
\u2013 Special care to attributions and citations","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[{"id":2067400324,"node_id":"MDU6TGFiZWwyMDY3NDAwMzI0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/generic%20discussion","name":"generic discussion","color":"c5def5","default":false,"description":"Generic discussion on the library"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["As @yoavgo suggested, there should be the possibility to call a function like nlp.bib that outputs all bibtex ref from the datasets and models actually used and eventually nlp.bib.forreadme that would output the same info + versions numbers so they can be included in a readme.md file.","Actually, double checking with @mariamabarham, we already have this feature I think.\r\n\r\nIt's like this currently:\r\n```python\r\n>>> from nlp import load_dataset\r\n>>> \r\n>>> dataset = load_dataset('glue', 'cola', split='train')\r\n>>> print(dataset.info.citation)\r\n@article{warstadt2018neural,\r\n title={Neural Network Acceptability Judgments},\r\n author={Warstadt, Alex and Singh, Amanpreet and Bowman, Samuel R},\r\n journal={arXiv preprint arXiv:1805.12471},\r\n year={2018}\r\n}\r\n@inproceedings{wang2019glue,\r\n title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},\r\n author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},\r\n note={In the Proceedings of ICLR.},\r\n year={2019}\r\n}\r\n\r\nNote that each GLUE dataset has its own citation. Please see the source to see\r\nthe correct citation for each contained dataset.\r\n```\r\n\r\nWhat do you think @dseddah?","Looks good but why would there be a difference between the ref in the source and the one to be printed? ","Yes, I think we should remove this warning @mariamabarham.\r\n\r\nIt's probably a relic of tfds which didn't have the same way to access citations. "],"created_at":1589786662000,"updated_at":1589836696000,"closed_at":null,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"Meta-datasets are interesting in terms of standardized benchmarks but they also have specific behaviors, in particular in terms of attribution and authorship. 
It's very important that each specific dataset inside a meta dataset is properly referenced and the citation\/specific homepage\/etc are very visible and accessible and not only the generic citation of the meta-dataset itself.\r\n\r\nLet's take GLUE as an example:\r\n\r\nThe configuration has the citation for each dataset included (e.g. [here](https:\/\/github.com\/huggingface\/nlp\/blob\/master\/datasets\/glue\/glue.py#L154-L161)) but it should be copied inside the dataset info so that, when people access `dataset.info.citation` they get both the citation for GLUE and the citation for the specific datasets inside GLUE that they have loaded.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/153\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/152","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/152\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/152\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/152\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/152","id":619971900,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE5MzA4OTE2","number":152,"title":"Add GLUE config name check","user":{"login":"Bharat123rox","id":13381361,"node_id":"MDQ6VXNlcjEzMzgxMzYx","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/13381361?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/Bharat123rox","html_url":"https:\/\/github.com\/Bharat123rox","followers_url":"https:\/\/api.github.com\/users\/Bharat123rox\/followers","following_url":"https:\/\/api.github.com\/users\/Bharat123rox\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/Bharat123rox\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/Bharat123rox\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/Bharat123rox\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/Bharat123rox\/orgs","repos_url":"https:\/\/api.github.com\/users\/Bharat123rox\/repos","events_url":"https:\/\/api.github.com\/users\/Bharat123rox\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/Bharat123rox\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["If tests are being added, any guidance on where to add tests would be helpful!\r\n\r\nTagging @thomwolf for review","Looks good to me. 
Is this compatible with the way we are doing tests right now @patrickvonplaten ?","If the tests pass it should be fine :-) \r\n\r\n@Bharat123rox could you check whether the tests pass locally via: \r\n`pytest tests\/test_dataset_common.py::LocalDatasetTest::test_load_dataset_glue`","The test fails with an `AssertionError` because the name is not being passed to kwargs, however I'm not sure how to do that, because only the config file is being passed to the tests of all datasets?\r\n\r\nI'm guessing this is the corresponding code:\r\nhttps:\/\/github.com\/huggingface\/nlp\/blob\/2b3621bb5c78caf02c5a969b8e67fa0c145da4e6\/tests\/test_dataset_common.py#L141-L143\r\n\r\nAnd these are the logs:\r\n```\r\n___________________ DatasetTest.test_load_dataset_local_glue ___________________\r\n\r\nself = <tests.test_dataset_common.DatasetTest testMethod=test_load_dataset_local_glue>\r\ndataset_name = 'glue'\r\n\r\n @local\r\n def test_load_dataset_local(self, dataset_name):\r\n # test only first config\r\n if \"\/\" in dataset_name:\r\n logging.info(\"Skip {} because it is not a canonical dataset\")\r\n return\r\n\r\n configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)[:1]\r\n> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True)\r\n\r\ntests\/test_dataset_common.py:200:\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\ntests\/test_dataset_common.py:74: in check_load_dataset\r\n dataset_builder = dataset_builder_cls(config=config, cache_dir=processed_temp_dir)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\nself = <nlp.datasets.glue.fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597.glue.Glue object at 0x135c0ea90>\r\nargs = ()\r\nkwargs = {'cache_dir': '\/var\/folders\/r6\/mnw5ntvn5y72j7d4s1fm273m0000gn\/T\/tmpa9rpq3tl', 'config': GlueConfig(name='cola', versio...linguistic theory. 
Each example is a sequence of words annotated\\nwith whether it is a grammatical English sentence.')}\r\n\r\n def __init__(self, *args, **kwargs):\r\n> assert ('name' in kwargs and kwargs['name'] is not None), \"Glue has to be called with a configuration name\"\r\nE AssertionError: Glue has to be called with a configuration name\r\n\r\n\/usr\/local\/lib\/python3.7\/site-packages\/nlp\/datasets\/glue\/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597\/glue.py:139: AssertionError\r\n----------------------------- Captured stderr call -----------------------------\r\nINFO:nlp.load:Checking .\/datasets\/glue\/glue.py for additional imports.\r\nINFO:filelock:Lock 5209998288 acquired on .\/datasets\/glue\/glue.py.lock\r\nINFO:nlp.load:Found main folder for dataset .\/datasets\/glue\/glue.py at \/usr\/local\/lib\/python3.7\/site-packages\/nlp\/datasets\/glue\r\nINFO:nlp.load:Found specific version folder for dataset .\/datasets\/glue\/glue.py at \/usr\/local\/lib\/python3.7\/site-packages\/nlp\/datasets\/glue\/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597\r\nINFO:nlp.load:Found script file from .\/datasets\/glue\/glue.py to \/usr\/local\/lib\/python3.7\/site-packages\/nlp\/datasets\/glue\/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597\/glue.py\r\nINFO:nlp.load:Found dataset infos file from .\/datasets\/glue\/dataset_infos.json to \/usr\/local\/lib\/python3.7\/site-packages\/nlp\/datasets\/glue\/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597\/dataset_infos.json\r\nINFO:nlp.load:Found metadata file for dataset .\/datasets\/glue\/glue.py at \/usr\/local\/lib\/python3.7\/site-packages\/nlp\/datasets\/glue\/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597\/glue.json\r\nINFO:filelock:Lock 5209998288 released on .\/datasets\/glue\/glue.py.lock\r\nINFO:nlp.load:Checking .\/datasets\/glue\/glue.py for additional imports.\r\nINFO:filelock:Lock 5196802640 acquired on .\/datasets\/glue\/glue.py.lock\r\nINFO:nlp.load:Found main folder for dataset .\/datasets\/glue\/glue.py at \/usr\/local\/lib\/python3.7\/site-packages\/nlp\/datasets\/glue\r\nINFO:nlp.load:Found specific version folder for dataset .\/datasets\/glue\/glue.py at \/usr\/local\/lib\/python3.7\/site-packages\/nlp\/datasets\/glue\/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597\r\nINFO:nlp.load:Found script file from .\/datasets\/glue\/glue.py to \/usr\/local\/lib\/python3.7\/site-packages\/nlp\/datasets\/glue\/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597\/glue.py\r\nINFO:nlp.load:Found dataset infos file from .\/datasets\/glue\/dataset_infos.json to \/usr\/local\/lib\/python3.7\/site-packages\/nlp\/datasets\/glue\/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597\/dataset_infos.json\r\nINFO:nlp.load:Found metadata file for dataset .\/datasets\/glue\/glue.py at \/usr\/local\/lib\/python3.7\/site-packages\/nlp\/datasets\/glue\/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597\/glue.json\r\nINFO:filelock:Lock 5196802640 released on .\/datasets\/glue\/glue.py.lock\r\n------------------------------ Captured log call -------------------------------\r\nINFO nlp.load:load.py:157 Checking .\/datasets\/glue\/glue.py for additional imports.\r\nINFO filelock:filelock.py:274 Lock 5209998288 acquired on .\/datasets\/glue\/glue.py.lock\r\nINFO nlp.load:load.py:320 Found main folder for dataset .\/datasets\/glue\/glue.py at \/usr\/local\/lib\/python3.7\/site-packages\/nlp\/datasets\/glue\r\nINFO 
nlp.load:load.py:333 Found specific version folder for dataset .\/datasets\/glue\/glue.py at \/usr\/local\/lib\/python3.7\/site-packages\/nlp\/datasets\/glue\/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597\r\nINFO nlp.load:load.py:346 Found script file from .\/datasets\/glue\/glue.py to \/usr\/local\/lib\/python3.7\/site-packages\/nlp\/datasets\/glue\/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597\/glue.py\r\nINFO nlp.load:load.py:356 Found dataset infos file from .\/datasets\/glue\/dataset_infos.json to \/usr\/local\/lib\/python3.7\/site-packages\/nlp\/datasets\/glue\/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597\/dataset_infos.json\r\nINFO nlp.load:load.py:367 Found metadata file for dataset .\/datasets\/glue\/glue.py at \/usr\/local\/lib\/python3.7\/site-packages\/nlp\/datasets\/glue\/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597\/glue.json\r\nINFO filelock:filelock.py:318 Lock 5209998288 released on .\/datasets\/glue\/glue.py.lock\r\nINFO nlp.load:load.py:157 Checking .\/datasets\/glue\/glue.py for additional imports.\r\nINFO filelock:filelock.py:274 Lock 5196802640 acquired on .\/datasets\/glue\/glue.py.lock\r\nINFO nlp.load:load.py:320 Found main folder for dataset .\/datasets\/glue\/glue.py at \/usr\/local\/lib\/python3.7\/site-packages\/nlp\/datasets\/glue\r\nINFO nlp.load:load.py:333 Found specific version folder for dataset .\/datasets\/glue\/glue.py at \/usr\/local\/lib\/python3.7\/site-packages\/nlp\/datasets\/glue\/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597\r\nINFO nlp.load:load.py:346 Found script file from .\/datasets\/glue\/glue.py to \/usr\/local\/lib\/python3.7\/site-packages\/nlp\/datasets\/glue\/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597\/glue.py\r\nINFO nlp.load:load.py:356 Found dataset infos file from .\/datasets\/glue\/dataset_infos.json to \/usr\/local\/lib\/python3.7\/site-packages\/nlp\/datasets\/glue\/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597\/dataset_infos.json\r\nINFO nlp.load:load.py:367 Found metadata file for dataset .\/datasets\/glue\/glue.py at \/usr\/local\/lib\/python3.7\/site-packages\/nlp\/datasets\/glue\/fa7c9f982200144186b2831060b54199cf028e4bbdc5f40acd339ee343342597\/glue.json\r\nINFO filelock:filelock.py:318 Lock 5196802640 released on .\/datasets\/glue\/glue.py.lock\r\n```","Closing as #130 is fixed !"],"created_at":1589786623000,"updated_at":1590617352000,"closed_at":1590617352000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/152","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/152","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/152.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/152.patch"},"body":"Fixes #130 by adding a name check to the Glue class","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/152\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/151","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/151\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/151\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/151\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/151","id":619968480,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE5MzA2MTYz","number":151,"title":"Fix JSON tests.","user":{"login":"jplu","id":959590,"node_id":"MDQ6VXNlcjk1OTU5MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/959590?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jplu","html_url":"https:\/\/github.com\/jplu","followers_url":"https:\/\/api.github.com\/users\/jplu\/followers","following_url":"https:\/\/api.github.com\/users\/jplu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jplu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jplu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jplu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jplu\/orgs","repos_url":"https:\/\/api.github.com\/users\/jplu\/repos","events_url":"https:\/\/api.github.com\/users\/jplu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jplu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589786258000,"updated_at":1589786512000,"closed_at":1589786511000,"author_association":"COLLABORATOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/151","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/151","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/151.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/151.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/151\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/150","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/150\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/150\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/150\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/150","id":619809645,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE5MTgyODU4","number":150,"title":"Add WNUT 17 NER 
dataset","user":{"login":"stefan-it","id":20651387,"node_id":"MDQ6VXNlcjIwNjUxMzg3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20651387?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stefan-it","html_url":"https:\/\/github.com\/stefan-it","followers_url":"https:\/\/api.github.com\/users\/stefan-it\/followers","following_url":"https:\/\/api.github.com\/users\/stefan-it\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stefan-it\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stefan-it\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stefan-it\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stefan-it\/orgs","repos_url":"https:\/\/api.github.com\/users\/stefan-it\/repos","events_url":"https:\/\/api.github.com\/users\/stefan-it\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stefan-it\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The PR looks awesome! \r\nSince you have already added a dataset I imagine the tests as described in 5. of https:\/\/github.com\/huggingface\/nlp\/blob\/master\/CONTRIBUTING.md#how-to-add-a-dataset all pass, right @stefan-it ?\r\n\r\nI think we are then good to merge this :-) @lhoestq ","Nice !\r\n\r\nOne thing though: I saw that you copied the `dataset_info.json` (one split info), which is different from the `dataset_infos.json` (split infos of all configs) that we expect.\r\n\r\nCould you generate the `dataset_infos.json` file using this command please ?\r\n```\r\npython nlp-cli test datasets\/wnut_17 --save_infos --all_configs\r\n```","Hi @patrickvonplaten I just rebased onto latest `master` version and executed the commands. All tests passed then :)\r\n\r\n@lhoestq thanks for that hint! I've generated and added the `dataset_infos.json` and deleted `dataset_info.json`.","Awesome ! I guess it's ready to be merged now :)"],"created_at":1589753944000,"updated_at":1590525479000,"closed_at":1590525479000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/150","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/150","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/150.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/150.patch"},"body":"Hi,\r\n\r\nthis PR adds the WNUT 17 dataset to `nlp`.\r\n\r\n> Emerging and Rare entity recognition\r\n> This shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions. Named entities form the basis of many modern approaches to other tasks (like event clustering and summarisation), but recall on them is a real problem in noisy text - even among annotators. This drop tends to be due to novel entities and surface forms. Take for example the tweet \u201cso.. kktny in 30 mins?\u201d - even human experts find entity kktny hard to detect and resolve. 
This task will evaluate the ability to detect and classify novel, emerging, singleton named entities in noisy text.\r\n> \r\n> The goal of this task is to provide a definition of emerging and of rare entities, and based on that, also datasets for detecting these entities.\r\n\r\nMore information about the dataset can be found on the [shared task page](https:\/\/noisy-text.github.io\/2017\/emerging-rare-entities.html).\r\n\r\nDataset is taken is taken from their [GitHub repository](https:\/\/github.com\/leondz\/emerging_entities_17), because the data provided in this repository contains minor fixes in the dataset format.\r\n\r\n## Usage\r\n\r\nThen the WNUT 17 dataset can be used in `nlp` like this:\r\n\r\n```python\r\nimport nlp\r\n\r\nwnut_17 = nlp.load_dataset(\".\/datasets\/wnut_17\/wnut_17.py\")\r\n\r\nprint(wnut_17)\r\n```\r\n\r\nThis outputs:\r\n\r\n```txt\r\n'train': Dataset(schema: {'id': 'string', 'tokens': 'list<item: string>', 'labels': 'list<item: string>'}, num_rows: 3394)\r\n'validation': Dataset(schema: {'id': 'string', 'tokens': 'list<item: string>', 'labels': 'list<item: string>'}, num_rows: 1009)\r\n'test': Dataset(schema: {'id': 'string', 'tokens': 'list<item: string>', 'labels': 'list<item: string>'}, num_rows: 1287)\r\n```\r\n\r\nNumber are identical with the ones in [this paper](https:\/\/www.ijcai.org\/Proceedings\/2019\/0702.pdf) and are the same as using the `dataset` reader in Flair.\r\n\r\n## Features\r\n\r\nThe following feature format is used to represent a sentence in the WNUT 17 dataset:\r\n\r\n| Feature | Example | Description\r\n| ---- | ---- | -----------------\r\n| `id` | `0` | Number (id) of current sentence\r\n| `tokens` | `[\"AHFA\", \"extends\", \"deadline\"]` | List of tokens (strings) for a sentence\r\n| `labels` | `[\"B-group\", \"O\", \"O\"]` | List of labels (outer span)\r\n\r\nThe following labels are used in WNUT 17:\r\n\r\n```txt\r\nO\r\nB-corporation\r\nI-corporation\r\nB-location\r\nI-location\r\nB-product\r\nI-product\r\nB-person\r\nI-person\r\nB-group\r\nI-group\r\nB-creative-work\r\nI-creative-work\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/150\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/149","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/149\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/149\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/149\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/149","id":619735739,"node_id":"MDU6SXNzdWU2MTk3MzU3Mzk=","number":149,"title":"[Feature request] Add Ubuntu Dialogue Corpus 
dataset","user":{"login":"danth","id":28959268,"node_id":"MDQ6VXNlcjI4OTU5MjY4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/28959268?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/danth","html_url":"https:\/\/github.com\/danth","followers_url":"https:\/\/api.github.com\/users\/danth\/followers","following_url":"https:\/\/api.github.com\/users\/danth\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/danth\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/danth\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/danth\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/danth\/orgs","repos_url":"https:\/\/api.github.com\/users\/danth\/repos","events_url":"https:\/\/api.github.com\/users\/danth\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/danth\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@AlphaMycelium the Ubuntu Dialogue Corpus [version 2]( https:\/\/github.com\/rkadlec\/ubuntu-ranking-dataset-creator) is added. Note that it requires a manual download by following the download instructions in the [repos]( https:\/\/github.com\/rkadlec\/ubuntu-ranking-dataset-creator).\r\nMaybe we can close this issue for now?"],"created_at":1589730159000,"updated_at":1589821306000,"closed_at":1589821306000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"https:\/\/github.com\/rkadlec\/ubuntu-ranking-dataset-creator or http:\/\/dataset.cs.mcgill.ca\/ubuntu-corpus-1.0\/","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/149\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/148","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/148\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/148\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/148\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/148","id":619590555,"node_id":"MDU6SXNzdWU2MTk1OTA1NTU=","number":148,"title":"_download_and_prepare() got an unexpected keyword argument 
'verify_infos'","user":{"login":"richarddwang","id":17963619,"node_id":"MDQ6VXNlcjE3OTYzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/17963619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/richarddwang","html_url":"https:\/\/github.com\/richarddwang","followers_url":"https:\/\/api.github.com\/users\/richarddwang\/followers","following_url":"https:\/\/api.github.com\/users\/richarddwang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/richarddwang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/richarddwang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/richarddwang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/richarddwang\/orgs","repos_url":"https:\/\/api.github.com\/users\/richarddwang\/repos","events_url":"https:\/\/api.github.com\/users\/richarddwang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/richarddwang\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Same error for dataset 'wiki40b'","Should be fixed on master :)"],"created_at":1589680133000,"updated_at":1589787513000,"closed_at":1589787513000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"# Reproduce\r\nIn Colab,\r\n```\r\n%pip install -q nlp\r\n%pip install -q apache_beam mwparserfromhell\r\n\r\ndataset = nlp.load_dataset('wikipedia')\r\n```\r\nget\r\n```\r\nDownloading and preparing dataset wikipedia\/20200501.aa (download: Unknown size, generated: Unknown size, total: Unknown size) to \/root\/.cache\/huggingface\/datasets\/wikipedia\/20200501.aa\/1.0.0...\r\n\r\n---------------------------------------------------------------------------\r\n\r\nTypeError Traceback (most recent call last)\r\n\r\n<ipython-input-6-52471d2a0088> in <module>()\r\n----> 1 dataset = nlp.load_dataset('wikipedia')\r\n\r\n1 frames\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 515 download_mode=download_mode,\r\n 516 ignore_verifications=ignore_verifications,\r\n--> 517 save_infos=save_infos,\r\n 518 )\r\n 519 \r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs)\r\n 361 verify_infos = not save_infos and not ignore_verifications\r\n 362 self._download_and_prepare(\r\n--> 363 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 364 )\r\n 365 # Sync info\r\n\r\nTypeError: _download_and_prepare() got an unexpected keyword argument 'verify_infos'\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/148\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/147","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/147\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/147\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/147\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/147","id":619581907,"node_id":"MDU6SXNzdWU2MTk1ODE5MDc=","number":147,"title":"Error with sklearn train_test_split","user":{"login":"ClonedOne","id":6853743,"node_id":"MDQ6VXNlcjY4NTM3NDM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/6853743?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/ClonedOne","html_url":"https:\/\/github.com\/ClonedOne","followers_url":"https:\/\/api.github.com\/users\/ClonedOne\/followers","following_url":"https:\/\/api.github.com\/users\/ClonedOne\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/ClonedOne\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/ClonedOne\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/ClonedOne\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/ClonedOne\/orgs","repos_url":"https:\/\/api.github.com\/users\/ClonedOne\/repos","events_url":"https:\/\/api.github.com\/users\/ClonedOne\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/ClonedOne\/received_events","type":"User","site_admin":false},"labels":[{"id":1935892871,"node_id":"MDU6TGFiZWwxOTM1ODkyODcx","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/enhancement","name":"enhancement","color":"a2eeef","default":true,"description":"New feature or request"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Indeed. Probably we will want to have a similar method directly in the library","Related: #166 "],"created_at":1589675304000,"updated_at":1592497403000,"closed_at":1592497403000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"It would be nice if we could use sklearn `train_test_split` to quickly generate subsets from the dataset objects returned by `nlp.load_dataset`. 
At the moment the code:\r\n\r\n```python\r\ndata = nlp.load_dataset('imdb', cache_dir=data_cache)\r\nf_half, s_half = train_test_split(data['train'], test_size=0.5, random_state=seed)\r\n```\r\nthrows:\r\n```\r\nValueError: Can only get row(s) (int or slice) or columns (string).\r\n```\r\nIt's not a big deal, since there are other ways to split the data, but it would be a cool thing to have.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/147\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/146","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/146\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/146\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/146\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/146","id":619564653,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE5MDI5MjUx","number":146,"title":"Add BERTScore to metrics","user":{"login":"felixgwu","id":7753366,"node_id":"MDQ6VXNlcjc3NTMzNjY=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7753366?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/felixgwu","html_url":"https:\/\/github.com\/felixgwu","followers_url":"https:\/\/api.github.com\/users\/felixgwu\/followers","following_url":"https:\/\/api.github.com\/users\/felixgwu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/felixgwu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/felixgwu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/felixgwu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/felixgwu\/orgs","repos_url":"https:\/\/api.github.com\/users\/felixgwu\/repos","events_url":"https:\/\/api.github.com\/users\/felixgwu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/felixgwu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589666979000,"updated_at":1589754130000,"closed_at":1589754129000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/146","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/146","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/146.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/146.patch"},"body":"This PR adds [BERTScore](https:\/\/arxiv.org\/abs\/1904.09675) to metrics.\r\nHere is an example of how to use it.\r\n\r\n```sh\r\nimport nlp\r\nbertscore = nlp.load_metric('metrics\/bertscore') # or simply nlp.load_metric('bertscore') after this is added to huggingface's s3 bucket\r\npredictions = ['example', 'fruit']\r\nreferences = [['this is an example.', 'this is one example.'], ['apple']]\r\nresults = bertscore.compute(predictions, references, lang='en')\r\nprint(results)\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/146\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/145","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/145\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/145\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/145\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/145","id":619480549,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE4OTcxMjg0","number":145,"title":"[AWS Tests] Follow-up PR from #144","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589637226000,"updated_at":1589637263000,"closed_at":1589637262000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/145","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/145","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/145.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/145.patch"},"body":"I forgot to add this line in PR #145 . 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/145\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/144","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/144\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/144\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/144\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/144","id":619477367,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE4OTY5NjA1","number":144,"title":"[AWS tests] AWS test should not run for canonical datasets","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589636370000,"updated_at":1589636674000,"closed_at":1589636673000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/144","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/144","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/144.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/144.patch"},"body":"AWS tests should in general not run for canonical datasets. Only local tests will run in this case. This way a PR is able to pass when adding a new dataset.\r\n\r\nThis PR changes to logic to the following: \r\n\r\n1) All datasets that are present in `nlp\/datasets` are tested only locally. This way when one adds a canonical dataset, the PR includes his dataset in the tests.\r\n\r\n2) All datasets that are only present on AWS, such as `webis\/tl_dr` atm are tested only on AWS. \r\n\r\nI think the testing structure might need a bigger refactoring and better documentation very soon. 
\r\n\r\nMerging for now to unblock new PRs @thomwolf @mariamabarham .","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/144\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/143","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/143\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/143\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/143\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/143","id":619457641,"node_id":"MDU6SXNzdWU2MTk0NTc2NDE=","number":143,"title":"ArrowTypeError in squad metrics","user":{"login":"patil-suraj","id":27137566,"node_id":"MDQ6VXNlcjI3MTM3NTY2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/27137566?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patil-suraj","html_url":"https:\/\/github.com\/patil-suraj","followers_url":"https:\/\/api.github.com\/users\/patil-suraj\/followers","following_url":"https:\/\/api.github.com\/users\/patil-suraj\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patil-suraj\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patil-suraj\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patil-suraj\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patil-suraj\/orgs","repos_url":"https:\/\/api.github.com\/users\/patil-suraj\/repos","events_url":"https:\/\/api.github.com\/users\/patil-suraj\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patil-suraj\/received_events","type":"User","site_admin":false},"labels":[{"id":2067393914,"node_id":"MDU6TGFiZWwyMDY3MzkzOTE0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/metric%20bug","name":"metric bug","color":"25b21e","default":false,"description":"A bug in a metric script"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["There was an issue in the format, thanks.\r\nNow you can do\r\n```python3\r\nsquad_dset = nlp.load_dataset(\"squad\")\r\nsquad_metric = nlp.load_metric(\"\/Users\/quentinlhoest\/Desktop\/hf\/nlp-bis\/metrics\/squad\")\r\npredictions = [\r\n {\"id\": v[\"id\"], \"prediction_text\": v[\"answers\"][\"text\"][0]} # take first possible answer\r\n for v in squad_dset[\"validation\"]\r\n]\r\nsquad_metric.compute(predictions, squad_dset[\"validation\"])\r\n```\r\n\r\nand the expected format is \r\n```\r\nArgs:\r\n predictions: List of question-answers dictionaries with the following key-values:\r\n - 'id': id of the question-answer pair as given in the references (see below)\r\n - 'prediction_text': the text of the answer\r\n references: List of question-answers dictionaries with the following key-values:\r\n - 'id': id of the question-answer pair (see above),\r\n - 'answers': a Dict {'text': list of possible texts for the answer, as a list of strings}\r\n```"],"created_at":1589630797000,"updated_at":1590154732000,"closed_at":1590154608000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"`squad_metric.compute` is giving following error\r\n```\r\nArrowTypeError: Could not convert [{'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}] with type list: was not a dict, tuple, or recognized null 
value for conversion to struct type\r\n```\r\n\r\nThis is how my predictions and references look like\r\n```\r\npredictions[0]\r\n# {'id': '56be4db0acb8001400a502ec', 'prediction_text': 'Denver Broncos'}\r\n```\r\n\r\n```\r\nreferences[0]\r\n# {'answers': [{'text': 'Denver Broncos'},\r\n {'text': 'Denver Broncos'},\r\n {'text': 'Denver Broncos'}],\r\n 'id': '56be4db0acb8001400a502ec'}\r\n```\r\n\r\nThese are structured as per the `squad_metric.compute` help string.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/143\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/142","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/142\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/142\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/142\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/142","id":619450068,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE4OTU0OTc1","number":142,"title":"[WMT] Add all wmt","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589628526000,"updated_at":1589717901000,"closed_at":1589717900000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/142","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/142","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/142.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/142.patch"},"body":"This PR adds all wmt datasets scripts. At the moment the script is **not** functional for the language pairs \"cs-en\", \"ru-en\", \"hi-en\" because apparently it takes up to a week to get the manual data for these datasets: see http:\/\/ufal.mff.cuni.cz\/czeng. \r\n\r\nThe datasets are fully functional though for the \"big\" language pairs \"de-en\" and \"fr-en\". \r\n\r\nOverall I think the scripts are very messy and might need a big refactoring at some point.\r\n\r\nFor now I think there are good to merge (most dataset configs can be used). I will add \"cs\", \"ru\" and \"hi\" when the manual data is available. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/142\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/141","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/141\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/141\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/141\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/141","id":619447090,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE4OTUzMzQw","number":141,"title":"[Clean up] remove bogus folder","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Same for the dataset_infos.json at the project root no ?","Sorry guys, I haven't noticed. 
Thank you for mentioning it."],"created_at":1589627622000,"updated_at":1589635467000,"closed_at":1589635466000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/141","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/141","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/141.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/141.patch"},"body":"@mariamabarham - I think you accidentally placed it there.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/141\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/140","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/140\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/140\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/140\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/140","id":619443613,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE4OTUxMzg4","number":140,"title":"[Tests] run local tests as default","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["You are right and I think those are usual best practice :) I'm 100% fine with this^^","Merging this for now to unblock other PRs."],"created_at":1589626566000,"updated_at":1589635304000,"closed_at":1589635303000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/140","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/140","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/140.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/140.patch"},"body":"This PR also enables local tests by default\r\n\r\nI think it's safer for now to enable both local and aws tests for every commit. The problem currently is that when we do a PR to add a dataset, the dataset is not yet on AWS on therefore not tested on the PR itself. Thus the PR will always be green even if the datasets are not correct. 
This PR aims at fixing this.\r\n\r\n## Suggestion on how to commit to the repo from now on:\r\nNow since the repo is \"online\", I think we should adopt a couple of best practices:\r\n1) - No direct committing to the repo anymore. Every change should be opened in a PR and be well documented so that we can find it later\r\n2) - Every PR has to be reviewed by at least x people (I guess @thomwolf you should decide here) because we now have to be much more careful when doing changes to the API for backward compatibility, etc...\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/140\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/139","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/139\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/139\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/139\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/139","id":619327409,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE4ODc4NzMy","number":139,"title":"Add GermEval 2014 NER dataset","user":{"login":"stefan-it","id":20651387,"node_id":"MDQ6VXNlcjIwNjUxMzg3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/20651387?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/stefan-it","html_url":"https:\/\/github.com\/stefan-it","followers_url":"https:\/\/api.github.com\/users\/stefan-it\/followers","following_url":"https:\/\/api.github.com\/users\/stefan-it\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/stefan-it\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/stefan-it\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/stefan-it\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/stefan-it\/orgs","repos_url":"https:\/\/api.github.com\/users\/stefan-it\/repos","events_url":"https:\/\/api.github.com\/users\/stefan-it\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/stefan-it\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"assignees":[{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u
\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Had really fun playing around with this new library :heart: ","That's awesome - thanks @stefan-it :-) \r\n\r\nCould you maybe rebase to master and check if all dummy data tests are fine. I should have included the local tests directly in the test suite so that all PRs are fully checked: #140 - sorry :D ","@patrickvonplaten Rebased it \ud83d\ude05\r\n\r\nHow can it test \ud83e\udd14 I used:\r\n\r\n```bash\r\nRUN_SLOW=1 RUN_LOCAL=1 pytest tests\/test_dataset_common.py::DatasetTest::test_load_real_dataset_local_germeval_14\r\n# and\r\nRUN_SLOW=1 RUN_LOCAL=1 pytest tests\/test_dataset_common.py::DatasetTest::test_load_dataset_all_configs_local_germeval_14\r\n```\r\n\r\nand the tests still pass :)","Perfect, if these tests pass that's great - I'll merge the PR then :-) Was it very difficult to create the dummy data structure? 
"],"created_at":1589586129000,"updated_at":1589637397000,"closed_at":1589637382000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/139","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/139","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/139.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/139.patch"},"body":"Hi, \r\n\r\nthis PR adds the GermEval 2014 NER dataset \ud83d\ude03\r\n\r\n> The GermEval 2014 NER Shared Task builds on a new dataset with German Named Entity annotation [1] with the following properties:\r\n\r\n> - The data was sampled from German Wikipedia and News Corpora as a collection of citations.\r\n> - The dataset covers over 31,000 sentences corresponding to over 590,000 tokens.\r\n> - The NER annotation uses the NoSta-D guidelines, which extend the T\u00fcbingen Treebank guidelines, using four main NER categories with sub-structure, and annotating embeddings among NEs such as [ORG FC Kickers [LOC Darmstadt]].\r\n\r\nDataset will be downloaded from the [official GermEval 2014 website](https:\/\/sites.google.com\/site\/germeval2014ner\/data).\r\n\r\n## Dataset format\r\n\r\nHere's an example of the dataset format from the original dataset:\r\n\r\n```tsv\r\n# http:\/\/de.wikipedia.org\/wiki\/Manfred_Korfmann [2009-10-17]\r\n1 Aufgrund O O\r\n2 seiner O O\r\n3 Initiative O O\r\n4 fand O O\r\n5 2001\/2002 O O\r\n6 in O O\r\n7 Stuttgart B-LOC O\r\n8 , O O\r\n9 Braunschweig B-LOC O\r\n10 und O O\r\n11 Bonn B-LOC O\r\n12 eine O O\r\n13 gro\u00dfe O O\r\n14 und O O\r\n15 publizistisch O O\r\n16 vielbeachtete O O\r\n17 Troia-Ausstellung B-LOCpart O\r\n18 statt O O\r\n19 , O O\r\n20 \u201e O O\r\n21 Troia B-OTH B-LOC\r\n22 - I-OTH O\r\n23 Traum I-OTH O\r\n24 und I-OTH O\r\n25 Wirklichkeit I-OTH O\r\n26 \u201c O O\r\n27 . O O\r\n```\r\n\r\nThe sentence is encoded as one token per line (tab separated columns.\r\n\r\nThe first column contains either a `#`, which signals the source the sentence is cited from and the date it was retrieved, or the token number within the sentence.\r\n\r\nThe second column contains the token.\r\n\r\nColumn three and four contain the named entity (in IOB2 scheme).\r\nOuter spans are encoded in the third column, embedded\/nested spans in the fourth column.\r\n\r\n## Features\r\n\r\nI decided to keep most information from the dataset. That means the so called \"source\" information (where the sentences come from + date information) is also returned for each sentence in the feature vector.\r\n\r\nFor each sentence in the dataset, one feature vector (`nlp.Features` definition) will be returned:\r\n\r\n| Feature | Example | Description\r\n| ---- | ---- | -----------------\r\n| `id` | `0` | Number (id) of current sentence\r\n| `source` | `http:\/\/de.wikipedia.org\/wiki\/Manfred_Korfmann [2009-10-17]` | URL and retrieval date as string\r\n| `tokens` | `[\"Schwartau\", \"sagte\", \":\"]` | List of tokens (strings) for a sentence\r\n| `labels` | `[\"B-PER\", \"O\", \"O\"]` | List of labels (outer span)\r\n| `nested-labels` | `[\"O\", \"O\", \"O\"]` | List of labels for nested span\r\n\r\n## Example\r\n\r\nThe following command downloads the dataset from the official GermEval 2014 page and pre-processed it:\r\n\r\n```bash\r\npython nlp-cli test datasets\/germeval_14 --all_configs\r\n```\r\n\r\nIt then outputs the number for training, development and testset. 
The training set consists of 24,000 sentences, the development set of 2,200 and the test of 5,100 sentences.\r\n\r\nNow it can be imported and used with `nlp`:\r\n\r\n```python\r\nimport nlp\r\n\r\ngermeval = nlp.load_dataset(\".\/datasets\/germeval_14\/germeval_14.py\")\r\nassert len(germeval[\"train\"]) == 24000\r\n\r\n# Show first sentence of training set:\r\ngermeval[\"train\"][0]\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/139\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/138","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/138\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/138\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/138\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/138","id":619225191,"node_id":"MDU6SXNzdWU2MTkyMjUxOTE=","number":138,"title":"Consider renaming to nld","user":{"login":"honnibal","id":8059750,"node_id":"MDQ6VXNlcjgwNTk3NTA=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8059750?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/honnibal","html_url":"https:\/\/github.com\/honnibal","followers_url":"https:\/\/api.github.com\/users\/honnibal\/followers","following_url":"https:\/\/api.github.com\/users\/honnibal\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/honnibal\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/honnibal\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/honnibal\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/honnibal\/orgs","repos_url":"https:\/\/api.github.com\/users\/honnibal\/repos","events_url":"https:\/\/api.github.com\/users\/honnibal\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/honnibal\/received_events","type":"User","site_admin":false},"labels":[{"id":2067400324,"node_id":"MDU6TGFiZWwyMDY3NDAwMzI0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/generic%20discussion","name":"generic discussion","color":"c5def5","default":false,"description":"Generic discussion on the library"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I would suggest `nlds`. NLP is a very general, broad and ambiguous term, the library is not about NLP (as in processing) per se, it is about accessing Natural Language related datasets. So the name should reflect its purpose.\r\n","Chiming in to second everything @honnibal said, and to add that I think the current name is going to impact the discoverability of this library. People who are looking for \"NLP Datasets\" through a search engine are going to see a library called `nlp` and think it's too broad. People who are looking to do NLP in python are going to search \"Python NLP\" and end up here, confused that this is a collection of datasets.\r\n\r\nThe names of the other huggingface libraries work because they're the only game in town: there are not very many robust, distinct libraries for `tokenizers` or `transformers` in python, for example. 
But there are several options for NLP in python, and adding this as a possible search result for \"python nlp\" when datasets are likely not what someone is searching for adds noise and frustrates potential users.","I'm also not sure whether the naming of `nlp` is the problem itself, as long as it comes with the appropriate identifier, so maybe something like `huggingface_nlp`? This is analogous to what @honnibal and spacy are doing for `spacy-transformers`. Of course, this is a \"step back\" from the recent changes\/renaming of transformers, but may be some middle ground between a complete rebranding, and keeping it identifiable.","Interesting, thanks for sharing your thoughts.\r\n\r\nAs we\u2019ll move toward a first non-beta release, we will pool the community of contributors\/users of the library for their opinions on a good final name (like when we renamed the beautifully (?) named `pytorch-pretrained-bert`)\r\n\r\nIn the meantime, using `from nlp import load_dataset, load_metric` should work \ud83d\ude09","I feel like we are conflating two distinct subjects here:\r\n\r\n1. @honnibal's point is that using `nlp` as a package name might break existing code and bring developer usability issues in the future\r\n2. @pmbaumgartner's point is that the `nlp` package name is too broad and shouldn't be used by a package that exposes only datasets and metrics\r\n\r\n(let me know if I mischaracterize your point)\r\n\r\nI'll chime in to say that the first point is a bit silly IMO. As Python developers due to the limitations of the import system we already have to share:\r\n- a single flat namespace for packages\r\n- which also conflicts with local modules i.e. local files\r\n\r\nIf we add the constraint that this flat namespace also be shared with variable names this gets untractable pretty fast :)\r\n\r\nI also think all Python software developers\/ML engineers\/scientists are capable of at least a subset of:\r\n- importing only the methods that they need like @thomwolf suggested\r\n- aliasing their import\r\n- renaming a local variable","By the way, `nlp` will very likely not be only about datasets, and not even just about datasets and metrics.\r\n\r\nI see it as a laboratory for testing several long-term ideas about how we could do NLP in terms of research as well as open-source and community sharing, most of these ideas being too experimental\/big to fit in `transformers`.\r\n\r\nSome of the directions we would like to explore are about sharing, traceability and more experimental models, as well as seeing a model as the community-based process of creating a composite entity from data, optimization, and code.\r\n\r\nWe'll see how these ideas end up being implemented and we'll better know how we should define the library when we start to dive into these topics. I'll try to get the `nlp` team to draft a roadmap on these topics at some point.","> If we add the constraint that this flat namespace also be shared with variable names this gets untractable pretty fast :)\r\n\r\nI'm sort of confused by your point here. The namespace *is* shared by variable names. You should not use local variables that are named the same as modules, because then you cannot use the module within the scope of your function.\r\n\r\nFor instance,\r\n\r\n```python\r\n\r\nimport nlp\r\nimport transformers\r\n\r\nnlp = transformers.pipeline(\"sentiment-analysis\")\r\n```\r\n\r\nThis is a bug: you've just overwritten the module, so now you can't use it. 
Or instead:\r\n\r\n```python\r\n\r\nimport transformers\r\n\r\nnlp = transformers.pipeline(\"sentiment-analysis\")\r\n# (Later, e.g. in a notebook)\r\nimport nlp\r\n```\r\n\r\nThis is also a bug: you've overwritten your variable with an import.\r\n\r\nIf you have a module named `nlp`, you should avoid using `nlp` as a variable, or you'll have bugs in some contexts and inconsistencies in other contexts. You'll have situations where you need to import differently in one module vs another, or name variables differently in one context vs another, which is bad.\r\n\r\n> importing only the methods that they need like @thomwolf suggested\r\n\r\nOkay but the same logic applies to naming the module *literally anything else*. There's absolutely no point in having a module name that's 3 letters if you always plan to do `import from`! It would be entirely better to name it `nlp_datasets` if you don't want people to do `import nlp`.\r\n\r\nAnd finally:\r\n\r\n> By the way, nlp will very likely not be only about datasets, and not even just about datasets and metrics.\r\n\r\nSo...it isn't a datasets library? https:\/\/twitter.com\/Thom_Wolf\/status\/1261282491622731781\r\n\r\nI'm confused \ud83d\ude15 ","Dropping by as I noticed that the library has been renamed `datasets` so I wonder if the conversation above is settled (`nlp` not used anymore) :) ","I guess indeed","I'd argue that `datasets` is worse than `nlp`. Datasets should be a user specific decision and not encapsulate all of python (`pip install datasets`). If this package contained every dataset in the world (NLP \/ vision \/ etc) then it would make sense =\/","I can't speak for the HF team @jramapuram, but as member of the community it looks to me that HF wanted to avoid the past path of changing names as scope broadened over time:\r\n\r\nRemember\r\nhttps:\/\/github.com\/huggingface\/pytorch-openai-transformer-lm\r\nhttps:\/\/github.com\/huggingface\/pytorch-pretrained-BERT\r\nhttps:\/\/github.com\/huggingface\/pytorch-transformers\r\nand now\r\nhttps:\/\/github.com\/huggingface\/transformers\r\n\r\n;) \r\n\r\nJokes aside, seems that the library is growing in a multi-modal direction (https:\/\/github.com\/huggingface\/datasets\/pull\/363) so the current name is not that implausible. Possibly HF ambition is really to grow its community and bring here a large chunk of datasets of the world (including tabular \/ vision \/ audio?).","Yea I see your point. However, wouldn't scoping solve the entire problem? \r\n\r\n```python\r\nimport huggingface.datasets as D\r\nimport huggingface.transformers as T\r\n```\r\n\r\nCalling something `datasets` is akin to saying I'm going to name my package `python` --> `import python` "],"created_at":1589574207000,"updated_at":1608238591000,"closed_at":1601251690000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Hey :)\r\n\r\nJust making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing.\r\n\r\nThe issue is that modules go into the global namespace, so you shouldn't use variable names that conflict with module names. This means the package makes `nlp` a bad variable name everywhere in the codebase. I've always used `nlp` as the canonical variable name of spaCy's `Language` objects, and this is a convention that a lot of other code has followed (Stanza, flair, etc). 
And actually, your `transformers` library uses `nlp` as the name for its `Pipeline` instance in your readme.\r\n\r\nIf you stick with the `nlp` name for this package, if anyone uses it then they should rewrite all of that code. If `nlp` is a bad choice of variable anywhere, it's a bad choice of variable everywhere --- because you shouldn't have to notice whether some other function uses a module when you're naming variables within a function. You want to have one convention that you can stick to everywhere.\r\n\r\nIf people use your `nlp` package and continue to use the `nlp` variable name, they'll find themselves with confusing bugs. There will be many many bits of code cut-and-paste from tutorials that give confusing results when combined with the data loading from the `nlp` library. The problem will be especially bad for shadowed modules (people might reasonably have a module named `nlp.py` within their codebase) and notebooks, as people might run notebook cells for data loading out-of-order.\r\n\r\nI don't think it's an exaggeration to say that if your library becomes popular, we'll all be answering issues around this about once a week for the next few years. That seems pretty unideal, so I do hope you'll reconsider.\r\n\r\nI suggest `nld` as a better name. It more accurately represents what the package actually does. It's pretty unideal to have a package named `nlp` that doesn't do any processing, and contains data about natural language generation or other non-NLP tasks. The name is equally short, and is sort of a visual pun on `nlp`, since a d is a rotated p.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/138\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/137","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/137\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/137\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/137\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/137","id":619214645,"node_id":"MDU6SXNzdWU2MTkyMTQ2NDU=","number":137,"title":"Tokenized BLEU considered harmful - Discussion on community-based process","user":{"login":"kpu","id":247512,"node_id":"MDQ6VXNlcjI0NzUxMg==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/247512?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/kpu","html_url":"https:\/\/github.com\/kpu","followers_url":"https:\/\/api.github.com\/users\/kpu\/followers","following_url":"https:\/\/api.github.com\/users\/kpu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/kpu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/kpu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/kpu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/kpu\/orgs","repos_url":"https:\/\/api.github.com\/users\/kpu\/repos","events_url":"https:\/\/api.github.com\/users\/kpu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/kpu\/received_events","type":"User","site_admin":false},"labels":[{"id":2067400324,"node_id":"MDU6TGFiZWwyMDY3NDAwMzI0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/generic%20discussion","name":"generic discussion","color":"c5def5","default":false,"description":"Generic 
discussion on the library"},{"id":2067400959,"node_id":"MDU6TGFiZWwyMDY3NDAwOTU5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/Metric%20discussion","name":"Metric discussion","color":"d722e8","default":false,"description":"Discussions on the metrics"}],"state":"open","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I second this request. The bottom line is that **scores produced with different reference tokenizations are not comparable**. To discourage (even inadvertent) cheating, the user should never touch the reference. The `v13a` tokenization standard is not ideal, but at least it has been consistently used at matrix.statmt.org, facilitating comparisons.\r\n\r\nSacrebleu exposes [all its data sources](https:\/\/github.com\/mjpost\/sacrebleu\/blob\/master\/sacrebleu\/dataset.py) and additionally provides [an API](https:\/\/github.com\/mjpost\/sacrebleu\/blob\/master\/sacrebleu\/__init__.py) to accessing the references, which seem to fit within the spirit of your codebase.","Didn't we have a slide and discussion at WMT admitting that, for production-quality models, BLEU doesn't correlate with human eval anyway?\r\n","Yes, there are slides like that at WMT every year :) BLEU correlates with human judgment only at coarse levels, and it seems to be getting worse when people try to use it to do model selection among high-performing neural systems.\r\n\r\nHowever, the point isn't whether BLEU is a good metric, but whether your BLEU score can be compared to other BLEU scores. They only can be compared if you use the same reference tokenization (similar to how you [can't compare LM perplexities across different segmentations](https:\/\/sjmielke.com\/comparing-perplexities.htm)). sacrebleu was an attempt to get everyone to use WMT's reference tokenization (meaning, your system has to first remove its own tokenization) so that you could just compare across papers. This also prevents scores from being gamed.","I do not consider as a sufficient solution switching this library's default metric from BLEU to the wrapper around SacreBLEU. \r\n\r\nAs currently implemented, the wrapper allows end users to toggle SacreBLEU options, but doesn't pass along the SacreBLEU signature. As @mjpost showed in [Post18](https:\/\/www.aclweb.org\/anthology\/W18-6319.pdf), it's simply not credible to assume that people will stick to the defaults, therefore, the signature is necessary to be explicit about what options were used. \r\n\r\nIn addition to the `v13a` or `intl` options for the SacreBLEU `tokenize` argument, which was pointed out earlier, papers frequently differ on whether they lowercase text before scoring (`lowercase`) and the smoothing method used (`smooth_method`). BLEU scores can differ substantially (over 1 BLEU) just by changing these options. \r\n\r\nLosing the SacreBLEU signature is a regression in reproducibility and clarity.\r\n\r\n(Perhaps this should belong in a separate issue?)","Thanks for sharing your thoughts. 
This is a very important discussion.\r\n\r\nAlso one of the first items on our mid-term roadmap (we will try to clean it and share it soon) is to introduce mechanisms to get high-quality traceability and reproducibility for all the processes related to the library.\r\n\r\nSo having the signature for `sacrebleu` is really important!\r\n\r\nRegarding BLEU, I guess we can just remove it from the canonical metrics included in the repo itself (it won't prevent people to add it as \"user-metrics\" but at least we won't be promoting it).\r\n\r\nOn a more general note (definitely too large for the scope of this issue) we are wondering, with @srush in particular, how we could handle the selection of metrics\/datasets with the most community-based and bottom-up approach possible. If you have opinions on this, please share!","Yeah, I would love to have discussions about ways this project can have an community-based, transparent process to arrive at strong default metrics. @kpu \/ @mjpost do you have any suggestions of how that might work or pointers to places where this is done right? Perhaps this question can be template for what is likely to be repeated for other datasets.","I think @bittlingmayer is referring to Figure 6 in http:\/\/statmt.org\/wmt19\/pdf\/53\/WMT02.pdf . When you look at Appendix A there are some cases where metrics fall apart at the high end and some where they correlate well. en-zh is arguably production-quality. \r\n\r\nThis could evolve into a metrics Bazaar where the value add is really the packaging and consistency: it installs\/compiles the metrics for me, gives a reproducible name to use in publication (involve the authors; you don't want a different sacrebleu hash system), a version number, and evaluation of the metrics like http:\/\/ufallab.ms.mff.cuni.cz\/~bojar\/wmt19-metrics-task-package.tgz but run when code changes rather than once a year. ","While a Bazaar setup works for models \/ datasets, I am not sure it is ideal for metrics ? Ideal from my perspective would be to have tasks with metrics moderated by experts who document, cite, and codify known pitchfalls (as above^) and make it non-trivial for beginners to mess it up. ","@srush @thomwolf \r\n\r\nModelFront could provide (automated, \"QE-based\") evaluation for all the pretrained translation models you host. Not bottom-up and not valid for claiming SoTA, but independent, practical for builders and not top-down.\r\n\r\nFor that I would also suggest some diverse benchmarks (so split it out into datasets with only user-generated data, or only constants, or only UI strings, or only READMEs) which tease out known trade-offs. Even hypothetical magic eval is limited if we always reduce it to a single number.\r\n\r\nRealistically people want to know how a model compares to an API like Google Translate, Microsoft Translator, DeepL or Yandex (especially for a language pair like EN:RU, or for the many languages that only Yandex supports), and that could be done too.\r\n","Very important discussion.\r\nI am trying to understand the effects of tokenization.\r\nI wanted to ask which is a good practice.\r\nSacrebleu should be used on top of the tokenized output, or detokenized(raw) text?","Use sacrebleu on detokenized output and raw unmodified references. 
"],"created_at":1589573314000,"updated_at":1610016088000,"closed_at":null,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"https:\/\/github.com\/huggingface\/nlp\/blob\/7d1526dfeeb29248d832f1073192dbf03ad642da\/metrics\/bleu\/bleu.py#L76 assumes the inputs are tokenized by the user. This is bad practice because the user's tokenizer is usually not the same as the one used by `mteval-v13a.pl`, the closest thing we have to a standard. Moreover, tokenizers are like window managers: they can be endlessly customized and nobody has quite the same options. \r\n\r\nAs @mjpost reported in https:\/\/www.aclweb.org\/anthology\/W18-6319.pdf BLEU configurations can vary by 1.8. Yet people are incorrectly putting non-comparable BLEU scores in the same table, such as Table 1 in https:\/\/arxiv.org\/abs\/2004.04902 . \r\n\r\nThere are a few use cases for tokenized BLEU like Thai. For Chinese, people seem to use character BLEU for better or worse.\r\n\r\nThe default easy option should be the one that's correct more often. And that is sacrebleu. Please don't make it easy for people to run what is usually the wrong option; it definitely shouldn't be `bleu`. \r\n\r\nAlso, I know this is inherited from TensorFlow and, paging @lmthang, they should discourage it too. ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/137\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/136","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/136\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/136\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/136\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/136","id":619211018,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE4NzgxNzI4","number":136,"title":"Update README.md","user":{"login":"renaud","id":75369,"node_id":"MDQ6VXNlcjc1MzY5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/75369?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/renaud","html_url":"https:\/\/github.com\/renaud","followers_url":"https:\/\/api.github.com\/users\/renaud\/followers","following_url":"https:\/\/api.github.com\/users\/renaud\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/renaud\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/renaud\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/renaud\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/renaud\/orgs","repos_url":"https:\/\/api.github.com\/users\/renaud\/repos","events_url":"https:\/\/api.github.com\/users\/renaud\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/renaud\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Thanks, this was fixed with #135 
:)"],"created_at":1589572867000,"updated_at":1589717848000,"closed_at":1589717848000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/136","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/136","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/136.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/136.patch"},"body":"small typo","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/136\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/135","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/135\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/135\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/135\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/135","id":619206708,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE4Nzc4MTMw","number":135,"title":"Fix print statement in READ.md","user":{"login":"codehunk628","id":51091425,"node_id":"MDQ6VXNlcjUxMDkxNDI1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/51091425?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/codehunk628","html_url":"https:\/\/github.com\/codehunk628","followers_url":"https:\/\/api.github.com\/users\/codehunk628\/followers","following_url":"https:\/\/api.github.com\/users\/codehunk628\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/codehunk628\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/codehunk628\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/codehunk628\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/codehunk628\/orgs","repos_url":"https:\/\/api.github.com\/users\/codehunk628\/repos","events_url":"https:\/\/api.github.com\/users\/codehunk628\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/codehunk628\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Indeed, thanks!"],"created_at":1589572343000,"updated_at":1589717646000,"closed_at":1589717645000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/135","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/135","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/135.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/135.patch"},"body":"print statement was throwing generator object instead of printing names of available datasets\/metrics","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/135\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/134","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/134\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/134\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/134\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/134","id":619112641,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE4Njk5OTYz","number":134,"title":"Update README.md","user":{"login":"pranv","id":8753078,"node_id":"MDQ6VXNlcjg3NTMwNzg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/8753078?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/pranv","html_url":"https:\/\/github.com\/pranv","followers_url":"https:\/\/api.github.com\/users\/pranv\/followers","following_url":"https:\/\/api.github.com\/users\/pranv\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/pranv\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/pranv\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/pranv\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/pranv\/orgs","repos_url":"https:\/\/api.github.com\/users\/pranv\/repos","events_url":"https:\/\/api.github.com\/users\/pranv\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/pranv\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["the readme got removed, closing this one"],"created_at":1589561774000,"updated_at":1590654109000,"closed_at":1590654109000,"author_association":"NONE","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/134","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/134","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/134.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/134.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/134\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/133","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/133\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/133\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/133\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/133","id":619094954,"node_id":"MDU6SXNzdWU2MTkwOTQ5NTQ=","number":133,"title":"[Question] Using\/adding a local 
dataset","user":{"login":"zphang","id":1668462,"node_id":"MDQ6VXNlcjE2Njg0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1668462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/zphang","html_url":"https:\/\/github.com\/zphang","followers_url":"https:\/\/api.github.com\/users\/zphang\/followers","following_url":"https:\/\/api.github.com\/users\/zphang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/zphang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/zphang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/zphang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/zphang\/orgs","repos_url":"https:\/\/api.github.com\/users\/zphang\/repos","events_url":"https:\/\/api.github.com\/users\/zphang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/zphang\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi @zphang,\r\n\r\nSo you can just give the local path to a dataset script file and it should work.\r\n\r\nHere is an example:\r\n- you can download one of the scripts in the `datasets` folder of the present repo (or clone the repo)\r\n- then you can load it with `load_dataset('PATH\/TO\/YOUR\/LOCAL\/SCRIPT.py')`\r\n\r\nDoes it make sense?","Could you give a more concrete example, please? \r\n\r\nI looked up wikitext dataset script from the repo. Should I just overwrite the `data_file` on line 98 to point to the local dataset directory? Would it work for different configurations of wikitext (wikitext2, wikitext103 etc.)?\r\n\r\nOr maybe we can use DownloadManager to specify local dataset location? In that case, where do we use DownloadManager instance?\r\n\r\nThanks","Hi @MaveriQ , although what I am doing is to commit a new dataset, but I think looking at imdb script might help.\r\nYou may want to use `dl_manager.download_custom`, give it a url(arbitrary string), a custom_download(arbitrary function) and return a path, and finally use _get sample to fetch a sample.","The download manager supports local directories. 
You can specify a local directory instead of a url and it should work.","Closing this one.\r\nFeel free to re-open if you have other questions :)"],"created_at":1589559966000,"updated_at":1595522649000,"closed_at":1595522649000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Users may want to either create\/modify a local copy of a dataset, or use a custom-built dataset with the same `Dataset` API as externally downloaded datasets.\r\n\r\nIt appears to be possible to point to a local dataset path rather than downloading the external ones, but I'm not exactly sure how to go about doing this.\r\n\r\nA notebook\/example script demonstrating this would be very helpful.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/133\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/132","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/132\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/132\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/132\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/132","id":619077851,"node_id":"MDU6SXNzdWU2MTkwNzc4NTE=","number":132,"title":"[Feature Request] Add the OpenWebText dataset","user":{"login":"LysandreJik","id":30755778,"node_id":"MDQ6VXNlcjMwNzU1Nzc4","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/30755778?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/LysandreJik","html_url":"https:\/\/github.com\/LysandreJik","followers_url":"https:\/\/api.github.com\/users\/LysandreJik\/followers","following_url":"https:\/\/api.github.com\/users\/LysandreJik\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/LysandreJik\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/LysandreJik\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/LysandreJik\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/LysandreJik\/orgs","repos_url":"https:\/\/api.github.com\/users\/LysandreJik\/repos","events_url":"https:\/\/api.github.com\/users\/LysandreJik\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/LysandreJik\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["We're experimenting with hosting the OpenWebText corpus on Zenodo for easier downloading. https:\/\/zenodo.org\/record\/3834942#.Xs1w8i-z2J8","Closing since it's been added in #660 "],"created_at":1589558249000,"updated_at":1602080568000,"closed_at":1602080568000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"The OpenWebText dataset is an open clone of OpenAI's WebText dataset. 
It can be used to train ELECTRA as is specified in the [README](https:\/\/www.github.com\/google-research\/electra).\r\n\r\nMore information and the download link are available [here](https:\/\/skylion007.github.io\/OpenWebTextCorpus\/).","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/132\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/131","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/131\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/131\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/131\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/131","id":619073731,"node_id":"MDU6SXNzdWU2MTkwNzM3MzE=","number":131,"title":"[Feature request] Add Toronto BookCorpus dataset","user":{"login":"jarednielsen","id":4564897,"node_id":"MDQ6VXNlcjQ1NjQ4OTc=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/4564897?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jarednielsen","html_url":"https:\/\/github.com\/jarednielsen","followers_url":"https:\/\/api.github.com\/users\/jarednielsen\/followers","following_url":"https:\/\/api.github.com\/users\/jarednielsen\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jarednielsen\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jarednielsen\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jarednielsen\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jarednielsen\/orgs","repos_url":"https:\/\/api.github.com\/users\/jarednielsen\/repos","events_url":"https:\/\/api.github.com\/users\/jarednielsen\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jarednielsen\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["As far as I understand, `wikitext` is refer to `WikiText-103` and `WikiText-2` that created by researchers in Salesforce, and mostly used in traditional language modeling.\r\n\r\nYou might want to say `wikipedia`, a dump from wikimedia foundation.\r\n\r\nAlso I would like to have Toronto BookCorpus too ! Though it involves copyright problem...","Hi, @lhoestq, just a reminder that this is solved by #248 .\ud83d\ude09 "],"created_at":1589557844000,"updated_at":1593379651000,"closed_at":1593379651000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":null,"body":"I know the copyright\/distribution of this one is complex, but it would be great to have! 
That, combined with the existing `wikitext`, would provide a complete dataset for pretraining models like BERT.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/131\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/130","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/130\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/130\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/130\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/130","id":619035440,"node_id":"MDU6SXNzdWU2MTkwMzU0NDA=","number":130,"title":"Loading GLUE dataset loads CoLA by default","user":{"login":"zphang","id":1668462,"node_id":"MDQ6VXNlcjE2Njg0NjI=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1668462?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/zphang","html_url":"https:\/\/github.com\/zphang","followers_url":"https:\/\/api.github.com\/users\/zphang\/followers","following_url":"https:\/\/api.github.com\/users\/zphang\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/zphang\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/zphang\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/zphang\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/zphang\/orgs","repos_url":"https:\/\/api.github.com\/users\/zphang\/repos","events_url":"https:\/\/api.github.com\/users\/zphang\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/zphang\/received_events","type":"User","site_admin":false},"labels":[{"id":2067388877,"node_id":"MDU6TGFiZWwyMDY3Mzg4ODc3","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20bug","name":"dataset bug","color":"2edb81","default":false,"description":"A bug in a dataset script provided in the library"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["As a follow-up to this: It looks like the actual GLUE task name is supplied as the `name` argument. Is there a way to check what `name`s\/sub-datasets are available under a grouping like GLUE? 
That information doesn't seem to be readily available in info from `nlp.list_datasets()`.\r\n\r\nEdit: I found the info under `Glue.BUILDER_CONFIGS`","Yes so the first config is loaded by default when no `name` is supplied but for GLUE this should probably throw an error indeed.\r\n\r\nWe can probably just add an `__init__` at the top of the `class Glue(nlp.GeneratorBasedBuilder)` in the `glue.py` script which does this check:\r\n```\r\nclass Glue(nlp.GeneratorBasedBuilder):\r\n def __init__(self, *args, **kwargs):\r\n assert 'name' in kwargs and kwargs[name] is not None, \"Glue has to be called with a configuration name\"\r\n super(Glue, self).__init__(*args, **kwargs)\r\n```","An error is raised if the sub-dataset is not specified :)\r\n```\r\nValueError: Config name is missing.\r\nPlease pick one among the available configs: ['cola', 'sst2', 'mrpc', 'qqp', 'stsb', 'mnli', 'mnli_mismatched', 'mnli_matched', 'qnli', 'rte', 'wnli', 'ax']\r\nExample of usage:\r\n\t`load_dataset('glue', 'cola')`\r\n```"],"created_at":1589554550000,"updated_at":1590617295000,"closed_at":1590617295000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"If I run:\r\n\r\n```python\r\ndataset = nlp.load_dataset('glue')\r\n```\r\nThe resultant dataset seems to be CoLA be default, without throwing any error. This is in contrast to calling:\r\n\r\n```python\r\nmetric = nlp.load_metric(\"glue\")\r\n```\r\nwhich throws an error telling the user that they need to specify a task in GLUE. Should the same apply for loading datasets?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/130\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/129","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/129\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/129\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/129\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/129","id":618997725,"node_id":"MDU6SXNzdWU2MTg5OTc3MjU=","number":129,"title":"[Feature request] Add Google Natural Question dataset","user":{"login":"elyase","id":1175888,"node_id":"MDQ6VXNlcjExNzU4ODg=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/1175888?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/elyase","html_url":"https:\/\/github.com\/elyase","followers_url":"https:\/\/api.github.com\/users\/elyase\/followers","following_url":"https:\/\/api.github.com\/users\/elyase\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/elyase\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/elyase\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/elyase\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/elyase\/orgs","repos_url":"https:\/\/api.github.com\/users\/elyase\/repos","events_url":"https:\/\/api.github.com\/users\/elyase\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/elyase\/received_events","type":"User","site_admin":false},"labels":[{"id":2067376369,"node_id":"MDU6TGFiZWwyMDY3Mzc2MzY5","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/dataset%20request","name":"dataset request","color":"e99695","default":false,"description":"Requesting to add a new 
dataset"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Indeed, I think this one is almost ready cc @lhoestq ","I'm doing the latest adjustments to make the processing of the dataset run on Dataflow","Is there an update to this? It will be very beneficial for the QA community!","Still work in progress :)\r\nThe idea is to have the dataset already processed somewhere so that the user only have to download the processed files. I'm also doing it for wikipedia.","Super appreciate your hard work !!\r\nI'll cross my fingers and hope easily loadable wikipedia dataset will come soon. ","Quick update on NQ: due to some limitations I met using apache beam + parquet I was not able to use the dataset in a nested parquet structure in python to convert it to our Apache Arrow format yet.\r\nHowever we had planned to change this conversion step anyways so we'll make just sure that it enables to process and convert the NQ dataset to arrow.","NQ was added in #427 \ud83c\udf89"],"created_at":1589552060000,"updated_at":1595510489000,"closed_at":1595510489000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"Would be great to have https:\/\/github.com\/google-research-datasets\/natural-questions as an alternative to SQuAD.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/129\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/128","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/128\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/128\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/128\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/128","id":618951117,"node_id":"MDU6SXNzdWU2MTg5NTExMTc=","number":128,"title":"Some error inside nlp.load_dataset()","user":{"login":"polkaYK","id":18486287,"node_id":"MDQ6VXNlcjE4NDg2Mjg3","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/18486287?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/polkaYK","html_url":"https:\/\/github.com\/polkaYK","followers_url":"https:\/\/api.github.com\/users\/polkaYK\/followers","following_url":"https:\/\/api.github.com\/users\/polkaYK\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/polkaYK\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/polkaYK\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/polkaYK\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/polkaYK\/orgs","repos_url":"https:\/\/api.github.com\/users\/polkaYK\/repos","events_url":"https:\/\/api.github.com\/users\/polkaYK\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/polkaYK\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Google colab has an old version of Apache Arrow built-in.\r\nBe sure you execute the \"pip install\" cell and restart the notebook environment if the colab asks for it.","Thanks for reply, worked fine!\r\n"],"created_at":1589547689000,"updated_at":1589548240000,"closed_at":1589548240000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"First of all, nice work!\r\n\r\nI am going 
through [this overview notebook](https:\/\/colab.research.google.com\/github\/huggingface\/nlp\/blob\/master\/notebooks\/Overview.ipynb)\r\n\r\nIn simple step `dataset = nlp.load_dataset('squad', split='validation[:10%]')`\r\n\r\nI get an error, which is connected with some inner code, I think:\r\n```\r\n---------------------------------------------------------------------------\r\n\r\nTypeError Traceback (most recent call last)\r\n\r\n<ipython-input-8-d848d3a99b8c> in <module>()\r\n 1 # Downloading and loading a dataset\r\n 2 \r\n----> 3 dataset = nlp.load_dataset('squad', split='validation[:10%]')\r\n\r\n8 frames\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 515 download_mode=download_mode,\r\n 516 ignore_verifications=ignore_verifications,\r\n--> 517 save_infos=save_infos,\r\n 518 )\r\n 519 \r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs)\r\n 361 verify_infos = not save_infos and not ignore_verifications\r\n 362 self._download_and_prepare(\r\n--> 363 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 364 )\r\n 365 # Sync info\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 414 try:\r\n 415 # Prepare split will record examples associated to the split\r\n--> 416 self._prepare_split(split_generator, **prepare_split_kwargs)\r\n 417 except OSError:\r\n 418 raise OSError(\"Cannot find data file. 
\" + (self.MANUAL_DOWNLOAD_INSTRUCTIONS or \"\"))\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/builder.py in _prepare_split(self, split_generator)\r\n 585 fname = \"{}-{}.arrow\".format(self.name, split_generator.name)\r\n 586 fpath = os.path.join(self._cache_dir, fname)\r\n--> 587 examples_type = self.info.features.type\r\n 588 writer = ArrowWriter(data_type=examples_type, path=fpath, writer_batch_size=self._writer_batch_size)\r\n 589 \r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/features.py in type(self)\r\n 460 @property\r\n 461 def type(self):\r\n--> 462 return get_nested_type(self)\r\n 463 \r\n 464 @classmethod\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/features.py in get_nested_type(schema)\r\n 370 # Nested structures: we allow dict, list\/tuples, sequences\r\n 371 if isinstance(schema, dict):\r\n--> 372 return pa.struct({key: get_nested_type(value) for key, value in schema.items()})\r\n 373 elif isinstance(schema, (list, tuple)):\r\n 374 assert len(schema) == 1, \"We defining list feature, you should just provide one example of the inner type\"\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/features.py in <dictcomp>(.0)\r\n 370 # Nested structures: we allow dict, list\/tuples, sequences\r\n 371 if isinstance(schema, dict):\r\n--> 372 return pa.struct({key: get_nested_type(value) for key, value in schema.items()})\r\n 373 elif isinstance(schema, (list, tuple)):\r\n 374 assert len(schema) == 1, \"We defining list feature, you should just provide one example of the inner type\"\r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/features.py in get_nested_type(schema)\r\n 379 # We allow to reverse list of dict => dict of list for compatiblity with tfds\r\n 380 if isinstance(inner_type, pa.StructType):\r\n--> 381 return pa.struct(dict((f.name, pa.list_(f.type, schema.length)) for f in inner_type))\r\n 382 return pa.list_(inner_type, schema.length)\r\n 383 \r\n\r\n\/usr\/local\/lib\/python3.6\/dist-packages\/nlp\/features.py in <genexpr>(.0)\r\n 379 # We allow to reverse list of dict => dict of list for compatiblity with tfds\r\n 380 if isinstance(inner_type, pa.StructType):\r\n--> 381 return pa.struct(dict((f.name, pa.list_(f.type, schema.length)) for f in inner_type))\r\n 382 return pa.list_(inner_type, schema.length)\r\n 383 \r\n\r\nTypeError: list_() takes exactly one argument (2 given)\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/128\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/127","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/127\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/127\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/127\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/127","id":618909042,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE4NTQ1MDcy","number":127,"title":"Update 
Overview.ipynb","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589543208000,"updated_at":1589543247000,"closed_at":1589543245000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/127","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/127","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/127.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/127.patch"},"body":"update notebook","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/127\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/126","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/126\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/126\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/126\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/126","id":618897499,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE4NTM1Mzc5","number":126,"title":"remove 
webis","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589541920000,"updated_at":1589542284000,"closed_at":1589542226000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/126","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/126","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/126.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/126.patch"},"body":"Remove webis from dataset folder.\r\n\r\nOur first dataset script that only lives on AWS :-) https:\/\/s3.console.aws.amazon.com\/s3\/buckets\/datasets.huggingface.co\/nlp\/datasets\/webis\/tl_dr\/?region=us-east-1 @julien-c @jplu ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/126\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/125","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/125\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/125\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/125\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/125","id":618869048,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE4NTExNDE0","number":125,"title":"[Newsroom] add 
newsroom","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589538874000,"updated_at":1589539027000,"closed_at":1589539022000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/125","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/125","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/125.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/125.patch"},"body":"I checked it with the data link of the mail you forwarded @thomwolf => works well!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/125\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/124","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/124\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/124\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/124\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/124","id":618864284,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE4NTA3NDUx","number":124,"title":"Xsum, require manual download of some 
files","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589538373000,"updated_at":1589540688000,"closed_at":1589540686000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/124","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/124","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/124.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/124.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/124\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/123","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/123\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/123\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/123\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/123","id":618820140,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE4NDcxODU5","number":123,"title":"[Tests] Local => aws","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["For each 
dataset, If there exist a `dataset_info.json`, then the command `nlp-cli test path\/to\/my\/dataset --al_configs` is successful only if the `dataset_infos.json` is correct. The infos are correct if the size and checksums of the downloaded file are correct, and if the number of examples in each split are correct.\r\n\r\nNote: the `test` command is supposed to test the script, that's why it runs the script even if the cached files already exist. Let me know if it's good to you.","> For each dataset, If there exist a `dataset_info.json`, then the command `nlp-cli test path\/to\/my\/dataset --al_configs` is successful only if the `dataset_infos.json` is correct. The infos are correct if the size and checksums of the downloaded file are correct, and if the number of examples in each split are correct.\r\n> \r\n> Note: the `test` command is supposed to test the script, that's why it runs the script even if the cached files already exist. Let me know if it's good to you.\r\n\r\nDoes it have to download the whole data to check if the checksums are correct? I guess so no? ","> > For each dataset, If there exist a `dataset_info.json`, then the command `nlp-cli test path\/to\/my\/dataset --al_configs` is successful only if the `dataset_infos.json` is correct. The infos are correct if the size and checksums of the downloaded file are correct, and if the number of examples in each split are correct.\r\n> > Note: the `test` command is supposed to test the script, that's why it runs the script even if the cached files already exist. Let me know if it's good to you.\r\n> \r\n> Does it have to download the whole data to check if the checksums are correct? I guess so no?\r\n\r\nYes it has to download them all (unless they were already downloaded in which case it just uses the cached downloaded files)."],"created_at":1589533945000,"updated_at":1589537172000,"closed_at":1589537006000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/123","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/123","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/123.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/123.patch"},"body":"## Change default Test from local => aws\r\n\r\nAs a default we set` aws=True`, `Local=False`, `slow=False`\r\n\r\n### 1. RUN_AWS=1 (default)\r\nThis runs 4 tests per dataset script.\r\n\r\na) Does the dataset script have a valid etag \/ Can it be reached on AWS? \r\nb) Can we load its `builder_class`?\r\nc) Can we load **all** dataset configs?\r\nd) _Most importantly_: Can we load the dataset? \r\n\r\nImportant - we currently only test the first config of each dataset to reduce test time. Total test time is around 1min20s.\r\n\r\n### 2. RUN_LOCAL=1 RUN_AWS=0\r\n\r\n***This should be done when debugging dataset scripts of the .\/datasets folder***\r\n\r\nThis only runs 1 test per dataset test, which is equivalent to aws d) - Can we load the dataset from the local `datasets` directory?\r\n\r\n### 3. RUN_SLOW=1\r\n\r\nWe should set up to run these tests maybe 1 time per week ? @thomwolf \r\n\r\nThe `slow` tests include two more important tests. \r\n\r\ne) Can we load the dataset with all possible configs? This test will probably fail at the moment because a lot of dummy data is missing. We should add the dummy data step by step to be sure that all configs work.\r\n\r\nf) Test that the actual dataset can be loaded. 
This will take quite some time to run, but is important to make sure that the \"real\" data can be loaded. It will also test whether the dataset script has the correct checksums file which is currently not tested with `aws=True`. @lhoestq - is there an easy way to check cheaply whether the `dataset_info.json` is correct for each dataset script? ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/123\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/122","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/122\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/122\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/122\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/122","id":618813182,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE4NDY2Mzc3","number":122,"title":"Final cleanup of readme and metrics","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589533252000,"updated_at":1630698009000,"closed_at":1589533342000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/122","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/122","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/122.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/122.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/122\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/121","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/121\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/121\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/121\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/121","id":618790040,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE4NDQ4MTkx","number":121,"title":"make 
style","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589531016000,"updated_at":1589531139000,"closed_at":1589531138000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/121","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/121","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/121.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/121.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/121\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/120","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/120\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/120\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/120\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/120","id":618737783,"node_id":"MDU6SXNzdWU2MTg3Mzc3ODM=","number":120,"title":"\ud83d\udc1b `map` not working","user":{"login":"astariul","id":43774355,"node_id":"MDQ6VXNlcjQzNzc0MzU1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/43774355?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/astariul","html_url":"https:\/\/github.com\/astariul","followers_url":"https:\/\/api.github.com\/users\/astariul\/followers","following_url":"https:\/\/api.github.com\/users\/astariul\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/astariul\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/astariul\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/astariul\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/astariul\/orgs","repos_url":"https:\/\/api.github.com\/users\/astariul\/repos","events_url":"https:\/\/api.github.com\/users\/astariul\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/astariul\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I didn't assign the output 
\ud83e\udd26\u200d\u2642\ufe0f\r\n\r\n```python\r\ndataset.map(test)\r\n```\r\n\r\nshould be :\r\n\r\n```python\r\ndataset = dataset.map(test)\r\n```"],"created_at":1589524988000,"updated_at":1589526158000,"closed_at":1589526158000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I'm trying to run a basic example (mapping function to add a prefix). \r\n[Here is the colab notebook I'm using.](https:\/\/colab.research.google.com\/drive\/1YH4JCAy0R1MMSc-k_Vlik_s1LEzP_t1h?usp=sharing)\r\n\r\n```python\r\nimport nlp\r\n\r\ndataset = nlp.load_dataset('squad', split='validation[:10%]')\r\n\r\ndef test(sample):\r\n sample['title'] = \"test prefix @@@ \" + sample[\"title\"]\r\n return sample\r\n\r\nprint(dataset[0]['title'])\r\ndataset.map(test)\r\nprint(dataset[0]['title'])\r\n```\r\nOutput :\r\n> Super_Bowl_50\r\nSuper_Bowl_50\r\n\r\nExpected output :\r\n> Super_Bowl_50\r\ntest prefix @@@ Super_Bowl_50","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/120\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/119","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/119\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/119\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/119\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/119","id":618652145,"node_id":"MDU6SXNzdWU2MTg2NTIxNDU=","number":119,"title":"\ud83d\udc1b Colab : type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'","user":{"login":"astariul","id":43774355,"node_id":"MDQ6VXNlcjQzNzc0MzU1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/43774355?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/astariul","html_url":"https:\/\/github.com\/astariul","followers_url":"https:\/\/api.github.com\/users\/astariul\/followers","following_url":"https:\/\/api.github.com\/users\/astariul\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/astariul\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/astariul\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/astariul\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/astariul\/orgs","repos_url":"https:\/\/api.github.com\/users\/astariul\/repos","events_url":"https:\/\/api.github.com\/users\/astariul\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/astariul\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["It's strange, after installing `nlp` on Colab, the `pyarrow` version seems fine from `pip` but not from python :\r\n\r\n```python\r\nimport pyarrow\r\n\r\n!pip show pyarrow\r\nprint(\"version = {}\".format(pyarrow.__version__))\r\n```\r\n\r\n> Name: pyarrow\r\nVersion: 0.17.0\r\nSummary: Python library for Apache Arrow\r\nHome-page: https:\/\/arrow.apache.org\/\r\nAuthor: None\r\nAuthor-email: None\r\nLicense: Apache License, Version 2.0\r\nLocation: \/usr\/local\/lib\/python3.6\/dist-packages\r\nRequires: numpy\r\nRequired-by: nlp, feather-format\r\n> \r\n> version = 0.14.1","Ok I just had to restart the runtime after installing `nlp`. 
After restarting, the version of `pyarrow` is fine."],"created_at":1589509646000,"updated_at":1589519482000,"closed_at":1589510728000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I'm trying to load CNN\/DM dataset on Colab.\r\n\r\n[Colab notebook](https:\/\/colab.research.google.com\/drive\/11Mf7iNhIyt6GpgA1dBEtg3cyMHmMhtZS?usp=sharing)\r\n\r\nBut I meet this error :\r\n\r\n> AttributeError: type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/119\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/118","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/118\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/118\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/118\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/118","id":618643088,"node_id":"MDU6SXNzdWU2MTg2NDMwODg=","number":118,"title":"\u2753 How to apply a map to all subsets ?","user":{"login":"astariul","id":43774355,"node_id":"MDQ6VXNlcjQzNzc0MzU1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/43774355?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/astariul","html_url":"https:\/\/github.com\/astariul","followers_url":"https:\/\/api.github.com\/users\/astariul\/followers","following_url":"https:\/\/api.github.com\/users\/astariul\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/astariul\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/astariul\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/astariul\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/astariul\/orgs","repos_url":"https:\/\/api.github.com\/users\/astariul\/repos","events_url":"https:\/\/api.github.com\/users\/astariul\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/astariul\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["That's the way!"],"created_at":1589507932000,"updated_at":1589526349000,"closed_at":1589526265000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I'm working with CNN\/DM dataset, where I have 3 subsets : `train`, `test`, `validation`.\r\n\r\nShould I apply my map function on the subsets one by one ?\r\n\r\n```python\r\nimport nlp\r\n\r\ncnn_dm = nlp.load_dataset('cnn_dailymail')\r\nfor corpus in ['train', 'test', 'validation']:\r\n cnn_dm[corpus] = cnn_dm[corpus].map(my_func)\r\n```\r\n\r\nOr is there a better way to do this ?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/118\/timeline","performed_via_github_app":null,"is_pull_request":false} 
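Editor's note, inserted between records: issue 118 in the record above confirms the loop-over-splits pattern for applying `map` to every subset, and issue 120 below shows the related pitfall of forgetting to assign the result back. The following is a minimal sketch of that pattern, not part of the original issue thread; it assumes the legacy `nlp` package (the predecessor of `datasets`) is installed, uses the `"3.0.0"` config of `cnn_dailymail`, and the `add_prefix` function is purely illustrative.

```python
import nlp

# Loading without a split returns a plain dict of splits:
# {"train": Dataset, "validation": Dataset, "test": Dataset}
cnn_dm = nlp.load_dataset("cnn_dailymail", "3.0.0")

def add_prefix(example):
    # Illustrative per-example transformation: prepend a marker to each article.
    example["article"] = "summarize: " + example["article"]
    return example

# map() returns a new dataset rather than modifying in place,
# so the result must be assigned back for each split.
for split in cnn_dm:
    cnn_dm[split] = cnn_dm[split].map(add_prefix)
```

The explicit reassignment is the key step: as the comment in issue 120 notes, calling `dataset.map(fn)` without assigning the return value leaves the original dataset unchanged.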
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/117","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/117\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/117\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/117\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/117","id":618632573,"node_id":"MDU6SXNzdWU2MTg2MzI1NzM=","number":117,"title":"\u2753 How to remove specific rows of a dataset ?","user":{"login":"astariul","id":43774355,"node_id":"MDQ6VXNlcjQzNzc0MzU1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/43774355?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/astariul","html_url":"https:\/\/github.com\/astariul","followers_url":"https:\/\/api.github.com\/users\/astariul\/followers","following_url":"https:\/\/api.github.com\/users\/astariul\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/astariul\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/astariul\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/astariul\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/astariul\/orgs","repos_url":"https:\/\/api.github.com\/users\/astariul\/repos","events_url":"https:\/\/api.github.com\/users\/astariul\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/astariul\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Hi, you can't do that at the moment."],"created_at":1589505906000,"updated_at":1620964939000,"closed_at":1589526272000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I saw on the [example notebook](https:\/\/colab.research.google.com\/github\/huggingface\/nlp\/blob\/master\/notebooks\/Overview.ipynb#scrollTo=efFhDWhlvSVC) how to remove a specific column :\r\n\r\n```python\r\ndataset.drop('id')\r\n```\r\n\r\nBut I didn't find how to remove a specific row. 
\r\n\r\n**For example, how can I remove all sample with `id` < 10 ?**","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/117\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/116","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/116\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/116\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/116\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/116","id":618628264,"node_id":"MDU6SXNzdWU2MTg2MjgyNjQ=","number":116,"title":"\ud83d\udc1b Trying to use ROUGE metric : pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323","user":{"login":"astariul","id":43774355,"node_id":"MDQ6VXNlcjQzNzc0MzU1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/43774355?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/astariul","html_url":"https:\/\/github.com\/astariul","followers_url":"https:\/\/api.github.com\/users\/astariul\/followers","following_url":"https:\/\/api.github.com\/users\/astariul\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/astariul\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/astariul\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/astariul\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/astariul\/orgs","repos_url":"https:\/\/api.github.com\/users\/astariul\/repos","events_url":"https:\/\/api.github.com\/users\/astariul\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/astariul\/received_events","type":"User","site_admin":false},"labels":[{"id":2067393914,"node_id":"MDU6TGFiZWwyMDY3MzkzOTE0","url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/labels\/metric%20bug","name":"metric bug","color":"25b21e","default":false,"description":"A bug in a metric script"}],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Can you share your data files or a minimally reproducible example?","Sure, [here is a Colab notebook](https:\/\/colab.research.google.com\/drive\/1uiS89fnHMG7HV_cYxp3r-_LqJQvNNKs9?usp=sharing) reproducing the error.\r\n\r\n> ArrowInvalid: Column 1 named references expected length 36 but got length 56","This is because `add` takes as input a batch of elements and you provided only one. I think we should have `add` for one prediction\/reference and `add_batch` for a batch of predictions\/references. This would make it more coherent with the way we use Arrow.\r\n\r\nLet me do this change","Thanks for noticing though. I was mainly used to do `.compute` directly ^^","Thanks @lhoestq it works :)"],"created_at":1589505126000,"updated_at":1590709387000,"closed_at":1590709387000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I'm trying to use rouge metric.\r\n\r\nI have to files : `test.pred.tokenized` and `test.gold.tokenized` with each line containing a sentence. 
\r\nI tried :\r\n\r\n```python\r\nimport nlp\r\n\r\nrouge = nlp.load_metric('rouge')\r\nwith open(\"test.pred.tokenized\") as p, open(\"test.gold.tokenized\") as g:\r\n for lp, lg in zip(p, g):\r\n rouge.add(lp, lg)\r\n```\r\n\r\nBut I meet following error :\r\n\r\n> pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323\r\n\r\n---\r\n\r\nFull stack-trace :\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 3, in <module>\r\n File \"\/home\/me\/.venv\/transformers\/lib\/python3.6\/site-packages\/nlp\/metric.py\", line 224, in add\r\n self.writer.write_batch(batch)\r\n File \"\/home\/me\/.venv\/transformers\/lib\/python3.6\/site-packages\/nlp\/arrow_writer.py\", line 148, in write_batch\r\n pa_table: pa.Table = pa.Table.from_pydict(batch_examples, schema=self._schema)\r\n File \"pyarrow\/table.pxi\", line 1550, in pyarrow.lib.Table.from_pydict\r\n File \"pyarrow\/table.pxi\", line 1503, in pyarrow.lib.Table.from_arrays\r\n File \"pyarrow\/public-api.pxi\", line 390, in pyarrow.lib.pyarrow_wrap_table\r\n File \"pyarrow\/error.pxi\", line 85, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323\r\n```\r\n\r\n(`nlp` installed from source)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/116\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/115","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/115\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/115\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/115\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/115","id":618615855,"node_id":"MDU6SXNzdWU2MTg2MTU4NTU=","number":115,"title":"AttributeError: 'dict' object has no attribute 
'info'","user":{"login":"astariul","id":43774355,"node_id":"MDQ6VXNlcjQzNzc0MzU1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/43774355?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/astariul","html_url":"https:\/\/github.com\/astariul","followers_url":"https:\/\/api.github.com\/users\/astariul\/followers","following_url":"https:\/\/api.github.com\/users\/astariul\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/astariul\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/astariul\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/astariul\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/astariul\/orgs","repos_url":"https:\/\/api.github.com\/users\/astariul\/repos","events_url":"https:\/\/api.github.com\/users\/astariul\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/astariul\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"assignees":[{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["I could access the info by first accessing the different splits :\r\n\r\n```python\r\nimport nlp\r\n\r\ncnn_dm = nlp.load_dataset('cnn_dailymail')\r\nprint(cnn_dm['train'].info)\r\n```\r\n\r\nInformation seems to be duplicated between the subsets :\r\n\r\n```python\r\nprint(cnn_dm[\"train\"].info == cnn_dm[\"test\"].info == 
cnn_dm[\"validation\"].info)\r\n# True\r\n```\r\n\r\nIs it expected ?","Good point @Colanim ! What happens under the hood when running:\r\n\r\n```python\r\nimport nlp\r\n\r\ncnn_dm = nlp.load_dataset('cnn_dailymail')\r\n```\r\n\r\nis that for every split in `cnn_dailymail`, a different dataset object (which all holds the same info) is created. This has the advantages that the datasets are easily separable in a training setup. \r\nAlso note that you can load e.g. only the `train` split of the dataset via:\r\n\r\n```python\r\ncnn_dm_train = nlp.load_dataset('cnn_dailymail', split=\"train\")\r\nprint(cnn_dm_train.info)\r\n```\r\n\r\nI think we should make the `info` object slightly different when creating the dataset for each split - at the moment it contains for example the variable `splits` which should maybe be renamed to `split` and contain only one `SplitInfo` object ...\r\n"],"created_at":1589502587000,"updated_at":1589721060000,"closed_at":1589721060000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I'm trying to access the information of CNN\/DM dataset :\r\n\r\n```python\r\ncnn_dm = nlp.load_dataset('cnn_dailymail')\r\nprint(cnn_dm.info)\r\n```\r\n\r\nreturns :\r\n\r\n> AttributeError: 'dict' object has no attribute 'info'","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/115\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/114","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/114\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/114\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/114\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/114","id":618611310,"node_id":"MDU6SXNzdWU2MTg2MTEzMTA=","number":114,"title":"Couldn't reach CNN\/DM dataset","user":{"login":"astariul","id":43774355,"node_id":"MDQ6VXNlcjQzNzc0MzU1","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/43774355?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/astariul","html_url":"https:\/\/github.com\/astariul","followers_url":"https:\/\/api.github.com\/users\/astariul\/followers","following_url":"https:\/\/api.github.com\/users\/astariul\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/astariul\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/astariul\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/astariul\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/astariul\/orgs","repos_url":"https:\/\/api.github.com\/users\/astariul\/repos","events_url":"https:\/\/api.github.com\/users\/astariul\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/astariul\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Installing from source (instead of Pypi package) solved the problem."],"created_at":1589501777000,"updated_at":1589501992000,"closed_at":1589501991000,"author_association":"NONE","active_lock_reason":null,"pull_request":null,"body":"I can't get CNN \/ DailyMail dataset.\r\n\r\n```python\r\nimport nlp\r\n\r\nassert \"cnn_dailymail\" in [dataset.id for dataset in nlp.list_datasets()]\r\ncnn_dm = 
nlp.load_dataset('cnn_dailymail')\r\n```\r\n\r\n[Colab notebook](https:\/\/colab.research.google.com\/drive\/1zQ3bYAVzm1h0mw0yWPqKAg_4EUlSx5Ex?usp=sharing)\r\n\r\ngives following error :\r\n\r\n```\r\nConnectionError: Couldn't reach https:\/\/s3.amazonaws.com\/datasets.huggingface.co\/nlp\/cnn_dailymail\/cnn_dailymail.py\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/114\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/113","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/113\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/113\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/113\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/113","id":618590562,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE4MjkxNjIx","number":113,"title":"Adding docstrings and some doc","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589498081000,"updated_at":1589498565000,"closed_at":1589498564000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/113","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/113","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/113.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/113.patch"},"body":"Some doc","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/113\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/112","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/112\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/112\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/112\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/112","id":618569195,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE4Mjc0MTU4","number":112,"title":"Qa4mre - add 
dataset","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589494671000,"updated_at":1589534203000,"closed_at":1589534202000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/112","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/112","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/112.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/112.patch"},"body":"Added dummy data test only for the first config. Will do the rest later.\r\nI had to do add some minor hacks to an important function to make it work. 
\r\nThere might be a cleaner way to handle it - can you take a look @thomwolf ?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/112\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/111","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/111\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/111\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/111\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/111","id":618528060,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE4MjQwMjMy","number":111,"title":"[Clean-up] remove under construction datastes","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589489533000,"updated_at":1589489543000,"closed_at":1589489542000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/111","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/111","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/111.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/111.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/111\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/110","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/110\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/110\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/110\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/110","id":618520325,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE4MjMzODIy","number":110,"title":"fix reddit tifu dummy 
data","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589488657000,"updated_at":1589488814000,"closed_at":1589488813000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/110","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/110","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/110.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/110.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/110\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/109","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/109\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/109\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/109\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/109","id":618508359,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE4MjI0MDYw","number":109,"title":"[Reclor] fix 
reclor","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589487386000,"updated_at":1589487549000,"closed_at":1589487548000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/109","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/109","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/109.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/109.patch"},"body":"- That's probably one me. Could have made the manual data test more flexible. @mariamabarham ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/109\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/108","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/108\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/108\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/108\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/108","id":618386394,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE4MTIzMzc3","number":108,"title":"convert can use manual dir as second 
argument","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589475152000,"updated_at":1589475163000,"closed_at":1589475162000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/108","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/108","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/108.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/108.patch"},"body":"@mariamabarham ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/108\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/107","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/107\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/107\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/107\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/107","id":618373045,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE4MTEyNzcx","number":107,"title":"add writer_batch_size to GeneratorBasedBuilder","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Awesome that's 
great!"],"created_at":1589474139000,"updated_at":1589475030000,"closed_at":1589475029000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/107","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/107","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/107.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/107.patch"},"body":"You can now specify `writer_batch_size` in the builder arguments or directly in `load_dataset`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/107\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/106","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/106\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/106\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/106\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/106","id":618361418,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE4MTAzMjM3","number":106,"title":"Add data dir test command","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Nice - I think we can merge this. 
I will update the checksums for `wikihow` then as well"],"created_at":1589473119000,"updated_at":1589474951000,"closed_at":1589474950000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/106","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/106","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/106.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/106.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/106\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/105","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/105\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/105\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/105\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/105","id":618345191,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE4MDg5Njgz","number":105,"title":"[New structure on AWS] Adapt paths","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589471757000,"updated_at":1589471788000,"closed_at":1589471787000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/105","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/105","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/105.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/105.patch"},"body":"Some small changes so that we have the correct paths. 
@julien-c ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/105\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/104","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/104\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/104\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/104\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/104","id":618277081,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE4MDMzOTY0","number":104,"title":"Add trivia_q","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589466439000,"updated_at":1594532060000,"closed_at":1589487812000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/104","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/104","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/104.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/104.patch"},"body":"Currently tested only for one config to pass tests. 
Needs to add more dummy data later.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/104\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/103","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/103\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/103\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/103\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/103","id":618233637,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE3OTk5MDIy","number":103,"title":"[Manual downloads] add logic proposal for manual downloads and add wikihow","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> Wikihow is an example that needs to manually download two files as stated in: https:\/\/github.com\/mahnazkoupaee\/WikiHow-Dataset.\r\n> \r\n> The user can then store these files under a hard-coded name: `wikihowAll.csv` and `wikihowSep.csv` in this case in a directory of his choice, e.g. `~\/wikihow\/manual_dir`.\r\n> \r\n> The dataset can then be loaded via:\r\n> \r\n> ```python\r\n> import nlp\r\n> nlp.load_dataset(\"wikihow\", data_dir=\"~\/wikihow\/manual_dir\")\r\n> ```\r\n> \r\n> I added\/changed so that there are explicit error messages when using manually downloaded files.\r\n\r\nwouldn't be nicer if we can have `manual_dir\/wikihow`? ","> > Wikihow is an example that needs to manually download two files as stated in: https:\/\/github.com\/mahnazkoupaee\/WikiHow-Dataset.\r\n> > The user can then store these files under a hard-coded name: `wikihowAll.csv` and `wikihowSep.csv` in this case in a directory of his choice, e.g. `~\/wikihow\/manual_dir`.\r\n> > The dataset can then be loaded via:\r\n> > ```python\r\n> > import nlp\r\n> > nlp.load_dataset(\"wikihow\", data_dir=\"~\/wikihow\/manual_dir\")\r\n> > ```\r\n> > \r\n> > \r\n> > I added\/changed so that there are explicit error messages when using manually downloaded files.\r\n> \r\n> wouldn't be nicer if we can have `manual_dir\/wikihow`?\r\n\r\nSure, I mean the user can decide whatever he likes best :-) The path one puts in `data_dir` will be used as the path to the manual dir. 
`nlp.load_dataset(\"wikihow\", data_dir=\"~\/manual_dir\/wikihow\")` would work as well as any other path ;-) ","Perfect! You can merge!"],"created_at":1589463036000,"updated_at":1589466461000,"closed_at":1589466460000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/103","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/103","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/103.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/103.patch"},"body":"Wikihow is an example that needs to manually download two files as stated in: https:\/\/github.com\/mahnazkoupaee\/WikiHow-Dataset. \r\n\r\nThe user can then store these files under a hard-coded name: `wikihowAll.csv` and `wikihowSep.csv` in this case in a directory of his choice, e.g. `~\/wikihow\/manual_dir`.\r\n\r\nThe dataset can then be loaded via:\r\n\r\n```python\r\nimport nlp\r\nnlp.load_dataset(\"wikihow\", data_dir=\"~\/wikihow\/manual_dir\")\r\n```\r\n\r\nI added\/changed so that there are explicit error messages when using manually downloaded files.\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/103\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/102","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/102\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/102\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/102\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/102","id":618231216,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE3OTk3MDQz","number":102,"title":"Run save infos","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Haha that cornell dialogue dataset - that ran for 3h on my computer as well. 
The `generate_examples` method in this script is one of the most inefficient code samples I've ever seen :D ","Indeed it's been 3 hours already\r\n```73111 examples [3:07:48, 2.40 examples\/s]```"],"created_at":1589462846000,"updated_at":1589470984000,"closed_at":1589470983000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/102","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/102","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/102.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/102.patch"},"body":"I replaced the old checksum file with the new `dataset_infos.json` by running the script on almost all the datasets we have. The only one that is still running on my side is the cornell dialog","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/102\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/101","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/101\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/101\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/101\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/101","id":618111651,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE3ODk5OTQ2","number":101,"title":"[Reddit] add reddit","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589451902000,"updated_at":1589452045000,"closed_at":1589452044000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/101","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/101","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/101.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/101.patch"},"body":"- Everything worked fine @mariamabarham. 
Made my computer nearly crash, but all seems to be working :-) ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/101\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/100","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/100\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/100\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/100\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/100","id":618081602,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE3ODc1MjE2","number":100,"title":"Add per type scores in seqeval metric","user":{"login":"jplu","id":959590,"node_id":"MDQ6VXNlcjk1OTU5MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/959590?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jplu","html_url":"https:\/\/github.com\/jplu","followers_url":"https:\/\/api.github.com\/users\/jplu\/followers","following_url":"https:\/\/api.github.com\/users\/jplu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jplu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jplu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jplu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jplu\/orgs","repos_url":"https:\/\/api.github.com\/users\/jplu\/repos","events_url":"https:\/\/api.github.com\/users\/jplu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jplu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["LGTM :-) Some small suggestions to shorten the code a bit :-) ","Can you put the kwargs as normal kwargs instead of a dict? (And add them to the kwargs description As well)","@thom Is-it what you meant?","Yes and there is a dynamically generated doc string in the metric script KWARGS DESCRIPTION"],"created_at":1589449072000,"updated_at":1589498495000,"closed_at":1589498494000,"author_association":"COLLABORATOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/100","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/100","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/100.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/100.patch"},"body":"This PR add a bit more detail in the seqeval metric. Now the usage and output are:\r\n\r\n```python\r\nimport nlp\r\nmet = nlp.load_metric('metrics\/seqeval')\r\nreferences = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]\r\npredictions = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]\r\nmet.compute(predictions, references)\r\n\r\n#Output: {'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}, 'MISC': {'precision': 0.0, 'recall': 0.0, 'f1': 0, 'number': 1}, 'overall_precision': 0.5, 'overall_recall': 0.5, 'overall_f1': 0.5, 'overall_accuracy': 0.8}\r\n```\r\n\r\nIt is also possible to compute scores for non IOB notations, POS tagging for example hasn't this kind of notation. 
Add `suffix` parameter:\r\n\r\n```python\r\nimport nlp\r\nmet = nlp.load_metric('metrics\/seqeval')\r\nreferences = [['O', 'O', 'O', 'MISC', 'MISC', 'MISC', 'O'], ['PER', 'PER', 'O']]\r\npredictions = [['O', 'O', 'MISC', 'MISC', 'MISC', 'MISC', 'O'], ['PER', 'PER', 'O']]\r\nmet.compute(predictions, references, metrics_kwargs={\"suffix\": True})\r\n\r\n#Output: {'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}, 'MISC': {'precision': 0.0, 'recall': 0.0, 'f1': 0, 'number': 1}, 'overall_precision': 0.5, 'overall_recall': 0.5, 'overall_f1': 0.5, 'overall_accuracy': 0.9}\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/100\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/99","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/99\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/99\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/99\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/99","id":618026700,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE3ODMxNjky","number":99,"title":"[Cmrc 2018] fix cmrc2018","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589444523000,"updated_at":1589446182000,"closed_at":1589446181000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/99","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/99","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/99.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/99.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/99\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/98","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/98\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/98\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/98\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/98","id":617957739,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE3Nzc3NDcy","number":98,"title":"Webis tl-dr","user":{"login":"jplu","id":959590,"node_id":"MDQ6VXNlcjk1OTU5MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/959590?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jplu","html_url":"https:\/\/github.com\/jplu","followers_url":"https:\/\/api.github.com\/users\/jplu\/followers","following_url":"https:\/\/api.github.com\/users\/jplu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jplu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jplu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jplu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jplu\/orgs","repos_url":"https:\/\/api.github.com\/users\/jplu\/repos","events_url":"https:\/\/api.github.com\/users\/jplu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jplu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Should that rather be in an organization scope, @thomwolf @patrickvonplaten ?","> Should that rather be in an organization scope, @thomwolf @patrickvonplaten ?\r\n\r\nI'm a bit indifferent - both would be fine for me!","@jplu - if creating the dummy_data is too tedious, I can do it as well :-) ","There is dummy_data here, no ?","Yeah I think naming it webis\/tl_dr would be best @jplu if that works for you","No problem at all!! On it^^","> There is dummy_data here, no ?\r\n\r\nSome paths were wrong - the structure is really confusing and the error messages don't really help either - I have to think about how to make this easier to understand!\r\n\r\nHope it was ok that I fiddled with your PR !","> Some paths were wrong - the structure is really confusing and the error message don't really help either - I have to think about how to make this easier to understand!\r\n\r\nOh ok! I haven't noticed that sorry :(\r\n\r\n> Hope it was ok that I fiddled with your PR !\r\n\r\nOf course it was ok :)","@julien-c Looks like what you have in mind?\r\n\r\n```python\r\nimport nlp\r\nnlp.load_dataset(\"datasets\/webis\", \"tl_dr\")\r\n\r\n#Output: Downloading and preparing dataset webis\/tl_dr (download: Unknown size, generated: Unknown size, total: Unknown size) to \/home\/jplu\/.cache\/huggingface\/datasets\/webis\/tl_dr\/1.0.0...\r\n```","Merging this for now. Maybe we can see whether to rename it in a different PR @julien-c ? \r\n","Hi, \r\nAuthor here of the webis-tldr corpus. Any plans on integrating this dataset into the hub? I remember we could access it in the previous versions of the library. 
If there is a particular issue that I can help with, do let me know.\r\n\r\nThanks!","Hi @shahbazsyed, this dataset _is_ inside the hub but it's namespaced by the organization name `webis`.\r\n\r\nYou can load it following the steps described in https:\/\/huggingface.co\/datasets\/webis\/tl_dr\r\n\r\nHere's a Colab showcasing that it works: https:\/\/colab.research.google.com\/drive\/11IrzRVpnMLJZ8_UFFHLR8FhiajjAHRUU?usp=sharing\r\n\r\nThe reason the code is in S3 and not in this repo is that the dataset is namespaced under the `webis` organization. We don't have a lot of namespaced datasets yet but this should become the main way we add more datasets in the future.\r\nLet us know if that's an issue for you. Thank you!"],"created_at":1589437338000,"updated_at":1599127221000,"closed_at":1589489656000,"author_association":"COLLABORATOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/98","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/98","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/98.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/98.patch"},"body":"Add the Webid TL:DR dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/98\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/97","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/97\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/97\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/97\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/97","id":617809431,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE3NjU4MDcy","number":97,"title":"[Csv] add tests for csv dataset script","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@thomwolf - can you check and merge if ok? 
"],"created_at":1589411171000,"updated_at":1589412196000,"closed_at":1589412195000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/97","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/97","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/97.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/97.patch"},"body":"Adds dummy data tests for csv.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/97\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/96","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/96\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/96\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/96\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/96","id":617739521,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE3NjAwMjY4","number":96,"title":"lm1b","user":{"login":"jplu","id":959590,"node_id":"MDQ6VXNlcjk1OTU5MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/959590?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jplu","html_url":"https:\/\/github.com\/jplu","followers_url":"https:\/\/api.github.com\/users\/jplu\/followers","following_url":"https:\/\/api.github.com\/users\/jplu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jplu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jplu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jplu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jplu\/orgs","repos_url":"https:\/\/api.github.com\/users\/jplu\/repos","events_url":"https:\/\/api.github.com\/users\/jplu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jplu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I might have a different version of `isort` than others. It seems like I'm always reordering the imports of others. 
But isn't really a problem..."],"created_at":1589402324000,"updated_at":1589465610000,"closed_at":1589465609000,"author_association":"COLLABORATOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/96","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/96","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/96.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/96.patch"},"body":"Add lm1b dataset.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/96\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/95","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/95\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/95\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/95\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/95","id":617703037,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE3NTY5NzA4","number":95,"title":"Replace checksums files by Dataset infos json","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Great! LGTM :-) ","> Ok, really clean!\r\n> I like the logic (not a huge fan of using `_asdict_inner` but it makes sense).\r\n> I think it's a nice improvement!\r\n> \r\n> How should we update the files in the repo? Run a big job on a server or on somebody's computer who has most of the datasets already downloaded?\r\n\r\nMaybe we can split the updates among us...IMO most datasets run very quickly. \r\nI think I've downloaded 50 datasets and 80% are loaded in <5min, 15% in <1h and then `wmt` which is still downloading (since 12h). \r\nI deleted my cache because the `wmt` downloads require quite a lot of space, so I only have parts of the `wmt` datasets on my computer. \r\n\r\n@mariamabarham I guess you have downloaded most of the datasets no? 
"],"created_at":1589398576000,"updated_at":1589446723000,"closed_at":1589446722000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/95","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/95","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/95.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/95.patch"},"body":"### Better verifications when loading a dataset\r\n\r\nI replaced the `urls_checksums` directory that used to contain `checksums.txt` and `cached_sizes.txt`, by a single file `dataset_infos.json`. It's just a dict `config_name` -> `DatasetInfo`.\r\n\r\nIt simplifies and improves how verifications of checksums and splits sizes are done, as they're all stored in `DatasetInfo` (one per config). Also, having already access to `DatasetInfo` enables to check disk space before running `download_and_prepare` for a given config.\r\n\r\nThe dataset infos json file is user readable, you can take a look at the squad one that I generated in this PR.\r\n\r\n### Renaming\r\n\r\nAccording to these changes, I did some renaming:\r\n`save_checksums` -> `save_infos`\r\n`ignore_checksums` -> `ignore_verifications`\r\n\r\nfor example, when you are creating a dataset you have to run\r\n```nlp-cli test path\/to\/my\/dataset --save_infos --all_configs```\r\ninstead of\r\n```nlp-cli test path\/to\/my\/dataset --save_checksums --all_configs```\r\n\r\n### And now, the fun part\r\n\r\nWe'll have to rerun the `nlp-cli test ... --save_infos --all_configs` for all the datasets\r\n\r\n-----------------\r\n\r\nfeedback appreciated !","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/95\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/94","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/94\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/94\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/94\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/94","id":617571340,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE3NDYyMTIw","number":94,"title":"Librispeech","user":{"login":"jplu","id":959590,"node_id":"MDQ6VXNlcjk1OTU5MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/959590?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jplu","html_url":"https:\/\/github.com\/jplu","followers_url":"https:\/\/api.github.com\/users\/jplu\/followers","following_url":"https:\/\/api.github.com\/users\/jplu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jplu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jplu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jplu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jplu\/orgs","repos_url":"https:\/\/api.github.com\/users\/jplu\/repos","events_url":"https:\/\/api.github.com\/users\/jplu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jplu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@jplu - I changed this weird archieve - iter method to something simpler. 
It's only one file to download anyways so I don't see the point of using weird iter methods...It's a huge file though :D 30 million lines of text. Took me quite some time to download :D "],"created_at":1589385854000,"updated_at":1589405343000,"closed_at":1589405342000,"author_association":"COLLABORATOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/94","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/94","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/94.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/94.patch"},"body":"Add librispeech dataset and remove some useless content.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/94\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/93","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/93\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/93\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/93\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/93","id":617522029,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE3NDIxODUy","number":93,"title":"Cleanup notebooks and various fixes","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589381938000,"updated_at":1589382108000,"closed_at":1589382107000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/93","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/93","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/93.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/93.patch"},"body":"Fixes on dataset (more flexible) metrics (fix) and general clean ups","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/93\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/92","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/92\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/92\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/92\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/92","id":617341505,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE3Mjc1ODky","number":92,"title":"[WIP] add wmt14","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589366523000,"updated_at":1589627858000,"closed_at":1589627857000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/92","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/92","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/92.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/92.patch"},"body":"WMT14 takes forever to download :-\/ \r\n\r\n- WMT is the first dataset that uses an abstract class IMO, so I had to modify the `load_dataset_module` a bit.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/92\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/91","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/91\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/91\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/91\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/91","id":617339484,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE3Mjc0MjA0","number":91,"title":"[Paracrawl] add 
paracrawl","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589366340000,"updated_at":1589366415000,"closed_at":1589366414000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/91","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/91","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/91.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/91.patch"},"body":"- Huge dataset - took ~1h to download\r\n- Also this PR reformats all dataset scripts and adds `datasets` to `make style`","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/91\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/90","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/90\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/90\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/90\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/90","id":617311877,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE3MjUxODE0","number":90,"title":"Add download gg 
drive","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["awesome - so no manual downloaded needed here? ","Yes exactly. It works like a standard download"],"created_at":1589363762000,"updated_at":1589373988000,"closed_at":1589364331000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/90","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/90","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/90.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/90.patch"},"body":"We can now add datasets that download from google drive","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/90\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/89","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/89\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/89\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/89\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/89","id":617295069,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE3MjM4MjU4","number":89,"title":"Add list and inspect methods - cleanup 
hf_api","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589362215000,"updated_at":1589378700000,"closed_at":1589362390000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/89","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/89","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/89.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/89.patch"},"body":"Add a bunch of methods to easily list and inspect the processing scripts up-loaded on S3:\r\n```python\r\nnlp.list_datasets()\r\nnlp.list_metrics()\r\n# Copy and prepare the scripts at `local_path` for easy inspection\/modification.\r\nnlp.inspect_dataset(path, local_path) \r\n# Copy and prepare the scripts at `local_path` for easy inspection\/modification.\r\nnlp.inspect_metric(path, local_path) \r\n```\r\n\r\nAlso clean up the `HfAPI` to use `dataclasses` for better user-experience","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/89\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/88","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/88\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/88\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/88\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/88","id":617284664,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE3MjI5ODQw","number":88,"title":"Add 
wiki40b","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Looks good to me. I have not really looked too much into the Beam Datasets yet though - so I think you can merge whenever you think is good for Beam datasets :-) "],"created_at":1589361361000,"updated_at":1589373115000,"closed_at":1589373114000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/88","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/88","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/88.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/88.patch"},"body":"This one is a beam dataset that downloads files using tensorflow.\r\nI tested it on a small config and it works fine","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/88\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/87","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/87\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/87\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/87\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/87","id":617267118,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE3MjE1NzA0","number":87,"title":"Add 
Flores","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589359889000,"updated_at":1589361814000,"closed_at":1589361813000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/87","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/87","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/87.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/87.patch"},"body":"Beautiful language for sure!","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/87\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/86","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/86\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/86\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/86\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/86","id":617260972,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE3MjEwNzY2","number":86,"title":"[Load => load_dataset] change 
naming","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589359380000,"updated_at":1589359858000,"closed_at":1589359857000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/86","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/86","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/86.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/86.patch"},"body":"Rename leftovers @thomwolf ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/86\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/85","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/85\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/85\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/85\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/85","id":617253428,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE3MjA0ODA4","number":85,"title":"Add boolq","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Awesome :-) Thanks for adding the function to the Mock DL 
Manager"],"created_at":1589358747000,"updated_at":1589360979000,"closed_at":1589360978000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/85","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/85","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/85.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/85.patch"},"body":"I just added the dummy data for this dataset.\r\nThis one was uses `tf.io.gfile.copy` to download the data but I added the support for custom download in the mock_download_manager. I also had to add a `tensorflow` dependency for tests.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/85\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/84","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/84\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/84\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/84\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/84","id":617249815,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE3MjAxODcz","number":84,"title":"[TedHrLr] add left dummy data","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589358440000,"updated_at":1589358562000,"closed_at":1589358561000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/84","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/84","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/84.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/84.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/84\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/83","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/83\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/83\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/83\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/83","id":616863601,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE2ODkyOTUz","number":83,"title":"New datasets","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589307747000,"updated_at":1589307767000,"closed_at":1589307765000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/83","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/83","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/83.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/83.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/83\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/82","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/82\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/82\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/82\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/82","id":616805194,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE2ODQ1Njc5","number":82,"title":"[Datasets] add 
ted_hrlr","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589302010000,"updated_at":1589356374000,"closed_at":1589356373000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/82","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/82","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/82.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/82.patch"},"body":"@thomwolf - After looking at `xnli` I think it's better to leave the translation features and add a `translation` key to make them work in our framework. \r\n\r\nThe result looks like this:\r\n![Screenshot from 2020-05-12 18-34-43](https:\/\/user-images.githubusercontent.com\/23423619\/81721933-ee1faf00-9480-11ea-9e95-d6557cbd0ce0.png)\r\n\r\nyou can see that each split has a `translation` key which value is the nlp.features.Translation object. \r\n\r\nThat's a simple change. 
If it's ok for you, I will add dummy data for the other configs and treat the other translation scripts in the same way.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/82\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/81","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/81\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/81\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/81\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/81","id":616793010,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE2ODM1NzE1","number":81,"title":"add tests","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589300899000,"updated_at":1589355837000,"closed_at":1589355836000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/81","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/81","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/81.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/81.patch"},"body":"Tests for py_utils functions and for the BaseReader used to read from arrow and parquet.\r\nI also removed unused utils functions.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/81\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/80","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/80\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/80\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/80\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/80","id":616786803,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE2ODMwNjk3","number":80,"title":"Add nbytes + nexamples 
check","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Looks good to me! Should we hard code those numbers in the config classes and make sure that when loading a dataset that the numbers match? "],"created_at":1589300323000,"updated_at":1589356354000,"closed_at":1589356353000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/80","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/80","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/80.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/80.patch"},"body":"### Save size and number of examples\r\nNow when you do `save_checksums`, it also create `cached_sizes.txt` right next to the checksum file.\r\nThis new file stores the bytes sizes and the number of examples of each split that has been prepared and stored in the cache. Example:\r\n\r\n```\r\n# Cached sizes: <full_config_name> <num_bytes> <num_examples>\r\nhansards\/house\/1.0.0\/test 22906629 122290\r\nhansards\/house\/1.0.0\/train 191459584 947969\r\nhansards\/senate\/1.0.0\/test 5711686 25553\r\nhansards\/senate\/1.0.0\/train 40324278 182135\r\n```\r\n\r\n### Check processing output \r\n\r\nIf there is a `caches_sizes.txt`, then each time we run `download_and_prepare` it will make sure that the sizes match. You can set `ignore_checksums=True` if you don't want that to happen.\r\n\r\n### Fill Dataset Info\r\n\r\nAll the split infos and the checksums are now stored correctly in DatasetInfo after `download_and_prepare`\r\n\r\n### Check space on disk before running `download_and_prepare`\r\n\r\nCheck if the space is lower than the sum of the sizes of the files in `checksums.txt` and `cached_files.txt`. This is not ideal though as it considers the files for all configs.\r\n\r\nTODO:\r\nA better way to do it would be to have save the `DatasetInfo` instead of the `checksums.txt` and `cached_sizes.txt`, in order to have one file per dataset config (and therefore consider only the sizes of the files for one config and not all of them). It can also be the occasion to factorize all the `download_and_prepare` verifications. 
Maybe next PR ?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/80\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/79","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/79\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/79\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/79\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/79","id":616785613,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE2ODI5NzMy","number":79,"title":"[Convert] add new pattern","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589300211000,"updated_at":1589300230000,"closed_at":1589300229000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/79","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/79","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/79.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/79.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/79\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/78","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/78\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/78\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/78\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/78","id":616774275,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE2ODIwNzU5","number":78,"title":"[Tests] skip beam dataset tests for 
now","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["@lhoestq - I moved the wkipedia file to the \"correct\" folder. ","Nice thanks !"],"created_at":1589299258000,"updated_at":1589300184000,"closed_at":1589300182000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/78","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/78","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/78.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/78.patch"},"body":"For now we will skip tests for Beam Datasets","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/78\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/77","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/77\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/77\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/77\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/77","id":616674601,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE2NzQwMjAz","number":77,"title":"New 
datasets","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589291519000,"updated_at":1589292136000,"closed_at":1589292135000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/77","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/77","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/77.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/77.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/77\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/76","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/76\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/76\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/76\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/76","id":616579228,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE2NjYyMTk2","number":76,"title":"pin flake 
8","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589282729000,"updated_at":1589282855000,"closed_at":1589282854000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/76","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/76","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/76.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/76.patch"},"body":"Flake 8's new version does not like our format. Pinning the version for now.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/76\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/75","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/75\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/75\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/75\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/75","id":616520163,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE2NjE0MzU1","number":75,"title":"WIP adding metrics","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["It's all about my 
metric stuff so I'll probably merge it unless you want to have a look.\r\n\r\nTook the occasion to remove the old doc and requirements.txt"],"created_at":1589277120000,"updated_at":1589355852000,"closed_at":1589355850000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/75","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/75","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/75.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/75.patch"},"body":"Adding the following metrics as identified by @mariamabarham:\r\n\r\n1. BLEU: BiLingual Evaluation Understudy: https:\/\/github.com\/tensorflow\/nmt\/blob\/master\/nmt\/scripts\/bleu.py, https:\/\/github.com\/chakki-works\/sumeval\/blob\/master\/sumeval\/metrics\/bleu.py (multilingual)\r\n2. GLEU: Google-BLEU: https:\/\/github.com\/cnap\/gec-ranking\/blob\/master\/scripts\/compute_gleu\r\n3. Sacrebleu: https:\/\/pypi.org\/project\/sacrebleu\/1.4.8\/ (pypi package), https:\/\/github.com\/mjpost\/sacrebleu (github implementation)\r\n4. ROUGE: Recall-Oriented Understudy for Gisting Evaluation: https:\/\/github.com\/google-research\/google-research\/tree\/master\/rouge, https:\/\/github.com\/chakki-works\/sumeval\/blob\/master\/sumeval\/metrics\/rouge.py (multilingual)\r\n5. Seqeval: https:\/\/github.com\/chakki-works\/seqeval (github implementation), https:\/\/pypi.org\/project\/seqeval\/0.0.12\/ (pypi package)\r\n6. Coval: coreference evaluation package for the CoNLL and ARRAU datasets https:\/\/github.com\/ns-moosavi\/coval\r\n7. SQuAD v1 evaluation script\r\n8. SQuAD V2 evaluation script: https:\/\/worksheets.codalab.org\/rest\/bundles\/0x6b567e1cf2e041ec80d7098f031c5c9e\/contents\/blob\/\r\n9. GLUE\r\n10. XNLI\r\n\r\n\r\nNot now:\r\n1. Perplexity: https:\/\/github.com\/allenai\/allennlp\/blob\/master\/allennlp\/training\/metrics\/perplexity.py\r\n2. Spearman: https:\/\/github.com\/allenai\/allennlp\/blob\/master\/allennlp\/training\/metrics\/spearman_correlation.py\r\n3. F1_measure: https:\/\/github.com\/allenai\/allennlp\/blob\/master\/allennlp\/training\/metrics\/f1_measure.py\r\n4. Pearson_corelation: https:\/\/github.com\/allenai\/allennlp\/blob\/master\/allennlp\/training\/metrics\/pearson_correlation.py\r\n5. AUC: https:\/\/github.com\/allenai\/allennlp\/blob\/master\/allennlp\/training\/metrics\/auc.py \r\n6. 
Entropy: https:\/\/github.com\/allenai\/allennlp\/blob\/master\/allennlp\/training\/metrics\/entropy.py","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/75\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/74","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/74\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/74\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/74\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/74","id":616511101,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE2NjA3MDcy","number":74,"title":"fix overflow check","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589276281000,"updated_at":1589277879000,"closed_at":1589277878000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/74","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/74","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/74.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/74.patch"},"body":"I did some tests and unfortunately the test\r\n```\r\npa_array.nbytes > MAX_BATCH_BYTES\r\n```\r\ndoesn't work. 
Indeed for a StructArray, `nbytes` can be less 2GB even if there is an overflow (it loops...).\r\n\r\nI don't think we can do a proper overflow test for the limit of 2GB...\r\n\r\nFor now I replaced it with a sanity check on the first element.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/74\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/73","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/73\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/73\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/73\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/73","id":616417845,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE2NTMyMTg1","number":73,"title":"JSON script","user":{"login":"jplu","id":959590,"node_id":"MDQ6VXNlcjk1OTU5MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/959590?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jplu","html_url":"https:\/\/github.com\/jplu","followers_url":"https:\/\/api.github.com\/users\/jplu\/followers","following_url":"https:\/\/api.github.com\/users\/jplu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jplu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jplu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jplu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jplu\/orgs","repos_url":"https:\/\/api.github.com\/users\/jplu\/repos","events_url":"https:\/\/api.github.com\/users\/jplu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jplu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["The tests for the Wikipedia dataset do not pass anymore with the error:\r\n```\r\nTo be able to use this dataset, you need to install the following dependencies ['mwparserfromhell'] using 'pip install mwparserfromhell' for instance'\r\n```","This was an issue on master. You can just rebase from master.","Perfect! Indeed, it worked^^ Thanks @lhoestq ","Currently the dummy_data tests are always green because in a PR the dataset is not yet synchronized with aws. This PR fixes this: https:\/\/github.com\/huggingface\/nlp\/pull\/140 . \r\n\r\nCould you test `json` locally or wait until the PR: https:\/\/github.com\/huggingface\/nlp\/pull\/140 is merged ? 
:-) ","Ok, I will wait #140 to be merged and then rebase :) "],"created_at":1589267482000,"updated_at":1589784637000,"closed_at":1589784636000,"author_association":"COLLABORATOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/73","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/73","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/73.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/73.patch"},"body":"Add a JSONS script to read JSON datasets from files.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/73\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/72","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/72\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/72\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/72\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/72","id":616225010,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE2Mzc4Mjg4","number":72,"title":"[README dummy data tests] README to better understand how the dummy data structure works","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1589235543000,"updated_at":1589235963000,"closed_at":1589235961000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/72","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/72","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/72.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/72.patch"},"body":"In this PR a README.md is added to tests to shine more light on how the dummy data structure works. I try to explain the different possible cases. IMO the best way to understand the logic is to checkout the dummy data structure of the different datasets I mention in the README.md since those are the \"edge cases\". 
\r\n\r\n@mariamabarham @thomwolf @lhoestq @jplu - I'd be happy to checkout the dummy data structure and get some feedback on possible improvements.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/72\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/71","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/71\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/71\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/71\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/71","id":615942180,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE2MTUxODM4","number":71,"title":"Fix arrow writer for big datasets using writer_batch_size","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["After a quick chat with Yacine : the 2Go test may not be sufficient actually, as I'm looking at the size of the array and not the size of the current_rows. If the test doesn't do the job I think I'll remove it and lower the batch size a bit to be sure that it never exceeds 2Go. I'll do more tests later"],"created_at":1589208336000,"updated_at":1589227787000,"closed_at":1589227238000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/71","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/71","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/71.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/71.patch"},"body":"This PR fixes Yacine's bug.\r\nAccording to [this](https:\/\/github.com\/apache\/arrow\/blob\/master\/docs\/source\/cpp\/arrays.rst#size-limitations-and-recommendations), it is not recommended to have pyarrow arrays bigger than 2Go.\r\n\r\nTherefore I set a default batch size of 100 000 examples per batch. In general it shouldn't exceed 2Go. 
If it does, I reduce the batch_size on the fly, and I notify the user with a warning.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/71\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/70","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/70\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/70\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/70\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/70","id":615679102,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE1OTM3NDgw","number":70,"title":"adding RACE, QASC, Super_glue and Tiny_shakespear datasets","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I think rebasing to master will solve the quality test and the datasets that don't have a testing structure yet because of the manual download - maybe you can put them in `datasets under construction`? 
Then would also make it easier for me to see how to add tests for them :-) "],"created_at":1589184469000,"updated_at":1589289712000,"closed_at":1589289711000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/70","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/70","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/70.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/70.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/70\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/69","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/69\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/69\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/69\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/69","id":615450534,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE1NzYyNTQ4","number":69,"title":"fix cache dir in builder tests","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Nice, is that the reason one cannot rerun the tests without deleting the cache? \r\n","Yes exactly. 
It was not using the temporary dir for tests."],"created_at":1589135961000,"updated_at":1589181570000,"closed_at":1589181568000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/69","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/69","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/69.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/69.patch"},"body":"minor fix","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/69\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/68","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/68\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/68\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/68\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/68","id":614882655,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE1MzQ3NTgw","number":68,"title":"[CSV] re-add csv","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1588959509000,"updated_at":1588959648000,"closed_at":1588959646000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/68","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/68","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/68.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/68.patch"},"body":"Re-adding csv under the datasets under construction to keep circle ci happy - will have to see how to include it in the tests.\r\n\r\n@lhoestq noticed that I accidently deleted it in https:\/\/github.com\/huggingface\/nlp\/pull\/63#discussion_r422263729.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/68\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/67","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/67\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/67\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/67\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/67","id":614798483,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE1Mjc5NjI0","number":67,"title":"[Tests] Test files locally","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Super nice, good job @patrickvonplaten!"],"created_at":1588950163000,"updated_at":1588967447000,"closed_at":1588951020000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/67","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/67","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/67.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/67.patch"},"body":"This PR adds a `aws` and a `local` decorator to the tests so that tests now run on the local datasets. \r\n\r\nBy default, the `aws` is deactivated and `local` is activated and `slow` is deactivated, so that only 1 test per dataset runs on circle ci. \r\n\r\n**When local is activated all folders in `.\/datasets` are tested.**\r\n\r\n**Important** When adding a dataset, we should no longer upload it to AWS. The steps are:\r\n1. Open a PR\r\n2. Add a dataset as described in `datasets\/README.md`\r\n3. If all tests pass, push to master\r\n\r\nCurrently we have 49 functional datasets in our code base. 
\r\n\r\nWe have 6 datasets \"under-construction\" that don't pass the tests - so I put them in a folder \"datasets_under_construction\" - it would be nice to open a PR to fix them and put them in the `datasets` folder.\r\n\r\n**Important** when running tests locally, the datasets are cached so to rerun them delete your local cache via:\r\n`rm -r ~\/.cache\/huggingface\/datasets\/*` \r\n\r\n@thomwolf @mariamabarham @lhoestq ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/67\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/66","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/66\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/66\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/66\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/66","id":614748552,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE1MjM5Njgy","number":66,"title":"[Datasets] ReadME","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1588945063000,"updated_at":1588945163000,"closed_at":1588945162000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/66","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/66","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/66.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/66.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/66\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/65","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/65\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/65\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/65\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/65","id":614746516,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE1MjM4MDEw","number":65,"title":"fix math dataset 
and xcopa","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1588944835000,"updated_at":1588944941000,"closed_at":1588944940000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/65","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/65","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/65.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/65.patch"},"body":"- fixes math dataset and xcopa, uploaded both of the to S3","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/65\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/64","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/64\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/64\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/64\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/64","id":614737057,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE1MjMwMjYy","number":64,"title":"[Datasets] Make master ready for datasets 
adding","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1588943820000,"updated_at":1588943851000,"closed_at":1588943850000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/64","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/64","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/64.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/64.patch"},"body":"Add all relevant files so that datasets can now be added on master","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/64\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/63","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/63\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/63\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/63\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/63","id":614666365,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE1MTczODU5","number":63,"title":"[Dataset scripts] add all datasets 
scripts","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1588935015000,"updated_at":1588959562000,"closed_at":1588937640000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/63","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/63","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/63.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/63.patch"},"body":"As mentioned, we can have the canonical datasets in the master. For now I also want to include all the data as present on S3 to make the synchronization easier when uploading new datastes. \r\n\r\n@mariamabarham @lhoestq @thomwolf - what do you think? \r\n\r\nIf this is ok for you, I can sync up the master with the `add_dataset` branch: https:\/\/github.com\/huggingface\/nlp\/pull\/37 so that master is up to date. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/63\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/62","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/62\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/62\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/62\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/62","id":614630830,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE1MTQ1NDAx","number":62,"title":"[Cached Path] Better error message","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1588930787000,"updated_at":1588931147000,"closed_at":1588931147000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/62","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/62","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/62.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/62.patch"},"body":"IMO returning `None` in this function only leads to confusion and is never helpful.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/62\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/61","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/61\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/61\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/61\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/61","id":614607474,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE1MTI3MTU4","number":61,"title":"[Load] rename setup_module to 
prepare_module","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1588928062000,"updated_at":1588928192000,"closed_at":1588928176000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/61","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/61","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/61.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/61.patch"},"body":"rename setup_module to prepare_module due to issues with pytests `setup_module` function.\r\nSee: PR #59. ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/61\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/60","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/60\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/60\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/60\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/60","id":614372553,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE0OTQyNjEy","number":60,"title":"Update to simplify some datasets 
conversion","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Awesome! ","Also we should convert `tf.io.gfile.exists` into `os.path.exists` , `tf.io.gfile.listdir`into `os.listdir` and `tf.io.gfile.glob` into `glob.glob` (will need to add `import glob`)","> Also we should convert `tf.io.gfile.exists` into `os.path.exists` , `tf.io.gfile.listdir`into `os.listdir` and `tf.io.gfile.glob` into `glob.glob` (will need to add `import glob`)\r\n\r\nWe should probably open a new PR about this","I think it might be a good idea to both change the supervised keys to a named tuple and also handle the translation features specifically.","Just noticed that `pyarrow` apparently does not have a `is_boolean` function. Or do I have the wrong `pyarrow` version? ","Ah, it was a typo `pa.types.is_boolean` is the correct name. 
Will fix in: https:\/\/github.com\/huggingface\/nlp\/pull\/59"],"created_at":1588888944000,"updated_at":1588934312000,"closed_at":1588933104000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/60","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/60","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/60.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/60.patch"},"body":"This PR updates the encoding of `Values` like `integers`, `boolean` and `float` to use python casting and avoid having to cast in the dataset scripts, as mentioned here: https:\/\/github.com\/huggingface\/nlp\/pull\/37#discussion_r420176626\r\n\r\nWe could also change (not included in this PR yet):\r\n- `supervized_keys` to make them a NamedTuple instead of a dataclass, and\r\n- handle specifically the `Translation` features.\r\nas mentioned here: https:\/\/github.com\/huggingface\/nlp\/pull\/37#discussion_r421740236\r\n\r\n@patrickvonplaten @mariamabarham tell me if you want these two last changes as well.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/60\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/59","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/59\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/59\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/59\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/59","id":614366045,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE0OTM3NTgx","number":59,"title":"Fix tests","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I can fix the tests tomorrow :-) ","Very weird bug indeed! I think the problem was that when importing `setup_module` we overwrote `pytest's` setup_module function. 
I think this is the relevant code in pytest: https:\/\/github.com\/pytest-dev\/pytest\/blob\/9d2eabb397b059b75b746259daeb20ee5588f559\/src\/_pytest\/python.py#L460.","Also PR: #25 introduced some renaming: `DatasetBuilder.builder_config` -> `DatasetBuilder.config` so that we will have to change most of the dataset scripts (Just replace the \"builder_config\" with \"config\").\r\n\r\nI think the renaming is a good idea and I can do the fix with a bash regex, but will have to re-upload most of the datasets. @thomwolf @mariamabarham \r\n\r\n","> Also PR: #25 introduced some renaming: `DatasetBuilder.builder_config` -> `DatasetBuilder.config` so that we will have to change most of the dataset scripts (Just replace the \"builder_config\" with \"config\").\r\n> \r\n> I think the renaming is a good idea and I can do the fix with a bash regex, but will have to re-upload most of the datasets. @thomwolf @mariamabarham\r\n\r\nI think if it only needs a re-uploading, we can rename it, `DatasetBuilder.config` is easier and sounds better","Ok seems to be fine. Most tests work - merging."],"created_at":1588888089000,"updated_at":1588935477000,"closed_at":1588934811000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/59","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/59","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/59.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/59.patch"},"body":"@patrickvonplaten I've broken a bit the tests with #25 while simplifying and re-organizing the `load.py` and `download_manager.py` scripts.\r\n\r\nI'm trying to fix them here but I have a weird error, do you think you can have a look?\r\n```bash\r\n(datasets) MacBook-Pro-de-Thomas:datasets thomwolf$ python -m pytest -sv .\/tests\/test_dataset_common.py::DatasetTest::test_builder_class_snli\r\n============================================================================= test session starts =============================================================================\r\nplatform darwin -- Python 3.7.7, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 -- \/Users\/thomwolf\/miniconda2\/envs\/datasets\/bin\/python\r\ncachedir: .pytest_cache\r\nrootdir: \/Users\/thomwolf\/Documents\/GitHub\/datasets\r\nplugins: xdist-1.31.0, forked-1.1.3\r\ncollected 1 item \r\n\r\ntests\/test_dataset_common.py::DatasetTest::test_builder_class_snli ERROR\r\n\r\n=================================================================================== ERRORS ====================================================================================\r\n____________________________________________________________ ERROR at setup of DatasetTest.test_builder_class_snli ____________________________________________________________\r\n\r\nfile_path = <module 'tests.test_dataset_common' from '\/Users\/thomwolf\/Documents\/GitHub\/datasets\/tests\/test_dataset_common.py'>\r\ndownload_config = DownloadConfig(cache_dir=None, force_download=False, resume_download=False, local_files_only=False, proxies=None, user_agent=None, extract_compressed_file=True, force_extract=True)\r\ndownload_kwargs = {}\r\n\r\n def setup_module(file_path: str, download_config: Optional[DownloadConfig] = None, **download_kwargs,) -> DatasetBuilder:\r\n r\"\"\"\r\n Download\/extract\/cache a dataset to add to the lib from a path or url which can be:\r\n - a path to a local directory containing the dataset processing python script\r\n - an url to a S3 directory with 
a dataset processing python script\r\n \r\n Dataset codes are cached inside the lib to allow easy import (avoid ugly sys.path tweaks)\r\n and using cloudpickle (among other things).\r\n \r\n Return: tuple of\r\n the unique id associated to the dataset\r\n the local path to the dataset\r\n \"\"\"\r\n if download_config is None:\r\n download_config = DownloadConfig(**download_kwargs)\r\n download_config.extract_compressed_file = True\r\n download_config.force_extract = True\r\n \r\n> name = list(filter(lambda x: x, file_path.split(\"\/\")))[-1] + \".py\"\r\nE AttributeError: module 'tests.test_dataset_common' has no attribute 'split'\r\n\r\nsrc\/nlp\/load.py:169: AttributeError\r\n============================================================================== warnings summary ===============================================================================\r\n\/Users\/thomwolf\/miniconda2\/envs\/datasets\/lib\/python3.7\/site-packages\/tensorflow_core\/python\/pywrap_tensorflow_internal.py:15\r\n \/Users\/thomwolf\/miniconda2\/envs\/datasets\/lib\/python3.7\/site-packages\/tensorflow_core\/python\/pywrap_tensorflow_internal.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses\r\n import imp\r\n\r\n-- Docs: https:\/\/docs.pytest.org\/en\/latest\/warnings.html\r\n=========================================================================== short test summary info ===========================================================================\r\nERROR tests\/test_dataset_common.py::DatasetTest::test_builder_class_snli - AttributeError: module 'tests.test_dataset_common' has no attribute 'split'\r\n========================================================================= 1 warning, 1 error in 3.63s =========================================================================\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/59\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/58","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/58\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/58\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/58\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/58","id":614362308,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE0OTM0NTY4","number":58,"title":"Aborted PR - Fix 
tests","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Wait I messed up my branch, let me clean this."],"created_at":1588887619000,"updated_at":1588888081000,"closed_at":1588887687000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/58","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/58","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/58.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/58.patch"},"body":"@patrickvonplaten I've broken a bit the tests with #25 while simplifying and re-organizing the `load.py` and `download_manager.py` scripts.\r\n\r\nI'm trying to fix them here but I have a weird error, do you think you can have a look?\r\n```bash\r\n(datasets) MacBook-Pro-de-Thomas:datasets thomwolf$ python -m pytest -sv .\/tests\/test_dataset_common.py::DatasetTest::test_builder_class_snli\r\n============================================================================= test session starts =============================================================================\r\nplatform darwin -- Python 3.7.7, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 -- \/Users\/thomwolf\/miniconda2\/envs\/datasets\/bin\/python\r\ncachedir: .pytest_cache\r\nrootdir: \/Users\/thomwolf\/Documents\/GitHub\/datasets\r\nplugins: xdist-1.31.0, forked-1.1.3\r\ncollected 1 item \r\n\r\ntests\/test_dataset_common.py::DatasetTest::test_builder_class_snli ERROR\r\n\r\n=================================================================================== ERRORS ====================================================================================\r\n____________________________________________________________ ERROR at setup of DatasetTest.test_builder_class_snli ____________________________________________________________\r\n\r\nfile_path = <module 'tests.test_dataset_common' from '\/Users\/thomwolf\/Documents\/GitHub\/datasets\/tests\/test_dataset_common.py'>\r\ndownload_config = DownloadConfig(cache_dir=None, force_download=False, resume_download=False, local_files_only=False, proxies=None, user_agent=None, extract_compressed_file=True, force_extract=True)\r\ndownload_kwargs = {}\r\n\r\n def setup_module(file_path: str, download_config: Optional[DownloadConfig] = None, **download_kwargs,) -> DatasetBuilder:\r\n r\"\"\"\r\n Download\/extract\/cache a dataset to add to the lib from a path or url which can be:\r\n - a path to a local directory containing the dataset processing python 
script\r\n - an url to a S3 directory with a dataset processing python script\r\n \r\n Dataset codes are cached inside the lib to allow easy import (avoid ugly sys.path tweaks)\r\n and using cloudpickle (among other things).\r\n \r\n Return: tuple of\r\n the unique id associated to the dataset\r\n the local path to the dataset\r\n \"\"\"\r\n if download_config is None:\r\n download_config = DownloadConfig(**download_kwargs)\r\n download_config.extract_compressed_file = True\r\n download_config.force_extract = True\r\n \r\n> name = list(filter(lambda x: x, file_path.split(\"\/\")))[-1] + \".py\"\r\nE AttributeError: module 'tests.test_dataset_common' has no attribute 'split'\r\n\r\nsrc\/nlp\/load.py:169: AttributeError\r\n============================================================================== warnings summary ===============================================================================\r\n\/Users\/thomwolf\/miniconda2\/envs\/datasets\/lib\/python3.7\/site-packages\/tensorflow_core\/python\/pywrap_tensorflow_internal.py:15\r\n \/Users\/thomwolf\/miniconda2\/envs\/datasets\/lib\/python3.7\/site-packages\/tensorflow_core\/python\/pywrap_tensorflow_internal.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses\r\n import imp\r\n\r\n-- Docs: https:\/\/docs.pytest.org\/en\/latest\/warnings.html\r\n=========================================================================== short test summary info ===========================================================================\r\nERROR tests\/test_dataset_common.py::DatasetTest::test_builder_class_snli - AttributeError: module 'tests.test_dataset_common' has no attribute 'split'\r\n========================================================================= 1 warning, 1 error in 3.63s =========================================================================\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/58\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/57","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/57\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/57\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/57\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/57","id":614261638,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE0ODUzMDM5","number":57,"title":"Better cached 
path","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I should have read this PR before doing my own: https:\/\/github.com\/huggingface\/nlp\/pull\/62 :D \r\nwill close mine. Looks great :-) ","> Awesome, this is really nice!\r\n> \r\n> By the way, we should improve the `cached_path` method of the `transformers` repo similarly, don't you think (@patrickvonplaten in particular).\r\n\r\nYeah, we should do the same in `transformers` I think - will note it down."],"created_at":1588876560000,"updated_at":1588944030000,"closed_at":1588944028000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/57","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/57","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/57.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/57.patch"},"body":"### Changes:\r\n- The `cached_path` no longer returns None if the file is missing\/the url doesn't work. Instead, it can raise `FileNotFoundError` (missing file), `ConnectionError` (no cache and unreachable url) or `ValueError` (parsing error)\r\n- Fix requests to firebase API that doesn't handle HEAD requests...\r\n- Allow custom download in datasets script: it allows to use `tf.io.gfile.copy` for example, to download from google storage. 
I added an example: the `boolq` script","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/57\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/56","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/56\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/56\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/56\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/56","id":614236869,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE0ODMyODY4","number":56,"title":"[Dataset] Tester add mock function","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1588873897000,"updated_at":1588873971000,"closed_at":1588873970000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/56","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/56","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/56.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/56.patch"},"body":"need to add an empty `extract()` function to make `hansard` dataset test work.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/56\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/55","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/55\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/55\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/55\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/55","id":613968072,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE0NjE0MjE1","number":55,"title":"Beam 
datasets","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Right now the changes are a bit hard to read as the one from #25 are also included. You can wait until #25 is merged before looking at the implementation details","Nice!! I tested it a bit and works quite well. I will do a my review once the #25 will be merged because there are several overlaps.\r\n\r\nAt least I can share my thoughts on your **Next** section:\r\n1) I don't think it is a good thing to rely on tfds preprocessed datasets uploaded in their online storage, because they might be updated or deleted at any moment by Google and then possibly break our own processing.\r\n2) Improves the pipeline is always a good direction, but in the meantime we might also share the preprocessed dataset in S3 storage. Which might be another way to see 1), instead of downloading Google preprocessed datasets, using our own ones.\r\n3) Apache Beam can be easily integrated in Spark, so I don't see the need to replace Beam by Spark.","Ok I've merged #25 so you can rebase or merge if you want.\r\n\r\nI fully agree with @jplu notes for the \"next section\".\r\n\r\nDon't hesitate to use some credit on Google Dataflow if you think it would be useful to give it a try.","Pr is ready for review !\r\n\r\nNew minor changes:\r\n- re-added the csv dataset builder (it was on my branch from #25 but disappeared from master)\r\n- move the csv script and the wikipedia script to \"under construction\" for now\r\n- some renaming in the `nlp-cli test` command"],"created_at":1588849472000,"updated_at":1589181602000,"closed_at":1589181600000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/55","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/55","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/55.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/55.patch"},"body":"# Beam datasets\r\n\r\n## Intro\r\n\r\nBeam Datasets are using beam pipelines for preprocessing (basically lots of `.map` over objects called PCollections).\r\nThe advantage of apache beam is that you can choose which type of runner you want to use to preprocess your data. The main runners are:\r\n- the `DirectRunner` to run the pipeline locally (default). However I encountered memory issues for big datasets (like the french or english wikipedia). Small dataset work fine\r\n- Google Dataflow. 
I didn't play with it.\r\n- Spark or Flink, two well known data processing frameworks. I tried to use the Spark\/Flink local runners provided by apache beam for python and wasn't able to make them work properly though...\r\n\r\n## From tfds beam datasets to our own beam datasets\r\n\r\nTensorflow datasets used beam and a complicated pipeline to shard the TFRecords files.\r\nTo allow users to download beam datasets and not having to preprocess them, they also allow to download the already preprocessed datasets from their google storage (the beam pipeline doesn't run in that case).\r\n\r\nOn our side, we replace TFRecords by something else. Arrow or Parquet do the job but I chose Parquet as: 1) there is a builtin apache beam parquet writer that is quite convenient, and 2) reading parquet from the pyarrow library is also simple and effective (there is a mmap option !)\r\n\r\nMoreover we don't shard datasets in many many files like tfds (they were doing probably doing that mainly because of the limit of 2Gb per TFRecord file). Therefore we have a simpler pipeline that saves each split into one parquet file. We also removed the utilities to use their google storage (for now maybe ? we'll have to discuss it).\r\n\r\n## Main changes\r\n\r\n- Added a BeamWriter to save the output of beam pipelines into parquet files and fill dataset infos\r\n- Create a ParquetReader and refactor a bit the arrow_reader.py\r\n\r\n\\> **With this, we can now try to add beam datasets from tfds**\r\n\r\nI already added the wikipedia one, and I will also try to add the Wiki40b dataset\r\n\r\n## Test the wikipedia script\r\n\r\nYou can download and run the beam pipeline for wikipedia (using the `DirectRunner` by default) like this:\r\n\r\n```\r\n>>> import nlp\r\n>>> nlp.load(\"datasets\/nlp\/wikipedia\", dataset_config=\"20200501.frr\")\r\n```\r\n\r\nThis wikipedia dataset (lang: frr, North Frisian) is a small one (~10Mb), but feel free to try bigger ones (and fill 20Gb of swap memory if you try the english one lol)\r\n\r\n## Next\r\n\r\nShould we allow to download preprocessed datasets from the tfds google storage ?\r\nShould we try to optimize the beam pipelines to run locally without memory issues ?\r\nShould we try other data processing frameworks for big datasets, like spark ?\r\n\r\n\r\n## About this PR\r\n\r\nIt should be merged after #25 \r\n\r\n-----------------\r\n\r\nI'd be happy to have your feedback and your ideas to improve the processing of big datasets like wikipedia :)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/55\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/54","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/54\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/54\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/54\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/54","id":613513348,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE0MjUyODkw","number":54,"title":"[Tests] Improved Error message for dummy folder 
structure","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1588788708000,"updated_at":1588788780000,"closed_at":1588788779000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/54","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/54","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/54.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/54.patch"},"body":"Improved Error message","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/54\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/53","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/53\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/53\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/53\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/53","id":613436158,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE0MTkwMzkz","number":53,"title":"[Features] Typo in 
generate_from_dict","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1588781123000,"updated_at":1588865326000,"closed_at":1588865325000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/53","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/53","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/53.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/53.patch"},"body":"Change `isinstance` test in features when generating features from dict.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/53\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/52","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/52\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/52\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/52\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/52","id":613339071,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE0MTEyMDAy","number":52,"title":"allow dummy folder structure to handle dict of 
lists","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1588773275000,"updated_at":1588773319000,"closed_at":1588773318000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/52","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/52","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/52.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/52.patch"},"body":"`esnli.py` needs that extension of the dummy data testing.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/52\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/51","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/51\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/51\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/51\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/51","id":613266668,"node_id":"MDExOlB1bGxSZXF1ZXN0NDE0MDUyOTYw","number":51,"title":"[Testing] Improved testing 
structure","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Awesome!\r\nLet's have this in the doc at the end :-)"],"created_at":1588766587000,"updated_at":1588889239000,"closed_at":1588771218000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/51","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/51","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/51.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/51.patch"},"body":"This PR refactors the test design a bit and puts the mock download manager in the `utils` files as it is just a test helper class.\r\n\r\nas @mariamabarham pointed out, creating a dummy folder structure can be quite hard to grasp.\r\nThis PR tries to change that to some extent.\r\n\r\nIt follows the following logic for the `dummy` folder structure now:\r\n1.) The data bulider has no config -> the `dummy` folder structure is:\r\n`dummy\/<version>\/dummy_data.zip`\r\n2) The data builder has >= 1 configs -> the `dummy` folder structure is: \r\n`dummy\/<config_name_1>\/<version>\/dummy_data.zip`\r\n`dummy\/<config_name_2>\/<version>\/dummy_data.zip`\r\n\r\nNow, the difficult part is how to create the `dummy_data.zip` file. There are two cases:\r\nA) The `data_urs` parameter inserted into the `download_and_extract` fn is a **string**:\r\n-> the `dummy_data.zip` file zips the folder: \r\n`dummy_data\/<relative_path_of_folder_structure_of_url>`\r\nB) The `data_urs` parameter inserted into the `download_and_extract` fn is a **dict**:\r\n-> the `dummy_data.zip` file zips the folder: \r\n`dummy_data\/<relative_path_of_folder_structure_of_url_behind _key_1>`\r\n`dummy_data\/<relative_path_of_folder_structure_of_url_behind _key_2>`\r\n\r\nBy relative folder structure I mean `url_path.split('.\/')[-1]`. As an example the dataset **xquad** by deepmind has the following url path behind the key `de`: `https:\/\/github.com\/deepmind\/xquad\/blob\/master\/xquad.de.json` \r\n-> This means that the relative url path should be `xquad.de.json`.\r\n\r\n\r\n@mariamabarham B) is a change from how is was before and I think is makes more sense. \r\nWhile before the `dummy_data.zip` file for xquad with config `de` looked like:\r\n`dummy_data\/de` it would now look like `dummy_data\/xquad.de.json`. I think this is better and easier to understand. 
\r\n\r\nTherefore there are currently 6 tests that would have to have changed their dummy folder structure, but which can easily be done (30min). \r\n\r\nI also added a function: `print_dummy_data_folder_structure` that prints out the expected structures when testing which should be quite helpful.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/51\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/50","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/50\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/50\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/50\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/50","id":612583126,"node_id":"MDExOlB1bGxSZXF1ZXN0NDEzNTAwMjE0","number":50,"title":"[Tests] test only for fast test as a default","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Test failure is not related to change in test file.\r\n"],"created_at":1588683562000,"updated_at":1588683738000,"closed_at":1588683736000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/50","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/50","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/50.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/50.patch"},"body":"Test only for one config on circle ci to speed up testing. Add all config test as a slow test. 
\r\n@mariamabarham @thomwolf ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/50\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/49","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/49\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/49\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/49\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/49","id":612545483,"node_id":"MDExOlB1bGxSZXF1ZXN0NDEzNDY5ODg0","number":49,"title":"fix flatten nested","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1588679713000,"updated_at":1588687166000,"closed_at":1588687165000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/49","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/49","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/49.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/49.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/49\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/48","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/48\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/48\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/48\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/48","id":612504687,"node_id":"MDExOlB1bGxSZXF1ZXN0NDEzNDM2MTgz","number":48,"title":"[Command Convert] remove tensorflow 
import","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1588675260000,"updated_at":1588677238000,"closed_at":1588677236000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/48","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/48","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/48.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/48.patch"},"body":"Remove all tensorflow import statements.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/48\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/47","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/47\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/47\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/47\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/47","id":612446493,"node_id":"MDExOlB1bGxSZXF1ZXN0NDEzMzg5MDc1","number":47,"title":"[PyArrow Feature] fix py arrow 
bool","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1588668988000,"updated_at":1588675228000,"closed_at":1588675227000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/47","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/47","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/47.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/47.patch"},"body":"To me it seems that `bool` can only be accessed with `bool_` when looking at the pyarrow types: https:\/\/arrow.apache.org\/docs\/python\/api\/datatypes.html. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/47\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/46","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/46\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/46\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/46\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/46","id":612398190,"node_id":"MDExOlB1bGxSZXF1ZXN0NDEzMzUxNTY0","number":46,"title":"[Features] Strip str key before dict look-up","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1588663905000,"updated_at":1588667865000,"closed_at":1588667864000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/46","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/46","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/46.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/46.patch"},"body":"The dataset `anli.py` currently fails because it tries to look up a key `1\\n` in a dict that only has the key `1`. 
Added an if statement to strip key if it cannot be found in dict.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/46\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/45","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/45\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/45\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/45\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/45","id":612386583,"node_id":"MDExOlB1bGxSZXF1ZXN0NDEzMzQzMjAy","number":45,"title":"[Load] Separate Module kwargs and builder kwargs.","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1588662594000,"updated_at":1588931482000,"closed_at":1588931482000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/45","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/45","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/45.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/45.patch"},"body":"Kwargs for the `load_module` fn should be passed with `module_xxxx` to `builder_kwargs` of `load` fn.\r\n\r\nThis is a follow-up PR of: https:\/\/github.com\/huggingface\/nlp\/pull\/41","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/45\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/44","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/44\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/44\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/44\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/44","id":611873486,"node_id":"MDExOlB1bGxSZXF1ZXN0NDEyOTUwMzU1","number":44,"title":"[Tests] Fix tests for datasets with no 
config","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1588598738000,"updated_at":1588598884000,"closed_at":1588598883000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/44","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/44","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/44.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/44.patch"},"body":"Forgot to fix `None` problem for datasets that have no config this in PR: https:\/\/github.com\/huggingface\/nlp\/pull\/42","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/44\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/43","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/43\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/43\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/43\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/43","id":611773279,"node_id":"MDExOlB1bGxSZXF1ZXN0NDEyODcxNTE5","number":43,"title":"[Checksums] If no configs exist prevent to run over empty 
list","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Whoops I fixed it directly on master before checking that you have done it in this PR. We may close it","Yeah, I saw :-) But I think we should add this as well since some datasets have an empty list of configs and then as the code is now it would fail. \r\n\r\nIn this PR, I just make sure that the code jumps in the correct else if \"there are no configs\" as is the case for some datasets @mariamabarham ","Sorry, I thought you meant a different commit . Just saw this one: https:\/\/github.com\/huggingface\/nlp\/commit\/7c644f284e2303b57612a6e7c904fe13906d893f\r\n.\r\n\r\nAll good then :-) "],"created_at":1588588782000,"updated_at":1588598283000,"closed_at":1588598283000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/43","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/43","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/43.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/43.patch"},"body":"`movie_rationales` e.g. 
has no configs.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/43\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/42","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/42\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/42\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/42\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/42","id":611754343,"node_id":"MDExOlB1bGxSZXF1ZXN0NDEyODU1OTE2","number":42,"title":"[Tests] allow tests for builders without config","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1588586782000,"updated_at":1588597850000,"closed_at":1588597848000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/42","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/42","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/42.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/42.patch"},"body":"Some dataset scripts have no configs - the tests have to be adapted for this case. 
\r\nIn this case the dummy data will be saved as:\r\n- natural_questions\r\n -> dummy\r\n -> -> 1.0.0 (version num)\r\n -> -> -> dummy_data.zip\r\n ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/42\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/41","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/41\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/41\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/41\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/41","id":611739219,"node_id":"MDExOlB1bGxSZXF1ZXN0NDEyODQzNDQy","number":41,"title":"[Load module] allow kwargs into load module","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1588585331000,"updated_at":1588621147000,"closed_at":1588621146000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/41","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/41","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/41.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/41.patch"},"body":"Currently it is not possible to force a re-download of the dataset script. 
\r\n\r\nThis simple change allows to pass ``force_reload=True`` as ``builder_kwargs`` in the ``load.py`` function.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/41\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/40","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/40\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/40\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/40\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/40","id":611721308,"node_id":"MDExOlB1bGxSZXF1ZXN0NDEyODI4NzU2","number":40,"title":"Update remote checksums instead of overwrite","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1588583594000,"updated_at":1588593111000,"closed_at":1588593109000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/40","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/40","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/40.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/40.patch"},"body":"When the user uploads a dataset on S3, checksums are also uploaded with the `--upload_checksums` parameter.\r\n\r\nIf the user uploads the dataset in several steps, then the remote checksums file was previously overwritten. 
Now it's going to be updated with the new checksums.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/40\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/39","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/39\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/39\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/39\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/39","id":611712135,"node_id":"MDExOlB1bGxSZXF1ZXN0NDEyODIxNTA4","number":39,"title":"[Test] improve slow testing","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1588582713000,"updated_at":1588582790000,"closed_at":1588582789000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/39","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/39","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/39.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/39.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/39\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/38","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/38\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/38\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/38\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/38","id":611677656,"node_id":"MDU6SXNzdWU2MTE2Nzc2NTY=","number":38,"title":"[Checksums] Error for some 
datasets","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"assignees":[{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/a
pi.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["@lhoestq - could you take a look? It's not very urgent though!","Fixed with 06882b4\r\n\r\nNow your command works :)\r\nNote that you can also do\r\n```\r\nnlp-cli test datasets\/nlp\/xnli --save_checksums\r\n```\r\nSo that it will save the checksums directly in the right directory.","Awesome!"],"created_at":1588579216000,"updated_at":1588585700000,"closed_at":1588585700000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":null,"body":"The checksums command works very nicely for `squad`. But for `crime_and_punish` and `xnli`, \r\nthe same bug happens:\r\n\r\nWhen running: \r\n```\r\npython nlp-cli nlp-cli test xnli --save_checksums\r\n```\r\n\r\nleads to:\r\n\r\n```\r\n File \"nlp-cli\", line 33, in <module>\r\n service.run()\r\n File \"\/home\/patrick\/python_bin\/nlp\/commands\/test.py\", line 61, in run\r\n ignore_checksums=self._ignore_checksums,\r\n File \"\/home\/patrick\/python_bin\/nlp\/builder.py\", line 383, in download_and_prepare\r\n self._download_and_prepare(dl_manager=dl_manager, download_config=download_config)\r\n File \"\/home\/patrick\/python_bin\/nlp\/builder.py\", line 627, in _download_and_prepare\r\n dl_manager=dl_manager, max_examples_per_split=download_config.max_examples_per_split,\r\n File \"\/home\/patrick\/python_bin\/nlp\/builder.py\", line 431, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"\/home\/patrick\/python_bin\/nlp\/datasets\/xnli\/8bf4185a2da1ef2a523186dd660d9adcf0946189e7fa5942ea31c63c07b68a7f\/xnli.py\", line 95, in _split_generators\r\n dl_dir = dl_manager.download_and_extract(_DATA_URL)\r\n File \"\/home\/patrick\/python_bin\/nlp\/utils\/download_manager.py\", line 246, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"\/home\/patrick\/python_bin\/nlp\/utils\/download_manager.py\", line 186, in download\r\n self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths)\r\n File \"\/home\/patrick\/python_bin\/nlp\/utils\/download_manager.py\", line 166, in _record_sizes_checksums\r\n self._recorded_sizes_checksums[url] = get_size_checksum(path)\r\n File \"\/home\/patrick\/python_bin\/nlp\/utils\/checksums_utils.py\", line 81, in get_size_checksum\r\n with open(path, \"rb\") as f:\r\nTypeError: expected str, bytes or os.PathLike object, not tuple\r\n```\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/38\/timeline","performed_via_github_app":null,"is_pull_request":false} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/37","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/37\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/37\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/37\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/37","id":611670295,"node_id":"MDExOlB1bGxSZXF1ZXN0NDEyNzg5MjQ4","number":37,"title":"[Datasets ToDo-List] add datasets","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"assignees":[{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\
/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false}],"milestone":null,"comments":["Note:\r\n```\r\nnlp-cli test datasets\/nlp\/<your-dataset-folder> --save_checksums --all_configs\r\n```\r\ndirectly saves the checksums in the right place, and runs for all the dataset configurations.","@patrickvonplaten can you provide the add the link to the PR for the dummy data? ","https:\/\/github.com\/huggingface\/nlp\/pull\/15 - But it's probably best to checkout into this branch and look how the dummy data strtucture is for `squad` for example.","are lock files supposed to stay ?","> are lock files supposed to stay ?\r\n\r\nNot sure! I think the checksum command creates them, so I just uploaded them as well.","We can trash the `lock` file, they are dummy file that are only used to avoid concurrent access when the library is run.\r\nYou can read the filelock readme and code, it's a very simple single-file library: https:\/\/github.com\/benediktschmitt\/py-filelock","The testing design was slightly changed as explained in https:\/\/github.com\/huggingface\/nlp\/pull\/51 . \r\nIf creating the dummy folder is too confusing it helps to upload everything else to AWS, then run the test and check the INFO when testing on how to create the dummy folder structure.","Closing because we can now work on master"],"created_at":1588578459000,"updated_at":1588945703000,"closed_at":1588945703000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/37","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/37","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/37.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/37.patch"},"body":"## Description\r\n\r\nThis PR acts as a dashboard to see which datasets are added to the library and work. \r\n\r\nCicle-ci should always be green so that we can be sure that newly added datasets are functional. 
\r\nThis PR should not be merged.\r\n\r\n\r\n## Progress\r\n\r\n**For the following datasets the test commands**:\r\n```\r\nRUN_SLOW=1 pytest tests\/test_dataset_common.py::DatasetTest::test_load_real_dataset_<your-dataset-name>\r\n```\r\nand \r\n```\r\nRUN_SLOW=1 pytest tests\/test_dataset_common.py::DatasetTest::test_load_dataset_all_configs_<your-dataset-name>\r\n```\r\n\r\n**passes**.\r\n\r\n- [x] Squad\r\n- [x] Sentiment140\r\n- [x] XNLI\r\n- [x] Crime_and_Punish\r\n- [x] movie_rationales\r\n- [x] ai2_arc\r\n- [x] anli\r\n- [x] event2Mind\r\n- [x] Fquad\r\n- [x] blimp\r\n- [x] empathetic_dialogues\r\n- [x] cosmos_qa\r\n- [x] xquad\r\n- [x] blog_authorship_corpus\r\n- [x] SNLI\r\n- [x] break_data\r\n- [x] SQuAD v2\r\n- [x] cfq\r\n- [x] eraser_multi_rc\r\n- [x] Glue\r\n- [x] Tydiqa\r\n- [x] wiki_qa\r\n- [x] wikitext\r\n- [x] winogrande\r\n- [x] wiqa\r\n- [x] esnli\r\n- [x] civil_comments\r\n- [x] commonsense_qa\r\n- [x] com_qa\r\n- [x] coqa\r\n- [x] wiki_split\r\n- [x] cos_e\r\n- [x] xcopa\r\n- [x] quarel\r\n- [x] quartz\r\n- [x] squad_it\r\n- [x] quoref \r\n- [x] squad_pt\r\n- [x] cornell_movie_dialog\r\n- [x] SciQ\r\n- [x] Scifact\r\n- [x] hellaswag\r\n- [x] ted_multi (in translate)\r\n- [x] Aeslc (summarization)\r\n- [x] drop\r\n- [x] gap\r\n- [x] hansard\r\n- [x] opinosis\r\n- [x] MLQA\r\n- [x] math_dataset\r\n\r\n## How-To-Add a dataset\r\n\r\n**Before adding a dataset make sure that your branch is up to date**:\r\n1. `git checkout add_datasets`\r\n2. `git pull`\r\n\r\n**Add a dataset via the `convert_dataset.sh` bash script:** \r\n\r\nRunning `bash convert_dataset.sh <file\/to\/tfds\/datascript.py>` (*e.g.* `bash convert_dataset.sh ..\/tensorflow-datasets\/tensorflow_datasets\/text\/movie_rationales.py`) will automatically run all the steps mentioned in **Add a dataset manually** below. \r\n\r\nMake sure that you run `convert_dataset.sh` from the root folder of `nlp`.\r\n\r\nThe conversion script should work almost always for step 1): \"convert dataset script from tfds to nlp format\" and 2) \"create checksum file\" and step 3) \"make style\".\r\n\r\nIt can also sometimes automatically run step 4) \"create the correct dummy data from tfds\", but this will only work if a) there is either no config name or only one config name and b) the `tfds testing\/test_data\/fake_example` is in the correct form.\r\n\r\nNevertheless, the script should always be run in the beginning until an error occurs to be more efficient. \r\n\r\nIf the conversion script does not work or fails at some step, then you can run the steps manually as follows:\r\n\r\n**Add a dataset manually** \r\n\r\nMake sure you run all of the following commands from the root of your `nlp` git clone.\r\nAlso make sure that you changed to this branch:\r\n```\r\ngit checkout add_datasets\r\n```\r\n\r\n1) the tfds datascript file should be converted to `nlp` style:\r\n\r\n```\r\npython nlp-cli convert --tfds_path <path\/to\/tensorflow_datasets\/text\/your_dataset_name>.py --nlp_directory datasets\/nlp\r\n```\r\n\r\nThis will convert the tdfs script and create a folder with the correct name.\r\n\r\n2) the checksum file should be added. 
Use the command:\r\n```\r\npython nlp-cli test datasets\/nlp\/<your-dataset-folder> --save_checksums --all_configs\r\n```\r\n\r\nA checksums.txt file should be created in your folder and the structure should look as follows:\r\n\r\nsquad\/\r\n\u251c\u2500\u2500 squad.py\/\r\n\u2514\u2500\u2500 urls_checksums\/\r\n...........\u2514\u2500\u2500 checksums.txt\r\n\r\nDelete the created `*.lock` file afterward - it should not be uploaded to AWS.\r\n\r\n3) run black and isort on your newly added datascript files so that they look nice:\r\n\r\n```\r\nmake style\r\n```\r\n\r\n4) the dummy data should be added. For this it might be useful to take a look into the structure of other examples as shown in the PR here and at `<path\/to\/tensorflow_datasets\/testing\/test_data\/test_data\/fake_examples>` whether the same data can be used.\r\n\r\n5) the data can be uploaded to AWS using the command\r\n```\r\naws s3 cp datasets\/nlp\/<your-dataset-folder> s3:\/\/datasets.huggingface.co\/nlp\/<your-dataset-folder> --recursive\r\n```\r\n\r\n6) check whether all works as expected using: \r\n```\r\nRUN_SLOW=1 pytest tests\/test_dataset_common.py::DatasetTest::test_load_real_dataset_<your-dataset-name>\r\n```\r\nand \r\n```\r\nRUN_SLOW=1 pytest tests\/test_dataset_common.py::DatasetTest::test_load_dataset_all_configs_<your-dataset-name>\r\n```\r\n\r\n7) push to this PR and rerun the circle ci workflow to check whether circle ci stays green.\r\n\r\n8) Edit this commend and tick off your newly added dataset :-) \r\n\r\n## TODO-list\r\n\r\nMaybe we can add a TODO-list here for everybody that feels like adding new datasets so that we will not add the same datasets.\r\n\r\nHere a link to available datasets: https:\/\/docs.google.com\/spreadsheets\/d\/1zOtEqOrnVQwdgkC4nJrTY6d-Av02u0XFzeKAtBM2fUI\/edit#gid=0\r\n\r\nPatrick:\r\n\r\n- [ ] boolq - *weird download link*\r\n- [ ] c4 - *beam dataset*","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/37\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/36","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/36\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/36\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/36\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/36","id":611528349,"node_id":"MDExOlB1bGxSZXF1ZXN0NDEyNjgwOTk1","number":36,"title":"Metrics - refactoring, adding support for download and distributed 
metrics","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Ok, this one seems to be ready to merge.","> Really cool, I love it! I would just raise a tiny point, the distributive version of the metrics might not work properly with TF because it is a different way to do, why not to add a \"framework\" detection and raise warning when TF is used, saying something like \"not available yet in TF switch to non distributive metric computation\".\r\n> \r\n> What do you think?\r\n\r\nGood point @jplu I'm not sure how you should do distributed metrics evaluation for TF.\r\nThere is only one python script, right?\r\nMaybe it's just the same as in the not-distributed case?","I think non-distributed case should work in TF for both cases indeed, but this needs to be tested."],"created_at":1588546817000,"updated_at":1589184962000,"closed_at":1589184960000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/36","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/36","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/36.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/36.patch"},"body":"Refactoring metrics to have a similar loading API than the datasets and improving the import system.\r\n\r\n# Import system\r\nThe import system has ben upgraded. There are now three types of imports allowed:\r\n1. `library` imports (identified as \"absolute imports\")\r\n```python\r\nimport seqeval\r\n```\r\n=> we'll test all the imports before running the scripts and if one cannot be imported we'll display an error message like this one:\r\n`ImportError: To be able to use this metric\/dataset, you need to install the following dependencies ['seqeval'] using 'pip install seqeval' for instance'`\r\n\r\n2. `internal` imports (identified as \"relative imports\")\r\n```python\r\nimport .c4_utils\r\n```\r\n=> we'll assume this point to a file in the same directory\/S3-directory as the main script and download this file.\r\n\r\n2. 
`external` imports (identified as \"relative imports\" with a comment starting with `# From:`)\r\n```python\r\nfrom .nmt_bleu import compute_bleu # From: https:\/\/github.com\/tensorflow\/nmt\/blob\/master\/nmt\/scripts\/bleu.py\r\n```\r\n=> we'll assume this points to the URL of a python script (if it's a link to a github file, we'll take the raw file automatically).\r\n=> the script is downloaded and renamed to the import name (here above renamed from `bleu.py` to `nmt_bleu.py`). Renaming the file can be necessary if the distant file has the same name as the dataset\/metric processing script. If you forgot to rename the distant script and it has the same name as the dataset\/metric, you'll have an explicit error message asking to rename the import anyway.\r\n\r\n# Hosting metrics\r\n\r\nMetrics are hosted on an S3 bucket like the dataset processing scripts.\r\n\r\n# Metrics scripts\r\n\r\nMetrics scripts have a lot in common with dataset processing scripts. They also have a `metric.info` including citations, descriptions and links to relevant pages.\r\n\r\nMetrics have more documentation to supply to ensure they are used well.\r\n\r\nFour examples are already included for reference in [.\/metrics](.\/metrics): BLEU, ROUGE, SacreBLEU and SeqEVAL.\r\n\r\n# Automatic support for distributed\/multi-processing metric computation\r\n\r\nWe've also added support for automatic distributed\/multi-processing metric computation (e.g. when using DistributedDataParallel). We leverage our own dataset format for smart caching in this case. \r\n\r\nHere is a quick gist of a standard use of metrics (the simplest usage):\r\n```python\r\nimport nlp\r\nbleu_metric = nlp.load_metric('bleu')\r\n\r\n# If you only have a single iteration, you can easily compute the score like this\r\npredictions = model(inputs)\r\nscore = bleu_metric.compute(predictions, references)\r\n\r\n# If you have a loop, you can \"add\" your predictions and references at each iteration instead of having to save them yourself (the metric object stores them efficiently for you)\r\nfor batch in dataloader:\r\n model_inputs, targets = batch\r\n predictions = model(model_inputs)\r\n bleu_metric.add(predictions, targets)\r\nscore = bleu_metric.compute() # Compute the score from all the stored predictions\/references\r\n```\r\n\r\nHere is a quick gist of a use in a distributed torch setup (should work for any python multi-process setup actually). 
It's pretty much identical to the second example above:\r\n```python\r\nimport nlp\r\n# You need to give the total number of parallel python processes (num_process) and the id of each process (process_id)\r\nbleu = nlp.load_metric('bleu', process_id=torch.distributed.get_rank(),b num_process=torch.distributed.get_world_size())\r\n\r\nfor batch in dataloader:\r\n model_input, targets = batch\r\n predictions = model(model_inputs)\r\n bleu.add(predictions, targets)\r\nscore = bleu_metric.compute() # Compute the score on the first node by default (can be set to compute on each node as well)\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/36\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/35","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/35\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/35\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/35\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/35","id":611413731,"node_id":"MDExOlB1bGxSZXF1ZXN0NDEyNjAyMTc0","number":35,"title":"[Tests] fix typo","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1588512229000,"updated_at":1588512261000,"closed_at":1588512260000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/35","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/35","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/35.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/35.patch"},"body":"@lhoestq - currently the slow test fail with:\r\n\r\n```\r\n_____________________________________________________________________________________ DatasetTest.test_load_real_dataset_xnli _____________________________________________________________________________________\r\n \r\nself = <tests.test_dataset_common.DatasetTest testMethod=test_load_real_dataset_xnli>, dataset_name = 'xnli'\r\n \r\n @slow \r\n def test_load_real_dataset(self, dataset_name):\r\n with tempfile.TemporaryDirectory() as temp_data_dir: \r\n> dataset = load(dataset_name, data_dir=temp_data_dir)\r\n 
\r\ntests\/test_dataset_common.py:153: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n..\/..\/python_bin\/nlp\/load.py:497: in load \r\n dbuilder.download_and_prepare(**download_and_prepare_kwargs)\r\n..\/..\/python_bin\/nlp\/builder.py:383: in download_and_prepare\r\n self._download_and_prepare(dl_manager=dl_manager, download_config=download_config)\r\n..\/..\/python_bin\/nlp\/builder.py:627: in _download_and_prepare\r\n dl_manager=dl_manager, max_examples_per_split=download_config.max_examples_per_split,\r\n..\/..\/python_bin\/nlp\/builder.py:431: in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n..\/..\/python_bin\/nlp\/datasets\/xnli\/8bf4185a2da1ef2a523186dd660d9adcf0946189e7fa5942ea31c63c07b68a7f\/xnli.py:95: in _split_generators \r\n dl_dir = dl_manager.download_and_extract(_DATA_URL)\r\n..\/..\/python_bin\/nlp\/utils\/download_manager.py:246: in download_and_extract\r\n return self.extract(self.download(url_or_urls)) \r\n..\/..\/python_bin\/nlp\/utils\/download_manager.py:186: in download \r\n self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths)\r\n..\/..\/python_bin\/nlp\/utils\/download_manager.py:166: in _record_sizes_checksums \r\n self._recorded_sizes_checksums[url] = get_size_checksum(path)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n \r\npath = ('', '\/tmp\/tmpkajlg9yc\/downloads\/c0f7773c480a3f2d85639d777e0e17e65527460310d80760fd3fc2b2f2960556.c952a63cb17d3d46e412ceb7dbcd656ce2b15cc9ef17f50c28f81c48a7c853b5')\r\n \r\n def get_size_checksum(path: str) -> Tuple[int, str]:\r\n \"\"\"Compute the file size and the sha256 checksum of a file\"\"\" \r\n m = sha256()\r\n> with open(path, \"rb\") as f: \r\nE TypeError: expected str, bytes or os.PathLike object, not tuple\r\n \r\n..\/..\/python_bin\/nlp\/utils\/checksums_utils.py:81: TypeError \r\n```\r\n\r\n- the checksums probably need to be updated no? 
And we should also think about how to write a test for the checksums.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/35\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/34","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/34\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/34\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/34\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/34","id":611385516,"node_id":"MDExOlB1bGxSZXF1ZXN0NDEyNTg0OTM0","number":34,"title":"[Tests] add slow tests","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1588503682000,"updated_at":1588508310000,"closed_at":1588508309000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/34","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/34","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/34.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/34.patch"},"body":"This PR adds a slow test that downloads the \"real\" dataset. 
The test is decorated as \"slow\" so that it will not automatically run on circle ci.\r\n\r\nBefore uploading a dataset, one should test that this test passes, manually by running \r\n\r\n```\r\nRUN_SLOW=1 pytest tests\/test_dataset_common.py::DatasetTest::test_load_real_dataset_<your-dataset-script-name>\r\n```\r\n\r\nThis PR should be merged after PR: #33 ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/34\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/33","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/33\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/33\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/33\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/33","id":611052081,"node_id":"MDExOlB1bGxSZXF1ZXN0NDEyMzU1ODE0","number":33,"title":"Big cleanup\/refactoring for clean serialization","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Great! I think when this merged, we can merge sure that Circle Ci stays happy when uploading new datasets. "],"created_at":1588376757000,"updated_at":1588508254000,"closed_at":1588508253000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/33","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/33","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/33.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/33.patch"},"body":"This PR cleans many base classes to re-build them as `dataclasses`. 
We can thus use a simple serialization workflow for `DatasetInfo`, including it's `Features` and `SplitDict` based on `dataclasses` `asdict()`.\r\n\r\nThe resulting code is a lot shorter, can be easily serialized\/deserialized, dataset info are human-readable and we can get rid of the `dataclass_json` dependency.\r\n\r\nThe scripts have breaking changes and the conversion tool is updated.\r\n\r\nExample of dataset info in SQuAD script now:\r\n```python\r\n def _info(self):\r\n return nlp.DatasetInfo(\r\n description=_DESCRIPTION,\r\n features=nlp.Features({\r\n \"id\":\r\n nlp.Value('string'),\r\n \"title\":\r\n nlp.Value('string'),\r\n \"context\":\r\n nlp.Value('string'),\r\n \"question\":\r\n nlp.Value('string'),\r\n \"answers\":\r\n nlp.Sequence({\r\n \"text\": nlp.Value('string'),\r\n \"answer_start\": nlp.Value('int32'),\r\n }),\r\n }),\r\n # No default supervised_keys (as we have to pass both question\r\n # and context as input).\r\n supervised_keys=None,\r\n homepage=\"https:\/\/rajpurkar.github.io\/SQuAD-explorer\/\",\r\n citation=_CITATION,\r\n )\r\n```\r\n\r\nExample of serialized dataset info:\r\n```bash\r\n{\r\n \"description\": \"Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.\\n\",\r\n \"citation\": \"@article{2016arXiv160605250R,\\n author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},\\n Konstantin and {Liang}, Percy},\\n title = \\\"{SQuAD: 100,000+ Questions for Machine Comprehension of Text}\\\",\\n journal = {arXiv e-prints},\\n year = 2016,\\n eid = {arXiv:1606.05250},\\n pages = {arXiv:1606.05250},\\narchivePrefix = {arXiv},\\n eprint = {1606.05250},\\n}\\n\",\r\n \"homepage\": \"https:\/\/rajpurkar.github.io\/SQuAD-explorer\/\",\r\n \"license\": \"\",\r\n \"features\": {\r\n \"id\": {\r\n \"dtype\": \"string\",\r\n \"_type\": \"Value\"\r\n },\r\n \"title\": {\r\n \"dtype\": \"string\",\r\n \"_type\": \"Value\"\r\n },\r\n \"context\": {\r\n \"dtype\": \"string\",\r\n \"_type\": \"Value\"\r\n },\r\n \"question\": {\r\n \"dtype\": \"string\",\r\n \"_type\": \"Value\"\r\n },\r\n \"answers\": {\r\n \"feature\": {\r\n \"text\": {\r\n \"dtype\": \"string\",\r\n \"_type\": \"Value\"\r\n },\r\n \"answer_start\": {\r\n \"dtype\": \"int32\",\r\n \"_type\": \"Value\"\r\n }\r\n },\r\n \"length\": -1,\r\n \"_type\": \"Sequence\"\r\n }\r\n },\r\n \"supervised_keys\": null,\r\n \"name\": \"squad\",\r\n \"version\": {\r\n \"version_str\": \"1.0.0\",\r\n \"description\": \"New split API (https:\/\/tensorflow.org\/datasets\/splits)\",\r\n \"nlp_version_to_prepare\": null,\r\n \"major\": 1,\r\n \"minor\": 0,\r\n \"patch\": 0\r\n },\r\n \"splits\": {\r\n \"train\": {\r\n \"name\": \"train\",\r\n \"num_bytes\": 79426386,\r\n \"num_examples\": 87599,\r\n \"dataset_name\": \"squad\"\r\n },\r\n \"validation\": {\r\n \"name\": \"validation\",\r\n \"num_bytes\": 10491883,\r\n \"num_examples\": 10570,\r\n \"dataset_name\": \"squad\"\r\n }\r\n },\r\n \"size_in_bytes\": 0,\r\n \"download_size\": 35142551,\r\n \"download_checksums\": []\r\n}\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/33\/timeline","performed_via_github_app":null,"is_pull_request":true} 
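A minimal sketch of the `dataclasses` serialization workflow described in the refactoring above, using only the standard library. The `ExampleInfo` class and its fields are hypothetical stand-ins for illustration (not the actual `DatasetInfo`); the point is only that `asdict()` plus `json` gives the short, human-readable round trip the PR describes, without the `dataclass_json` dependency:

```python
import json
from dataclasses import asdict, dataclass, field

# Hypothetical, simplified stand-in for an info container (not the real DatasetInfo).
@dataclass
class ExampleInfo:
    description: str = ""
    citation: str = ""
    homepage: str = ""
    features: dict = field(default_factory=dict)

info = ExampleInfo(
    description="A toy reading-comprehension dataset.",
    homepage="https://example.com",
    features={"id": {"dtype": "string", "_type": "Value"}},
)

# asdict() turns the dataclass into a plain dict, which dumps to human-readable JSON.
serialized = json.dumps(asdict(info), indent=2)

# The same dict restores the dataclass, so no extra serialization library is needed.
restored = ExampleInfo(**json.loads(serialized))
assert restored == info
```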
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/32","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/32\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/32\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/32\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/32","id":610715580,"node_id":"MDExOlB1bGxSZXF1ZXN0NDEyMTAzMzIx","number":32,"title":"Fix map caching notebooks","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1588334126000,"updated_at":1588508158000,"closed_at":1588508157000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/32","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/32","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/32.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/32.patch"},"body":"Previously, caching results with `.map()` didn't work in notebooks.\r\nTo reuse a result, `.map()` serializes the functions with `dill.dumps` and then it hashes it.\r\n\r\nThe problem is that when using `dill.dumps` to serialize a function, it also saves its origin (filename + line no.) and the origin of all the `globals` this function needs. However for notebooks and shells, the filename looks like \\<ipython-input-13-9ed2afe61d25\\> and the line no. 
changes often.\r\n\r\nTo fix the problem, I added a new dispatch function for code objects that ignore the origin of the code if it comes from a notebook or a python shell.\r\n\r\nI tested these cases in a notebook:\r\n- lambda functions\r\n- named functions\r\n- methods\r\n- classmethods\r\n- staticmethods\r\n- classes that implement `__call__`\r\n\r\nThe caching now works as expected for all of them :)\r\nI also tested the caching in the demo notebook and it works fine !","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/32\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/31","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/31\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/31\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/31\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/31","id":610677641,"node_id":"MDExOlB1bGxSZXF1ZXN0NDEyMDczNDE4","number":31,"title":"[Circle ci] Install a virtual env before running tests","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1588327877000,"updated_at":1588370776000,"closed_at":1588370775000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/31","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/31","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/31.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/31.patch"},"body":"Install a virtual env before running tests to not running into sudo issues when dynamically downloading files. 
\r\n\r\nSame number of tests now pass \/ fail as on my local computer: \r\n![Screenshot from 2020-05-01 12-14-44](https:\/\/user-images.githubusercontent.com\/23423619\/80798814-8a0a0a80-8ba5-11ea-8db8-599d33bbfccd.png)\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/31\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/30","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/30\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/30\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/30\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/30","id":610549072,"node_id":"MDExOlB1bGxSZXF1ZXN0NDExOTY4Mzk3","number":30,"title":"add metrics which require download files from github","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1588306402000,"updated_at":1589185194000,"closed_at":1589185194000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/30","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/30","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/30.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/30.patch"},"body":"To download files from github, I copied the `load_dataset_module` and its dependencies (without the builder) in `load.py` to `metrics\/metric_utils.py`. I made the following changes:\r\n\r\n- copy the needed files in a folder`metric_name` \r\n- delete all other files that are not needed\r\n\r\nFor metrics that require an external import, I first create a `<metric_name>_imports.py` file which contains all external urls. 
Then I create a `<metric_name>.py` in which I will load the external files using `<metric_name>_imports.py` ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/30\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/29","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/29\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/29\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/29\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/29","id":610243997,"node_id":"MDExOlB1bGxSZXF1ZXN0NDExNzIwODMx","number":29,"title":"Hf_api small changes","user":{"login":"julien-c","id":326577,"node_id":"MDQ6VXNlcjMyNjU3Nw==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/326577?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/julien-c","html_url":"https:\/\/github.com\/julien-c","followers_url":"https:\/\/api.github.com\/users\/julien-c\/followers","following_url":"https:\/\/api.github.com\/users\/julien-c\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/julien-c\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/julien-c\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/julien-c\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/julien-c\/orgs","repos_url":"https:\/\/api.github.com\/users\/julien-c\/repos","events_url":"https:\/\/api.github.com\/users\/julien-c\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/julien-c\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Ok merging! 
I think it's good now"],"created_at":1588266403000,"updated_at":1588276305000,"closed_at":1588276304000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/29","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/29","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/29.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/29.patch"},"body":"From Patrick: \r\n```python \r\nfrom nlp import hf_api\r\napi = hf_api.HfApi()\r\napi.dataset_list()\r\n```\r\n\r\nworks :-) ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/29\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/28","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/28\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/28\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/28\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/28","id":610241907,"node_id":"MDExOlB1bGxSZXF1ZXN0NDExNzE5MTQy","number":28,"title":"[Circle ci] Adds circle ci config","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1588266215000,"updated_at":1588276269000,"closed_at":1588276268000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/28","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/28","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/28.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/28.patch"},"body":"@thomwolf can you take a look and set up circle ci on: \r\nhttps:\/\/app.circleci.com\/projects\/project-dashboard\/github\/huggingface\r\n\r\nI think for `nlp` only admins can set it up, which I guess is you :-) ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/28\/timeline","performed_via_github_app":null,"is_pull_request":true} 
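A minimal sketch of the "external imports" mechanism described for the metric scripts above (a distant script is fetched from a URL, saved under the import name, and then imported). The helper name, temporary location, and raw URL below are illustrative assumptions, not the library's actual implementation, and the URL may have moved since:

```python
import importlib.util
import tempfile
import urllib.request
from pathlib import Path

def import_external_script(url: str, module_name: str):
    """Download a single-file Python script and import it under `module_name` (hypothetical helper)."""
    # Save the distant file under the import name, mirroring the renaming rule described above.
    target = Path(tempfile.mkdtemp()) / f"{module_name}.py"
    with urllib.request.urlopen(url) as response:
        target.write_bytes(response.read())
    # Load the downloaded file as a regular Python module.
    spec = importlib.util.spec_from_file_location(module_name, target)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

# Assumed raw URL of the external BLEU script mentioned in the metrics PR above.
nmt_bleu = import_external_script(
    "https://raw.githubusercontent.com/tensorflow/nmt/master/nmt/scripts/bleu.py",
    "nmt_bleu",
)
print(hasattr(nmt_bleu, "compute_bleu"))
```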
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/27","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/27\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/27\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/27\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/27","id":610230476,"node_id":"MDExOlB1bGxSZXF1ZXN0NDExNzA5OTc0","number":27,"title":"[Cleanup] Removes all files in testing except test_dataset_common","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1588265121000,"updated_at":1588268365000,"closed_at":1588268363000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/27","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/27","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/27.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/27.patch"},"body":"As far as I know, all files in `tests` were old `tfds test files` so I removed them. We can still look them up on the other library. 
","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/27\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/26","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/26\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/26\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/26\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/26","id":610226047,"node_id":"MDExOlB1bGxSZXF1ZXN0NDExNzA2NjA2","number":26,"title":"[Tests] Clean tests","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1588264709000,"updated_at":1588277524000,"closed_at":1588277523000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/26","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/26","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/26.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/26.patch"},"body":"the abseil testing library (https:\/\/abseil.io\/docs\/python\/quickstart.html) is better than the one I had before, so I decided to switch to that and changed the `setup.py` config file. \r\nAbseil has more support and a cleaner API for parametrized testing I think. \r\n\r\nI added a list of all dataset scripts that are currently on AWS, but will replace that once the \r\nAPI is integrated into this lib. 
\r\n\r\nOne can now easily test for just a single function for a single dataset with:\r\n`tests\/test_dataset_common.py::DatasetTest::test_load_dataset_wikipedia` \r\n\r\nNOTE: This PR is rebased on PR #29 so should be merged after.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/26\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/25","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/25\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/25\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/25\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/25","id":609708863,"node_id":"MDExOlB1bGxSZXF1ZXN0NDExMjQ4Nzg2","number":25,"title":"Add script csv datasets","user":{"login":"jplu","id":959590,"node_id":"MDQ6VXNlcjk1OTU5MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/959590?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jplu","html_url":"https:\/\/github.com\/jplu","followers_url":"https:\/\/api.github.com\/users\/jplu\/followers","following_url":"https:\/\/api.github.com\/users\/jplu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jplu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jplu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jplu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jplu\/orgs","repos_url":"https:\/\/api.github.com\/users\/jplu\/repos","events_url":"https:\/\/api.github.com\/users\/jplu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jplu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Very interesting thoughts, we should think deeper about all what you raised indeed.","Ok here is a proposal for a more general API and workflow.\r\n\r\n# New `ArrowBasedBuilder`\r\n\r\nFor all the formats that can be directly and efficiently loaded by Arrow (CSV, JSON, Parquet, Arrow), we don't really want to have to go through a conversion to python and back to Arrow. This new builder has a `_generate_tables` method to yield `Arrow.Tables` instead of single examples.\r\nThe tables can be directly casted in Arrow so it's not necessary to supply `Features`, they can be deduced from the `Table` column.\r\n\r\n# Central role of the `BuilderConfig` to store all the arguments necessary for the Dataset creation.\r\n \r\n`BuilderConfig` provide a few defaults fields `name`, `version`, `description`, `data_files` and `data_dir` which can be used to store values necessary for the creation of the dataset. 
It can be freely extended to store additional information (see the example for `CsvConfig`).\r\n\r\nOn the contrary, `DatasetInfo` is designed as an organized and delimited information storage class with predefined fields.\r\n\r\n`DatasetInfo` now store two names:\r\n- `builder_name`: Name of the builder script used to create the dataset\r\n- `config_name`: Name of the configuration used to create the dataset.\r\n\r\n# Refactoring `load()` arguments and all the chain of processing including the `DownloadManager`\r\n\r\n`load()` now accept a selection of arguments which are used to update the `BuilderConfig` and some kwargs which are used to handle the download process.\r\n\r\nSupplying a `BuilderConfig` as `config` will override the config provided in the dataset. Supplying a `str` will get the associated config from the dataset. Default is to fetch the first config of the dataset.\r\n\r\nGiving additional arguments to `load()` will override the arguments in the `BuilderConfig`.\r\n\r\n# CSV script\r\n\r\nThe `csv.py` script is provided as an example, usage is:\r\n```python\r\nbbc = nlp.load('\/Users\/thomwolf\/Documents\/GitHub\/datasets\/datasets\/nlp\/csv',\r\n name='bbc',\r\n version=\"1.0.1\",\r\n split='train',\r\n data_files={'train': ['\/Users\/thomwolf\/Documents\/GitHub\/datasets\/datasets\/dummy_data\/csv\/test.csv']},\r\n skip_rows=10,\r\n download_mode='force_redownload')\r\n```\r\n\r\n# Checksums\r\n\r\nWe now don't raise an error if the checksum file is not found.\r\n\r\n# `DownloadConfig`\r\n\r\nWe now have a download configuration class to handle all the specific arguments for file caching like proxies, using only local files or user-agents.","Ok merging this for now.\r\n\r\nOne general note is that it's a bit hard to handle the `ClassLabel` generally in both `nlp` and `Arrow` since a class label typically need some metadata for the class names. For now, I raise a `NotImplementedError` when an `ArrowBuilder` output a table with a `DictionaryType` is encountered (which could be a simple equivalent for a `ClassLabel` Feature in Arrow tables).\r\n\r\nIn general and if we need this in the future for some Beam Datasets for instance, I think we should use one of the `metadata` fields in the `Arrow` type or table's schema to store the relation with indices and class names.\r\n\r\nSo ping me if you meet Beam datasets which uses `ClassLabels` (cc @lhoestq @patrickvonplaten @mariamabarham)."],"created_at":1588235288000,"updated_at":1588959111000,"closed_at":1588886089000,"author_association":"COLLABORATOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/25","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/25","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/25.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/25.patch"},"body":"This is a PR allowing to create datasets from local CSV files. 
A usage might be:\r\n\r\n```python\r\nimport nlp\r\nds = nlp.load(\r\n path=\"csv\",\r\n name=\"bbc\",\r\n dataset_files={\r\n nlp.Split.TRAIN: [\"datasets\/dummy_data\/csv\/train.csv\"],\r\n nlp.Split.TEST: [\"datasets\/dummy_data\/csv\/test.csv\"]\r\n },\r\n csv_kwargs={\r\n \"skip_rows\": 0,\r\n \"delimiter\": \",\",\r\n \"quote_char\": \"\\\"\",\r\n \"header_as_column_names\": True\r\n }\r\n)\r\n```\r\n\r\n```\r\nDownloading and preparing dataset bbc\/1.0.0 (download: Unknown size, generated: Unknown size, total: Unknown size) to \/home\/jplu\/.cache\/huggingface\/datasets\/bbc\/1.0.0...\r\nDataset bbc downloaded and prepared to \/home\/jplu\/.cache\/huggingface\/datasets\/bbc\/1.0.0. Subsequent calls will reuse this data.\r\n{'test': Dataset(schema: {'category': 'string', 'text': 'string'}, num_rows: 49), 'train': Dataset(schema: {'category': 'string', 'text': 'string'}, num_rows: 99), 'validation': Dataset(schema: {'category': 'string', 'text': 'string'}, num_rows: 0)}\r\n```\r\n\r\nHow it is read:\r\n\r\n- `path`: the `csv` value means \"I want to create a CSV dataset\"\r\n- `name`: the name of this dataset is `bbc`\r\n- `dataset_files`: a dictionary mapping each split to the list of files for that split.\r\n- `csv_kwargs`: the keyword arguments that \"explain\" how to read the CSV files\r\n * `skip_rows`: number of rows to skip, starting from the beginning of the file\r\n * `delimiter`: which delimiter is used to separate the columns\r\n * `quote_char`: which quote character is used to wrap a column in which the delimiter appears\r\n * `header_as_column_names`: will use the first row (header) of the file as names for the features. Otherwise the names will be automatically generated as `f1`, `f2`, etc... 
Will be applied after the `skip_rows` parameter.\r\n\r\n**TODO**: for now the `csv.py` is copied each time we create a new dataset as `ds_name.py`, this behavior will be modified to have only the `csv.py` script copied only once and not for all the CSV datasets.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/25\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/24","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/24\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/24\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/24\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/24","id":609064987,"node_id":"MDExOlB1bGxSZXF1ZXN0NDEwNzE5MTU0","number":24,"title":"Add checksums","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Looks good to me :-) \r\n\r\nJust would prefer to get rid of the `_DYNAMICALLY_IMPORTED_MODULE` attribute and replace it by a `get_imported_module()` function. Maybe there is something I'm not seeing here though - what do you think? ","> * I'm not sure I understand the general organization of checksums. I see we have a checksum folder with potentially several checksum files but I also see that checksum files can potentially contain several checksums. Could you explain a bit more how this is organized?\r\n\r\nIt should look like this:\r\nsquad\/\r\n\u251c\u2500\u2500 squad.py\/\r\n\u2514\u2500\u2500 urls_checksums\/\r\n...........\u2514\u2500\u2500 checksums.txt\r\n\r\nIn checksums.txt, the format is one line per (url, size, checksum)\r\n\r\nI don't have a strong opinion between `urls_checksums\/checksums.txt` or directly `checksums.txt` (not inside the `urls_checksums` folder), let me know what you think.\r\n\r\n\r\n> * Also regarding your comment on checksum files for \"canonical\" datasets. 
I understand we can just create these with `nlp-cli test` and then upload them manually to our S3, right?\r\n\r\nYes you're right","Update of the commands:\r\n\r\n- nlp-cli test \\<dataset\\> : Run download_and_prepare and verify checksums\r\n * --name \\<name\\> : run only for the name\r\n * --all_configs : run for all configs\r\n * --save_checksums : instead of verifying checksums, compute and save them\r\n * --ignore_checksums : don't do checksums verification\r\n\r\n- nlp-cli upload \\<dataset_folder\\> : Upload a dataset\r\n * --upload_checksums : compute and upload checksums for uploaded files\r\n\r\nTODO:\r\n- don't overwrite checksums files on S3, to let the user upload a dataset in several steps if needed\r\n\r\nQuestion:\r\n- One idea from @patrickvonplaten : shall we upload checksums everytime we upload files ? (and therefore remove the upload_checksums parameter)","Ok, ready to merge, then @lhoestq ?","Yep :)"],"created_at":1588167449000,"updated_at":1588276370000,"closed_at":1588276369000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/24","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/24","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/24.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/24.patch"},"body":"### Checksums files\r\n\r\nThey are stored next to the dataset script in urls_checksums\/checksums.txt.\r\nThey are used to check the integrity of the datasets downloaded files.\r\nI kept the same format as tensorflow-datasets.\r\nThere is one checksums file for all configs.\r\n\r\n### Load a dataset\r\n\r\nWhen you do `load(\"squad\")`, it will also download the checksums file and put it next to the script in nlp\/datasets\/hash\/urls_checksums\/checksums.txt.\r\nIt also verifies that the downloaded files checksums match the expected ones.\r\n\r\nYou can ignore checksum tests with `load(\"squad\", ignore_checksums=True)` (under the hood it just adds `ignore_checksums=True` in the `DownloadConfig`)\r\n\r\n### Test a dataset\r\n\r\nThere is a new command `nlp-cli test squad` that runs `download_and_prepare` to see if it runs ok, and that verifies that all the checksums match. Allowed arguments are `--name`, `--all_configs`, `--ignore_checksums` and `--register_checksums`.\r\n\r\n### Register checksums\r\n\r\n1. If the dataset has external dataset files\r\n\r\nThe command `nlp-cli test squad --register_checksums --all_configs` runs `download_and_prepare` on all configs to see if it runs ok, and it creates the checksums file.\r\nYou can also register one config at a time using `--name` instead ; the checksums file will be completed and not overwritten.\r\n\r\nIf the script is a local script, the checksum file is moved to urls_checksums\/checksums.txt next to the local script, to enable the user to upload both the script and the checksums file afterwards with `nlp-cli upload squad`.\r\n\r\n2. 
If the dataset files are all inside the directory of the dataset script\r\n\r\nThe user can directly do `nlp-cli upload squad --register_checksums`, as there is no need to download anything.\r\nIn this case however, all the dataset must be uploaded at once.\r\n\r\n--\r\n\r\nPS : it doesn't allow to register checksums for canonical datasets, the file has to be added manually on S3 for now (I guess ?)\r\n\r\nAlso I feel like we must be sure that this processes would not constrain too much any user from uploading its dataset.\r\nLet me know what you think :)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/24\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/23","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/23\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/23\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/23\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/23","id":608508706,"node_id":"MDExOlB1bGxSZXF1ZXN0NDEwMjczOTU2","number":23,"title":"Add metrics","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1588096925000,"updated_at":1589185178000,"closed_at":1589185178000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/23","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/23","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/23.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/23.patch"},"body":"This PR is a draft for adding metrics (sacrebleu and seqeval are added)\r\n\r\nuse case examples:\r\n`import nlp`\r\n**sacrebleu:**\r\n```\r\nrefs = [['The dog bit the man.', 'It was not unexpected.', 'The man bit him first.'],\r\n ['The dog had bit the man.', 'No one was surprised.', 'The man had bitten the dog.']]\r\nsys = ['The dog bit the man.', \"It wasn't surprising.\", 'The man had just bitten him.']\r\nsacrebleu = nlp.load_metrics('sacrebleu')\r\nprint(sacrebleu.score)\r\n```\r\n\r\n**seqeval:**\r\n```\r\ny_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]\r\ny_pred = [['O', 'O', 'B-MISC', 'I-MISC', 
'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]\r\nseqeval = nlp.load_metrics('seqeval')\r\nprint(seqeval.accuracy_score(y_true, y_pred))\r\nprint(seqeval.f1_score(y_true, y_pred))\r\n```\r\n_examples are taken from the corresponding web page_\r\n\r\nyour comments and suggestions are more than welcomed\r\n\r\n\r\n\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/23\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/22","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/22\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/22\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/22\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/22","id":608298586,"node_id":"MDExOlB1bGxSZXF1ZXN0NDEwMTAyMjU3","number":22,"title":"adding bleu score code","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1588078850000,"updated_at":1588096100000,"closed_at":1588096088000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/22","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/22","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/22.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/22.patch"},"body":"This PR adds the BLEU score metric to the lib. 
It can be tested by running the following code.\r\n\r\n` from nlp.metrics import bleu\r\n\r\nhyp1 = \"It is a guide to action which ensures that the military always obeys the commands of the party\"\r\nref1a = \"It is a guide to action that ensures that the military forces always being under the commands of the party \"\r\n ref1b = \"It is the guiding principle which guarantees the military force always being under the command of the Party\"\r\nref1c = \"It is the practical guide for the army always to heed the directions of the party\"\r\n \r\nlist_of_references = [[ref1a, ref1b, ref1c]]\r\nhypotheses = [hyp1]\r\nbleu = bleu.bleu_score(list_of_references, hypotheses,4, smooth=True)\r\nprint(bleu) `","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/22\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/21","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/21\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/21\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/21\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/21","id":607914185,"node_id":"MDExOlB1bGxSZXF1ZXN0NDA5Nzk2MTM4","number":21,"title":"Cleanup Features - Updating convert command - Fix Download manager","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["For conflicts, I think the mention hint \"This should be modified because it mentions ...\" is missing.","Looks great!"],"created_at":1588029415000,"updated_at":1588325387000,"closed_at":1588325386000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/21","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/21","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/21.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/21.patch"},"body":"This PR makes a number of changes:\r\n\r\n# Updating `Features`\r\n\r\nFeatures are a complex mechanism provided in `tfds` to be able to modify a dataset on-the-fly when serializing to disk and when loading from disk.\r\n\r\nWe don't really need this because (1) it hides too much from the user and (2) our datatype can be directly mapped to Arrow tables on drive so we usually don't need to change 
the format before\/after serialization.\r\n\r\nThis PR extracts and refactors these features into a single `features.py` file. It still keeps a number of feature classes for easy compatibility with tfds, namely the `Sequence`, `Tensor`, `ClassLabel` and `Translation` features.\r\n\r\nSome more complex features involving a pre-processing on-the-fly during serialization are kept:\r\n- `ClassLabel`, which is able to convert from label strings to integers,\r\n- `Translation`, which does some checks on the languages.\r\n\r\n# Updating the `convert` command\r\n\r\nWe do a few updates here:\r\n- following the simplification of the `features` (cf above), conversions are updated\r\n- we also make it simpler to convert a single file\r\n- some code needs to be fixed manually after conversion (e.g. to remove some encoding processing in former tfds `Text` features). We highlight this code with a \"git merge conflict\" style syntax for easy manual fixing.\r\n\r\n# Fix download manager iterator\r\n\r\nYou kept me up quite late on Tuesday night with this `os.scandir` change @lhoestq ;-)\r\n","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/21\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/20","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/20\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/20\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/20\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/20","id":607313557,"node_id":"MDExOlB1bGxSZXF1ZXN0NDA5MzEyMDI1","number":20,"title":"remove boto3 and promise dependencies","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1587973185000,"updated_at":1588003457000,"closed_at":1587996945000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/20","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/20","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/20.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/20.patch"},"body":"With the new download manager, we don't need `promise` anymore.\r\nI also removed `boto3` as in [this 
pr](https:\/\/github.com\/huggingface\/transformers\/pull\/3968)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/20\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/19","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/19\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/19\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/19\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/19","id":606400645,"node_id":"MDExOlB1bGxSZXF1ZXN0NDA4NjIwMjUw","number":19,"title":"Replace tf.constant for TF","user":{"login":"jplu","id":959590,"node_id":"MDQ6VXNlcjk1OTU5MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/959590?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jplu","html_url":"https:\/\/github.com\/jplu","followers_url":"https:\/\/api.github.com\/users\/jplu\/followers","following_url":"https:\/\/api.github.com\/users\/jplu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jplu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jplu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jplu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jplu\/orgs","repos_url":"https:\/\/api.github.com\/users\/jplu\/repos","events_url":"https:\/\/api.github.com\/users\/jplu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jplu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Awesome!"],"created_at":1587742326000,"updated_at":1588152428000,"closed_at":1587849525000,"author_association":"COLLABORATOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/19","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/19","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/19.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/19.patch"},"body":"Replace simple tf.constant type of Tensor to tf.ragged.constant which allows to have examples of different size in a tf.data.Dataset.\r\n\r\nNow the training works with TF. 
Here is the same example as the PT one in Colab:\r\n\r\n```python\r\nimport tensorflow as tf\r\nimport nlp\r\nfrom transformers import BertTokenizerFast, TFBertForQuestionAnswering\r\n\r\n# Load our training dataset and tokenizer\r\ntrain_dataset = nlp.load('squad', split=\"train[:1%]\")\r\ntokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')\r\n\r\ndef get_correct_alignement(context, answer):\r\n    start_idx = answer['answer_start'][0]\r\n    text = answer['text'][0]\r\n    end_idx = start_idx + len(text)\r\n    if context[start_idx:end_idx] == text:\r\n        return start_idx, end_idx  # When the gold label position is good\r\n    elif context[start_idx-1:end_idx-1] == text:\r\n        return start_idx-1, end_idx-1  # When the gold label is off by one character\r\n    elif context[start_idx-2:end_idx-2] == text:\r\n        return start_idx-2, end_idx-2  # When the gold label is off by two characters\r\n    else:\r\n        raise ValueError()\r\n\r\n# Tokenize our training dataset\r\ndef convert_to_features(example_batch):\r\n    # Tokenize contexts and questions (as pairs of inputs)\r\n    input_pairs = list(zip(example_batch['context'], example_batch['question']))\r\n    encodings = tokenizer.batch_encode_plus(input_pairs, pad_to_max_length=True)\r\n\r\n    # Compute start and end tokens for labels using Transformers's fast tokenizers alignement methods.\r\n    start_positions, end_positions = [], []\r\n    for i, (context, answer) in enumerate(zip(example_batch['context'], example_batch['answers'])):\r\n        start_idx, end_idx = get_correct_alignement(context, answer)\r\n        start_positions.append([encodings.char_to_token(i, start_idx)])\r\n        end_positions.append([encodings.char_to_token(i, end_idx-1)])\r\n\r\n    if start_positions and end_positions:\r\n        encodings.update({'start_positions': start_positions,\r\n                          'end_positions': end_positions})\r\n    return encodings\r\n\r\ntrain_dataset = train_dataset.map(convert_to_features, batched=True)\r\n\r\ncolumns = ['input_ids', 'token_type_ids', 'attention_mask', 'start_positions', 'end_positions']\r\ntrain_dataset.set_format(type='tensorflow', columns=columns)\r\nfeatures = {x: train_dataset[x] for x in columns[:3]}\r\nlabels = {\"output_1\": train_dataset[\"start_positions\"]}\r\nlabels[\"output_2\"] = train_dataset[\"end_positions\"]\r\ntfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8)\r\nmodel = TFBertForQuestionAnswering.from_pretrained(\"bert-base-cased\")\r\nloss_fn = tf.keras.losses.SparseCategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE, from_logits=True)\r\nopt = tf.keras.optimizers.Adam(learning_rate=3e-5)\r\nmodel.compile(optimizer=opt,\r\n              loss={'output_1': loss_fn, 'output_2': loss_fn},\r\n              loss_weights={'output_1': 1., 'output_2': 1.},\r\n              metrics=['accuracy'])\r\nmodel.fit(tfdataset, epochs=1, steps_per_epoch=3)\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/19\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/18","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/18\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/18\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/18\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/18","id":606109196,"node_id":"MDExOlB1bGxSZXF1ZXN0NDA4Mzg0MTc3","number":18,"title":"Updating caching 
mechanism - Allow dependency in dataset processing scripts - Fix style and quality in the repo","user":{"login":"thomwolf","id":7353373,"node_id":"MDQ6VXNlcjczNTMzNzM=","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/7353373?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/thomwolf","html_url":"https:\/\/github.com\/thomwolf","followers_url":"https:\/\/api.github.com\/users\/thomwolf\/followers","following_url":"https:\/\/api.github.com\/users\/thomwolf\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/thomwolf\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/thomwolf\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/thomwolf\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/thomwolf\/orgs","repos_url":"https:\/\/api.github.com\/users\/thomwolf\/repos","events_url":"https:\/\/api.github.com\/users\/thomwolf\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/thomwolf\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["LGTM"],"created_at":1587713988000,"updated_at":1588174048000,"closed_at":1588089988000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/18","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/18","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/18.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/18.patch"},"body":"This PR has a lot of content (might be hard to review, sorry, in particular because I fixed the style in the repo at the same time).\r\n\r\n# Style & quality:\r\nYou can now install the style and quality tools with `pip install -e .[quality]`. This will install black, the compatible version of sort and flake8.\r\nYou can then clean the style and check the quality before merging your PR with:\r\n```bash\r\nmake style\r\nmake quality\r\n```\r\n\r\n# Allow dependencies in dataset processing scripts\r\nWe can now allow (some level) of imports in dataset processing scripts (in addition to PyPi imports).\r\nNamely, you can do the two following things:\r\n\r\nImport from a relative path to a file in the same folder as the dataset processing script:\r\n```python\r\nimport .c4_utils \r\n``` \r\n\r\nOr import from a relative path to a file in a folder\/archive\/github repo to which you provide an URL after the import state with `# From: [URL]`:\r\n```python\r\nimport .clicr.dataset_code.build_json_dataset # From: https:\/\/github.com\/clips\/clicr\r\n```\r\n\r\nIn both these cases, after downloading the main dataset processing script, we will identify the location of these dependencies, download them and copy them in the dataset processing script folder.\r\n\r\nNote that only direct import in the dataset processing script will be handled.\r\nWe don't recursively explore the additional import to download further files.\r\nAlso, when we download from an additional directory (in the second case above), we recursively add `__init__.py` to all the sub-folder so you can import from them.\r\n\r\nThis part is still tested for now. 
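To make the dependency mechanism described in this PR more concrete, here is a minimal sketch of how relative imports and their optional `# From: [URL]` hints could be detected in a dataset processing script. The helper name and the regex are illustrative assumptions, not the actual `nlp` implementation:

```python
import re
from typing import List, Optional, Tuple

# Hypothetical helper: scan a dataset script for relative imports and optional
# "# From: URL" hints. Names and regex details are illustrative only.
_IMPORT_RE = re.compile(
    r"^import\s+\.(?P<module>[\w.]+)\s*(?:#\s*From:\s*(?P<url>\S+))?\s*$"
)

def find_script_dependencies(script_source: str) -> List[Tuple[str, Optional[str]]]:
    """Return (module, url) pairs for each relative import found in the script."""
    deps = []
    for line in script_source.splitlines():
        match = _IMPORT_RE.match(line.strip())
        if match:
            deps.append((match.group("module"), match.group("url")))
    return deps

source = (
    "import .c4_utils\n"
    "import .clicr.dataset_code.build_json_dataset  # From: https://github.com/clips/clicr\n"
)
print(find_script_dependencies(source))
# [('c4_utils', None), ('clicr.dataset_code.build_json_dataset', 'https://github.com/clips/clicr')]
```

In the real mechanism, each detected dependency would then be downloaded and copied next to the dataset processing script, as described above.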
If you've seen datasets which required external utilities, tell me and I can test it.\r\n\r\n# Update the cache to have a better local structure\r\n\r\nThe local structure in the `src\/datasets` folder is now: `src\/datasets\/DATASET_NAME\/DATASET_HASH\/*`\r\n\r\nThe hash is computed from the full code of the dataset processing script as well as all the local and downloaded dependencies as mentioned above. This way if you change some code in a utility related to your dataset, a new hash should be computed.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/18\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/17","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/17\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/17\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/17\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/17","id":605753027,"node_id":"MDExOlB1bGxSZXF1ZXN0NDA4MDk3NjM0","number":17,"title":"Add Pandas as format type","user":{"login":"jplu","id":959590,"node_id":"MDQ6VXNlcjk1OTU5MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/959590?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jplu","html_url":"https:\/\/github.com\/jplu","followers_url":"https:\/\/api.github.com\/users\/jplu\/followers","following_url":"https:\/\/api.github.com\/users\/jplu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jplu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jplu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jplu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jplu\/orgs","repos_url":"https:\/\/api.github.com\/users\/jplu\/repos","events_url":"https:\/\/api.github.com\/users\/jplu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jplu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1587666014000,"updated_at":1588010870000,"closed_at":1588010868000,"author_association":"COLLABORATOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/17","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/17","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/17.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/17.patch"},"body":"As detailed in the title ^^","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/17\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/16","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/16\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/16\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/16\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/16","id":605661462,"node_id":"MDExOlB1bGxSZXF1ZXN0NDA4MDIyMTUz","number":16,"title":"create our own 
DownloadManager","user":{"login":"lhoestq","id":42851186,"node_id":"MDQ6VXNlcjQyODUxMTg2","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/42851186?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/lhoestq","html_url":"https:\/\/github.com\/lhoestq","followers_url":"https:\/\/api.github.com\/users\/lhoestq\/followers","following_url":"https:\/\/api.github.com\/users\/lhoestq\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/lhoestq\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/lhoestq\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/lhoestq\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/lhoestq\/orgs","repos_url":"https:\/\/api.github.com\/users\/lhoestq\/repos","events_url":"https:\/\/api.github.com\/users\/lhoestq\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/lhoestq\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Looks great to me! ","The new download manager is ready. I removed the old folder and I fixed a few remaining dependencies.\r\nI tested it on squad and a few others from the dataset folder and it works fine.\r\n\r\nThe only impact of these changes is that it breaks the `download_and_prepare` script that was used to register the checksums when we create a dataset, as the checksum logic is not implemented.\r\n\r\nLet me know if you have remarks","Ok merged it (a bit fast for you to update the copyright, now I see that. but it's ok, we'll do a pass on these doc\/copyright before releasing anyway)","Actually two additional things here @lhoestq (I merged too fast sorry, let's make a new PR for additional developments):\r\n- I think we can remove some dependencies now (e.g. `promises`) in setup.py, can you have a look?\r\n- also, I think we can remove the boto3 dependency like here: https:\/\/github.com\/huggingface\/transformers\/pull\/3968"],"created_at":1587658087000,"updated_at":1620239124000,"closed_at":1587849910000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/16","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/16","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/16.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/16.patch"},"body":"I tried to create our own - and way simpler - download manager, by replacing all the complicated stuff with our own `cached_path` solution.\r\nWith this implementation, I tried `dataset = nlp.load('squad')` and it seems to work fine.\r\n\r\nFor the implementation, what I did exactly:\r\n- I copied the old download manager\r\n- I removed all the dependences to the old `download` files\r\n- I replaced all the download + extract calls by calls to `cached_path`\r\n- I removed unused parameters (extract_dir, compute_stats) (maybe compute_stats could be re-added later if we want to compute stats...)\r\n- I left some functions unimplemented for now. 
We will probably have to implement them because they are used by some datasets scripts (download_kaggle_data, iter_archive) or because we may need them at some point (download_checksums, _record_sizes_checksums)\r\n\r\nLet me know if you think that this is going the right direction or if you have remarks.\r\nNote: I didn't write any test yet as I wanted to read your remarks first","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/16\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/15","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/15\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/15\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/15\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/15","id":604906708,"node_id":"MDExOlB1bGxSZXF1ZXN0NDA3NDEwOTk3","number":15,"title":"[Tests] General Test Design for all dataset scripts","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["> I think I'm fine with this.\r\n> \r\n> The alternative would be to host a small subset of the dataset on the S3 together with the testing script. But I think having all (test file creation + actual tests) in one file is actually quite convenient.\r\n> \r\n> Good for me!\r\n> \r\n> One question though, will we have to create one test file for each of the 100+ datasets or could we make some automatic conversion from tfds dataset test files?\r\n\r\nI think if we go the way shown in the PR we would have to create one test file for each of the 100+ datasets. \r\n\r\nAs far as I know the tfds test files all rely on the user having created a special download folder structure in `tensorflow-datasets\/tensorflow_datasets\/testing\/test_data\/fake_examples`. \r\n\r\nMy hypothesis was: \r\nBecasue, we don't want to work with PRs, no `dataset_script` is going to be in the official repo, so no `dataset_script_test` can be in the repo either. Therefore we can also not have any \"fake\" test folder structure in the repo. \r\n\r\n**BUT:** As you mentioned @thom, we could have a fake data structure on AWS. To add a test the user has to upload multiple small test files when uploading his data set script. 
\r\n\r\nSo for a cli this could look like:\r\n`python nlp-cli upload <data_set_script> --testfiles <relative path to test file 1> <relative path to test file 2> ...` \r\n\r\nor even easier if the user just creates the dataset folder with the script inside and the testing folder structure, then the API could look like:\r\n\r\n`python nlp-cli upload <path\/to\/dataset\/folder>`\r\n\r\nand the dataset folder would look like\r\n```\r\nsquad\r\n- squad.py\r\n- fake_data # this dir would have to have the exact same structure we get when downloading from the official squad data url\r\n```\r\n\r\nThis way I think we wouldn't even need any test files at all for each dataset script. For special datasets like `c4` or `wikipedia` we could then allow to optionally upload another test script. \r\nWe just assume that this is our downloaded `url` and check all functionality from there. \r\n\r\nThinking a bit more about this solution sounds a) much less work and b) even easier for the user.\r\n\r\nA small problem I see here though:\r\n1) What do we do when the depending on the config name the downloaded folder structure is very different? I think for each dataset config name we should have one test, which could correspond to one \"fake\" folder structure on AWS\r\n\r\n@thomwolf What do you think? I would actually go for this solution instead now.\r\n@mariamabarham You have written many more tfds dataset scripts and tests than I have - what do you think? \r\n\r\n","Regarding the tfds tests, I don't really see a point in keeping them because:\r\n\r\n1) If you provide a fake data structure, IMO there is no need for each dataset to have an individual test file because (I think) most datasets have the same functions `_split_generators` and `_generate_examples` for which you can just test the functionality in a common test file. For special functions like these beam \/ pipeline functionality you probably need an extra test file. But @mariamabarham I think you have seen more than I have here as well \r\n\r\n2) The dataset test design is very much intertwined with the download manager design and contains a lot of code. I would like to seperate the tests into a) tests for downloading in general b) tests for post download data set pre-processing. Since we are going to change the download code anyways quite a lot, my plan was to focus on b) first. ","I like the idea of having a fake data folder on S3. I have seen datasets with nested compressed files structures that would be tedious to generate with code. And for users it is probably easier to create a fake data folder by taking a subset of the actual data, and then upload it as you said.","> > I think I'm fine with this.\r\n> > The alternative would be to host a small subset of the dataset on the S3 together with the testing script. 
But I think having all (test file creation + actual tests) in one file is actually quite convenient.\r\n> > Good for me!\r\n> > One question though, will we have to create one test file for each of the 100+ datasets or could we make some automatic conversion from tfds dataset test files?\r\n> \r\n> I think if we go the way shown in the PR we would have to create one test file for each of the 100+ datasets.\r\n> \r\n> As far as I know the tfds test files all rely on the user having created a special download folder structure in `tensorflow-datasets\/tensorflow_datasets\/testing\/test_data\/fake_examples`.\r\n> \r\n> My hypothesis was:\r\n> Becasue, we don't want to work with PRs, no `dataset_script` is going to be in the official repo, so no `dataset_script_test` can be in the repo either. Therefore we can also not have any \"fake\" test folder structure in the repo.\r\n> \r\n> **BUT:** As you mentioned @thom, we could have a fake data structure on AWS. To add a test the user has to upload multiple small test files when uploading his data set script.\r\n> \r\n> So for a cli this could look like:\r\n> `python nlp-cli upload <data_set_script> --testfiles <relative path to test file 1> <relative path to test file 2> ...`\r\n> \r\n> or even easier if the user just creates the dataset folder with the script inside and the testing folder structure, then the API could look like:\r\n> \r\n> `python nlp-cli upload <path\/to\/dataset\/folder>`\r\n> \r\n> and the dataset folder would look like\r\n> \r\n> ```\r\n> squad\r\n> - squad.py\r\n> - fake_data # this dir would have to have the exact same structure we get when downloading from the official squad data url\r\n> ```\r\n> \r\n> This way I think we wouldn't even need any test files at all for each dataset script. For special datasets like `c4` or `wikipedia` we could then allow to optionally upload another test script.\r\n> We just assume that this is our downloaded `url` and check all functionality from there.\r\n> \r\n> Thinking a bit more about this solution sounds a) much less work and b) even easier for the user.\r\n> \r\n> A small problem I see here though:\r\n> \r\n> 1. What do we do when the depending on the config name the downloaded folder structure is very different? I think for each dataset config name we should have one test, which could correspond to one \"fake\" folder structure on AWS\r\n> \r\n> @thomwolf What do you think? I would actually go for this solution instead now.\r\n> @mariamabarham You have written many more tfds dataset scripts and tests than I have - what do you think?\r\n\r\nI'm agreed with you just one thing, for some dataset like glue or xtreme you may have multiple datasets in it. so I think a good way is to have one main fake folder and a subdirectory for each dataset inside","> Regarding the tfds tests, I don't really see a point in keeping them because:\r\n> \r\n> 1. If you provide a fake data structure, IMO there is no need for each dataset to have an individual test file because (I think) most datasets have the same functions `_split_generators` and `_generate_examples` for which you can just test the functionality in a common test file. For special functions like these beam \/ pipeline functionality you probably need an extra test file. But @mariamabarham I think you have seen more than I have here as well\r\n> 2. The dataset test design is very much intertwined with the download manager design and contains a lot of code. 
I would like to seperate the tests into a) tests for downloading in general b) tests for post download data set pre-processing. Since we are going to change the download code anyways quite a lot, my plan was to focus on b) first.\r\n\r\nFor _split_generator, yes. But I'm not sure for _generate_examples because there is lots of things that should be taken into account such as feature names and types, data format (json, jsonl, csv, tsv,..)","Sounds good to me!\r\n\r\nWhen testing, we could thus just override the prefix in the URL inside the download manager to have them point to the test directory on our S3.\r\n\r\nCc @lhoestq ","Ok, here is a second draft for the testing structure. \r\n\r\nI think the big difficulty here is \"How can you generate tests on the fly from a given dataset name, *e.g.* `squad`\"?\r\n\r\nSo, this morning I did some research on \"parameterized testing\" and pure `unittest` or `pytest` didn't work very well. \r\nI found the lib https:\/\/github.com\/wolever\/parameterized, which works very nicely for our use case I think. \r\n@thomwolf - would it be ok to have a dependence on this lib for `nlp`? It seems like a light-weight lib to me. \r\n\r\nThis lib allows to add a `parameterization` decorator to a `unittest.TestCase` class so that the class can be instantiated for multiple different arguments (which are the dataset names `squad` etc. in our case).\r\n\r\nWhat I like about this lib is that one only has to add the decorator and the each of the parameterized tests are shown, like this: \r\n\r\n![Screenshot from 2020-04-24 15-13-14](https:\/\/user-images.githubusercontent.com\/23423619\/80216326-2bd9a680-863e-11ea-8a0f-460976f5309c.png)\r\n\r\nWith this structure we would only have to upload the dummy data for each dataset and would not require a specific testing file. \r\n\r\nWhat do you think @thomwolf @mariamabarham @lhoestq ?","I think this is a nice solution.\r\n\r\nDo you think we could have the `parametrized` dependency in a `[test]` optional installation of `setup.py`? I would really like to keep the dependencies of the standard installation as small as possible. ","> I think this is a nice solution.\r\n> \r\n> Do you think we could have the `parametrized` dependency in a `[test]` optional installation of `setup.py`? I would really like to keep the dependencies of the standard installation as small as possible.\r\n\r\nYes definitely!","UPDATE: \r\n\r\nThis test design is ready now. I added dummy data to S3 for the dataests: `squad, crime_and_punish, sentiment140` . The structure can be seen on `https:\/\/s3.console.aws.amazon.com\/s3\/buckets\/datasets.huggingface.co\/nlp\/squad\/dummy\/?region=us-east-1&tab=overview` for `squad`. \r\n\r\nAll dummy data files have to be in .zip format and called `dummy_data.zip`. The zip file should thereby have the exact same folder structure one gets from downloading the \"real\" data url(s). \r\n\r\nTo show how the .zip file looks like for the added datasets, I added the folder `nlp\/datasets\/dummy_data` in this PR. 
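For illustration, a minimal sketch of the parameterized test design discussed above might look as follows; the dataset names and the check inside the mixin are placeholders, not the final test suite:

```python
import unittest

from parameterized import parameterized  # the optional [test] dependency discussed above

# Placeholder dataset names; in the real design each name would also have a
# dummy_data.zip uploaded next to its script on S3.
DATASET_NAMES = ["squad", "crime_and_punish", "sentiment140"]

class DatasetTesterMixin:
    def check_load_dataset(self, dataset_name: str):
        # In the real design this would point the download manager at the
        # dummy_data.zip instead of the real data URLs and run the builder.
        self.assertIsInstance(dataset_name, str)
        self.assertTrue(len(dataset_name) > 0)

class LocalDatasetTest(DatasetTesterMixin, unittest.TestCase):
    @parameterized.expand(DATASET_NAMES)
    def test_load_dataset(self, dataset_name):
        self.check_load_dataset(dataset_name)

if __name__ == "__main__":
    unittest.main()
```

The `@parameterized.expand` decorator is what makes one test appear per dataset name in the test runner output, as shown in the screenshot above.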
I think we can leave for the moment so that people can see better how to add dummy data tests and later delete it like `nlp\/datasets\/nlp`."],"created_at":1587573961000,"updated_at":1587999668000,"closed_at":1587998882000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/15","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/15","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/15.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/15.patch"},"body":"The general idea is similar to how testing is done in `transformers`. There is one general `test_dataset_common.py` file which has a `DatasetTesterMixin` class. This class implements all of the logic that can be used in a generic way for all dataset classes. The idea is to keep each individual dataset test file as minimal as possible. \r\n\r\nIn order to test whether the specific data set class can download the data and generate the examples **without** downloading the actual data all the time, a MockDataLoaderManager class is used which receives a `mock_folder_structure_fn` function from each individual dataset test file that create \"fake\" data and which returns the same folder structure that would have been created when using the real data downloader. ","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/15\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/14","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/14\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/14\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/14\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/14","id":604761315,"node_id":"MDExOlB1bGxSZXF1ZXN0NDA3MjkzNjU5","number":14,"title":"[Download] Only create dir if not already 
exist","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1587562371000,"updated_at":1587630453000,"closed_at":1587630453000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/14","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/14","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/14.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/14.patch"},"body":"This was quite annoying to find out :D. \r\nSome datasets have save in the same directory. So we should only create a new directory if it doesn't already exist.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/14\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/13","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/13\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/13\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/13\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/13","id":604547951,"node_id":"MDExOlB1bGxSZXF1ZXN0NDA3MTIxMjkw","number":13,"title":"[Make 
style]","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["I think this can be quickly reproduced. \r\nI use `black, version 19.10b0`. \r\n\r\nWhen running: \r\n`black nlp\/src\/arrow_reader.py` \r\nit gives me: \r\n\r\n```\r\nerror: cannot format \/home\/patrick\/hugging_face\/nlp\/src\/nlp\/arrow_reader.py: cannot use --safe with this file; failed to parse source file. AST error message: invalid syntax (<unknown>, line 78)\r\nOh no! \ud83d\udca5 \ud83d\udc94 \ud83d\udca5\r\n1 file failed to reformat.\r\n```\r\n\r\nThe line in question is: \r\nhttps:\/\/github.com\/huggingface\/nlp\/blob\/6922a16705e61f9e31a365f2606090b84d49241f\/src\/nlp\/arrow_reader.py#L78\r\n\r\nWhat is weird is that the trainer file in `transformers` has more or less the same syntax and black does not fail there: \r\nhttps:\/\/github.com\/huggingface\/transformers\/blob\/cb3c2212c79d7ff0a4a4e84c3db48371ecc1c15d\/src\/transformers\/trainer.py#L95\r\n\r\nI googled quite a bit about black & typing hints yesterday and didn't find anything useful. \r\nAny ideas @thomwolf @julien-c @LysandreJik ?","> I think this can be quickly reproduced.\r\n> I use `black, version 19.10b0`.\r\n> \r\n> When running:\r\n> `black nlp\/src\/arrow_reader.py`\r\n> it gives me:\r\n> \r\n> ```\r\n> error: cannot format \/home\/patrick\/hugging_face\/nlp\/src\/nlp\/arrow_reader.py: cannot use --safe with this file; failed to parse source file. AST error message: invalid syntax (<unknown>, line 78)\r\n> Oh no! \ud83d\udca5 \ud83d\udc94 \ud83d\udca5\r\n> 1 file failed to reformat.\r\n> ```\r\n> \r\n> The line in question is:\r\n> https:\/\/github.com\/huggingface\/nlp\/blob\/6922a16705e61f9e31a365f2606090b84d49241f\/src\/nlp\/arrow_reader.py#L78\r\n> \r\n> What is weird is that the trainer file in `transformers` has more or less the same syntax and black does not fail there:\r\n> https:\/\/github.com\/huggingface\/transformers\/blob\/cb3c2212c79d7ff0a4a4e84c3db48371ecc1c15d\/src\/transformers\/trainer.py#L95\r\n> \r\n> I googled quite a bit about black & typing hints yesterday and didn't find anything useful.\r\n> Any ideas @thomwolf @julien-c @LysandreJik ?\r\n\r\nOk I found the problem. It was the one Julien mentioned and has nothing to do with this line. Black's error message is a bit misleading here, I guess","Ok, just had to remove the python 2 syntax comments `# type`. 
\r\n\r\nGood to merge for me now @thomwolf "],"created_at":1587543006000,"updated_at":1587646942000,"closed_at":1587646942000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/13","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/13","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/13.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/13.patch"},"body":"Added Makefile and applied make style to all. \r\nmake style runs the following code:\r\n\r\n```\r\nstyle:\r\n black --line-length 119 --target-version py35 src\r\n isort --recursive src\r\n```\r\n\r\nIt's the same code that is run in `transformers`.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/13\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/12","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/12\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/12\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/12\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/12","id":604518583,"node_id":"MDExOlB1bGxSZXF1ZXN0NDA3MDk3MzA4","number":12,"title":"[Map Function] add assert statement if map function does not return dict or None","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Also added to an assert statement that if a dict is returned by function, all values of `dicts` are `lists`","Wait to merge until `make style` is set in place.","Updated the assert statements. Played around with multiple cases and it should be good now IMO. 
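A hedged sketch of the return-value check discussed in the comments above (illustrative only, not the exact assert statements added in this PR): a map function may return `None` (e.g. it only prints) or a dict whose values are all lists, and anything else raises a `TypeError`.

```python
# Illustrative validation of a map function's output in batched mode.
def validate_map_output(processed_batch):
    if processed_batch is None:
        return  # the function was used for its side effects only
    if not isinstance(processed_batch, dict):
        raise TypeError(
            f"Map function must return a dict or None, got {type(processed_batch)}"
        )
    bad_keys = [k for k, v in processed_batch.items() if not isinstance(v, list)]
    if bad_keys:
        raise TypeError(f"Values for keys {bad_keys} must be lists in batched mode")

validate_map_output(None)                          # ok: side-effect-only function
validate_map_output({"input_ids": [[1, 2], [3]]})  # ok: dict of lists
# validate_map_output("oops")                      # would raise TypeError
```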
"],"created_at":1587540084000,"updated_at":1602138701000,"closed_at":1587709743000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/12","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/12","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/12.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/12.patch"},"body":"IMO, if a function is provided that is not a print statement (-> returns variable of type `None`) or a function that updates the datasets (-> returns variable of type `dict`), then a `TypeError` should be raised. \r\n\r\nNot sure whether you had cases in mind where the user should do something else @thomwolf , but I think a lot of silent errors can be avoided with this assert statement.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/12\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/11","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/11\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/11\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/11\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/11","id":603921624,"node_id":"MDExOlB1bGxSZXF1ZXN0NDA2NjExODk2","number":11,"title":"[Convert TFDS to HFDS] Extend script to also allow just converting a single file","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1587468333000,"updated_at":1587502021000,"closed_at":1587502020000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/11","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/11","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/11.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/11.patch"},"body":"Adds another argument to be able to convert only a single file","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/11\/timeline","performed_via_github_app":null,"is_pull_request":true} 
{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/10","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/10\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/10\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/10\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/10","id":603909327,"node_id":"MDExOlB1bGxSZXF1ZXN0NDA2NjAxNzQ2","number":10,"title":"Name json file \"squad.json\" instead of \"squad.py.json\"","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1587467068000,"updated_at":1587502086000,"closed_at":1587502086000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/10","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/10","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/10.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/10.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/10\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/9","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/9\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/9\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/9\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/9","id":603894874,"node_id":"MDExOlB1bGxSZXF1ZXN0NDA2NTkwMDQw","number":9,"title":"[Clean up] 
Datasets","user":{"login":"patrickvonplaten","id":23423619,"node_id":"MDQ6VXNlcjIzNDIzNjE5","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/23423619?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/patrickvonplaten","html_url":"https:\/\/github.com\/patrickvonplaten","followers_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/followers","following_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/orgs","repos_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/repos","events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/patrickvonplaten\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Yes!"],"created_at":1587465596000,"updated_at":1587502198000,"closed_at":1587502198000,"author_association":"MEMBER","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/9","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/9","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/9.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/9.patch"},"body":"Clean up `nlp\/datasets` folder. \r\n\r\nAs I understood, eventually the `nlp\/datasets` shall not exist anymore at all. \r\n\r\nThe folder `nlp\/datasets\/nlp` is kept for the moment, but won't be needed in the future, since it will live on S3 (actually it already does) at: `https:\/\/s3.console.aws.amazon.com\/s3\/buckets\/datasets.huggingface.co\/nlp\/?region=us-east-1` and the different `dataset downloader scripts will be added to `nlp\/src\/nlp` when downloaded by the user. \r\n\r\nThe folder `nlp\/datasets\/checksums` is kept for now, but won't be needed anymore in the future. \r\n\r\nThe remaining folders\/ files are leftovers from tensorflow-datasets and are not needed. 
The can be looked up in the private tensorflow-dataset repo.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/9\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/8","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/8\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/8\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/8\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/8","id":601783243,"node_id":"MDExOlB1bGxSZXF1ZXN0NDA0OTg0NDUz","number":8,"title":"Fix issue 6: error when the citation is missing in the DatasetInfo","user":{"login":"jplu","id":959590,"node_id":"MDQ6VXNlcjk1OTU5MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/959590?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jplu","html_url":"https:\/\/github.com\/jplu","followers_url":"https:\/\/api.github.com\/users\/jplu\/followers","following_url":"https:\/\/api.github.com\/users\/jplu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jplu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jplu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jplu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jplu\/orgs","repos_url":"https:\/\/api.github.com\/users\/jplu\/repos","events_url":"https:\/\/api.github.com\/users\/jplu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jplu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1587110666000,"updated_at":1588152431000,"closed_at":1587389052000,"author_association":"COLLABORATOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/8","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/8","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/8.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/8.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/8\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/7","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/7\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/7\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/7\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/7","id":601780534,"node_id":"MDExOlB1bGxSZXF1ZXN0NDA0OTgyMzA2","number":7,"title":"Fix issue 5: allow empty 
datasets","user":{"login":"jplu","id":959590,"node_id":"MDQ6VXNlcjk1OTU5MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/959590?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jplu","html_url":"https:\/\/github.com\/jplu","followers_url":"https:\/\/api.github.com\/users\/jplu\/followers","following_url":"https:\/\/api.github.com\/users\/jplu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jplu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jplu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jplu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jplu\/orgs","repos_url":"https:\/\/api.github.com\/users\/jplu\/repos","events_url":"https:\/\/api.github.com\/users\/jplu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jplu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1587110396000,"updated_at":1588152433000,"closed_at":1587389028000,"author_association":"COLLABORATOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/7","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/7","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/7.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/7.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/7\/timeline","performed_via_github_app":null,"is_pull_request":true} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/6","id":600330836,"node_id":"MDU6SXNzdWU2MDAzMzA4MzY=","number":6,"title":"Error when citation is not given in the DatasetInfo","user":{"login":"jplu","id":959590,"node_id":"MDQ6VXNlcjk1OTU5MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/959590?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jplu","html_url":"https:\/\/github.com\/jplu","followers_url":"https:\/\/api.github.com\/users\/jplu\/followers","following_url":"https:\/\/api.github.com\/users\/jplu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jplu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jplu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jplu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jplu\/orgs","repos_url":"https:\/\/api.github.com\/users\/jplu\/repos","events_url":"https:\/\/api.github.com\/users\/jplu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jplu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Yes looks good to me.\r\nNote that we may refactor quite strongly the `info.py` to make it a lot simpler (it's very complicated for basically a dictionary of info I think)","No, problem ^^ It might just be a temporary fix 
:)","Fixed."],"created_at":1586960094000,"updated_at":1588152202000,"closed_at":1588152202000,"author_association":"COLLABORATOR","active_lock_reason":null,"pull_request":null,"body":"The following error is raised when the `citation` parameter is missing when we instantiate a `DatasetInfo`:\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/home\/jplu\/dev\/jplu\/datasets\/src\/nlp\/info.py\", line 338, in __repr__\r\n citation_pprint = _indent('\"\"\"{}\"\"\"'.format(self.citation.strip()))\r\nAttributeError: 'NoneType' object has no attribute 'strip'\r\n```\r\n\r\nI propose to do the following change in the `info.py` file. The method:\r\n```python\r\ndef __repr__(self):\r\n splits_pprint = _indent(\"\\n\".join([\"{\"] + [\r\n \" '{}': {},\".format(k, split.num_examples)\r\n for k, split in sorted(self.splits.items())\r\n ] + [\"}\"]))\r\n features_pprint = _indent(repr(self.features))\r\n citation_pprint = _indent('\"\"\"{}\"\"\"'.format(self.citation.strip()))\r\n return INFO_STR.format(\r\n name=self.name,\r\n version=self.version,\r\n description=self.description,\r\n total_num_examples=self.splits.total_num_examples,\r\n features=features_pprint,\r\n splits=splits_pprint,\r\n citation=citation_pprint,\r\n homepage=self.homepage,\r\n supervised_keys=self.supervised_keys,\r\n # Proto add a \\n that we strip.\r\n license=str(self.license).strip())\r\n```\r\nBecomes:\r\n```python\r\ndef __repr__(self):\r\n splits_pprint = _indent(\"\\n\".join([\"{\"] + [\r\n \" '{}': {},\".format(k, split.num_examples)\r\n for k, split in sorted(self.splits.items())\r\n ] + [\"}\"]))\r\n features_pprint = _indent(repr(self.features))\r\n ## the strip is done only is the citation is given\r\n citation_pprint = self.citation\r\n\r\n if self.citation:\r\n citation_pprint = _indent('\"\"\"{}\"\"\"'.format(self.citation.strip()))\r\n return INFO_STR.format(\r\n name=self.name,\r\n version=self.version,\r\n description=self.description,\r\n total_num_examples=self.splits.total_num_examples,\r\n features=features_pprint,\r\n splits=splits_pprint,\r\n citation=citation_pprint,\r\n homepage=self.homepage,\r\n supervised_keys=self.supervised_keys,\r\n # Proto add a \\n that we strip.\r\n license=str(self.license).strip())\r\n```\r\nAnd now it is ok. 
@thomwolf are you ok with this fix?","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/6\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/5","id":600295889,"node_id":"MDU6SXNzdWU2MDAyOTU4ODk=","number":5,"title":"ValueError when a split is empty","user":{"login":"jplu","id":959590,"node_id":"MDQ6VXNlcjk1OTU5MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/959590?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jplu","html_url":"https:\/\/github.com\/jplu","followers_url":"https:\/\/api.github.com\/users\/jplu\/followers","following_url":"https:\/\/api.github.com\/users\/jplu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jplu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jplu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jplu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jplu\/orgs","repos_url":"https:\/\/api.github.com\/users\/jplu\/repos","events_url":"https:\/\/api.github.com\/users\/jplu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jplu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["To fix this I propose to modify only the file `arrow_reader.py` with few updates. First update, the following method:\r\n```python\r\ndef _make_file_instructions_from_absolutes(\r\n name,\r\n name2len,\r\n absolute_instructions,\r\n):\r\n \"\"\"Returns the files instructions from the absolute instructions list.\"\"\"\r\n # For each split, return the files instruction (skip\/take)\r\n file_instructions = []\r\n num_examples = 0\r\n for abs_instr in absolute_instructions:\r\n length = name2len[abs_instr.splitname]\r\n if not length:\r\n raise ValueError(\r\n 'Split empty. 
This might means that dataset hasn\\'t been generated '\r\n 'yet and info not restored from GCS, or that legacy dataset is used.')\r\n filename = filename_for_dataset_split(\r\n dataset_name=name,\r\n split=abs_instr.splitname,\r\n filetype_suffix='arrow')\r\n from_ = 0 if abs_instr.from_ is None else abs_instr.from_\r\n to = length if abs_instr.to is None else abs_instr.to\r\n num_examples += to - from_\r\n single_file_instructions = [{\"filename\": filename, \"skip\": from_, \"take\": to - from_}]\r\n file_instructions.extend(single_file_instructions)\r\n return FileInstructions(\r\n num_examples=num_examples,\r\n file_instructions=file_instructions,\r\n )\r\n```\r\nBecomes:\r\n```python\r\ndef _make_file_instructions_from_absolutes(\r\n name,\r\n name2len,\r\n absolute_instructions,\r\n):\r\n \"\"\"Returns the files instructions from the absolute instructions list.\"\"\"\r\n # For each split, return the files instruction (skip\/take)\r\n file_instructions = []\r\n num_examples = 0\r\n for abs_instr in absolute_instructions:\r\n length = name2len[abs_instr.splitname]\r\n ## Delete the if not length and the raise\r\n filename = filename_for_dataset_split(\r\n dataset_name=name,\r\n split=abs_instr.splitname,\r\n filetype_suffix='arrow')\r\n from_ = 0 if abs_instr.from_ is None else abs_instr.from_\r\n to = length if abs_instr.to is None else abs_instr.to\r\n num_examples += to - from_\r\n single_file_instructions = [{\"filename\": filename, \"skip\": from_, \"take\": to - from_}]\r\n file_instructions.extend(single_file_instructions)\r\n return FileInstructions(\r\n num_examples=num_examples,\r\n file_instructions=file_instructions,\r\n )\r\n```\r\n\r\nSecond update the following method:\r\n```python\r\ndef _read_files(files, info):\r\n \"\"\"Returns Dataset for given file instructions.\r\n\r\n Args:\r\n files: List[dict(filename, skip, take)], the files information.\r\n The filenames contain the absolute path, not relative.\r\n skip\/take indicates which example read in the file: `ds.slice(skip, take)`\r\n \"\"\"\r\n pa_batches = []\r\n for f_dict in files:\r\n pa_table: pa.Table = _get_dataset_from_filename(f_dict)\r\n pa_batches.extend(pa_table.to_batches())\r\n pa_table = pa.Table.from_batches(pa_batches)\r\n ds = Dataset(arrow_table=pa_table, data_files=files, info=info)\r\n return ds\r\n```\r\nBecomes:\r\n```python\r\ndef _read_files(files, info):\r\n \"\"\"Returns Dataset for given file instructions.\r\n\r\n Args:\r\n files: List[dict(filename, skip, take)], the files information.\r\n The filenames contain the absolute path, not relative.\r\n skip\/take indicates which example read in the file: `ds.slice(skip, take)`\r\n \"\"\"\r\n pa_batches = []\r\n for f_dict in files:\r\n pa_table: pa.Table = _get_dataset_from_filename(f_dict)\r\n pa_batches.extend(pa_table.to_batches())\r\n ## we modify the table only if there are some batches\r\n if pa_batches:\r\n pa_table = pa.Table.from_batches(pa_batches)\r\n ds = Dataset(arrow_table=pa_table, data_files=files, info=info)\r\n return ds\r\n```\r\n\r\nWith these two updates it works now. @thomwolf are you ok with this changes?","Yes sounds good to me!\r\nDo you want to make a PR? 
or I can do it as well","Fixed."],"created_at":1586957113000,"updated_at":1588152185000,"closed_at":1588152185000,"author_association":"COLLABORATOR","active_lock_reason":null,"pull_request":null,"body":"When a split is empty either TEST, VALIDATION or TRAIN I get the following error:\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/home\/jplu\/dev\/jplu\/datasets\/src\/nlp\/load.py\", line 295, in load\r\n ds = dbuilder.as_dataset(**as_dataset_kwargs)\r\n File \"\/home\/jplu\/dev\/jplu\/datasets\/src\/nlp\/builder.py\", line 587, in as_dataset\r\n datasets = utils.map_nested(build_single_dataset, split, map_tuple=True)\r\n File \"\/home\/jplu\/dev\/jplu\/datasets\/src\/nlp\/utils\/py_utils.py\", line 158, in map_nested\r\n for k, v in data_struct.items()\r\n File \"\/home\/jplu\/dev\/jplu\/datasets\/src\/nlp\/utils\/py_utils.py\", line 158, in <dictcomp>\r\n for k, v in data_struct.items()\r\n File \"\/home\/jplu\/dev\/jplu\/datasets\/src\/nlp\/utils\/py_utils.py\", line 172, in map_nested\r\n return function(data_struct)\r\n File \"\/home\/jplu\/dev\/jplu\/datasets\/src\/nlp\/builder.py\", line 601, in _build_single_dataset\r\n split=split,\r\n File \"\/home\/jplu\/dev\/jplu\/datasets\/src\/nlp\/builder.py\", line 625, in _as_dataset\r\n split_infos=self.info.splits.values(),\r\n File \"\/home\/jplu\/dev\/jplu\/datasets\/src\/nlp\/arrow_reader.py\", line 200, in read\r\n return py_utils.map_nested(_read_instruction_to_ds, instructions)\r\n File \"\/home\/jplu\/dev\/jplu\/datasets\/src\/nlp\/utils\/py_utils.py\", line 172, in map_nested\r\n return function(data_struct)\r\n File \"\/home\/jplu\/dev\/jplu\/datasets\/src\/nlp\/arrow_reader.py\", line 191, in _read_instruction_to_ds\r\n file_instructions = make_file_instructions(name, split_infos, instruction)\r\n File \"\/home\/jplu\/dev\/jplu\/datasets\/src\/nlp\/arrow_reader.py\", line 104, in make_file_instructions\r\n absolute_instructions=absolute_instructions,\r\n File \"\/home\/jplu\/dev\/jplu\/datasets\/src\/nlp\/arrow_reader.py\", line 122, in _make_file_instructions_from_absolutes\r\n 'Split empty. This might means that dataset hasn\\'t been generated '\r\nValueError: Split empty. 
This might means that dataset hasn't been generated yet and info not restored from GCS, or that legacy dataset is used.\r\n``` \r\n\r\nHow to reproduce:\r\n```python\r\nimport csv\r\n\r\nimport nlp\r\n\r\n\r\nclass Bbc(nlp.GeneratorBasedBuilder):\r\n VERSION = nlp.Version(\"1.0.0\")\r\n\r\n def __init__(self, **config):\r\n self.train = config.pop(\"train\", None)\r\n self.validation = config.pop(\"validation\", None)\r\n super(Bbc, self).__init__(**config)\r\n\r\n def _info(self):\r\n return nlp.DatasetInfo(builder=self, description=\"bla\", features=nlp.features.FeaturesDict({\"id\": nlp.int32, \"text\": nlp.string, \"label\": nlp.string}))\r\n\r\n def _split_generators(self, dl_manager):\r\n return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={\"filepath\": self.train}),\r\n nlp.SplitGenerator(name=nlp.Split.VALIDATION, gen_kwargs={\"filepath\": self.validation}),\r\n nlp.SplitGenerator(name=nlp.Split.TEST, gen_kwargs={\"filepath\": None})]\r\n\r\n def _generate_examples(self, filepath):\r\n if not filepath:\r\n return None, {}\r\n\r\n with open(filepath) as f:\r\n reader = csv.reader(f, delimiter=',', quotechar=\"\\\"\")\r\n lines = list(reader)[1:]\r\n\r\n for idx, line in enumerate(lines):\r\n yield idx, {\"id\": idx, \"text\": line[1], \"label\": line[0]}\r\n```\r\n\r\n```python\r\nimport nlp\r\ndataset = nlp.load(\"bbc\", builder_kwargs={\"train\": \"bbc\/data\/train.csv\", \"validation\": \"bbc\/data\/test.csv\"})\r\n```","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/5\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/4","id":600185417,"node_id":"MDU6SXNzdWU2MDAxODU0MTc=","number":4,"title":"[Feature] Keep the list of labels of a dataset as metadata","user":{"login":"jplu","id":959590,"node_id":"MDQ6VXNlcjk1OTU5MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/959590?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jplu","html_url":"https:\/\/github.com\/jplu","followers_url":"https:\/\/api.github.com\/users\/jplu\/followers","following_url":"https:\/\/api.github.com\/users\/jplu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jplu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jplu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jplu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jplu\/orgs","repos_url":"https:\/\/api.github.com\/users\/jplu\/repos","events_url":"https:\/\/api.github.com\/users\/jplu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jplu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Yes! 
I see mostly two options for this:\r\n- a `Feature` approach like currently (but we might deprecate features)\r\n- wrapping in a smart way the Dictionary arrays of Arrow: https:\/\/arrow.apache.org\/docs\/python\/data.html?highlight=dictionary%20encode#dictionary-arrays","I would have a preference for the second bullet point.","This should be accessible now as a feature in dataset.info.features (and even have the mapping methods).","Perfect! Well done!!","Hi,\r\nI hope we could get a better documentation.\r\nIt took me more than 1 hour to found this way to get the label information.","Yes we are working on the doc right now, should be in the next release quite soon."],"created_at":1586945830000,"updated_at":1594227586000,"closed_at":1588572717000,"author_association":"COLLABORATOR","active_lock_reason":null,"pull_request":null,"body":"It would be useful to keep the list of the labels of a dataset as metadata. Either directly in the `DatasetInfo` or in the Arrow metadata.","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/4\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/3","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/3\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/3\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/3\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/3","id":600180050,"node_id":"MDU6SXNzdWU2MDAxODAwNTA=","number":3,"title":"[Feature] More dataset outputs","user":{"login":"jplu","id":959590,"node_id":"MDQ6VXNlcjk1OTU5MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/959590?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jplu","html_url":"https:\/\/github.com\/jplu","followers_url":"https:\/\/api.github.com\/users\/jplu\/followers","following_url":"https:\/\/api.github.com\/users\/jplu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jplu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jplu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jplu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jplu\/orgs","repos_url":"https:\/\/api.github.com\/users\/jplu\/repos","events_url":"https:\/\/api.github.com\/users\/jplu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jplu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["Yes!\r\n- pandas will be a one-liner in `arrow_dataset`: https:\/\/arrow.apache.org\/docs\/python\/generated\/pyarrow.Table.html#pyarrow.Table.to_pandas\r\n- for Spark I have no idea. 
let's investigate that at some point","For Spark it looks to be pretty straightforward as well https:\/\/spark.apache.org\/docs\/latest\/sql-pyspark-pandas-with-arrow.html but looks to be having a dependency to Spark is necessary, then nevermind we can skip it","Now Pandas is available."],"created_at":1586945294000,"updated_at":1588572747000,"closed_at":1588572747000,"author_association":"COLLABORATOR","active_lock_reason":null,"pull_request":null,"body":"Add the following dataset outputs:\r\n\r\n- Spark\r\n- Pandas","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/3\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/issues\/2","id":599767671,"node_id":"MDU6SXNzdWU1OTk3Njc2NzE=","number":2,"title":"Issue to read a local dataset","user":{"login":"jplu","id":959590,"node_id":"MDQ6VXNlcjk1OTU5MA==","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/959590?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/jplu","html_url":"https:\/\/github.com\/jplu","followers_url":"https:\/\/api.github.com\/users\/jplu\/followers","following_url":"https:\/\/api.github.com\/users\/jplu\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/jplu\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/jplu\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/jplu\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/jplu\/orgs","repos_url":"https:\/\/api.github.com\/users\/jplu\/repos","events_url":"https:\/\/api.github.com\/users\/jplu\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/jplu\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":["My first bug report \u2764\ufe0f\r\nLooking into this right now!","Ok, there are some news, most good than bad :laughing: \r\n\r\nThe dataset script now became:\r\n```python\r\nimport csv\r\n\r\nimport nlp\r\n\r\n\r\nclass Bbc(nlp.GeneratorBasedBuilder):\r\n VERSION = nlp.Version(\"1.0.0\")\r\n\r\n def __init__(self, **config):\r\n self.train = config.pop(\"train\", None)\r\n self.validation = config.pop(\"validation\", None)\r\n super(Bbc, self).__init__(**config)\r\n\r\n def _info(self):\r\n return nlp.DatasetInfo(builder=self, description=\"bla\", features=nlp.features.FeaturesDict({\"id\": nlp.int32, \"text\": nlp.string, \"label\": nlp.string}))\r\n\r\n def _split_generators(self, dl_manager):\r\n return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={\"filepath\": self.train}),\r\n nlp.SplitGenerator(name=nlp.Split.VALIDATION, gen_kwargs={\"filepath\": self.validation})]\r\n\r\n def _generate_examples(self, filepath):\r\n with open(filepath) as f:\r\n reader = csv.reader(f, delimiter=',', quotechar=\"\\\"\")\r\n lines = list(reader)[1:]\r\n\r\n for idx, line in enumerate(lines):\r\n yield idx, {\"id\": idx, \"text\": line[1], \"label\": line[0]}\r\n\r\n```\r\n\r\nAnd the dataset folder 
becomes:\r\n```\r\n.\r\n\u251c\u2500\u2500 bbc\r\n\u2502 \u251c\u2500\u2500 bbc.py\r\n\u2502 \u2514\u2500\u2500 data\r\n\u2502 \u251c\u2500\u2500 test.csv\r\n\u2502 \u2514\u2500\u2500 train.csv\r\n```\r\nI can load the dataset by using the keywords arguments like this:\r\n```python\r\nimport nlp\r\ndataset = nlp.load(\"bbc\", builder_kwargs={\"train\": \"bbc\/data\/train.csv\", \"validation\": \"bbc\/data\/test.csv\"})\r\n```\r\n\r\nThat was the good part ^^ Because it took me some time to understand that the script itself is put in cache in `datasets\/src\/nlp\/datasets\/some-hash\/bbc.py` which is very difficult to discover without checking the source code. It means that doesn't matter the changes you do to your original script it is taken into account. I think instead of doing a hash on the name (I suppose it is the name), a hash on the content of the script itself should be a better solution.\r\n\r\nThen by diving a bit in the code I found the `force_reload` parameter [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/nlp\/load.py#L50) but the call of this `load_dataset` method is done with the `builder_kwargs` as seen [here](https:\/\/github.com\/huggingface\/datasets\/blob\/master\/src\/nlp\/load.py#L166) which is ok until the call to the builder is done as the builder do not have this `force_reload` parameter. To show as example, the previous load becomes:\r\n```python\r\nimport nlp\r\ndataset = nlp.load(\"bbc\", builder_kwargs={\"train\": \"bbc\/data\/train.csv\", \"validation\": \"bbc\/data\/test.csv\", \"force_reload\": True})\r\n```\r\nRaises\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/home\/jplu\/dev\/jplu\/datasets\/src\/nlp\/load.py\", line 283, in load\r\n dbuilder: DatasetBuilder = builder(path, name, data_dir=data_dir, **builder_kwargs)\r\n File \"\/home\/jplu\/dev\/jplu\/datasets\/src\/nlp\/load.py\", line 170, in builder\r\n builder_instance = builder_cls(**builder_kwargs)\r\n File \"\/home\/jplu\/dev\/jplu\/datasets\/src\/nlp\/datasets\/84d638d2a8ca919d1021a554e741766f50679dc6553d5a0612b6094311babd39\/bbc.py\", line 12, in __init__\r\n super(Bbc, self).__init__(**config)\r\nTypeError: __init__() got an unexpected keyword argument 'force_reload'\r\n```\r\nSo yes the cache is refreshed with the new script but then raises this error.","Ok great, so as discussed today, let's:\r\n- have a main dataset directory inside the lib with sub-directories hashed by the content of the file\r\n- keep a cache for downloading the scripts from S3 for now\r\n- later: add methods to list and clean the local versions of the datasets (and the distant versions on S3 as well)\r\n\r\nSide question: do you often use `builder_kwargs` for other things than supplying file paths? I was thinking about having a more easy to read and remember `data_files` argument maybe.","Good plan!\r\n\r\nYes I do use `builder_kwargs` for other things such as:\r\n- dataset name\r\n- properties to know how to properly read a CSV file: do I have to skip the first line in a CSV, which delimiter is used, and the columns ids to use.\r\n- properties to know how to properly read a JSON file: which properties in a JSON object to read","Done!"],"created_at":1586888331000,"updated_at":1589223323000,"closed_at":1589223322000,"author_association":"COLLABORATOR","active_lock_reason":null,"pull_request":null,"body":"Hello,\r\n\r\nAs proposed by @thomwolf, I open an issue to explain what I'm trying to do without success. 
What I want to do is to create and load a local dataset, the script I have done is the following:\r\n```python\r\nimport os\r\nimport csv\r\n\r\nimport nlp\r\n\r\n\r\nclass BbcConfig(nlp.BuilderConfig):\r\n def __init__(self, **kwargs):\r\n super(BbcConfig, self).__init__(**kwargs)\r\n\r\n\r\nclass Bbc(nlp.GeneratorBasedBuilder):\r\n _DIR = \".\/data\"\r\n _DEV_FILE = \"test.csv\"\r\n _TRAINING_FILE = \"train.csv\"\r\n\r\n BUILDER_CONFIGS = [BbcConfig(name=\"bbc\", version=nlp.Version(\"1.0.0\"))]\r\n\r\n def _info(self):\r\n return nlp.DatasetInfo(builder=self, features=nlp.features.FeaturesDict({\"id\": nlp.string, \"text\": nlp.string, \"label\": nlp.string}))\r\n\r\n def _split_generators(self, dl_manager):\r\n files = {\"train\": os.path.join(self._DIR, self._TRAINING_FILE), \"dev\": os.path.join(self._DIR, self._DEV_FILE)}\r\n\r\n return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={\"filepath\": files[\"train\"]}),\r\n nlp.SplitGenerator(name=nlp.Split.VALIDATION, gen_kwargs={\"filepath\": files[\"dev\"]})]\r\n\r\n def _generate_examples(self, filepath):\r\n with open(filepath) as f:\r\n reader = csv.reader(f, delimiter=',', quotechar=\"\\\"\")\r\n lines = list(reader)[1:]\r\n\r\n for idx, line in enumerate(lines):\r\n yield idx, {\"idx\": idx, \"text\": line[1], \"label\": line[0]}\r\n\r\n```\r\n\r\nThe dataset is attached to this issue as well:\r\n[data.zip](https:\/\/github.com\/huggingface\/datasets\/files\/4476928\/data.zip)\r\n\r\nNow the steps to reproduce what I would like to do:\r\n1. unzip data locally (I know the nlp lib can detect and extract archives but I want to reduce and facilitate the reproduction as much as possible)\r\n2. create the `bbc.py` script as above at the same location than the unziped `data` folder.\r\n\r\nNow I try to load the dataset in three different ways and none works, the first one with the name of the dataset like I would do with TFDS:\r\n```python\r\nimport nlp\r\nfrom bbc import Bbc\r\ndataset = nlp.load(\"bbc\")\r\n```\r\n\r\nI get:\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"\/opt\/anaconda3\/envs\/transformers\/lib\/python3.7\/site-packages\/nlp\/load.py\", line 280, in load\r\n dbuilder: DatasetBuilder = builder(path, name, data_dir=data_dir, **builder_kwargs)\r\n File \"\/opt\/anaconda3\/envs\/transformers\/lib\/python3.7\/site-packages\/nlp\/load.py\", line 166, in builder\r\n builder_cls = load_dataset(path, name=name, **builder_kwargs)\r\n File \"\/opt\/anaconda3\/envs\/transformers\/lib\/python3.7\/site-packages\/nlp\/load.py\", line 88, in load_dataset\r\n local_files_only=local_files_only,\r\n File \"\/opt\/anaconda3\/envs\/transformers\/lib\/python3.7\/site-packages\/nlp\/utils\/file_utils.py\", line 214, in cached_path\r\n if not is_zipfile(output_path) and not tarfile.is_tarfile(output_path):\r\n File \"\/opt\/anaconda3\/envs\/transformers\/lib\/python3.7\/zipfile.py\", line 203, in is_zipfile\r\n with open(filename, \"rb\") as fp:\r\nTypeError: expected str, bytes or os.PathLike object, not NoneType\r\n```\r\n\r\nBut @thomwolf told me that no need to import the script, just put the path of it, then I tried three different way to do:\r\n```python\r\nimport nlp\r\ndataset = nlp.load(\"bbc.py\")\r\n```\r\nAnd\r\n```python\r\nimport nlp\r\ndataset = nlp.load(\".\/bbc.py\")\r\n```\r\nAnd\r\n```python\r\nimport nlp\r\ndataset = nlp.load(\"\/absolute\/path\/to\/bbc.py\")\r\n```\r\n\r\nThese three ways gives me:\r\n```\r\nTraceback (most recent call last):\r\n File 
\"<stdin>\", line 1, in <module>\r\n File \"\/opt\/anaconda3\/envs\/transformers\/lib\/python3.7\/site-packages\/nlp\/load.py\", line 280, in load\r\n dbuilder: DatasetBuilder = builder(path, name, data_dir=data_dir, **builder_kwargs)\r\n File \"\/opt\/anaconda3\/envs\/transformers\/lib\/python3.7\/site-packages\/nlp\/load.py\", line 166, in builder\r\n builder_cls = load_dataset(path, name=name, **builder_kwargs)\r\n File \"\/opt\/anaconda3\/envs\/transformers\/lib\/python3.7\/site-packages\/nlp\/load.py\", line 124, in load_dataset\r\n dataset_module = importlib.import_module(module_path)\r\n File \"\/opt\/anaconda3\/envs\/transformers\/lib\/python3.7\/importlib\/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1006, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 983, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 965, in _find_and_load_unlocked\r\nModuleNotFoundError: No module named 'nlp.datasets.2fd72627d92c328b3e9c4a3bf7ec932c48083caca09230cebe4c618da6e93688.bbc'\r\n```\r\nAny idea of what I'm missing? or I might have spot a bug :)","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/2\/timeline","performed_via_github_app":null,"is_pull_request":false} {"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1","repository_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets","labels_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1\/labels{\/name}","comments_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1\/comments","events_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1\/events","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1","id":599457467,"node_id":"MDExOlB1bGxSZXF1ZXN0NDAzMDk1NDYw","number":1,"title":"changing nlp.bool to 
nlp.bool_","user":{"login":"mariamabarham","id":38249783,"node_id":"MDQ6VXNlcjM4MjQ5Nzgz","avatar_url":"https:\/\/avatars.githubusercontent.com\/u\/38249783?v=4","gravatar_id":"","url":"https:\/\/api.github.com\/users\/mariamabarham","html_url":"https:\/\/github.com\/mariamabarham","followers_url":"https:\/\/api.github.com\/users\/mariamabarham\/followers","following_url":"https:\/\/api.github.com\/users\/mariamabarham\/following{\/other_user}","gists_url":"https:\/\/api.github.com\/users\/mariamabarham\/gists{\/gist_id}","starred_url":"https:\/\/api.github.com\/users\/mariamabarham\/starred{\/owner}{\/repo}","subscriptions_url":"https:\/\/api.github.com\/users\/mariamabarham\/subscriptions","organizations_url":"https:\/\/api.github.com\/users\/mariamabarham\/orgs","repos_url":"https:\/\/api.github.com\/users\/mariamabarham\/repos","events_url":"https:\/\/api.github.com\/users\/mariamabarham\/events{\/privacy}","received_events_url":"https:\/\/api.github.com\/users\/mariamabarham\/received_events","type":"User","site_admin":false},"labels":[],"state":"closed","locked":false,"assignee":null,"assignees":[],"milestone":null,"comments":[],"created_at":1586859482000,"updated_at":1586865700000,"closed_at":1586865700000,"author_association":"CONTRIBUTOR","active_lock_reason":null,"pull_request":{"url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/pulls\/1","html_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1","diff_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1.diff","patch_url":"https:\/\/github.com\/huggingface\/datasets\/pull\/1.patch"},"body":"","timeline_url":"https:\/\/api.github.com\/repos\/huggingface\/datasets\/issues\/1\/timeline","performed_via_github_app":null,"is_pull_request":true}